
playing around with r2

tags: r2, workers, cloudflare workers

this is just a random collection of things from when I tested r2 and workers.

use a worker with r2 bindings

here's a function to upload a received file to r2 via workers. It takes the content-type from the request header and generates a UUID for the filename.

export async function putFile(request: Request, env: any): Promise<Response> {
  // only accept uploads via POST
  if (request.method !== 'POST') {
    return new Response('method not allowed', { status: 405 })
  }

  // reuse the incoming content-type and generate a random UUID as the object key
  const contentType = request.headers.get('content-type') ?? 'application/octet-stream'
  const filename = crypto.randomUUID()

  try {
    await env.yourBucketName.put(filename, request.body, {
      httpMetadata: {
        contentType: contentType,
      },
    })
    return new Response(JSON.stringify({ success: true, filename: filename }))
  } catch (error) {
    return new Response(JSON.stringify({ success: false, message: 'could not upload file', error: String(error) }), { status: 500 })
  }
}
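
the env.yourBucketName binding has to be declared as an r2 bucket binding in wrangler.toml. here's a rough sketch of how putFile could be wired into the worker's fetch handler (the binding name, bucket name, and /upload route are placeholders):

// sketch of a worker entrypoint that routes POST /upload to putFile above
// assumes wrangler.toml has an r2 bucket binding named yourBucketName, e.g.:
//   [[r2_buckets]]
//   binding = "yourBucketName"
//   bucket_name = "bucket-name"
export default {
  async fetch(request: Request, env: any): Promise<Response> {
    const url = new URL(request.url)
    if (url.pathname === '/upload') {
      return putFile(request, env)
    }
    return new Response('not found', { status: 404 })
  },
}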

upload large files through workers

It's probably better to just use the s3 API directly instead of pushing this through Workers, but I wanted to test it anyway. To upload files with curl, send them with chunked transfer encoding, like so:

curl -H 'abc-auth: abc' -H "Transfer-Encoding: chunked" -H "content-type: video/mp4" -T DJI_0160_001.MP4 https://example.com/upload

I'm using -T here to avoid loading the whole file into memory.
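
the curl command also sends an abc-auth header, so if you want that to actually be checked, a small guard in front of the putFile call could look like this (UPLOAD_TOKEN is just a placeholder for a worker secret, not anything R2 requires):

// hedged sketch: reject requests that don't carry the abc-auth header from
// the curl command; UPLOAD_TOKEN is a hypothetical secret
// (e.g. set with: wrangler secret put UPLOAD_TOKEN)
if (request.headers.get('abc-auth') !== env.UPLOAD_TOKEN) {
  return new Response('unauthorized', { status: 401 })
}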

use AWS CLI with Cloudflare R2

this is how you can update your ~/.aws/config file to make it work with R2:

[default]
cli_pager=
region = auto
output = text

[profile r2]
region = auto
s3 =
  multipart_threshold = 50MB
  multipart_chunksize = 50MB
  addressing_style = path

then use aws configure to add the credentials you got from the Cloudflare Dashboard.
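
the prompts look roughly like this (the keys come from an R2 API token in the dashboard; the placeholders are mine):

aws configure --profile r2
# AWS Access Key ID [None]: <access key id from the dashboard>
# AWS Secret Access Key [None]: <secret access key from the dashboard>
# Default region name [None]: auto
# Default output format [None]: text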

interact with the bucket

now, you just need to append the --endpoint-url flag to your calls or create an alias (example below).

aws s3 ls --profile r2 --endpoint-url=https://account-id.r2.cloudflarestorage.com s3://bucket-name
aws s3 cp --profile r2 --endpoint-url=https://account-id.r2.cloudflarestorage.com trailer.mp4 s3://bucket-name/
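
and if you don't feel like typing the endpoint every time, an alias along these lines does the trick (account-id is a placeholder for your own account id):

alias r2='aws --profile r2 --endpoint-url=https://account-id.r2.cloudflarestorage.com'
r2 s3 ls s3://bucket-name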