Presigned URLs
Presigned URLs are an S3 concept for sharing direct access to your bucket without revealing your token secret. A presigned URL authorizes anyone with the URL to perform an action against the S3 compatibility endpoint for an R2 bucket. By default, the S3 endpoint requires an `Authorization` header signed by your token. Every presigned URL has S3 parameters and search parameters containing the signature information that would otherwise be present in an `Authorization` header. The action is restricted to a specific resource and operation, and has an associated timeout.
There are three kinds of resources in R2:
- Account: For account-level operations (such as `CreateBucket`, `ListBuckets`, `DeleteBucket`), the identifier is the account ID.
- Bucket: For bucket-level operations (such as `ListObjects`, `PutBucketCors`), the identifier is the account ID and bucket name.
- Object: For object-level operations (such as `GetObject`, `PutObject`, `CreateMultipartUpload`), the identifier is the account ID, bucket name, and object path.
All parts of the identifier are part of the presigned URL.
You cannot change the resource being accessed after the request is signed. For example, trying to change the bucket name to access the same object in a different bucket will return a `403` with an error code of `SignatureDoesNotMatch`.
Presigned URLs must have a defined expiry. You can set a timeout from one second to 7 days (604,800 seconds) into the future. The URL contains the time when it was generated (`X-Amz-Date`) and the timeout (`X-Amz-Expires`) as search parameters. These search parameters are signed, and tampering with them will result in a `403` with an error code of `SignatureDoesNotMatch`.
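For illustration, a presigned URL for a `GetObject` request has roughly the following shape (all values below are placeholders; the query parameters come from the AWS Signature Version 4 query-signing scheme):

```
https://<BUCKET_NAME>.<ACCOUNT_ID>.r2.cloudflarestorage.com/<OBJECT_PATH>
  ?X-Amz-Algorithm=AWS4-HMAC-SHA256
  &X-Amz-Credential=<ACCESS_KEY_ID>%2F<DATE>%2Fauto%2Fs3%2Faws4_request
  &X-Amz-Date=<GENERATION_TIME>
  &X-Amz-Expires=3600
  &X-Amz-SignedHeaders=host
  &X-Amz-Signature=<SIGNATURE>
```

The account ID, bucket name, and object path identify the resource, while `X-Amz-Date`, `X-Amz-Expires`, and `X-Amz-Signature` carry the signed timing and signature information.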
Presigned URLs are generated with no communication with R2 and must be generated by an application with access to your R2 bucket’s credentials.
Presigned URL use cases
There are three ways to grant an application access to R2:
- The application has its own copy of an R2 API token.
- The application requests a copy of an R2 API token from a vault application and promises to not permanently store that token locally.
- The application requests a central application to give it a presigned URL it can use to perform an action.
In scenarios 1 and 2, if the application or vault application is compromised, the holder of the token can perform arbitrary actions.
Scenario 3 keeps the credential secret. If the application making a presigned URL request to the central application leaks that URL, but the central application does not have its key storage system compromised, the impact is limited to one operation on the specific resource that was signed.
Additionally, the central application can perform monitoring, auditing, and logging tasks so you can review when a request was made to perform an operation on a specific resource. In the event of a security incident, you can use the central application's logging functionality to review details of the incident.
The central application can also perform policy enforcement. For example, if you have an application responsible for uploading resources, you can restrict the upload to a specific bucket or folder within a bucket. The requesting application can obtain a JSON Web Token (JWT) from your authorization service to sign a request to the central application. The central application then uses the information contained in the JWT to validate the inbound request parameters.
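As a rough sketch of such a policy check (the `jose` library, the `uploadPrefix` claim name, and the shared-secret handling are illustrative assumptions, not part of R2):

```ts
import { jwtVerify } from "jose";

// Hypothetical helper for the central signing application: verify the caller's
// JWT and return the only key prefix this caller is allowed to upload under,
// or null if the token is missing or invalid.
async function allowedUploadPrefix(
  request: Request,
  secret: Uint8Array,
): Promise<string | null> {
  const token = request.headers.get("Authorization")?.replace("Bearer ", "");
  if (!token) return null;
  try {
    const { payload } = await jwtVerify(token, secret);
    // Example claim shape: { "uploadPrefix": "uploads/user-123/" }
    return typeof payload.uploadPrefix === "string" ? payload.uploadPrefix : null;
  } catch {
    return null;
  }
}
```

The central application would then refuse to sign any request whose object path does not start with the returned prefix.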
The central application can be, for example, a Cloudflare Worker. Worker secrets are cryptographically impossible to obtain outside of your script running on the Workers runtime. If you do not store a copy of the secret elsewhere and do not have your code log the secret somewhere, your Worker secret will remain secure. However, as previously mentioned, presigned URLs are generated outside of R2; all that is required is the secret and an implementation of the signing algorithm, so you can generate them anywhere.
Another potential use case for presigned URLs is debugging. For example, if you are debugging your application and want to grant temporary access to a specific test object in a production environment, you can do this without needing to share the underlying token or remember to revoke it afterward.
Generate presigned URLs
Generate a presigned URL by referring to the following examples:
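For instance, here is a minimal sketch using the aws4fetch library (the same library used in the Worker example below) to presign a `GetObject` request; the account ID, bucket name, object path, and credentials are placeholders:

```ts
import { AwsClient } from "aws4fetch";

// Placeholders: substitute your own account ID, bucket name, and R2 API token credentials.
const accountId = "<ACCOUNT_ID>";
const bucketName = "<BUCKET_NAME>";

const client = new AwsClient({
  accessKeyId: "<R2_ACCESS_KEY_ID>",
  secretAccessKey: "<R2_SECRET_ACCESS_KEY>",
});

const url = new URL(
  `https://${bucketName}.${accountId}.r2.cloudflarestorage.com/some-object.txt`,
);

// Expire the URL one hour after it is generated.
url.searchParams.set("X-Amz-Expires", "3600");

// Sign the query string rather than the Authorization header so the result is a shareable URL.
const signed = await client.sign(new Request(url, { method: "GET" }), {
  aws: { signQuery: true },
});

// signed.url is the presigned URL that can be handed to the caller.
console.log(signed.url);
```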
Presigned URL alternative with Workers
A valid alternative design to presigned URLs is to use a Worker with a binding that implements your security policy.
A possible use case may be restricting an application to only be able to upload to a specific URL. With presigned URLs, your central signing application might look like the following JavaScript code running on Cloudflare Workers, workerd, or another platform.
If the Worker received a request for `https://example.com/uploads/dog.png`, it would respond with a presigned URL allowing a user to upload to your R2 bucket at the `/uploads/dog.png` path.
```ts
import { AwsClient } from "aws4fetch";

const r2 = new AwsClient({
  accessKeyId: "",
  secretAccessKey: "",
});

export default <ExportedHandler>{
  async fetch(req) {
    // This is just an example demonstrating how to use aws4fetch to generate a presigned URL.
    // This Worker should not be used as-is as it does not authenticate the request, meaning
    // that anyone can upload to your bucket.
    //
    // Consider implementing authorization, such as a preshared secret in a request header.
    const requestPath = new URL(req.url).pathname;

    // Cannot upload to the root of a bucket
    if (requestPath === "/") {
      return new Response("Missing a filepath", { status: 400 });
    }

    const bucketName = "";
    const accountId = "";

    const url = new URL(
      `https://${bucketName}.${accountId}.r2.cloudflarestorage.com`,
    );

    // Preserve the original path
    url.pathname = requestPath;

    // Specify a custom expiry for the presigned URL, in seconds
    url.searchParams.set("X-Amz-Expires", "3600");

    const signed = await r2.sign(
      new Request(url, {
        method: "PUT",
      }),
      {
        aws: { signQuery: true },
      },
    );

    // Caller can now use this URL to upload to that object.
    return new Response(signed.url, { status: 200 });
  },

  // ... handle other kinds of requests
};
```
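For comparison, a minimal sketch of the binding-based alternative might look like the following, assuming an R2 bucket binding named `DROP_BOX_BUCKET` configured in `wrangler.toml` (like the example above, it omits any authorization check):

```ts
interface Env {
  DROP_BOX_BUCKET: R2Bucket;
}

export default <ExportedHandler<Env>>{
  async fetch(request, env) {
    const requestPath = new URL(request.url).pathname;

    // Cannot upload to the root of a bucket
    if (requestPath === "/") {
      return new Response("Missing a filepath", { status: 400 });
    }

    // The binding performs the upload directly; no credentials or signing
    // appear anywhere in the Worker code.
    await env.DROP_BOX_BUCKET.put(requestPath.substring(1), request.body);

    return new Response("Uploaded", { status: 200 });
  },
};
```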
Notice the total absence of any configuration or token secrets present in the Worker code. Instead, you would create a `wrangler.toml` binding to the bucket you will upload to. Additionally, authorization is handled in-line with the upload, which can reduce latency.
In some cases, Workers lets you implement certain functionality more easily. For example, if you wanted to offer a write-once guarantee so that users can only upload to a path once, then with presigned URLs you would need to sign specific headers and require the sender to send them. You can modify the previous Worker to sign additional headers:
```ts
const signed = await r2.sign(
  new Request(url, {
    method: "PUT",
  }),
  {
    aws: { signQuery: true },
    headers: {
      "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT",
    },
  },
);
```
Note that the caller has to add the same `If-Unmodified-Since` header to use the URL. The caller cannot omit the header or send a different value for it; otherwise, the presigned URL signature will not match, and they will receive a `403` with an error code of `SignatureDoesNotMatch`.
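For illustration, the caller's upload might then look like this, where `presignedUrl` and `fileContents` stand in for the URL returned by the signing application and the data being uploaded:

```ts
// The header value must match the signed value exactly, or R2 rejects the request.
const response = await fetch(presignedUrl, {
  method: "PUT",
  headers: {
    "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT",
  },
  body: fileContents,
});
```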
In a Worker, you would change your upload to:
```ts
const existingObject = await env.DROP_BOX_BUCKET.put(
  // The object key is the request path without the leading slash.
  new URL(request.url).pathname.substring(1),
  request.body,
  {
    onlyIf: {
      // No objects will have been uploaded before September 28th, 2021, which
      // is the initial R2 announcement.
      uploadedBefore: new Date(1632844800000),
    },
  },
);

if (existingObject?.etag !== request.headers.get("etag")) {
  return new Response("attempt to overwrite object", { status: 400 });
}
```
Cloudflare Workers currently have some limitations that you may need to consider:
- You cannot upload more than 100 MiB (200 MiB for Business customers) to a Worker.
- Enterprise customers can upload 500 MiB by default and can ask their account team to raise this limit.
- Detecting precondition failures is currently easier with presigned URLs as compared with R2 bindings.
Note that the conditional upload behavior discussed above depends on R2's extension for conditional uploads. Amazon's S3 service does not offer such functionality at this time.
Differences between presigned URLs and public buckets
Presigned URLs share some superficial similarity with public buckets. If you give out presigned URLs only for `GET`/`HEAD` operations on specific objects in a bucket, then your presigned URL functionality is mostly similar to a public bucket. The notable exceptions are that any custom metadata associated with the object is rendered in headers with the `x-amz-meta-` prefix, and that any error responses are returned as XML documents, as they would be with normal, non-presigned S3 access.
Presigned URLs can be generated for any S3 operation. After a presigned URL is generated it can be reused as many times as the holder of the URL wants until the signed expiry date.
Public buckets are available on a regular HTTP endpoint. By default, there is no authorization or access control associated with a public bucket. Anyone with a public bucket URL can access an object in that public bucket. If you are using a custom domain to expose the R2 bucket, you can manage authorization and access controls as you would for a Cloudflare zone. Public buckets only provide `GET`/`HEAD` on a known object path. Public bucket errors are rendered as HTML pages.
Choosing between presigned URLs and public buckets depends on your specific use case. You can also use both, with public buckets in one situation and presigned URLs in another. Note that presigned URLs expose your account ID and bucket name to whoever gets a copy of the URL, while public bucket URLs contain neither. Typically, you will not share presigned URLs directly with end users or browsers, as presigned URLs are used more for internal applications.