Release notes
- The CRC-64/NVME checksum algorithm is now supported for both single and multipart objects. This also brings support for the `FULL_OBJECT` checksum type on multipart uploads. See Checksum Type Compatibility for details.
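For local verification, CRC-64/NVME can be sketched in pure Python. The bit-at-a-time loop below is a minimal illustration of the algorithm's published parameters (normal-form polynomial `0xAD93D23594C93659`, reflected input/output, all-ones init and final XOR), not how R2 computes it internally:

```python
# Illustrative bit-at-a-time CRC-64/NVME (a.k.a. CRC-64/Rocksoft).
POLY = 0xAD93D23594C93659                # normal-form polynomial
POLY_REF = int(f"{POLY:064b}"[::-1], 2)  # bit-reversed for the reflected variant
MASK = (1 << 64) - 1

def crc64nvme(data: bytes) -> int:
    crc = MASK                           # init = all ones
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ POLY_REF if crc & 1 else crc >> 1
    return crc ^ MASK                    # final XOR with all ones

# The empty message yields 0, since init and the final XOR cancel out.
print(hex(crc64nvme(b"")))
```

Production code should use an optimized table-driven or SIMD implementation; this sketch exists only to make the parameters concrete.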
- Server-side Encryption with Customer-Provided Keys is now available to all users via the Workers and S3-compatible APIs.
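With SSE-C, the client supplies a 256-bit AES key on each request, carried in three headers following the S3 SSE-C convention. A sketch of constructing those headers; wiring them into an actual request is left to your HTTP client or SDK:

```python
import base64
import hashlib
import os

def ssec_headers(key: bytes) -> dict:
    """Build the S3-convention SSE-C request headers for a 256-bit customer key."""
    assert len(key) == 32, "SSE-C expects a 256-bit (32-byte) AES key"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        # Base64 of the MD5 of the *raw* key lets the server detect transmission errors.
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = ssec_headers(os.urandom(32))
print(sorted(headers))
```

The same headers must accompany later reads of the object, since the server keeps only the key's MD5, not the key itself.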
- Sippy can now be enabled on buckets in jurisdictions (e.g., EU, FedRAMP).
- Fixed an issue with Sippy where GET/HEAD requests to objects with certain special characters would result in error responses.
- Oceania (OC) is now available as an R2 region.
- The default maximum number of buckets per account is now 1 million. If you need more than 1 million buckets, contact Cloudflare Support.
- Public buckets accessible via custom domain now support Smart Tiered Cache.
- R2 `bucket lifecycle` command added to Wrangler. Supports listing, adding, and removing object lifecycle rules.
- R2 `bucket info` command added to Wrangler. Displays location of bucket and common metrics.
- R2 `bucket dev-url` command added to Wrangler. Supports enabling, disabling, and getting the status of a bucket's r2.dev public access URL.
- R2 `bucket domain` command added to Wrangler. Supports listing, adding, removing, and updating R2 bucket custom domains.
- Add `minTLS` to the response of the list custom domains endpoint.
- Add get custom domain endpoint.
- Event notifications can now be configured for R2 buckets in jurisdictions (e.g., EU, FedRAMP).
- Event notifications for R2 are now generally available. Event notifications now support higher throughput (up to 5,000 messages per second per queue), can be configured in the dashboard and Wrangler, and support lifecycle deletes.
- Add the ability to set and update minimum TLS version for R2 bucket custom domains.
- Added support for configuring R2 bucket custom domains via API.
- Sippy is now generally available. Metrics for ongoing migrations can now be found in the dashboard or via the GraphQL analytics API.
- Added migration log for Super Slurper to the migration summary in the dashboard.
- Super Slurper now supports migrating objects up to 1TB in size.
- Fixed an issue that prevented Sippy from copying over objects from S3 buckets with SSE set up.
- Added support for Infrequent Access storage class (beta).
- Added create temporary access tokens endpoint.
- Event notifications for R2 are now available as an open beta.
- Super Slurper now supports migration from Google Cloud Storage.
- When an `OPTIONS` request against the public entrypoint does not include an `origin` header, an `HTTP 400` instead of an `HTTP 401` is returned.
- The response shape of `GET /buckets/:bucket/sippy` has changed.
- The `/buckets/:bucket/sippy/validate` endpoint is exposed over APIGW to validate Sippy's configuration.
- The shape of the configuration object when modifying Sippy's configuration has changed.
- Updated GetBucket endpoint: now fetches by `bucket_name` instead of `bucket_id`.
- Fixed a bug where the API would accept empty strings in the `AllowedHeaders` property of `PutBucketCors` actions.
- Parts are now automatically sorted in ascending order regardless of input during `CompleteMultipartUpload`.
- The `x-id` query param for the S3 `ListBuckets` action is now ignored.
- The `x-id` query param is now ignored for all S3 actions.
- Fixed an issue with `ListBuckets` where the `name_contains` parameter would also search over the jurisdiction name.
- Users can now complete conditional multipart publish operations. If a condition fails while publishing an upload, the upload is no longer available and is treated as aborted.
- Improved performance for ranged reads on very large files. Previously ranged reads near the end of very large files would be noticeably slower than ranged reads on smaller files. Performance should now be consistently good independent of filesize.
- Multipart `ETag`s are now MD5 hashes.
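The conventional S3 multipart ETag is the MD5 of the concatenated binary MD5 digests of each part, with the part count appended. A minimal sketch of that convention, illustrative of the S3-compatible wire format rather than a statement about R2 internals:

```python
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    """S3-convention multipart ETag: MD5 over the concatenated
    per-part MD5 digests, suffixed with the number of parts."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return f'"{hashlib.md5(digests).hexdigest()}-{len(parts)}"'

etag = multipart_etag([b"a" * 1024, b"b" * 1024])
print(etag)
```

Because the outer MD5 runs over digests rather than the object bytes, this ETag cannot be compared against a plain MD5 of the whole file.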
- Fixed a bug where calling GetBucket on a non-existent bucket would return a 500 instead of a 404.
- Improved S3 compatibility for `ListObjectsV1`: `nextmarker` is now only set when `truncated` is true.
- The R2 Worker bindings now support parsing conditional headers with multiple ETags. These ETags can be strong, weak, or a wildcard. Previously the bindings only accepted headers containing a single strong ETag.
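A conditional header such as `If-None-Match: W/"v1", "v2", *` carries a comma-separated mix of strong ETags, weak ETags (`W/` prefix), and the `*` wildcard. A hypothetical, simplified parser illustrating the shapes involved (it assumes ETag values contain no commas):

```python
from typing import NamedTuple

class ETag(NamedTuple):
    value: str   # opaque tag without surrounding quotes; "*" for the wildcard
    weak: bool   # True for W/"..." forms

def parse_etag_list(header: str) -> list[ETag]:
    """Parse an If-Match / If-None-Match value into a list of ETags."""
    tags = []
    for part in header.split(","):  # simplified: assumes no commas inside tags
        part = part.strip()
        if part == "*":
            tags.append(ETag("*", False))
            continue
        weak = part.startswith("W/")
        if weak:
            part = part[2:]
        tags.append(ETag(part.strip('"'), weak))
    return tags

print(parse_etag_list('W/"v1", "v2", *'))
```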
- S3 `PutObject` now supports SHA-256 and SHA-1 checksums. These were already supported by the R2 Worker bindings.
- `CopyObject` in the S3-compatible API now supports Cloudflare-specific headers which allow the copy operation to be conditional on the state of the destination object.
- `GetBucket` is now available for use through the Cloudflare API.
- Location hints can now be set when creating a bucket, both through the S3 API, and the dashboard.
- The ListParts API has been implemented and is available for use.
- HTTP2 is now enabled by default for new custom domains linked to R2 buckets.
- Object Lifecycles are now available for use.
- Bug fix: Requests to public buckets will now return the `Content-Encoding` header for gzip files when `Accept-Encoding: gzip` is used.
- R2 authentication tokens created via the R2 token page are now scoped to a single account by default.
- Fix CORS preflight requests for the S3 API, which allows using the S3 SDK in the browser.
- Passing a range header to the `get` operation in the R2 bindings API should now work as expected.
- Requests with the header `x-amz-acl: public-read` are no longer rejected.
- Fixed issues with wildcard CORS rules and presigned URLs.
- Fixed an issue where `ListObjects` would time out during delimited listing of unicode-normalized keys.
- S3 API's `PutBucketCors` now rejects requests with unknown keys in the XML body.
- Signing additional headers no longer breaks CORS preflight requests for presigned URLs.
- Fixed a bug in `ListObjects` where `startAfter` would skip over objects with keys that have numbers right after the `startAfter` prefix.
- Add Worker bindings for multipart uploads.
- Unconditionally return HTTP 206 on ranged requests to match behavior of other S3 compatible implementations.
- Fixed a CORS bug where `AllowedHeaders` in the CORS config were being treated case-sensitively.
- Multipart upload part sizes are always expected to be the same size, but this is now enforced when you complete an upload instead of every time you upload a part.
- Fixed a performance issue where concurrent multipart part uploads would get rejected.
- Fixed a CORS issue where `Access-Control-Allow-Headers` was not being set for preflight requests.
- Fixed a bug where CORS configuration was not being applied to S3 endpoint.
- No longer render the `Access-Control-Expose-Headers` response header if `ExposeHeader` is not defined.
- Public buckets will no longer return the `Content-Range` response header unless the response is partial.
- Fixed CORS rendering for the S3 `HeadObject` operation.
- Fixed a bug where no matching CORS configuration could result in a `403` response.
- Temporarily disable copying objects that were created with multipart uploads.
- Fixed a bug in the Workers bindings where an internal error was being returned for malformed ranged `.get` requests.
- CORS preflight responses and adding CORS headers for other responses is now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
- Fixup for bindings list truncation to work more correctly when listing keys with custom metadata that contain `"` or when some keys/values contain certain multi-byte UTF-8 values.
- The S3 `GetObject` operation now only returns `Content-Range` in response to a ranged request.
- The R2 `put()` binding options can now be given an `onlyIf` field, similar to `get()`, that performs a conditional upload.
- The R2 `delete()` binding now supports deleting multiple keys at once.
- The R2 `put()` binding now supports user-specified SHA-1, SHA-256, SHA-384, SHA-512 checksums in options.
- User-specified object checksums will now be available in the R2 `get()` and `head()` bindings response. MD5 is included by default for non-multipart uploaded objects.
- The S3 `CopyObject` operation now includes `x-amz-version-id` and `x-amz-copy-source-version-id` in the response headers for consistency with other methods.
- The `ETag` for multipart files uploaded until shortly after Open Beta now includes the number of parts as a suffix.
- The S3 `DeleteObjects` operation no longer trims spaces from around keys before deleting. Previously, files with leading or trailing spaces could not be deleted, and if an object with the trimmed key existed, it would be deleted instead. The S3 `DeleteObject` operation was not affected by this.
- Fixed presigned URL support for the S3 `ListBuckets` and `ListObjects` operations.
- Uploads will automatically infer the `Content-Type` based on the file body if one is not explicitly set in the `PutObject` request. This functionality will come to multipart operations in the future.
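Inferring a content type from the body typically means checking magic bytes at the start of the file. R2's actual detection logic is not public, so the toy sniffer below is purely an illustration using a few well-known signatures:

```python
# Well-known file signatures (magic bytes) mapped to MIME types.
_MAGIC = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"%PDF-", "application/pdf"),
]

def sniff_content_type(body: bytes, default: str = "application/octet-stream") -> str:
    """Guess a MIME type from leading magic bytes, falling back to a default."""
    for magic, mime in _MAGIC:
        if body.startswith(magic):
            return mime
    return default

print(sniff_content_type(b"%PDF-1.7 ..."))
```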
- Fixed S3 conditionals to work properly when provided the `LastModified` date of the last upload; bindings fixes will come in the next release.
- `If-Match`/`If-None-Match` headers now support arrays of ETags, weak ETags, and wildcard (`*`) as per the HTTP standard and undocumented AWS S3 behavior.
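Per RFC 7232, `If-Match` uses strong comparison (a weak ETag never matches), while `If-None-Match` uses weak comparison (the `W/` prefix is ignored), and `*` matches any existing representation. A sketch of that standard evaluation logic, as an illustration of the HTTP semantics rather than R2's exact code:

```python
def if_match_ok(stored: str, header_tags: list[tuple[str, bool]]) -> bool:
    """If-Match: succeeds if any listed tag strongly matches the stored strong ETag.
    Each header tag is (value, weak)."""
    return any(tag == "*" or (not weak and tag == stored)
               for tag, weak in header_tags)

def if_none_match_ok(stored: str, header_tags: list[tuple[str, bool]]) -> bool:
    """If-None-Match: succeeds only if no listed tag weakly matches."""
    return not any(tag == "*" or tag == stored for tag, weak in header_tags)

# W/"v1" vs stored "v1": fails If-Match (strong comparison required)
# and also fails If-None-Match (a weak match was found).
print(if_match_ok("v1", [("v1", True)]), if_none_match_ok("v1", [("v1", True)]))
```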
- Added a dummy implementation of the following operation that mimics the response that a basic AWS S3 bucket will return when first created: `GetBucketAcl`.
- Added dummy implementations of the following operations that mimic the response that a basic AWS S3 bucket will return when first created: `GetBucketVersioning`, `GetBucketLifecycleConfiguration`, `GetBucketReplication`, `GetBucketTagging`, `GetObjectLockConfiguration`.
- Fixed an S3 compatibility issue for error responses with the MinIO .NET SDK and any other tooling that expects no `xmlns` namespace attribute on the top-level `Error` tag.
- List continuation tokens prior to 2022-07-01 are no longer accepted and must be obtained again through a new `list` operation.
- The `list()` binding will now correctly return a smaller limit if too much data would otherwise be returned (previously this would return an `Internal Error`).
- Improvements to 500s: we now convert errors, so things that were previously concurrency problems for some operations should now be `TooMuchConcurrency` instead of `InternalError`. We've also reduced the rate of 500s through internal improvements.
- `ListMultipartUpload` correctly encodes the returned `Key` if the `encoding-type` is specified.
- S3 XML documents sent to R2 that have an XML declaration are no longer rejected with `400 Bad Request`/`MalformedXML`.
- Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). The response now contains an XML declaration tag prefix, and the `xmlns` attribute is present on all top-level tags in the response.
- Beta `ListMultipartUploads` support.
- Support the `r2_list_honor_include` compat flag coming in an upcoming runtime release (default behavior as of the 2022-07-14 compat date). Without that compat flag/date, `list` will continue to function implicitly as `include: ['httpMetadata', 'customMetadata']` regardless of what you specify.
- `cf-create-bucket-if-missing` can be set on a `PutObject`/`CreateMultipartUpload` request to implicitly create the bucket if it does not exist.
- Fix S3 compatibility with MinIO client spec non-compliant XML for publishing multipart uploads. Any leading and trailing quotes in `CompleteMultipartUpload` are now optional and ignored, as this seems to be the actual non-standard behavior AWS implements.
- Unsupported search parameters to `ListObjects`/`ListObjectsV2` are now rejected with `501 Not Implemented`.
- Fixes for listing:
  - Fix listing behavior when the number of files within a folder exceeds the limit (you'd end up seeing a CommonPrefix for that large folder N times, where N = number of children within the CommonPrefix / limit).
  - Fix a corner case where listing could cause objects sharing the base name of a "folder" to be skipped.
  - Fix listing over some files that shared a certain common prefix.
- `DeleteObjects` can now handle 1000 objects at a time.
- An S3 `CreateBucket` request can specify `x-amz-bucket-object-lock-enabled` with a value of `false` and not have the request rejected with a `NotImplemented` error. A value of `true` will continue to be rejected, as R2 does not yet support object locks.
- Fixed a regression for some clients when using an empty delimiter.
- Added support for S3 pre-signed URLs.
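S3 presigned URLs use the query-string flavor of SigV4 signing: the request's canonical form is hashed, signed with a key derived from the secret, and the signature is appended as a query parameter. The sketch below shows that flow in simplified form; the host, path, and credentials are placeholders, and real applications should presign with an S3 SDK rather than hand-rolled code:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(host: str, path: str, access_key: str, secret_key: str,
                region: str = "auto", expires: int = 3600) -> str:
    """Simplified SigV4 query-string presign for a GET (service "s3")."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    # Canonical request: method, URI, query, headers, signed headers, payload hash.
    canonical_request = "\n".join([
        "GET", quote(path, safe="/"), query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key by chaining HMACs over the scope components.
    key = b"AWS4" + secret_key.encode()
    for step in (datestamp, region, "s3", "aws4_request"):
        key = hmac.new(key, step.encode(), hashlib.sha256).digest()
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}{quote(path, safe='/')}?{query}&X-Amz-Signature={signature}"

# Placeholder account, bucket, and credentials -- purely illustrative.
url = presign_get("example.r2.cloudflarestorage.com", "/my-bucket/hello.txt",
                  "AKIAEXAMPLE", "not-a-real-secret")
```

Note the `region` defaults to `auto`, matching the region alias R2 enforces in SigV4 signing.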
- Fixed a regression in the S3 API `UploadPart` operation where `TooMuchConcurrency` and `NoSuchUpload` errors were being returned as `NoSuchBucket`.
- Fixed a bug with the S3 API `ListObjectsV2` operation not returning empty folder/s as common prefixes when using delimiters.
- The S3 API `ListObjectsV2` `KeyCount` parameter now correctly returns the sum of keys and common prefixes rather than just the keys.
- Invalid cursors for list operations no longer fail with an `InternalError` and now return the appropriate error message.
- The `ContinuationToken` field is now correctly returned in the response if provided in an S3 API `ListObjectsV2` request.
- Fixed a bug where the S3 API `AbortMultipartUpload` operation threw an error when called multiple times.
- Fixed a bug where the S3 API's `PutObject` or the `.put()` binding could fail but still show the bucket upload as successful.
- If conditional headers are provided to the S3 API `UploadObject` or `CreateMultipartUpload` operations, and the object exists, a `412 Precondition Failed` status code will be returned if these checks are not met.
- Fixed a bug where using `Accept-Encoding` in `SignedHeaders` when sending requests to the S3 API would result in a `SignatureDoesNotMatch` response.
- Fixed a bug where requests to the S3 API were not handling non-encoded parameters used for the authorization signature.
- Fixed a bug in the S3 API where number-like keys were being parsed as numbers instead of strings.
- Add support for S3 virtual-hosted style paths, such as `<BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com`, instead of path-based routing (`<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>`).
- Implemented `GetBucketLocation` for compatibility with external tools; this will always return a `LocationConstraint` of `auto`.
- S3 API `GetObject` ranges are now inclusive (`bytes=0-0` will correctly return the first byte).
- S3 API `GetObject` partial reads return the proper `206 Partial Content` response code.
- Copying from a non-existent key (or from a non-existent bucket) to another bucket now returns the proper `NoSuchKey`/`NoSuchBucket` response.
- The S3 API now returns the proper `Content-Type: application/xml` response header on relevant endpoints.
- Multipart uploads now have a `-N` suffix on the ETag representing the number of parts the file was published with.
- `UploadPart` and `UploadPartCopy` now return proper error messages, such as `TooMuchConcurrency` or `NoSuchUpload`, instead of 'internal error'.
- `UploadPart` can now be sent a 0-length part.
- When using the S3 API, an empty string and `us-east-1` will now alias to the `auto` region for compatibility with external tools.
- `GetBucketEncryption`, `PutBucketEncryption` and `DeleteBucketEncryption` are now supported (the only supported value currently is `AES256`).
- Unsupported operations are explicitly rejected as unimplemented rather than implicitly converting them into `ListObjectsV2`/`PutBucket`/`DeleteBucket` respectively.
- S3 API `CompleteMultipartUploads` requests are now properly escaped.
- Pagination cursors are no longer returned when the number of keys in a bucket is the same as the `MaxKeys` argument.
- The S3 API `ListBuckets` operation now accepts `cf-max-keys`, `cf-start-after` and `cf-continuation-token` headers, which behave the same as the respective URL parameters.
- The S3 API `ListBuckets` and `ListObjects` endpoints now allow `per_page` to be 0.
- The S3 API `CopyObject` source parameter now requires a leading slash.
- The S3 API `CopyObject` operation now returns a `NoSuchBucket` error when copying to a non-existent bucket instead of an internal error.
- Enforce the requirement for `auto` in SigV4 signing and the `CreateBucket` `LocationConstraint` parameter.
- The S3 API `CreateBucket` operation now returns the proper `location` response header.
- The S3 API now supports unchunked signed payloads.
- Fixed `.put()` for the Workers R2 bindings.
- Fixed a regression where key names were not properly decoded when using the S3 API.
- Fixed a bug where deleting an object and then another object which is a prefix of the first could result in errors.
- The S3 API `DeleteObjects` operation no longer returns an error even though an object has been deleted in some cases.
- Fixed a bug where `startAfter` and `continuationToken` were not working in list operations.
- The S3 API `ListObjects` operation now correctly renders `Prefix`, `Delimiter`, `StartAfter` and `MaxKeys` in the response.
- The S3 API `ListObjectsV2` now correctly honors the `encoding-type` parameter.
- The S3 API `PutObject` operation now works with `POST` requests for `s3cmd` compatibility.