Amazon is launching S3 Files, which combines the popular object storage service with file system access. File-based applications, AI agents, and ML teams can now access S3 data directly, without code changes or data duplication.
Organizations that store analytics data in S3 and use data lakes have previously faced a fundamental problem: file-based tools could not access that data directly. The only solution was to duplicate data to a separate file system or build complex synchronization pipelines.
File system and object storage in one
S3 Files is built on Amazon EFS and automatically translates file system operations into S3 requests. Applications work with S3 data without code changes, and the data never leaves S3. Thousands of compute resources (instances, containers, and functions) can connect simultaneously to the same file system, while the data remains accessible through the S3 APIs. No migration is required: S3 Files works directly with existing buckets.
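The practical consequence of "file system operations without code changes" can be sketched with plain POSIX-style file I/O. The snippet below is a minimal illustration, not an S3 Files API: it runs here against a local temporary directory, and the mount path mentioned in the comments (`/mnt/my-bucket/...`) is a hypothetical example, since the announcement does not specify mount conventions.

```python
import tempfile
from pathlib import Path

def append_event(log_path: Path, line: str) -> str:
    """Append a line to a log file and return the file's full contents.

    Only standard file I/O is used -- no S3 SDK calls. With a file system
    view of a bucket, `log_path` would simply live under the mount point
    (e.g. a hypothetical /mnt/my-bucket/logs/events.log) and this code
    would run unchanged.
    """
    with log_path.open("a") as f:
        f.write(line + "\n")
    return log_path.read_text()

# Demonstrated against a local temp directory; the point is that the
# application code is path-agnostic.
with tempfile.TemporaryDirectory() as d:
    log = Path(d) / "events.log"
    append_event(log, "job started")
    contents = append_event(log, "job finished")
    print(contents)
```

Because the code only depends on a path, pointing it at S3-backed storage would be a deployment change (where the directory is mounted), not an application change.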
This is relevant for a wide range of workloads. AI agents retain state and share it across pipelines. ML teams run training and data preparation without first copying or staging files. And analytics teams access their data lake directly through file system tools. S3 Files caches actively used data for low latency and delivers up to several terabytes per second of read throughput.
S3 Files is now generally available in 34 AWS regions, including the European regions of Frankfurt, Zurich, Stockholm, Milan, Spain, Ireland, London, and Paris.