Amazon S3 stands for Simple Storage Service. It is an AWS object storage service that provides object-level storage, which means that to modify an object you must re-upload the entire modified file.

Some facts about Amazon S3:

  • Amazon S3 can store a virtually unlimited amount of data, with individual objects limited to 5 TB in size.
  • By default, the data in Amazon S3 is stored redundantly across multiple facilities and devices.
  • Amazon S3 can be accessed via the AWS Management Console or programmatically via the API and SDKs (see the sketch after this list).
  • Storage class analysis can be used to analyze storage access patterns and transition the appropriate data to the right storage class.
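
As a brief illustration of programmatic access, the sketch below uses the AWS SDK for Python (boto3) to list the buckets in an account and upload a small object. The bucket and key names are placeholders, and credentials are assumed to be configured in the environment.

    import boto3

    # Create an S3 client using credentials from the environment or AWS config
    s3 = boto3.client("s3")

    # List the buckets in the account
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])

    # Upload a small object; "example-bucket" and "notes/hello.txt" are placeholders
    s3.put_object(
        Bucket="example-bucket",
        Key="notes/hello.txt",
        Body=b"Hello from Amazon S3",
    )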

Typical use cases for Amazon S3 include:

  • Storing and distributing static web content and media (see the pre-signed URL sketch after this list).
  • As an origin for a content delivery network such as Amazon CloudFront.
  • As a data store for computation and large-scale analytics.
  • Backup and archiving.

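As one way to distribute stored content directly from S3, the sketch below generates a time-limited pre-signed URL with the AWS SDK for Python (boto3); the bucket name, object key and expiry time are placeholder assumptions.

    import boto3

    s3 = boto3.client("s3")

    # Generate a URL that allows downloading the object for one hour;
    # "example-bucket" and "media/video.mp4" are placeholder names.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": "media/video.mp4"},
        ExpiresIn=3600,
    )
    print(url)
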
Data can be moved into Amazon S3 through several options:

  • The AWS Management Console, Command Line Interface (CLI) or API.
  • Uploading directly into an S3 bucket. At the time of writing, a single upload operation is limited to 5 GB, although larger objects can be uploaded through the CLI or API.
  • AWS DataSync – a service that facilitates the movement of data between on-premises storage and Amazon S3.
  • AWS Transfer for SFTP – allows data to be transferred directly to Amazon S3 using the SSH File Transfer Protocol (SFTP).
  • The Multipart Upload API, which can be used to upload large objects (up to 5 TB) in manageable parts using a three-step process: initiate the upload, upload the object parts and complete the multipart upload. Amazon S3 then reassembles the full object from the individual parts (see the sketch after this list).
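
To make the three-step flow concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The bucket name, object key, file name and part size are placeholder assumptions, and error handling (such as aborting a failed upload) is omitted for brevity.

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "example-bucket", "backups/archive.bin"  # placeholder names

    # Step 1: initiate the multipart upload
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = upload["UploadId"]

    # Step 2: upload the object parts (every part except the last must be at least 5 MB)
    parts = []
    with open("archive.bin", "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(8 * 1024 * 1024)  # 8 MB parts, an assumed size
            if not chunk:
                break
            response = s3.upload_part(
                Bucket=bucket, Key=key, PartNumber=part_number,
                UploadId=upload_id, Body=chunk,
            )
            parts.append({"ETag": response["ETag"], "PartNumber": part_number})
            part_number += 1

    # Step 3: complete the upload so S3 reassembles the full object
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )

In practice, boto3's higher-level upload_file helper performs a multipart upload automatically when a file exceeds the configured size threshold.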

Amazon S3 Storage Classes

Amazon S3 provides a choice of storage classes for the objects that are uploaded into buckets. The appropriate storage class depends on the use case and how frequently the data needs to be accessed.
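
As a small sketch, a storage class can be specified per object at upload time with the AWS SDK for Python (boto3); STANDARD_IA is used here purely as an assumed example, and the bucket and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Upload an infrequently accessed object directly into S3 Standard-IA;
    # bucket and key names are placeholders.
    s3.put_object(
        Bucket="example-bucket",
        Key="logs/2023-archive.log",
        Body=b"archived log data",
        StorageClass="STANDARD_IA",
    )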

Reference: Amazon Simple Storage Service Developer Guide