Basic Steps to Create and Configure a Bucket
Create a Bucket
1. Give the bucket a name
- The name must be globally unique
- Bucket names can be 3-63 characters long and can include lowercase letters, numbers, dots (.), and hyphens (-)
- Bucket name must begin and end with a letter or number
- The combination of bucket name, key, and version ID uniquely identifies an object
- https://bucket-name.s3.amazonaws.com/folder-name/filename.zip
- You can use this unique URL to reference objects within the bucket
2. Choose a Region
- Region will default to the Region that's currently selected for your account.
- Choose a Region close to you to minimize latency and costs
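A minimal boto3 sketch of the two steps above; the bucket name and Region are placeholder values, not ones from the lecture:

```python
import boto3

# The Region chosen here is where the bucket will live
s3 = boto3.client("s3", region_name="ap-northeast-2")

s3.create_bucket(
    Bucket="my-example-bucket-2023",  # must be globally unique, 3-63 chars
    # Required for every Region except us-east-1
    CreateBucketConfiguration={"LocationConstraint": "ap-northeast-2"},
)
```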
Configure the Bucket
Object Ownership
- controls whether access is managed with access control lists (ACLs)
- ACLs are lists of the AWS accounts and groups that are granted access
- Default = ACLs turned off (the bucket owner owns every object in the bucket)
Block Public Access
- public access allows anyone outside your AWS account to view and use your objects
- By default, all public access is blocked
Versioning
- with versioning, you can recover objects that are accidentally deleted or overwritten
- disabled by default
- the versioning state applies to all of the objects in that bucket
- when you enable versioning, all new objects are versioned and given a unique version ID
- After you version-enable a bucket, it can never return to an unversioned state
- but you can suspend versioning on that bucket (see the boto3 sketch below)
- Disadvantage: more storage needed, more costs charged
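Assuming the same placeholder bucket, enabling (or later suspending) versioning is a single call:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning; every new object now gets a unique version ID.
# To pause it later, call again with {"Status": "Suspended"} -- the bucket
# never returns to a fully unversioned state.
s3.put_bucket_versioning(
    Bucket="my-example-bucket-2023",
    VersioningConfiguration={"Status": "Enabled"},
)
```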
Tags
- You can add tags to your bucket for tracking
- tag key is the name of the tag and must be unique
- tag value is optional and doesn't have to be unique
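A sketch of tagging the same placeholder bucket; the key/value pair is made up for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Tag keys must be unique within the set; values don't have to be
s3.put_bucket_tagging(
    Bucket="my-example-bucket-2023",
    Tagging={"TagSet": [{"Key": "project", "Value": "cloud-course"}]},
)
```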
Work with objects
Upload Objects
- Uploads through the console are limited to 160 GB per object
- for larger objects, use the AWS CLI
- You can choose the storage class for the objects
- the default class is S3 Standard; you can change the storage class of objects at any time
- You can use multipart upload to upload a single object as a set of parts
- each part is a contiguous portion of the object's data
- after all parts are uploaded, S3 assembles the parts and creates the object
- consider multipart upload when your object is over 100 MB (see the sketch after this list)
- You can reference an object by copying the URL of the object
- You can download individual objects in the console
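A sketch of the multipart behavior using boto3's managed transfer; the file, bucket, and key names are placeholders. `upload_file` splits the object into parts automatically once it crosses the threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Objects over ~100 MB are good multipart candidates, so set the threshold
# there; parts are uploaded separately and reassembled by S3 at the end
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # size of each part
)

s3.upload_file(
    "backup.zip",                     # local file (placeholder)
    "my-example-bucket-2023",
    "folder-name/backup.zip",         # object key
    Config=config,
    ExtraArgs={"StorageClass": "STANDARD_IA"},  # optional non-default class
)
```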
Delete Objects
- You can delete objects from the console
- In a bucket with S3 versioning enabled, you can enable multi-factor authentication (MFA) delete
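A sketch of deleting the placeholder object with boto3; what happens depends on the bucket's versioning state:

```python
import boto3

s3 = boto3.client("s3")

# In a version-enabled bucket this only adds a delete marker;
# older versions remain recoverable
s3.delete_object(Bucket="my-example-bucket-2023", Key="folder-name/backup.zip")

# Permanently removing one version requires naming it explicitly
# (and providing the MFA token, if MFA delete is turned on)
s3.delete_object(
    Bucket="my-example-bucket-2023",
    Key="folder-name/backup.zip",
    VersionId="3sL4kqtJlcpXroDTDmJ...",  # placeholder version ID
)
```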
Additional Features
Lifecycle Rules
Transition action
- define when objects transition from one storage class to another
Expiration action
- define when objects expire and are deleted
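One way to express both actions in a single rule with boto3; the prefix and day counts are illustrative, not from the lecture:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket-2023",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # rule only applies under this prefix
            # Transition action: move to a colder storage class after 30 days
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # Expiration action: delete a year after creation
            "Expiration": {"Days": 365},
        }]
    },
)
```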
Replication Rules
- Replication offers automatic copying of objects across S3 buckets
- Helps you maintain copies of objects in multiple Regions, in different storage classes, and under different ownership
Cross-Region Replication
- Copy objects across S3 buckets in different AWS Regions
- Use case
- disaster recovery
- minimize latency by maintaining copies that are geographically closer to your users
Same-Region Replication (SRR)
- Copy objects across S3 buckets in the same AWS Region
- Use case
- aggregate logs into a single bucket for processing of logs in a single location
- live replication between production and test accounts that use the same data
- abide by data sovereignty laws
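A sketch of a replication rule in boto3. Both buckets must already have versioning enabled; the bucket names and the IAM role ARN below are hypothetical placeholders (the role would need S3 replication permissions):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",  # placeholder; CRR if the destination is in another
                             # Region, SRR if it's in the same one
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/replication-role",  # hypothetical
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)
```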
Bucket Security
IAM Policy
- By default, all S3 resources are private
- You can grant users, groups, and roles controlled access to S3 and your objects
Bucket Policy
- A bucket policy is attached to the bucket and can grant other AWS accounts or users access to the objects
- Policies are written in JSON
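For example, a minimal policy granting a hypothetical second account read access to the placeholder bucket, attached via boto3:

```python
import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadFromOtherAccount",
        "Effect": "Allow",
        # Hypothetical account ID, for illustration only
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket-2023/*",
    }],
}

s3.put_bucket_policy(Bucket="my-example-bucket-2023", Policy=json.dumps(policy))
```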
Encryption
- client-side encryption
- data is encrypted before you upload it, and you hold all the encryption keys
- for data in transit and data at rest
- server-side encryption
- Amazon S3 encrypts data at the object level when you upload it
- when you download the object, S3 decrypts the data
- only for data at rest
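A sketch of server-side encryption with S3-managed keys (the AES-256 option); bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Make AES-256 server-side encryption the bucket default
s3.put_bucket_encryption(
    Bucket="my-example-bucket-2023",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)

# Or request it per object at upload; S3 decrypts transparently on download
s3.put_object(
    Bucket="my-example-bucket-2023",
    Key="notes.txt",
    Body=b"encrypted at rest",
    ServerSideEncryption="AES256",
)
```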
Moving Large Amounts of Data into S3
S3 Transfer Acceleration
- using the global network of hundreds of CloudFront edge locations
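Acceleration is a bucket-level switch plus a client-side endpoint choice; a sketch with the placeholder bucket:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn acceleration on for the bucket (the name must not contain dots)
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket-2023",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route subsequent transfers through the CloudFront edge network
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("backup.zip", "my-example-bucket-2023", "backup.zip")
```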
AWS Snowcone
- portable, rugged, and secure device for edge computing and data transfer
AWS Snowball
- using physical storage devices
AWS Snowmobile
- an exabyte-scale data transfer service that is used to move extremely large amounts of data to AWS
Additional storage services
Block Storage at AWS
Amazon EC2 instance store
- temporary block-level storage for EC2 instances
Amazon EBS
- detachable storage associated with an AZ
- Data availability
- when a volume is created, it is automatically replicated within its AZ
- Data persistence
- off-instance storage that can persist independently from the life of an instance
- Data encryption
- you can create encrypted EBS volumes that use AES-256
- Data security
- presented as raw, unformatted block devices
- Snapshots
- back up the data on EBS volumes to S3 by taking point-in-time snapshots
- Flexibility
- you can make changes without service interruptions
Amazon EBS Volume Types
- Solid state drives (SSD)
- workloads with frequent access and small I/O size for faster I/O operations per second (IOPS)
- General purpose, Provisioned IOPS
- Hard disk drives (HDD)
- large streaming workloads that need high throughput performance
- Throughput optimized HDD, Cold HDD
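A sketch tying the EBS points together with boto3; the AZ, size, and volume type below are placeholder choices:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-2")

# Volumes live in a single AZ, where they are automatically replicated
volume = ec2.create_volume(
    AvailabilityZone="ap-northeast-2a",
    Size=20,            # GiB
    VolumeType="gp3",   # general purpose SSD
    Encrypted=True,     # AES-256 encryption at rest
)

# Point-in-time snapshot, backed up to S3
ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="nightly backup",  # placeholder description
)
```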
File Storage at AWS
Amazon EFS
- scalable file system that's used with AWS Cloud services and on-premises resources
- Fully managed
- Fully managed by AWS
- Highly available and durable
- by default, EFS stores every file system object across multiple AZs
- Elastic and scalable
- storage capacity grows and shrinks automatically
- Data encryption
- supports encrypting data at rest and in transit
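A sketch of creating a file system with boto3; the token and tag value are placeholders:

```python
import boto3

efs = boto3.client("efs")

# Encrypted at rest; capacity grows and shrinks with the data automatically
fs = efs.create_file_system(
    CreationToken="shared-fs-token",   # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "shared-fs"}],
)
print(fs["FileSystemId"])
```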