Week 3: Storage & Databases on AWS

Welcome to Week 3! This week, you will learn important concepts for AWS storage services—such as buckets and objects for Amazon Simple Storage Service (Amazon S3), and how Amazon Elastic Block Store (Amazon EBS) is used on AWS. You will also explore databases on AWS, and the use cases for each AWS storage service.

Learning Objectives

  • Create a DynamoDB table
  • Describe the function of Amazon DynamoDB on AWS
  • Explore databases on AWS
  • Create an Amazon S3 bucket
  • Explain when to use each AWS storage service
  • Explain important S3 concepts, such as S3 buckets and objects
  • Describe the function of Amazon EBS on AWS
  • Differentiate between file, block, and object storage

Storage on AWS


Video: Introduction to Week 3

What’s next in your AWS learning journey?

You’ve already covered compute (EC2) and networking (VPC). Now, it’s time to focus on:

  • Storage: Where to put your application’s data.
  • Databases: How to manage that data in an organized and efficient way.

Specific Topics

  • Storage on AWS: Explore the different options AWS offers and when to use each.
  • Amazon S3: Create a bucket to store your employee directory’s images.
  • Databases on AWS: Get introduced to various services, but focus on two:
    • Amazon RDS (Relational Database Service): For traditional database needs
    • Amazon DynamoDB: A NoSQL database for flexibility and scalability

Important Reminders

  • Take the Quizzes! They’ll test your understanding of the concepts.
  • Don’t Skip the Readings: You’ll find valuable information to support the lessons.

  • Hello again. I’m so happy to see that you’ve made it to this next batch of
    lessons in our course. So far, you’ve learned about
    compute and networking on AWS. Time to move on to two
    other major categories, storage and databases. The employee directory application we are building out
    currently has a solid network in place using Amazon VPC and is being hosted on Amazon EC2. The only issue is the app
    doesn’t really work yet because we haven’t set up
    anywhere for it to store the employee information
    or the employee photos. We are going to go ahead and fix that. We will start off by learning about the different
    storage offerings AWS has and compare and contrast them. Then we will create an Amazon S3 bucket in our account that the
    Employee Directory App will use to store employee images. Then we will explore the different
    database services AWS has and there are a lot. Not to worry though we are focusing on two, Amazon
    Relational Database Service or RDS and Amazon DynamoDB. As always, take the quizzes
    and please take a good look at the readings that we have placed between the video lessons. There’s a lot of extra information
    there that could be useful to you as you continue
    your journey with AWS. Great work so far. Keep it up and we’ll see you in a bit.

Video: Storage Types on AWS

Types of Storage for Your App

  • Operating System/App Files: Need fast access and frequent updates.
  • Static Assets (Employee Photos): Accessed often, but rarely changed.
  • Structured Employee Data: Will go in a database (covered later).

Block Storage

  • Data split into fixed-size chunks.
  • Good for:
    • Frequently updated files – allows modification of small portions.
    • Databases and operating systems, which require fast, granular access.

Object Storage

  • Treats each file as a single unit.
  • Good for:
    • Infrequent changes – you modify the whole object at once.
    • Storing static content like images or videos.

Why This Matters

Choosing the right storage type (block vs. object) is crucial for how your application will perform and interact with its data.
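
To make the access-pattern difference concrete, here is a minimal Python sketch. The local-file half stands in for block-style storage (rewrite one small piece in place); the S3 half shows that changing one character in an object means re-uploading the whole thing. The file name and bucket name are placeholders, not values from the course.

```python
import boto3

# Block-style access: seek to the byte you want and rewrite it in place;
# only the block containing that byte changes.
with open("one-gb-file.txt", "r+b") as f:
    f.seek(42)       # position of the single character to change
    f.write(b"X")    # rewrites just that piece of the file

# Object storage treats the file as one unit: to change one character,
# you upload the entire object again (bucket/key are placeholders).
s3 = boto3.client("s3")
with open("one-gb-file.txt", "rb") as f:
    s3.put_object(Bucket="example-bucket", Key="one-gb-file.txt", Body=f)
```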

Next Steps

The course will dive into specific AWS storage services:

  • Look at the provided notes to refresh your understanding of storage types.
  • This will help you match the AWS services to the appropriate use cases.

  • The next thing we need to configure for our employee directory
    app is the storage. Our application requires several types of storage for its data. For one, we need to store the
    operating system, software, and system files of our app. We also need to store static assets, like photos for the employee headshots, and then we have more structured data, such as the name, title, and location of each employee, as well. All of that needs a home. The structured data usually
    requires a database, which we’ll talk about later this week, so for now we’ll focus on storing the application files as well as the static content. There are two main types of storage that we can use to store this data, block and object. Here’s the difference. Let’s say that I have a one
    gigabyte file with text in it. If I’m storing this in
    block storage, what happens is that this file is split into fixed size chunks of data and then stored. Object storage, on the other hand, treats each file like
    a single unit of data. This might seem like a small difference, but it can change how you
    access and work with your data. Let’s say I want to change one character out of that one gigabyte file. If my file is stored in block storage, changing that one character is simple, mainly because we can change the block, or the piece of the file
    that the character is in, and leave the rest of the file alone. In object storage, if I want
    to change that one character, I instead have to update the entire file. Let’s take these two types of
    storage and access patterns and try to apply them to
    the data we want to store. For example, our static data,
    like the employee photos, will most likely be accessed
    often, but modified rarely. Therefore, storing in
    object storage is fine. For more frequently updated data or data that has high transaction rates, like our application or system files, block storage will perform better. In this section of the course,
    we’ll discuss both block and object AWS storage services
    and how they’ll interact with our employee directory application. Before we do that, take
    a look at the notes to get a refresher of the
    different types of storage. That way you can easily
    match the storage type to the AWS storage service
    that we talk about.

Reading 3.1: Storage Types on AWS

Video: Amazon EC2 Instance Storage and Amazon Elastic Block Store

Types of Block Storage for EC2

  • Instance Store
    • Built-in directly to the physical server the instance runs on.
    • Pros: Very fast access speeds.
    • Cons: Temporary – data is lost if the instance stops or terminates. Not ideal for data you need to keep long-term.
  • Amazon Elastic Block Store (EBS)
    • Network-attached volumes, configured separately from EC2 instances.
    • Pros: Persistent storage – data remains even if your instance goes down. You can also detach and move EBS volumes between instances.
    • Cons: Slightly slower than Instance Store due to network connection.

EBS Volume Types

  • SSD-backed: Generally faster, better for frequent access workloads.
  • HDD-backed: Slower, but more cost-effective for less frequently used data.

Important Note: Backups are Essential

  • Even EBS, being persistent, needs backups to protect against data loss.
  • Use EBS Snapshots: Incremental backups for easily restoring EBS volumes in case of issues.
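
For reference, here is a minimal boto3 sketch of both ideas: creating an SSD-backed volume, then backing it up with an incremental snapshot. It assumes configured AWS credentials; the region, Availability Zone, and size are illustrative.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# SSD-backed ("gp3") suits frequently accessed workloads; swap in "st1"
# for an HDD-backed, throughput-oriented volume.
volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",  # volumes live in a single AZ
    Size=20,                        # GiB
    VolumeType="gp3",
)

# EBS snapshots are incremental backups you can restore new volumes from.
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="Backup of the employee directory data volume",
)
print(snapshot["SnapshotId"])
```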

  • When you launch an EC2 instance you’re going to need some kind of block storage to go with it. This block storage can
    be used as a boot volume for your operating system
    or a separate data volume. For example, think about your laptop. With a laptop you store
    your data in drives, and those drives are either built-in internally to your laptop
    or connected externally. EC2 instances have the same options as far as block storage goes. The internal storage is
    called Instance Store and the external connected
    storage is called Amazon Elastic Block Store or Amazon EBS. Let’s talk about Instance Store first. Instance Store is a form of
    directly attached storage which means the underlying
    physical server has at least one storage unit
    directly attached to it. This direct attachment is
    also the main advantage of using this form of storage. Because it’s so close
    to the physical server it can be very fast and
    respond very quickly, but while it can be very fast, there is also one big downside. With Instance Store
    being directly attached to an EC2 instance, its lifecycle is tied
    to that of the instance. That means if you stop
    or terminate an instance all data in the Instance Store is gone. It can no longer be used or accessed. Naturally there are many use cases where you want the ability to keep data, even if you shut an EC2 instance down. This is where EBS volumes come in. These volumes, as the name implies, are drives of a user configured size that are separate from an EC2 instance. The drives are simply
    network attached storage for your instances. You can think of it as similar to how you might attach an
    external drive to your laptop. You can attach multiple EBS
    volumes to one EC2 instance, and then you can configure
    how to use that storage on the OS of the EC2 instance. When I connect that EBS
    volume to my instance, my instance now has a
    direct communication line to the data in that volume. Nobody else can directly
    talk to that volume so that it maintains secure communication. You need an EC2 instance to
    access data on an EBS volume. If I decided I want to use that EBS volume with a different instance,
    that’s no problem. We can stop the instance,
    detach the volume, and then attach it to another
    instance in the same AZ. Much like you can unplug
    your drive from a laptop, and plug it into another one. Or depending on the instance
    type and EBS volume we’re using we may be able to attach it to multiple instances at the same time, which is called EBS Multi-Attach. And perhaps the most
    important similarity is that an EBS volume is
    separate from your instance. Just like an external drive
    is separate from your laptop. That means if an accident happens, and the instance goes down you still have your
    data on your EBS volume. This is what we refer to
    as persistent storage. You can stop or terminate your instance, and your EBS volume can still
    exist with your data on it. EBS is often the right storage type for workloads that require
    persistence of data. However, the question
    typically comes down to which EBS volume type do I use? That’s right. There are many different types of volumes, but they’re divided into
    two main volume types: SSD-backed volumes and HDD-backed volumes. In the readings, you’ll learn
    more about these two options. The last thing we’ll
    need to talk about here is backing up data. Things fail, errors happen, so you need to backup
    your data, even in AWS. The way you backup EBS volumes is by taking what we call snapshots. EBS snapshots are incremental backups that are stored redundantly. The idea here is that
    if something goes wrong you can create new volumes
    from your snapshots and restore your data to a safe state.

Reading 3.2: Amazon EC2 Instance Storage and Amazon Elastic Block Store

Video: Object Storage with Amazon S3

Why not use EBS for employee photos?

  • Accessibility: EBS volumes are typically attached to a single EC2 instance, limiting access as your application scales.
  • Capacity: EBS volumes have size limits, whereas you may need to store many large employee photos.

Amazon S3: A Better Solution

  • Standalone Storage: S3 isn’t tied to specific compute instances (EC2). You access it via URLs, making it broadly accessible (“storage for the internet”).
  • Scalability: S3 allows you to store virtually unlimited objects with individual sizes up to 5 terabytes.

Key S3 Concepts

  • Buckets: The fundamental containers in S3 where you place your objects (e.g., photos).
  • Folders (optional): Help organize objects within a bucket.
  • Object Storage: S3 uses a flat structure with unique identifiers to retrieve objects.
  • Distributed Design: S3 stores your data redundantly across multiple facilities for high availability and durability.
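
A minimal boto3 sketch of these concepts, creating a bucket and uploading an object, assuming configured credentials. The bucket name follows the lesson’s example and must be globally unique; the file name is a placeholder.

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")
bucket = "employee-photo-bucket-sr-001"  # must be globally unique and DNS compliant

# Buckets are region specific; outside us-east-1 the region is passed
# as a location constraint.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Objects live in a flat namespace; the key is the unique identifier.
s3.upload_file("employee2.jpg", bucket, "employee2.jpg")

# AWS constructs a URL from the bucket name and object key
# (private by default, so the URL alone returns Access Denied):
print(f"https://{bucket}.s3.us-west-2.amazonaws.com/employee2.jpg")
```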

S3 Access Control

  • Private by Default: Data in S3 is initially only accessible to the creating AWS account.
  • Making Objects Public: While possible, it involves several explicit steps to prevent accidental data exposure.
  • Granular Control: Use the following for more fine-grained access:
    • IAM Policies: Attached to users, groups, and roles to control their S3 actions.
    • S3 Bucket Policies: JSON format policies attached to buckets. These specify permitted or denied actions on the bucket and its objects.
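
As an illustration of the bucket-policy format, here is a hedged boto3 sketch that grants read-only GetObject access to anonymous viewers, one of the examples mentioned in the video. Applying it only succeeds after the bucket’s Block Public Access settings are relaxed; the bucket name is the lesson’s example.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "employee-photo-bucket-sr-001"

# Bucket policies use the same JSON policy language as IAM policies,
# but attach to the bucket and can apply to every object in it.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAnonymousReadOnly",
        "Effect": "Allow",
        "Principal": "*",              # anonymous viewers
        "Action": "s3:GetObject",      # read-only
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# Note: requires Block Public Access to be disabled for this bucket first.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```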

  • So we’ve figured out block
    storage for our application. Now, we need to figure out where to store our employee photos. A natural question is,
    why can’t we just store these photos in Amazon EBS? Well, there’s a few reasons. Number one, most EBS
    volumes are only connected to one EC2 instance at a time. Multi-attach is not supported by all volume and instance types. Eventually, as my app scales, I’ll need to figure out
    how to access those photos from all of my instances, that’s an issue. The second consideration
    is that an EBS volume has size limitations. That means that eventually,
    there will be a limit to how many HD 4K photos I store of my employees in one drive. Ideally, I’d store these photos
    in a more scalable solution. So EBS probably isn’t the right choice. Fortunately, AWS has a service called Amazon Simple
    Storage Service or Amazon S3 that was designed to be a
    standalone storage solution that isn’t tied to compute, meaning you don’t mount
    this type of storage onto your EC2 instances. Instead, you can access
    your data through URLs from anywhere on the web, which gives this service its nickname, storage for the internet. S3 also allows you to store
    as many objects as you’d like with an individual object
    size limit of five terabytes. This makes it ideal for
    our employee photos. Now, let’s talk about how
    we store things in S3. The underlying storage type
    for S3 is object storage. That means that all of
    the same characteristics of object storage are also
    characteristics of S3. So S3 uses a flat structure. It uses unique identifiers
    to look up objects when requested, you get the idea. S3 is also considered distributed storage, meaning that we store your data across multiple different
    facilities within one AWS region. This is what makes S3 designed
    for 99.99% availability and gives it 11 nines of durability. Alright, let’s learn
    about some S3 concepts. The first concept is a bucket. In S3, you store your objects in a bucket. You can’t upload any object,
    not even a single photo to S3 without creating a bucket first. You then place your objects
    inside of these buckets. And if you want to organize
    and arrange those objects, you can also have folders
    inside of the buckets. Let’s create a bucket in the console. When you log in, you’ll type
    S3 in the Service search bar. Once you click on it,
    you’ll see the S3 dashboard showing you all the available
    buckets for every region. I’ll then select Create bucket. What I want to point out here is that buckets are region specific, so we can choose where we
    want to place our bucket. In this case, we want to
    place our bucket close to our infrastructure for our application, which is in the Oregon region,
    so we’ll choose Oregon. Next, we have the name of our bucket. Even though our bucket is
    specific to one region, our bucket name has to be globally unique across all AWS accounts
    and must be DNS compliant. Once you create your bucket, AWS will construct a URL using this name, so it has to be something
    that is reachable over HTTP or HTTPS, meaning there can be no special characters,
    no spaces, et cetera. So for this bucket’s name, let’s choose employee-photo-bucket-sr-001, which is DNS compliant. Now we can leave the rest as defaults. Scroll down and click create. To work with this bucket,
    I’ll need to find it in the list and click on its name. Here we can start uploading our objects. To do this, I’ll click
    Upload and then Add files. Now I can choose any file I
    want and then I’ll click Upload. So as you can see, the
    object upload was successful. If I click on the name of my object, I’ll be able to see quite a bit of detail. I can see the owner, region and size, but most importantly, we can
    see the URL of my object. The first part of this URL
    is simply my bucket URL that AWS created using the bucket name. Then AWS appended the name of my object, also referred to as the
    object key to the bucket URL. Now, what happens if I click on this URL? Hmm, access denied. That’s weird, right? Well, not really. That access denied message
    leads us to a bigger question that most people have
    when they start out on AWS and that’s who can access my data. Earlier I mentioned that
    you can retrieve your data from anywhere on the web
    and people often think that means that anyone
    can retrieve that data. By default, it’s actually the opposite. Everything in S3 is private by default. This means that all S3 resources
    such as buckets, folders and objects can only be viewed by the user or AWS account that created that resource. That’s why I got an access denied message, because I was acting as an
    anonymous user on the internet trying to access an S3
    object that’s private. Now, that’s not to say no object or bucket can be open to the world. They absolutely can be if you
    explicitly choose that option and it’s actually kind of a process to make something public. The reason it’s difficult
    to make your objects public is to prevent accidental
    exposure of your data. Let’s try it. Okay, so if we want to make
    the object we created public, we need to do a few things. Normally, from the object detail page, we would be able to click on
    the object actions dropdown and then select make public using ACL, but it is currently not
    available for us to select. Going to the bucket details page, I can select the permissions tab and see that there is a default setting that blocks all public access. From there I can verify that Block public access bucket setting is set to block all public access. I’ll click Edit, then uncheck the top box that blocks all public access
    and then save the changes. I’ll type Confirm to make the change and then click Confirm to finalize it. From there, I’ll go back to
    the bucket permissions tab and scroll down to the
    Object Ownership section and click Edit for this pane. Instead of the default setting to disable access control lists or ACLs, I’ll select ACLs enabled, then acknowledge and then save the changes. Now I can go back to
    the object details page and select Make public using ACL from the object actions drop down. This will allow me to
    make the object public. To view the object, all
    I need to do is go back to the object details, find the URL, and click on it to view the photo. That’s how you make an object public. That being said, most of
    the time you don’t want your permissions to be all or nothing, to where either nobody can see
    it or everybody can see it. Typically, you want to be more granular about the way you provide
    access to resources. As far as access control, you
    can use IAM policies attached to users, groups, and roles
    to access your S3 content and you can also use a feature
    called S3 bucket policies. S3 bucket policies are
    similar to IAM policies in that they’re both defined
    using the same policy language in a JSON format. The difference is IAM
    policies are attached to users, groups and roles, whereas S3 bucket policies
    are only attached to buckets. S3 bucket policies specify
    what actions you’re allowed or denied on the bucket. For example, you might want
    to attach an S3 bucket policy to it that allows another AWS account to put objects in that bucket. Or you might want to
    create a bucket policy that allows read-only
    permissions to anonymous viewers. S3 bucket policies can
    be placed on buckets and cannot be used for
    folders, or objects. However, the policy that
    is placed on the bucket can apply to every object in that bucket. Alright, to recap, S3 uses
    containers called buckets to store your objects and
    you have several options to control access to those objects through the use of IAM
    policies and bucket policies.

Reading 3.3: Object Storage with Amazon S3

Video: Choose the Right Storage Service

Scenario Types and Best Solutions

  • Video Transcoding (Lambda Function):
    • Need: Store large files long-term.
    • Solution: Amazon S3 (objects, not tied to single compute instance)
  • E-commerce Database (EC2 Instance):
    • Need: Fast, durable storage for frequently accessed data.
    • Solution: Amazon EBS (attached, reliable volumes)
  • Temporary Calculations (Web App):
    • Need: Speed and cost are top priorities, data loss is manageable.
    • Solution: EC2 Instance Store (included with instance, but ephemeral)
  • Shared WordPress Uploads (Multiple Instances):
    • Need: Shared file system accessible by all instances.
    • Solution: Amazon EFS (network file system, not object storage)

Reasons to Rule Out Other Options

  • EBS: Can be costly for large files, and less ideal for Lambda use cases, since it’s compute-attached.
  • Instance Store: Ephemeral (non-permanent), making it unsuitable for critical or long-term data.
  • S3: While great for objects, it’s not a traditional file system that can be easily mounted by multiple instances.

(bright upbeat music) – Thank you, thank you, and welcome back to everyone’s favorite game show, which AWS storage service
do I use for my use case? Today we have one contestant during the final round of our show, and that’s you. – In order to win the grand prize, you must answer the next
three questions correctly. There will also be a bonus question so there’s an opportunity to get extra points. All right, once we read the question, you have five seconds to answer to get points. Let’s get started. – All right, this is the first question. Let’s say you’re a developer and you plan to build out an application to transcode large
media files like videos. You’ll be using an AWS Lambda function to perform the transcoding, but you need a place to store both the original media files and the transcoded media files. Due to regulations, you need to store these files for at least a year. Which of the storage services that we’ve talked about in
this course should you use? You have five seconds to answer. (timer ticking) And the answer is Amazon S3. Why is S3 the best solution here, Morgan? – Well, first of all, the question says that they’re using a Lambda function. Because of that, I’m already ruling EBS out as EBS volumes can only be attached to EC2 instances. Even if they were using EC2, video files are typically large in size, so you may have to use
multiple EBS volumes to store that data which might not be cost
effective in the long run. So EBS is out. Instance storage is out
for the same reason. We’re not using EC2 here but also because we want this data to persist for a year and instance storage is
considered ephemeral. – All right, S3 it is. Let’s put some points on the board for those who got it right. Morgan, tell us the next question. – The next question is, you’re an architect for
an e-commerce company that wants to run their MySQL database on an EC2 instance. This database needs a storage layer to store their order and customer information. The database will frequently
be accessed and updated so the storage layer
needs to respond quickly. It’s important that the
storage is fast and durable. Which AWS storage service should you use? You have five seconds. (timer ticking) And the answer is Amazon EBS. Add 30 points to your score if you got it. – It seems like we’re looking for storage attached to the compute, so why not EC2 instance store? – Right, that’s also an option but it’s not ideal. Since it’s an e-commerce company, their order and customer data
is what drives the business which means the persistence and durability of that
data is really important. Using EC2 instance store would definitely give us
the speed we’re looking for but it wouldn’t give us
the durability needed to store this data long term. So EBS is the right option. – That makes sense. All right, moving on. Two more questions. The next one is you have a web application that needs to write to disk in order to perform certain calculations. The application will store temporary data during the calculation. The most important aspects of this architecture are speed and cost. With five seconds on the clock, which storage solution would you choose? (timer ticking) And the answer is EC2 instance store. – Seph, would you mind telling us how we chose instance store and not EBS? – Sure. Once again, we’re looking for storage attached to
compute in this case. The first thing I want to point out is that this is temporary
data we’re talking about. We’re not looking at a huge amount of data and we also don’t necessarily care about the durability of that data. If the instance fails mid calculation and you want to plan for failure, you can just restart the
calculation from scratch. So durability doesn’t matter, but cost does. By not using EBS and instead using instance store, you may save yourself some costs. That is because instance store is included in the overall EC2 instance price. So instance store is the best option for this use case. – Okay, 30 more points on the board for those of you who got it. Now the final bonus question for an extra 10 points is next. This is a tricky one, and you might have to think outside of the storage options that we’ve talked about so far. The question is, let’s say you’re creating a WordPress site on multiple instances. By default, WordPress stores user uploads on the local file system. Since you want to use multiple instances, you’ll need to move the
WordPress installation and all of the user customizations into a shared storage platform. Which storage option would you use? Five seconds to go. (timer ticking) And the answer is Amazon
Elastic File System or Amazon EFS. This service was covered
in an earlier reading so if you got points for this, great job. For those of you who didn’t, no worries but I would recommend that you go back and review the reading related to file storage on AWS. – Let’s go ahead and
talk about the options. Typically, when we talk
about shared storage systems that multiple instances can access, we think Amazon S3. Why wouldn’t we use that in this case? – Well, S3 isn’t a file system. It’s actually a flat structure for storing objects
instead of a hierarchy. And you can’t mount it onto multiple instances. Because S3 has a different
underlying type of storage, it’s not right for this use case. So by moving the entire
WordPress installation directory onto an EFS file system and mounting it onto each of your EC2
instances when they boot, your WordPress site and all of its data is
automatically stored on a distributed file system that isn’t dependent on
any one EC2 instance. – Nice. Well, you answered all four questions and you win the grand prize of, the satisfaction of
getting them all right. Congratulations and that’s it for today’s show. (bright upbeat music)

Reading 3.4: Choose the Right Storage Service

Video: Demo Creating an Amazon S3 Bucket

Creating an S3 Bucket

  1. Go to the S3 Console: Search for “S3” in the AWS Management Console and click on the service.
  2. Create Bucket: Give the bucket a unique name, keeping it in the same region as your other infrastructure. Leave the defaults and click “Create Bucket”.
  3. Test Upload: Upload an object (like an image) to ensure the bucket is working properly.

Modifying Bucket Permissions

  1. Bucket Policy: Navigate to the bucket’s “Permissions” tab and click “Edit” next to bucket policy.
  2. Paste Provided Policy: Replace the placeholders (insert account number, insert bucket name) with your specific details. Save the changes.
  3. IAM Role Access: This policy grants access to a specific IAM role, allowing the application to interact with the bucket.

Updating the EC2 Instance

  1. Clone Existing Instance: Go to the EC2 Instances view, select the stopped instance from previous exercises. Under “Actions” -> “Image and Templates”, choose “Launch More Like This”.
  2. Update Instance Name: Append something like “-s3” to the name to distinguish the new instance.
  3. Enable Public IP: Ensure the instance will be accessible by setting “Auto-assign Public IP” to “Enable”.
  4. User Data: In the “Advanced Details” section, insert your S3 bucket name into the user data field. This tells the application which bucket to use (sketched in the boto3 example after this list).
  5. Launch Updated Instance: Launch the instance and wait for status checks to pass.
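
Here is the same step as a boto3 sketch: launching an instance whose user data hands the bucket name to the application. The AMI, subnet, key pair, instance profile name, and the PHOTOS_BUCKET variable are all assumptions standing in for values from your own account and the exercise instructions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical user data: export the bucket name, then start the app.
user_data = """#!/bin/bash
export PHOTOS_BUCKET=employee-photo-bucket-sr-963
# ... commands that start the employee directory application ...
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: AMI of the cloned instance
    InstanceType="t2.micro",
    KeyName="app-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 base64-encodes this for you
    IamInstanceProfile={"Name": "EmployeeDirectoryAppRole"},  # placeholder role
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "AssociatePublicIpAddress": True,        # "Auto-assign Public IP: Enable"
    }],
)
print(response["Instances"][0]["InstanceId"])
```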

Testing and Cleanup

  1. Verify Application: Copy the instance’s public IP, paste it into a browser, and confirm your application loads (note: database setup is still needed for full functionality).
  2. Stop Instance and Delete S3 Object: Prevent accidental charges after the exercise.

  • [Instructor] Hey, everyone. Welcome to our exercise walkthrough
    on creating an S3 bucket and then modifying the EC2
    instance holding the application to utilize this S3 bucket. As you can see, I’m already
    in the AWS Management Console and I’m logged in as the admin
    user that was created before. So the first thing that I need to do is to create the S3 bucket that will be utilized by the application. To do that, I’m going to go up to this
    search bar here and type in S3 and then click on the S3 service to be taken to the S3 console. Now that I’m in the S3 console, I’m going to go ahead
    and click Create Bucket. And for my bucket name, I am going to use
    employee-photo-bucket-sr, as my initials, and then dash and just
    three random digits. So I’m going to go with 963. I do want to make sure that my
    bucket is in the same region as the rest of my infrastructure, so I’m going to keep this as
    the Oregon region or US West 2, and then I’m going to keep
    all of the other defaults as they are. From there, I’ll go ahead
    and click Create Bucket. And as you can see, my bucket has been successfully created. Now that the bucket has
    been successfully created, I want to test uploading an object just to make sure that it all works. So I’ll click on the name of the bucket, which will take me to
    the bucket details page. And to upload a file, I can click this upload that’s in the center of the page here, but more often I’m going to
    use the one in the upper right. So I’ll click that upload button and then I’ll click Add Files, and I’ll go ahead and upload
    this employee two photo. After I add that file, I can click Upload. And as you can see, the upload of that file was successful. So I’ll go ahead and click Close there. And in a previous demo, you might have seen a way to make this object publicly accessible, but for this bucket and for
    the exercises moving on, we don’t want this bucket to be just completely open to the world, we want this bucket and these objects to specifically be accessed
    by the application. And so in order to do that, we need to adjust the
    permissions for this bucket, specifically the bucket policy. So since I’m already in
    the bucket details page, I’m going to go ahead and
    click on this Permissions tab. And I want to adjust the
    bucket policy for this bucket. So I’m going to scroll
    down to bucket policy and click the edit button, which will take me to a spot where I can create a bucket policy. So to create this bucket policy, I’m going to take the policy that is in the exercise instructions and paste it here. But before I move forward, before I save this policy, I need to edit a few things. The first thing that I need
    to edit is the account number so that I am utilizing
    the correct account. And that will be done here where it says, insert account number, and I will paste my account number there. And then from there, I will also scroll down and change this area where
    it says, insert bucket name, and put my bucket name there. And I need to make sure to
    do that in both locations and make sure that I’m also
    removing the caret brackets when I do so. So now that I’ve done that, I can go ahead and save these changes and my bucket policy will be created. And now my account with this
    specific role will have access to this bucket and the
    objects within this bucket. So now that I have tested
    uploading an object and created the bucket, as well as providing access
    to the bucket by that role, I need to modify the application
    to utilize this bucket. So to do that, I’m going to go over to EC2 and click on Instances. And as you can see, my stopped instance from
    previous exercise is there, and there’s a cool little
    shortcut that can be used in order to clone this instance so that I’m launching
    basically the exact same thing and making sure that I
    maintain those settings. So to do that, what I’ll do is I’ll select
    this stopped instance and then I’ll go over to Actions, and down to Image and Templates. And then I can click
    launch more like this. What that’s going to do is open up my instance launching page, but already have certain things filled out so that my instance is going to be a clone of the stopped instance
    that I’ve already launched. So what I want to do is
    make sure that I know that this is my updated
    application instance. So to do that, I’m going to append -s3 to the end of the instance name. So it’ll be employee-directory-app-s3. And my image and instance type
    are going to remain the same. So what I can do is just make sure that I’m using the same key pair that I’m using with my other instances. And then I want to scroll down and make sure that this
    is going to be accessible. So I want to change my
    auto-assign public IP to enable, and that’s just going to make sure that I have a public IP address
    to access for this instance. From there, I’m going to
    continue to scroll down, as all of the other
    settings are still correct, and then I’m going to
    expand advanced details. With the expanded advanced details, I can see that the role
    is already associated to this instance. And I’m going to scroll all
    the way down to the user data, and what I need to do is
    put my bucket name in here so that now my application
    knows what bucket to utilize. And so with my bucket name there, I can now launch my instance. And that will just take
    a little bit of time, so I’ll go over to my instances and I’ll wait for that to be launched, just occasionally
    refreshing it to make sure that everything launches correctly. And I want to wait until the status check is
    showing two of two checks passed. So now that I’ve given it some time, I’m going to go ahead
    and click refresh again. And as we can see, there are two of two checks passed. So I just wanna make sure that this application is up and running. I will select this instance
    and copy its public IP address, and then in a new tab, I will go ahead and paste that IP address. And as we can see, the application is up and running. We still can’t interact
    with this application yet because the database
    hasn’t been associated. So that is just to make sure that the application is up and running and we’ll be able to interact
    with it in just a bit. So now that that’s been done, just want to do a couple of
    the closeout tasks for this. And so just make sure that you, if you’re following along or
    if you’ve already done this, that you go ahead and stop this instance, as well as delete the object that was uploaded to the S3 bucket. And that’ll just make sure that you don’t accidentally
    accrue any charges outside of running this exercise. All right, that’s it for this one. And I will see you in the next exercise.

Databases on AWS


Video: Explore Databases on AWS

Relational Databases: The Backbone

  • Relational databases are a common choice for storing structured data, like employee information.
  • They’re widely used across many industries.

Database Management Options on AWS

  1. Databases on EC2:
    • You install and manage the database software on an EC2 instance (like migrating an existing database).
    • Benefits: More control, good for legacy systems.
    • Drawbacks: You handle installation, patching, upgrades, etc.
  2. Amazon RDS (Managed Service):
    • AWS handles the heavy lifting: setup, patching, upgrades, backups.
    • Benefits: Much less operational overhead for you.
    • Focus: You optimize the database itself (structure, queries, security).

Why RDS for the Employee Directory App

  • Lets the team focus on building the app’s features, not managing complex database infrastructure.

Upcoming Lessons

  • If you’re new to databases, the next readings will provide background on relational databases and their history.
  • The employee directory application that we’ve been building out lets you keep track of employee data, like their name, location,
    job title, and badges. The app supports adding new employees, viewing existing employees, as well as editing and deleting employees. All of this data will
    be stored in a database, which we haven’t created yet. According to the architecture diagram, we have chosen Amazon Relational Database Service, or Amazon RDS, to store this data. So let’s talk about
    databases for a moment. Relational databases are widely
    used across all industries and it’s likely your
    company has many databases supporting a variety of
    applications and solutions. Relational database management systems, or RDBMS, let you create, manage, and use a relational database. You can install and operate
    database applications on Amazon EC2 instances,
    and this is a good option for migrating existing databases to AWS. By running databases on EC2, you are already simplifying things from an operational perspective when it comes to on-premises, and it’s a common use case for EC2. When migrating a database
    from on-premises to EC2, you are no longer responsible for the physical infrastructure
    or OS installation, but you are still responsible for the installation
    of the database engine, setting up across multiple AZs with data replication in place, as well as taking on any
    database server management tasks like installing security patches and updating database
    software when necessary. So EC2 makes it easier, but there is a way to lift even more of the operational burden of running relational databases on AWS. What if, instead of
    managing a database on EC2, you could use one of the managed AWS database offerings like Amazon RDS? The big difference between
    these two options is instead of taking care of the instances, the patching, the upgrades, and
    the install of the database, AWS takes care of all of that undifferentiated heavy lifting for you. The task that you are then responsible for is the creation, maintenance, and optimization of the database itself. So you are still in charge
    of creating the right schema, indexing the data,
    creating stored procedures, enabling encryption, managing
    access control, and more. But all the rest of the
    undifferentiated heavy lifting that goes into operating
    a relational database AWS takes care of. To start off this section
    of lessons on databases, we will first cover RDS. The upcoming reading after the video will dive into the history of enterprise relational databases and explain what relational databases are and how they were used. If you aren’t familiar with databases, the readings coming up will give you some useful background information.

Reading 3.5: Explore Databases on AWS

Video: Amazon Relational Database Service

What is Amazon RDS?

  • A managed service that simplifies setting up, running, and scaling relational databases in the cloud.
  • You don’t have to worry about the underlying infrastructure or time-consuming database administration.

Creating a Database with RDS

  • Easy Create: Provides a quick setup using standard best practices for backups and high availability.
  • Database Engines: Choose from MySQL, PostgreSQL, MariaDB, SQL Server, or the AWS-optimized Amazon Aurora.
  • Aurora Benefits: Designed for high performance, scalability, and compatibility with MySQL/PostgreSQL.
  • Instance Selection: Similar to selecting an EC2 instance, pick a size and type based on your workload (a free tier option exists).

High Availability with RDS

  • Multi-AZ Deployment: Launch a secondary database instance in a different Availability Zone (AZ) for redundancy.
  • Automated Failover: RDS manages data replication and failover between the primary and secondary instances. Your application connects to a single endpoint that seamlessly redirects if needed.
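
A minimal boto3 sketch of creating a Multi-AZ MySQL instance, assuming configured credentials. The identifier, instance class, storage, and admin credentials are illustrative, not values from the course.

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance(
    DBInstanceIdentifier="employee-directory-db",  # illustrative name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",   # a free-tier-eligible class
    AllocatedStorage=20,             # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",
    MultiAZ=True,   # standby replica in a second AZ with managed failover
)

# The application connects to a single endpoint; on failover RDS promotes
# the standby and the endpoint keeps working without code changes.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="employee-directory-db"
)
desc = rds.describe_db_instances(DBInstanceIdentifier="employee-directory-db")
print(desc["DBInstances"][0]["Endpoint"]["Address"])
```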

Why Use RDS

  • Simplified Database Management: Reduce the operational burden compared to managing your own database setup.
  • Focus on Your Application: Spend less time on database administration and more on building your product.

  • Amazon RDS is a service
    that makes it easier for you to set up, operate, and
    scale a relational database. Instead of telling you about RDS, I am going to show you. As you can see, I’m already in the Amazon RDS dashboard. We are going to create
    the relational database our employee directory application can use to store employee data. First, we will click Create database, and then we are going to
    select the Easy create option, which gives us the ability to accept the standard best practices for backups and high availability. You could select Standard create if you wanted more granular control to pick and choose the different features of your database setup. Next, you choose the database engine. You can see what is currently
    supported at the time of this filming for
    database engines on RDS. The common engines out there are MySQL, PostgreSQL, MariaDB, Microsoft SQL Server, and then there’s this one, Amazon Aurora. Amazon Aurora is an AWS-specific database that was built to take advantage of the scalability and
    durability of the AWS Cloud. Aurora is designed to
    be drop-in compatible with MySQL or PostgreSQL. It can be up to five times faster
    standard PostgreSQL databases. So if you have some use cases
    that require large amounts of data to be stored with
    high availability, durability, and low latency for data retrieval time, consider using Amazon
    Aurora over a standard MySQL or PostgreSQL RDS instance. In our case, we really only need a simple database without any high performance
    or large storage requirements. So I’m going to select a
    standard MySQL instance. Next up, we will choose the
    database instance size and type. This database instance is similar to how we choose an EC2
    instance size and type. Since this is just a demo, I’m going to select a free
    tier eligible instance. Now we’ll give this database a name and set the admin
    user name and password that would be used to
    connect to the database. Then we will accept the rest
    of the Easy create defaults and we are done with
    this instance creation. You can see that the instance is in the process of booting up, and that will take a
    few minutes to complete. So in the meantime, let’s talk about high
    availability and RDS. When you create an RDS DB instance, it gets placed inside of
    a subnet inside of a VPC, very similar to an EC2 instance. As you learned already
    in a previous lesson, subnets are bound to one AZ, and as a best practice
    for production workloads, we recommend that you always
    replicate your solutions across at least two AZs
    for high availability. With RDS, one DB instance belongs to
    one subnet inside of one AZ, so that isn’t meeting the
    criteria for best practices. Now, before you get worried about managing this all on your own, just know that you can
    easily configure RDS to launch a secondary DB
    instance in another subnet and another AZ using
    RDS Multi-AZ deployment. RDS will manage all of
    the data replication between the two instances
    so that they stay in sync. The other cool thing about
    RDS Multi-AZ deployments is that RDS also manages the
    failover for the instances. One instance is the primary and the other is the secondary database. Your app connects to one endpoint. If the primary instance goes down, the secondary instance gets promoted. The endpoint doesn’t change,
    so no code change is needed. All of the failover
    happens behind the scenes and is handled by RDS. All you do need to do is to make sure that your app can
    reconnect to the database if it experiences a momentary outage by updating any cache DNS lookups and reconnecting to the endpoint which now connects to
    the secondary instance. Pretty cool, if you ask me. All right, and we’re back in the console and we can see that our
    instance is up and running. At this point, you can connect to the database instance and load your database
    schema onto it ready to go and much, much simpler than trying to install and manage
    this all on your own. Using services like RDS make operating databases
    significantly more accessible and lowers the operational overhead that comes along with
    relational database management.

Reading 3.6: Amazon Relational Database Service

Video: Purpose Built Databases on AWS

The Problem with One-Size-Fits-All:

  • Relational databases (like RDS) are powerful, but they can be overkill for simple use cases, adding unnecessary complexity and cost.

AWS’s Purpose-Built Approach:

  • AWS offers a wide range of databases optimized for specific needs. This allows you to choose the ideal fit for your application, avoiding wasted resources and complexity.

Example: Employee Directory

  • A simple key-value lookup is better served by DynamoDB (NoSQL, usage-based pricing) than RDS for this specific use case.

Other Use Cases, Other Solutions:

  • Content Management: Amazon DocumentDB
  • Social Networks/Graphs: Amazon Neptune
  • Immutable Ledgers: Amazon QLDB

Key Takeaway:

AWS’s diverse database offerings let you focus on your application instead of managing complex database infrastructure. The goal is to pick the right tool for the job!

  • Before we move on to
    learning about Amazon DynamoDB, I want to touch on an
    idea that’s important when you’re making architecture decisions for your AWS solutions, choosing the right database to fit your business requirements rather than forcing your data to fit a certain database choice. There is no one size fits all
    database for all purposes. You should pick a database that
    fits your specific use case, and with AWS, you have
    multiple choices for databases. We covered Amazon RDS
    and relational databases, and that was the default option for businesses for a long
    time, but relational databases aren’t the best choice
    for all business needs. AWS creates services to support
    purpose-built databases, meaning that there are
    many database services that AWS offers, and they each were built with a certain use case in mind, and therefore are optimized
    for those use cases. Let’s think about the
    Employee Directory app. We had originally decided
    that we would use RDS for the database, but now after
    thinking about it some more, RDS might not be the
    best fit for our needs. All we are really doing
    is storing one record in a table for each employee. There are no complex relationships
    that need to be managed, and it’s essentially just a lookup table. Relational databases offer all sorts of features that are great for complex schemas and relationships, but those features add overhead that is unnecessarily complex for simple things like a lookup table. On top of that, the RDS option
    we chose charges per hour of instance run time, so we will get charged for the
    running instances regardless of whether we’re using it or not. Our employee directory application will have much higher
    usage during the week and no usage on the weekends. Is there an AWS database offering that better fits our needs? Introducing Amazon DynamoDB. Amazon DynamoDB is a NoSQL
    database that is great for storing key value
    pairs or document data. This service works
    great at a massive scale and provides millisecond latency. It also charges based on
    the usage of the table and the amount of data that
    you are reading from the table, not by the hour or by the second. This is a better option for our simple employee lookup table. Now, besides the employee directory app, there are other use cases that require databases of varying types. What if you are writing
    an application that needs a full content management system? Neither RDS nor DynamoDB
    would be the best solution. Luckily, AWS has quite a number
    of other database offerings. For this use case, you might
    look into Amazon DocumentDB. It’s great for content
    management, catalogs, or user profiles. Let’s think of another use case. What if you had a social network
    that you wanted to track? Keeping track of those
    kind of social webs, figuring out who is connected to who can be difficult to manage in a traditional relational database. So you could use Amazon Neptune,
    a graph database engineered for social networking and
    recommendation engines, but it’s also good for use
    cases like fraud detection, or perhaps you have a supply
    chain that you have to track with assurances that nothing is lost, or
    you have a banking system or financial records that
    require 100% immutability. What you really need
    is an immutable ledger, so perhaps Amazon QLDB, or Quantum Ledger
    Database, is a better fit for this use case. It’s an immutable system of record where any entry
    can never be removed, and therefore is great for industries that need to be audited for regulatory and compliance reasons. It can take a lot of experience and expertise
    to operate databases at scale, and that’s why it’s so
    beneficial to utilize one of the AWS database offerings. You don’t need to be an expert on running all of these
    different types of databases. Instead, you can just use the
    database service that is best for your use case and
    focus on your application and providing value to your end users. You don’t need to build up a ton of in-house expertise to operate a highly scalable
    immutable ledger database. You can just use Amazon QLDB instead. The key thing to understand
    is AWS wants to make sure that you are using the
    best tool for the job. Coming up next, we will
    explore Amazon DynamoDB and get a look at more of the details.

Discussion Prompt: Consider this Scenario

Video: Introduction to Amazon DynamoDB

What is DynamoDB?

  • Serverless NoSQL Database: Amazon handles scaling and infrastructure, you focus on the data.
  • Flexible Schema: Items in a table don’t need identical attributes, good for varying data types.
  • High Performance: Built for speed (millisecond response) and massive scale.

Why Choose DynamoDB?

  • Scalability and Speed: Handles huge workloads with reliably fast performance, unlike some traditional databases that struggle under pressure.
  • Less Rigid Data: Great if you don’t have a strictly defined data structure or it changes frequently.
  • Simple for Some Use Cases: While it’s not good for complex queries across multiple tables, it excels at focused lookups within a single table.

Example from the Video

The employee lookup table was easily switched from a relational database (RDS) to DynamoDB due to:

  • Simple Data Structure: Employee ID as unique key.
  • Focused Use Case: Fast lookups, not complex analysis required.
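
To show the flexible schema and the focused lookup in code, here is a boto3 sketch against the Employees table created later in the demo. The item attributes are illustrative.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table = dynamodb.Table("Employees")  # the table name the app expects

# Items in one table don't need identical attributes (flexible schema).
table.put_item(Item={
    "id": "emp-001",                 # "id" is the partition key
    "name": "Jane Doe",
    "location": "Oregon",
    "job_title": "Developer",
})
table.put_item(Item={
    "id": "emp-002",
    "name": "John Roe",
    "badges": ["Mentor"],            # an attribute the first item lacks
})

# The focused, single-table lookup DynamoDB excels at:
response = table.get_item(Key={"id": "emp-001"})
print(response.get("Item"))
```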

Important Notes:

  • DynamoDB is Purpose-Built: It’s not a one-size-fits-all solution for every database need.
  • Read the Additional Material: Learn more about how DynamoDB works in depth.
  • Let’s talk some more
    about Amazon DynamoDB. At the most basic level,
    DynamoDB is a database. It’s a serverless database, meaning that you don’t need to manage the underlying instances or
    infrastructure powering it. With DynamoDB, you don’t create a database with tables that relate to each other like a relational database. Instead, you create standalone tables. A DynamoDB table is just a place where you can store and query your data. Data is organized into items
    and items have attributes. If you have one item in your table or 2 million items in your table, DynamoDB manages the
    underlying storage for you and you don’t need to worry about scaling the system
    up or down for storage. DynamoDB stores data
    redundantly across AZs and mirrors the data across
    multiple drives for you under the hood, this lessens the burden of operating a highly available database. DynamoDB, beyond being massively scalable, is also highly performant. DynamoDB has a millisecond response time and when you have applications with potentially millions
    of users, having scalability and reliable lightning fast
    response times is important. Now, DynamoDB isn’t a
    normal database in the sense that DynamoDB doesn’t
    require a rigid schema or manage complex
    relationships or constraints. Relational databases like
    the MySQL database we created in an earlier lesson require that you have a well-defined schema in place that might consist of one or many tables that may or may not relate to each other. Relational databases work
    great for a lot of use cases and have been the standard
    type of database historically, however, these types
    of rigid SQL databases can have performance and scaling
    issues when under stress. The rigid schema also makes it so that you cannot have
    variation in the types of data that you store in a single table, so it might not be the
    best fit for a data set that is a little bit less rigid and is being accessed at a high rate. This is where NoSQL databases
    like DynamoDB are handy. NoSQL databases have flexible schemas. With DynamoDB, you can
    add or remove attributes from items in the table at any time. Not every item in the table has
    to have the same attributes. This is great for data sets
    that do have some variation from item to item. The types of queries you
    can run on NoSQL databases tend to be simpler and focus
    on a collection of items from one table, not queries
    that span multiple tables. This along with other factors, including the way the
    underlying system is designed, allow DynamoDB to be very
    quick in response time and highly scalable. So things to remember, DynamoDB is NoSQL. It is purpose-built, meaning
    it has specific use cases and isn’t the best fit for
    every workload out there. Taking a look at our architecture, we modified it to use DynamoDB. We are going to need to
    create an employee table for the app to write and read from. We will create this DynamoDB
    table using the console. And you know what? Let’s get Seph out here to help us out. – Hello. – We changed our minds about using RDS and decided to change it over to DynamoDB. It’s only one table and it’s
    essentially a lookup table for our employees. How hard do you think this
    change is gonna be to make? – Well, our app is actually designed to use either RDS or DynamoDB
    as the backend database so this won’t take long at all. Here, I’ll show you how. I’m in the console and will navigate to the DynamoDB service. From here, all you need to
    do is create a new table. Tables in DynamoDB require you to designate certain attributes as keys that will make an item unique. We will select employee ID
    as the unique identifier for the items in this table. Then we will just accept the
    defaults and create the table. Our app was coded to look for a table specifically called employees so this actually should all work now that the table is created. To test it out let’s
    head over to the website hosted on EC2 and try
    to add a new employee. See, that was nice and easy. Now let’s go back into
    DynamoDB and refresh the page and scan all the items in the table and boom, there it is. It’s really that simple for
    a lookup table like this one. – All right, well, that was easy. Nice. Now, not every use case is this simple. In the reading after this video there will be more information
    about how DynamoDB works so make sure to check that out.
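
To make the flexible-schema idea from this lesson concrete, here is a minimal sketch using boto3 (the AWS SDK for Python). The table name Employees matches the demo later this week, but the item attributes and values here are made up for illustration, not taken from the course app.

    import boto3

    # A sketch of the flexible-schema idea from the video. The table name
    # "Employees" matches the demo later this week; attribute names and
    # values below are illustrative only.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Employees")

    # Two items in the same table with different attributes -- a NoSQL
    # table does not force every item to carry the same columns.
    table.put_item(Item={"id": "101", "name": "Ana", "location": "Seattle"})
    table.put_item(Item={"id": "102", "name": "Raj", "badges": ["Mac User"]})

    # Queries focus on a collection of items from this one table; there
    # are no joins spanning multiple tables.
    for item in table.scan()["Items"]:
        print(item)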

Reading: Reading 3.8: Introduction to Amazon DynamoDB


Reading: Reading 3.9: Choose the Right AWS Database Service


Week 3 Exercise & Assessment


Video: Introduction to Exercise 3

Objective: Set up the storage and database parts of an application to make it functional.

Steps:

  1. Amazon S3 (Storage)
    • Create an S3 bucket.
    • Set up a bucket policy to give your IAM role permissions to interact with the bucket (see the sketch below).
    • Practice uploading an object manually to get used to S3.
  2. Amazon DynamoDB (Database)
    • Create a DynamoDB table to hold employee data.
  3. Application Testing
    • Verify that your application can successfully read from and write data to both the S3 bucket and the DynamoDB table.

Troubleshooting Tip: If you encounter problems, carefully review previous steps to ensure everything was done correctly.
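
For the bucket policy step, the policy itself is just a JSON document attached to the bucket. Below is a minimal sketch in boto3, assuming hypothetical names: substitute your own bucket name and the ARN of the IAM role you use in the lab.

    import json
    import boto3

    # Hypothetical names -- replace with your own bucket and the IAM role
    # attached to the EC2 instance in the lab.
    BUCKET = "employee-photo-bucket-example"
    ROLE_ARN = "arn:aws:iam::111122223333:role/EmployeeDirectoryAppRole"

    # Allow that role to read and write objects in the bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": ROLE_ARN},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            }
        ],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))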

  • Up until this point, you’ve practiced how to launch an EC2 instance, but you couldn’t use the application because the storage and database pieces weren’t set up yet. In this next lab, you’ll add the storage and database components and test that the application is working. To do that, you will
    create an Amazon S3 bucket and create a bucket policy that will allow the IAM role to work with
    the objects in the bucket. Then you will upload an object to the bucket manually
    to get familiar with S3. After that, you will create
    an Amazon DynamoDB table, which will be used to
    store employee information. Once the bucket and table
    are created, you will test that the application
    can read and write data. That is it for this lab. As usual, if you get stuck, try going back a few
    steps in the instructions and make sure that you
    didn’t miss anything.
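
The manual upload in this lab happens in the console, but the equivalent calls in boto3 look like the sketch below (the bucket and file names are hypothetical):

    import boto3

    # Hypothetical names -- use your lab bucket and any local image file.
    BUCKET = "employee-photo-bucket-example"

    s3 = boto3.client("s3")

    # Upload a local file as an object, the same result as dragging a
    # file into the S3 console.
    s3.upload_file("employee_photo.jpg", BUCKET, "employee_photo.jpg")

    # List the bucket to confirm the object landed.
    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        print(obj["Key"], obj["Size"])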

Lab 3: Configuring a Web Application

Video: Demo Creating an Amazon DynamoDB Table

Database Setup for the Employee Directory Application

  1. Launch a New EC2 Instance:
    • Clone an existing instance configuration for convenience ('employee-directory-app-dynamodb')
    • Ensure a public IP is assigned to the instance for accessibility.
    • Verify the application is running on the new instance before proceeding.
  2. Create a DynamoDB Table:
    • Name the table “Employees” for compatibility with the app.
    • Set the partition key as “id” (again, aligns with app’s structure).
    • Leave default settings and create the table (a boto3 sketch follows the key points below).
  3. Test the Application with the Database:
    • Add a new employee record to the directory, including a photo.
    • Verify that the photo is uploaded correctly to the S3 bucket.
    • Check that the new record appears in the DynamoDB table with all the correct details.
  4. Wrap Up:
    • Stop the EC2 instance to avoid unnecessary charges while the database remains active.

Key Points

  • The instructor emphasizes the importance of testing each step to ensure everything is connected properly.
  • DynamoDB is used as the database solution in this example.
  • The application is designed to interact with specific table and key names, highlighting the importance of coordination between the app and database setup.
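
The demo creates the table in the console, but for reference, here is a boto3 sketch of the same table: named Employees with a string partition key id, as the application expects. The on-demand billing mode is an assumption chosen for simplicity; the demo simply accepts the console defaults.

    import boto3

    # A sketch of the table the demo creates in the console: "Employees"
    # with string partition key "id". PAY_PER_REQUEST (on-demand) billing
    # is an assumption for simplicity, not necessarily the console default.
    client = boto3.client("dynamodb")

    client.create_table(
        TableName="Employees",
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )

    # Table creation is asynchronous; wait for ACTIVE before reading/writing.
    client.get_waiter("table_exists").wait(TableName="Employees")
    print("Employees table is ready")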

  • [Instructor] Welcome to our exercise on setting up the database
    for our application. So we’ve already created
    and modified the application, and have created and tested uploading to the S3 bucket. So to get the application fully functional, the next thing to do is to launch a database. But before I launch the database, I want to launch an
    instance that’s ready to use that database so that we can
    test it as soon as it’s done. So to launch that instance,
    I’m going to go over to EC2. And go over to the instances. And just like with the last time, I’m going to use the shortcut
    where I launch a clone of an instance that I
    have already launched. So to do that, I’m going to select my
    employee-directory-app-s3, because that’s the most updated
    version of this application. And then I’m going to go over to actions, image and templates. And then, I’m going to
    launch more like this. Now that I’m on that launch page, what I want to do is rename the instance so that I know I’m working with the correct one. Instead of appending -s3 to this, I’m going to append -dynamodb so that I know that this is the application instance that is testing with the database and not just connecting to the bucket. So now that I’ve adjusted
    that, I can scroll down. I see that most of my
    settings are still there. I want to make sure that
    I’m still using the same key in case I do need to access the instance. And I also want to just verify, even though I know that this works, I just want to verify that
    I am still using the role. One other thing that I want to
    make sure to do is to ensure that the instance
    launches with a public IP. And I scrolled just right over this. And so scrolling back, I
    go to the network settings and auto-assigned public IP, and just make sure that
    I click Enable on that. And that way the instance
    will have a public IP, and I can access it and test it once the instance and the
    database are all fully launched. So now that I’ve done that, I can see that my user data is
    exactly where I left it after adding the bucket. And now I can click Launch instance. As always, give that some time to launch. And I can just go over to the instances and occasionally refresh
    this in order to make sure that my instance has launched. So I’m going to give this a few minutes while waiting for the instance
    to launch and making sure that where it currently
    says, “Initializing,” it will say, “2/2 checks passed.” Now that it’s been a few minutes, I’ll go ahead and refresh the page again. And I can see that two of
    two checks have passed. So I’ll go ahead and select
    my dynamodb labeled instance and copy its public IP address, just because I want to make sure that the application is up
    and running before I proceed. If this weren’t running (as we can see, it is), I would go back and see where I made a mistake in launching the instance, and once the application was successfully running, I would move forward with creating the database. And so since I can see that the base employee
    to go up to the search bar and type in dynamo and click
    on the DynamoDB service. And once I’m here, I’m
    just going to go ahead and click Create table. Since I don’t have any
    tables currently running, it’s just the easiest way to get to this table creation screen. For the table name, I’m
    going to put Employees because the application is set up to work with a database named Employees. So this will make it very easy
    for it to just utilize this. And then for my partition
    key, I’m going to put id. And this is because again,
    the application is set up to utilize this for
    organization within the table. And then, I’m going to
    keep this type as a string. After I’ve done that, all of the default settings can remain, and I’ll just go ahead
    and click Create table. And that takes just a little bit of time, and the table will be
    created in just a bit. So I’ll just give it a
    couple of seconds here. So that took almost no time at all, and the table has
    successfully been created. So now that the table is created, instead of adding items
    directly to this table, what I want to do is test
    the application again. So I will make sure I copy my
    instance’s public IP address. So I’ll go over to my instances and select the instance that was launched for this and copy its IP address. And then, in a new tab I
    will paste that address. And I can see that the
    Employee Directory app is still running, and it is
    currently an empty directory. But I can go ahead and add
    an employee to this directory just to make sure that
    everything is connected. So what I’ll do is I’ll click Add. And I’ll go ahead and put my name here, and then my location, and my job title. I’ll also just select a
    couple of these options here, just so that we can see what it looks like as everything is added to this table and added to this directory. So since I am a Seattle resident, I am also very much a Seattle fan, and I’m very definitely a coffee snob. I will also not just put this
    information into the table, but I will add a file. I’ll add my employee
    photo and open that up. And then once that has all been added, I can go ahead and click Save. And as we can see, I now have an entry in the Employee Directory. So it looks good here. And we can see that I
    am now in the directory. But I want to show that
    this isn’t just something that was added to this location. So, what I will do is show
    that these items were added in the table and in the S3 bucket. So starting with S3, I’ll go over to S3. And then, I will open up the
    bucket that I had created. And as we can see, my
    employee pic has been added, and it’s the employee
    picture that I uploaded. Even though it has its
    own designated name here, this is the object that was uploaded through that application. And then I can also go over to DynamoDB, view my Employees table, and explore the table items. And I can also see that the employee items that were added through
    the directory are here. And so it shows which badges
    I associated with myself, my name, my job title, my location. It shows the name of the object that was uploaded through the directory, and it shows the ID
    specifically for this table, and the partition key that we established. So now that all of that has been done, what I want to do is just go ahead and keep the table running. But I want to go back to
    EC2 and stop the instance so that I’m not accruing
    any additional charges as I prepare to move on to the next stage of this application
    infrastructure development. So I’ll go ahead and stop that instance, and that’s where I’ll go ahead and close this walkthrough out. And I’ll see you in the next one.

Quiz: Week 3 Quiz

What is a typical use case for Amazon Simple Storage Service (Amazon S3)?

A company needs a storage layer for a high-transaction relational database on an Amazon Elastic Compute Cloud (Amazon EC2) instance. Which service should the company use?

True or False: Amazon Elastic Block Store (Amazon EBS) volumes are considered ephemeral storage.

A solutions architect is working for a healthcare facility, and they are tasked with storing 7 years of patient information that is rarely accessed. The facility’s IT manager asks the solutions architect to consider one of the Amazon Simple Storage Service (Amazon S3) storage tiers to store the patient information. Which storage tier should the solutions architect suggest?

True or False: Object storage is the best storage solution for applications that need to frequently update specific small sections of a file.

True or False: A Multi-AZ deployment is beneficial when users want to increase the availability of their database.

Which task of running and operating the database are users responsible for when they use Amazon Relational Database Service (Amazon RDS)?

Which of the following are common use cases for file storage? (Choose TWO.)

True or False: The IT department in a company can attach Amazon Elastic Block Store (Amazon EBS) volumes to Amazon Simple Storage Service (Amazon S3) to store data in a bucket.

Which of the following instance families does Amazon Relational Database Service (Amazon RDS) support? (Choose TWO.)

A solutions architect is working for a small business. The business is looking for a storage service that temporarily stores frequently changing and non-persistent data. This type of data can be deleted during instance stops or terminations. Which service should the solutions architect recommend for this use case?

Which database is a non-relational database that stores data in key-value pairs, and is a good fit for hosting simple lookup tables?

Which core component of Amazon DynamoDB corresponds to a column in a relational database table?

Which AWS database service is best suited for use cases such as social networking or recommendation engines?

