Welcome to AWS Cloud Technical Essentials Week 1! In this week, you will learn the definition of cloud computing and how to describe the cloud value proposition. You will learn how to differentiate between workloads that run on-premises versus in the cloud, and how to create an AWS account. You will also get an overview of Amazon Web Services, including how to differentiate between AWS Regions and Availability Zones, and the different ways that you can interact with AWS. Finally, you will learn best practices for using AWS Identity and Access Management (IAM).
Learning Objectives
- Discover IAM best practices
- Create an AWS account
- Describe the different ways to interact with AWS
- Differentiate between AWS Regions and Availability Zones
- Describe Amazon Web Services
- Differentiate between workloads running on-premises and in the cloud
- Define cloud computing and its value proposition
Welcome to the Course
Video: Welcome to AWS Cloud Technical Essentials
- This course provides an overview of cloud computing and explores various AWS services.
- It is designed for people working in IT or IT-related fields who have a general knowledge of IT topics but want to learn more about the AWS cloud.
- The course covers topics such as compute, networking, storage, databases, security, monitoring, and optimization.
- It includes hands-on examples and a cloud-based application project.
- No coding is required for this course, but you will have access to the source code if you want to explore it further.
- The course also includes written segments to reinforce ideas and provide additional background information.
- Completing all the readings is highly recommended to get the full benefit of the course.
- Hey, everyone. I’m Morgan Willis, Principal
Cloud Technologist at AWS, and I want to welcome you to this course. In this course, you will
learn the key concepts behind cloud computing
and explore AWS services using real-life examples
covering compute, networking, storage, databases, security, and more. This course is intended
for people working in IT or IT-related fields, who have a general knowledge of IT topics, but have yet to learn
much about the AWS cloud. To kick off the course, we will cover the basics
of what the cloud is, the benefits of the cloud,
the AWS global infrastructure, and identity and access management. This will give you a solid foundation for learning the rest of
the more technical topics contained in the course. Then we will focus on
computing, and for this topic, we will dig into the services, Amazon Elastic Compute
Cloud, AWS Container services like Amazon Elastic Container Service, and serverless compute
options like AWS Lambda. Then we will discuss networking in AWS using services like Amazon
Virtual Private Cloud, and other networking
technologies used for creating, securing, and connecting to
your own private network in AWS. For storage, we’ll explore
how and when to use Amazon S3, Amazon Elastic Block Store, and others. For databases, we will
cover many use cases around the different database
services AWS has to offer, but with a special focus on Amazon Relational Database
Service and Amazon DynamoDB. Then finally, we will discuss monitoring and scaling your application. For that, we’ll use Amazon CloudWatch, and Amazon EC2 Auto Scaling, alongside Elastic Load Balancing. We aren’t going to focus on
theory alone in this course. Instead, we will use a hands-on example through a cloud-based application that we will build over the
duration of the course piece by piece. The app we will build is an
employee directory application that stores images and information about fictional employees in a company. There’s no coding
required for this course. You will have access to the source code if you want to explore it further. This course includes
written segments we refer to as readings or notes, to reinforce ideas, dive deeper into topics, as well as provide background information on concepts we did not
cover in the videos. Because of this, I highly suggest that you take the time to
complete all of the readings to get the full benefit of the course. So again, welcome, and
as we say at Amazon, work hard, have fun, and make history.
Video: Meet the Instructors
- The page introduces two Cloud Technologists from AWS, Morgan Willis and Seph, who will be sharing their knowledge with you throughout the course.
- Morgan has a background in software development and has been in the technology field for over 10 years. She enjoys outdoor activities and has a cat named Meowzy who seems to have some knowledge of AWS.
- Seph has been working with the AWS Cloud for close to 15 years and has experience in various industries. He enjoys spending time with his dog named Fluffy, who also seems to have an interest in AWS.
- Both Morgan and Seph are excited to help you navigate the world of AWS in the upcoming lessons.
- Hey everyone. My name is Morgan Willis. I’m a Principal Cloud
Technologist with AWS. What that means is I essentially
build things, learn AWS, and then I get to share
my knowledge with you all. I had a career in software development before I started working in training and certification here at AWS. I really love technology and have been in the field for over 10 years, in roles from technical support
to database administration to web application development. And while it’s true that
I do love technology, I also have a regular life outside of work where I like to hike
in beautiful sun, rain, sleet, or snow. I also like to ski and really enjoy just about any outdoor activity
all seasons of the year. I’m looking forward to
helping you navigate the world of AWS over the coming lessons. And this here is Meowzy,
he’s my trusted sidekick. He spends time with me while
I type away on my computer, and I think he’s picked up a
thing or two over the years. His knowledge of AWS at this point is pretty great for a cat. I think I can even hear
him sneaking on my computer in the middle of the night to build his own solutions on AWS. I have a feeling his knowledge will come in handy for this course. More on that later. – Hey y’all, I’m Seph. I’m a Cloud Technologist with AWS, and I’ve been working with the AWS Cloud for close to 15 years. Like Morgan, I have spent
most of that time learning, building, and sharing my knowledge with customers like yourself. In addition to my experience with AWS, I have worked in a variety of industries and I’ve been on both the data center and the cloud side of infrastructures. Outside of my life in tech, I mostly enjoy spending time with friends, which primarily includes my four-year-old wonder dog named Fluffy. Speaking of Fluffy, sometimes
I think she knows more about AWS than she lets on. Every once in a while, I notice her drawing AWS
architectures in a journal and I wonder where she
learned all of this. She can’t bring me the ball
back when we play fetch, but she always wags her tail
when I mention cloud computing. I think Fluffy has something
to say about cloud computing, so stay tuned for some of
her helpful tips and tricks.
Video: Course Feedback
- The video on this page is about how to get help and support while taking the AWS Cloud Technical Essentials course.
- If you have questions about AWS services or something related to AWS, you can check out Repost, a website where you can ask AWS-related questions and get answers.
- If you find any out-of-date, incorrect, or broken content in the course, you can report the issue using the form provided in the course materials.
- If something goes wrong within your AWS console while working on an exercise or lab, you should not enter a ticket, as the ticket system is meant for reporting issues with the courses.
- For questions about course completion certificates or anything related to the learning platform, you should reach out to the learning platform’s support.
- Hi there. My name is Morgan Willis and I’m a Principal Cloud
Technologist here at AWS. As you’re working your
way through the course, you may want to get in touch
with the AWS community, or reach out to the team
who created this course. This video is to help you understand where to direct your questions. You may have questions
about the AWS services you are learning about, or you might find that you have a specific question about something you are
working on related to AWS. For these types of questions, I highly recommend you check out Repost. Repost can be found at
the URL, repost.aws. And this website gives you
access to a vibrant community where you can ask AWS-related
questions and get answers. Something to keep in
mind about our courses is that AWS innovates at
an extremely fast rate. This means that some of our content can get slightly out of date
when new features are released, or if the AWS console changes. If you see any out of date,
incorrect, or broken content, you can contact us
course creators directly by reporting the issue using
the form that is included in the course materials
following this video. An example of something
you would use this form for is a hands-on tutorial or exercise that has instructions that are out-of-date or are no longer correct. If you have something go
wrong within your AWS console while working through an exercise or lab, this is not something that
you would enter a ticket for, as we will not have access
to your specific AWS account. The ticket system is meant
as a place to report issues with our courses. Finally, if you have a question about a course completion certificate or anything related to
the learning platform you’re taking this course on, please reach out to the learning platform’s support to resolve these issues. I hope this helps you
find the help you need. See you later.
Getting Started with AWS Cloud
Video: Introduction to Week 1
This is an introduction to a cloud computing learning course on AWS. It covers:
Concepts:
- Theory and benefits of cloud computing.
- AWS global infrastructure (regions, availability zones).
- Interacting with AWS services.
- Security and identity/access management.
Sample application:
- Employee Directory web app (CRUD – create, read, update, delete).
- Features: add/edit/delete employees, add photos.
AWS services used:
- Amazon Virtual Private Cloud (VPC) – private network.
- Amazon Elastic Compute Cloud (EC2) – virtual machines for backend code.
- Amazon Relational Database Service (RDS) – database for employee data.
- Amazon Simple Storage Service (S3) – object storage for images.
- Amazon CloudWatch – monitoring app.
- Elastic Load Balancing and Amazon EC2 Auto Scaling – scalability and fault tolerance.
- AWS Identity and Access Management (IAM) – security and identity.
Additional notes:
- Course uses a sample app throughout to demonstrate AWS services.
- Course has additional resources like definitions, tips, and commentary.
Overall, this course covers the basics of cloud computing on AWS with a hands-on approach using a sample application.
- Hey there. I hope you’re excited to learn
about cloud computing on AWS. I’m excited to get started too. So let’s hop in. To kick things off, we
are going to cover some of the foundational concepts
you’ll need to know about when working with AWS. Working with AWS is part theory, part technical knowledge, part vocabulary, and lots of practice and experimentation. These first few lessons are going to help you
establish a little bit from each of those categories. You will learn the theory
behind cloud computing and the benefits of the cloud. This will help you make informed
decisions about the cloud from a high level, and give you some of the reasoning around why and when to use the cloud. Then we will dive into the
AWS global infrastructure, covering regions and availability zones, followed by lessons on how to
interact with AWS services. This lesson is going to give you the technical knowledge
and vocabulary you need to create and discuss AWS architectures, and properly understand AWS
documentation and examples. A lot of what AWS offers can relate back to concepts used in traditional
on-premises computing. And getting started with AWS means comparing these concepts to AWS concepts. After that, we will begin
to discuss security, and identity and access management. This is important to understand
when you’re getting started because as soon as you
create an AWS account, you’ll have some actionable knowledge on how to secure that
account right from the start. Starting off secure is a good place to be. Throughout all the topics, in these next few sections and over the duration
of the entire course, we’ll be using a sample employee
directory web application to demonstrate how AWS services are used. Let’s go ahead and take a look at the Employee Directory app. You can see I’m in the browser, and I want to show you the functionality. This is a basic CRUD app, or
create, read, update, delete. This app keeps track of
employees within a company. So the first thing I’m going
to do is create a new employee. To create an employee, we’ll
give the employee a name, and I’m going to add myself. So I’ll add my name. My location is USA. And then my job title, I’ll enter in Cloud Technologist. And then we can add some
badges for each employee. This is like an employee’s flare, so I’m gonna select Mac User
for myself and Photographer, and then I will click Save. Now, I’m back on the homepage, and I actually forgot to add
a photo for this employee, so let’s go ahead and edit it
and add the employee photo. You can also see this
app gives me the ability to delete employees from the directory. So, those are the features of the app from a user’s perspective. Time to review how we will build this app using AWS services. This application will be
built in a private network using Amazon Virtual Private Cloud or VPC. We will host the
application’s backend code on Amazon Elastic Compute Cloud, or EC2, which is a service that essentially offers
virtual machines on AWS. So let’s go ahead and add
those servers to our diagram. The employee data will
be stored in a database, which will also live inside this network and will be hosted using a service called Amazon Relational
Database Service or RDS. So I’ll go ahead and add
that to the diagram as well. The images for the
employees will be stored using the object storage service, Amazon Simple Storage Service or S3, which allows the unlimited
storage of any type of file, like images in our example. These are the basic building
blocks of our application. We will use Amazon CloudWatch
for monitoring this solution, and we will also want to ensure that our application is
scalable and fault tolerant. So I’m going to go ahead and add Amazon Elastic Load Balancing and Amazon EC2 Auto
Scaling to this diagram. For security and identity, we will be using AWS
Identity and Access Management or IAM, so let’s add that. There’s a lot of pieces on this diagram, but don’t worry, we will
build this app step by step using the AWS Management Console. We will add to this diagram, reference it, and change it throughout
the course to meet our needs and let it evolve as new ideas and
techniques enter our world. One more thing to note about this course before I let you go. If you hear this noise, (notification dings) it means that you are gonna be seeing one of our informational popups on the screen, which convey extra information like word definitions, AWS
best practices, tips, tricks, or general commentary written for you by our lively, furry
sidekicks, Meowzy and Fluffy. They know all the tips and are very helpful little friends to have around during this course. That’s it for now, see you soon.
Reading 1.2: What is AWS?
What is the Cloud?
In the past, companies and organizations hosted and maintained hardware such as compute, storage, and networking equipment in their own data centers. They needed to allocate entire infrastructure departments to take care of them, resulting in a costly operation that made some workloads and experimentation impossible.
As internet usage became more widespread, the demand for compute, storage, and networking equipment increased. For some companies and organizations, the cost of maintaining a large physical presence was unsustainable. To solve this problem, cloud computing was created.
Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. You no longer have to manage and maintain your own hardware in your own data centers. Companies like AWS own and maintain these data centers and provide virtualized data center technologies and services to users over the internet.
To help differentiate between running workloads on-premises versus in the cloud, consider the scenario where your developers need to deploy a new feature on your application. Before they deploy, the team wants to test the feature in a separate quality assurance (QA) environment that has the exact same configurations as production.
If you run your application on-premises, creating this additional environment requires you to buy and install hardware, connect the necessary cabling, provision power, install operating systems, and more. All of these tasks can be time-consuming and take days to perform. Meanwhile, the new product feature’s time-to-market is increasing and your developers are waiting for this environment.
If you ran your application in the cloud, you can replicate the entire environment as often as needed in a matter of minutes or even seconds. Instead of physically installing hardware and connecting cabling, you can logically manage your physical infrastructure over the internet.
Using cloud computing not only saves you time from the set-up perspective, but it also removes the undifferentiated heavy lifting. If you look at any application, you’ll see that some of the aspects of it are very important to your business, like the code. However, there are other aspects that are no different than any other application you might make: for instance the compute the code runs on. By removing repetitive common tasks that don’t differentiate your business, like installing virtual machines, or storing backups, you can focus on what is strategically unique to your business and let AWS handle the tasks that are time consuming and don’t separate you from your competitors.
So where does AWS fit into all of this? Well, AWS provides cloud computing services. Those IT resources mentioned in the cloud computing definition are AWS services in this case. We’ll need to use these AWS services to architect a scalable, highly available, and cost-effective infrastructure to host our corporate directory application. This way we can get our corporate directory app out into the world quickly, without having to manage any heavy-duty physical hardware. There are six main advantages to running your workloads on AWS.
The Six Benefits of Cloud Computing
Pay as you go. Instead of investing in data centers and hardware before you know how you are going to use them, you pay only when you use computing resources, and pay only for how much you use.
Benefit from massive economies of scale. By using cloud computing, you can achieve a lower cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.
Stop guessing capacity. Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes notice.
Increase speed and agility. IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower.
Stop spending money running and maintaining data centers. Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your customers, rather than on the heavy lifting of racking, stacking, and powering physical infrastructure. This is often referred to as undifferentiated heavy lifting.
Go global in minutes. Easily deploy your application in multiple Regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at a minimal cost.
Video: AWS Global Infrastructure
Storing photos in AWS for safekeeping and accessibility
The document talks about storing employee photos in AWS for safekeeping and accessibility. It starts by highlighting the importance of having multiple copies of the photos to prevent data loss in case of laptop failure.
AWS redundancy and disaster recovery
It then explains how AWS ensures data security through redundancy. AWS has clusters of data centers around the world, grouped into Availability Zones (AZs) and further into Regions. Each AZ has redundant power, networking, and connectivity to ensure uptime even if one data center fails. Similarly, Regions are connected with redundant links for disaster recovery in case an entire AZ is affected.
Choosing an AWS Region
When choosing an AWS Region to store your data, you need to consider four factors:
- Compliance: Does your application, company, or country have any regulations that dictate where your data must reside? For example, if your data must be stored within the UK, you must choose the London Region.
- Latency: How close are your IT resources to your user base? If your users are spread across the globe, it’s best to choose a Region closest to the majority of them to minimize latency.
- Price: Pricing can vary between Regions due to different tax structures. Choose a Region that offers the best balance of performance and cost.
- Service availability: Not all new AWS services are immediately available in all Regions. Ensure the Region you choose supports the services you want to use.
Global Edge Network for further latency reduction
Beyond Regions and AZs, AWS also has a Global Edge Network consisting of Edge locations and regional Edge caches. These cache frequently accessed content closer to end users, further reducing latency for geographically distant users.
In conclusion
The document provides a comprehensive overview of storing data in AWS, emphasizing redundancy, disaster recovery, and factors to consider when choosing an AWS Region. It also introduces the Global Edge Network as an additional tool for optimizing data delivery for global audiences.
AWS Global Infrastructure Tutorial: Your Cloud’s Foundation
Welcome to the fascinating world of AWS Global Infrastructure! This tutorial will serve as your guide to understanding the backbone of your cloud deployments, from the physical data centers to the intricate connections that keep your applications running smoothly.
Building Blocks of the Cloud: Regions and Availability Zones
Imagine a vast network of fortresses scattered across the globe, each one meticulously designed to safeguard your data and applications. These fortresses, in the world of AWS, are called Regions. Each Region is a self-contained unit consisting of multiple Availability Zones (AZs). Think of AZs as smaller, secure outposts within a Region, offering redundancy and fault tolerance.
- Regions: Identified by a code (e.g., us-east-1), they provide geographical separation and cater to specific compliance requirements or latency needs (see the sketch after this list).
- Availability Zones: Nestled within Regions, they house data centers with independent power, cooling, and networking. If one AZ faces an outage, your applications in other AZs within the same Region remain unaffected.
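If you want to see these building blocks for yourself, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes boto3 is installed and your AWS credentials are configured; the Region name passed to the client is just an example.

import boto3

# Create an EC2 client; the Region here is an arbitrary example.
ec2 = boto3.client('ec2', region_name='us-east-1')

# List every Region currently enabled for this account.
for region in ec2.describe_regions()['Regions']:
    print(region['RegionName'])

# List the Availability Zones within the client's Region.
for zone in ec2.describe_availability_zones()['AvailabilityZones']:
    print(zone['ZoneName'], zone['State'])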
The Power of Redundancy: Safeguarding Your Data
Redundancy is the mantra of AWS Global Infrastructure. Data is replicated across multiple AZs within a Region, ensuring that even if one AZ goes down, your information remains safe and accessible. This eliminates single points of failure and keeps your applications humming.
Reaching the Globe: The Global Network at Your Fingertips
But what about users scattered across the world? That’s where the AWS Global Network comes in. This intricate web of fiber optic cables connects Regions worldwide, enabling low-latency data transfer and seamless application performance for your global audience.
Beyond Regions and AZs: Introducing the Edge Locations
For applications demanding lightning-fast response times, AWS offers Edge Locations. These strategically placed outposts cache frequently accessed content closer to end users, minimizing the distance data needs to travel. Imagine having a content delivery network right at the doorstep of your users!
Choosing the Right Region: A Strategic Decision
With so many Regions and factors to consider, selecting the right one for your needs can be daunting. But fear not! Here are some key aspects to guide your decision:
- Latency: Where are your users located? Choose a Region closest to them for optimal performance.
- Compliance: Does your industry have specific data residency requirements? Select a Region that adheres to those regulations.
- Pricing: Prices can vary across Regions. Consider your budget and find the best balance between cost and performance.
- Service Availability: Not all AWS services are available in every Region. Ensure your chosen Region supports the services you need.
Exploring the AWS Management Console:
Now that you understand the core concepts, let’s put theory into practice! The AWS Management Console is your one-stop shop for managing your cloud resources. Here, you can visualize Regions and AZs, monitor service health, and configure your infrastructure to meet your specific needs.
Remember:
As you embark on your cloud journey, keep in mind that the AWS Global Infrastructure is constantly evolving. New Regions, services, and features are added regularly, so staying updated is key. Embrace the continuous learning curve, and you’ll unlock the full potential of the cloud for your applications.
Congratulations! You’ve taken your first step towards mastering the AWS Global Infrastructure. With this knowledge as your foundation, you can confidently build, deploy, and manage your cloud applications with scalability, reliability, and global reach. So, go forth and explore the limitless possibilities of the cloud!
Remember, this is just a starting point. Feel free to delve deeper into specific aspects of the AWS Global Infrastructure that pique your interest. There’s a whole world of cloud knowledge waiting to be discovered!
- For our employee directory application, we’ll be using photos of
each of our employees. If we have only one copy of those photos and don’t want to lose them, we have to store them somewhere safe. Currently, the only copy of these photos is saved on my laptop. But if my laptop breaks, what happens? No more photos. I want to make sure this doesn’t happen, so I’m going to upload the photos to AWS to ensure that the copies exist even if my laptop is destroyed. This also allows me to access
my photos from anywhere, my home, my phone, a plane,
on a train, everywhere. When I store these
photos in an AWS service, I’m storing it in a data center somewhere, on servers inside that data center. But if a natural disaster happens, such as an alien coming down from space and destroying a data center, then what do we do? Luckily, AWS has planned for
this event and many others, including natural disasters and other unavoidable alien accidents. The way they plan for it
is through redundancy. AWS has clusters of data
centers around the world. So here AWS would have
a second data center connected to the first through redundant high
speed and low latency links. That way, if the first
data center goes down, the second data center
is still up and running. This cluster of data centers is called an availability zone or AZ. An AZ consists of one or more data centers with redundant power,
networking, and connectivity. Unfortunately, sometimes natural disasters like hurricanes or other disasters might also extend to
impacting an entire AZ, but AWS has planned for that, too, again, using redundancy. Like data centers, AWS
also clusters AZs together and also connects them with redundant high speed
and low latency links. A cluster of AZs is
simply called a region. In AWS, you get to choose the
location of your resources by not only picking an AZ,
but also choosing a region. Regions are generally named by location so you can easily tell where they are. For example, I could
put our employee photos in a region in Northern Virginia called the Northern Virginia Region. So knowing there are many
AWS regions around the world, how do you choose an AWS region? As a basic rule, there are four aspects you need to consider when
deciding which AWS region to use, compliance, latency, price,
and service availability. Let’s start with compliance. Before any other factors, you must first look at your
compliance requirements. You might find that your
application, company, or country that you live in requires you to handle
your data and IT resources in a certain way. Do you have a requirement that your data must live
in the UK boundaries? Then you should choose the
London Region, full stop. None of the rest of the factors matter. Or if you operate in Canada,
then you may be required to run inside the Canada Central Region. But if you don’t have a compliance or regulatory control
dictating your region, then you can look at other factors. For example, our employee photos are not restricted by regulations, so I can continue looking
at the next factor, which is latency. Latency is all about how
close your IT resources are to your user base. If I want every employee around the world to be able to view the
employee photos quickly, then I should place the infrastructure that hosts those photos
close to my employees. We are all bound by the speed of light. Applied to your business, that means that if your
users live in Oregon, then it makes sense to
run your application in the Oregon Region. You could run it in the Brazil Region, but the latency from Oregon to Brazil might impact your users and create a slower load time. But maybe I really want
to run my application or store my employee photos in Brazil. One problem I might run
into is the pricing, which is the next factor we’ll talk about. The pricing can vary
from region to region, so it may be that some regions, like the Sao Paulo Region, are more expensive than others due to different tax structures. So even if I wanted to store
my employee photos in Brazil, it might not make sense
from the latency perspective or the pricing perspective. And then finally, the fourth factor you’ll want to consider is the services you want to use. Often when we create new
services or features in AWS, we don’t roll those services out to every region right away. Meaning, if you want to
begin using a new service on day one after it launches, then you’ll want to make sure
it operates in the region that you’re looking at running
your infrastructure in. To recap, regions, availability zones, and data centers exist in a
redundant, nested sort of way. There are data centers
inside of availability zones and availability zones inside of regions. And how do you choose a region? By looking at compliance, latency, pricing, and service availability. Those are the basics, but it
isn’t the end of the story when it comes to AWS
global infrastructure. We also have the Global Edge Network, which consists of Edge locations
and regional Edge caches. Edge locations and regional Edge caches are used to cache content
closer to end users, thus reducing latency. Consider this scenario. You are a company hosting a website for users all over the world. Even though your website is
being downloaded from all over, it’s hosted out of an AWS region
in North America, say Ohio. Without caching, every user
would need to send a request to the Ohio region where
the data is downloaded, and then the data would be returned to the user and rendered in their browser. If the user is located in
the USA or a nearby country, there may not be much
latency in this process. However, if a user is coming from a place that is located far from the Ohio region, then latency will be greater. Latency is a big hurdle
for many use cases, including web applications. So to reduce this latency, you could use the Edge locations to cache frequently accessed content. When you cache content
at an Edge location, a copy is hosted across the
Edge locations around the world. That way, when a user goes
to retrieve that information, it will come from the
closest Edge location, which will greatly reduce
the latency for that user. You can use services
like Amazon CloudFront to cache content using the Edge locations.
Reading 1.3: AWS Global Infrastructure
Video: Interacting with AWS
Summary: Managing AWS Infrastructure
This video explains how to manage your infrastructure on AWS after it shifts from physical servers to virtual cloud resources.
Three main ways to interact with AWS:
- AWS Management Console:
  - Web-based, point-and-click interface.
  - Easy to use for beginners.
  - No need for scripting or syntax knowledge.
  - Can be inefficient for repetitive tasks.
- AWS Command Line Interface (CLI):
  - Uses terminal commands to interact with the AWS API.
  - Faster and more efficient for repeated tasks.
  - Requires knowledge of AWS syntax.
  - Reduces human error compared to the Console.
- AWS Software Development Kits (SDKs):
  - Libraries for popular programming languages to integrate with AWS services.
  - Most powerful and flexible option.
  - Requires programming skills.
Recommendations:
- Beginners start with the Console.
- Move to CLI for improved efficiency with repetitive tasks.
- Use SDKs for programmatic control and integration with applications.
This course will primarily use the Console for simplicity, but feel free to explore the CLI for deeper learning.
Interacting with AWS: A Beginner’s Guide
Welcome to the captivating world of AWS! This tutorial will equip you with the essential tools and techniques to confidently navigate your cloud journey. Dive in and discover how to interact with AWS and unleash its potential for your applications.
Understanding the Landscape:
Before we jump in, let’s paint a picture of the options at hand. You have three main avenues for interacting with AWS, each catering to different levels of expertise and needs:
- AWS Management Console: This web-based portal is your friendly neighborhood guide. Think of it as a point-and-click wonderland where you can create, manage, and monitor your AWS resources with ease. No coding involved, just intuitive menus and prompts to guide you through the process. It’s perfect for beginners and simple tasks, but be prepared for a slower pace for repetitive actions.
- AWS Command Line Interface (CLI): Now, let’s turn up the dial a notch. The CLI empowers you to interact with AWS through powerful text commands. Think of it as a direct line to the inner workings of your cloud infrastructure. This approach offers unrivaled speed and efficiency, especially for repetitive tasks. Scripting commands lets you automate routine processes, minimizing human error and maximizing productivity. But be warned, the CLI demands familiarity with AWS syntax and isn’t for the faint of heart.
- AWS Software Development Kits (SDKs): For the programming enthusiasts, here’s your playground. SDKs are libraries for popular languages like Python, Java, and Node.js, enabling you to seamlessly integrate AWS services into your applications. Think of it as building Lego blocks with code, connecting your applications to the vast possibilities of AWS with unrivaled flexibility and control. However, this path requires strong programming skills and understanding of cloud architecture.
Choosing Your Weapon:
The best option for you depends on your comfort level and goals. Start with the Console for its user-friendly interface and gentle learning curve. As you gain confidence, the CLI can become your ally for efficiency, while SDKs unlock advanced automation and deep integrations. Don’t be afraid to experiment and find the approach that resonates with you!
Mastering the Tools:
This tutorial will serve as your map in navigating each interaction method. We’ll delve into hands-on exercises to:
- Conquer the Console: Learn to create and manage essential resources like virtual servers, storage buckets, and databases using intuitive clicks and menus.
- Tame the CLI: Unleash the power of commands to automate tasks, configure resources, and gain deeper insights into your AWS environment.
- Embrace the SDKs: Explore code examples and best practices for integrating AWS services into your applications, unlocking limitless possibilities.
Remember:
AWS is a vast and ever-evolving landscape. Embrace the learning curve, explore different tools, and seek resources like documentation and online communities to continuously expand your cloud expertise. This tutorial is just the beginning of your adventure; the path to mastering AWS awaits!
Ready to embark on your journey? Buckle up, grab your chosen tool, and let’s dive into the exciting world of interacting with AWS!
- When you own the infrastructure it’s easy to understand
how you interact with it because you can see it, touch it and work with it on every level. If I have a server that
I’ve stood up in my closet, interacting with that server
is easy because it’s mine. I can touch it. When I remove the ability for
me to touch and see something like when the infrastructure
becomes virtual, the way that I work
with that infrastructure has to change a bit. Instead of physically
managing my infrastructure, now I logically manage it through the AWS Application
Programming Interface, or API. So now when I create, delete
or change any AWS resource whether it’s a virtual
server or a storage system for employee photos, I use
API calls to AWS to do that. You can make these API
calls in several ways but the three main ways we’re
going to talk about in AWS are the AWS Management Console, the AWS Command Line Interface and the AWS Software
Development Kits or SDKs. When people are first
getting started with AWS, they typically use the
AWS Management Console. This is a web-based method that you log into from your browser. The great thing about the console is that you can point and click. By simply clicking and following prompts, you can get started with
some of these services without any previous
knowledge of the service. With the console, there’s no need to worry about scripting or
finding the proper syntax. When you log into the console, the landing page will show you services you’ve recently worked with but you can also choose to view
all of the possible services organized into relevant categories such as compute, database,
storage, and more. If I change the Region to Paris, I’m making requests to
eu-west-3.console.aws.amazon.com or the Paris Region’s web console. After you work with the
console for a while, you may want to move away from the manual creation of resources. For example, in the console, you have to go through multiple screens to set configurations to
create a virtual machine. And if I wanted to create
a second virtual machine I would need to go through
that process all over again. While this is helpful, it also
leaves room for human error. I could easily miss a
checkbox or misspell something or even skip important
settings by accident. So when you get more familiar with AWS, or if you’re working in
a production environment that requires a degree of risk management, you should move to a tool that enables you to script or program these API calls. One of these tools is called the AWS Command Line Interface or CLI. You can use this tool in a couple of ways. One is to download the tool
and then use the terminal on your machine to create
and configure AWS services. Another is to access the CLI through the use of AWS Cloud Shell, which can be done through the console. With both of these options,
instead of having a GUI like the console to interact with, you run commands
using a defined AWS syntax. For example, if I wanted
to launch a virtual machine with the CLI through Cloud Shell, I first use this quick
shortcut to open a session. Once my session is started, I type in aws, which is how we know we
interact with the API, then type in the service. In this case, it’s EC2, the service that allows us to create and manage virtual machines,
which we’ll learn about later. And then the command that we
want to perform in that service and any other configurations
we want to set. One command, versus the multiple screens you have to click through in the console, can help reduce accidental human error. But that also means you have
to work with defined syntax and get that syntax correct
in order for your command to run successfully. So there is some upfront
cost in just understanding how to form commands, but after a while, you can begin to script
these commands out, making them repeatable which can greatly improve
efficiency in the long run. The other tool that allows you to interact with the AWS API programmatically is the AWS Software
Development Kits or SDKs. SDKs are created and maintained by AWS for the most popular programming languages such as Python, Java,
Node.js, .NET, Ruby, and more. This comes in handy when
you want to integrate your application source
code with AWS services. For example our employee
directory application runs using Python and Flask. If I wanted to store all
of the employee photos including pictures of employees
in an AWS storage service, I could use the Python SDK to write code to interact with that AWS storage service. The ability to manage AWS services from a place where you can run source code with conditions, loops, arrays, lists, and other programming elements provides a lot of power and creativity. Alright, that wraps this video up. To recap, you have three main
options to connect with AWS, the Console, the CLI, and the SDKs. In this course we’ll mainly be using the console to interact with the services but feel free to challenge
yourself by using the CLI if you’re a bit more advanced.
Reading 1.4: Interacting with AWS
Every action you make in AWS is an API call that is authenticated and authorized. In AWS, you can make API calls to services and resources through the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS Software Development Kits (SDKs).
THE AWS MANAGEMENT CONSOLE
One way to manage cloud resources is through the web-based console, where you log in and click on the desired service. This can be the easiest way to create and manage resources when you first begin working with the cloud. Below is a screenshot that shows the landing page when you first log into the AWS Management Console.
The services are placed in categories, such as compute; database; storage; and security, identity, and compliance. In the upper-right corner is the Region selector. If you click it and change the Region, you will make requests to the services in the chosen Region. The URL changes, too. Changing the Region directs the browser to make requests to a whole different AWS Region, represented by a different subdomain.
THE AWS COMMAND LINE INTERFACE (CLI)
Consider the scenario where you run tens of servers on AWS for your application’s frontend. You want to run a report to collect data from all of these servers. You need to do this programmatically every day because the server details may change. Instead of manually logging into the AWS Management Console and copying/pasting information, you can schedule an AWS Command Line Interface (CLI) script with an API call to pull this data for you.

The AWS CLI is a unified tool to manage AWS services. With just one tool to download and configure, you control multiple AWS services from the command line and automate them with scripts. The AWS CLI is open source, and there are installers available for Windows, Linux, and macOS.

Here is an example of running an API call against a service using the AWS CLI:
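aws ec2 describe-instances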
You get this response:
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
and so on.
AWS SOFTWARE DEVELOPMENT KITS (SDKS)
API calls to AWS can also be performed by executing code with programming languages. You can do this by using AWS Software Development Kits (SDKs). SDKs are open source and maintained by AWS for the most popular programming languages, such as C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby.

Developers commonly use AWS SDKs to integrate their application source code with AWS services. Let’s say the frontend of the application runs in Python and every time it receives a cat photo, it uploads that photo to a storage service. This action can be achieved from within the source code by using the AWS SDK for Python.
Here is an example of code you can implement to work with AWS resources using the Python AWS SDK.
import boto3

# Create an EC2 client and print details about your instances.
ec2 = boto3.client('ec2')
response = ec2.describe_instances()
print(response)
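And here is a minimal sketch of the photo-upload scenario described above, again using the AWS SDK for Python; the file, bucket, and key names are hypothetical placeholders.

import boto3

# Create an S3 client; credentials come from your environment or AWS configuration.
s3 = boto3.client('s3')

# Upload a local photo to a bucket (all names below are hypothetical examples).
s3.upload_file('cat-photo.jpg', 'example-photo-bucket', 'photos/cat-photo.jpg')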
Security in the AWS Cloud
Video: Security and the AWS Shared Responsibility Model
AWS Security: A Collaborative Effort
In the cloud, security isn’t a solo act. Both you and AWS share responsibility for securing your environment. It’s like a building: AWS ensures the foundation is sturdy, but you lock your own doors within.
AWS secures the core:
- Global infrastructure (data centers, networks)
- Software components (databases, compute, storage)
- Underlying hardware and virtualization layers
You handle security “in the cloud”:
- Patching operating systems on your resources
- Encrypting data in transit and at rest
- Configuring firewalls and access controls (see the sketch after this list)
- Implementing compliance and industry standards
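Tasks like the firewall configuration above can be scripted. Here is a minimal sketch using the AWS SDK for Python (boto3) that opens HTTPS to a single address range on a security group, the virtual firewall for many AWS resources; the security group ID and CIDR range are hypothetical placeholders.

import boto3

ec2 = boto3.client('ec2')

# Allow inbound HTTPS (port 443) from one example address range only.
# The security group ID and CIDR below are hypothetical placeholders.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'IpRanges': [{'CidrIp': '203.0.113.0/24'}]
    }]
)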
This shared model lets you customize security for your specific needs. Each AWS service may have slightly different responsibilities, so remember to tailor your approach accordingly.
Remember, secure cloud solutions are built together!
This summary condenses the key points of the passage:
- Shared responsibility model for AWS security
- AWS responsibility: Core infrastructure and software
- Your responsibility: Security “in the cloud” for your resources
- Customization based on services and needs
It also uses a relatable analogy and emphasizes the collaborative nature of cloud security.
- In order to begin using AWS effectively, it’s important to understand how security works in the cloud. You already know that by using AWS, you won’t be managing every single aspect of hosting your solutions. You’ll rely on AWS to manage portions of your workloads for you taking care of that
undifferentiated heavy lifting, like running the day-to-day
operations of the data centers and managing the various
virtualization techniques employed to keep your AWS
account isolated from, say my AWS account. So the question is who
is ultimately responsible for security in AWS? Is it A, you the customer, or B, AWS? The answer? Well, the correct answer is yes. Both you and AWS are responsible for securing your AWS environment. Let’s explore this concept. AWS follows something called the shared responsibility model. We don’t view solutions built on AWS as one singular thing to be secured. We see it as a collection of
parts that build on each other. AWS is responsible for the
security of some aspects. The others, you are
responsible for their security. Together with both you and
AWS following best practices, you have an environment
that you can trust. Let’s take a look at the shared responsibility
model diagram. You can see we have the
responsibility of security broken into two groupings, you and AWS each being responsible
for different components. We describe AWS as being responsible for security of the cloud. For example, one piece
of the puzzle for AWS is the AWS global infrastructure. And when I say global infrastructure, I mean the physical infrastructure that the cloud is running on. This is iron and concrete
buildings with fences protected by security guards and various other security measures. It also includes the AWS global backbone or the private fiber cables that connect each AWS
region to each other. Managing the security of
these pieces is all on AWS. You don’t need to worry about
that as far as security goes. Then there are the infrastructure and various software components
that run AWS services. This includes compute, databases,
storage, and networking. AWS is responsible for
securing these services from the host operating system up through the virtualization layer. For example, let’s say you want to host some virtual machines or VMs on the cloud. We primarily use the service
Amazon EC2 for this use case. When you create a VM using EC2, AWS manages the physical
host the VM is placed on as well as everything
through the hypervisor level. If the host operating
system or the hypervisor needs to be patched or updated, that is the responsibility of AWS. This is good news for you as the customer, as it greatly reduces
the operational overhead in running a scalable and elastic solution leveraging virtualization. We will talk more about
EC2 and elastic solutions in upcoming lessons. For now let’s get back
to the security aspect. So if AWS manages the underlying hardware up through the virtualization layer then what are you responsible for? Well, you are responsible
for security in the cloud. Similar to how a construction
company builds a building and it’s on them to make sure that the building itself
is stable and secure then you can rent out an
apartment in that building. It’s up to you to lock the
door to your apartment. Security of the building
and security in the building are two different elements. For security in the cloud, the
base layer is secured by AWS. It’s up to you to lock the door. So for our EC2 example, you
are responsible for tasks like patching the operating
systems of your VMs, encrypting data in transit and at rest, configuring firewalls and controlling who has access to these resources and how much access they have. The main thing to understand is that you own your data in AWS. You are ultimately
responsible for ensuring that your data is encrypted, secure and has proper access controls in place. In many cases, AWS services
offer native features you can enable to achieve
a secure solution. It’s up to you to actually use them. In other cases you may
devise your own solutions to meet compliance and security standards for your specific industry or use case. So that’s the shared responsibility
model at a high level. I do want you to keep
something in mind, though. There is some amount of
nuance you should understand as we move through the course regarding the shared responsibility model. Each AWS service is different
and serves a different purpose and a different use case. Therefore, the shared responsibility model can vary from service to service as well. This is a good thing as you get to decide how to build your solutions on AWS.
Reading 1.5: Security and the AWS Shared Responsibility Model
When you begin working with the AWS Cloud, managing security and compliance is a shared responsibility between AWS and you. To depict this shared responsibility, AWS created the shared responsibility model. This distinction of responsibility is commonly referred to as security of the cloud, versus security in the cloud.
WHAT IS AWS RESPONSIBLE FOR?
AWS is responsible for security of the cloud. This means AWS is required to protect and secure the infrastructure that runs all the services offered in the AWS Cloud. AWS is responsible for:
- Protecting and securing AWS Regions, Availability Zones, and data centers, down to the physical security of the buildings
- Managing the hardware, software, and networking components that run AWS services, such as the physical server, host operating systems, virtualization layers, and AWS networking components
The level of responsibility AWS has depends on the service. AWS classifies services into three different categories. The following table provides information about each, as well as the AWS responsibility.
Category | Examples of AWS Services in the Category | AWS Responsibility |
---|---|---|
Infrastructure services | Compute services, such as Amazon Elastic Compute Cloud (Amazon EC2) | AWS manages the underlying infrastructure and foundation services. |
Container services | Services that require less management from the customer, such as Amazon Relational Database Service (Amazon RDS) | AWS manages the underlying infrastructure and foundation services, operating system, and application platform. |
Abstracted services | Services that require very little management from the customer, such as Amazon Simple Storage Service (Amazon S3) | AWS operates the infrastructure layer, operating system, and platforms, as well as server-side encryption and data protection. |
Note
Container services refer to AWS abstracting application containers behind the scenes, not Docker container services. This enables AWS to move the responsibility of managing that platform away from customers.
WHAT IS THE CUSTOMER RESPONSIBLE FOR?
You’re responsible for security in the cloud. When using any AWS service, you’re responsible for properly configuring the service and your applications, as well as ensuring your data is secure.

The level of responsibility you have depends on the AWS service. Some services require you to perform all the necessary security configuration and management tasks, while other more abstracted services require you to only manage the data and control access to your resources. Using the three categories of AWS services, you can determine your level of responsibility for each AWS service you use.
Category | AWS Responsibility | Customer Responsibility |
---|---|---|
Infrastructure services | AWS manages the infrastructure and foundation services. | You control the operating system and application platform, as well as encrypting, protecting, and managing customer data. |
Container services | AWS manages the infrastructure and foundation services, operating system, and application platform. | You are responsible for customer data, encrypting that data, and protecting it through network firewalls and backups. |
Abstracted services | AWS operates the infrastructure layer, operating system, and platforms, as well as server-side encryption and data protection. | You are responsible for managing customer data and protecting it through client-side encryption. |
Due to the varying level of effort, it’s important to consider which AWS service you use and review the level of responsibility required to secure the service. It’s also important to review how the shared security model aligns with the security standards in your IT environment, as well as any applicable laws and regulations.

It’s important to note that you maintain complete control of your data and are responsible for managing the security related to your content. Here are some examples of your responsibilities in context.
- Choosing a Region for AWS resources in accordance with data sovereignty regulations
- Implementing data protection mechanisms, such as encryption and managing backups (see the sketch after this list)
- Using access control to limit who has access to your data and AWS resources
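As a concrete illustration of the encryption point above, here is a minimal sketch using the AWS SDK for Python (boto3) that uploads an object with server-side encryption enabled; the bucket, key, and file names are hypothetical placeholders.

import boto3

s3 = boto3.client('s3')

# Ask S3 to encrypt this object at rest with S3-managed keys (SSE-S3).
with open('employee-data.csv', 'rb') as data:  # hypothetical local file
    s3.put_object(
        Bucket='example-bucket',        # hypothetical bucket name
        Key='employee-data.csv',        # hypothetical object key
        Body=data,
        ServerSideEncryption='AES256'
    )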
Video: Protect the AWS Root User
Protect your AWS kingdom: Avoid root user peril with MFA!
Creating an AWS account involves an email and password, granting the root user unlimited access. This root user is like a king with immense power, vulnerable to nefarious actors who could delete your data or spin up a costly crypto-mining operation.
To defend your AWS kingdom:
- Enable multi-factor authentication (MFA): This adds an extra layer of security beyond just a password. Think of it as a royal guard verifying your identity. Choose a virtual or physical device that generates unique one-time passwords, like an app on your phone. Even if someone steals your password, they’ll need your phone’s one-time code to get past the guard.
- Don’t use the root user for daily tasks: Treat it like the king’s crown, reserved for rare, critical actions. Instead, create an IAM user for everyday tasks, limiting its power just like assigning different roles to court officials.
By enabling MFA and using IAM users, you build a secure castle protecting your precious AWS data and resources. Remember, with great power comes great responsibility, so secure your root user like a vigilant king!
This summary condenses the key points:
- Root user has immense power and needs extra security.
- Enable MFA for an extra layer of protection beyond just password.
- Don’t use root user for daily tasks, create IAM users with limited permissions.
It uses an engaging analogy and emphasizes the importance of strong security practices.
- When you create an AWS account, you sign up with an email address and you create a password. The email address you sign up with becomes the root user of the AWS account. This root user has unrestricted access to everything in your
account in most cases. We will discuss in a later lesson what types of users can be restricted and how to do that. But for now, I want you to understand that when you log into your AWS account using an email address and password, it means you are logging
in as the root user. This root user can do whatever
they want in the account. It has all of the powers that can be had. And with great power comes
great responsibility, or something like that. So knowing that the root
user is all powerful, you should do everything you can do to make sure that no one can
gain access to this root user. Let’s say there’s a
nefarious actor of sorts, and they have an evil plan to log into your AWS
account with your root user, gaining access to all of
the powers and permissions. And they go and delete
all of your valuable data and AWS resources, and in their place, they spin up a lovely and expensive cryptocurrency
mining operation. Leaving you with the bill
and none of your data as they walk away with
a full crypto wallet. Sounds less than ideal. Well, how can you prevent
that from happening? You could, of course, create
a hard-to-crack password, and that will give you
some level of security. This, however, is an example of single factor authentication where all someone needs to
do is match the password with the email address, and boom, they’re in. We recommend as a best practice that right after you
create your AWS account, you enable multi-factor authentication, or MFA, on the root user. MFA introduces an additional
unique piece of information that you need to enter to
gain access to the account. There are a variety of
devices, virtual or physical, that generate one-time passwords that can be integrated with
your AWS account for MFA. For example, I personally
use a virtual MFA device that is an app on my phone. This app produces a string
of numbers for one time use that I type into the console after I log in using my
email address and password. Even if someone guessed the password, they cannot gain access to the account without the numbers generated
by the virtual MFA device. No matter what type of MFA
device that you choose to use, and I will include a link
to the supported devices in the readings for you to look into, the most important thing
is that you are using MFA on the root user. That way, even if someone,
the nefarious actor, cracks your password, they still cannot gain
access to your account. All thanks to MFA. On top of enabling MFA for the root user, we strongly recommend that
you do not use the root user for your everyday tasks, even the administrative ones. There are really only a few actions that require root user access. Coming up, you’ll learn
how to create an IAM user, and use that to log into your AWS account instead of using the root user.
Reading 1.6: Protect the AWS Root User
What’s the Big Deal About Auth?
When you’re configuring access to any account, two terms come up frequently: authentication and authorization. Though these terms may seem basic, you need to understand them to properly configure access management on AWS. It’s important to keep this in mind as you progress in this course. Let’s define both terms.
Understand Authentication
When you create your AWS account, you use a combination of an email address and a password to verify your identity. If the user types in the correct email and password, the system assumes the user is allowed to enter and grants them access. This is the process of authentication. Authentication ensures that the user is who they say they are. Usernames and passwords are the most common types of authentication, but you may also work with other forms, such as token-based authentication or biometric data like a fingerprint. Authentication simply answers the question, “Are you who you say you are?”
Understand Authorization
Once you’re inside your AWS account, you might be curious about what actions you can take. This is where authorization comes in. Authorization is the process of giving users permission to access AWS resources and services. Authorization determines whether the user can perform an action—whether it be to read, edit, delete, or create resources. Authorization answers the question, “What actions can you perform?”
What Is the AWS Root User?
When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS root user and is accessed by signing in with the email address and password that you used to create the account.
Understand the AWS Root User Credentials
The AWS root user has two sets of credentials associated with it. One set of credentials is the email address and password used to create the account. This allows you to access the AWS Management Console. The second set of credentials is called access keys, which allow you to make programmatic requests from the AWS Command Line Interface (AWS CLI) or AWS API. Access keys consist of two parts:
- An access key ID, for example, A2lAl5EXAMPLE
- A secret access key, for example, wJalrFE/KbEKxE
Similar to a username and password combination, you need both the access key ID and secret access key to authenticate your requests via the AWS CLI or AWS API. Access keys should be managed with the same security as an email address and password.
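As a minimal sketch of that idea, the AWS CLI stores these credentials through aws configure and then signs every request with them; the values shown reuse the placeholder keys from above, and aws sts get-caller-identity simply reports which identity the keys authenticate as.
aws configure
# AWS Access Key ID [None]: A2lAl5EXAMPLE
# AWS Secret Access Key [None]: wJalrFE/KbEKxE
# Default region name [None]: us-east-1
# Default output format [None]: json

# Confirm that requests are now signed as the expected identity.
aws sts get-caller-identity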
Follow Best Practices When Working with the AWS Root User
Keep in mind that the root user has complete access to all AWS services and resources in your account, as well as your billing and personal information. Due to this, securely lock away the credentials associated with the root user and do not use the root user for everyday tasks. To ensure the safety of the root user:
- Choose a strong password for the root user.
- Never share your root user password or access keys with anyone.
- Disable or delete the access keys associated with the root user.
- Do not use the root user for administrative tasks or everyday tasks.
When is it OK to use the AWS root user? There are some tasks where it makes sense to use the AWS root user. Check out the links at the end of this section to read about them.
Delete Your Keys to Stay Safe
If you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. If you do have an access key for your AWS account root user and want to delete the keys (a CLI sketch of the same cleanup follows the steps):
- Go to the My Security Credentials page in the AWS Management Console and sign in with the root user’s email address and password.
- Open the Access keys section.
- Under Actions, click Delete.
- Click Yes.
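A sketch of that cleanup from the command line follows; it assumes the AWS CLI is temporarily configured with the root user’s keys (the very keys being removed), and the key ID is a placeholder.
# List the access keys attached to the calling identity (here, the root user).
aws iam list-access-keys

# Delete a key by its access key ID (placeholder value).
aws iam delete-access-key --access-key-id A2lAl5EXAMPLE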
The Case for Multi-Factor Authentication
When you create an AWS account and first log in to that account, you use single-factor authentication. Single-factor authentication is the simplest and most common form of authentication. It only requires one authentication method. In this case, you use a username and password to authenticate as the AWS root user. Other forms of single-factor authentication include a security pin or a security token. However, sometimes a user’s password is easy to guess.
For example, your coworker Bob’s password, IloveCats222, might be easy for someone who knows Bob personally to guess, because it’s a combination of information that is easy to remember and describes certain things about Bob (1. Bob loves cats, and 2. Bob’s birthday is February 22).
If a bad actor guessed or cracked Bob’s password through social engineering, bots, or scripts, Bob might lose control of his account. Unfortunately, this is a common scenario that users of any website often face.
This is why using MFA has become so important in preventing unwanted account access. MFA requires two or more authentication methods to verify an identity, pulling from three different categories of information.
- Something you know, such as a username and password, or pin number
- Something you have, such as a one-time passcode from a hardware device or mobile app
- Something you are, such as fingerprint or face scanning technology
Using a combination of this information enables systems to provide a layered approach to account access. Even though the first method of authentication, Bob’s password, was cracked by a malicious user, it’s very unlikely that a second method of authentication, such as a fingerprint, would also be cracked. This extra layer of security is needed when protecting your most sacred accounts, which is why it’s important to enable MFA on your AWS root user.
Use MFA on AWS
If you enable MFA on your root user, you are required to present a piece of identifying information from both the something you know category and the something you have category. The first piece of identifying information the user enters is an email and password combination. The second piece of information is a temporary numeric code provided by an MFA device. Enabling MFA adds an additional layer of security because it requires users to use a supported MFA mechanism in addition to their regular sign-in credentials. It’s best practice to enable MFA on the root user.
Review Supported MFA Devices
AWS supports a variety of MFA mechanisms, such as virtual MFA devices, hardware devices, and Universal 2nd Factor (U2F) security keys. For instructions on how to set up each method, check out the Resources section; a CLI sketch for setting up a virtual MFA device also follows the table.
Device | Description | Supported Devices |
Virtual MFA | A software app that runs on a phone or other device that provides a one-time passcode. Keep in mind that these applications can run on unsecured mobile devices, and because of that, may not provide the same level of security as hardware or U2F devices. | Authy, Duo Mobile, LastPass Authenticator, Microsoft Authenticator, Google Authenticator |
Hardware | A hardware device, generally a key fob or display card device that generates a one-time six-digit numeric code | Key fob, display card |
U2F | A hardware device that you plug into a USB port on your computer | YubiKey |
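For reference, here is a hedged CLI sketch of setting up a virtual MFA device; the device name, user name, account ID, and one-time codes are all placeholders, and the console instructions linked in the Resources section remain the authoritative steps.
# Create a virtual MFA device and save the QR code to scan with an authenticator app.
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name MyVirtualMFA \
    --outfile qrcode.png \
    --bootstrap-method QRCodePNG

# Enable the device for an IAM user by entering two consecutive one-time codes.
aws iam enable-mfa-device \
    --user-name Bob \
    --serial-number arn:aws:iam::123456789012:mfa/MyVirtualMFA \
    --authentication-code1 123456 \
    --authentication-code2 789012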
Resources
- External Site: AWS: Enabling a Hardware MFA Device (Console)
- External Site: AWS: Enabling a U2F Security Key (Console)
- External Site: AWS: Enabling a Virtual Multi-Factor Authentication (MFA) Device (Console)
- External Site: AWS: Table of Supported MFA Devices
- External Site: Tasks that require the use of root user credentials
AWS Identity and Access Management
Video: Introduction to AWS Identity and Access Management
This passage discusses access control and credential management in the context of building an application on AWS. Three key takeaways are:
- Multiple needs for access control: The application requires access control for user login, application code accessing S3, and managing resources within the AWS account.
- AWS IAM for access and credential management: AWS IAM helps manage login credentials and permissions to the AWS account, as well as credentials for signing API calls to AWS services. It doesn’t handle application-level access control.
- IAM users, groups, and policies: IAM users have unique credentials for logging in. Policies grant or deny permissions to specific actions (API calls) within the account. Groups can be used to manage permissions for multiple users.
The passage also recommends best practices like using IAM users with admin permissions instead of the root user, and setting up MFA for the root user. Finally, it introduces the concept of IAM roles for temporary access, which will be covered in the next part.
- Let’s take a look at the application we are going to be building
out throughout the course. We’ve already gone over the
design of this application and what I want to focus
on now is access control. There are multiple places on this diagram where we can identify the need for access control and
credential management. The first being that we need
to manage how users log in and use the employee
directory application. We could require that people
have a valid credential like a username and password
to log into the app. That is access management
on the application level. Then there is the fact
that we know the code running the employee directory application on a virtual machine being hosted by the service Amazon EC2 and that code will need to make API calls to the object storage service Amazon S3 in order to read and write data like images for the employees. Well, here’s the thing. Just because both Amazon EC2 and Amazon S3 have existing resources in this account, it doesn’t mean that the API calls made from the code running
on the EC2 instance to S3 are automatically allowed to be made. In fact, all API calls in
AWS must be both signed and authenticated in order to be allowed, no matter if the resources live
in the same account or not. The application code running
on the Amazon EC2 instance needs access to credentials to make this signed API call to Amazon S3. So that’s another place
with a need for a credential and access management. Now let’s take this a step further. How are you going to build
out this architecture? Well, you’ll need access to an AWS account through the use of a login. Your identity within this AWS account will need permissions
to be able to do things like create this network,
launch the EC2 instances and create the resources that will host and run the solution in AWS. Yet another place you need credentials. The root user which you
have already learned about in a previous lesson
has these permissions, but you don’t want to
be using the root user to administer your AWS resources. And let’s assume you won’t
be the only one working on or building out this application. It’s more likely that
within one AWS account there would be many people
who need access to build and support your solutions. You’ll have different
groups of people responsible for different parts of the architecture. The people who would
write and deploy the code might be software developers, whereas the people who
would be responsible for making changes to say the network would be a different group of people. You wouldn’t and shouldn’t give everyone who needs access to the AWS account, the root user credentials to log in. You instead would have unique credentials for each person logging in. This is where the service AWS identity and access management comes in. We identified three places
where we will need access and credential management. AWS identity and access management or IAM can help take care of these
two spots on the diagram. AWS IAM manages the login credentials and permissions to the AWS account and it also can manage the credentials used to sign API calls
made to AWS services. IAM would not, however, be responsible for application level access management. The code running on
this instance would use separate appropriate mechanisms
for authenticating users into the application itself, not IAM. All right, so let’s start
with the AWS account level. IAM allows you to create users and each individual
person who needs access to your AWS account would have
their own unique IAM user. Creating users for everyone who
needs access to the account, takes care of authentication. Authentication being verifying if someone is who they say they are because they had the proper
credentials to log in. Now it’s time to introduce authorization. Authorization is this. Let’s say you’ve logged in and
you are who you say you are. You’ve been authenticated. Now you want to create resources
and manage AWS resources like create an Amazon
EC2 instance for example. Sure, you’ve logged in but do you have the correct permissions to be able to complete that action? The idea that your permissions control what you can or cannot
do is authorization. Are you authorized to
launch an EC2 instance? IAM users take care of authentication and you can take care of authorization by attaching IAM policies to users in order to grant or deny
permission to specific actions within an AWS account. Keep in mind when I say action here, I’m referring to an AWS API call. Everything in AWS is an API call. IAM policies are JSON-based documents. Let’s take a look at an example. This IAM policy document
contains permissions that allow the identity
to which it’s attached to perform any EC2-related action. The structure of an IAM
policy has an Effect which is either allow or deny. And Action which is the AWS API call, in this case, we have ec2:* which includes all EC2-related actions. You can restrict this to
be specific API calls. For example, I can restrict this action to be just run instances and then any user with
this policy attached would only be allowed to run EC2 instances but perform no other EC2-related tasks. IAM lets you get very granular with your permissions in that way. Continuing with this
example, we see the resource which allows you to
restrict which AWS resources the actions are allowed
to be performed against. You can also include
conditions in your policies that can further restrict actions. IAM policies can also
be attached to groups. IAM groups are very simply
just groupings of IAM users. You can attach a policy to
a specific user or a group. When you attach a policy to a group, any users that are a part of that group would inherit the permissions. We recommend that as a best practice you organize users into groups and assign permissions to groups instead of individual
users where possible. This makes it easier to manage
when people change job roles or multiple users need
permissions applied or revoked. Another best practice to follow is that we recommend when
you create your AWS account, you set up MFA for the root user. Then create an IAM user
with admin permissions. Log out of the root user and then log in with the IAM user that you just created. From there, you can
use this user to create the rest of the IAM
groups users and policies. The reason we suggest you do this is because you cannot apply
a policy to the root user but you can to an IAM user. Now that I’ve told you about
IAM users, groups and policies, we’ve addressed this
part of access management that we needed for our application but what about this part? The EC2 instance needs
credentials to be able to make the signed API call to S3 for reading and writing employee images. Am I suggesting that you make an IAM user with a username and
password for the application running on EC2 to use? No. No, I am not. This is where role-based
access comes into the picture. Coming up we will learn
about the temporary access that IAM roles provide and how it can apply
to this use case here.
Reading 1.7: Introduction to AWS Identity and Access Management
WHAT IS IAM?
IAM is a web service that enables you to manage access to your AWS account and resources. It also provides a centralized view of who and what are allowed inside your AWS account (authentication), and who and what have permissions to use and work with your AWS resources (authorization). With IAM, you can share access to an AWS account and resources without having to share your set of access keys or password. You can also provide granular access to those working in your account, so that people and services only have permissions to the resources they need. For example, to provide a user of your AWS account with read-only access to a particular AWS service, you can granularly select which actions and which resources in that service they can access.
GET TO KNOW THE IAM FEATURES
To help control access and manage identities within your AWS account, IAM offers many features to ensure security.
- IAM is global and not specific to any one Region. This means you can see and use your IAM configurations from any Region in the AWS Management Console.
- IAM is integrated with many AWS services by default.
- You can establish password policies in IAM to specify complexity requirements and mandatory rotation periods for users (see the sketch after this list).
- IAM supports MFA.
- IAM supports identity federation, which allows users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to get temporary access to your AWS account.
- Any AWS customer can use IAM; the service is offered at no additional charge.
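As a small example of the password policy feature mentioned above, the sketch below sets account-wide complexity and rotation rules; the specific values are illustrative only.
# Require 14-character passwords with numbers and symbols, rotated every 90 days.
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-numbers \
    --require-symbols \
    --max-password-age 90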
WHAT IS AN IAM USER?
An IAM user represents a person or service that interacts with AWS. You define the user within your AWS account. And any activity done by that user is billed to your account. Once you create a user, that user can sign in to gain access to the AWS resources inside your account. You can also add more users to your account as needed. For example, for your cat photo application, you could create individual users in your AWS account that correspond to the people who are working on your application. Each person should have their own login credentials. Providing users with their own login credentials prevents sharing of credentials.
IAM USER CREDENTIALS
An IAM user consists of a name and a set of credentials. When creating a user, you can choose to provide the user:
- Access to the AWS Management Console
- Programmatic access to the AWS Command Line Interface (AWS CLI) and AWS Application Programming Interface (AWS API)
To access the AWS Management Console, provide the users with a user name and password. For programmatic access, AWS generates a set of access keys that can be used with the AWS CLI and AWS API. IAM user credentials are considered permanent, in that they stay with the user until there’s a forced rotation by admins. When you create an IAM user, you have the option to grant permissions directly at the user level. This can seem like a good idea if you have only one or a few users. However, as the number of users helping you build your solutions on AWS increases, it becomes more complicated to keep up with permissions. For example, if you have 3,000 users in your AWS account, administering access becomes challenging, and it’s impossible to get a top-level view of who can perform what actions on which resources. If only there were a way to group IAM users and attach permissions at the group level instead. Guess what: There is!
WHAT IS AN IAM GROUP?
An IAM group is a collection of users. All users in the group inherit the permissions assigned to the group. This makes it easy to give permissions to multiple users at once. It’s a more convenient and scalable way of managing permissions for users in your AWS account. This is why using IAM groups is a best practice. If you have an application that you’re trying to build and have multiple users in one account working on the application, you might decide to organize these users by job function. You might want IAM groups organized by developers, security, and admins. You would then place all of your IAM users in the respective group for their job function. This provides a better view to see who has what permissions within your organization and an easier way to scale as new people join, leave, and change roles in your organization. Consider the following examples; a CLI sketch of this workflow follows the feature list below.
- A new developer joins your AWS account to help with your application. You simply create a new user and add them to the developer group, without having to think about which permissions they need.
- A developer changes jobs and becomes a security engineer. Instead of editing the user’s permissions directly, you can instead remove them from the old group and add them to the new group that already has the correct level of access.
Keep in mind the following features of groups.
- Groups can have many users.
- Users can belong to many groups.
- Groups cannot belong to groups.
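Here is a minimal CLI sketch of the group workflow described in the examples above; the group and user names are placeholders, and PowerUserAccess is an AWS managed policy chosen purely for illustration.
# Create a group and attach a managed policy to it.
aws iam create-group --group-name Developers
aws iam attach-group-policy \
    --group-name Developers \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess

# A new developer joins: create the user and add them to the group.
aws iam create-user --user-name new-developer
aws iam add-user-to-group --user-name new-developer --group-name Developers

# The developer becomes a security engineer: move them between groups
# instead of editing their permissions directly.
aws iam remove-user-from-group --user-name new-developer --group-name Developers
aws iam add-user-to-group --user-name new-developer --group-name SecurityEngineers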
The root user can perform all actions on all resources inside an AWS account by default. This is in contrast to creating new IAM users, new groups, or new roles. New IAM identities can perform no actions inside your AWS account by default until you explicitly grant them permission. The way you grant permissions in IAM is by using IAM policies.
WHAT IS AN IAM POLICY?
To manage access and provide permissions to AWS services and resources, you create IAM policies and attach them to IAM users, groups, and roles. Whenever a user or role makes a request, AWS evaluates the policies associated with them. For example, if you have a developer inside the developers group who makes a request to an AWS service, AWS evaluates any policies attached to the developers group and any policies attached to the developer user to determine if the request should be allowed or denied.
IAM POLICY EXAMPLES
Most policies are stored in AWS as JSON documents with several policy elements. Take a look at the following example of what providing admin access through an IAM identity-based policy looks like.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}]
}
In this policy, there are four major JSON elements: Version, Effect, Action, and Resource.
- The Version element defines the version of the policy language. It specifies the language syntax rules that are needed by AWS to process a policy. To use all the available policy features, include “Version”: “2012-10-17” before the “Statement” element in all your policies.
- The Effect element specifies whether the statement will allow or deny access. In this policy, the Effect is “Allow”, which means you’re providing access to a particular resource.
- The Action element describes the type of action that should be allowed or denied. In the above policy, the action is “*”. This is called a wildcard, and it is used to symbolize every action inside your AWS account.
- The Resource element specifies the object or objects that the policy statement covers. In the policy example above, the resource is also the wildcard “*”. This represents all resources inside your AWS account.
Putting all this information together, you have a policy that allows you to perform all actions on all resources inside your AWS account. This is what we refer to as an administrator policy.
Let’s look at another example of a more granular IAM policy.
{"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"iam: ChangePassword",
"iam: GetUser"
]
"Resource":
"arn:aws:iam::123456789012:user/${aws:username}"
}]
}
After looking at the JSON, you can see that this policy allows the IAM user to change their own IAM password (iam:ChangePassword) and get information about their own user (iam:GetUser). It only permits them to access their own credentials because the resource restricts access with the variable substitution ${aws:username}.
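If you wanted to turn that JSON into a reusable policy, a minimal sketch looks like the following; the file name, policy name, user name, and account ID are placeholders.
# Create a customer managed policy from the JSON document above, saved locally.
aws iam create-policy \
    --policy-name SelfManageCredentials \
    --policy-document file://self-manage-credentials.json

# Attach it directly to a user (attaching to a group is the recommended pattern).
aws iam attach-user-policy \
    --user-name Bob \
    --policy-arn arn:aws:iam::123456789012:policy/SelfManageCredentials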
UNDERSTAND POLICY STRUCTURE
When creating a policy, it is required to have each of the following elements inside a policy statement.
Element | Description | Required | Example |
---|---|---|---|
Effect | Specifies whether the statement results in an allow or an explicit deny | ✔ | “Effect”: “Deny” |
Action | Describes the specific actions that will be allowed or denied | ✔ | “Action”: “iam:CreateUser” |
Resource | Specifies the object or objects that the statement covers | ✔ | “Resource”: “arn:aws:iam::account-ID-without-hyphens:user/Bob” |
Video: Role Based Access in AWS
Summary of IAM Roles:
- IAM roles provide temporary access credentials for AWS identities like applications or services.
- Roles differ from users in having no login credentials (username/password) and temporary, expiring credentials.
- Roles are assumed programmatically, enabling secure access for applications without embedding static credentials.
- We created an IAM role for the EC2 instance used in the employee directory application.
- This role provides “S3 full access” and “AmazonDynamoDBFullAccess” permissions for the app’s operations.
- Roles are commonly used for access between AWS services or for external identities (federated users) to access AWS.
- AWS services like IAM Identity Center can simplify federated user access through roles.
- All right, so you’ve learned about IAM users, groups, and policies. Policies can be applied to AWS identities like users and groups
to assign permissions. They also, however, can be
applied to another AWS identity, IAM roles. An IAM role is an identity that can be assumed by
someone or something who needs temporary
access to AWS credentials. Let’s dive into what I mean
when I say AWS credentials. Most AWS API calls that are made must be signed and authenticated, but how does that process work? When you send an HTTP request to AWS, you must sign the request. This signing process
happens programmatically and allows AWS to verify your identity when a request is made and run through various security processes to ensure a request is legit. IAM users have associated credentials like an access key ID
and secret access key that are used to sign requests. However, with regards to our architecture, the code running on the EC2 instance needs to sign the request sent to S3. I already told you that I don’t intend for you to create an IAM user with credentials to be used by this app, so how will the application gain access to the needed AWS access key ID and AWS secret access
key to sign the API call? The answer is IAM roles. IAM roles are identities in
AWS that like an IAM user also have associated AWS
credentials used to sign requests. However, IAM users have
usernames and passwords as well as static credentials whereas IAM roles do not
have any login credentials like a username and password and the credentials used to sign requests are programmatically
acquired, temporary in nature, and automatically rotated. In our example the EC2
instance will be assigned an IAM role. This role can then be
assumed by the application running on the virtual
machine to gain access to its temporary credentials to sign
the AWS API calls being made. A role can be assumed by
many different identities and they have many use cases. The important thing to know about roles is that the credentials
they provide expire and roles are assumed programmatically. To get an idea of this, let’s create a role using the AWS console. I’m already logged in and will navigate to the IAM service. Now I want to create the
role the EC2 instance is going to use for the
employee directory application. Now, we will click Roles
in the left-hand side and select Create role. We will then select the trusted entity this role is intended to be used for, and in our case this will be EC2, then we will click Next. Now we get to select the
permissions assigned to this role. Again, permissions being what actions does this identity have
the authority to take? We want this identity to be able to read and write to Amazon S3,
so I will search for S3 and you can see there
are multiple options here for pre-written policies
that I can choose from. I also can write my own custom policy which would also show up here
if I created it ahead of time. I’m going to select S3
full access for the policy. In the real world you would choose a more restrictive IAM policy for this but for a proof of concept like this course is intended to provide, we will leave the permissions
a bit looser for now. You would come back and
change the permissions attached to this role to be more granular if this were ever to make it to a production type environment. Now we also need to add
another permission here, and that’s for DynamoDB. So I will exit out of the S3 filter and then type in DynamoDB and hit Enter. And then we will select the AmazonDynamoDBFullAccess permission. Then we will click Next. Then we can give this role a name, which is EmployeeWebAppRole. Then we can scroll down and
we can click Create role. We can see that this role has
now been created successfully and if we click on the
role and scroll down, we can then see that this role
has two permissions attached. It’s very common for roles to be used for access between AWS services. Just because two resources
exist in the same AWS account, it doesn’t mean that they can send any API calls to each other. If one AWS service needs
to send an API call to another AWS service,
it would most likely use role-based access where the
AWS service assumes a role, gains access to temporary credentials and then sends the API call
to the other AWS service, which then verifies the request. Another identity that
can assume an IAM role to gain access to AWS is
external identity providers. For example, let’s say you have a business that employs 5,000 technical employees that all need access to AWS accounts. You already have an identity
provider system in place that allows these employees
to log into their laptops and gain access to various
corporate resources. Should you also now go
create 5,000 IAM users for these employees to access AWS? Well, that doesn’t sound very efficient. You instead can leverage IAM roles to grant access to existing identities from your enterprise user directory. These are known as federated users. AWS assigns a role to a federated user when access is requested
through an identity provider. We also have AWS services that
can make this process easier such as AWS IAM Identity Center. These are a couple of examples
of role-based access in AWS and this is just the introduction. Check out the class readings
for more information and don’t worry if you don’t
quite grasp this concept yet as we will continue using
roles in our demos and examples throughout this course.
Reading 1.8: Role Based Access in AWS
Throughout these last few lessons, there have been sprinklings of IAM best practices. It’s helpful to have a brief summary of some of the most important IAM best practices you need to be familiar with before building out solutions on AWS.
LOCK DOWN THE AWS ROOT USER
The root user is an all-powerful and all-knowing identity within your AWS account. If a malicious user were to gain control of root-user credentials, they would be able to access every resource within your account, including personal and billing information. To lock down the root user:
- Don’t share the credentials associated with the root user.
- Consider deleting the root user access keys.
- Enable MFA on the root account.
FOLLOW THE PRINCIPLE OF LEAST PRIVILEGE
Least privilege is a standard security principle that advises you to grant only the necessary permissions to do a particular job and nothing more. To implement least privilege for access control, start with the minimum set of permissions in an IAM policy and then grant additional permissions as necessary for a user, group, or role.
USE IAM APPROPRIATELY
IAM is used to secure access to your AWS account and resources. It simply provides a way to create and manage users, groups, and roles to access resources within a single AWS account. IAM is not used for website authentication and authorization, such as providing users of a website with sign-in and sign-up functionality. IAM also does not support security controls for protecting operating systems and networks.
USE IAM ROLES WHEN POSSIBLE
Maintaining roles is easier than maintaining users. When you assume a role, IAM dynamically provides temporary credentials that expire after a defined period of time, between 15 minutes and 36 hours. Users, on the other hand, have long-term credentials in the form of user name and password combinations or a set of access keys. User access keys only expire when you or the admin of your account rotates these keys. User login credentials expire if you have applied a password policy to your account that forces users to rotate their passwords.
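To see those temporary credentials in action, here is a hedged sketch of requesting them from AWS STS; it assumes the role’s trust policy allows your identity to assume it, and the role ARN is a placeholder. The response contains an access key ID, secret access key, session token, and an Expiration timestamp.
# Ask STS for short-lived credentials for a role (a 15-minute session shown here).
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/EmployeeWebAppRole \
    --role-session-name demo-session \
    --duration-seconds 900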
CONSIDER USING AN IDENTITY PROVIDER
If you decide to make your cat photo application into a business and begin to have more than a handful of people working on it, consider managing employee identity information through an identity provider (IdP). Using an IdP, whether it be an AWS service such as AWS IAM Identity Center (Successor to AWS Single Sign-On) or a third-party identity provider, provides you a single source of truth for all identities in your organization. You no longer have to create separate IAM users in AWS. You can instead use IAM roles to provide permissions to identities that are federated from your IdP. For example, you have an employee, Martha, who has access to multiple AWS accounts. Instead of creating and managing multiple IAM users named Martha in each of those AWS accounts, you can manage Martha in your company’s IdP. If Martha moves within the company or leaves the company, Martha can be updated in the IdP, rather than in every AWS account you have.
CONSIDER AWS IAM IDENTITY CENTER
If you have an organization that spans many employees and multiple AWS accounts, you may want your employees to sign in with a single credential. AWS IAM Identity Center is an IdP that lets your users sign in to a user portal with a single set of credentials. It then provides them access to all their assigned accounts and applications in one central location. AWS IAM Identity Center is similar to IAM, in that it offers a directory where you can create users, organize them in groups, set permissions across those groups, and grant access to AWS resources. However, AWS IAM Identity Center has some advantages over IAM. For example, if you’re using a third-party IdP, you can sync your users and groups to AWS IAM Identity Center. This removes the burden of having to re-create users that already exist elsewhere, and it enables you to manage those users from your IdP. More importantly, AWS IAM Identity Center separates the duties between your IdP and AWS, ensuring that your cloud access management is not inside or dependent on your IdP.
Week 1 Exercise and Assessment
Video: Introduction to Lab 1
The hands-on lab focuses on applying AWS IAM best practices for securing accounts. You’ll explore preloaded IAM resources like groups, users, roles, and policies. You’ll learn to manage users and groups by adding users to groups and allowing them to inherit group permissions. You’ll also explore how permissions work with AWS services using different IAM users. This lab prepares you for future labs that build on these concepts. Have fun!
- It’s time for our hands-on lab. At this point, you’ve learned about some of the best practices for securing AWS accounts using AWS IAM. So let’s go ahead and put those best
practices into practice. In this lab, you will
access the IAM dashboard and explore existing groups,
users, roles, and policies that will be preloaded into
the exercise environment. You will learn how to
manage users and groups by performing tasks like
adding users to groups and allowing users to inherit
specific group permissions. You also will learn about
how permissions work with AWS by exploring different AWS services using different IAM users. That’s all for this lab. Throughout the course,
there will be more labs that will loosely follow along with what we do in the videos. Have fun, and see you in a bit.
Video: Demo AWS IAM
Summary of IAM Roles and Users in AWS:
Creating Roles:
- Roles allow applications to assume temporary AWS credentials for API calls.
- Define trusted entities (AWS services, accounts, web identities) who can assume the role.
- Select relevant permissions policies (managed or custom) for allowed actions and resources.
- Example: Created “EmployeeWebApp” role for EC2 instance with S3 and DynamoDB access.
Creating Users:
- Users access AWS console/services directly (unlike roles).
- Can be enabled for console access and password creation.
- Assigned to groups with attached policies for permission management.
- Example: Created “EC2Admin” user with access to EC2Admins group (AmazonEC2FullAccess policy).
Access Keys:
- Used for programmatic AWS access via CLI, SDKs, etc.
- Generate and download secret access key securely (don’t share!).
- Example: Created access key for EC2Admin user’s command-line use.
Key Takeaways:
- Roles and users provide granular access control for AWS resources.
- Groups simplify permission management for multiple users.
- Securely manage and use access keys for programmatic access.
- [Instructor] Hello everyone. In this video, we are going
to create the IAM role for our employee directory application and we will also look
at how to create users, and look at the different AWS access keys, which are used for programmatic
access to AWS APIs. So to start off, let’s create the role for our application by selecting Roles in the left-hand navigation
of the IAM dashboard, and then we will click Create role. On this page we need to select what the trusted entity
type is to assume this role. We know that roles allow you to get access to temporary credentials
that are used to make AWS API calls. You wanna make sure
that you’re restricting who can assume this role. Not anyone can assume this role, right? So under the Trusted entity type we have the AWS service. An AWS service, that would be something like an EC2 instance, a Lambda function, other services that are assuming a role to make AWS API calls. You could have an AWS account, this would allow you to
allow cross-account access to permissions for
resources in your account. You also could select a web identity, which would allow for federated
users to assume a role. You have a SAML 2.0 federation, so if you have a corporate directory that is on premises that would be using SAML, you could use this as
your trusted entity type or you could create a
Custom Trusts policy. We are going to select
AWS service for this and then we are going to select EC2, since our employee directory application will be running on EC2. Next, we can select the Next button and then we’re brought to
the page to Add permissions. This lists out the different
permissions policies that are in IAM, and right now it’s pulling
back the AWS managed policies that exist in this account by default. And what a managed policy
is, is it is a policy that is created and managed by AWS, and so what I mean by that is, let’s go ahead and look
at a policy for S3. So if I type in S3 in the
search bar and then hit Enter, I’m going to expand this
Amazon S3 full access policy and we can see the JSON, which is the permissions for this policy. We can see we have the effect is Allow, that’s either gonna be Allow or Deny. There’s no other option
besides Allow or Deny, and then there’s the action, which is going to define
what AWS API calls are allowed to be made. So we can see that we have
S3:* and S3-object-lambda:*. That’s a wild card to
determine all API calls are allowed against this service, and then we also have the resource here, which is also set to *. So this would be all S3 resources. This is a very permissive policy. In the real world, you would
likely want to change this to allow just the API calls that your application needs, nothing more, and just the resources that you intend to have this policy be related to. So to do that, you would have
to create a custom policy. So if I scroll up, you could click on this
Custom policy button, which would take you to a new page where you could then
create your custom policy. For now, we’re going to use
the AWS managed policies and I’m gonna select the
checkbox for AmazonS3FullAccess. And then I’m also going
to type in DynamoDB, and exit out of the S3 filter, and then select the
AmazonDynamoDBFullAccess policy here as well. Because later in the course, we are going to be using DynamoDB as the
database for this application. So to prepare the role, I’ve selected both the S3FullAccess
and DynamoDBFullAccess. Now we’ll click Next. Then we can give this a name. I’m gonna name this EmployeeWebApp, and then I’m going to scroll down. We can view the trusted entities here, so this is our trust policy. We’re allowing the API
call STS AssumeRole, and who’s allowed to assume this role? ec2.amazonaws.com. So an EC2 instance is going to be allowed to assume this role only. All right, so now I can scroll down and then click Create role. Once your role is created, you can then click on the role, which will bring you to the page where you can see the
information about this role, such as the ARN, the Amazon Resource Name, and you can also scroll down, you can view the permissions attached, you could add new
permissions if you want to. You can simulate the permissions, you can view the trust relationships, you can also view any
tags associated with this, which are key value pairs. So, this is where you can
get all of the information about your role and then where
you can manage your role. So, next what I wanna do is create a user. So I’m gonna click on users
in the left-hand navigation, then I’m going to click Add users, and let’s give this user a name. Let’s say it’s EC2Admin, and then I want to click the checkbox for Enabling console access. So what that means is I
want to allow this user to be able to sign in to
the AWS management console. So note, by default, this was unchecked, meaning that just
because you create a user doesn’t mean they have
access to the console. I’m gonna go ahead and check this, and then I’m gonna allow an auto-generated password to be created, and then I want to leave
the checkbox checked for users to create a
password at the next sign-in. So this will allow them
to change their password once they log in for the first time. Now we’ll click Next, and
next what I want to do, is I’m going to add the users to a group. We currently don’t have any groups, so I’m gonna go ahead
and click Create group, and then what I wanna
do is add a group name. Let’s say this is EC2Admins, and then I want to attach
a policy to this group, because we know that it’s a best practice to attach policies to groups,
not to users directly. So I’m gonna select the
AmazonEC2FullAccess policy, and then I’m gonna scroll down
and click Create user group, and now I can select this user
group to add this user into, and then I can click next, and then we can scroll down. We can see the permissions that this one user currently has. It will be inheriting the permissions from the EC2Admins group, and then it also has
directly attached to it the IAMUserChangePassword permission, which will allow this user
to change their password. So now we can click Create user, and now we can go ahead and
click Return to users list. We didn’t download the
password for this, that’s fine. I don’t intend to actually use this user. It’s just for demonstration purposes, so we’ll click Continue, and now if I click on this user, what I wanna do next is click on the Security Credentials tab, and if we scroll down, you can see that we have this panel
here called Access keys. Access keys are going to allow your users to make programmatic calls to AWS using things like the AWS command line, the AWS software development kits, where maybe they’re developing
locally on their laptop, and they need their code to
be able to reach out to AWS, so I’m gonna go ahead and
create an access key here, and then I wanna use this
for the command line, and then I’m gonna go ahead
and click the checkbox for “I understand the
above recommendation.” What this is saying is it’s saying, “Hey, there’s another service
in the browser that you could use to use the AWS
CLI called AWS CloudShell.” We’re gonna go ahead and
create the access keys anyways. I’m gonna select this
checkbox and then click Next, and then I’m gonna click Next again, and click the Create access key. All right, so here you can see, we have our access key here, and then we have our secret access key, which is not being shown
currently on this page, but you could click show and then copy it, and you would use this access
key and secret access key to be able to configure
your command line locally. Now I’ll click Done and
then click Continue. So now for a little bit of cleanup, I’m gonna select Actions and then I’m gonna click
Deactivate and then Deactivate, and then I will click
Actions and then Delete, and then we can copy and
paste the access key ID here, and then click Delete. All right, that’s it for this video. Hopefully you know a little
bit more about roles and users.
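For anyone who wants to mirror this demo from the command line, here is a hedged sketch of the same role setup and key cleanup; the trust policy matches the one shown in the video, and the access key ID is a placeholder.
# trust-policy.json, saved locally, mirrors the trust policy from the demo:
# { "Version": "2012-10-17",
#   "Statement": [{ "Effect": "Allow",
#                   "Principal": { "Service": "ec2.amazonaws.com" },
#                   "Action": "sts:AssumeRole" }] }
aws iam create-role \
    --role-name EmployeeWebApp \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
    --role-name EmployeeWebApp \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy \
    --role-name EmployeeWebApp \
    --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess

# Cleanup, as in the video: deactivate, then delete, the demo access key.
aws iam update-access-key --user-name EC2Admin --access-key-id A2lAl5EXAMPLE --status Inactive
aws iam delete-access-key --user-name EC2Admin --access-key-id A2lAl5EXAMPLE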
Reading: Hosting the Employee Directory Application on AWS
The next video in this course demonstrates how to host the employee directory application on AWS using services like Amazon EC2 and Amazon VPC. While watching this video, you may be looking for a copy of the scripts used so you can follow along. The exercises in next week’s content include step-by-step instructions on how to launch the employee directory application, including the user data script. For this next video, we recommend that you watch without following along yet so you can understand the AWS services at a high level; then next week you will have the opportunity to walk through this demonstration in your AWS account using the instructions included in the exercises.
Video: Hosting the Employee Directory Application on AWS
Summary:
- This video focuses on launching an employee directory application on AWS using Amazon EC2.
- The instructor, Morgan, provides an overview of the architecture and explains how each piece will be built.
- Seth demonstrates launching an EC2 instance with default settings and basic configuration.
- Key steps include choosing an instance type, selecting a VPC and subnet, configuring security groups, and providing user data script.
- The script automates downloading the application code, starting the web server, and running the application.
- Finally, Seth shows how to access the instance through its IP address and confirms it’s ready to use.
- Morgan points out that data will be added later and upcoming lessons will delve deeper into EC2 and networking concepts.
Key takeaways:
- Amazon EC2 allows hosting virtual machines for cloud applications.
- Launching an EC2 instance involves configuring essential settings like network and security.
- User data scripts can automate application setup on instance launch.
- AWS provides default options for beginners to simplify initial setup.
- All right. You’ve learned some AWS vocabulary, some of the concepts
behind cloud computing as well as some of the
specifics around cloud identity and access management. As a refresher, let’s take a look at the architecture diagram
we’ll be building out. You’ll remember that it’s
fairly complicated at a glance. There are many different
services working together to host this solution and to get this entire
thing built out this way will take some time and understanding. You’ll get the opportunity to
see how each piece is built throughout the upcoming lessons. What I want to do now is show you how we are going to host our
employee directory application using a service called Amazon EC2 which has been mentioned
in previous videos. I find that the best way for me to explain to you how these services work
is to show you how they work. So what we are going to do is we are going to call Seth in here to
launch an Amazon EC2 instance and host the employee
directory application using the defaults provided by AWS. AWS provides you with
something called a default VPC which is a private network
for your AWS resources. Every EC2 instance you launch using AWS must live inside of a network. So in order to complete this demo with the limited amount of information that we have shown you about AWS services, we will be using the default
network provided by AWS and we will accept most
of the default values for launching an EC2 instance. Let’s bring Seth in for
some help with this one. Hey Seth. – Hey, Morgan. Is it time to launch
our first EC2 instance? – It is. We’ll be using the default VPC to launch this first instance, and we will configure the bare minimum to get our application up and running. Sound good? - Yep. I got it. Let’s get started. I’m already in the AWS management console and I will navigate to
the service Amazon EC2. As Morgan already mentioned,
Amazon EC2 is a compute service that allows you to host virtual machines. You’ll learn a lot about
this topic coming up soon but for now we are going to
simply create an EC2 instance to try and help you wrap your mind around using AWS services on demand. From here, we’ll launch
a new EC2 instance. An instance is just what we
call one single virtual machine. Now we have to select the configurations for this EC2 instance. You will go over these
configurations in detail later. We can give this instance a name, I’ll call it employee-web-app and then we will select the
free tier eligible options using a Linux AMI or Amazon Machine Image. And then scrolling down
to the next section we will select the free
tier eligible t2.micro for the instance type. Next, we normally need
to select a key pair. This would be used to
SSH into the instance but we won’t need to do this, so I’m going to select
proceed without a key pair and continue scrolling
to the network settings. Here we will select the network. So looking at network
settings, we will click Edit. Then we can select the VPC
and subnet for this instance. And as we discussed earlier,
we will use the default VPC and leave the subnet at No preference. The default VPC has
configurations in place that make this experimentation
process a lot easier when you’re first
getting started with AWS. Again, we will define all
of these things later. Continuing on with network settings, we will create a new security group which is an instance level firewall that will allow HTTP and HTTPS traffic in to reach the instance. So we will add inbound
rules for both of those. Now we will scroll down and
expand advanced details. We will accept most of the defaults here except the IAM instance
profile and user data which we will provide a value for. For the IAM instance profile
we will use the IAM role that will be used by the application, though this won’t come
into play until later when we have our S3 bucket created. But for now, let’s click the dropdown and select that IAM role. Now, scrolling down to the user data, this is a script that is going to run when the instance boots up. Let’s go ahead and paste
in our user data script. This script will download
the source code for the app, start the web server, and
kick off the application code so it’s ready to start handling requests. While you could have launched the instance, then connected to it via SSH and configured and started
your application manually, we have decided to use a script to automate this process on launch. Alright, now we can click Launch instance. It can take a few minutes
for the instance to boot up so let’s wait for that. And now it’s up and running. To access the instance,
copy the IP address from the details listed below, paste it into a new browser
tab, and there you go. It’s up and ready to go. Morgan, what do you think? – That looked great. It’s exactly what I would
expect at this point because there is no data
being served from a database so I expect to see just
a homepage with no info. So that’s great. Thank you Seth for
walking us through that. In upcoming lessons, we
will discuss the specifics of not only EC2, but
also networking on AWS. And you’ll understand how everything we just kind of glossed over
actually works, so stay tuned.
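As a rough CLI equivalent of what Seth did in the console, the sketch below launches a t2.micro with a security group, an instance profile, and a user data script. Every ID and name here is a placeholder; it also assumes an instance profile named EmployeeWebApp already exists (the console creates one automatically when you select a role, while the CLI requires aws iam create-instance-profile). Next week’s exercises remain the authoritative walkthrough.
# Launch the instance (AMI ID and security group ID are placeholders).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --security-group-ids sg-0123456789abcdef0 \
    --iam-instance-profile Name=EmployeeWebApp \
    --user-data file://user-data.sh \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=employee-web-app}]'

# Look up the public IP address to paste into the browser.
aws ec2 describe-instances \
    --filters Name=tag:Name,Values=employee-web-app \
    --query 'Reservations[].Instances[].PublicIpAddress'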
Reading: Default Amazon Machine Image (AMI) for Amazon EC2
Hello learners!
As of March 15, 2023, the default Amazon Machine Image (AMI) for Amazon EC2 has been updated to use the Amazon Linux 2023 AMI. In the demonstrations for this course, we use the Amazon Linux 2 AMI. If you are following along with the videos, please be aware that if you use the new Amazon Linux 2023 AMI with the user data the way it appears in the videos, the script will not run properly and the application will not launch. We are in the process of updating the course to reflect this change.
In the meantime, there are a few ways to work around this issue. You can either use the Amazon Linux 2 AMI with the user data as shown in the demonstrations and this will resolve the issue, or you can use an updated version of the user data script which I will include in this message.
To recap, we have a new default AMI for EC2 instances called the Amazon Linux 2023 AMI. The videos show us using Amazon Linux 2. Because of changes between these two AMIs the user data script shown in the videos will not run properly on Amazon Linux 2023 based instances. You can either choose Amazon Linux 2 as the AMI when launching the instance, and use the original user data script or you can use the Amazon Linux 2023 AMI and use the updated user data script.
Amazon Linux 2 user data script:
#!/bin/bash -ex
# Download and unpack the application source code
wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/FlaskApp.zip
unzip FlaskApp.zip
cd FlaskApp/
# Install Python 3, the MySQL client, and the app's Python dependencies
yum -y install python3 mysql
pip3 install -r requirements.txt
# Enable the EPEL repository and install the stress load-testing tool
amazon-linux-extras install epel
yum -y install stress
# Environment variables read by the application
export PHOTOS_BUCKET=${SUB_PHOTOS_BUCKET}
export AWS_DEFAULT_REGION=<INSERT REGION HERE>
export DYNAMO_MODE=on
# Start the Flask web server on port 80
FLASK_APP=application.py /usr/local/bin/flask run --host=0.0.0.0 --port=80
Amazon Linux 2023 user data script:
#!/bin/bash -ex
# Download and unpack the application source code
wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/FlaskApp.zip
unzip FlaskApp.zip
cd FlaskApp/
# Install pip and the app's Python dependencies
yum -y install python3-pip
pip install -r requirements.txt
# Install the stress load-testing tool (no EPEL step needed on Amazon Linux 2023)
yum -y install stress
# Environment variables read by the application
export PHOTOS_BUCKET=${SUB_PHOTOS_BUCKET}
export AWS_DEFAULT_REGION=<INSERT REGION HERE>
export DYNAMO_MODE=on
# Start the Flask web server on port 80
FLASK_APP=application.py /usr/local/bin/flask run --host=0.0.0.0 --port=80
When using the user data scripts, remember to replace <INSERT REGION HERE> with whatever AWS Region you are operating in (for example, us-east-1), and ensure you remove both angle brackets as well.
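As an optional alternative to hardcoding the Region, the script could look it up from the instance metadata service at boot. This sketch is not part of the official course scripts; it assumes IMDSv2 (the token-based metadata service) is reachable from the instance, which is the default for new instances:

# Request an IMDSv2 session token, then read the Region this instance runs in
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
export AWS_DEFAULT_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/region)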
Cheers!
Morgan Willis
Quiz: Week 1 Quiz
What are the four main factors that a solutions architect should consider when they must choose a Region?
Latency, price, service availability, and compliance
A solutions architect should consider the following four aspects when deciding which AWS Region to use for hosting applications and workloads: latency, price, service availability, and compliance. For more information, see the AWS Global Infrastructure video in week 1.
True or False: Every action a user takes in AWS is an API call.
True
In AWS, every action a user takes is an API call that is authenticated and authorized. A user can make API calls through the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs. For more information, see the Interacting with AWS video.
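For example, viewing your bucket list in the S3 console and running the CLI command below both result in the same underlying ListBuckets API call; this one-liner assumes the AWS CLI is installed and configured with credentials:

# The console's bucket list and this command invoke the same S3 ListBuckets API
aws s3api list-buckets --query "Buckets[].Name"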
Which statement BEST describes the relationship between Regions, Availability Zones, and data centers?
Regions are clusters of Availability Zones. Availability Zones are clusters of data centers.
The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. For more information, see the AWS Global Infrastructure video in week 1.
Which of the following is a benefit of cloud computing?
Go global in minutes
Going global in minutes means that users can easily deploy applications in multiple Regions around the world with a few clicks. For more information, see the What is AWS reading.
A company wants to manage AWS services by using the command line and automating them with scripts. What should the company use to accomplish this goal?
AWS Command Line Interface (AWS CLI)
The AWS CLI is a unified tool that is used to manage AWS services. By downloading and configuring the AWS CLI, the company can control multiple AWS services from the command line and automate them with scripts. For more information about the correct answer, see the Interacting with AWS reading.
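As a minimal sketch of that kind of automation (the Environment=dev tag is a hypothetical tag the company would have applied itself), a short script could stop every running development instance:

# Find running instances tagged Environment=dev and stop them
IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)
if [ -n "$IDS" ]; then
  aws ec2 stop-instances --instance-ids $IDS
fi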
What is a best practice when securing the AWS account root user?
Enable multi-factor authentication
It is important to not use the AWS account root user access key to sign in to the AWS account. The access key for an AWS account root user gives full access to all resources for all AWS services, including billing information. Users cannot reduce the permissions that are associated with their AWS account root user access key. Users must delete any access keys that are associated with the root user and enable multi-factor authentication (MFA) for the root user account. For more information, see the Protect the AWS Root User reading.
A solutions architect is consulting for a company. When users in the company authenticate to a corporate network, they want to be able to use AWS without needing to sign in again. Which AWS identity should the solutions architect recommend for this use case?
IAM Role
An IAM role does not have any credentials (password or access keys) that are associated with it. Instead of being uniquely associated with one person, a role can be assumed by anyone who needs it. An IAM user can assume a role to temporarily take on different permissions for a specific task. A role can also be assigned to a federated user who signs in by using an external identity provider (IdP) instead of IAM. For more information, see the Role Based Access in AWS video.
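To make this concrete, assuming a role from the CLI returns temporary credentials instead of long-lived ones; the account ID and role name below are placeholders:

# Assume a role; STS returns a temporary AccessKeyId, SecretAccessKey, and SessionToken
# that expire after the requested duration (here, one hour)
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/example-role \
  --role-session-name demo-session \
  --duration-seconds 3600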
Which of the following can be found in an AWS Identity and Access Management (IAM) policy?
A and B
An IAM policy contains a series of elements, including a Version, Statement, Sid, Effect, Principal, Action, Resource, and Condition. For more information, see Introduction to Amazon Identity and Access Management.
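As an illustration (the bucket name and IP range are placeholders), the following writes a minimal identity-based policy that uses the Version, Statement, Sid, Effect, Action, Resource, and Condition elements; a Principal element appears in resource-based policies rather than in a policy like this one:

# Write a minimal policy document illustrating common policy elements
cat > example-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketList",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "192.0.2.0/24" }
      }
    }
  ]
}
EOF
# Register the policy with IAM
aws iam create-policy --policy-name example-policy \
  --policy-document file://example-policy.json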
True or False: AWS Identity and Access Management (IAM) policies can restrict the actions of the AWS account root user.
False
The account root user has complete access to all AWS services and resources in an account, as well as billing and personal information. Because of this, we recommend that you securely lock away the credentials that are associated with the root user, and not to use the root user for everyday tasks. For more information, see the Protect the AWS Root User reading.
According to the AWS shared responsibility model, which of the following is the responsibility of AWS?
Managing the hardware, software, and networking components that run AWS services, such as the physical servers, host operating systems, virtualization layers, and AWS networking components.
AWS is responsible for protecting and securing AWS Regions, Availability Zones, and data centers, down to the physical security of the buildings, as well as managing the hardware, software, and networking components that run AWS services.
Which of the following is recommended if a company has a single AWS account, and multiple people who work with AWS services in that account?
The company should create an AWS Identity and Access Management (IAM) group, grant the group permissions to perform specific job functions, and assign users to a group, or use IAM roles.
With IAM, a company can create an IAM user group, grant the user group the permissions to perform specific job functions, and assign users to a group. This way, the company provides granular access to its employees, and people and services have permissions to only the resources that they need. The company could also achieve the same purpose by using IAM roles for federated access and using granular policies that are attached to roles. For more information, see Reading: Introduction to AWS Identity and Access Management.
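A sketch of that pattern with the AWS CLI (the group and user names are placeholders; AmazonS3ReadOnlyAccess is an AWS managed policy):

# Create a group, attach a permissions policy to it, and add a user
aws iam create-group --group-name s3-readers
aws iam attach-group-policy --group-name s3-readers \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam add-user-to-group --group-name s3-readers --user-name jane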
True or False: According to the AWS shared responsibility model, a customer is responsible for security in the cloud.
True
A customer is responsible for security in the cloud, while AWS is responsible for security of the cloud. For more information, see the Security and the AWS Shared Responsibility video.
Which of the following provides temporary credentials (that expire after a defined period of time) to AWS services?
IAM role
When a user assumes a role, AWS Identity and Access Management (IAM) dynamically provides temporary credentials that expire after a defined period of time, between 15 minutes and 36 hours. For more information, see Reading: Role Based Access in AWS.
A user is hosting a solution on Amazon Elastic Compute Cloud (Amazon EC2). Which networking component is needed to create a private network for their AWS resources?
Virtual private cloud (VPC)
A VPC is a private network for AWS resources. For more information, see Hosting the Employee Directory Application on AWS.
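As a minimal sketch, creating such a private network takes one CLI call; the CIDR block shown is just an example range:

# Create a VPC with a /16 private IPv4 address range
aws ec2 create-vpc --cidr-block 10.0.0.0/16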
Reading: Mid-Course Survey
We hope you are enjoying the course so far! We would like to get your input to make this and other courses on Coursera even better. Please use the survey link to provide us with your feedback based on your experience with the first module.
This survey is hosted by an external company (Qualtrics), so the survey link does not lead to our website. Please note that AWS will own the data gathered via this survey, and will not share the information or results collected with survey respondents. AWS handles your information as described in the AWS Privacy Notice.