
Week 1: AWS Overview and Security

Welcome to AWS Cloud Technical Essentials Week 1! In this week, you will learn the definition of cloud computing and how to describe the cloud value proposition. You will learn how to differentiate between workloads that run on-premises versus in the cloud, and how to create an AWS account. You will also get an overview of Amazon Web Services, including how to differentiate between AWS Regions and Availability Zones, and the different ways that you can interact with AWS. Finally, you will learn best practices for using AWS Identity and Access Management (IAM).

Learning Objectives

  • Discover IAM best practices
  • Create an AWS account
  • Describe the different ways to interact with AWS
  • Differentiate between AWS Regions and Availability Zones
  • Describe Amazon Web Services
  • Differentiate between workloads running on-premises and in the cloud
  • Define cloud computing and its value proposition

Welcome to the Course


Video: Welcome to AWS Cloud Technical Essentials

  • This course provides an overview of cloud computing and explores various AWS services.
  • It is designed for people working in IT or IT-related fields who have a general knowledge of IT topics but want to learn more about the AWS cloud.
  • The course covers topics such as compute, networking, storage, databases, security, monitoring, and optimization.
  • It includes hands-on examples and a cloud-based application project.
  • No coding is required for this course, but you will have access to the source code if you want to explore it further.
  • The course also includes written segments to reinforce ideas and provide additional background information.
  • Completing all the readings is highly recommended to get the full benefit of the course.
  • Hey, everyone. I’m Morgan Willis, Principal
    Cloud Technologist at AWS, and I want to welcome you to this course. In this course, you will
    learn the key concepts behind cloud computing
    and explore AWS services using real-life examples
    covering compute, networking, storage, databases, security, and more. This course is intended
    for people working in IT or IT-related fields, who have a general knowledge of IT topics, but have yet to learn
    much about the AWS cloud. To kick off the course, we will cover the basics
    of what the cloud is, the benefits of the cloud,
    the AWS global infrastructure, and identity and access management. This will give you a solid foundation for learning the rest of
    the more technical topics contained in the course. Then we will focus on
    computing, and for this topic, we will dig into the services, Amazon Elastic Compute
    Cloud, AWS Container services like Amazon Elastic Container Service, and serverless compute
    options like AWS Lambda. Then we will discuss networking in AWS using services like Amazon
    Virtual Private Cloud, and other networking
    technologies used for creating, securing, and connecting to
    your own private network in AWS. For storage, we’ll explore
    how and when to use Amazon S3, Amazon Elastic Block Store, and others. For databases, we will
    cover many use cases around the different database
    services AWS has to offer, but with a special focus on Amazon Relational Database
    Service and Amazon DynamoDB. Then finally, we will discuss monitoring and scaling your application. For that, we’ll use Amazon CloudWatch, and Amazon EC2 Auto Scaling, alongside Elastic Load Balancing. We aren’t going to focus on
    theory alone in this course. Instead, we will use a hands-on example through a cloud-based application that we will build over the
    duration of the course piece by piece. The app we will build is an
    employee directory application that stores images and information about fictional employees in a company. There’s no coding
    required for this course. You will have access to the source code if you want to explore it further. This course includes
    written segments we refer to as readings or notes, to reinforce ideas, dive deeper into topics, as well as provide background information on concepts we did not
    cover in the videos. Because of this, I highly suggest that you take the time to
    complete all of the readings to get the full benefit of the course. So again, welcome, and
    as we say at Amazon, work hard, have fun, and make history.

Video: Meet the Instructors

  • The page introduces two Cloud Technologists from AWS, Morgan Willis and Seph, who will be sharing their knowledge with you throughout the course.
  • Morgan has a background in software development and has been in the technology field for over 10 years. She enjoys outdoor activities and has a cat named Meowzy who seems to have some knowledge of AWS.
  • Seph has been working with the AWS Cloud for close to 15 years and has experience in various industries. He enjoys spending time with his dog named Fluffy, who also seems to have an interest in AWS.
  • Both Morgan and Seph are excited to help you navigate the world of AWS in the upcoming lessons.
  • Hey everyone. My name is Morgan Willis. I’m a Principal Cloud
    Technologist with AWS. What that means is I essentially
    build things, learn AWS, and then I get to share
    my knowledge with you all. I had a career in software development before I started working in training and certification here at AWS. I really love technology and have been in the field for over 10 years, in roles from technical support
    to database administration to web application development. And while it’s true that
    I do love technology, I also have a regular life outside of work where I like to hike
    in beautiful sun, rain, sleet, or snow. I also like to ski and really enjoy just about any outdoor activity
    all seasons of the year. I’m looking forward to
    helping you navigate the world of AWS over the coming lessons. And this here is Meowzy,
    he’s my trusted sidekick. He spends time with me while
    I type away on my computer, and I think he’s picked up a
    thing or two over the years. His knowledge of AWS at this point is pretty great for a cat. I think I can even hear
    him sneaking on my computer in the middle of the night to build his own solutions on AWS. I have a feeling his knowledge will come in handy for this course. More on that later. – Hey y’all, I’m Seph. I’m a Cloud Technologist with AWS, and I’ve been working with the AWS Cloud for close to 15 years. Like Morgan, I have spent
    most of that time learning, building, and sharing my knowledge with customers like yourself. In addition to my experience with AWS, I have worked in a variety of industries and I’ve been on both the data center and the cloud side of infrastructures. Outside of my life in tech, I mostly enjoy spending time with friends, which primarily includes my four-year-old wonder dog named Fluffy. Speaking of Fluffy, sometimes
    I think she knows more about AWS than she lets on. Every once in a while, I notice her drawing AWS
    architectures in a journal and I wonder where she
    learned all of this. She can’t bring me the ball
    back when we play fetch, but she always wags her tail
    when I mention cloud computing. I think Fluffy has something
    to say about cloud computing, so stay tuned for some of
    her helpful tips and tricks.

Video: Course Feedback

  • The video on this page is about how to get help and support while taking the AWS Cloud Technical Essentials course.
  • If you have questions about AWS services or something related to AWS, you can check out Repost, a website where you can ask AWS-related questions and get answers.
  • If you find any out-of-date, incorrect, or broken content in the course, you can report the issue using the form provided in the course materials.
  • If something goes wrong within your AWS console while working on an exercise or lab, you should not enter a ticket, as the ticket system is meant for reporting issues with the courses.
  • For questions about course completion certificates or anything related to the learning platform, you should reach out to the learning platform’s support.
  • Hi there. My name is Morgan Willis and I’m a Principal Cloud
    Technologist here at AWS. As you’re working your
    way through the course, you may want to get in touch
    with the AWS community, or reach out to the team
    who created this course. This video is to help you understand where to direct your questions. You may have questions
    about the AWS services you are learning about, or you might find that you have a specific question about something you are
    working on related to AWS. For these types of questions, I highly recommend you check out Repost. Repost can be found at
    the URL, repost.aws. And this website gives you
    access to a vibrant community where you can ask AWS-related
    questions and get answers. Something to keep in
    mind about our courses is that AWS innovates at
    an extremely fast rate. This means that some of our content can get slightly out of date
    when new features are released, or if the AWS console changes. If you see any out of date,
    incorrect, or broken content, you can contact us,
    the course creators, directly by reporting the issue using
    the form that is included in the course materials
    following this video. An example of something
    you would use this form for is a hands-on tutorial or exercise that has instructions that are out-of-date or are no longer correct. If you have something go
    wrong within your AWS console while working through an exercise or lab, this is not something that
    you would enter a ticket for, as we will not have access
    to your specific AWS account. The ticket system is meant
    as a place to report issues with our courses. Finally, if you have a question about a course completion certificate or anything related to
    the learning platform you’re taking this course on, please reach out to the learning platform’s support to resolve these issues. I hope this helps you
    find the help you need. See you later.

Getting Started with AWS Cloud


Video: Introduction to Week 1

This is an introduction to a cloud computing learning course on AWS. It covers:

Concepts:

  • Theory and benefits of cloud computing.
  • AWS global infrastructure (regions, availability zones).
  • Interacting with AWS services.
  • Security and identity/access management.

Sample application:

  • Employee Directory web app (CRUD – create, read, update, delete).
  • Features: add/edit/delete employees, add photos.

AWS services used:

  • Amazon Virtual Private Cloud (VPC) – private network.
  • Amazon Elastic Compute Cloud (EC2) – virtual machines for backend code.
  • Amazon Relational Database Service (RDS) – database for employee data.
  • Amazon Simple Storage Service (S3) – object storage for images.
  • Amazon CloudWatch – monitoring app.
  • Elastic Load Balancing and Amazon EC2 Auto Scaling – scalability and fault tolerance.
  • AWS Identity and Access Management (IAM) – security and identity.

Additional notes:

  • Course uses a sample app throughout to demonstrate AWS services.
  • Course has additional resources like definitions, tips, and commentary.

Overall, this course covers the basics of cloud computing on AWS with a hands-on approach using a sample application.

  • Hey there. I hope you’re excited to learn
    about cloud computing on AWS. I’m excited to get started too. So let’s hop in. To kick things off, we
    are going to cover some of the foundational concepts
    you’ll need to know about when working with AWS. Working with AWS is part theory, part technical knowledge, part vocabulary, and lots of practice and experimentation. These first few lessons are going to help you
    establish a little bit from each of those categories. You will learn the theory
    behind cloud computing and the benefits of the cloud. This will help you make informed
    decisions about the cloud from a high level, and give you some of the reasoning around why and when to use the cloud. Then we will dive into the
    AWS global infrastructure, covering regions and availability zones, followed by lessons on how to
    interact with AWS services. This lesson is going to give you the technical knowledge
    and vocabulary you need to create and discuss AWS architectures, and properly understand AWS
    documentation and examples. A lot of what AWS offers can relate back to concepts used in traditional
    on-premises computing. And getting started with AWS means comparing these concepts to AWS concepts. After that, we will begin
    to discuss security, and identity and access management. This is important to understand
    when you’re getting started because as soon as you
    create an AWS account, you’ll have some actionable knowledge on how to secure that
    account right from the start. Starting off secure is a good place to be. Throughout all the topics, in these next few sections and over the duration
    of the entire course, we’ll be using a sample employee
    directory web application to demonstrate how AWS services are used. Let’s go ahead and take a look at the Employee Directory app. You can see I’m in the browser, and I want to show you the functionality. This is a basic CRUD app or
    create, read, update, delete. This app keeps track of
    employees within a company. So the first thing I’m going
    to do is create a new employee. To create an employee, we’ll
    give the employee a name, and I’m going to add myself. So I’ll add my name. My location is USA. And then my job title, I’ll enter in Cloud Technologist. And then we can add some
    badges for each employee. This is like an employee’s flair, so I’m gonna select Mac User
    for myself and Photographer, and then I will click Save. Now, I’m back on the homepage, and I actually forgot to add
    a photo for this employee, so let’s go ahead and edit it
    and add the employee photo. You can also see this
    app gives me the ability to delete employees from the directory. So, those are the features of the app from a user’s perspective. Time to review how we will build this app using AWS services. This application will be
    built in a private network using Amazon Virtual Private Cloud or VPC. We will host the
    application’s backend code on Amazon Elastic Compute Cloud, or EC2, which is a service that essentially offers
    virtual machines on AWS. So let’s go ahead and add
    those servers to our diagram. The employee data will
    be stored in a database, which will also live inside this network and will be hosted using a service called Amazon Relational
    Database Service or RDS. So I’ll go ahead and add
    that to the diagram as well. The images for the
    employees will be stored using the object storage service, Amazon Simple Storage Service or S3, which allows the unlimited
    storage of any type of file, like images in our example. These are the basic building
    blocks of our application. We will use Amazon CloudWatch
    for monitoring this solution, and we will also want to ensure that our application is
    scalable and fault tolerant. So I’m going to go ahead and add Amazon Elastic Load Balancing and Amazon EC2 Auto
    Scaling to this diagram. For security and identity, we will be using AWS
    Identity and Access Management or IAM, so let’s add that. There’s a lot of pieces on this diagram, but don’t worry, we will
    build this app step by step using the AWS Management Console. We will add to this diagram, reference it, and change it throughout
    the course to meet our needs and let it evolve as new ideas and
    techniques enter our world. One more thing to note about this course before I let you go. If you hear this noise, (notification dings) it means that you are gonna be seeing one of our informational popups on the screen, which convey extra information like word definitions, AWS
    best practices, tips, tricks, or general commentary written for you by our lively, furry
    sidekicks, Meowzy and Fluffy. They know all the tips and are very helpful little friends to have around during this course. That’s it for now, see you soon.

Reading 1.2: What is AWS?


Video: AWS Global Infrastructure

Storing photos in AWS for safekeeping and accessibility

The document talks about storing employee photos in AWS for safekeeping and accessibility. It starts by highlighting the importance of having multiple copies of the photos to prevent data loss in case of laptop failure.

AWS redundancy and disaster recovery

It then explains how AWS ensures data security through redundancy. AWS has clusters of data centers around the world, grouped into Availability Zones (AZs) and further into Regions. Each AZ has redundant power, networking, and connectivity to ensure uptime even if one data center fails. Similarly, Regions are connected with redundant links for disaster recovery in case an entire AZ is affected.

Choosing an AWS Region

When choosing an AWS Region to store your data, you need to consider four factors:

  • Compliance: Does your application, company, or country have any regulations that dictate where your data must reside? For example, if your data must be stored within the UK, you must choose the London Region.
  • Latency: How close are your IT resources to your user base? If your users are spread across the globe, it’s best to choose a Region closest to the majority of them to minimize latency.
  • Price: Pricing can vary between Regions due to different tax structures. Choose a Region that offers the best balance of performance and cost.
  • Service availability: Not all new AWS services are immediately available in all Regions. Ensure the Region you choose supports the services you want to use.
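The four factors above can be turned into a tiny filter-then-rank sketch. This is only an illustration of the decision order (compliance first, then service availability, then latency and price); every Region name, latency figure, and price index below is made-up example data, not real AWS numbers.

```python
# Hypothetical sketch of the four-factor Region choice described above.
# All data here is illustrative, not real AWS latency or pricing.

def choose_region(candidates, required_services, compliance_regions=None):
    """Filter candidates by compliance and service availability,
    then pick the lowest-latency Region, breaking ties on price."""
    eligible = []
    for r in candidates:
        if compliance_regions is not None and r["name"] not in compliance_regions:
            continue  # compliance comes before every other factor
        if not required_services.issubset(r["services"]):
            continue  # the Region must offer every service you need
        eligible.append(r)
    if not eligible:
        raise ValueError("no Region satisfies compliance and service needs")
    return min(eligible, key=lambda r: (r["latency_ms"], r["price_index"]))

regions = [
    {"name": "eu-west-2", "latency_ms": 20, "price_index": 1.1,
     "services": {"ec2", "s3", "rds"}},
    {"name": "us-east-1", "latency_ms": 90, "price_index": 1.0,
     "services": {"ec2", "s3", "rds", "new-service"}},
]

# Data must stay in the UK: only eu-west-2 is eligible, full stop.
print(choose_region(regions, {"ec2", "s3"},
                    compliance_regions={"eu-west-2"})["name"])  # eu-west-2
```

Note how a compliance constraint short-circuits everything else, mirroring the "full stop" advice in the video.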

Global Edge Network for further latency reduction

Beyond Regions and AZs, AWS also has a Global Edge Network consisting of Edge locations and regional Edge caches. These cache frequently accessed content closer to end users, further reducing latency for geographically distant users.

In conclusion,

The document provides a comprehensive overview of storing data in AWS, emphasizing redundancy, disaster recovery, and factors to consider when choosing an AWS Region. It also introduces the Global Edge Network as an additional tool for optimizing data delivery for global audiences.

AWS Global Infrastructure Tutorial: Your Cloud’s Foundation

Welcome to the fascinating world of AWS Global Infrastructure! This tutorial will serve as your guide to understanding the backbone of your cloud deployments, from the physical data centers to the intricate connections that keep your applications running smoothly.

Building Blocks of the Cloud: Regions and Availability Zones

Imagine a vast network of fortresses scattered across the globe, each one meticulously designed to safeguard your data and applications. These fortresses, in the world of AWS, are called Regions. Each Region is a self-contained unit consisting of multiple Availability Zones (AZs). Think of AZs as smaller, secure outposts within a Region, offering redundancy and fault tolerance.

  • Regions: Identified by a code that combines a geography and a number (e.g., us-east-1), they provide geographical separation and cater to specific compliance requirements or latency needs.
  • Availability Zones: Nestled within Regions, they house data centers with independent power, cooling, and networking. If one AZ faces an outage, your applications in other AZs within the same Region remain unaffected.

The Power of Redundancy: Safeguarding Your Data

Redundancy is the mantra of AWS Global Infrastructure. Data is replicated across multiple AZs within a Region, ensuring that even if one AZ goes down, your information remains safe and accessible. This eliminates single points of failure and keeps your applications humming.

Reaching the Globe: The Global Network at Your Fingertips

But what about users scattered across the world? That’s where the AWS Global Network comes in. This intricate web of fiber optic cables connects Regions worldwide, enabling low-latency data transfer and seamless application performance for your global audience.

Beyond Regions and AZs: Introducing the Edge Locations

For applications demanding lightning-fast response times, AWS offers Edge Locations. These strategically placed outposts cache frequently accessed content closer to end users, minimizing the distance data needs to travel. Imagine having a content delivery network right at the doorstep of your users!

Choosing the Right Region: A Strategic Decision

With so many Regions and factors to consider, selecting the right one for your needs can be daunting. But fear not! Here are some key aspects to guide your decision:

  • Latency: Where are your users located? Choose a Region closest to them for optimal performance.
  • Compliance: Does your industry have specific data residency requirements? Select a Region that adheres to those regulations.
  • Pricing: Prices can vary across Regions. Consider your budget and find the best balance between cost and performance.
  • Service Availability: Not all AWS services are available in every Region. Ensure your chosen Region supports the services you need.

Exploring the AWS Management Console:

Now that you understand the core concepts, let’s put theory into practice! The AWS Management Console is your one-stop shop for managing your cloud resources. Here, you can visualize Regions and AZs, monitor service health, and configure your infrastructure to meet your specific needs.

Remember:

As you embark on your cloud journey, keep in mind that the AWS Global Infrastructure is constantly evolving. New Regions, services, and features are added regularly, so staying updated is key. Embrace the continuous learning curve, and you’ll unlock the full potential of the cloud for your applications.

Congratulations! You’ve taken your first step towards mastering the AWS Global Infrastructure. With this knowledge as your foundation, you can confidently build, deploy, and manage your cloud applications with scalability, reliability, and global reach. So, go forth and explore the limitless possibilities of the cloud!

Remember, this is just a starting point. Feel free to delve deeper into specific aspects of the AWS Global Infrastructure that pique your interest. There’s a whole world of cloud knowledge waiting to be discovered!

  • For our employee directory application, we’ll be using photos of
    each of our employees. If we have only one copy of those photos and don’t want to lose them, we have to store them somewhere safe. Currently, the only copy of these photos are saved on my laptop. But if my laptop breaks, what happens? No more photos. I want to make sure this doesn’t happen, so I’m going to upload the photos to AWS to ensure that the copies exist even if my laptop is destroyed. This also allows me to access
    my photos from anywhere, my home, my phone, a plane,
    on a train, everywhere. When I store these
    photos in an AWS service, I’m storing it in a data center somewhere, on servers inside that data center. But if a natural disaster happens, such as an alien coming down from space and destroying a data center, then what do we do? Luckily, AWS has planned for
    this event and many others, including natural disasters and other unavoidable alien accidents. The way they plan for it
    is through redundancy. AWS has clusters of data
    centers around the world. So here AWS would have
    a second data center connected to the first through redundant high
    speed and low latency links. That way, if the first
    data center goes down, the second data center
    is still up and running. This cluster of data centers is called an availability zone or AZ. An AZ consists of one or more data centers with redundant power,
    networking, and connectivity. Unfortunately, sometimes natural disasters like hurricanes or other disasters might also extend to
    impacting an entire AZ, but AWS has planned for that, too, again, using redundancy. Like data centers, AWS
    also clusters AZs together and also connects them with redundant high speed
    and low latency links. A cluster of AZs is
    simply called a region. In AWS, you get to choose the
    location of your resources by not only picking an AZ,
    but also choosing a region. Regions are generally named by location so you can easily tell where they are. For example, I could
    put our employee photos in a region in Northern Virginia called the Northern Virginia Region. So knowing there are many
    AWS regions around the world, how do you choose an AWS region? As a basic rule, there are four aspects you need to consider when
    deciding which AWS region to use, compliance, latency, price,
    and service availability. Let’s start with compliance. Before any other factors, you must first look at your
    compliance requirements. You might find that your
    application, company, or country that you live in requires you to handle
    your data and IT resources in a certain way. Do you have a requirement that your data must live
    in the UK boundaries? Then you should choose the
    London Region, full stop. None of the rest of the factors matter. Or if you operate in Canada,
    then you may be required to run inside the Canada Central Region. But if you don’t have a compliance or regulatory control
    dictating your region, then you can look at other factors. For example, our employee photos are not restricted by regulations, so I can continue looking
    at the next factor, which is latency. Latency is all about how
    close your IT resources are to your user base. If I want every employee around the world to be able to view the
    employee photos quickly, then I should place the infrastructure that hosts those photos
    close to my employees. We are all bound by the speed of light. Applied to your business, that means that if your
    users live in Oregon, then it makes sense to
    run your application in the Oregon Region. You could run it in the Brazil Region, but the latency from Oregon to Brazil might impact your users and create a slower load time. But maybe I really want
    to run my application or store my employee photos in Brazil. One problem I might run
    into is the pricing, which is the next factor we’ll talk about. The pricing can vary
    from region to region, so it may be that some regions, like the Sao Paulo Region, are more expensive than others due to different tax structures. So even if I wanted to store
    my employee photos in Brazil, it might not make sense
    from the latency perspective or the pricing perspective. And then finally, the fourth factor you’ll want to consider is the services you want to use. Often when we create new
    services or features in AWS, we don’t roll those services out to every region we have right away. Meaning, if you want to
    begin using a new service on day one after it launches, then you’ll want to make sure
    it operates in the region that you’re looking at running
    your infrastructure in. To recap, regions, availability zones, and data centers exist in a
    redundant, nested sort of way. There are data centers
    inside of availability zones and availability zones inside of regions. And how do you choose a region? By looking at compliance, latency, pricing, and service availability. Those are the basics, but it
    isn’t the end of the story when it comes to AWS
    global infrastructure. We also have the Global Edge Network, which consists of Edge locations
    and regional Edge caches. Edge locations and regional Edge caches are used to cache content
    closer to end users, thus reducing latency. Consider this scenario. You are a company hosting a website for users all over the world. Even though your website is
    being downloaded from all over, it’s hosted out of an AWS region
    in North America, say Ohio. Without caching, every user
    would need to send a request to the Ohio region where
    the data is downloaded, and then the data would be returned to the user and rendered in their browser. If the user is located in
    the USA or a nearby country, there may not be much
    latency in this process. However, if a user is coming from a place that is located far from the Ohio region, then latency will be greater. Latency is a big hurdle
    for many use cases, including web applications. So to reduce this latency, you could use the Edge locations to cache frequently accessed content. When you cache content
    at an Edge location, a copy is hosted across the
    Edge locations around the world. That way, when a user goes
    to retrieve that information, it will come from the
    closest Edge location, which will greatly reduce
    the latency for that user. You can use services
    like Amazon CloudFront to cache content using the Edge locations.

Reading 1.3: AWS Global Infrastructure

Video: Interacting with AWS

Summary: Managing AWS Infrastructure

This video explains how to manage your infrastructure on AWS after it shifts from physical servers to virtual cloud resources.

Three main ways to interact with AWS:

  1. AWS Management Console:
    • Web-based, point-and-click interface.
    • Easy to use for beginners.
    • No need for scripting or syntax knowledge.
    • Can be inefficient for repetitive tasks.
  2. AWS Command Line Interface (CLI):
    • Uses terminal commands to interact with AWS API.
    • Faster and more efficient for repeated tasks.
    • Requires knowledge of AWS syntax.
    • Reduces human error compared to Console.
  3. AWS Software Development Kits (SDKs):
    • Libraries for popular programming languages to integrate with AWS services.
    • Most powerful and flexible option.
    • Requires programming skills.

Recommendations:

  • Beginners start with the Console.
  • Move to CLI for improved efficiency with repetitive tasks.
  • Use SDKs for programmatic control and integration with applications.

This course will primarily use the Console for simplicity, but feel free to explore the CLI for deeper learning.
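Whichever interface you pick, every call ends up as a signed HTTPS request to the same AWS API. As a peek under the hood, the sketch below derives a Signature Version 4 signing key using only the Python standard library. The secret shown is the placeholder value from AWS's own documentation, not a real credential, and in practice the Console, CLI, and SDKs all perform this signing for you.

```python
# Simplified sketch of the SigV4 signing-key derivation that underlies
# every AWS API call. Real tools (CLI, SDKs) do this automatically.
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date_stamp: str,
                      region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: four chained HMAC-SHA256 steps."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)      # scope the key to one Region
    k_service = _hmac(k_region, service)  # ...and to one service
    return _hmac(k_service, "aws4_request")

# AWS's documented example secret key (a placeholder, not a real credential).
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "iam")
print(len(key))  # a 32-byte key, unique to this day, Region, and service
```

Because the key is scoped to a date, Region, and service, a leaked signature can't be replayed elsewhere, one reason never to share the underlying secret access key itself.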

Interacting with AWS: A Beginner’s Guide

Welcome to the captivating world of AWS! This tutorial will equip you with the essential tools and techniques to confidently navigate your cloud journey. Dive in and discover how to interact with AWS and unleash its potential for your applications.

Understanding the Landscape:

Before we jump in, let’s paint a picture of the options at hand. You have three main avenues for interacting with AWS, each catering to different levels of expertise and needs:

  1. AWS Management Console: This web-based portal is your friendly neighborhood guide. Think of it as a point-and-click wonderland where you can create, manage, and monitor your AWS resources with ease. No coding involved, just intuitive menus and prompts to guide you through the process. It’s perfect for beginners and simple tasks, but be prepared for a slower pace for repetitive actions.
  2. AWS Command Line Interface (CLI): Now, let’s turn up the dial a notch. The CLI empowers you to interact with AWS through powerful text commands. Think of it as a direct line to the inner workings of your cloud infrastructure. This approach offers unrivaled speed and efficiency, especially for repetitive tasks. Scripting commands lets you automate routine processes, minimizing human error and maximizing productivity. But be warned, the CLI demands familiarity with AWS syntax and isn’t for the faint of heart.
  3. AWS Software Development Kits (SDKs): For the programming enthusiasts, here’s your playground. SDKs are libraries for popular languages like Python, Java, and Node.js, enabling you to seamlessly integrate AWS services into your applications. Think of it as building Lego blocks with code, connecting your applications to the vast possibilities of AWS with unrivaled flexibility and control. However, this path requires strong programming skills and understanding of cloud architecture.

Choosing Your Weapon:

The best option for you depends on your comfort level and goals. Start with the Console for its user-friendly interface and gentle learning curve. As you gain confidence, the CLI can become your ally for efficiency, while SDKs unlock advanced automation and deep integrations. Don’t be afraid to experiment and find the approach that resonates with you!

Mastering the Tools:

This tutorial will serve as your map in navigating each interaction method. We’ll delve into hands-on exercises to:

  • Conquer the Console: Learn to create and manage essential resources like virtual servers, storage buckets, and databases using intuitive clicks and menus.
  • Tame the CLI: Unleash the power of commands to automate tasks, configure resources, and gain deeper insights into your AWS environment.
  • Embrace the SDKs: Explore code examples and best practices for integrating AWS services into your applications, unlocking limitless possibilities.

Remember:

AWS is a vast and ever-evolving landscape. Embrace the learning curve, explore different tools, and seek resources like documentation and online communities to continuously expand your cloud expertise. This tutorial is just the beginning of your adventure; the path to mastering AWS awaits!

Ready to embark on your journey? Buckle up, grab your chosen tool, and let’s dive into the exciting world of interacting with AWS!

  • When you own the infrastructure it’s easy to understand
    how you interact with it because you can see it, touch it and work with it on every level. If I have a server that
    I’ve stood up in my closet, interacting with that server
    is easy because it’s mine. I can touch it. When I remove the ability for
    me to touch and see something like when the infrastructure
    becomes virtual, the way that I work
    with that infrastructure has to change a bit. Instead of physically
    managing my infrastructure, now I logically manage it through the AWS Application Programming Interface, or API. So now when I create, delete
    or change any AWS resource whether it’s a virtual
    server or a storage system for employee photos, I use
    API calls to AWS to do that. You can make these API
    calls in several ways but the three main ways we’re
    going to talk about in AWS are the AWS Management Console, the AWS Command Line Interface and the AWS Software
    Development Kits or SDKs. When people are first
    getting started with AWS, they typically use the
    AWS Management Console. This is a web-based method that you log into from your browser. The great thing about the console is that you can point and click. By simply clicking and following prompts, you can get started with
    some of these services without any previous
    knowledge of the service. With the console, there’s no need to worry about scripting or
    finding the proper syntax. When you log into the console, the landing page will show you services you’ve recently worked with but you can also choose to view
    all of the possible services organized into relevant categories such as compute, database, storage, and more. If I change the region to Paris, I’m making requests to eu-west-3.console.aws.amazon.com, or the Paris Region’s web console. After you work with the
    console for a while, you may want to move away from the manual creation of resources. For example, in the console, you have to go through multiple screens to set configurations to
    create a virtual machine. And if I wanted to create
    a second virtual machine I would need to go through
    that process all over again. While this is helpful, it also
    leaves room for human error. I could easily miss a
    checkbox or misspell something or even skip important
    settings by accident. So when you get more familiar with AWS, or if you’re working in
    a production environment that requires a degree of risk management, you should move to a tool that enables you to script or program these API calls. One of these tools is called the AWS Command Line Interface or CLI. You can use this tool in a couple of ways. One is to download the tool
    and then use the terminal on your machine to create
    and configure AWS services. Another is to access the CLI through the use of AWS Cloud Shell, which can be done through the console. With both of these options,
    instead of having a GUI like the console to interact with, you create and run commands using a defined AWS syntax. For example, if I wanted to launch a virtual machine with the CLI through Cloud Shell, I first use this quick shortcut to open a session. Once my session is started, I type in aws, which is how we know we
    interact with the API, then type in the service. In this case, it’s EC2, the service that allows us to create and manage virtual machines,
    which we’ll learn about later. And then the command that we
    want to perform in that service and any other configurations
    we want to set. Using one command, versus multiple screens you have to click through in the console, can help reduce accidental human error. But that also means you have
    to work with defined syntax and get that syntax correct
    in order for your command to run successfully. So there is some upfront
    cost in just understanding how to form commands, but after a while, you can begin to script
    these commands out, making them repeatable which can greatly improve
    efficiency in the long run. The other tool that allows you to interact with the AWS API programmatically is the AWS Software
    Development Kits or SDKs. SDKs are created and maintained by AWS for the most popular programming languages such as Python, Java,
    Node.js, .NET, Ruby, and more. This comes in handy when
    you want to integrate your application source
    code with AWS services. For example our employee
    directory application runs using Python and Flask. If I wanted to store all
    of the employee photos including pictures of employees
    in an AWS storage service, I could use the Python SDK to write code to interact with that AWS storage service. The ability of managing AWS services from a place where you can run source code with conditions, loops, arrays, lists and other programming elements provides a lot of power and creativity. Alright, that wraps this video up. To recap, you have three main
    options to connect with AWS, the Console, the CLI, and the SDKs. In this course we’ll mainly be using the console to interact with the services but feel free to challenge
    yourself by using the CLI if you’re a bit more advanced.
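As a rough illustration of the `aws <service> <command> <configurations>` structure the video describes, here is a small Python helper that assembles such a command string. The AMI ID and instance type are made-up placeholders, not values from the course:

```python
# Sketch of how an AWS CLI call is assembled: the `aws` program name,
# a service (ec2), an operation (run-instances), and configuration flags.
# The image ID and instance type below are hypothetical placeholders.
def build_cli_command(service, operation, **options):
    parts = ["aws", service, operation]
    for flag, value in sorted(options.items()):
        parts.append("--" + flag.replace("_", "-"))
        parts.append(str(value))
    return " ".join(parts)

command = build_cli_command(
    "ec2", "run-instances",
    image_id="ami-12345678",   # placeholder AMI ID
    instance_type="t2.micro",
    count=1,
)
print(command)
# → aws ec2 run-instances --count 1 --image-id ami-12345678 --instance-type t2.micro
```

Actually running the printed command would require the AWS CLI and valid credentials; the point here is only the shape of the call.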

Reading 1.4: Interacting with AWS


Security in the AWS Cloud


Video: Security and the AWS Shared Responsibility Model

AWS Security: A Collaborative Effort

In the cloud, security isn’t a solo act. Both you and AWS share responsibility for securing your environment. It’s like a building: AWS ensures the foundation is sturdy, but you lock your own doors within.

AWS secures the core:

  • Global infrastructure (data centers, networks)
  • Software components (databases, compute, storage)
  • Underlying hardware and virtualization layers

You handle security “in the cloud”:

  • Patching operating systems on your resources
  • Encrypting data in transit and at rest
  • Configuring firewalls and access controls
  • Implementing compliance and industry standards

This shared model lets you customize security for your specific needs. Each AWS service may have slightly different responsibilities, so remember to tailor your approach accordingly.

Remember, secure cloud solutions are built together!


  • In order to begin using AWS effectively, it’s important to understand how security works in the cloud. You already know that by using AWS, you won’t be managing every single aspect of hosting your solutions. You’ll rely on AWS to manage portions of your workloads for you taking care of that
    undifferentiated heavy lifting, like running the day-to-day
    operations of the data centers and managing the various
    virtualization techniques employed to keep your AWS
    account isolated from, say, my AWS account. So the question is who is ultimately responsible for security in AWS? Is it A, you the customer, or B, AWS? The answer? Well, the correct answer is yes. Both you and AWS are responsible for securing your AWS environment. Let’s explore this concept. AWS follows something called the shared responsibility model. We don’t view solutions built on AWS as one singular thing to be secured. We see it as a collection of
    parts that build on each other. AWS is responsible for the
    security of some aspects. For the others, you are responsible. Together, with both you and
    AWS following best practices, you have an environment
    that you can trust. Let’s take a look at the shared responsibility
    model diagram. You can see we have the
    responsibility of security broken into two groupings, you and AWS each being responsible
    for different components. We describe AWS as being responsible for security of the cloud. For example, one piece
    of the puzzle for AWS is the AWS global infrastructure. And when I say global infrastructure, I mean the physical infrastructure that the cloud is running on. This is iron and concrete
    buildings with fences protected by security guards and various other security measures. It also includes the AWS global backbone or the private fiber cables that connect each AWS
    region to each other. Managing the security of these pieces is all on AWS. You don’t need to worry about that as far as security goes. Then there is the infrastructure and various software components that run AWS services. This includes compute, databases, storage, and networking. AWS is responsible for
    securing these services from the host operating system up through the virtualization layer. For example, let’s say you want to host some virtual machines or VMs on the cloud. We primarily use the service
    Amazon EC2 for this use case. When you create a VM using EC2, AWS manages the physical
    host the VM is placed on as well as everything
    through the hypervisor level. If the host operating
    system or the hypervisor needs to be patched or updated, that is the responsibility of AWS. This is good news for you as the customer, as it greatly reduces
    the operational overhead in running a scalable and elastic solution leveraging virtualization. We will talk more about
    EC2 and elastic solutions in upcoming lessons. For now let’s get back
    to the security aspect. So if AWS manages the underlying hardware up through the virtualization layer then what are you responsible for? Well, you are responsible
    for security in the cloud. Similar to how a construction
    company builds a building and it’s on them to make sure that the building itself
    is stable and secure then you can rent out an
    apartment in that building. It’s up to you to lock the
    door to your apartment. Security of the building
    and security in the building are two different elements. For security in the cloud, the
    base layer is secured by AWS. It’s up to you to lock the door. So for our EC2 example, you
    are responsible for tasks like patching the operating
    systems of your VMs, encrypting data in transit and at rest, configuring firewalls and controlling who has access to these resources and how much access they have. The main thing to understand is that you own your data in AWS. You are ultimately
    responsible for ensuring that your data is encrypted, secure and has proper access controls in place. In many cases, AWS services
    offer native features you can enable to achieve
    a secure solution. It’s up to you to actually use them. In other cases you may
    devise your own solutions to meet compliance and security standards for your specific industry or use case. So that’s the shared responsibility
    model at a high level. I do want you to keep
    something in mind, though. There is some amount of
    nuance you should understand as we move through the course regarding the shared responsibility model. Each AWS service is different
    and serves a different purpose and a different use case. Therefore, the shared responsibility model can vary from service to service as well. This is a good thing as you get to decide how to build your solutions on AWS.

Reading 1.5: Security and the AWS Shared Responsibility Model


Video: Protect the AWS Root User

Protect your AWS kingdom: Avoid root user peril with MFA!

Creating an AWS account involves an email and password, granting the root user unlimited access. This root user is like a king with immense power, vulnerable to nefarious actors who could delete your data or spin up costly crypto mining.

To defend your AWS kingdom:

  • Enable multi-factor authentication (MFA): This adds an extra layer of security beyond just a password. Think of it as a royal guard verifying your identity. Choose a virtual or physical device that generates unique one-time passwords, like an app on your phone. Even if someone steals your password, they’ll need your phone’s one-time code to get past the guard.
  • Don’t use the root user for daily tasks: Treat it like the king’s crown, reserved for rare, critical actions. Instead, create an IAM user for everyday tasks, limiting its power just like assigning different roles to court officials.

By enabling MFA and using IAM users, you build a secure castle protecting your precious AWS data and resources. Remember, with great power comes great responsibility, so secure your root user like a vigilant king!


  • When you create an AWS account, you sign up with an email address and you create a password. The email address you sign up with becomes the root user of the AWS account. This root user has unrestricted access to everything in your
    account in most cases. We will discuss in a later lesson what types of users can be restricted and how to do that. But for now, I want you to understand that when you log into your AWS account using an email address and password, it means you are logging
    in as the root user. This root user can do whatever
    they want in the account. It has all of the powers that can be had. And with great power comes
    great responsibility, or something like that. So knowing that the root
    user is all powerful, you should do everything you can do to make sure that no one can
    gain access to this root user. Let’s say there’s a
    nefarious actor of sorts, and they have an evil plan to log into your AWS
    account with your root user, gaining access to all of
    the powers and permissions. And they go and delete
    all of your valuable data and AWS resources, and in their place, they spin up a lovely and expensive cryptocurrency
    mining operation. Leaving you with the bill
    and none of your data as they walk away with
    a full crypto wallet. Sounds less than ideal. Well, how can you prevent
    that from happening? You could, of course, create
    a hard-to-crack password, and that will give you
    some level of security. This, however, is an example of single factor authentication where all someone needs to
    do is match the password with the email address, and boom, they’re in. We recommend as a best practice that right after you
    create your AWS account, you enable multi-factor authentication, or MFA, on the root user. MFA introduces an additional
    unique piece of information that you need to enter to
    gain access to the account. There are a variety of
    devices, virtual or physical, that generate one-time passwords that can be integrated with
    your AWS account for MFA. For example, I personally
    use a virtual MFA device that is an app on my phone. This app produces a string of numbers for one-time use that I type into the console after I log in using my
    email address and password. Even if someone guessed the password, they cannot gain access to the account without the numbers generated
    by the virtual MFA device. No matter what type of MFA
    device that you choose to use, and I will include a link to the supported devices in the readings for you to look into, the most important thing
    is that you are using MFA on the root user. That way, even if someone,
    the nefarious actor, cracks your password, they still cannot gain
    access to your account. All thanks to MFA. On top of enabling MFA for the root user, we strongly recommend that
    you do not use the root user for your everyday tasks, even the administrative ones. There are really only a few actions that require root user access. Coming up, you’ll learn
    how to create an IAM user, and use that to log into your AWS account instead of using the root user.
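The one-time codes a virtual MFA app produces come from the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238). Here is a minimal, stdlib-only sketch of the HOTP step; the secret is the RFC test value, not a real MFA seed:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226: HMAC-SHA1 over an 8-byte counter,
    dynamic truncation, then the last `digits` decimal digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret; a TOTP app derives the counter from the clock,
# typically floor(unix_time / 30), so the code rotates every 30 seconds.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

AWS stores the same seed when you register the device, recomputes the code on its side, and compares, which is why a stolen password alone is not enough to get in.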

Reading 1.6: Protect the AWS Root User


AWS Identity and Access Management


Video: Introduction to AWS Identity and Access Management

This passage discusses access control and credential management in the context of building an application on AWS. Three key takeaways are:

  1. Multiple needs for access control: The application requires access control for user login, application code accessing S3, and managing resources within the AWS account.
  2. AWS IAM for access and credential management: AWS IAM helps manage login credentials and permissions to the AWS account, as well as credentials for signing API calls to AWS services. It doesn’t handle application-level access control.
  3. IAM users, groups, and policies: IAM users have unique credentials for logging in. Policies grant or deny permissions to specific actions (API calls) within the account. Groups can be used to manage permissions for multiple users.

The passage also recommends best practices like using IAM users with admin permissions instead of the root user, and setting up MFA for the root user. Finally, it introduces the concept of IAM roles for temporary access, which will be covered in the next part.

  • Let’s take a look at the application we are going to be building
    out throughout the course. We’ve already gone over the
    design of this application and what I want to focus
    on now is access control. There are multiple places on this diagram where we can identify the need for access control and
    credential management. The first being that we need
    to manage how users log in and use the employee
    directory application. We could require that people
    have a valid credential like a username and password
    to log into the app. That is access management
    on the application level. Then there is the fact that the code running the employee directory application, on a virtual machine hosted by the service Amazon EC2, will need to make API calls to the object storage service Amazon S3 in order to read and write data like images for the employees. Well, here’s the thing. Just because both Amazon EC2 and Amazon S3 have existing resources in this account, it doesn’t mean that the API calls made from the code running
    on the EC2 instance to S3 are automatically allowed to be made. In fact, all API calls in
    AWS must be both signed and authenticated in order to be allowed, no matter if the resources live
    in the same account or not. The application code running
    on the Amazon EC2 instance needs access to credentials to make this signed API call to Amazon S3. So that’s another place
    with a need for a credential and access management. Now let’s take this a step further. How are you going to build
    out this architecture? Well, you’ll need access to an AWS account through the use of a login. Your identity within this AWS account will need permissions
    to be able to do things like create this network,
    launch the EC2 instances and create the resources that will host and run the solution in AWS. Yet another place you need credentials. The root user which you
    have already learned about in a previous lesson
    has these permissions, but you don’t want to
    be using the root user to administer your AWS resources. And let’s assume you won’t
    be the only one working on or building out this application. It’s more likely that
    within one AWS account there would be many people
    who need access to build and support your solutions. You’ll have different
    groups of people responsible for different parts of the architecture. The people who would
    write and deploy the code might be software developers, whereas the people who
    would be responsible for making changes to say the network would be a different group of people. You wouldn’t and shouldn’t give everyone who needs access to the AWS account, the root user credentials to log in. You instead would have unique credentials for each person logging in. This is where the service AWS identity and access management comes in. We identified three places
    where we will need access and credential management. AWS identity and access management or IAM can help take care of these
    two spots on the diagram. AWS IAM manages the login credentials and permissions to the AWS account and it also can manage the credentials used to sign API calls
    made to AWS services. IAM would not, however, be responsible for application level access management. The code running on
    this instance would use separate appropriate mechanisms
    for authenticating users into the application itself, not IAM. All right, so let’s start
    with the AWS account level. IAM allows you to create users and each individual
    person who needs access to your AWS account would have
    their own unique IAM user. Creating users for everyone who
    needs access to the account, takes care of authentication. Authentication being verifying if someone is who they say they are because they had the proper
    credentials to log in. Now it’s time to introduce authorization. Authorization is this. Let’s say you’ve logged in and
    you are who you say you are. You’ve been authenticated. Now you want to create resources
    and manage AWS resources like create an Amazon
    EC2 instance for example. Sure, you’ve logged in but do you have the correct permissions to be able to complete that action? The idea that your permissions control what you can or cannot
    do is authorization. Are you authorized to
    launch an EC2 instance? IAM users take care of authentication and you can take care of authorization by attaching IAM policies to users in order to grant or deny
    permission to specific actions within an AWS account. Keep in mind when I say action here, I’m referring to an AWS API call. Everything in AWS is an API call. IAM policies are JSON-based documents. Let’s take a look at an example. This IAM policy document
    contains permissions that allow the identity
    to which it’s attached to perform any EC2-related action. The structure of an IAM
    policy has an Effect which is either allow or deny. And Action which is the AWS API call, in this case, we have ec2:* which includes all EC2-related actions. You can restrict this to
    be specific API calls. For example, I can restrict this action to be just run instances and then any user with
    this policy attached would only be allowed to run EC2 instances but perform no other EC2-related tasks. IAM lets you get very granular with your permissions in that way. Continuing with this
    example, we see the resource which allows you to
    restrict which AWS resources the actions are allowed
    to be performed against. You can also include
    conditions in your policies that can further restrict actions. IAM policies can also
    be attached to groups. IAM groups are very simply
    just groupings of IAM users. You can attach a policy to
    a specific user or a group. When you attach a policy to a group, any users that are a part of that group would inherit the permissions. We recommend that as a best practice you organize users into groups and assign permissions to groups instead of individual
    users where possible. This makes it easier to manage
    when people change job roles or multiple users need
    permissions applied or revoked. Another best practice to follow is that we recommend when
    you create your AWS account, you set up MFA for the root user. Then create an IAM user
    with admin permissions. Log out of the root user and then log in with the IAM user that you just created. From there, you can
    use this user to create the rest of the IAM
    groups users and policies. The reason we suggest you do this is because you cannot apply
    a policy to the root user but you can to an IAM user. Now that I’ve told you about
    IAM users, groups and policies, we’ve addressed this
    part of access management that we needed for our application but what about this part? The EC2 instance needs
    credentials to be able to make the signed API call to S3 for reading and writing employee images. Am I suggesting that you make an IAM user with a username and
    password for the application running on EC2 to use? No. No, I am not. This is where role-based
    access comes into the picture. Coming up we will learn
    about the temporary access that IAM roles provide and how it can apply
    to this use case here.
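The policy described in the video (an Effect, an Action like `ec2:*`, and a Resource) can be written out as a JSON document, and the wildcard idea sketched in a few lines. The `fnmatch`-based check is only an illustration of the matching concept; real IAM evaluation also handles explicit denies, resource ARNs, and conditions:

```python
import fnmatch
import json

# A policy like the one in the video: allow every EC2 action on any resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
    ],
}
print(json.dumps(policy, indent=2))

def is_allowed(policy: dict, action: str) -> bool:
    """Very simplified check: does any Allow statement's Action pattern
    match the requested API call? (Not the real IAM evaluation engine.)"""
    return any(
        stmt["Effect"] == "Allow" and fnmatch.fnmatchcase(action, stmt["Action"])
        for stmt in policy["Statement"]
    )

print(is_allowed(policy, "ec2:RunInstances"))  # → True
print(is_allowed(policy, "s3:GetObject"))      # → False
```

Narrowing `"Action": "ec2:*"` to `"Action": "ec2:RunInstances"` would permit launching instances but no other EC2 task, which is the granularity the video describes.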

Reading 1.7: Introduction to AWS Identity and Access Management


Video: Role Based Access in AWS

Summary of IAM Roles:
  • IAM roles provide temporary access credentials for AWS identities like applications or services.
  • Roles differ from users in having no login credentials (username/password) and temporary, expiring credentials.
  • Roles are assumed programmatically, enabling secure access for applications without embedding static credentials.
  • We created an IAM role for the EC2 instance used in the employee directory application.
  • This role provides “S3 full access” and “AmazonDynamoDBFullAccess” permissions for the app’s operations.
  • Roles are commonly used for access between AWS services or for external identities (federated users) to access AWS.
  • AWS services like IAM Identity Center can simplify federated user access through roles.
  • All right, so you’ve learned about IAM users, groups, and policies. Policies can be applied to AWS identities like users and groups
    to assign permissions. They also, however, can be
    applied to another AWS identity, IAM roles. An IAM role is an identity that can be assumed by
    someone or something who needs temporary
    access to AWS credentials. Let’s dive into what I mean
    when I say AWS credentials. Most AWS API calls that are made must be signed and authenticated, but how does that process work? When you send an HTTP request to AWS, you must sign the request. This signing process
    happens programmatically and allows AWS to verify your identity when a request is made and run through various security processes to ensure a request is legit. IAM users have associated credentials like an access key ID
    and secret access key that are used to sign requests. However, with regards to our architecture, the code running on the EC2 instance needs to sign the request sent to S3. I already told you that I don’t intend for you to create an IAM user with credentials to be used by this app, so how will the application gain access to the needed AWS access key ID and AWS secret access
    key to sign the API call? The answer is IAM roles. IAM roles are identities in
    AWS that like an IAM user also have associated AWS
    credentials used to sign requests. However, IAM users have
    usernames and passwords as well as static credentials whereas IAM roles do not
    have any login credentials like a username and password and the credentials used to sign requests are programmatically
    acquired, temporary in nature, and automatically rotated. In our example the EC2
    instance will be assigned an IAM role. This role can then be
    assumed by the application running on the virtual
    machine to gain access to its temporary credentials to sign
    the AWS API calls being made. A role can be assumed by
    many different identities and they have many use cases. The important thing to know about roles is that the credentials
    they provide expire and roles are assumed programmatically. To get an idea of this, let’s create a role using the AWS console. I’m already logged in and will navigate to the IAM service. Now I want to create the
    role the EC2 instance is going to use for the
    employee directory application. Now, we will click Roles
    in the left-hand side and select Create role. We will then select the trusted entity this role is intended to be used for, and in our case this will be EC2, then we will click Next. Now we get to select the
    permissions assigned to this role. Again, permissions being what actions does this identity have
    the authority to take? We want this identity to be able to read and write to Amazon S3,
    so I will search for S3 and you can see there
    are multiple options here for pre-written policies
    that I can choose from. I also can write my own custom policy which would also show up here
    if I created it ahead of time. I’m going to select S3
    full access for the policy. In the real world you would choose a more restrictive IAM policy for this but for a proof of concept like this course is intended to provide, we will leave the permissions
    a bit looser for now. You would come back and
    change the permissions attached to this role to be more granular if this were ever to make it to a production-type environment. Now we also need to add
    another permission here, and that’s for DynamoDB. So I will exit out of the S3 filter and then type in DynamoDB and hit Enter. And then we will select the AmazonDynamoDBFullAccess permission. Then we will click Next. Then we can give this role a name, which is EmployeeWebAppRole. Then we can scroll down and
    we can click Create role. We can see that this role has
    now been created successfully and if we click on the
    role and scroll down, we can then see that this role
    has two permissions attached. It’s very common for roles to be used for access between AWS services. Just because two resources
    exist in the same AWS account, it doesn’t mean that they can send any API calls to each other. If one AWS service needs
    to send an API call to another AWS service,
    it would most likely use role-based access where the
    AWS service assumes a role, gains access to temporary credentials, and then sends the API call to the other AWS service, which then verifies the request. Another identity that
    can assume an IAM role to gain access to AWS is
    external identity providers. For example, let’s say you have a business that employs 5,000 technical employees that all need access to AWS accounts. You already have an identity
    provider system in place that allows these employees
    to log into their laptops and gain access to various
    corporate resources. Should you also now go
    create 5,000 IAM users for these employees to access AWS? Well, that doesn’t sound very efficient. You instead can leverage IAM roles to grant access to existing identities from your enterprise user directory. These are known as federated users. AWS assigns a role to a federated user when access is requested
    through an identity provider. We also have AWS services that
    can make this process easier such as AWS IAM Identity Center. These are a couple of examples
    of role-based access in AWS and this is just the introduction. Check out the class readings
    for more information and don’t worry if you don’t
    quite grasp this concept yet as we will continue using
    roles in our demos and examples throughout this course.
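
The idea above can be sketched in a few lines of Python. This is a simulation of the shape of an assume-role response, not a real AWS call: the function and every credential value below are placeholders, and real temporary credentials come from the AWS Security Token Service.

```python
from datetime import datetime, timedelta, timezone

def assume_role(role_name: str, duration_seconds: int = 3600) -> dict:
    """Simulate the shape of an assume-role response: short-lived
    credentials with an expiry, instead of long-lived access keys.
    All values are placeholders, not real AWS credentials."""
    now = datetime.now(timezone.utc)
    return {
        "RoleName": role_name,
        "AccessKeyId": "ASIAEXAMPLEKEYID",   # placeholder temporary key
        "SecretAccessKey": "exampleSecret",  # placeholder
        "SessionToken": "exampleToken",      # placeholder
        "Expiration": now + timedelta(seconds=duration_seconds),
    }

creds = assume_role("EmployeeWebAppRole")
# The credentials expire, so a service that assumed the role must
# request fresh ones before the expiration time passes.
assert creds["Expiration"] > datetime.now(timezone.utc)
```

The point is the shape of the response: temporary credentials always carry an expiration, which is what makes role-based access safer than sharing long-lived keys between services.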

Reading 1.8: Role Based Access in AWS

Week 1 Exercise & Assessment


Video: Introduction to Lab 1

The hands-on lab focuses on applying AWS IAM best practices for securing accounts. You’ll explore preloaded IAM resources like groups, users, roles, and policies. You’ll learn to manage users and groups by adding users to groups and allowing them to inherit group permissions. You’ll also explore how permissions work with AWS services using different IAM users. This lab prepares you for future labs that build on these concepts. Have fun!

  • It’s time for our hands-on lab. At this point, you’ve learned about some of the best practices for securing AWS accounts using AWS IAM. So let’s go ahead and put those best
    practices into practice. In this lab, you will
    access the IAM dashboard and explore existing groups,
    users, roles, and policies that will be preloaded into
    the exercise environment. You will learn how to
    manage users and groups by performing tasks like
    adding users to groups and allowing users to inherit
    specific group permissions. You also will learn about
    how permissions work with AWS by exploring different AWS services using different IAM users. That’s all for this lab. Throughout the course,
    there will be more labs that will loosely follow along with what we do in the videos. Have fun, and see you in a bit.

Video: Demo AWS IAM

Summary of IAM Roles and Users in AWS:

Creating Roles:

  • Roles allow applications to assume temporary AWS credentials for API calls.
  • Define trusted entities (AWS services, accounts, web identities) who can assume the role.
  • Select relevant permissions policies (managed or custom) for allowed actions and resources.
  • Example: Created “EmployeeWebApp” role for EC2 instance with S3 and DynamoDB access.

Creating Users:

  • Users access AWS console/services directly (unlike roles).
  • Can be enabled for console access and password creation.
  • Assigned to groups with attached policies for permission management.
  • Example: Created “EC2Admin” user and added it to the EC2Admins group (AmazonEC2FullAccess policy).

Access Keys:

  • Used for programmatic AWS access via CLI, SDKs, etc.
  • Generate and download secret access key securely (don’t share!).
  • Example: Created access key for EC2Admin user’s command-line use.

Key Takeaways:

  • Roles and users provide granular access control for AWS resources.
  • Groups simplify permission management for multiple users.
  • Securely manage and use access keys for programmatic access.
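
To make the access-key point concrete: the AWS CLI reads the key pair from a credentials file (by default `~/.aws/credentials`), which `aws configure` writes for you. Below is a sketch of that file's format, using a temp path and placeholder values rather than a real key pair:

```shell
# Write a credentials file in the format the AWS CLI expects.
# A temp path is used so nothing real is touched; the key values
# are placeholders, never real credentials.
CREDS_FILE="$(mktemp)"
cat > "$CREDS_FILE" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKeyValue
EOF
cat "$CREDS_FILE"
```

Treat this file like a password: never commit it to source control or share it, and deactivate and delete keys you no longer need, as shown at the end of the demo below.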

  • [Instructor] Hello everyone. In this video, we are going
    to create the IAM role for our employee directory application and we will also look
    at how to create users, and look at the different AWS access keys, which are used for programmatic
    access to AWS APIs. So to start off, let’s create the role for our application by selecting Roles in the left-hand navigation
    of the IAM dashboard, and then we will click Create role. On this page we need to select what the trusted entity
    type is to assume this role. We know that roles allow you to get access to temporary credentials
    that are used to make AWS API calls. You wanna make sure
    that you’re restricting who can assume this role. Not just anyone can assume this role, right? So under the Trusted entity type, we have AWS service. An AWS service would be something like an EC2 instance, a Lambda function, or another service that assumes a role to make AWS API calls. You could also choose an AWS account, which would enable
    cross-account access to permissions for
    resources in your account. You also could select a web identity, which would allow for federated
    users to assume a role. You have a SAML 2.0 federation, so if you have a corporate directory that is on premises that would be using SAML, you could use this as
    your trusted entity type or you could create a
    custom trust policy. We are going to select
    AWS service for this and then we are going to select EC2, since our employee directory application will be running on EC2. Next, we can select the Next button and then we’re brought to
    the page to Add permissions. This lists out the different
    permissions policies that are in IAM, and right now it’s pulling
    back the AWS managed policies that exist in this account by default. A managed policy
    is a policy that is created and managed by AWS. To see what I mean by that, let’s go ahead and look
    at a policy for S3. So if I type in S3 in the
    search bar and then hit Enter, I’m going to expand this
    Amazon S3 full access policy and we can see the JSON, which is the permissions for this policy. We can see we have the effect is Allow, that’s either gonna be Allow or Deny. There’s no other option
    besides Allow or Deny, and then there’s the action, which is going to define
    what AWS API calls are allowed to be made. So we can see that we have
    S3:* and S3-object-lambda:*. That wildcard means
    all API calls against this service are allowed, and then we also have the resource here, which is also set to *. So this would be all S3 resources. This is a very permissive policy. In the real world, you would
    likely want to change this to allow just the API calls that your application needs, nothing more, and just the resources that you intend to have this policy be related to. So to do that, you would have
    to create a custom policy. So if I scroll up, you could click on this
    Custom policy button, which would take you to a new page where you could then
    create your custom policy. For now, we’re going to use
    the AWS managed policies and I’m gonna select the
    checkbox for AmazonS3FullAccess. And then I’m also going
    to type in DynamoDB, and exit out of the S3 filter, and then select the
    AmazonDynamoDBFullAccess policy here as well. Because later in the course, we are going to be using DynamoDB as the
    database for this application. So to prepare the role, I’ve selected both the S3FullAccess
    and DynamoDBFullAccess. Now we’ll click Next. Then we can give this a name. I’m gonna name this EmployeeWebApp, and then I’m going to scroll down. We can view the trusted entities here, so this is our trust policy. We’re allowing the API
    call STS AssumeRole, and who’s allowed to assume this role? ec2.amazonaws.com. So an EC2 instance is going to be allowed to assume this role only. All right, so now I can scroll down and then click Create role. Once your role is created, you can then click on the role, which will bring you to the page where you can see the
    information about this role, such as the ARN, the Amazon Resource Name, and you can also scroll down, you can view the permissions attached, you could add new
    permissions if you want to. You can simulate the permissions, you can view the trust relationships, you can also view any
    tags associated with this, which are key value pairs. So, this is where you can
    get all of the information about your role and then where
    you can manage your role. So, next what I wanna do is create a user. So I’m gonna click on users
    in the left-hand navigation, then I’m going to click Add users, and let’s give this user a name. Let’s say it’s EC2Admin, and then I want to click the checkbox for Enabling console access. So what that means is I
    want to allow this user to be able to sign in to
    the AWS management console. So note, by default, this was unchecked, meaning that just
    because you create a user doesn’t mean they have
    access to the console. I’m gonna go ahead and check this, and then I’m gonna allow an auto-generated password to be created, and then I want to leave
    the checkbox checked for users to create a
    password at the next sign-in. So this will allow them
    to change their password once they log in for the first time. Now we’ll click Next, and
    next what I want to do, is I’m going to add the users to a group. We currently don’t have any groups, so I’m gonna go ahead
    and click Create group, and then what I wanna
    do is add a group name. Let’s say this is EC2Admins, and then I want to attach
    a policy to this group, because we know that it’s a best practice to attach policies to groups,
    not to users directly. So I’m gonna select the
    AmazonEC2FullAccess policy, and then I’m gonna scroll down
    and click Create user group, and now I can select this user
    group to add this user into, and then I can click next, and then we can scroll down. We can see the permissions that this one user currently has. It will be inheriting the permissions from the EC2Admins group, and then it also has
    directly attached to it the IAMUserChangePassword permission, which will allow this user
    to change their password. So now we can click Create user, and now we can go ahead and
    click Return to users list. We didn’t download the
    password for this, that’s fine. I don’t intend to actually use this user. It’s just for demonstration purposes, so we’ll click Continue, and now if I click on this user, what I wanna do next is click on the Security Credentials tab, and if we scroll down, you can see that we have this panel
    here called Access keys. Access keys are going to allow your users to make programmatic calls to AWS using things like the AWS command line, the AWS software development kits, where maybe they’re developing
    locally on their laptop, and they need their code to
    be able to reach out to AWS, so I’m gonna go ahead and
    create an access key here, and then I wanna use this
    for the command line, and then I’m gonna go ahead
    and click the checkbox for “I understand the
    above recommendation.” What this is saying is, “Hey, there’s another service
    in the browser that you could use to run the AWS
    CLI, called AWS CloudShell.” We’re gonna go ahead and
    create the access keys anyways. I’m gonna select this
    checkbox and then click Next, and then I’m gonna click Next again, and click the Create access key. All right, so here you can see, we have our access key here, and then we have our secret access key, which is not being shown
    currently on this page, but you could click show and then copy it, and you would use this access
    key and secret access key to be able to configure
    your command line locally. Now I’ll click Done and
    then click Continue. So now for a little bit of cleanup, I’m gonna select Actions and then I’m gonna click
    Deactivate and then Deactivate, and then I will click
    Actions and then Delete, and then we can copy and
    paste the access key ID here, and then click Delete. All right, that’s it for this video. Hopefully you know a little
    bit more about roles and users.
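
The Effect/Action/Resource structure from the AmazonS3FullAccess JSON in the demo can be illustrated with a tiny evaluator. This is a deliberately simplified sketch, not the real IAM evaluation logic (the real engine also handles explicit denies across multiple statements, conditions, NotAction, and more):

```python
from fnmatch import fnmatch

# A single statement shaped like the AmazonS3FullAccess policy from the demo.
policy = {
    "Effect": "Allow",                         # Allow or Deny, nothing else
    "Action": ["s3:*", "s3-object-lambda:*"],  # wildcard: all S3 API calls
    "Resource": ["*"],                         # wildcard: all resources
}

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Return True if this one Allow statement matches the request."""
    action_ok = any(fnmatch(action, pat) for pat in policy["Action"])
    resource_ok = any(fnmatch(resource, pat) for pat in policy["Resource"])
    return policy["Effect"] == "Allow" and action_ok and resource_ok

# s3:* matches any S3 API call, but not calls to other services.
assert is_allowed(policy, "s3:GetObject", "arn:aws:s3:::my-bucket/photo.png")
assert not is_allowed(policy, "dynamodb:GetItem",
                      "arn:aws:dynamodb:us-east-1:123456789012:table/Employees")
```

This also shows why a custom policy is preferable in production: replacing `s3:*` with only the calls your application makes, and `*` with specific bucket ARNs, shrinks the set of requests the policy allows.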

Reading: Hosting the Employee Directory Application on AWS

Reading

Video: Hosting the Employee Directory Application on AWS

Summary:

  • This video focuses on launching an employee directory application on AWS using Amazon EC2.
  • The instructor, Morgan, provides an overview of the architecture and explains how each piece will be built.
  • Seth demonstrates launching an EC2 instance with default settings and basic configuration.
  • Key steps include choosing an instance type, selecting a VPC and subnet, configuring security groups, and providing user data script.
  • The script automates downloading the application code, starting the web server, and running the application.
  • Finally, Seth shows how to access the instance through its IP address and confirms it’s ready to use.
  • Morgan points out that data will be added later and upcoming lessons will delve deeper into EC2 and networking concepts.

Key takeaways:

  • Amazon EC2 allows hosting virtual machines for cloud applications.
  • Launching an EC2 instance involves configuring essential settings like network and security.
  • User data scripts can automate application setup on instance launch.
  • AWS provides default options for beginners to simplify initial setup.
  • All right. You’ve learned some AWS vocabulary, some of the concepts
    behind cloud computing as well as some of the
    specifics around cloud identity and access management. As a refresher, let’s take a look at the architecture diagram
    we’ll be building out. You’ll remember that it’s
    fairly complicated at a glance. There are many different
    services working together to host this solution and to get this entire
    thing built out this way will take some time and understanding. You’ll get the opportunity to
    see how each piece is built throughout the upcoming lessons. What I want to do now is show you how we are going to host our
    employee directory application using a service called Amazon EC2 which has been mentioned
    in previous videos. I find that the best way for me to explain to you how these services work
    is to show you how they work. So what we are going to do is we are going to call Seth in here to
    launch an Amazon EC2 instance and host the employee
    directory application using the defaults provided by AWS. AWS provides you with
    something called a default VPC which is a private network
    for your AWS resources. Every EC2 instance you launch using AWS must live inside of a network. So in order to complete this demo with the limited amount of information that we have shown you about AWS services, we will be using the default
    network provided by AWS and we will accept most
    of the default values for launching an EC2 instance. Let’s bring Seth in for
    some help with this one. Hey Seth. – Hey, Morgan. Is it time to launch
    our first EC2 instance? – It is. We’ll be using the default VPC to launch this first instance, and we will configure the bare minimum to get our application up and running. Sound good?
  • Yep. I got it. Let’s get started. I’m already in the AWS management console and I will navigate to
    the service Amazon EC2. As Morgan already mentioned,
    Amazon EC2 is a compute service that allows you to host virtual machines. You’ll learn a lot about
    this topic coming up soon but for now we are going to
    simply create an EC2 instance to try and help you wrap your mind around using AWS services on demand. From here, we’ll launch
    a new EC2 instance. An instance is just what we
    call one single virtual machine. Now we have to select the configurations for this EC2 instance. You will go over these
    configurations in detail later. We can give this instance a name, I’ll call it employee-web-app and then we will select the
    free tier eligible options using a Linux AMI or Amazon Machine Image. And then scrolling down
    to the next section we will select the free
    tier eligible t2.micro for the instance type. Next, we normally need
    to select a key pair. This would be used to
    SSH into the instance but we won’t need to do this, so I’m going to select
    proceed without a key pair and continue scrolling
    to the network settings. Here we will select the network. So looking at network
    settings, we will click Edit. Then we can select the VPC
    and subnet for this instance. And as we discussed earlier,
    we will use the default VPC and leave the subnet at No preference. The default VPC has
    configurations in place that make this experimentation
    process a lot easier when you’re first
    getting started with AWS. Again, we will define all
    of these things later. Continuing on with network settings, we will create a new security group which is an instance level firewall that will allow HTTP and HTTPS traffic in to reach the instance. So we will add inbound
    rules for both of those. Now we will scroll down and
    expand advanced details. We will accept most of the defaults here except the IAM instance
    profile and user data which we will provide a value for. For the IAM instance profile
    we will use the IAM role that will be used by the application, though this won’t come
    into play until later when we have our S3 bucket created. But for now, let’s click the dropdown and select that IAM role. Now, scrolling down to the user data, this is a script that is going to run when the instance boots up. Let’s go ahead and paste
    in our user data script. This script will download
    the source code for the app, start the web server, and
    kick off the application code so it’s ready to start handling requests. While you could have launched the instance, then connected to it via SSH and configured and started
    your application manually, we have decided to use a script to automate this process on launch. Alright, now we can click Launch instance. It can take a few minutes
    for the instance to boot up so let’s wait for that. And now it’s up and running. To access the instance,
    copy the IP address from the details listed below, paste it into a new browser
    tab, and there you go. It’s up and ready to go. Morgan, what do you think? – That looked great. It’s exactly what I would
    expect at this point because there is no data
    being served from a database so I expect to see just
    a homepage with no info. So that’s great. Thank you Seth for
    walking us through that. In upcoming lessons, we
    will discuss the specifics of not only EC2, but
    also networking on AWS. And you’ll understand how everything we just kind of glossed over
    actually works, so stay tuned.
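
The user data script Seth pastes in follows a common pattern: fetch the application code, install its dependencies, and start the web server on boot. Here is a hedged sketch of what such a script might look like, where the download URL, paths, and start command are all placeholders (the course provides the actual script):

```shell
# Write the user-data sketch to a file; on a real launch you would paste
# its contents into the EC2 "User data" field under Advanced details.
cat > /tmp/user-data.sh <<'EOF'
#!/bin/bash
# EC2 user data runs as root on first boot of the instance.
# Placeholder URL: the course supplies the real application bundle.
wget -q https://example.com/employee-directory-app.zip -O /tmp/app.zip
unzip -q /tmp/app.zip -d /opt/employee-directory
cd /opt/employee-directory
pip3 install -r requirements.txt                 # app dependencies
FLASK_APP=application.py flask run --host=0.0.0.0 --port=80 &
EOF
bash -n /tmp/user-data.sh && echo "user-data script parses OK"
```

Automating setup this way beats SSHing in and configuring by hand: every new instance launched with the same user data comes up configured identically.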

Reading: Default Amazon Machine Image (AMI) for Amazon EC2

Reading

Quiz: Week 1 Quiz

What are the four main factors that a solutions architect should consider when they must choose a Region?

True or False: Every action a user takes in AWS is an API call.

Which statement BEST describes the relationship between Regions, Availability Zones and data centers?

Which of the following is a benefit of cloud computing?

A company wants to manage AWS services by using the command line and automating them with scripts. What should the company use to accomplish this goal?

What is a best practice when securing the AWS account root user?

A solutions architect is consulting for a company. When users in the company authenticate to a corporate network, they want to be able to use AWS without needing to sign in again. Which AWS identity should the solutions architect recommend for this use case?

Which of the following can be found in an AWS Identity and Access Management (IAM) policy?

True or False: AWS Identity and Access Management (IAM) policies can restrict the actions of the AWS account root user.

According to the AWS shared responsibility model, which of the following is the responsibility of AWS?

Which of the following is recommended if a company has a single AWS account, and multiple people who work with AWS services in that account?

True or False: According to the AWS shared responsibility model, a customer is responsible for security in the cloud.

Which of the following provides temporary credentials (that expire after a defined period of time) to AWS services?

A user is hosting a solution on Amazon Elastic Compute Cloud (Amazon EC2). Which networking component is needed to create a private network for their AWS resources?

Reading: Mid-Course Survey

