AWS Academy Cloud Architecting covers the fundamentals of building IT infrastructure on AWS. In this second course, we learn how to optimize use of the AWS Cloud by understanding AWS services and how they fit into cloud-based solutions. To do this, we focus more on gaining hands-on, practical experience working in the AWS console.
REQUISITES: To be successful in this course, it is strongly advised that we first accomplish either of the following:
Complete the AWS Academy Cloud Foundations course
Earn the AWS Certified Cloud Practitioner (CCP) certification
ADVISORIES: It is also helpful to have some prior experience with the following topics:
Experience with the Linux OS and command-line tools; for example, COMP 641 Linux Essentials
Familiarity with general networking concepts; for example: CompTIA A+ or Network+ (Desktop Technician program)
A working knowledge of multi-tier architectures; for example, COMP 643 Linux Server Technologies
AWS Academy Cloud Architecting is designed to show us how to optimize our use of the AWS Cloud by understanding AWS services and how they fit into cloud-based solutions. Though architectural solutions can differ depending on the industry, type of application, and size of the business, this course emphasizes best practices for the AWS Cloud that apply to all solutions. It also recommends various design patterns to help us think through the process of architecting optimal IT solutions on AWS. Throughout the course, we will explore a scenario that provides opportunities for us to build a variety of infrastructures through a guided, hands-on approach. To learn more details about the concepts covered, take a look at the COMP 672 AWS Cloud Architecture course outline of record.
The materials in this course are mapped to the AWS Certified Solutions Architect-Associate (SAA-C03) certification exam guide, which is intended for anyone with one or more years of hands-on experience. This certification focuses on the design of cost- and performance-optimized solutions, and earning this certification demonstrates a strong understanding of the AWS Well-Architected Framework.
In this course, we will typically cover one module per week; however, in an effort to keep the course within a twelve-week session, some weeks we will cover multiple AWS modules. Every week, through our live sessions and demos, video lessons, labs, and knowledge checks, we gain repeated practice and increasing depth in the course objectives. Plan to do something every day, even if only for a couple of minutes, and to access the assignments and lessons at least three times a week to allow the concepts and procedures to take root and deepen. As with learning any skill, like playing a musical instrument or learning a new language, the more you practice, the faster your skills will grow.
Attend and actively participate in the weekly Live Session, and communicate if you miss a session
Submit at least one graded assignment each week
Check in to the course within Canvas at least three times a week
Respond within 24 to 48 hours to instructor messages
Be fully present
Check in regularly
Listen to each other
Maintain honesty, integrity, and respect
Ask for help when needed
Own our actions and choices
WEEK ONE - Storage Layer
WEEK TWO - Compute Layer
WEEK THREE - Database Layer
WEEK FOUR - Networking Layer
WEEK FIVE - Security
WEEK SIX - Scaling
WEEK SEVEN - Automation
WEEK EIGHT - Caching
WEEK NINE - Serverless Architectures
WEEK TEN - Data Architectures
WEEK ELEVEN - Disaster Planning
WEEK TWELVE - Capstone & Wrap Up
The goal of this course is to help you prepare for the AWS Certified Solutions Architect - Associate (SAA-C03) certification exam and an entry-level career architecting solutions on the Amazon Web Services platform.
After completing the 17 modules in this course, you should be able to:
Apply AWS architectural principles and best practices to make architectural decisions.
Use appropriate AWS services and features to make infrastructure scalable, reliable, and highly available.
Use AWS managed services to enable greater flexibility and resiliency in an infrastructure.
Increase performance and reduce cost of a cloud infrastructure built on AWS.
Use AWS services to secure user, application, and data access.
Apply best practices from the AWS Well-Architected Framework to improve architectures that use AWS solutions.
Social Responsibility: SDCCE students demonstrate interpersonal skills by learning and working cooperatively in a diverse environment.
Effective Communication: SDCCE students demonstrate effective communication skills.
Critical Thinking: SDCCE students critically process information, make decisions, and solve problems independently or cooperatively.
Personal and Professional Development: SDCCE students pursue short-term and life-long learning goals, mastering necessary skills and using resource management and self-advocacy skills to cope with changing situations in their lives.
Diversity, Equity, Inclusion, Anti-racism and Access: SDCCE students critically and ethically engage with local and global issues using principles of equity, civility, and compassion as they apply their knowledge and skills, exhibiting awareness, appreciation, respect, and advocacy for diverse individuals, groups, and cultures.
Mission Statement: The Business and Information Technology Program (BIT) provides adults open access to transformational career technical education programs. Through skill building, upskilling, and reskilling, BIT provides the San Diego community the opportunity to transition to college and work by providing hands-on, project-based training in current technology, foundational skills, and business practices with real-world simulations. Led by industry-experienced instructors, these programs build student confidence for future employment, promotion, and entrepreneurial opportunities.
IT Department SLOs
Students completing an IT software course will be able to demonstrate the use of the software tools to effectively communicate with others in person, with paper documents or online. (Relates to Institutional SLO #2 above)
IT students work in teams with diverse individuals to apply Information Technology solutions to a problem. (Relates to Institutional SLO #1 above)
IT students use Information Technology and software tools to support decision processes and critical thinking. (Relates to Institutional SLO #3 above)
IT students pursue continued Information Technology education to complete short term goals such as website development, and also continue with long term programs that will keep them current in this rapidly changing field. (Relates to Institutional SLO #4 above)
Earn a "C" grade or higher by earning at least 957 points (C grade).
Complete the Academy Cloud Architecting Course Assessment with at least 70 points / 70%.
Assignments may be submitted after the weekly due date posted on the course schedule without grade penalty.
Assignments may be re-submitted at any time prior to the course end date without grade penalty.
Note that we are skipping modules 1 - 3, as they were covered in detail in the AWS Academy Cloud Foundations course. Although the associated knowledge checks and guided lab are viewable and may be available, all scores associated with those activities are excused (EX) and are excluded from the final course grade calculations.
A = 100 to 95.00% = 1,368 to 1,299 points
B = 94.99 to 85.00% = 1,298 to 1,162 points
C = 84.99 to 70.00% = 1,161 to 957 points
D = 69.99 to 60.00% = 956 to 820 points
F = 59.99 to 0.00% = 819 to 0 points
Course Assessment (Required) - 100 points, 25 questions
This cumulative assessment pulls questions from a large bank. Questions are distributed across Modules 2–16. We may attempt the assessment as many times as needed. However, our latest attempt replaces all previous attempts, and will be used as the final score. Note that detailed response feedback will not be provided.
NOTE: A minimum passing score of 70% is a requirement for course completion.
Knowledge Checks (KC) - 20 points each, 13 total for up to 260 points
Each AWS module includes a ten-question (multiple choice and true/false) formative assessment, completed in AWS Academy. This assignment is autograded and may be attempted multiple times, which allows us to gain greater confidence with the material and improve our score.
NOTE: The AWS Academy grade book uses a 100-point scale. All knowledge check scores will be adjusted manually in the SDCCD course grade book by a factor of 0.2 to align with the 20-point scale.
Guided Labs - 56 points each, eleven total for up to 616 points
These are hands-on labs using the AWS console, available only within AWS Academy, and accessed and submitted using the Vocareum lab environment. The guided labs provide step-by-step instructions to help us gain experience with creating and configuring AWS resources in the different AWS service areas. The skills that we gain in these guided labs prepare us for many of the challenge labs.
Each lab has a default session length that’s longer than the time the lab is expected to take. If we want to extend the session time, we can refresh the remaining session time to the full original amount. All the work we already completed in the session is retained. If or when the session timer reaches 0:00, all the resources and configurations in the AWS account we created are permanently deleted. We can then launch a new session of the lab, but we will need to create all the lab resources again.
NOTE: Assignments marked with an * will be adjusted manually in the SDCCD course grade book by an appropriate factor to align with the 56 point scale.
*Module 5: Introducing Amazon Elastic File System - EFS (15 AWS points x 3.733)
*Module 6: Creating an Amazon RDS Database (20 AWS points x 2.8)
Module 7: Creating a Virtual Private Cloud
Module 8: Creating a VPC Peering Connection
Module 9: Securing Applications by Using Amazon Cognito
Module 9: Encrypting Data at Rest by Using AWS Encryption Options
Module 10: Creating a Highly Available Environment
Module 11: Automating Infrastructure Deployment with AWS CloudFormation
Module 13: Building Decoupled Applications Using Amazon SQS
Module 14: Implementing a Serverless Architecture on AWS
*Module 16: Configuring Hybrid Storage and Migrating Data with AWS Storage Gateway S3 File Gateway (40 AWS points x 1.4)
Challenge Labs - 56 points each, seven total for up to 392 points
The challenge labs, which are woven throughout the course, are based on a realistic case study following the evolving needs of a fictitious café as it grows from a startup to a global enterprise. Through the tasks within the labs, we are challenged to apply the skills we gained from the guided labs and the concepts presented in the lectures and demos to build solutions following best practices. Although the labs provide some detailed guidance for tasks not previously encountered, we will encounter a few tasks where step-by-step instructions are not provided, challenging us to complete them by relying on our prior lab experience, module demos and student guides, and the AWS documentation. In this way, we gain increased familiarity with some of the most common services and tasks we will encounter as cloud architects on the AWS platform.
NOTE: Assignments marked with an * will be adjusted manually in the SDCCD course grade book by an appropriate factor to align with the 56 point scale.
*Module 4: Creating a Static Website for the Café (29 AWS pts x 1.931)
*Module 5: Creating a Dynamic Website for the Café (30 AWS pts x 1.867)
*Module 6: Migrating a Database to Amazon RDS (25 AWS pts x 2.24)
Module 7: Creating a VPC Networking Environment for the Café
Module 10: Creating a Scalable and Highly Available Environment for the Café
Module 11: Automating Infrastructure Deployment
Module 14: Implementing a Serverless Architecture for the Café
Optional Course Assignments - 56 points each, two total for up to 112 points. The following activities are not included in the grade scale and may be completed for extra credit points.
Module 14: (Optional) Breaking a Monolithic Node.js Application into Microservices
Capstone Project: This cumulative project provides an opportunity to apply the architectural design principles that we have learned to a real-world business case. We will acquire the skills necessary to complete this project through the many lessons and assignments. The capstone is delivered through the same lab environment as the other labs, but the environment for the capstone is long-lived. This means that if we start the lab one day, we can continue working on it the next day or any following day. Plan on a total of five hours to complete the project.
Online Live Sessions in Zoom - no points, twelve weekly meetings
Each week during our meeting, we review the prior week's materials, introduce the current week's lesson and activities, and work in small groups. If you miss a session, be sure to notify the instructor.
Completing this course, along with its companion, will provide eligibility for several useful badges, vouchers, and certificates.
Upon successfully completing the two courses in the certificate program, we earn the SDCCD program certificate. We are then able to view our program certificate within our student transcript, available at myportal.sdccd.edu.
The AWS Academy requirement for this course badge is completion of the AWS Academy Course Assessment with a minimum score of 70 points. We then receive an email within 24 hours from Amazon Web Services Training and Certification via Credly to claim our digital badge and downloadable certificate. We can share our badge on our LinkedIn or other social media profile to let peers and potential employers know about our accomplishment. Instructions for receiving the course badge may be found in the AWS Academy on Canvas modules page.
This course helps to prepare for the AWS Certified Solutions Architect-Associate exam.
Completing this badge-eligible course qualifies us for a discount voucher (50%) through the AWS Emerging Talent Community (ETC) which we can use towards certification. The ETC is the place to connect with others from around the globe who have committed to learning AWS cloud skills. Upon completion of the course we receive an invitation to join AWS Educate, where we also learn how to secure a voucher and complete our certification. The badge we earn from this certification can also be posted to our LinkedIn profile and social media.
This course will be delivered using AWS Academy Canvas, Vocareum, and Zoom. AWS Academy provides the training materials, grades, announcements, and Canvas Inbox messages in the Canvas LMS.
AWS Academy Canvas LMS: This is where we will find our e-learning resources, including pre-recorded video lessons and demonstrations, as well as our hands-on labs and knowledge checks (short quizzes). We will access this from a web browser (Chrome or Firefox) on a standard PC or laptop (Windows, Mac, or Linux - no tablets or smartphones). To log in, we will be provided with a unique AWS Academy Canvas student account.
AWS Management Console: We access this console within the AWS Academy on Canvas, through the Vocareum hands-on lab environment. The AWS Management Console is a web application that provides access through a single web interface to the AWS service consoles for managing such AWS resources as Amazon EC2 and VPC, Amazon S3, AWS Lambda, AWS Systems Manager, and more.
All assignments are submitted directly within the assignment page in AWS Academy on Canvas.
Upon completing an assignment, we submit our work for automated grading
Our score will be immediately displayed in the course gradebook
Upon ending a lab session in Vocareum, all resources launched or deployed (either by Vocareum at the start of the lab or by you in the course of the lab) will be terminated and removed.
Assignments may be attempted and submitted multiple times, with the most recent score displayed in the course gradebook
I want to help you achieve your goals in this course - if you are struggling to meet the schedule, contact me as soon as possible so we can discuss this and create a plan that will help you succeed!
The hands-on labs in this course provide a feature that we can use to submit the work we complete. When you choose the Submit link in the lab session to record your lab progress, a script runs to assess your completion of the defined tasks. The script checks for the existence of particular AWS resources (or resource configurations) that you were instructed to create or configure in the AWS account.
Specific checks are customized for each lab. For example, we might be instructed to create an Amazon Elastic Compute Cloud (Amazon EC2) instance that hosts a web server, which should be accessible from the internet. The script might check to see that an EC2 instance was created in both the virtual private cloud (VPC) and subnet that were specified in the instructions. It might also check that the instance is running. The script might further check the settings in the security group that’s associated with the instance to verify that TCP port 80 is open to inbound traffic. Finally, the script might verify that the HTTP endpoint of the web server returns an HTTP status code of 200, which indicates that the webpage responds successfully to requests.
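To make the idea concrete, here is a minimal sketch (not the actual grading script) of the kind of checks described above, assuming Python with the boto3 and requests libraries and configured AWS credentials; the instance Name tag and point values are hypothetical examples.

```python
# A minimal sketch of lab-style checks: instance running, port 80 open, HTTP 200.
# Names and point values here are hypothetical, not the lab's real values.
import boto3
import requests

ec2 = boto3.client("ec2")

def check_web_server(instance_name="Web Server"):
    points = 0
    # 1. Does a running EC2 instance with the expected Name tag exist?
    result = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Name", "Values": [instance_name]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    reservations = result["Reservations"]
    if not reservations:
        return points
    instance = reservations[0]["Instances"][0]
    points += 1
    # 2. Does its security group allow inbound TCP port 80?
    sg_ids = [sg["GroupId"] for sg in instance["SecurityGroups"]]
    for group in ec2.describe_security_groups(GroupIds=sg_ids)["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            if rule.get("FromPort") == 80 and rule.get("ToPort") == 80:
                points += 1
                break
    # 3. Does the web server answer with HTTP 200?
    url = f"http://{instance['PublicDnsName']}"
    if requests.get(url, timeout=10).status_code == 200:
        points += 1
    return points

print(check_web_server())
```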
Each item that the script checks is worth a certain number of points. We must choose Submit while the lab session is still active to receive points for our work. Some labs also include multiple choice questions that we answer while we work on the lab. When these questions exist, the answers are evaluated (and points are awarded) as part of the same submit process.
For each lab, we can submit our work as many times as we want. The score that displays in the gradebook is the score from our latest submission.
Each lab includes a Grade button. If we click this button, the number of points we achieved from our last submission displays. The number of possible points varies by lab. The script also generates a submission report that includes more detailed output. For some labs, the submission report includes information we might find helpful in explaining the assessment.
I value your success and I know your ability to communicate with me is an important ingredient in that recipe.
Contact me Monday through Friday by Canvas Inbox, and I will respond within 24 to 48 Hours.
Meet with me in Zoom before or after the weekly Live Session.
Meet with me in Zoom during Student Virtual Office Hours.
If you are seeking help with a lab, consider scheduling time in Zoom to work on it together!
Canvas Inbox: It is important to stay in contact, and this is one of the best ways to do so. I will respond to your message within 48 hours (but usually sooner), Monday – Friday before 5:30 PM. You can either check your messages in the CANVAS system or set your notifications to your preferred method of contact. If you send me a message over the weekend or during a holiday, expect a response by Monday or Tuesday afternoon.
Canvas Announcements: You will receive one each week on Sunday when the weekly module opens. These appear at the top of the class homepage when you log in and will be sent to you directly through your preferred method of notification from CANVAS. Check them regularly, as they contain important information about upcoming assignments or class concerns.
If I do not hear from you and your course participation drops, I will reach out through Canvas Inbox to make sure everything is alright. It is important that you respond as soon as you receive the message. Remaining in communication with me (and your classmates) is one of the best ways to ensure success in the course.
Help with Lab Assignments: If you are seeking help with an assignment, include the assignment name and number, the specific step number, and any error messages and relevant information, including the expected outcome. The more accurate and specific, the better. Sometimes a screen shot or two can explain things that words cannot, especially when properly annotated. You might also consider dropping by the weekly office hours in Zoom or during the Live Session, or we can schedule a one-to-one Zoom session.
I want you to succeed in this course, and I know that you can. I have found that regular weekly participation is one of the most effective ways to learn and grow your cloud skills. To help make that happen, this course is offered online and synchronous, which means that we will have regular weekly online meetings and weekly assignments.
Regular participation means checking into the course at minimum three times a week:
Completing at least one assignment each week
Attending Weekly Online Live Sessions
Viewing module videos and demos
Responding to messages from the instructor within 48 hours, or sooner if urgent
Note: If I do not hear from you and you do not participate in the course for over a week, I will send you a Canvas message. If I do not hear back from you within 24 to 48 hours, and you still have not accessed the course, I may assume you have dropped, and will remove your name from the course roster.
In general, do your best to stay current with the weekly material. If you cannot participate regularly or know that you will have to miss a week in Canvas for an unavoidable circumstance, let me know right away. Stay in contact and respond to any messages within 48 hours.
Each weekly Online Live Session is an opportunity for you to interact directly with others in the course. I urge you to make every effort to attend and participate. The meetings are held in Zoom and provide us with time to both review the prior week’s material and highlight important points about the current week’s module. You also have an opportunity to meet with fellow classmates to discuss thought-provoking scenarios and exchange ideas in small groups. Most students enjoy the opportunity to share ideas and learn from each other. The registration link is available on the course home page.
The Student Virtual Office Hours provide time for one-to-one assistance with labs and concepts, as well as again sharing ideas with classmates or going deeper with a topic. Link is available on the course home page.
Student Services: If you need help with a personal problem or advice about your studies, you can make an appointment with a counselor. For example, a counselor can help you make a plan to reach your goals: improving your English, getting your GED, enrolling in a job training class, or attending college. If you need help finding a job, you can contact the Career Development Services Counselor.
Course Counselor: Joyce Almario-Greno, jalmario@sdccd.edu
Job Developer: Jennifer Kennedy, 619-800-3093, jkennedy@sdccd.edu
Contact Career Services
If you have a disability or think you might have a disability, you can contact the counselor in the Disability Support Programs and Services (DSPS) at your campus. DSPS can provide services and special equipment that will make it easier for you to study in our classes. An example of special equipment is a machine that enlarges the print for people who have a vision disability. Since it takes time to provide services, we recommend that you contact the counselor at least two weeks in advance. DSPS services are confidential and voluntary.
For assistance with your SDCCD student password or student records: Use the secure mySDCCD Support Desk. Complete the top portion, and at the bottom of the web page, select from the Help Topic "I forgot my password". You will then be required to submit a digital copy of your government issued ID for proof of identity.
To Speak with Live Staff: Sign up for our Virtual Student Support Center
For all other matters: email the campus at sdcenorthcity@sdccd.edu or sdcemesa@sdccd.edu. All of the staff are waiting to help students.
PARTICIPATION REQUIREMENTS
To maintain active status in the course, regular attendance is expected:
Submit at least one AWS assignment each week
Regularly attend our Live Sessions
Respond to messages within 48 hours
Be proactive and contact the instructor if you are not able to meet these expectations
Plan to check into the course at minimum 3 times a week. Any student frequently absent from the course may, at the discretion of the instructor, be dropped from the course. Those students receiving Veteran’s Benefits or CalWORKS must comply with the attendance requirements specific to these programs.
BP 5500 - Student Rights, Responsibilities, Campus Safety & Administrative Due Process - This policy enumerates the rights and responsibilities of all District students. It also outlines the District’s commitment to a safe learning environment for all students.
Students should actively participate in course activities.
Our college has rules about academic dishonesty:
Students are not permitted to cheat on course assignments or tests.
Students are not permitted to use false information.
Students may not copy the language or ideas of another person and use them as their own ideas.
An instructor will take the following steps if they think a student has been dishonest in completing a course assignment or test:
Discuss the situation with the student. Make sure that the student understands why the action is dishonest.
If the student did not understand that the action was dishonest, the instructor can give the student a warning.
If the student knew that the action was dishonest, the instructor can give the student a failing grade.
Note that live sessions fall on the day of the week and at the times provided to you before the term starts and proceed in a weekly manner. All assignments for a particular live session are due at 11:59 PM PST on the last day of the week for that module. Live sessions will not be held on SDCCE holidays. If a live session for this course falls on an SDCCE holiday, the live session will be rescheduled, and your instructor will inform you as to when the Live Session will be rescheduled or how the content will be covered.
We begin with a brief introduction to the course (module 1), including the topics, objectives, and assignments, followed by a short review of foundational cloud concepts (module 2) and securing access using AWS Identity and Access Management (IAM) service (module 3), and then focus our attention on our main topic (module 4), using Amazon Simple Storage Service (Amazon S3).
Note that we are skipping modules 1 - 3, as they were covered in detail in the AWS Academy Cloud Foundations course. The associated knowledge checks and lab assignment are available, but are excused and any scores will not be included in our final scores and grade.
The overarching theme of AWS Module 4, Adding a Storage Layer with Amazon S3, is selecting the right storage class to suit the business need while optimizing cost and securing objects and S3 buckets from unwanted access and accidental data loss. We review S3 concepts from Cloud Foundations, then look at common use cases, discuss moving data to S3, lifecycle policies, versioning, the S3 pricing model, and key considerations for choosing Regions. The module includes several demos and a challenge lab, as well as a knowledge check.
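As a preview of two of those ideas, here is a minimal boto3 sketch (not a lab requirement) that enables versioning on a bucket and adds a lifecycle rule to retire older object versions; the bucket name and rule values are hypothetical.

```python
# A minimal sketch: turn on S3 versioning and add a lifecycle rule for old versions.
# The bucket name and the transition/expiration values are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-cafe-website-bucket"  # hypothetical name

# Versioning lets us recover overwritten or deleted objects.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: move noncurrent versions to Glacier after 30 days,
# then expire them after 365 days to control storage cost.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retire-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)
```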
Minimum Content Time: 6.15 hours
Live Session (3 hr)
Student guide (83 pages)
Video lessons (1 hr 25 min)
Demo: Amazon S3 Transfer Acceleration (5 min)
Demo: Managing Lifecycles in Amazon S3 (10 min)
Demo: Amazon S3 Versioning (5 min)
Challenge (Café) lab: Creating a Static Website for the Café (1 hr)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO4.1 Define Amazon Simple Storage Service (Amazon S3) and how it works.
MO4.2 Recognize the problems that Amazon S3 can solve.
MO4.3 Describe how to move data to and from Amazon S3.
MO4.4 Manage the storage of content efficiently by using Amazon S3.
MO4.5 Recommend the appropriate use of Amazon S3 based on requirements.
MO4.6 Configure a static website on Amazon S3.
MO4.7 Use the AWS Well-Architected Framework principles when designing a storage layer with Amazon S3.
The overarching theme of Module 5, Adding a Compute Layer Using Amazon EC2, is selecting the compute solution that best suits the business need while optimizing cost and efficiency. This includes choosing the most appropriate Amazon Machine Image (AMI), instance type, storage, and pricing options. We also discuss the benefits of user data as a configuration option.
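As an illustration of the user data option, here is a minimal boto3 sketch of launching an instance that installs a web server at first boot; the AMI ID, key pair, and security group are hypothetical placeholders, not the lab's values.

```python
# A minimal sketch of launching an EC2 instance with user data.
# All IDs and names below are hypothetical.
import boto3

ec2 = boto3.client("ec2")

# Shell commands that run once at first boot to install a web server.
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
echo '<h1>Hello from the cafe</h1>' > /var/www/html/index.html
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # hypothetical AMI ID
    InstanceType="t3.micro",                     # small, low-cost instance type
    MinCount=1,
    MaxCount=1,
    KeyName="example-key",                       # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # hypothetical security group
    UserData=user_data,
    TagSpecifications=[
        {"ResourceType": "instance",
         "Tags": [{"Key": "Name", "Value": "cafe-web-server"}]}
    ],
)
print(response["Instances"][0]["InstanceId"])
```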
Minimum Content Time: 6.75 hours
Live Session (3 hr)
Activity: Choosing Instance Types
Student guide (103 pages)
Video Lessons (1 hr 45 min)
Video Demo: Configuring an EC2 Instance with User Data (10 min)
Video Demo: Reviewing the Spot Instance History Page (5 min)
Guided lab: Introducing Amazon EFS (30 min)
Challenge lab: Creating a Dynamic Website for the Café (1 hr)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO5.1 Identify how to use Amazon Elastic Compute Cloud (Amazon EC2) in an architecture.
MO5.2 Explain the value of using Amazon Machine Images (AMIs) to accelerate the creation and repeatability of infrastructure.
MO5.3 Recommend EC2 instance types based on requirements.
MO5.4 Recommend storage solutions for Amazon EC2.
MO5.5 Recognize how to configure Amazon EC2 instances with user data.
MO5.6 Describe Amazon EC2 pricing options and make recommendations based on cost.
MO5.7 Launch an Amazon EC2 instance.
MO5.8 Use the AWS Well-Architected Framework principles when designing a compute layer with Amazon EC2.
The overarching theme of AWS Module 6, Adding a Database Layer, is that a solution architect evaluates many considerations when adding a database layer. We review these considerations, including capacity planning, database types, and options. Solution architects use the AWS Well-Architected Framework to ensure that they make the proper solution selection. We discuss Amazon Relational Database Service (Amazon RDS) and Amazon DynamoDB in detail and provide an overview of purpose-built databases. We also emphasize considerations related to migrating data into databases. Our labs provide hands-on experience with launching and configuring web application access to an RDS instance, and with database migration.
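For orientation, here is a minimal boto3 sketch of creating a small RDS instance similar in spirit to the guided lab; all identifiers and credentials are hypothetical, and a real deployment would keep the password in a secrets store.

```python
# A minimal sketch of creating a small Amazon RDS (MariaDB) instance.
# Identifiers, security group, and password are hypothetical placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="cafe-db",        # hypothetical identifier
    DBName="cafe_db",
    Engine="mariadb",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                   # size in GiB
    MasterUsername="admin",
    MasterUserPassword="example-password", # hypothetical; use Secrets Manager in practice
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    MultiAZ=False,
)

# Wait until the instance is available, then read its endpoint for the application.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="cafe-db")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="cafe-db")[
    "DBInstances"][0]["Endpoint"]["Address"]
print(endpoint)
```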
Minimum Content Time: 5.25 hours
Live Session (3 hr)
Student Guide (96 pages)
Video lessons (1 hr 50 min)
Video Demo: Amazon RDS Automated Backups and Read Replicas (15 min)
Guided Lab: Creating an Amazon RDS Database (20 min)
Challenge Lab: Migrating a Database to Amazon RDS (1 hr, 20 min)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO6.1 Compare database types and services offered by Amazon Web Services (AWS).
MO6.2 Explain when to use Amazon Relational Database Service (Amazon RDS).
MO6.3 Describe the advanced features of Amazon RDS.
MO6.4 Explain when to use Amazon DynamoDB.
MO6.5 Identify AWS purpose-built database services.
MO6.6 Describe how to migrate data into AWS databases.
MO6.7 Design and deploy a database server.
MO6.8 Use the AWS Well-Architected Framework principles when designing a database layer.
This week we cover two modules that together show how to create and connect networks to support our solutions hosted on AWS.
The overarching theme of AWS Module 7, Creating a Networking Environment, is building a networking environment that is secure and optimized to meet the business needs at the lowest possible cost. We begin with a quick review of basic networking concepts, while mainly focusing on using Amazon Virtual Private Cloud (Amazon VPC) features and related services to configure a network environment that suits common use cases. We discuss best practices, patterns, and service limits when designing a VPC. Next, we learn how to connect subnets to the internet using internet and NAT gateways. Then we explore the methods for securing our AWS networking environments using security groups and network access control lists (NACLs). In our guided lab, we create and configure a small Amazon VPC with public and private subnets, while in our challenge lab we experiment with some of the features available to us to provide a more secure VPC networking environment for the Café.
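To connect these pieces, here is a minimal boto3 sketch of a VPC with one public subnet, an internet gateway, and a default route; the CIDR ranges are illustrative, not the lab's required values.

```python
# A minimal sketch of a VPC, a public subnet, an internet gateway, and routing.
# CIDR ranges here are illustrative.
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

# An internet gateway attached to the VPC gives the subnet a path to the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table with a default route to the gateway is what makes the subnet "public".
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(
    RouteTableId=rtb_id,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```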
The major theme of AWS Module 8, Connecting Networks, is discovering the various AWS network services that connect VPCs to each other and on-premises networks to VPCs. We distinguish pricing differences between AWS network services, learn which services to use for small and large numbers of networks, and note that some AWS services, like AWS Direct Connect, have lead times of days or weeks to install. Our lab provides hands-on experience configuring and testing a VPC peering connection.
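Here is a minimal boto3 sketch of the peering workflow, assuming two hypothetical VPC IDs and a hypothetical route table; the guided lab walks through the same steps in the console.

```python
# A minimal sketch of creating and accepting a VPC peering connection.
# VPC IDs, route table ID, and CIDR are hypothetical.
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from one VPC to another.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",       # hypothetical requester VPC
    PeerVpcId="vpc-bbbb2222",   # hypothetical accepter VPC
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the peer VPC must accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Traffic flows only after each VPC's route table points at the peering connection.
ec2.create_route(
    RouteTableId="rtb-cccc3333",          # hypothetical route table in the requester VPC
    DestinationCidrBlock="10.1.0.0/16",   # CIDR of the peer VPC
    VpcPeeringConnectionId=pcx_id,
)
```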
Minimum Content Time: 8.5 hours
Live Session (3 hr)
Module 7 Activity: Choose the Right Type of Subnet
Module 8 Activity: Configure AWS Transit Gateway Routes
Module 7 Creating a Networking Environment
Video lessons (1 hr 20 min)
Student Guide (73 pages)
Video Demo: Creating an Amazon VPC in the AWS Management Console (15 min)
Guided Lab: Creating a Virtual Private Cloud (30 min)
Challenge Lab: Creating a VPC Networking Environment for the Café (1 hr 30 min)
Knowledge Check (10 min)
Module 8 Connecting Networks
Video lessons (1 hr 15 min)
Student Guide (66 pages)
Guided lab: Creating a VPC Peering Connection (20 min)
Knowledge Check (10 min)
At the end of both modules, you should be able to:
MO7.1 Explain the role of a virtual private cloud (VPC) in Amazon Web Services (AWS) Cloud networking.
MO7.2 Identify the components in a VPC that can connect an AWS networking environment to the internet.
MO7.3 Isolate and secure resources within your AWS networking environment.
MO7.4 Create and monitor a VPC with subnets, an internet gateway, route tables, and a security group.
MO7.5 Use the AWS Well-Architected Framework principles when creating and planning a network environment.
MO8.1 Describe how to connect an on-premises network to the AWS Cloud.
MO8.2 Describe how to connect multiple VPCs in the AWS Cloud.
MO8.3 Connect VPCs in the AWS Cloud by using VPC peering.
MO8.4 Describe how to scale VPCs in the AWS Cloud.
The overarching theme of AWS Module 9, Securing User, Application, and Data Access, is applying the “secure all layers” design principle to the user, application, and data layers of a cloud architecture. This module extends the foundational security principles that were introduced in module 3, including the AWS Identity and Access Management (IAM) service. The focus of user access in this module includes access by groups of users across a larger organization (AWS Organizations), access across multiple AWS accounts (IAM roles), and managing externally authenticated users (federation). This module also discusses securing access to cloud applications and encrypting data at rest using AWS Key Management Service (AWS KMS). In our hands-on labs, we gain practical experience implementing web application user authentication with Amazon Cognito and employing encryption for data at rest using AWS KMS.
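As a small illustration of data-at-rest encryption, here is a minimal boto3 sketch of encrypting and decrypting a value with an AWS KMS key; the key alias and data are hypothetical.

```python
# A minimal sketch of encrypting and decrypting data with AWS KMS.
# The key alias and plaintext are hypothetical.
import boto3

kms = boto3.client("kms")

# Encrypt a small piece of data under a customer managed key.
ciphertext = kms.encrypt(
    KeyId="alias/example-cafe-key",        # hypothetical key alias
    Plaintext=b"example secret value",
)["CiphertextBlob"]

# Only principals with kms:Decrypt permission on that key can recover the plaintext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)
```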
Minimum Content Time: 6.75 hours
Live Session (3 hr)
Student Guide (97 pages)
Video lessons (2 hr 5 min)
Guided lab: Securing Applications by Using Amazon Cognito (30 min)
Guided lab: Encrypting Data at Rest by Using AWS Encryption Options (1 hr)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO9.1 Use AWS IAM users, groups, and roles to manage permissions.
MO9.2 Implement user federation within an architecture to increase security.
MO9.3 Describe how to manage multiple AWS accounts.
MO9.4 Recognize how AWS Organizations service control policies (SCPs) increase security within an architecture.
MO9.5 Encrypt data at rest by using AWS KMS.
MO9.6 Identify appropriate AWS security services based on a given use case.
The overarching theme of AWS Module 10, Implementing Monitoring, Elasticity, and High Availability, is designing highly available workloads by building in automated, dynamic compute and database scaling. In the event of a hardware or software failure, the architectural design should include a failover strategy to healthy resources. Resources constrained to a single location should also implement automated recovery for failure scenarios. AWS services covered in our videos and demos include CloudWatch, Elastic Load Balancing, EventBridge, EC2 Auto Scaling, and Route 53. The demos this week cover implementing scalability and using Route 53.
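To tie a couple of those services together, here is a minimal boto3 sketch of a target tracking scaling policy and a CloudWatch alarm; the Auto Scaling group name and SNS topic ARN are hypothetical.

```python
# A minimal sketch: a target tracking scaling policy plus a CloudWatch alarm.
# The group name and SNS topic ARN are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Keep average CPU across the group near 50 percent by adding or removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="cafe-web-asg",        # hypothetical group
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

# Alarm when the group's CPU stays above 80 percent for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="cafe-web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "cafe-web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:example-alerts"],  # hypothetical
)
```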
Minimum Content Time: 7.75 hours
Live Session (3 hr)
Student Guide (90 pages)
Video lessons (1 hr 20 min)
Demo: Creating Scaling Policies for Amazon EC2 Auto Scaling (15 min)
Demo: Creating a Highly Available Application (15 min)
Demo: Amazon Route 53: Simple Routing (10 min)
Demo: Amazon Route 53: Failover Routing (15 min)
Demo: Amazon Route 53: Geolocation Routing (10 min)
Guided Lab - Creating a Highly Available Environment (40 min)
Challenge (Café) lab: Creating a Scalable and Highly Available Environment for the Café (1 hr 30 min)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO10.1 Examine how reactive architectures use Amazon CloudWatch and Amazon EventBridge to monitor metrics and facilitate notification events.
MO10.2 Use Amazon EC2 Auto Scaling within an architecture to promote elasticity and create a highly available environment.
MO10.3 Determine how to scale your database resources.
MO10.4 Identify a load balancing strategy to create a highly available environment.
MO10.5 Use Amazon Route 53 for Domain Name System (DNS) failover.
MO10.6 Use the AWS Well-Architected Framework principles when designing highly available systems.
Module 11, Automating Your Architecture, introduces the importance of automation, the concept of infrastructure as code (IaC), and the AWS CloudFormation service. IaC allows us to provision and support our computing infrastructure using code instead of manual processes and settings. We see how AWS CloudFormation can be used to create automated and repeatable deployments, and the architectural benefits of using automation. A brief discussion explains how manual approaches to creating and maintaining development and production environments do not scale, provide version control, or offer consistent data management. Next, we learn how AWS CloudFormation provides an IaC solution that can be used to achieve reusability, consistency, version control, and rapid deployment of simple or complex development and production environments on AWS. We are also introduced to other AWS services that help with the creation, deployment, and maintenance of our infrastructure programmatically, including AWS Quick Starts, which provide us with CloudFormation templates for full solutions. We wrap up by discussing some of the challenges with writing code and how we can use Amazon CodeWhisperer to alleviate those challenges. Our recorded demonstrations highlight different aspects of CloudFormation, including analysis of the templates that define the stack of associated resources that are deployed as part of the solution.
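As a taste of IaC, here is a minimal boto3 sketch that deploys a tiny CloudFormation template (a single versioned S3 bucket) and then lists the stack's resources; the stack name is hypothetical, and the lab templates are considerably richer.

```python
# A minimal sketch of the IaC idea: deploy a small CloudFormation template
# repeatedly and consistently. The stack name is hypothetical.
import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

cfn = boto3.client("cloudformation")

cfn.create_stack(StackName="example-cafe-stack", TemplateBody=template)

# Wait for the deployment to finish, then list what the stack created.
cfn.get_waiter("stack_create_complete").wait(StackName="example-cafe-stack")
for resource in cfn.describe_stack_resources(StackName="example-cafe-stack")["StackResources"]:
    print(resource["LogicalResourceId"], resource["ResourceType"])
```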
Minimum Content Time: 7.0 hours
Live Session (3 hr)
Student Guide (69 pages)
Video lessons (1 hr 10 min)
Video Demo: Analyzing an AWS CloudFormation Template (10 min)
Video Demo: AWS CloudFormation Resources (10 min)
Video Demo: Reviewing an AWS CloudFormation Template (15 min)
Video Demo: Using the AWS CloudFormation Console (10 min)
Guided lab: Automating Infrastructure with AWS CloudFormation (20 min)
Challenge (Café) lab: Automating Infrastructure Deployment (1 hr 30 min)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO11.1 Recognize when to use architecture automation and why.
MO11.2 Identify how to use infrastructure as code (IaC) as a strategy for provisioning and managing cloud resources.
MO11.3 Identify how to model, create, and manage a collection of AWS resources by using AWS CloudFormation.
MO11.4 Identify how to use AWS Quick Start CloudFormation templates to set up an architecture.
MO11.5 Identify uses of Amazon CodeWhisperer.
MO11.6 Use the AWS Well-Architected Framework principles when designing automation strategies.
This week we cover two modules that together show how to improve performance and increase resilience of our applications hosted on AWS.
The overarching theme of AWS Module 12, Caching Content, is that caching provides performance and cost optimization benefits. An architect must select the best caching approach for each use case based on where the content is stored, how it is used, and how often it changes. This module focuses on considerations for edge caching with CloudFront and database caching with ElastiCache. It’s important to emphasize that caching increases data retrieval performance by reducing the need to access the underlying, slower storage layer. ElastiCache acts as a cache that supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data.
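Here is a minimal sketch of the cache-aside pattern the module describes, assuming a Redis-compatible ElastiCache endpoint and the redis Python client; the endpoint, key names, and the stand-in database function are hypothetical.

```python
# A minimal sketch of cache-aside: read the cache first, fall back to the database,
# then populate the cache. Endpoint, keys, and the database stub are hypothetical.
import json
import redis

cache = redis.Redis(
    host="example-cache.abc123.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
    decode_responses=True,
)

def get_product_from_db(product_id: str) -> dict:
    # Stand-in for a real database query (for example, against Amazon RDS).
    return {"id": product_id, "name": "espresso", "price": 3.50}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the database
    product = get_product_from_db(product_id)     # cache miss: read the database
    cache.setex(key, 300, json.dumps(product))    # cache the result for 5 minutes
    return product
```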
AWS Module 13 Building Decoupled Architectures focuses on the methods and advantages of decoupling architectures for increased resilience and scalability. This module discusses the AWS services that implement both synchronous and asynchronous loose coupling at the infrastructure and application level respectively, including Elastic Load Balancing (ELB), Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon MQ. In our lab we create and configure an SQS queue and use it to decouple the different stages of an application.
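Here is a minimal boto3 sketch of that producer/consumer decoupling with Amazon SQS; the queue name and message body are hypothetical.

```python
# A minimal sketch of decoupling two application stages with an SQS queue.
# Queue name and message contents are hypothetical.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="example-orders-queue")["QueueUrl"]

# Producer: the web tier records an order and returns immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42, "item": "latte"}')

# Consumer: a separate worker polls the queue at its own pace.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,   # long polling reduces empty responses
).get("Messages", [])

for message in messages:
    print("processing", message["Body"])
    # Delete only after successful processing so failed work is retried.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```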
Minimum Content Time: 6.5 hours
Live Session (3 hr)
Module 12 Caching Content
Student Guide (63 pages)
Video lessons (1 hr 5 min)
Knowledge Check (10 min)
Module 13 Building Decoupled Architectures
Student Guide (59 pages)
Video lessons (35 min)
Guided lab: Building Decoupled Applications by Using Amazon SQS (1 hr)
Knowledge Check (10 min)
At the end of both modules, you should be able to:
MO12.1 Identify how caching content can improve application performance and reduce latency.
MO12.2 Identify how to use Amazon CloudFront to deliver content by using edge locations.
MO12.3 Create architectures that use Amazon CloudFront to cache content.
MO12.4 Describe how to use Amazon ElastiCache for database caching.
MO12.5 Use the AWS Well-Architected Framework principles when designing caching strategies.
MO13.1 Differentiate between tightly and loosely coupled architectures.
MO13.2 Identify how Amazon SQS works and when to use it.
MO13.3 Identify how Amazon SNS works and when to use it.
MO13.4 Describe Amazon MQ.
MO13.5 Decouple workloads by using Amazon SQS.
AWS Module 14, Building Serverless Architectures and Microservices, introduces us to building microservice applications using AWS container services and serverless architectures. Three key serverless services are covered: AWS Lambda, Amazon API Gateway, and AWS Step Functions. We begin by defining microservices and their key characteristics. We then learn about containers, container terminology, Amazon ECS, and AWS Fargate, a serverless compute engine. A focus is placed on breaking a monolithic application into microservices. Next, the concept of serverless is introduced, including the tenets of serverless architectures that make them valuable for building modern applications. AWS service offerings are discussed, with a focus on AWS Lambda. Amazon API Gateway, a fully managed service that enables us to create, publish, maintain, monitor, and secure APIs, is introduced. Architecture examples then show how API Gateway is deeply integrated with AWS Lambda. We wrap up by introducing AWS Step Functions, a service to help coordinate microservices.
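As a small example of the Lambda-behind-API-Gateway pattern, here is a minimal Python handler sketch; the route, parameters, and response are hypothetical.

```python
# A minimal sketch of a Python Lambda handler behind Amazon API Gateway
# (proxy integration). The parameter names and response body are hypothetical.
import json

def lambda_handler(event, context):
    # API Gateway passes query string parameters in the event for proxy integrations.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "guest")

    # Return an HTTP-style response that API Gateway relays to the caller.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Welcome to the cafe, {name}!"}),
    }
```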
Minimum Content Time: 6.25 (8.25) hrs
Live Session (3 hr)
Activity: Decomposing a Monolithic Application with Amazon API Gateway
Student Guide (102 pages)
Video lessons (1 hr 40 min)
Video Demo: Using AWS Lambda with Amazon S3 (10 min)
Video Demo: Running a Container (15 min)
Guided lab: Implementing a Serverless Architecture on AWS (40 min)
Challenge (Café) lab: Implementing a Serverless Architecture for the Café (1 hr 30 min)
(Optional) Guided Lab 1: Breaking a Monolithic Node.js Application into Microservices (2 hr)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO14.1 Define serverless architectures.
MO14.2 Identify the characteristics of microservices.
MO14.3 Architect a serverless architecture with AWS Lambda.
MO14.4 Define how containers are used in AWS.
MO14.5 Describe the types of workflows that AWS Step Functions supports.
MO14.6 Describe a common architecture for Amazon API Gateway.
MO14.7 Use the AWS Well-Architected Framework principles when building serverless architectures.
AWS Module 15 Data Engineering Patterns introduces data architectures and explains how the characteristics of data and the business need drive architecture decisions for data pipelines. The focus is on comparing common patterns and applying best practices from the AWS Well-Architected Framework Data Analytics Lens to choose components suited to each use case. The module is designed to provide enough information about data engineering processes to provide context for architectural considerations without going too deep into the specialty of data analytics.
Minimum Content Time: 5.25 hrs
Live Session (3 hr)
Activity: Choosing Data Storage for a Bank Application
Activity: Data Pipeline Architecture
Student Guide (116 pages)
Video lessons (2 hr 5 min)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO15.1 Use the AWS Well-Architected Framework to generalize the type of architecture that is required to suit common use cases for data ingestion (batch and stream).
MO15.2 Select a data ingestion pattern appropriate to characteristics of the data (velocity, volume, and variety).
MO15.3 Select the appropriate AWS services to ingest and store data for a given use case.
MO15.4 Select the appropriate AWS services to optimize data processing and transformation requirements for a given use case.
MO15.5 Identify when to use different types of AWS data analytics and visualization services based on a given use case.
AWS Module 16 Planning for Disaster focuses on strategies for disaster and recovery planning to create an architecture that supports an organization's system of prevention and recovery from potential threats. A key takeaway is understanding that an architect must assume services might fail and plan for those failures. Another key takeaway is knowing that the choices the architect makes must balance business needs and costs when deciding how quickly recovery must occur.
The module elaborates on disaster recovery patterns with related architecture diagrams and implementation best practices. It covers factors that influence disaster planning strategies and how a combination of services across storage, compute, networking, database, and deployment orchestration support disaster recovery.
In our hands-on lab, we gain practical experience migrating data from a Linux instance to an Amazon S3 bucket with a File Gateway.
Minimum Content Time: 6.0 hr
Live Session (3 hr)
Student Guide (67 pages)
Video lessons (1 hr 25 min)
Guided Lab - Configuring Hybrid Storage and Migrating Data with AWS Storage Gateway S3 File Gateway (1 hr 30 min)
Knowledge Check (10 min)
At the end of this module, you should be able to:
MO16.1 Identify strategies for disaster planning, including recovery point objective (RPO) and recovery time objective (RTO).
MO16.2 Identify disaster planning for Amazon Web Services (AWS) service categories.
MO16.3 Describe common patterns for backup and disaster recovery (DR) and how to implement them.
MO16.4 Use the AWS Well-Architected Framework principles when designing a disaster recovery plan.
AWS Module 17, Bridging to Certification, familiarizes us with resources that can help us prepare for the AWS Certified Solutions Architect - Associate exam. We begin by examining the main content domains, their weightings, and the objectives of the exam. We are then introduced to resources for preparing for the exam, including links to technical content, AWS documentation, and the AWS Cloud Adoption Framework (AWS CAF). We wrap up by presenting resources for labs and tutorials that can help us get more hands-on experience with AWS.
Minimum Content Time: 8.5 hr
Live Session (2 hr 30 min)
AWS Module 17 video lessons (10 min)
Capstone Project (2 hr)
Course Assessment - Completion Requirement (30 min)
At the end of this module, you should be able to:
MO17.1 Identify how to prepare for the AWS Certified Solutions Architect – Associate exam.
MO17.2 Find resources to prepare for the exam.
Duration: 40 minutes
In this lab, previously covered in AWS Academy Cloud Foundations, you explore users and groups and inspect the associated policies in the AWS Identity and Access Management (IAM) service. You also add users to the groups and verify the permissions that are inherited by them.
After completing this lab, you should be able to:
Explore pre-created IAM users and groups.
Inspect IAM policies as they were applied to the pre-created groups.
Follow a real-world scenario, while adding users to groups with specific capabilities enabled.
Locate and use the IAM sign-in URL.
Test the effects of policies on service access.
Task 1: Explore the users and groups, and inspect policies. You explore the users and groups that were created for you in IAM.
Task 2: Add users to groups. You recently hired user-1 into a role where they will provide support for Amazon S3. In this task, you add them to the S3-Support group so that they inherit the necessary permissions through the attached AmazonS3ReadOnlyAccess policy.
Task 3: Sign in and test user permissions. You test the permissions inherited by IAM users in the console.
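For reference, the same group membership change from Task 2 can be made programmatically; here is a minimal boto3 sketch using the lab's user-1 and S3-Support names (the lab itself uses the console).

```python
# A minimal sketch of adding a user to a group and confirming its attached policies.
# The lab performs these steps in the console; this is only an illustration.
import boto3

iam = boto3.client("iam")

# The group carries the permissions; users inherit them through membership.
iam.add_user_to_group(GroupName="S3-Support", UserName="user-1")

# List the managed policies attached to the group to confirm what was inherited.
attached = iam.list_attached_group_policies(GroupName="S3-Support")["AttachedPolicies"]
for policy in attached:
    print(policy["PolicyName"], policy["PolicyArn"])
```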
Duration: 60 minutes
Frank and Martha are a husband-and-wife team who own and operate a small café business that sells desserts and coffee. Their daughter, Sofía, and their other employee, Nikhil—who is a secondary school student—also work at the café. The café has a single location in a large city.
The café currently doesn’t have a marketing strategy. They mostly gain new customers when someone walks by, notices the café, and decides to try it. The café has a reputation for high-quality desserts and coffees, but their reputation is limited to people who have visited, or who have heard about them from their customers.
Sofía suggests to Frank and Martha that they should expand community awareness of what the café has to offer. The café doesn’t have a web presence yet, and it doesn’t currently use any cloud computing services. However, that situation is about to change.
After completing this lab, you should be able to:
Create a bucket in Amazon S3
Upload content to your bucket
Enable access to the bucket objects
Update the website
A business request for the café: Launching a static website (Challenge #1)
Sofía mentions to Nikhil that she would like the café to have a website that will visually showcase the café's offerings. It would also provide customers with business details, such as the location of the store, business hours, and telephone number.
Nikhil is happy that he was asked to create the first website for the café.
For this first challenge, you will take on the role of Nikhil and use Amazon S3 to create a basic website for the café.
Task 1: Extracting the files that you need for this lab
Task 2: Creating an S3 bucket to host your static website
Task 3: Uploading content to your S3 bucket
Task 4: Creating a bucket policy to grant public read access
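Here is a minimal boto3 sketch covering the spirit of Tasks 2 through 4: enabling static website hosting, attaching a public-read bucket policy, and uploading a page. The bucket name is hypothetical, and the lab may require additional settings (such as adjusting Block Public Access) that are omitted here.

```python
# A minimal sketch of S3 static website hosting with a public-read bucket policy.
# The bucket name is hypothetical; account-level Block Public Access settings
# may also need to be relaxed in a real environment.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-cafe-website"   # hypothetical bucket name

# Serve index.html as the site's home page.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Allow anonymous reads of objects in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(public_read_policy))

# Upload the home page with the right content type so browsers render it.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body><h1>Welcome to the cafe</h1></body></html>",
    ContentType="text/html",
)
```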
New business requirement: Protecting website data (Challenge #2)
You show Sofía the new website, and she's very impressed. Good job!
You and Sofía discuss that you will likely need to make many updates to the website as the number of café offerings expands.
Olivia, an AWS Solutions Architect and café regular, advises you to implement a strategy to prevent the accidental overwrite and deletion of website objects.
You already need to make some changes to the website, so you decide that this would be a good time to explore object versioning.
Task 5: Enabling versioning on the S3 bucket
New business requirement: Optimizing costs of S3 object storage (Challenge #3)
Now that you enabled versioning, you realize that the size of the S3 bucket will continue to grow as you upload new objects and versions. To save costs, you decide to implement a strategy to retire some of those older versions.
Task 6: Setting lifecycle policies
New business requirement: Enhancing durability and planning for DR (Challenge #4)
The next time Olivia comes to the café, you tell her about the updates to the website. You describe the measures that you took to protect the website's static files from being accidentally overwritten or deleted. Olivia tells you that cross-Region replication is another feature of Amazon S3 that you can also use to back up and archive critical data.
Task 7: Enabling cross-Region replication
Duration: 20 minutes
This lab introduces you to Amazon Elastic File System (Amazon EFS) by using the AWS Management Console.
Amazon Elastic File System (Amazon EFS) provides serverless, fully elastic file storage so that you can share file data without provisioning or managing storage capacity and performance. Amazon EFS is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files.
Amazon EFS provides a simple, serverless, set-and-forget elastic file system. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your virtual private cloud (VPC), through the Network File System versions 4.0 and 4.1 (NFSv4) protocol.
After completing this lab, you should be able to:
Log in to the AWS Management Console
Create an Amazon EFS file system
Log in to an Amazon Elastic Compute Cloud (Amazon EC2) instance that runs Amazon Linux
Mount your file system to your EC2 instance
Examine and monitor the performance of your file system
Task 1: Creating a security group to access your EFS file system. The security group that you associate with a mount target must allow inbound access for TCP on port 2049 for Network File System (NFS). This is the security group that you will now create, configure, and attach to your EFS mount targets.
Task 2: Creating an EFS file system. EFS file systems can be mounted to multiple EC2 instances that run in different Availability Zones in the same Region. These instances use mount targets that are created in each Availability Zone to mount the file system by using standard NFSv4.1 semantics. You can mount the file system on instances in only one virtual private cloud (VPC) at a time. Both the file system and the VPC must be in the same Region.
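Here is a minimal boto3 sketch of Tasks 1 and 2: a mount-target security group that allows NFS (TCP 2049) and a file system with one mount target. The VPC, subnet, and source security group IDs are hypothetical; the lab itself uses the console.

```python
# A minimal sketch of an NFS security group rule and an EFS file system with a
# mount target. All IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
efs = boto3.client("efs")

# Security group for the mount target, allowing NFS from the instances' group.
sg_id = ec2.create_security_group(
    GroupName="efs-mount-target-sg",
    Description="Allow NFS from EC2 instances",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": "sg-instancegroup0123"}],  # hypothetical source group
    }],
)

# The file system, plus one mount target in a subnet (one per Availability Zone in practice).
fs_id = efs.create_file_system(
    CreationToken="example-efs-token",
    PerformanceMode="generalPurpose",
)["FileSystemId"]
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-0123456789abcdef0",        # hypothetical subnet
    SecurityGroups=[sg_id],
)
```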
Task 3: Connecting to your EC2 instance via SSH. In this task, you will connect to your EC2 instance by using Secure Shell (SSH).
Task 4: Creating a new directory and mounting the EFS file system. Amazon EFS supports the NFSv4.1 and NFSv4.0 protocols when it mounts your file systems on EC2 instances. Though NFSv4.0 is supported, we recommend that you use NFSv4.1. When you mount your EFS file system on your EC2 instance, you must also use an NFS client that supports your chosen NFSv4 protocol. The EC2 instance that was launched as a part of this lab includes an NFSv4.1 client, which is already installed on it.
Task 5: Examining the performance behavior of your new EFS file system. You examine performance by using Flexible IO (fio), a synthetic I/O benchmarking utility for Linux that is used to benchmark and test Linux I/O subsystems. During boot, fio was automatically installed on your EC2 instance. You also monitor performance by using Amazon CloudWatch.
Duration: 60 minutes
After the café launched the first version of their website, customers told the café staff how nice the website looks. However, in addition to the praise, customers often asked whether they could place online orders.
Sofía, Nikhil, Frank, and Martha discussed the situation. They agreed that their business strategy and decisions should focus on delighting their customers and providing them with the best possible café experience.
In this lab, you will deploy an application on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The application enables the café to accept online orders. After testing that the application works as intended in the first AWS Region (the development environment), you will then create an Amazon Machine Image (AMI) from the EC2 instance. You will also deploy a second instance of the same application as the production environment in another AWS Region.
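Here is a minimal boto3 sketch of the AMI portion of that workflow: create an image from the configured instance, copy it to a second Region, and launch from the copy. The instance ID, Regions, and names are hypothetical.

```python
# A minimal sketch: capture an instance as an AMI, copy it to another Region,
# and launch the production copy there. IDs, Regions, and names are hypothetical.
import boto3

source_region, target_region = "us-east-1", "us-west-2"
ec2_src = boto3.client("ec2", region_name=source_region)
ec2_dst = boto3.client("ec2", region_name=target_region)

# Capture the configured development instance as an AMI.
image_id = ec2_src.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical dev instance
    Name="cafe-webapp-v1",
)["ImageId"]
ec2_src.get_waiter("image_available").wait(ImageIds=[image_id])

# Copy the AMI into the production Region, then launch from the copy.
copied_id = ec2_dst.copy_image(
    SourceImageId=image_id,
    SourceRegion=source_region,
    Name="cafe-webapp-v1",
)["ImageId"]
ec2_dst.get_waiter("image_available").wait(ImageIds=[copied_id])
ec2_dst.run_instances(ImageId=copied_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)
```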
After completing this lab, you should be able to:
Connect to the AWS Cloud9 IDE on an existing EC2 instance
Analyze the EC2 instance environment and confirm web server accessibility
Install a web application on an EC2 instance that also uses AWS Systems Manager Parameter Store
Test the web application
Create an AMI
Deploy a second copy of the web application to another AWS Region
A business request for the café: Preparing an EC2 instance to host a website (Challenge #1)
The café wants to introduce online ordering for customers, and enable café staff to view submitted orders. Their current website architecture, where the website is hosted on Amazon S3, does not support the new business requirements.
In the first part of this lab, you will take on the role of Sofía. Using the Cloud9 IDE, you will configure an Amazon EC2 instance so that it is ready to host a website for the café. AWS Cloud9 is a service that can run on an EC2 instance. It provides an integrated development environment (IDE) that includes features such as a code editor, debugger, and terminal.
By using the AWS Cloud9 environment, you don't need to download a key pair and connect to the EC2 instance by using PuTTY or similar Secure Shell (SSH) software. By using AWS Cloud9, you also don't need to use command line text-editing tools (like vi or nano) to edit files on the Linux instance.
Task 1: Analyzing the existing EC2 instance
Task 2: Connecting to the Cloud9 IDE on the EC2 instance
Task 3: Analyzing the LAMP stack environment and confirming that the web server is accessible
New business requirement: Installing a dynamic website application on the EC2 instance (Challenge #2)
In the previous challenge, you configured the EC2 instance. You now know that PHP is installed, and that the application environment has a running relational database. Also, the environment has a running web server that can be accessed from the internet. You now have the basic setup for hosting a dynamic website for the café.
In the second part of this lab, you will take on the role of Sofía, and install the café application on the EC2 instance.
Task 4: Installing the café application
Task 5: Testing the web application
New business requirement: Creating development and production websites in different AWS Regions (Challenge #3)
Another business requirement emerges, along with the praise. Martha and Frank would like to have two café websites:
One website that can be used as a development environment to mock up new features and web designs before they are released to customers
A separate website that will host the production environment that customers use
Sofía discussed the new requirement with Mateo when he came into the café one morning for his coffee. He suggested that, ideally, the two environments would exist in different AWS Regions. Such a design would have the added benefit of providing more robust disaster recovery (DR) in the unlikely scenario when an AWS Region becomes temporarily unavailable.
Task 6: Creating an AMI and launching another EC2 instance
Task 7: Verifying the new café instance
Duration: 20 minutes
Traditionally, creating a database can be a complex process that requires either a database administrator or a systems administrator. In the cloud, you can simplify this process by using Amazon Relational Database Service (Amazon RDS).
Amazon RDS is an easy-to-manage relational database service that is optimized for total cost of ownership. It is simple to set up, operate, and scale with demand. Amazon RDS automates undifferentiated database management tasks such as provisioning, configuring, backups, and patching. It enables customers to create a new database in minutes and offers the flexibility to customize databases to meet their needs across 8 engines and 2 deployment options. Customers can optimize performance with features like Multi-AZ with two readable standbys, Optimized Writes and Reads, and AWS Graviton3-based instances, and they can choose from multiple pricing options to manage costs effectively.
After completing this lab, you should be able to:
Launch a database using Amazon RDS
Configure a web application to connect to the database instance
Task 1: Creating an Amazon RDS database In this task, you will create a MySQL database in your virtual private cloud (VPC). MySQL is a popular open-source relational database management system (RDBMS), so there are no software licensing fees.
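The console wizard collects the same settings that an equivalent AWS CLI call would; a hedged sketch with placeholder identifiers and credentials:
aws rds create-db-instance \
  --db-instance-identifier lab-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin --master-user-password 'ChangeMe123!' \
  --db-subnet-group-name lab-db-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --no-publicly-accessible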
Task 2: Configuring web application communication with a database instance This lab automatically deployed an Amazon Elastic Compute Cloud (Amazon EC2) instance with a running web application. You must use the IP address of the instance to connect to the application.
Duration: 80 minutes
The café currently uses a single EC2 instance to host their web server, database, and application code.
Meanwhile, café business has grown. The order history that's stored in the database provides valuable business information that the café staff doesn't want to lose. Martha uses the data for accounting, and Frank looks at it occasionally to plan how many of each dessert type he should bake.
Sofía has additional concerns. The database must be consistently upgraded and patched, and she doesn’t always have time to do these tasks. Also, administering the database is a specialized skill. Training others to do database administration isn’t something that she wants to spend time on. Meanwhile, Sofía is also concerned that the café isn’t doing data backups as often as they should.
Finally, Martha also wants to reduce labor costs that are associated with the technical learning investment that's needed to manage the database.
In this lab, you will migrate data from a database on an Amazon Elastic Compute Cloud (Amazon EC2) instance to Amazon Relational Database Service (Amazon RDS). Specifically, you will migrate a MariaDB database that runs on an EC2 instance to a MariaDB database that runs on Amazon RDS. You will also update the café web application to use the new database to store data for all future orders.
After completing this lab, you should be able to:
Create an RDS database instance.
Export data from MariaDB database by using mysqldump.
Connect a SQL client to an RDS database.
Migrate data from a MariaDB database that runs on an EC2 instance to an RDS database instance.
Configure a web application to use the new RDS database instance for data storage.
A business request: Creating an RDS instance for the café application (Challenge #1)
After a conversation with Olivia—the AWS solutions architect who often comes in for a coffee—Sofía decided that the café needs a database solution that is easier to maintain. In addition, the database should provide essential features such as durability, scalability, and high performance.
In the first part of this lab, you will take on the role of Sofía. You will create an RDS instance that the café can use as the data storage layer for the café website. You will also connect to the EC2 instance and analyze the details of the café web application.
Task 1: Creating an RDS instance
Task 2: Analyzing the existing café application deployment
New business requirement: Exporting data from the old database and establishing a connection to the new database (Challenge #2)
Now that you created a new RDS instance, you can move on to the next step in the café's database migration plan. Next, you will export the data from the database that the café application currently uses. You will also establish a network connection from the EC2 instance (where the application runs) to the new RDS database instance.
In this challenge, you continue as Sofía to complete these tasks.
Task 3: Working with the database on the EC2 instance
Task 4: Working with the RDS database
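The export in Task 3 and the later import follow the usual mysqldump pattern; the database name, credentials, and the RDS endpoint below are placeholders for the values your lab provides.
# On the EC2 instance: dump the existing database to a file.
mysqldump --user=root --password='oldPassword' --databases cafe_db > cafedb-backup.sql
# Load the dump into the new RDS instance by using its endpoint.
mysql --user=admin --password='newPassword' \
  --host=cafe-db.abcdefgh1234.us-west-2.rds.amazonaws.com < cafedb-backup.sql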
New business requirement: Importing data and connecting the application to the new database (Challenge #3)
In the previous challenge, you exported the data from the database that the café application currently uses. You also established a network connection from the EC2 instance to the RDS instance. You can now work on the next business requirement.
In this challenge, you will continue to take on the role of Sofía to import the café data into the RDS database instance. After you complete the import, you will configure the application to use the new database.
Task 5: Importing the data into the RDS database instance
Task 6: Connecting the café application to the new database
Duration: 30 minutes
Traditional networking is difficult. It involves equipment, cabling, complex configurations, and specialist skills. Amazon Virtual Private Cloud (Amazon VPC) hides the complexity, and simplifies the deployment of secure private networks.
Amazon VPC gives you full control over your virtual networking environment, including resource placement, connectivity, and security. You start by setting up your VPC in the AWS Management Console. Next, you add resources to it, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances. Finally, you define how your VPCs communicate with each other across accounts, Availability Zones, or AWS Regions.
This lab shows you how to build your own virtual private cloud (VPC), deploy resources, and create private peering connections between VPCs.
After completing this lab, you should be able to:
Deploy a VPC
Create an internet gateway and attach it to the VPC
Create a public subnet
Create a private subnet
Create an application server to test the VPC
Task 1: Creating a VPC You will begin by using Amazon VPC to create a new virtual private cloud, or VPC. A VPC is a virtual network that is dedicated to your Amazon Web Services (AWS) account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, into the VPC. You can configure the VPC by modifying its IP address range, and create subnets. You can also configure route tables, network gateways, and security settings.
Task 2: Creating subnets A subnet is a subrange of IP addresses in the VPC. AWS resources can be launched into a specified subnet. Use a public subnet for resources that must be connected to the internet, and use a private subnet for resources that must remain isolated from the internet. In this task, you will create a public subnet and a private subnet.
Task 3: Creating an internet gateway In this task, you will create an internet gateway so that internet traffic can access the public subnet. An internet gateway is a horizontally scaled, redundant, and highly available VPC component. It allows communication between the instances in a VPC and the internet. It imposes no availability risks or bandwidth constraints on network traffic.
Task 4: Configuring route tables A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in a VPC must be associated with a route table because the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table.
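Tasks 1 through 4 can also be pictured as a handful of AWS CLI calls; the CIDR ranges and identifiers here are placeholders, and the lab's console steps achieve the same result.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text)
PUB_SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/24 \
  --query Subnet.SubnetId --output text)
IGW_ID=$(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
# A route table with a default route to the internet gateway makes the subnet public.
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$RT_ID" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$PUB_SUBNET_ID"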
Task 5: Creating a security group for the application server A security group acts as a virtual firewall for instances to control inbound and outbound traffic. Security groups operate at the level of the elastic network interface for the instance. Security groups do not operate at the subnet level. Thus, each instance can have its own firewall that controls traffic. If you do not specify a particular security group at launch time, the instance is automatically assigned to the default security group for the VPC. In this task, you will create a security group that allows users to access your application server via HTTP.
Task 6: Launching an application server in the public subnet To test that your VPC is correctly configured, you will now launch an EC2 instance into the public subnet. You will also confirm that you can access the EC2 instance from the internet.
Duration: 90 minutes
Sofía and Nikhil are now confident in their ability to create a two-tier architecture because of their experience migrating the café's data. They successfully moved from a MariaDB database on an Amazon Elastic Compute Cloud (Amazon EC2) instance to an Amazon Relational Database Service (Amazon RDS) database instance. In addition, they also moved their database resources from a public subnet to a private subnet.
When Mateo—a café regular and an AWS systems administrator and engineer—visits the café, Sofía and Nikhil tell him about the database migration. Mateo tells them that they can enhance security by running the café's application server in another private subnet that's separate from the database instance. They could then go through a bastion host (or jump box) to gain administrative access to the application server. The application server must also be able to download needed patches.
Knowing that the cloud makes experimentation easier, Sofía and Nikhil are eager to set up a non-production VPC environment. They can use it to implement the new architecture and test different security layers without accidentally disrupting the café's production environment.
In this lab, you use Amazon Virtual Private Cloud (Amazon VPC) to create a networking environment on AWS and implement security layers to protect your resources.
After completing this lab, you should be able to:
Create a virtual private cloud (VPC) environment that enables you to securely connect to private resources.
Enable your private resources to connect to the internet.
Create an additional layer of security in your VPC to control traffic to and from private resources.
A business request for the café: Creating a VPC network that allows café staff to remotely and securely administer the web application server (Challenge #1)
In this challenge, you will take on the role of one of the café's system administrators. You will create and configure a VPC network so that you can securely connect from a bastion host in a public subnet to an EC2 instance in a private subnet. You will also create a NAT gateway to enable the EC2 instance in your private subnet to access the internet.
Task 1: Creating a public subnet
Task 2: Creating a bastion host
Task 3: Allocating an Elastic IP address for the bastion host
Task 4: Testing the connection to the bastion host
Task 5: Creating a private subnet
Task 6: Creating a NAT gateway
Task 7: Creating an EC2 instance in the private subnet
Task 8: Configuring your SSH client for SSH passthrough
Task 9: Testing the SSH connection from the bastion host
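For Tasks 8 and 9, one common approach is SSH agent forwarding (or a ProxyJump) so that the private key never leaves your local machine; the host addresses below are placeholders.
# On your local machine: load the lab key into the SSH agent.
ssh-add labsuser.pem
# Connect to the bastion host with agent forwarding, then hop to the private instance.
ssh -A ec2-user@<bastion-public-ip>
ssh ec2-user@10.0.2.50
# Or do both hops in one command by treating the bastion as a jump host.
ssh -J ec2-user@<bastion-public-ip> ec2-user@10.0.2.50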
New business requirement: Enhancing the security layer for private resources (Challenge #2)
Sofía and Nikhil are proud of the changes they made to the café's application architecture. They are pleased by the additional security they built, and they are also glad to have a test environment that they can use before they deploy updates to the production instance. They tell Mateo about their new application architecture, and he's impressed! To further improve their application security, Mateo advises them to build an additional layer of security by using custom network access control lists (network ACLs).
In this challenge, you will continue to take on the role of one of the café's system administrators. Now that you have established secure access from the bastion host to the EC2 instance in the private subnet, you must enhance the security layer of the private subnet. To accomplish this task, you will create and configure a custom network ACL.
Task 10: Creating a network ACL
Task 11: Testing your custom network ACL
Duration: 20 minutes
You might want to connect your virtual private clouds (VPCs) when you must transfer data between them. This lab shows you how to create a private VPC peering connection between two VPCs.
After completing this lab, you should be able to:
Create a VPC peering connection
Configure route tables to use the VPC peering connection
Task 1: Creating a VPC peering connection Your task is to create a VPC peering connection between two VPCs. A VPC peering connection is a one-to-one networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they were in the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. Two VPCs are provided as part of this lab: Lab VPC and Shared VPC. Lab VPC has an Inventory application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance in a public subnet. Shared VPC has a database instance that runs in a private subnet.
Task 2: Configuring route tables You will now update the route tables in both VPCs to send traffic from Lab VPC to the peering connection for Shared VPC.
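A hedged CLI sketch of Tasks 1 and 2, with placeholder VPC IDs, route table IDs, and CIDR ranges standing in for the lab's Lab VPC and Shared VPC values:
PCX_ID=$(aws ec2 create-vpc-peering-connection --vpc-id vpc-lab0123 --peer-vpc-id vpc-shared0456 \
  --query VpcPeeringConnection.VpcPeeringConnectionId --output text)
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "$PCX_ID"
# Route each VPC's traffic for the other VPC's CIDR range through the peering connection.
aws ec2 create-route --route-table-id rtb-lab0123 --destination-cidr-block 10.5.0.0/16 \
  --vpc-peering-connection-id "$PCX_ID"
aws ec2 create-route --route-table-id rtb-shared0456 --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id "$PCX_ID"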
Task 3: Testing the VPC peering connection. Now that you configured VPC peering, you will test the VPC peering connection. You will perform the test by configuring the Inventory application to access the database across the peering connection.
Duration: 30 minutes
In this lab, you review the default data encryption and AWS Key Management Service (AWS KMS) encryption used to encrypt data at rest. You review the default encryption of the objects stored in Amazon Simple Storage Service (Amazon S3). You create an AWS KMS key and use it to encrypt objects stored in Amazon Elastic Block Store (Amazon EBS) volumes. You also observe how AWS CloudTrail provides an audit log of AWS KMS key usage and how disabling the key affects data access.
After completing this lab, you should be able to:
Review the default encryption provided by Amazon S3.
Access the encrypted Amazon S3 object.
Create an AWS KMS customer managed key to encrypt and decrypt data at rest.
Create and attach an encrypted data volume on an existing EC2 instance.
Disable and re-enable an AWS KMS key and observe the effects on data access.
Monitor encryption key usage by using CloudTrail event history.
Review key rotation.
Task 1: Reviewing default encryption for objects in an S3 bucket You upload an image file to an S3 bucket and review the default encryption provided by Amazon S3.
Task 2: Creating an AWS KMS key You create a customer managed AWS KMS key. Later in the lab, you use the AWS KMS key that you create to generate, encrypt, and decrypt data keys. The data keys will be shared with Amazon EC2. The data keys are used to encrypt the actual data stored on EBS volumes.
Task 3: Creating and attaching an encrypted data volume on an EC2 instance You create an encrypted EBS volume by using the KMS key that you created in the previous task and attach it to your EC2 instance. When you attach the encrypted volume, the EC2 instance retrieves the data key from AWS KMS and uses it to decrypt the data on the EBS volume. In later tasks, you examine the CloudTrail event history to observe the calls made to the AWS KMS service.
Task 4: Disabling the encryption key and observing the effects You temporarily disable the AWS KMS key that you previously used to encrypt the EBS volume. You then observe the effects that disabling the key has on accessing encrypted data.
Task 5: Analyzing AWS KMS activity by using CloudTrail You access the CloudTrail event history to find events that are related to your encryption operations. The CloudTrail audit log functionality provides an important security feature, and it's a good idea to monitor how AWS KMS keys are used in your account.
Task 6: Reviewing key rotation You review how to enable automatic rotation for the key that you created in this lab. However, you will not see rotation take effect, because the first rotation occurs one year after it is enabled.
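A minimal sketch of Tasks 2 through 4 from the CLI, assuming placeholder values for the Availability Zone, instance ID, and key alias:
KEY_ID=$(aws kms create-key --description "Lab EBS encryption key" \
  --query KeyMetadata.KeyId --output text)
aws kms create-alias --alias-name alias/lab-ebs-key --target-key-id "$KEY_ID"
# Create an encrypted volume with the customer managed key and attach it to the instance.
VOL_ID=$(aws ec2 create-volume --availability-zone us-east-1a --size 8 \
  --encrypted --kms-key-id "$KEY_ID" --query VolumeId --output text)
aws ec2 attach-volume --volume-id "$VOL_ID" --instance-id i-0123456789abcdef0 --device /dev/sdf
# Disabling the key prevents new requests for the volume's data key.
aws kms disable-key --key-id "$KEY_ID"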
Duration: 60 minutes
While building web applications, user authentication and authorization can be challenging. Amazon Cognito makes it convenient for developers to add sign-up, sign-in, and enhanced security functionality.
In this lab, you configure an Amazon Cognito user pool, which you use to manage users and their access to an existing web application. You also create an Amazon Cognito identity pool, which authorizes users when the application makes calls to the Amazon DynamoDB service.
After completing this lab, you should be able to:
Create an Amazon Cognito user pool.
Add users to the user pool.
Update the example application to use the user pool for authentication.
Configure the Amazon Cognito identity pool.
Update the example application to use the identity pool for authorization.
You have the Birds web application, which was built by using a NodeJs server running on an AWS Cloud9 instance and an Amazon Simple Storage Service (Amazon S3) bucket with static website hosting capability. The Birds application tracks students' bird sightings by using the following components:
A home page
An educational page that teaches students about birds
The following three protected pages, which students can access only if they have been authenticated:
A sightings page where students can view past bird sightings
A reporting page where students report new bird sightings
An administrator page where site administrators can perform additional operations
You need to add authentication and authorization to the application for the protected pages.
Task 1: Preparing the lab environment Before you can work on this lab, you download files and run scripts in the AWS Cloud9 integrated development environment (IDE) that was prepared for you.
Task 2: Reviewing the Birds website You explore the Birds web application to understand how it behaves before you enable user authentication.
Task 3: Configuring the Amazon Cognito user pool You create an Amazon Cognito user pool, create users, and update the application to use the user pool.
Task 4: Updating the application to use the user pool for authentication You update the application to provide the information it requires to interact with Amazon Cognito. This information includes the user pool ID, application client ID, and Amazon Cognito domain prefix.
Task 5: Testing the user pool integration with the application You test the updated application. First, you restart the node server so that it uses the updated configuration.
Task 6: Configuring the identity pool The Amazon Cognito identity pool was created for you when you launched the lab environment. In this task, you configure the Amazon Cognito identity pool to work with the Birds application.
Task 7: Updating the application to use the identity pool for authorization As with the user pool, the application needs to be updated so it can interact with the identity pool. In this task, you make the necessary updates to the Birds application.
Task 8: Testing the identity pool integration with the application You test the updated Birds application to ensure that you can access temporary AWS credentials. With these temporary credentials, you are able to access AWS services based on the roles that were defined when you set up the identity pool. Remember that your identity pool is configured to associate authenticated users with an IAM role that allows access to a DynamoDB table.
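The user pool resources that the console steps in Tasks 3 and 4 create can be pictured with a few CLI calls; every name, domain prefix, and password below is a placeholder, and the lab's own instructions remain authoritative.
POOL_ID=$(aws cognito-idp create-user-pool --pool-name birds-user-pool \
  --query UserPool.Id --output text)
CLIENT_ID=$(aws cognito-idp create-user-pool-client --user-pool-id "$POOL_ID" \
  --client-name birds-app-client --query UserPoolClient.ClientId --output text)
aws cognito-idp create-user-pool-domain --user-pool-id "$POOL_ID" --domain birds-demo-12345
aws cognito-idp admin-create-user --user-pool-id "$POOL_ID" --username testuser \
  --temporary-password 'TempPass123!'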
Duration: 40 minutes
Critical business systems should be deployed as highly available applications—that is, applications remain operational even when some components fail. To achieve high availability in Amazon Web Services (AWS), we recommend that you run services across multiple Availability Zones.
Many AWS services are inherently highly available, such as load balancers. Many AWS services can also be configured for high availability, such as deploying Amazon Elastic Compute Cloud (Amazon EC2) instances in multiple Availability Zones.
In this lab, you will start with an application that runs on a single EC2 instance. You will then make the application highly available.
After completing this lab, you should be able to:
Inspect a provided virtual private cloud (VPC)
Create an Application Load Balancer
Create an Auto Scaling group
Test the application for high availability
Task 1: Inspecting your VPC This lab begins with an environment that is already deployed via AWS CloudFormation. It includes: a VPC, public and private subnets in two Availability Zones, an internet gateway (not shown) that is associated with the public subnets, a Network Address Translation (NAT) gateway in one of the public subnets, an Amazon Relational Database Service (Amazon RDS) instance in one of the private subnets. In this task, you will review the configuration of the VPC that was created for this lab.
Task 2: Creating an Application Load Balancer To build a highly available application, it is a best practice to launch resources in multiple Availability Zones. Availability Zones are physically separate data centers (or groups of data centers) in the same Region. If you run your applications across multiple Availability Zones, you can provide greater availability if a data center experiences a failure. Because the application runs on multiple application servers, you will need a way to distribute traffic amongst those servers. You can accomplish this goal by using a load balancer. This load balancer will also perform health checks on instances and only send requests to healthy instances.
Task 3: Creating an Auto Scaling group. Amazon EC2 Auto Scaling is a service designed to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks. It also automatically distributes instances across multiple Availability Zones to make applications highly available. In this task, you will create an Auto Scaling group that deploys EC2 instances across your private subnets, which is a security best practice for application deployment. Instances in a private subnet cannot be accessed from the internet. Instead, users send requests to the load balancer, which forwards the requests to EC2 instances in the private subnets.
Create a launch template and an Auto Scaling group You will first create a launch template, which specifies the information that an Auto Scaling group uses to launch EC2 instances, such as the AMI, the instance type, a key pair, and a security group.
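A rough CLI equivalent of the launch template and Auto Scaling group steps; the AMI, security group, subnets, and target group ARN are placeholders for the lab's values.
aws ec2 create-launch-template --launch-template-name lab-web-template \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.micro","SecurityGroupIds":["sg-0123456789abcdef0"]}'
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name lab-web-asg \
  --launch-template LaunchTemplateName=lab-web-template \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-private1,subnet-private2" \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/lab-tg/0123456789abcdef \
  --health-check-type ELB --health-check-grace-period 300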
Task 4: Updating security groups The application you deployed is a three-tier architecture. You will now configure the security groups to enforce these tiers.
Task 5: Testing the application Your application is now ready for testing. In this task, you will confirm that your web application is running. You will also test that it is highly available.
Task 6: Testing high availability Your application was configured to be highly available. You can prove the application's high availability by terminating one of the EC2 instances.
Optional task 1: Making the database highly available In this optional task, you will make the database highly available by configuring it to run across multiple Availability Zones (that is, in a Multi-AZ deployment).
Optional task 2: Configuring a highly available NAT gateway The application servers run in a private subnet. If the servers must access the internet (for example, to download data), the requests must be redirected through a Network Address Translation (NAT) gateway. (The NAT gateway must be located in a public subnet).
Duration: 90 minutes
The café will soon be featured in a famous TV food show. When it airs, Sofía and Nikhil anticipate that the café’s web server will experience a temporary spike in the number of users—perhaps even up to tens of thousands of users. Currently, the café’s web server is deployed in one Availability Zone, and they are worried that it won’t be able to handle the expected increase in traffic. They want to ensure that their customers have a great experience when they visit the website, and that they don’t experience any issues, such as lags or delays in placing orders.
To ensure this experience, the website must be responsive, able to scale both up and down to meet fluctuating customer demand, and be highly available. Instead of overloading a single server, the architecture must distribute customer order requests across multiple application servers so it can handle the increase in demand.
In this lab, you will take on the role of Sofía to implement a scalable and highly available architecture for the café's web application.
In this lab, you use Elastic Load Balancing and Amazon EC2 Auto Scaling to create a scalable and highly available environment on AWS.
After completing this lab, you should be able to:
Inspect a VPC.
Update a network to work across multiple Availability Zones.
Create an Application Load Balancer.
Create a launch template.
Create an Auto Scaling group.
A business request for the café: Implementing a scalable and highly available environment (Challenge)
Sofía understands that she must complete some tasks to implement high availability and scalability for the café’s web application. However, before changing the café’s application architecture, Sofía must evaluate its current state.
In the next several tasks, you will work as Sofía to create and configure the resources that you need to implement a scalable and highly available application.
Task 1: Inspecting your environment.
Task 2: Creating a NAT gateway for the second Availability Zone.
Task 3: Creating a bastion host instance in a public subnet.
Task 4: Creating a launch template.
Task 5: Creating an Auto Scaling group.
Task 6: Creating a load balancer.
Task 7: Testing the web application.
Task 8: Testing automatic scaling under load.
Duration: 20 minutes
Deploying infrastructure in a consistent, reliable manner is difficult. It requires people to follow documented procedures without taking any undocumented shortcuts. It can also be difficult to deploy infrastructure out-of-hours when fewer staff are available. AWS CloudFormation changes this situation by defining infrastructure in a template that can be automatically deployed—even on an automated schedule.
In this lab, you will learn how to deploy multiple layers of infrastructure with AWS CloudFormation, update a CloudFormation stack, and delete a stack (while retaining some resources).
After completing this lab, you should be able to:
Use AWS CloudFormation to deploy a virtual private cloud (VPC) networking layer.
Use AWS CloudFormation to deploy an application layer that references the networking layer.
Explore templates with AWS CloudFormation Designer.
Delete a stack that has a deletion policy.
Task 1: Deploying a networking layer It is a best practice to deploy infrastructure in layers. Common layers are: Network (Amazon VPC), Database, Application. This way, templates can be reused between systems. For example, you can deploy a common network topology between development, test, and production environments, or deploy a standard database for multiple applications. In this task, you will deploy an AWS CloudFormation template that creates a networking layer by using Amazon VPC.
Task 2: Deploying an application layer Now that you deployed the network layer, you will deploy an application layer that contains an Amazon Elastic Compute Cloud (Amazon EC2) instance and a security group. The AWS CloudFormation template will import the VPC and subnet IDs from the Outputs of the existing CloudFormation stack. It will then use this information to create the security group in the VPC and the EC2 instance in the subnet.
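One way to picture the layered deployment: create the network stack, read an output value from it, and pass that value into the application stack. The template file names, stack names, and output key below are placeholders.
aws cloudformation create-stack --stack-name lab-network --template-body file://lab-network.yaml
aws cloudformation wait stack-create-complete --stack-name lab-network
# Read the VPC ID that the network stack exposes as an output.
VPC_ID=$(aws cloudformation describe-stacks --stack-name lab-network \
  --query "Stacks[0].Outputs[?OutputKey=='VPC'].OutputValue" --output text)
aws cloudformation create-stack --stack-name lab-application \
  --template-body file://lab-application.yaml \
  --parameters ParameterKey=VPC,ParameterValue="$VPC_ID"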
Task 3: Updating a stack AWS CloudFormation can also update a stack that has been deployed. When you update a stack, AWS CloudFormation modifies or replaces only the resources that are being changed. Any resources that are not being changed are left as-is. In this task, you will update the lab-application stack to modify a setting in the security group.
Task 4: Exploring templates with AWS CloudFormation Designer AWS CloudFormation Designer is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates. With Designer, you can diagram your template resources by using a drag-and-drop interface, and then edit their details through the integrated JSON and YAML editor. Whether you are new to AWS CloudFormation or an experienced AWS CloudFormation user, Designer can help you quickly see the interrelationships between a template's resources. It also enables you to easily modify templates. In this task, you will gain some hands-on experience with Designer.
Task 5: Deleting the stack When resources are no longer required, AWS CloudFormation can delete the resources built for the stack. A deletion policy can also be specified against resources. It can preserve or (in some cases) back up a resource when its stack is deleted. This feature is useful for retaining databases, disk volumes, or any resource that might be needed after the stack is deleted. The lab-application stack was configured to take a snapshot of an Amazon Elastic Block Store (Amazon EBS) disk volume before it is deleted. You will now delete the lab-application stack and see the results of this deletion policy.
Duration: 90 minutes
Up to this point, the café staff created their AWS resources and configured their applications manually—mostly by using the AWS Management Console. This approach worked well as a way for the café to get started with a web presence quickly. However, they find it challenging to replicate their deployments to new AWS Regions so that they can support new café locations in multiple countries. They would also like to have separate development and production environments that reliably have matching configurations.
In this challenge lab, you will take on the role of Sofía as you work to automate the café's deployments and replicate them to another AWS Region.
In this lab, you will gain experience with creating AWS CloudFormation templates. You will use the templates to create and update AWS CloudFormation stacks. The stacks create and manage updates to resources in multiple AWS service areas in your AWS account. You will practice using AWS CodeCommit to control the version of your templates. You will also observe how you can use AWS CodePipeline to automate stack updates.
After completing this lab, you should be able to:
Deploy a virtual private cloud (VPC) networking layer by using an AWS CloudFormation template.
Deploy an application layer by using an AWS CloudFormation template.
Use Git to invoke AWS CodePipeline, and to create or update stacks from templates that are stored in AWS CodeCommit.
Duplicate network and application resources to another AWS Region by using AWS CloudFormation.
A business request: Creating a static website for the café by using AWS CloudFormation (Challenge #1)
The café would like to start using AWS CloudFormation to create and maintain resources in the AWS account. As a simple first attempt at this process, you will take on the role of Sofía and create a simple AWS CloudFormation template that can be used to create an Amazon Simple Storage Service (Amazon S3) bucket. Then, you will add more detail to the template so that when you update the stack, it configures the bucket to host a static website for the café.
Task 1: Creating an AWS CloudFormation template from scratch
Task 2: Configuring the bucket as a website and updating the stack
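A minimal starting point for Tasks 1 and 2 might look like the following; the template file name and stack name are placeholders, and the website configuration is added to the template before the update.
# Write a bare-bones template that creates only an S3 bucket.
cat > S3.yaml <<'EOF'
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
EOF
aws cloudformation create-stack --stack-name cafe-website --template-body file://S3.yaml
# After adding a WebsiteConfiguration section to the bucket resource, apply the change.
aws cloudformation update-stack --stack-name cafe-website --template-body file://S3.yaml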
New business requirement: Storing templates in a version control system (Challenge #2)
Task 3: Cloning a CodeCommit repository that contains AWS CloudFormation templates
New business requirement: Using a continuous delivery service, create the network and application layers for the café (Challenge #3)
Task 4: Creating a new network layer with AWS CloudFormation, CodeCommit, and CodePipeline
Task 5: Updating the network stack
Task 6: Defining an EC2 instance resource and creating the application stack
New business requirement: Duplicating the network and application resources in a second AWS Region (Challenge #4)
Task 7: Duplicating the café network and website to another AWS Region
Duration: 60 minutes
Building modern architectures involves decoupled applications, which are smaller, independent building blocks that are convenient to develop, deploy, and maintain. Message queues provide communication and coordination for these distributed applications. Message queues, along with notification systems, can significantly reduce the coding of decoupled applications while improving performance, reliability, and scalability.
In this lab, you work with an image processing application on an AWS Cloud9 instance and then use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) to create a decoupled architecture.
After completing this lab, you should be able to:
Review how the image processing web application works.
Configure Amazon Simple Storage Service (Amazon S3) bucket events to send messages to an SNS topic.
Configure Amazon SQS to subscribe to an SNS topic and store the message.
Use Amazon SQS and Amazon SNS to create a decoupled architecture.
Implement polling to consume messages in an SQS queue.
Use Amazon SNS to send an email notification.
You work with an application that accepts images and processes them to create tinted images. You work on this application in the following phases:
Phase 1: In this phase, you work with a NodeJs-based application through an AWS Cloud9 integrated development environment (IDE) wherein components such as the web server and application server are tightly coupled. Refer to the phase 1 architecture diagram for more information.
Phase 2: In this phase, you work with an enhanced application through a different AWS Cloud9 IDE, and this application uses Amazon SQS and Amazon SNS to create a decoupled architecture wherein the web server and application server do not communicate directly. Refer to the phase 2 architecture diagram for more information.
Task 1: Installing the image processing application You download the required files and install the image processing application on the AWS Cloud9 instance. Then you configure a security group and the S3 bucket permissions required for the communication between the application and AWS services.
Task 2: Testing the application In this task, you test the phase 1 image processing application to confirm that it accepts an image and produces a tinted version before you move on to the decoupled architecture.
Task 3: Installing the application You download the required files and install the image processing application in the AWS Cloud9 IDE. Then you configure a security group to facilitate communication between the application running on the AWS Cloud9 instance and the user interface (browser).
Task 4: Configuring Amazon SQS
Task 5: Configuring Amazon SNS
Task 6: Configuring Amazon S3 permissions and event notifications You configure the S3 bucket to send a notification to the SNS topic that you created as soon as your application uploads an image.
Task 7: Creating Amazon SNS subscriptions Now that you have created an Amazon S3 event to send a notification to Amazon SNS, you create a subscription for the SNS topic to send a message to the queue. You also configure email subscriptions for the user.
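As a sketch of Tasks 4 through 7, the topic, queue, and subscriptions might be wired together as follows; the names, email address, and bucket are placeholders, and the queue and topic access policies (covered in the lab) must also allow these deliveries.
TOPIC_ARN=$(aws sns create-topic --name uploads-topic --query TopicArn --output text)
QUEUE_URL=$(aws sqs create-queue --queue-name uploads-queue --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn --query Attributes.QueueArn --output text)
# Subscribe the queue and an email address to the topic.
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol sqs --notification-endpoint "$QUEUE_ARN"
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint user@example.com
# Have the S3 bucket publish object-created events to the topic.
aws s3api put-bucket-notification-configuration --bucket my-upload-bucket \
  --notification-configuration '{"TopicConfigurations":[{"TopicArn":"'"$TOPIC_ARN"'","Events":["s3:ObjectCreated:*"]}]}'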
Task 8: Configuring parameters and starting the application You configure three configuration files, one for each application tier: browser, web application, and application server.
Task 9: Testing the application
Duration: 40 minutes
You are creating an inventory tracking system. Stores from around the world will upload an inventory file to Amazon S3. Your team wants to be able to view the inventory levels and send a notification when inventory levels are low.
Traditionally, applications run on servers. These servers can be physical (or bare metal). They can also be virtual environments that run on top of physical servers. However, you must purchase and provision all these types of servers, and you must also manage their capacity. In contrast, you can run your code on AWS Lambda without needing to pre-allocate servers. With Lambda, you only need to provide the code and define a trigger. The Lambda function can run when it is needed, whether it is once per week or hundreds of times per second. You only pay for what you use.
This lab demonstrates how to trigger a Lambda function when a file is uploaded to Amazon Simple Storage Service (Amazon S3). The file will be loaded into an Amazon DynamoDB table. The data will be available for you to view on a dashboard page that retrieves the data directly from DynamoDB. This solution does not use Amazon Elastic Compute Cloud (Amazon EC2). It is a serverless solution that automatically scales when it is used, and it incurs little cost when it is in use. When it is idle, there is practically no cost because you will only be billed for data storage.
After completing this lab, you should be able to:
Implement a serverless architecture on AWS.
Trigger Lambda functions from Amazon S3 and Amazon DynamoDB.
Configure Amazon Simple Notification Service (Amazon SNS) to send notifications.
Task 1: Creating a Lambda function to load data In this task, you will create a Lambda function that will process an inventory file. The Lambda function will read the file and insert information into a DynamoDB table.
Task 2: Configuring an Amazon S3 event Stores from around the world provide inventory files to load into the inventory tracking system. Instead of uploading their files via FTP, the stores can upload them directly to Amazon S3. They can upload the files through a webpage, a script, or as part of a program. When a file is received, it triggers the Lambda function. This Lambda function will then load the inventory into a DynamoDB table.
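Conceptually, this wiring grants Amazon S3 permission to invoke the function and then registers the event; the bucket name, account ID, and Region below are placeholders, while the Load-Inventory function name comes from the lab.
aws lambda add-permission --function-name Load-Inventory \
  --statement-id s3-invoke --action lambda:InvokeFunction \
  --principal s3.amazonaws.com --source-arn arn:aws:s3:::inventory-bucket-12345
aws s3api put-bucket-notification-configuration --bucket inventory-bucket-12345 \
  --notification-configuration '{"LambdaFunctionConfigurations":[{"LambdaFunctionArn":"arn:aws:lambda:us-east-1:111122223333:function:Load-Inventory","Events":["s3:ObjectCreated:*"]}]}'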
Task 3: Testing the loading process You are now ready to test the loading process. You will upload an inventory file, then check that it loaded successfully.
Task 4: Configuring notifications You want to notify inventory management staff when a store runs out of stock for an item. For this serverless notification functionality, you will use Amazon SNS.
Task 5: Creating a Lambda function to send notifications In this task, you will create another Lambda function that looks at inventory while it is loaded into the DynamoDB table. If the Lambda function notices that an item is out of stock, it will send a notification through the SNS topic you created earlier.
Task 6: Testing the system You will now upload an inventory file to Amazon S3, which will trigger the original Load-Inventory function. This function will load data into DynamoDB, which will then trigger the new Check-Stock Lambda function. If the Lambda function detects an item with zero inventory, it will send a message to Amazon SNS. Then, Amazon SNS will notify you through SMS or email.
Duration: 90 minutes
The café's business is thriving. Frank and Martha want to get daily sales reports for products that are sold from the café's website. They will use this report to plan ingredient orders and monitor the impact of product promotions.
Sofía and Nikhil's initial idea is to use one of the Amazon Elastic Compute Cloud (Amazon EC2) web server instances to generate the report. Sofía sets up a cron job on the web server instance, which sends email messages that report daily sales. However, the cron job reduces the performance of the web server because it is resource-intensive.
Nikhil mentions the cron job to Olivia, and how it reduces the web application's performance. Olivia advises Sofía and Nikhil to separate non-business-critical reporting tasks from the production web server instance. After Sofía and Nikhil review the advantages and disadvantages of their current approach, they decide that they don't want to slow down the web server. They also consider running a separate EC2 instance, but they are concerned about the cost of running an instance 24/7 when it is only needed for a short time each day.
Sofía and Nikhil decide that running the report generation code as an AWS Lambda function would work, and it would also lower costs. The report itself could be sent to Frank and Martha's email address through Amazon Simple Notification Service (Amazon SNS).
In this lab, you will take on the role of Sofía to implement the daily report code as a Lambda function.
In this lab, you will use AWS Lambda to create a café sales report that is emailed each day through Amazon SNS.
After completing this lab, you should be able to implement a serverless architecture to generate a daily sales report that features:
A Lambda function within a virtual private cloud (VPC) that connects to an Amazon Relational Database Service (Amazon RDS) database with the café's sales data
A Lambda function that generates and runs the sales report
A scheduled event that triggers the sales report Lambda function each day
A business request for the café: Implementing a serverless architecture to generate a daily sales report (Challenge)
In the next several tasks, you will work as Sofía to create and configure the resources that you need to implement the reporting solution.
Task 1: Downloading the source code.
Task 2: Creating the DataExtractor Lambda function in the VPC.
Task 3: Creating the salesAnalysisReport Lambda function
Task 4: Creating an SNS topic
Task 5: Creating an email subscription to the SNS topic
Task 6: Testing the salesAnalysisReport Lambda function
Task 7: Setting up an Amazon EventBridge event to trigger the Lambda function each day
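Task 7's schedule could be expressed with an EventBridge rule along these lines; the cron expression, Region, and account ID are placeholders, while the salesAnalysisReport function name comes from the lab.
aws events put-rule --name salesAnalysisReportDaily --schedule-expression "cron(0 20 * * ? *)"
# Allow EventBridge to invoke the function, then point the rule at it.
aws lambda add-permission --function-name salesAnalysisReport \
  --statement-id events-invoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:111122223333:rule/salesAnalysisReportDaily
aws events put-targets --rule salesAnalysisReportDaily \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:111122223333:function:salesAnalysisReport'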
Duration: 90 minutes
In this lab, you will use the AWS Storage Gateway File Gateway service to attach a Network File System (NFS) mount to an on-premises data store. You will then replicate that data to an S3 bucket in AWS. Additionally, you will configure advanced Amazon S3 features, like Amazon S3 lifecycle policies and cross-Region replication.
After completing this lab, you should be able to:
Configure a File Gateway with an NFS file share and attach it to a Linux instance.
Migrate a set of data from the Linux instance to an S3 bucket.
Create and configure a primary S3 bucket to migrate on-premises server data to AWS.
Create and configure a secondary S3 bucket to use for cross-Region replication.
Create an S3 lifecycle policy to automatically manage data in a bucket.
Task 1: Reviewing the lab architecture This lab environment uses a total of three AWS Regions. A Linux EC2 instance that emulates an on-premises server is deployed to the us-east-1 (N. Virginia) Region. The Storage Gateway virtual appliance is deployed to the same Region as the Linux server. In a real-world scenario, the appliance would be deployed in a VMware vSphere or Microsoft Hyper-V environment, or as a physical Storage Gateway appliance. The primary S3 bucket is created in the us-east-2 (Ohio) Region. Data from the Linux host is copied to the primary S3 bucket. This bucket can also be called the source. The secondary S3 bucket is created in the us-west-2 (Oregon) Region. This secondary bucket is the target for the cross-Region replication policy. It can also be called the destination.
Task 2: Creating the primary and secondary S3 buckets Before you configure the File Gateway, you must create the primary S3 bucket (or the source) where you will replicate the data. You will also create the secondary bucket (or the destination) that will be used for cross-Region replication.
Task 3: Enabling cross-Region replication Now that you created your two S3 buckets and enabled versioning on them, you can create a replication policy.
Task 4: Configuring the File Gateway and creating an NFS file share In this task, you will deploy the File Gateway appliance as an Amazon Elastic Compute Cloud (Amazon EC2) instance. You will then configure a cache disk, select an S3 bucket to synchronize your on-premises files to, and select an IAM policy to use. Finally, you will create an NFS file share on the File Gateway.
Task 5: Mounting the file share to the Linux instance and migrating the data Before you can migrate data to the NFS share that you created, you must first mount the share. In this task, you will mount the NFS share on a Linux server, then copy data to the share.
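Task 5 boils down to a standard NFS mount followed by a copy; the gateway's private IP address, the share path, and the source directory are placeholders taken from the File Gateway console.
sudo mkdir -p /mnt/nfs/s3
sudo mount -t nfs -o nolock,hard 10.0.1.33:/my-primary-bucket /mnt/nfs/s3
# Copy the on-premises data into the share; the gateway uploads it to the primary S3 bucket.
cp -rv /media/data/* /mnt/nfs/s3/
df -hT /mnt/nfs/s3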
Task 6: Verifying that the data is migrated You have finished configuring the gateway and copying data into the NFS share. Now, you will verify that the configuration works as intended.
Duration: 120 - 360 minutes
In this project, you’re challenged to use familiar AWS services to build a solution without step-by-step guidance. Specific sections of the assignment are meant to challenge you on skills that you have acquired throughout the Academy Cloud Architecting (ACA) course.
By the end of this project, you should be able to apply the architectural design principles that you learned in this course to do the following:
Create a database (DB) instance that the PHP application can query.
Create and deploy the highly available PHP application with the load distributed across multiple web servers and Availability Zones.
Use AWS Secrets Manager.
Import data into a MySQL database from a SQL dump file.
Secure the application to prevent public access to application servers and backend systems.
Example Social Research Organization is a (fictitious) nonprofit organization that provides a website for social science researchers to obtain global development statistics. For example, visitors to the site can look up various data points, such as the life expectancy for any country in the world over the past 10 years.
Shirley Rodriguez, a researcher at the nonprofit organization, developed the website. She thought it would be valuable to share the data that she had gathered with other researchers. Shirley stores the data in a MySQL database, and the data is available through a PHP website that she built. She initially published the site through a commercial hosting company that provides limited support for technical issues and security.
Over the past year, Shirley’s website has grown in popularity. As a result of increased traffic, she started receiving complaints that the site is not as responsive as it used to be. She also experienced an attempted ransomware security breach. The security breach was unsuccessful, but her supervisor, Mateo Jackson, suggested that Shirley investigate new ways to host the website.
Shirley heard about AWS and initially moved her website and database to an EC2 instance that runs in a public subnet. She also runs an instance of MySQL on the same EC2 instance.
Shirley approached your team to make sure that her current design follows architectural best practices. She wants to make sure that she has a robust and secure website. One of your colleagues started the process of migrating the site to a more secure implementation, but they were reassigned to another project. Your tasks are to complete the implementation, make sure that the website is secure, and confirm that the website returns data from the query page.
Provide secure hosting of the MySQL database.
Provide secure access to the database.
Provide anonymous access to web users.
Run the website on a t2.micro EC2 instance running in private subnets and provide Secure Shell (SSH) access to administrators.
Provide high availability to the website through a load balancer.
Provide automatic scaling that uses a launch template.