AWS SAP-C02 Tutorial 7 -- Cloud Migration and Disaster Recovery (DR)


Cloud migration and disaster recovery are guaranteed topics on the SAP-C02 exam, and they carry significant weight, so both are essential material. Disaster recovery is covered here alongside migration because DR often means backing an on-premises data center up to the cloud, which involves some of the same migration concepts; both carry real weight on the exam. This chapter has two parts: cloud migration and disaster recovery. The migration part covers migration strategies and the common migration tools; the DR part uses the two metrics RTO and RPO to compare the trade-offs of the various DR strategies.

1 Cloud Migration

1.1 The 6 R Migration Strategies

The 6 R's come from this AWS blog post: https://aws.amazon.com/cn/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/
This section explains how each strategy works, along with its pros and cons.

1.1.1 Rehosting (lift and shift)

  • Redeploy your on-premises resources in the cloud exactly as they are. For example, 5 on-premises servers become 5 EC2 instances.
  • Pros: simple, no changes required; cloud pricing options (such as Reserved Instances) can cut costs by up to around 30%.
  • Cons: no cloud architecture optimization, so the advantages of cloud-native design are not realized.
  • Scenario: when migration risk is uncertain or you need to move to the cloud quickly.

1.1.2 Replatforming (lift, tinker, and shift)

  • Replace on-premises components with their managed cloud equivalents. For example, a relational database moves to RDS, a Java application moves to Elastic Beanstalk.
  • Pros: only minor changes needed, while gaining the benefits of managed services (monitoring, backups, and so on).
  • Cons: you gain product-level benefits only; the architecture itself is still not cloud-native, and some adaptation work is required.
  • Scenario: when you want cloud benefits without re-architecting.

1.1.3 Repurchasing

  • Replace an on-premises product with a SaaS equivalent (typically applications such as CRM or HR systems) and migrate onto that SaaS product.
  • Pros: relatively fast migration.
  • Cons: users must adapt to a new product.
  • Scenario: a SaaS product with matching functionality exists, so you retire the on-premises product and adopt the SaaS one.

1.1.4 Refactoring / Re-architecting

  • Rebuild the whole service in the cloud to take full advantage of cloud-native capabilities, for example Lambda or S3, which have no on-premises equivalent.
  • Pros: fully cloud-native, with all the benefits that brings.
  • Cons: the entire architecture must be rebuilt, so time and cost are high in the short term.
  • Scenario: the company intends to fully cloud-enable its IT services and has sufficient time and budget.

1.1.5 Retire

  • Drop on-premises services or products that are no longer needed rather than migrating them. For example, a function that was used in the past is simply decommissioned during the move.
  • Pros: saves unnecessary cost.
  • Cons: requires re-architecting and evaluating which functionality can be cut.
  • Scenario: the internal service portfolio is bloated and some services are confirmed to be unused.

1.1.6 Retain

  • Keep the resource running on premises. Retaining means not migrating, but a real migration usually combines several strategies, and Retain is one of them.
  • Pros: nothing to do.
  • Cons: stays as-is, with no cloud benefits.
  • Scenario: workloads that are hard to migrate or where migration adds no value.

1.1.7 Summary

A real migration is almost always a combination of strategies: some refactoring, some retaining, some retiring. The right mix depends on the specific situation, but the 6 R framework gives you an overall map of the migration options and a way to approach any migration question you encounter.
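As a quick study aid, the decision logic above can be condensed into a toy lookup. This is a sketch for exam drills, not an official AWS tool; the boolean inputs are my own simplification of the scenarios listed under each strategy:

```python
# Minimal sketch mapping the 6 R strategies to the scenarios described above.
SIX_R = {
    "rehost": "Lift and shift as-is; fastest path, no re-architecture",
    "replatform": "Swap components for managed equivalents (e.g. RDS, Elastic Beanstalk)",
    "repurchase": "Drop the on-premises product and buy a SaaS replacement",
    "refactor": "Re-architect for cloud-native services (Lambda, S3, ...)",
    "retire": "Decommission services that are no longer used",
    "retain": "Leave it on premises for now",
}

def pick_strategy(needs_speed: bool, wants_cloud_native: bool, still_used: bool) -> str:
    """Toy decision helper for exam practice, not an official AWS tool."""
    if not still_used:
        return "retire"
    if wants_cloud_native:
        return "refactor"
    return "rehost" if needs_speed else "replatform"

print(pick_strategy(needs_speed=True, wants_cloud_native=False, still_used=True))
```

Real questions rarely reduce to three booleans, but walking the branches in this order (is it still used, does it need to be cloud-native, how fast must it move) matches how the exam scenarios are usually worded.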

1.2 AWS Storage Gateway

AWS Storage Gateway connects an on-premises software appliance with cloud-based storage, integrating your on-premises IT environment with the AWS storage infrastructure. In other words, it links your local storage to the cloud, which is a critical building block for both migration and DR. There are four Storage Gateway types; only three of them appear on the exam, but all four are covered here.

1.2.1 S3 File Gateway

Put simply, it synchronizes files from a local file system to S3, with a one-to-one mapping of the directory structure.

1.2.1.1 Key features
  • A virtual gateway bridges your on-premises file system and S3
  • Supports NFS and SMB (the supported protocols have appeared in exam questions)
  • Each gateway has its own IAM role
1.2.1.2 Typical architectures
  • Sync to S3, then build further applications and migrations on top of S3
  • Use an S3 File Gateway to move data from one data center to another
  • Use S3 lifecycle policies and storage classes for data backup

1.2.2 FSx File Gateway

Amazon FSx File Gateway is a newer gateway type that provides low-latency, efficient access from on-premises facilities to FSx for Windows File Server file shares in the cloud.

  • Low-latency, efficient on-premises access to in-cloud Windows file shares (key point: when an exam question combines Windows File Server with low latency, the answer is almost certainly FSx-related)
  • Works with Windows-native technologies: the SMB protocol, NTFS permissions, and Active Directory

1.2.3 Volume Gateway

Volume Gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers.

  • Uses the iSCSI protocol
  • There are two Volume Gateway modes (these have appeared on the exam; remember them):
    1) Cached volumes: volume data lives in AWS, with a small cache of recently accessed data kept locally. This gives low-latency access to frequently used data plus seamless access to the entire dataset stored in AWS, and lets you grow storage without provisioning additional hardware.
    2) Stored volumes: the full dataset stays on premises (the primary storage, with low-latency access to everything), and periodic point-in-time backups (snapshots) are stored in AWS as the recovery copy if the data center suffers a disaster.
  • To read data from a volume in the cloud, you must restore it to an EBS volume; you cannot browse it directly in S3 (another exam point)
  • Each gateway supports up to 32 volumes

1.2.4 Tape Gateway

Tape Gateway lets you archive backup data cost-effectively and durably in S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. It provides a virtual tape infrastructure that scales seamlessly with your business needs and removes the operational burden of provisioning, scaling, and maintaining physical tape. This has not appeared on the exam so far; just remember the following:

  • An affordable, durable, long-term alternative for off-site data archiving
  • A virtual tape library (VTL) interface lets your existing tape-based backup software write data to virtual tape cartridges that you create
  • To read data from a tape, you must restore the tape in the cloud; you cannot browse it directly in S3

1.3 AWS Snow Family

The AWS Snow Family is a set of physical devices with on-board storage (and, on some models, compute) for moving data between your on-premises environment and AWS without relying on the network. It includes three device types: Snowball Edge, Snowcone, and Snowmobile.

1.3.1 Key features

  • You copy data onto the device, which is then **physically shipped (not sent over the network)** to AWS
  • Handles terabyte and petabyte scales

例题:A video processing company wants to build a machine learning (ML) model by using 600 TB of compressed data that is stored as thousands of files in the company’s on-premises network attached storage system. The company does not have the necessary compute resources on premises for ML experiments and wants to use AWS.
The company needs to complete the data transfer to AWS within 3 weeks. The data transfer will be a one-time transfer. The data must be encrypted in transit. The measured upload speed of the company’s internet connection is 100 Mbps, and multiple departments share the connection.
Which solution will meet these requirements MOST cost-effectively?
A. Order several AWS Snowball Edge Storage Optimized devices by using the AWS Management Console. Configure the devices with a destination S3 bucket. Copy the data to the devices. Ship the devices back to AWS.
B. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
C. Create a VPN connection between the on-premises network storage and the nearest AWS Region. Transfer the data over the VPN connection.
D. Deploy an AWS Storage Gateway file gateway on premises. Configure the file gateway with a destination S3 bucket. Copy the data to the file gateway.
Answer: A
Explanation: Option B would take about a month; options C and D could take on the order of two years at 100 Mbps. Option A is therefore the most cost-effective.
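Eliminating the network-based options comes down to bandwidth arithmetic. A quick sanity check (best-case numbers that ignore protocol overhead and the fact that the 100 Mbps link is shared):

```python
def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days to push `data_tb` terabytes over a `link_mbps` megabit/s link.

    Assumes 1 TB = 10**12 bytes and a perfectly steady, fully dedicated
    link; real-world throughput is lower, so these are best-case numbers.
    """
    bits = data_tb * 10**12 * 8
    seconds = bits / (link_mbps * 10**6)
    return seconds / 86_400  # seconds per day

# 600 TB over a fully dedicated 100 Mbps link: roughly 1.5 years even in
# the best case, so shipping Snowball Edge devices wins easily.
print(round(transfer_days(600, 100)))
# Even a 10 Gbps Direct Connect needs days for the same 600 TB.
print(round(transfer_days(600, 10_000), 1))
```

This is why "hundreds of TB over a ~100 Mbps link" in an exam question is a near-automatic pointer to the Snow Family.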

  • Snowball suits very large data migrations, DR, and similar scenarios
  • Snowball Edge and Snowcone include compute, so data can be processed in transit (embed an EC2 AMI or a Lambda function for pre-processing)
  • AWS OpsHub is a desktop application for operating Snow devices (so you can avoid the CLI)
  • Snowball Edge fits data migration, image collection, IoT, and machine learning scenarios
    Note: when an exam question involves very large datasets (tens to hundreds of TB) where network transfer is considered too slow, or the data feeds machine learning, Snowball Edge is usually the answer

A company is planning a one-time migration of an on-premises MySQL database to Amazon Aurora MySQL in the us-east-1 Region. The company’s current internet connection has limited bandwidth. The on-premises MySQL database is 60 TB in size. The company estimates that it will take a month to transfer the data to AWS over the current internet connection. The company needs a migration solution that will migrate the database more quickly.
Which solution will migrate the database in the LEAST amount of time?
A. Request a 1 Gbps AWS Direct Connect connection between the on-premises data center and AWS. Use AWS Database Migration Service (AWS DMS) to migrate the on-premises MySQL database to Aurora MySQL.
B. Use AWS DataSync with the current internet connection to accelerate the data transfer between the on-premises data center and AWS. Use AWS Application Migration Service to migrate the on-premises MySQL database to Aurora MySQL.
C. Order an AWS Snowball Edge device. Load the data into an Amazon S3 bucket by using the S3 interface. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to Aurora MySQL.
D. Order an AWS Snowball device. Load the data into an Amazon S3 bucket by using the S3 Adapter for Snowball. Use AWS Application Migration Service to migrate the data from Amazon S3 to Aurora MySQL.
Answer: C
Explanation: 60 TB must be migrated as quickly as possible. Options A and B rely on network transfer and would take over a month. That leaves C and D, but D's Application Migration Service is not designed for database migration, so the answer is C.

1.3.2 Device differences

  • Snowball Edge: large capacity (tens of TB per device) plus on-board compute.
  • Snowcone: very small, holds 8 TB, and includes compute; rugged, secure, and water-resistant.
  • Snowmobile: for the largest migrations, up to exabyte scale.
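For exam drills, device choice reduces to a capacity threshold check. The figures below are representative study numbers (usable capacity varies by device generation, and AWS positions Snowmobile for roughly 10 PB and up), not hard service limits:

```python
import math

# Representative capacities for study purposes -- verify against current
# AWS specs: Snowcone ~8 TB, Snowball Edge Storage Optimized ~80 TB usable.
SNOWCONE_TB = 8
SNOWBALL_EDGE_TB = 80
SNOWMOBILE_THRESHOLD_TB = 10_000  # ~10 PB: where Snowmobile starts to make sense

def plan_transfer(data_tb: float) -> str:
    """Suggest a Snow Family shipment for `data_tb` terabytes of data."""
    if data_tb <= SNOWCONE_TB:
        return "Snowcone x1"
    if data_tb <= SNOWMOBILE_THRESHOLD_TB:
        n = math.ceil(data_tb / SNOWBALL_EDGE_TB)
        return f"Snowball Edge Storage Optimized x{n}"
    return "Snowmobile"

print(plan_transfer(5))    # small edge dataset
print(plan_transfer(600))  # the 600 TB exam scenario: a fleet of Snowball Edges
```

Note how the 600 TB scenario from the earlier question lands on several Snowball Edge devices, matching answer A there: below the multi-petabyte range, ordering multiple Snowball Edges is cheaper and faster than a Snowmobile.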

1.3.3 Usage scenarios

  • Shipping data into AWS
  • Edge computing

1.4 AWS DataSync

AWS DataSync is an online data movement and discovery service that simplifies data migration and helps you transfer file or object data to, from, and between AWS storage services quickly, easily, and securely.

1.4.1 Key features

  • Syncs data between on-premises/other clouds and AWS; requires network connectivity (for example VPN or Direct Connect). (Note: when an exam question describes a network link between the data center and AWS plus data synchronization, DataSync is usually a candidate.)

例题:A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on servers in its on- premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN), and then the data is processed.
The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS to scale based on workload demands and reduce the turnaround time from weeks to days.
The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is expecting 10-15 job requests each day.
Which solution meets these requirements?
A. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.
B. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to launch custom-AMI EC2 instances running the Docker containers to process the data.
C. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the container and process the sequencing data.
D. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that executes on Amazon EC2 instances running the Docker containers to process the data.
Answer: C
Explanation: key words: Docker containers, Direct Connect, 200 GB, Amazon S3. With Direct Connect available, either DataSync or Storage Gateway could move the data; managing the Docker containers requires ECR (with AWS Batch to run the jobs), so the answer is C.

  • Supports NFS, SMB, HDFS, and the S3 API
  • Can sync to Amazon S3, EFS, and FSx

例题:A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.
The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS.
Which data migration strategy should the company use?
A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway
B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx
C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS)
D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS)
Answer: B
Explanation: the goal is to sync files from the Windows file server to AWS over Direct Connect. Option A's Storage Gateway file gateway syncs via S3, and an FSx File Gateway would only provide access, not replication. Option C's Data Pipeline is an ETL tool, and EFS does not support Windows (SMB); option D fails on the same EFS limitation. DataSync to FSx, option B, is the fit.

例题:A company is migrating an application to the AWS Cloud. The application runs in an on-premises data center and writes thousands of images into a mounted NFS file system each night. After the company migrates the application, the company will host the application on an Amazon EC2 instance with a mounted Amazon
Elastic File System (Amazon EFS) file system.
The company has established an AWS Direct Connect connection to AWS. Before the migration cutover, a solutions architect must build a process that will replicate the newly created on-premises images to the EFS file system.
What is the MOST operationally efficient way to replicate the images?
A. Configure a periodic process to run the aws s3 sync command from the on-premises file system to Amazon S3. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
B. Deploy an AWS Storage Gateway file gateway with an NFS mount point. Mount the file gateway file system on the on-premises server. Configure a process to periodically copy the images to the mount point.
C. Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an S3 bucket by using public VIF. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
D. Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every 24 hours.
Answer: D
Explanation: the task is to replicate newly created on-premises NFS data into the EFS file system with the least operational overhead; DataSync can write directly to EFS over a private VIF, with no S3 hop or Lambda glue. Reference: https://aws.amazon.com/datasync/faqs/

  • The only option that preserves file metadata (note: when the exam requires preserving metadata, AWS DataSync is the only choice)
  • Unlike the Snow Family, DataSync needs a network path (note: for huge datasets or no network connectivity, choose a Snow Family product instead)
  • Synchronization runs on a schedule; it is not continuous replication

1.4.2 Typical architecture

  • Sync between on premises and AWS over Direct Connect, reaching the VPC privately via PrivateLink (an exam question has covered syncing data into an EFS file system mounted on EC2 this way)

1.5 AWS DMS

1.5.1 Key features

  • Migrates data between databases
  • Keeps the source database available during migration; supports change data capture (CDC)
  • Supports a wide range of source and target engines
  • Note: the following are NOT supported as sources: Redshift, DynamoDB, ElasticSearch, Kinesis Data Streams, DocumentDB
  • Can run over VPC peering, VPN, or Direct Connect
  • Three migration modes: Full Load (one-time migration), Full Load + CDC (one-time migration plus ongoing changes), and CDC only (replicate changes from now on). (Note: the exam tests these modes in different scenarios.)
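These three modes map directly onto the `MigrationType` parameter of a DMS replication task. A hedged sketch of the request you would pass to boto3's `create_replication_task` (the ARNs and task identifier are placeholders; the sketch only builds the request dict rather than calling AWS):

```python
# The three DMS migration modes correspond to the task's MigrationType value.
MIGRATION_TYPES = {
    "full-load": "one-time copy of existing data",
    "cdc": "replicate only changes from now on",
    "full-load-and-cdc": "one-time copy, then keep replicating changes",
}

def build_task_request(migration_type: str) -> dict:
    """Build keyword arguments for dms.create_replication_task (sketch only)."""
    if migration_type not in MIGRATION_TYPES:
        raise ValueError(f"unknown migration type: {migration_type}")
    return {
        "ReplicationTaskIdentifier": "onprem-to-aurora",          # placeholder
        "SourceEndpointArn": "arn:aws:dms:...:endpoint:SOURCE",   # placeholder
        "TargetEndpointArn": "arn:aws:dms:...:endpoint:TARGET",   # placeholder
        "ReplicationInstanceArn": "arn:aws:dms:...:rep:INSTANCE", # placeholder
        "MigrationType": migration_type,
        "TableMappings": '{"rules": []}',  # selection/transformation rules go here
    }

# Near-zero-downtime migrations want full-load-and-cdc: bulk copy first,
# then CDC keeps source and target in sync until cutover.
print(build_task_request("full-load-and-cdc")["MigrationType"])
```

The exam rarely asks for API details, but knowing that "migrate existing data and replicate ongoing changes" is literally one task setting makes the scenario questions easier to parse.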

例题:A solutions architect is planning to migrate critical Microsoft SQL Server databases to AWS. Because the databases are legacy systems, the solutions architect will move the databases to a modern data architecture. The solutions architect must migrate the databases with near-zero downtime.
Which solution will meet these requirements?
A. Use AWS Application Migration Service and the AWS Schema Conversion Tool (AWS SCT). Perform an in-place upgrade before the migration. Export the migrated data to Amazon Aurora Serverless after cutover. Repoint the applications to Amazon Aurora.
B. Use AWS Database Migration Service (AWS DMS) to rehost the database. Set Amazon S3 as a target. Set up change data capture (CDC) replication. When the source and destination are fully synchronized, load the data from Amazon S3 into an Amazon RDS for Microsoft SQL Server DB instance.
C. Use native database high availability tools. Connect the source system to an Amazon RDS for Microsoft SQL Server DB instance. Configure replication accordingly. When data replication is finished, transition the workload to an Amazon RDS for Microsoft SQL Server DB instance.
D. Use AWS Application Migration Service. Rehost the database server on Amazon EC2. When data replication is finished, detach the database and move the database to an Amazon RDS for Microsoft SQL Server DB instance. Reattach the database and then cut over all networking.
Answer: B
Explanation: migrate Microsoft SQL Server to AWS with near-zero downtime. Option A's SCT is for heterogeneous engine conversions; option C's native high-availability tooling cannot replicate from the on-premises SQL Server into Amazon RDS for SQL Server; option D's detour through EC2 is even less suitable. So the answer is B.

1.5.2 SCT (Schema Conversion Tool)

  • Converts schemas between heterogeneous database engines (MySQL to RDS for MySQL does not count as heterogeneous)
  • Note: whenever a heterogeneous database migration appears, SCT is almost always involved

例题:A company is planning to migrate its business-critical applications from an on-premises data center to AWS. The company has an on-premises installation of a
Microsoft SQL Server Always On cluster. The company wants to migrate to an AWS managed database service. A solutions architect must design a heterogeneous database migration on AWS.
Which solution will meet these requirements?
A. Migrate the SQL Server databases to Amazon RDS for MySQL by using backup and restore utilities.
B. Use an AWS Snowball Edge Storage Optimized device to transfer data to Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.
C. Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration Service (AWS DMS) to migrate the data from on-premises databases to Amazon RDS.
D. Use AWS DataSync to migrate data over the network between on-premises storage and Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.
Answer: C
Explanation: the on-premises Microsoft SQL Server must move to a managed AWS database. SQL Server is relational, so a relational target is preferred, but MySQL and SQL Server are heterogeneous engines, so SCT must convert the schema first and DMS then moves the data. Hence option C.

例题:A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.
All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)
A. Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
B. Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
C. Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
D. Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
E. Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
F. Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
Answer: ACE
Explanation: migrate RDS for Oracle to RDS for PostgreSQL with no downtime, minimal migration time, and replication of both existing data and data created during the migration. Heterogeneous migrations use SCT: between A and B, B is wrong because SCT cannot create a new DB instance complete with data, so A. C and D both establish connectivity, but D exposes the database publicly, a security risk, so C. Between E and F, the strategy must copy all existing data plus new changes; F's CDC-only task migrates changes only, so E. Answer: ACE.

1.5.3 Typical architecture

  • Snowball + DMS + SCT for database migration (worth considering once the data reaches the multi-TB range):
    1) Use SCT to convert and migrate the schema
    2) Load the data onto a Snowball Edge device
    3) AWS imports the Snowball Edge data into S3, then DMS loads it from S3 into the new database
    4) Use CDC to replicate the changes made after the snapshot

1.6 AWS Application Discovery Service

AWS Application Discovery Service helps you plan a migration to AWS by collecting usage and configuration data about your on-premises servers and databases. It integrates, along with AWS DMS Fleet Advisor, into AWS Migration Hub, which aggregates migration status into a single console and simplifies tracking: you can view discovered servers, group them into applications, and track each application's migration status from the Migration Hub console in your home Region.

  • Typically used before a migration to collect data on how the whole data center is being used
  • Agentless mode: no agent on each server; instead an OVA appliance is installed in your VM environment and collects CPU, memory, and disk metrics
  • Agent-based mode: an agent installed on each server collects CPU, process, and network data; the network data lets the Migration Hub console render a dependency topology

例题:A company is planning to migrate 1,000 on-premises servers to AWS. The servers run on several VMware clusters in the company’s data center. As part of the migration plan, the company wants to gather server metrics such as CPU details, RAM usage, operating system information, and running processes. The company then wants to query and analyze the data.
Which solution will meet these requirements?
A. Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select.
B. Export only the VM performance information from the on-premises hosts. Directly import the required data into AWS Migration Hub. Update any missing information in Migration Hub. Query the data by using Amazon QuickSight.
C. Create a script to automatically gather the server information from the on-premises hosts. Use the AWS CLI to run the put-resource-attributes command to store the detailed server data in AWS Migration Hub. Query the data directly in the Migration Hub console.
D. Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3.
Answer: D
Explanation: the requirement is to collect metrics, including running processes, from the on-premises virtual servers and then query the data. The Application Discovery Agent is the best fit, so the answer is D.

例题:A company has several applications running in an on-premises data center. The data center runs a mix of Windows and Linux VMs managed by VMware vCenter.
A solutions architect needs to create a plan to migrate the applications to AWS. However, the solutions architect discovers that the document for the applications is not up to date and that there are no complete infrastructure diagrams. The company’s developers lack time to discuss their applications and current usage with the solutions architect.
What should the solutions architect do to gather the required information?
A. Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and utilization data from the VMs.
B. Use the AWS Migration Portfolio Assessment (MPA) tool to connect to each of the VMs to collect the configuration and utilization data.
C. Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data.
D. Register the on-premises VMs with the AWS Migration Hub to collect configuration and utilization data.
Answer: C
Explanation: configuration and utilization data must be gathered from the on-premises servers, and Application Discovery Service is the right tool for that, so the answer is C.

1.7 AWS Server Migration Service (SMS)

AWS Server Migration Service (AWS SMS) automates the migration of on-premises VMware vSphere or Microsoft Hyper-V/SCVMM virtual machines to AWS. It incrementally replicates server VMs as cloud-hosted Amazon Machine Images (AMIs) ready to deploy on Amazon EC2.

  • Migrates entire VMs to AWS
  • Only supports VMware vSphere, Windows Hyper-V, and Azure VMs

1.8 AWS Migration Hub

AWS Migration Hub provides a single place to discover existing servers, plan migrations, and track the status of each application migration. It gives you visibility into your application portfolio and streamlines planning and tracking: whichever migration tool you use, you can visualize the connections and status of the servers and databases that make up each application being migrated. In short, it monitors your migration status and supports the following two migration tools:

  • AWS Application Migration Service (Application Migration Service)
  • AWS Database Migration Service (AWS DMS)

例题:A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications.
The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner.
Which combination of steps should a solutions architect take to meet these requirements? (Choose three.)
A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs.
B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.
C. Group servers into applications for migration by using AWS Systems Manager Application Manager.
D. Group servers into applications for migration by using AWS Migration Hub.
E. Generate recommended instance types and associated costs by using AWS Migration Hub.
F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization.
Answer: ADE
Explanation: the on-premises servers must be migrated to EC2, which first requires detailed information about them and grouping for migration. Option A's Application Discovery Agent collects the detailed data; options D and E then group the servers in Migration Hub and generate instance-type and cost recommendations. Systems Manager (B, C) has no discovery capability for this purpose, and Trusted Advisor (F) analyzes cost savings for resources already running on AWS. Answer: ADE.

1.9 AWS Cloud Adoption Framework (AWS CAF)

AWS CAF organizes and describes all of the activities and processes involved in planning, creating, managing, and supporting modern IT services. In short, it is a framework of cloud-adoption guidance, organized into six perspectives:

  • Business: helps you move from separate business and IT strategies to a business model with an integrated IT strategy, so that an agile IT strategy aligns with and supports your business outcomes.
  • People: helps human resources (HR) and people management prepare their teams for cloud adoption by updating staff skills and organizational processes to include cloud-based competencies.
  • Governance: integrates IT governance with organizational governance, offering guidance on best practices for IT governance and on supporting business processes with technology.
  • Platform: helps you design, implement, and optimize AWS architectures based on business goals, with strategic guidance on the designs, principles, tools, and policies you use.
  • Security: helps you select and implement controls; following this guidance makes it easier to identify areas of non-compliance and plan ongoing security initiatives.
  • Operations: helps you run, use, and operate IT workloads, and recover them to levels that meet the requirements of your business stakeholders.

1.10 AWS Migration Evaluator

AWS Migration Evaluator helps you build a data-driven business case for migrating to AWS. (Note: when an exam question asks for creating a business case or migration planning, the answer is usually AWS Migration Evaluator.)

例题:A solutions architect must create a business case for migration of a company’s on-premises data center to the AWS Cloud. The solutions architect will use a configuration management database (CMDB) export of all the company’s servers to create the case.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Well-Architected Tool to import the CMDB data to perform an analysis and generate recommendations.
B. Use Migration Evaluator to perform an analysis. Use the data import template to upload the data from the CMDB export.
C. Implement resource matching rules. Use the CMDB export and the AWS Price List Bulk API to query CMDB data against AWS services in bulk.
D. Use AWS Application Discovery Service to import the CMDB data to perform an analysis.
Answer: B
Explanation: the requirement is to create a business case from a CMDB export, which is exactly what Migration Evaluator's data import template accepts. Reference: https://aws.amazon.com/blogs/architecture/accelerating-your-migration-to-aws/

例题:A solutions architect needs to assess a newly acquired company’s portfolio of applications and databases. The solutions architect must create a business case to migrate the portfolio to AWS. The newly acquired company runs applications in an on-premises data center. The data center is not well documented. The solutions architect cannot immediately determine how many applications and databases exist. Traffic for the applications is variable. Some applications are batch processes that run at the end of each month.
The solutions architect must gain a better understanding of the portfolio before a migration to AWS can begin.
Which solution will meet these requirements?
A. Use AWS Server Migration Service (AWS SMS) and AWS Database Migration Service (AWS DMS) to evaluate migration. Use AWS Service Catalog to understand application and database dependencies.
B. Use AWS Application Migration Service. Run agents on the on-premises infrastructure. Manage the agents by using AWS Migration Hub. Use AWS Storage Gateway to assess local storage needs and database dependencies.
C. Use Migration Evaluator to generate a list of servers. Build a report for a business case. Use AWS Migration Hub to view the portfolio. Use AWS Application Discovery Service to gain an understanding of application dependencies.
D. Use AWS Control Tower in the destination account to generate an application portfolio. Use AWS Server Migration Service (AWS SMS) to generate deeper reports and a business case. Use a landing zone for core accounts and resources.
Answer: C
Explanation: the portfolio must be assessed before migration and a business case built, so Migration Evaluator is needed, combined with Migration Hub for portfolio visibility and Application Discovery Service for dependencies. The answer is C.

例题:A company wants to migrate to AWS. The company is running thousands of VMs in a VMware ESXi environment. The company has no configuration management database and has little knowledge about the utilization of the VMware portfolio.
A solutions architect must provide the company with an accurate inventory so that the company can plan for a cost-effective migration.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Systems Manager Patch Manager to deploy Migration Evaluator to each VM. Review the collected data in Amazon QuickSight. Identify servers that have high utilization. Remove the servers that have high utilization from the migration list. Import the data to AWS Migration Hub.
B. Export the VMware portfolio to a .csv file. Check the disk utilization for each server. Remove servers that have high utilization. Export the data to AWS Application Migration Service. Use AWS Server Migration Service (AWS SMS) to migrate the remaining servers.
C. Deploy the Migration Evaluator agentless collector to the ESXi hypervisor. Review the collected data in Migration Evaluator. Identify inactive servers. Remove the inactive servers from the migration list. Import the data to AWS Migration Hub.
D. Deploy the AWS Application Migration Service Agent to each VM. When the data is collected, use Amazon Redshift to import and analyze the data. Use Amazon QuickSight for data visualization.
Answer: C
Explanation: the company wants an accurate utilization inventory of VMs ahead of migration, with the least operational overhead. Option C's agentless collector at the hypervisor level needs no per-VM agents or extra components, so the answer is C.

1.11 AWS Backup

AWS Backup is a fully managed service that centralizes and automates data protection across AWS services, in the cloud and on premises. With it you configure backup policies once and monitor activity for your AWS resources in one place; it consolidates backup tasks that were previously performed service by service and removes the need for custom scripts and manual processes.

1.11.1 Key features

  • Cross-account and cross-Region backups (usable for DR)
  • Supported services: EC2, EBS, S3, RDS, Aurora, DynamoDB, DocumentDB, Neptune, EFS, FSx, Storage Gateway
  • Centralized backup management, policy-based backups, and tag-based backup policies
  • Backup Plans: a backup plan is a policy expression that defines when and how to back up your AWS resources
  • Backup Vault Lock: prevents backups from being deleted
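A backup plan is ultimately just a policy document. Below is a sketch of the payload you might pass to AWS Backup's `create_backup_plan` for a daily backup with a cross-Region copy for DR; the vault names and the account ID and Region inside the ARN are made-up placeholders:

```python
import json

# Sketch of an AWS Backup plan: daily backups kept 35 days, with each
# recovery point copied to a vault in a second Region for DR.
backup_plan = {
    "BackupPlanName": "daily-with-dr-copy",
    "Rules": [
        {
            "RuleName": "daily-0500-utc",
            "TargetBackupVaultName": "primary-vault",   # placeholder vault
            "ScheduleExpression": "cron(0 5 ? * * *)",  # 05:00 UTC every day
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [
                {
                    # Cross-Region copy: the DR vault's ARN (placeholder values).
                    "DestinationBackupVaultArn":
                        "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    ],
}

# The payload is plain JSON-serializable data.
print(json.dumps(backup_plan)[:60])
```

The `CopyActions` block is what makes AWS Backup the low-effort answer to "back up to another Region" questions like the one below: the cross-Region copy is declared in the plan rather than scripted.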

例题:A company has an application in the AWS Cloud. The application runs on a fleet of 20 Amazon EC2 instances. The EC2 instances are persistent and store data on multiple attached Amazon Elastic Block Store (Amazon EBS) volumes.
The company must maintain backups in a separate AWS Region. The company must be able to recover the EC2 instances and their configuration within 1 business day, with loss of no more than 1 day’s worth of data. The company has limited staff and needs a backup solution that optimizes operational efficiency and cost. The company already has created an AWS CloudFormation template that can deploy the required network configuration in a secondary Region.
Which solution will meet these requirements?
A. Create a second CloudFormation template that can recreate the EC2 instances in the secondary Region. Run daily multivolume snapshots by using AWS Systems Manager Automation runbooks. Copy the snapshots to the secondary Region. In the event of a failure launch the CloudFormation templates, restore the EBS volumes from snapshots, and transfer usage to the secondary Region.
B. Use Amazon Data Lifecycle Manager (Amazon DLM) to create daily multivolume snapshots of the EBS volumes. In the event of a failure, launch the CloudFormation template and use Amazon DLM to restore the EBS volumes and transfer usage to the secondary Region.
C. Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.
D. Deploy EC2 instances of the same size and configuration to the secondary Region. Configure AWS DataSync daily to copy data from the primary Region to the secondary Region. In the event of a failure, launch the CloudFormation template and transfer usage to the secondary Region.
Answer: C
Explanation: the EC2 instances and their attached EBS volumes need DR in another Region with daily recovery points and minimal operational burden; AWS Backup with a cross-Region vault copy does this natively. Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/integrate-cloudformation-with-aws-backup.html

2 Disaster Recovery

Disaster Recovery (DR) means restoring service after a disaster, i.e., what we usually just call DR. Before looking at DR strategies, two concepts need to be clear.

2.1 RPO and RTO

  • RPO (Recovery Point Objective): the point in time to which data can be restored, i.e., how much data loss is acceptable (generally determined by backup frequency)
  • RTO (Recovery Time Objective): how long after a failure it takes to restore service (generally determined by high availability and by whether standbys are cold or hot)
    Note: these come up constantly on the exam. DR questions state RPO and RTO requirements, and the correct solution is chosen against these two metrics.
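A concrete way to reason about the two metrics: with periodic backups, worst-case data loss is roughly one backup interval, while RTO is driven by how long the restore takes. A toy check:

```python
def meets_requirements(backup_every_h: float, restore_time_h: float,
                       rpo_h: float, rto_h: float) -> bool:
    """True if a periodic-backup scheme satisfies the stated RPO/RTO.

    Worst-case data loss is about one backup interval (a failure can hit
    just before the next backup); recovery time is the time to stand the
    system back up from the latest backup.
    """
    return backup_every_h <= rpo_h and restore_time_h <= rto_h

# Daily backups restored in ~4 hours: fine for RPO=24h / RTO=8h ...
print(meets_requirements(24, 4, rpo_h=24, rto_h=8))   # True
# ... but a 1-hour RPO demands far more frequent replication.
print(meets_requirements(24, 4, rpo_h=1, rto_h=8))    # False
```

This is the mental model behind the four strategies below: tighter RPO means more frequent (ultimately continuous) replication, tighter RTO means warmer (ultimately always-on) standby resources.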

2.2 The Four DR Strategies

2.2.1 Backup and Restore

All resources in the DR Region are cold, and the data backup may be up to 24 hours old.

  • RPO: anywhere from a day to a week
  • RTO: relatively long, possibly several hours
  • Pros: lowest cost
  • Cons: both recovery time and data loss are high

2.2.2 Pilot Light

The core systems (such as databases) run continuously; secondary or quick-to-start services stay cold, and data is replicated in near real time.

  • RPO: lower than Backup and Restore, from minutes to hours
  • RTO: shorter than Backup and Restore, from minutes to tens of minutes
  • Pros: faster recovery than Backup and Restore
  • Cons: some management overhead (keeping the core systems warm)

2.2.3 Warm Standby

整体系统以最小资源运行在灾备区,当出现灾难时,自动切换和扩展资源。

  • RPO:比较低的RPO,可能几分钟
  • RTO:比较低的RTO,可能几分钟
  • 优点:恢复比Pilot Light更快
  • 缺点:需要成本较高(维护整套系统最小资源的热备)

2.2.4 Multi Site/Hot Site Approach

在灾备区运行一套与主区域完全相同、满负载的系统,当出现灾难时自动切换。

  • RPO:非常低的RPO,可能几秒钟
  • RTO:非常低的RTO,可能几秒钟
  • 优点:恢复比Warm Standby更快
  • 缺点:需要成本高(维护整套系统资源的热备在灾备区)
    当所有资源都在云上时,同样适用此方案:在另一个区域运行一套完全相同的系统,通过Route 53健康检查等机制自动切换流量。
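上述4种方案的RPO/RTO量级与选型逻辑可以用一个简单的选择函数来体现(时间量级为经验值示意,并非AWS官方数字):

```python
from datetime import timedelta

# 4种DR方案的大致RPO/RTO量级(经验值,仅作选型示意),按成本从高到低排列
STRATEGIES = [
    ("Multi Site/Hot Site", timedelta(seconds=30), timedelta(seconds=30)),
    ("Warm Standby",        timedelta(minutes=5),  timedelta(minutes=5)),
    ("Pilot Light",         timedelta(hours=1),    timedelta(minutes=30)),
    ("Backup and Restore",  timedelta(days=1),     timedelta(hours=4)),
]

def cheapest_strategy(rpo_target: timedelta, rto_target: timedelta):
    """从成本最低的方案开始找,返回第一个能同时满足RPO和RTO的方案。"""
    for name, rpo, rto in reversed(STRATEGIES):
        if rpo <= rpo_target and rto <= rto_target:
            return name
    return None  # 任何单一方案都无法满足,需要更激进的架构

print(cheapest_strategy(timedelta(days=7), timedelta(days=1)))
print(cheapest_strategy(timedelta(minutes=5), timedelta(minutes=10)))
```

这正是考试中的典型解题套路:在满足题目RPO/RTO的前提下选成本最低(MOST cost-effectively)的方案。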

2.3 AWS Elastic Disaster Recovery

AWS Elastic Disaster Recovery(DRS)使用低成本存储、最小化的计算资源和时间点恢复(point-in-time recovery),为本地和云上的应用程序提供快速可靠的恢复,最大限度减少停机时间和数据丢失。在源服务器上安装AWS Replication Agent后,DRS会把受支持操作系统上运行的应用持续复制到AWS;通过AWS管理控制台即可配置复制和启动设置、监控复制进度,并随时启动实例进行演练(drill)或正式恢复。简单来说,就是一个从本地(或其它云)到AWS的DR方案。
一些典型方案参考:https://docs.aws.amazon.com/drs/latest/userguide/Network-diagrams.html

例题:A company wants to use AWS to create a business continuity solution in case the company’s main on-premises application fails. The application runs on physical servers that also run other applications. The on-premises application that the company is planning to migrate uses a MySQL database as a data store. All the company’s on-premises applications use operating systems that are compatible with Amazon EC2.
Which solution will achieve the company’s goal with the LEAST operational overhead?
A. Install the AWS Replication Agent on the source servers, including the MySQL servers. Set up replication for all servers. Launch test instances for regular drills. Cut over to the test instances to fail over the workload in the case of a failure event.
B. Install the AWS Replication Agent on the source servers, including the MySQL servers. Initialize AWS Elastic Disaster Recovery in the target AWS Region. Define the launch settings. Frequently perform failover and fallback from the most recent point in time.
C. Create AWS Database Migration Service (AWS DMS) replication servers and a target Amazon Aurora MySQL DB cluster to host the database. Create a DMS replication task to copy the existing data to the target DB cluster. Create a local AWS Schema Conversion Tool (AWS SCT) change data capture (CDC) task to keep the data synchronized. Install the rest of the software on EC2 instances by starting with a compatible base AMI.
D. Deploy an AWS Storage Gateway Volume Gateway on premises. Mount volumes on all on-premises servers. Install the application and the MySQL database on the new volumes. Take regular snapshots. Install all the software on EC2 Instances by starting with a compatible base AMI. Launch a Volume Gateway on an EC2 instance. Restore the volumes from the latest snapshot. Mount the new volumes on the EC2 instances in the case of a failure event.
答案:B
答案解析:题目要求在AWS上面做一个部署应用程序和MySQL的DR。参考:https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html

例题:A company is running a critical stateful web application on two Linux Amazon EC2 instances behind an Application Load Balancer (ALB) with an Amazon RDS for MySQL database. The company hosts the DNS records for the application in Amazon Route 53. A solutions architect must recommend a solution to improve the resiliency of the application.
The solution must meet the following objectives:
– Application tier: RPO of 2 minutes. RTO of 30 minutes
– Database tier: RPO of 5 minutes. RTO of 30 minutes
The company does not want to make significant changes to the existing application architecture. The company must ensure optimal latency after a failover.
Which solution will meet these requirements?
A. Configure the EC2 instances to use AWS Elastic Disaster Recovery. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint, and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.
B. Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Configure RDS automated backups. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Create an AWS Global Accelerator endpoint, and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.
C. Create a backup plan in AWS Backup for the EC2 instances and RDS DB instance. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Configure an Amazon CloudFront distribution in front of the ALB. Update DNS records to point to CloudFront.
D. Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint, and associate the endpoint with the ALBs.
答案:A
答案解析:题目要求做DR,并且有明确的RPO和RTO要求。
AWS Elastic Disaster Recovery (DRS) vs Amazon DLM vs AWS Backup:
– 只需要自动创建、保留和删除EBS快照时,使用DLM。
– 需要在单一位置集中管理和监控多个AWS服务(包括EBS卷)的备份时,使用AWS Backup。
– 本题应用层RPO只有2分钟,定时快照/备份类方案(B、C、D)无法满足,只有DRS的持续复制可以做到;同时Global Accelerator能在故障切换后保证最优延迟。因此选A。

例题:A company needs to build a disaster recovery (DR) solution for its ecommerce website. The web application is hosted on a fleet of t3.large Amazon EC2 instances and uses an Amazon RDS for MySQL DB instance. The EC2 instances are in an Auto Scaling group that extends across multiple Availability Zones.
In the event of a disaster, the web application must fail over to the secondary environment with an RPO of 30 seconds and an RTO of 10 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Recover the EC2 instances from the latest EC2 backup. Use an Amazon Route 53 geolocation routing policy to automatically fail over to the DR Region in the event of a disaster.
B. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the EC2 instances at the minimum capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster. Increase the desired capacity of the Auto Scaling group.
C. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Manually restore the backed-up data on new instances. Use an Amazon Route 53 simple routing policy to automatically fail over to the DR Region in the event of a disaster.
D. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create an Amazon Aurora global database. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the Auto Scaling group of EC2 instances at full capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster.
答案:B
答案解析:题目要求为运行在EC2和RDS上的应用做跨区域DR,RPO 30秒、RTO 10分钟,且MOST cost-effectively。A和C选项中每30秒执行一次备份的cron方案不现实(备份本身耗时就远超30秒),无法满足RPO;D选项在DR区域满容量运行ASG并使用Aurora global database,成本高于B选项的最小容量运行+跨区域只读副本。因此最佳选项为B。

例题:A company wants to use AWS for disaster recovery for an on-premises application. The company has hundreds of Windows-based servers that run the application. All the servers mount a common share.
The company has an RTO of 15 minutes and an RPO of 5 minutes. The solution must support native failover and fallback capabilities.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Storage Gateway File Gateway. Schedule daily Windows server backups. Save the data to Amazon S3. During a disaster, recover the on-premises servers from the backup. During failback, run the on-premises servers on Amazon EC2 instances.
B. Create a set of AWS CloudFormation templates to create infrastructure. Replicate all data to Amazon Elastic File System (Amazon EFS) by using AWS DataSync. During a disaster, use AWS CodePipeline to deploy the templates to restore the on-premises servers. Fail back the data by using DataSync.
C. Create an AWS Cloud Development Kit (AWS CDK) pipeline to stand up a multi-site active-active environment on AWS. Replicate data into Amazon S3 by using the s3 sync command. During a disaster, swap DNS endpoints to point to AWS. Fail back the data by using the s3 sync command.
D. Use AWS Elastic Disaster Recovery to replicate the on-premises servers. Replicate data to an Amazon FSx for Windows File Server file system by using AWS DataSync. Mount the file system to AWS servers. During a disaster, fail over the on-premises servers to AWS. Fail back to new or existing servers by using Elastic Disaster Recovery.
答案:D
答案解析:题目要求在AWS为本地数据中心建立DR,且必须支持原生的failover和failback能力。A选项每日备份达不到RTO 15分钟/RPO 5分钟的要求;B选项CodePipeline用于部署应用,不提供原生failover/failback;C选项CDK和s3 sync同样不是容灾工具,多站点active-active成本也更高。D选项DRS原生支持failover与failback,FSx for Windows承载共享目录,因此选D。

2.4 一些DR技巧

  • backup
    1)EBS使用snapshot
    2)S3可以使用生命周期或者Glacier
    3)从本地数据中心到AWS,可以使用Snowball或者Storage Gateway
  • 高可用
    1)可以使用Route53做跨区域切换
    2)使用RDS Multi-AZ、S3等本身自带多可用区灾备
    3)同时配置Direct Connect与VPN,互为备份链路
  • 数据复制
    1)使用RDS replication(跨区域)或者Aurora+Global database
    2)本地数据中心可以使用DMS灾备到云上
    3)可以使用Storage Gateway
  • 自动化
    1)使用CloudFormation或者Beanstalk快速自动重建服务
    2)使用CloudWatch自动恢复EC2
    3)自定义Lambda操作自动化恢复
  • Chaos(混沌测试)
    1)当不确定哪些地方需要灾备时,可以使用混沌测试,直接在生产环境中注入故障进行实验(如AWS Fault Injection Simulator)
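以上面的backup技巧为例,"最近一次快照是否仍满足RPO"可以用如下思路校验(纯本地模拟,不调用AWS API;实际使用时可把snapshot_times换成describe_snapshots之类接口的返回时间,函数名为示意):

```python
from datetime import datetime, timedelta

def rpo_breached(snapshot_times, rpo: timedelta, now=None) -> bool:
    """若最近一次快照距当前时间超过RPO,说明此刻发生故障
    将丢失超出容忍范围的数据。snapshot_times为历次快照完成时间列表。"""
    now = now or datetime.now()
    if not snapshot_times:
        return True  # 从未备份,必然违反RPO
    return now - max(snapshot_times) > rpo

now = datetime(2024, 1, 2, 12, 0)
snaps = [datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 2, 0, 0)]
# 最近一次快照是12小时前:满足24小时RPO,但违反1小时RPO
print(rpo_breached(snaps, timedelta(hours=24), now))  # False
print(rpo_breached(snaps, timedelta(hours=1), now))   # True
```

把这种校验放进定时的Lambda里告警,就是"自动化"技巧的一种落地方式。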

3 考试出现的云迁移

例题:A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with the number of files rapidly declining after business hours.
What is the MOST cost-effective migration recommendation?
A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.
C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
答案:D
答案解析:题目关键词:migrate、queue、1 hour to process、significantly higher。这是一个处理文件的应用,工作时段任务量明显更高。首先每个文件最长要处理1小时,超过Lambda的最大执行时间(15分钟),排除A和C选项。B选项每来一条消息就新建一个EC2实例,启动慢且无法批量伸缩。D选项用Auto Scaling group按SQS队列长度伸缩EC2,既省成本又能应对高峰,因此答案是D。

例题:A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.
What should the solutions architect do to meet these requirements?
A. Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.
B. Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF) format. Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.
C. Configure AWS Storage Gateway for files service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.
D. Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and create an AMI. Launch an EC2 instance that is based on the AMI.
答案:B
答案解析:VMware的迁移参照:https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html

例题:A company has an on-premises website application that provides real estate information for potential renters and buyers. The website uses a Java backend and a NoSQL MongoDB database to store subscriber data.
The company needs to migrate the entire application to AWS with a similar structure. The application must be deployed for high availability, and the company cannot make changes to the application.
Which solution will meet these requirements?
A. Use an Amazon Aurora DB cluster as the database for the subscriber data. Deploy Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones for the Java backend application.
B. Use MongoDB on Amazon EC2 instances as the database for the subscriber data. Deploy EC2 instances in an Auto Scaling group in a single Availability Zone for the Java backend application.
C. Configure Amazon DocumentDB (with MongoDB compatibility) with appropriately sized instances in multiple Availability Zones as the database for the subscriber data. Deploy Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones for the Java backend application.
D. Configure Amazon DocumentDB (with MongoDB compatibility) in on-demand capacity mode in multiple Availability Zones as the database for the subscriber data. Deploy Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones for the Java backend application.
答案:C
答案解析:题目关键词:high availability,cannot make changes。将Java+MongoDB架构迁移到AWS并保持高可用,且后端程序不做任何修改。首先数据库应选择与MongoDB兼容的服务,排除A选项;B选项把MongoDB部署在单可用区的EC2上,不满足高可用;D选项中DocumentDB没有on-demand capacity模式(只有按需实例模式)。因此选择C。

例题:A company is refactoring its on-premises order-processing platform in the AWS Cloud. The platform includes a web front end that is hosted on a fleet of VMs, RabbitMQ to connect the front end to the backend, and a Kubernetes cluster to run a containerized backend system to process the orders. The company does not want to make any major changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
B. Create a custom AWS Lambda runtime to mimic the web server environment. Create an Amazon API Gateway API to replace the front-end web servers. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
C. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Install Kubernetes on a fleet of different EC2 instances to host the order-processing backend.
D. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up an Amazon Simple Queue Service (Amazon SQS) queue to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
答案:A
答案解析:题目使用VM、RabbitMQ和Kubernetes,且不想做大的改动。对应到云上即AMI+ASG、Amazon MQ(支持RabbitMQ协议)和EKS。B选项改成Lambda+API Gateway代码改动太大;C选项在EC2上自建Kubernetes运维开销高;D选项SQS无法兼容RabbitMQ协议。因此答案是A。

例题:A company that tracks medical devices in hospitals wants to migrate its existing storage solution to the AWS Cloud. The company equips all of its devices with sensors that collect location and usage information. This sensor data is sent in unpredictable patterns with large spikes. The data is stored in a MySQL database running on premises at each hospital. The company wants the cloud storage solution to scale with usage.
The company’s analytics team uses the sensor data to calculate usage by device type and hospital. The team needs to keep analysis tools running locally while fetching data from the cloud. The team also needs to use existing Java application and SQL queries with as few changes as possible.
How should a solutions architect meet these requirements while ensuring the sensor data is secure?
A. Store the data in an Amazon Aurora Serverless database. Serve the data through a Network Load Balancer (NLB). Authenticate users using the NLB with credentials stored in AWS Secrets Manager.
B. Store the data in an Amazon S3 bucket. Serve the data through Amazon QuickSight using an IAM user authorized with AWS Identity and Access Management (IAM) with the S3 bucket as the data source.
C. Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with AWS Identity and Access Management (IAM) and the AWS Secrets Manager ARN.
D. Store the data in an Amazon S3 bucket. Serve the data through Amazon Athena using AWS PrivateLink to secure the data in transit.
答案:C
答案解析:题目要求把传感器数据迁移到云上并随用量伸缩,且本地分析团队要继续使用现有Java程序和SQL查询、改动尽量少。B和D选项基于S3的QuickSight/Athena与现有MySQL应用差异大,改动不符合要求;A选项NLB本身不具备用户认证能力,无法用Secrets Manager凭证对用户鉴权。C选项Aurora Serverless按用量伸缩,Data API配合IAM与Secrets Manager即可安全访问,因此选C。

例题:A company is running a large application on premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database. The company wants to migrate this application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make code changes to support the migration.
Which design is the LEAST complex to manage after the migration?
A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.
答案:C
答案解析:题目要求把.NET Web程序和Cassandra数据库迁移上云,且希望减少容量管理和维护工作、迁移后复杂度最低。Web层选择Beanstalk比自管ASG+EC2更省维护,排除A、D;B选项把Cassandra部署在EC2上仍需大量底层维护;C选项用完全托管的DynamoDB替换Cassandra(开发团队愿意改代码)。因此答案选C。

例题:A company’s site reliability engineer is performing a review of Amazon FSx for Windows File Server deployments within an account that the company acquired.
Company policy states that all Amazon FSx file systems must be configured to be highly available across Availability Zones.
During the review, the site reliability engineer discovers that one of the Amazon FSx file systems used a deployment type of Single-AZ. A solutions architect needs to minimize downtime while aligning this Amazon FSx file system with company policy.
What should the solutions architect do to meet these requirements?
A. Reconfigure the deployment type to Multi-AZ for this Amazon FSx file system.
B. Create a new Amazon FSx file system with a deployment type of Multi-AZ. Use AWS DataSync to transfer data to the new Amazon FSx file system. Point users to the new location.
C. Create a second Amazon FSx file system with a deployment type of Single-AZ. Use AWS DataSync to keep the data in sync. Switch users to the second Amazon FSx file system in the event of failure.
D. Use the AWS Management Console to take a backup of the Amazon FSx file system. Create a new Amazon FSx file system with a deployment type of Multi- AZ. Restore the backup to the new Amazon FSx file system. Point users to the new location.
答案:B
答案解析:需要在尽量不停机的前提下把Single-AZ的FSx文件系统改造成Multi-AZ。A选项FSx不支持把已有文件系统的部署类型直接从Single-AZ改为Multi-AZ;C选项仍是Single-AZ,不符合公司高可用策略;D选项备份/恢复切换期间停机时间较长。B选项新建Multi-AZ文件系统并用DataSync同步后切换,停机最短,因此选B。

例题:An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic.
Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.
答案:C
答案解析:题目要求把本地数据中心103个低流量应用迁移到AWS,且LOWEST infrastructure costs。A和D选项都是103个单实例Beanstalk环境,背后仍是103个EC2实例;B选项同样需要103个EC2实例,成本都很高。C选项把这些小流量应用容器化后共享一个小型ECS集群和一个ALB,资源密度最高、成本最低,因此选C。

例题:A company runs a Java application that has complex dependencies on VMs that are in the company’s data center. The application is stable, but the company wants to modernize the technology stack. The company wants to migrate the application to AWS and minimize the administrative overhead to maintain the servers.
Which solution will meet these requirements with the LEAST code changes?
A. Migrate the application to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Grant the ECS task execution role permission to access the ECR image repository. Configure Amazon ECS to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
B. Migrate the application code to a container that runs in AWS Lambda. Build an Amazon API Gateway REST API with Lambda integration. Use API Gateway to interact with the application.
C. Migrate the application to Amazon Elastic Kubernetes Service (Amazon EKS) on EKS managed node groups by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Give the EKS nodes permission to access the ECR image repository. Use Amazon API Gateway to interact with the application.
D. Migrate the application code to a container that runs in AWS Lambda. Configure Lambda to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
答案:A
答案解析:题目要求把Java应用迁移上云,实现modernize technology stack并minimize administrative overhead,且LEAST code changes。B和D选项都要求把代码改造成运行在Lambda中的容器,代码改动大;C选项EKS托管节点组仍需管理节点,运维开销高于Fargate。A选项用App2Container把应用打包进ECR、运行在ECS on Fargate上,几乎不改代码也无需管理服务器,因此选A。

例题:A company wants to containerize a multi-tier web application and move the application from an on-premises data center to AWS. The application includes web, application, and database tiers. The company needs to make the application fault tolerant and scalable. Some frequently accessed data must always be available across application servers. Frontend web servers need session persistence and must scale to meet increases in traffic.
Which solution will meet these requirements with the LEAST ongoing operational overhead?
A. Run the application on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Elastic File System (Amazon EFS) for data that is frequently accessed between the web and application tiers. Store the frontend web server session data in Amazon Simple Queue Service (Amazon SQS).
B. Run the application on Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use Amazon ElastiCache for Redis to cache frontend web server session data. Use Amazon Elastic Block Store (Amazon EBS) with Multi-Attach on EC2 instances that are distributed across multiple Availability Zones.
C. Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Use ReplicaSets to run the web servers and applications. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system across all EKS pods to store frontend web server session data.
D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Run the web servers and application as Kubernetes deployments in the EKS cluster. Store the frontend web server session data in an Amazon DynamoDB table. Create an Amazon Elastic File System (Amazon EFS) volume that all applications will mount at the time of deployment.
答案:D
答案解析:题目要求将多层Web应用容器化迁移上云,需要跨应用服务器共享热点数据、前端会话持久化,且LEAST ongoing operational overhead。A选项用SQS存储session不合理;B选项EBS Multi-Attach仅限同一可用区,不满足容错;C选项把session放在EFS上,延迟和成本都不理想。D选项用DynamoDB存session、EFS共享频繁访问的数据,职责清晰,因此D选项最合适。

例题:A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by migrating to AWS.
The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology. Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.
Which is the MOST cost-effective solution?
A. Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.
B. Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
C. Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.
D. Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.
答案:B
答案解析:题目希望把20个低频但业务关键的Java/Node.js应用迁移上云,统一部署方式且成本最低。A选项Lambda不合适:最重要的billing report常常要运行数小时,超过Lambda 15分钟上限;C选项为每个应用单独建Beanstalk环境,每个环境至少一个实例,成本高;D选项按GroupMaxSize购买3年预留实例,对低频负载是浪费。B选项在共享EC2集群上以ECS任务承载各应用并按内存伸缩,密度最高、成本最低,因此选B。

4 考试中出现DR

例题:A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1 Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster recover capabilities in an active-passive configuration with the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as part of an Auto Scaling group spanning both VPCs and served by the ALB.
B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB. Create an Amazon Route 53 record that points to the ALB.
D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability across both Regions.
答案:B
答案解析:需要在另一个区域做active-passive DR。ALB和Auto Scaling group都无法跨区域,排除A和C选项;D选项在每个区域建独立的Route 53记录,无法实现主备自动切换。B选项在两个区域各自部署ALB+ASG,并用Route 53的failover路由策略加健康检查实现跨区域主备切换,因此正确答案为B。

例题:A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application’s user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing.
Which solution will provide a consistent user experience that will allow the application and database tiers to scale?
A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled.
D. Enable Aurora Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
答案:C
答案解析:B和D选项对Aurora writer做Auto Scaling并不成立,Aurora Auto Scaling只针对Aurora Replicas,题目的读扩展用Replicas即可,排除B、D;A选项的NLB不支持least outstanding requests路由算法(这是ALB的算法)。C选项ALB+轮询+sticky sessions能为有状态应用提供一致体验,因此选C。

例题:A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state.
A solutions architect needs to improve the application’s architecture so that the infrastructure can automatically recover from failure with the least possible downtime.
Which combination of steps will meet these requirements? (Choose three.)
A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios.
D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances.
F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.
答案:ADF
Explanation: The goal is automatic recovery with the least downtime, i.e. high availability for every tier. For the EC2 tier (A vs. B), an Elastic Load Balancer plus an Auto Scaling group with a minimum capacity of two provides redundancy; "unlimited mode" in B concerns CPU burst credits, not recovery. For the database tier (C vs. D), an RDS Multi-AZ deployment fails over automatically across Availability Zones, whereas a read replica in the same AZ does not survive an AZ failure and requires manual promotion. For the cache tier (E vs. F), an ElastiCache for Redis replication group with Multi-AZ enabled provides automatic failover; Auto Scaling groups do not manage ElastiCache nodes. Hence ADF.
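A minimal sketch of the cache-tier fix (a Redis replication group with automatic failover across AZs). The node type, group ID, and function name are illustrative assumptions; the dict matches the shape expected by boto3's `elasticache.create_replication_group`.

```python
def redis_ha_params(group_id: str) -> dict:
    """Parameters for an HA Redis replication group: one primary plus one
    replica, with automatic cross-AZ failover enabled."""
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "HA Redis for the critical app",
        "Engine": "redis",
        "CacheNodeType": "cache.t3.medium",   # illustrative sizing
        "NumCacheClusters": 2,                # primary + 1 replica; Multi-AZ needs >= 1 replica
        "AutomaticFailoverEnabled": True,     # must be on before Multi-AZ
        "MultiAZEnabled": True,
    }


params = redis_ha_params("app-cache")
```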

例题:A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record.
The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.
What should a solutions architect recommend to meet these requirements?
A. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.
D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
答案:B
Explanation: The requirement is an RTO under 15 minutes on a limited budget, so the best fit is low-cost active-passive with automated failover. Option A uses latency-based routing, which is an active-active pattern, and a 5XX alarm alone does not redirect traffic. Option C keeps a fully scaled standby and downgrades managed replication to snapshot copying, which the budget rules out. Option D's Global Accelerator with equal-weighted targets is also effectively active-active. Option B combines a Route 53 health check, a failover routing policy, and a Lambda function that promotes the read replica and scales up the Auto Scaling group. Hence B.
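The failover-record half of this pattern can be sketched as a Route 53 change batch: a PRIMARY record carrying the health check and a SECONDARY record for the backup Region. Record names, DNS targets, and the health-check ID are illustrative; you would pass the batch to `route53.change_resource_record_sets`.

```python
def failover_change_batch(domain, primary_dns, secondary_dns, health_check_id):
    """Two failover records: Route 53 answers with PRIMARY while its health
    check passes, and switches to SECONDARY when it fails."""
    def record(failover, target):
        rec = {
            "Name": domain,
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": f"app-{failover.lower()}",
            "Failover": failover,
            "ResourceRecords": [{"Value": target}],
        }
        if failover == "PRIMARY":
            rec["HealthCheckId"] = health_check_id
        return rec

    return {"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": record("PRIMARY", primary_dns)},
        {"Action": "UPSERT", "ResourceRecordSet": record("SECONDARY", secondary_dns)},
    ]}


batch = failover_change_batch(
    "app.example.com.",
    "primary-alb.us-east-1.elb.amazonaws.com",   # illustrative targets
    "backup-alb.us-west-2.elb.amazonaws.com",
    "hc-1234",                                    # illustrative health-check ID
)
```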

例题:A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region.
Which solution will meet these requirements?
A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.
B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
答案:C
Explanation: The stack is Route 53 + API Gateway + Lambda + DynamoDB, and it needs cross-Region failover. Route 53 is the right place to fail over, and the API Gateway API and Lambda functions must be deployed in both Regions, which narrows it to B and C. The difference is multivalue answer vs. failover routing: multivalue answer returns up to eight healthy records in no particular order, so clients may keep hitting either Region, while a failover record deterministically routes to the secondary only when the primary's health check fails. Hence C.
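The data-tier half of this answer (converting a DynamoDB table into a global table) boils down to an `UpdateTable` call adding a replica Region. The table and Region names are illustrative; this is the request shape for `dynamodb.update_table` with global tables version 2019.11.21.

```python
def add_replica_region(table_name: str, region: str) -> dict:
    """UpdateTable request that adds a replica Region, which converts the
    table into a global table (version 2019.11.21)."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [{"Create": {"RegionName": region}}],
    }


params = add_replica_region("weather-data", "us-west-2")
```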

例题:A financial services company in North America plans to release a new online web application to its customers on AWS. The company will launch the application in the us-east-1 Region on Amazon EC2 instances. The application must be highly available and must dynamically scale to meet user traffic. The company also wants to implement a disaster recovery environment for the application in the us-west-1 Region by using active-passive failover.
Which solution will meet these requirements?
A. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB.
B. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks to ensure high availability between Regions.
C. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks and configure a failover routing policy for each record.
D. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB. Create an Amazon Route 53 hosted zone. Create a record for the ALB.
答案:C
Explanation: The requirement is active-passive DR in us-west-1 with automatic failover. An ALB cannot extend across Regions (or across peered VPCs), which rules out A and D. Cross-Region failover belongs in Route 53, and option B only creates records with health checks; without a failover routing policy, traffic would not shift automatically. Hence C.

例题:A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.
A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours.
Which solution will meet these requirements MOST cost-effectively?
A. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.
B. Use AWS Database Migration Service (AWS DMS), Amazon EventBridge (Amazon CloudWatch Events), and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge (CloudWatch Events), and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
C. Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
D. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.
答案:C
Explanation: The targets are an RPO of 2 hours and an RTO of 4 hours at the lowest cost. With objectives this relaxed, continuous replication via an Aurora global database and DynamoDB global tables (options A and D) delivers far lower RPO/RTO than required at a higher price, and option B is a complex do-it-yourself replication pipeline. Scheduled AWS Backup copies to a secondary Region (at least every 2 hours) plus Route 53 failover routing meet both objectives most cheaply. Hence C.
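A hedged sketch of the backup-plan rule behind this answer: a 2-hour schedule (matching the 2-hour RPO) with a cross-Region copy action. Vault names, ARN, and retention are illustrative; the dict follows the rule shape of AWS Backup's `CreateBackupPlan` API.

```python
def backup_rule(vault_name: str, dest_vault_arn: str, retention_days: int = 35) -> dict:
    """One AWS Backup plan rule: back up every 2 hours and copy each
    recovery point to a vault in the DR Region."""
    return {
        "RuleName": "every-2h-with-dr-copy",
        "TargetBackupVaultName": vault_name,
        "ScheduleExpression": "cron(0 */2 * * ? *)",   # every 2 hours -> 2h RPO
        "Lifecycle": {"DeleteAfterDays": retention_days},
        "CopyActions": [{
            "DestinationBackupVaultArn": dest_vault_arn,
            "Lifecycle": {"DeleteAfterDays": retention_days},
        }],
    }


rule = backup_rule(
    "primary-vault",
    "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",  # illustrative
)
```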

例题:A company has a critical application in which the data tier is deployed in a single AWS Region. The data tier uses an Amazon DynamoDB table and an Amazon Aurora MySQL DB cluster. The current Aurora MySQL engine version supports a global database. The application tier is already deployed in two Regions.
Company policy states that critical applications must have application tier components and data tier components deployed across two Regions. The RTO and RPO must be no more than a few minutes each. A solutions architect must recommend a solution to make the data tier compliant with company policy.
Which combination of steps will meet these requirements? (Choose two.)
A. Add another Region to the Aurora MySQL DB cluster
B. Add another Region to each table in the Aurora MySQL DB cluster
C. Set up scheduled cross-Region backups for the DynamoDB table and the Aurora MySQL DB cluster
D. Convert the existing DynamoDB table to a global table by adding another Region to its configuration
E. Use Amazon Route 53 Application Recovery Controller to automate database backup and recovery to the secondary Region
答案:AD
Explanation: Both the DynamoDB table and the Aurora MySQL cluster must be present in two Regions with RTO and RPO of only a few minutes, which effectively requires continuous replication (at least a warm standby), not scheduled backups. Since the current Aurora engine version supports a global database, adding a second Region to the cluster (A) is sufficient, and the DynamoDB table becomes compliant by converting it to a global table with a replica Region (D). Hence AD.
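The Aurora half of this answer can be sketched as the `create_db_cluster` parameters you would run in the secondary Region: attaching the new cluster to the existing global cluster turns on storage-level cross-Region replication. Identifiers are illustrative assumptions.

```python
def secondary_cluster_params(global_cluster_id: str) -> dict:
    """create_db_cluster parameters for the secondary Region; the
    GlobalClusterIdentifier attaches it to the Aurora global database."""
    return {
        "DBClusterIdentifier": f"{global_cluster_id}-secondary",
        "Engine": "aurora-mysql",
        "GlobalClusterIdentifier": global_cluster_id,
    }


params = secondary_cluster_params("critical-app-global")
```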

例题:A large company has a business-critical application that runs in a single AWS Region. The application consists of multiple Amazon EC2 instances and an Amazon RDS Multi-AZ DB instance. The EC2 instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. A solutions architect is implementing a disaster recovery (DR) plan for the application. The solutions architect has created a pilot light application deployment in a new Region, which is referred to as the DR Region. The DR environment has an Auto Scaling group with a single EC2 instance and a read replica of the RDS DB instance. The solutions architect must automate a failover from the primary application environment to the pilot light environment in the DR Region.
Which solution meets these requirements with the MOST operational efficiency?
A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email subscription to the SNS topic that sends messages to the application owner. Upon notification, instruct a systems operator to sign in to the AWS Management Console and initiate failover operations for the application.
B. Create a cron task that runs every 5 minutes by using one of the application’s EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application services.
C. Create a cron task that runs every 5 minutes by using one of the application’s EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by adding EC2 instances to the Auto Scaling group.
D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.
答案:D
Explanation: The task is to automate failover of ASG + EC2 + RDS into a pilot light DR Region. A cron task on an EC2 instance in the primary Region (B and C) stops running exactly when that Region fails, so monitoring must live in the DR Region, via CloudWatch. Option A still requires a human operator to sign in and fail over; option D automates the whole path: the CloudWatch alarm notifies SNS, which invokes a Lambda function that promotes the read replica and scales the Auto Scaling group. Hence D.
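A minimal sketch of the SNS-invoked failover Lambda described in option D. Resource identifiers and capacities are illustrative; `promote_read_replica` and `update_auto_scaling_group` are real boto3 calls, and the clients are injectable so the logic can be exercised without AWS credentials.

```python
def dr_failover_handler(event, context, rds=None, autoscaling=None):
    """Promote the RDS read replica, then scale up the pilot light ASG.

    Identifiers ("app-db-replica", "app-dr-asg") are illustrative.
    """
    if rds is None or autoscaling is None:  # only reached when run in Lambda
        import boto3
        rds = rds or boto3.client("rds")
        autoscaling = autoscaling or boto3.client("autoscaling")

    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-dr-asg",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
    )
    return {"status": "failover-started"}
```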

例题:A company runs an ecommerce application in a single AWS Region. The application uses a five-node Amazon Aurora MySQL DB cluster to store information about customers and their recent orders. The DB cluster experiences a large number of write transactions throughout the day.
The company needs to replicate the data in the Aurora database to another Region to meet disaster recovery requirements. The company has an RPO of 1 hour.
Which solution will meet these requirements with the LOWEST cost?
A. Modify the Aurora database to be an Aurora global database. Create a second Aurora database in another Region.
B. Enable the Backtrack feature for the Aurora database. Create an AWS Lambda function that runs daily to copy the snapshots of the database to a backup Region.
C. Use AWS Database Migration Service (AWS DMS). Create a DMS change data capture (CDC) task that replicates the ongoing changes from the Aurora database to an Amazon S3 bucket in another Region.
D. Turn off automated Aurora backups. Configure Aurora backups with a backup frequency of 1 hour. Specify another Region as the destination Region. Select the Aurora database as the resource assignment.
答案:C
Explanation: The requirement is cross-Region replication for Aurora MySQL with an RPO of 1 hour at the lowest cost. Option A (Aurora global database) delivers a far lower RPO than needed at a much higher price. Option B copies snapshots only once a day, which misses the 1-hour RPO (and Backtrack is a single-Region rewind feature, not replication). Option D turns off automated Aurora backups, which risks data loss. A DMS change data capture task continuously replicating changes to S3 in another Region meets the RPO cheaply. Hence C.
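A hedged sketch of the DMS task behind this answer: a CDC-only replication task from the Aurora source endpoint to an S3 target endpoint in the DR Region. The ARNs and task name are illustrative; the dict follows the shape of `dms.create_replication_task`.

```python
import json


def cdc_task_params(instance_arn: str, source_arn: str, target_arn: str) -> dict:
    """create_replication_task parameters for CDC-only replication
    (ongoing changes, no initial full load) from Aurora to S3."""
    table_mappings = {"rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]}
    return {
        "ReplicationTaskIdentifier": "aurora-to-s3-cdc",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "cdc",   # change data capture only
        "TableMappings": json.dumps(table_mappings),
    }


params = cdc_task_params("arn:ri", "arn:src", "arn:tgt")  # illustrative ARNs
```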

例题:A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.
To meet regulatory and business requirements, the company must make the following changes for data backups:
– Backups must be retained based on custom daily, weekly, and monthly requirements.
– Backups must be replicated to at least one other AWS Region immediately after capture.
– The backup solution must provide a single source of backup status across the AWS environment.
– The backup solution must send immediate notifications upon failure of any resource backup.
Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Choose three.)
A. Create an AWS Backup plan with a backup rule for each of the retention requirements
B. Configure an AWS Backup plan to copy backups to another Region.
C. Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.
D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
F. Setup RDS snapshots on each database.
答案:ABD
Explanation: AWS Backup covers all four requirements natively with the least operational overhead: a backup plan with one rule per retention schedule (A), cross-Region copy configured directly in the plan (B), and an SNS topic attached to the plan that notifies on any job status other than BACKUP_JOB_COMPLETED (D). Options C and E rebuild functionality AWS Backup already provides, and F alone offers no central status view. Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html
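The notification piece of this answer can be sketched as an AWS Backup vault-notification request: subscribe the vault's failure events to an SNS topic so failed backup or copy jobs alert immediately. Vault and topic names are illustrative; this is the request shape for `backup.put_backup_vault_notifications`.

```python
def vault_notification_params(vault_name: str, topic_arn: str) -> dict:
    """put_backup_vault_notifications request: publish failed backup and
    copy jobs to SNS as soon as they occur."""
    return {
        "BackupVaultName": vault_name,
        "SNSTopicArn": topic_arn,
        "BackupVaultEvents": ["BACKUP_JOB_FAILED", "COPY_JOB_FAILED"],
    }


params = vault_notification_params(
    "central-vault",
    "arn:aws:sns:us-east-1:123456789012:backup-alerts",  # illustrative ARN
)
```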
