Implementing Microservices on AWS - AWS Whitepaper
Status:: 🟨
Links:: Amazon Web Services
Metadata
Authors:: Amazon Web Services
Title:: Implementing Microservices on AWS - AWS Whitepaper
Date:: 2023
Publisher:: Amazon Web Services
URL:: https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/microservices-on-aws.html
DOI::
Amazon Web Services. (2023). Implementing Microservices on AWS - AWS Whitepaper. Amazon Web Services. https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/microservices-on-aws.html
Notes & Annotations
Color-coded highlighting system used for annotations
📑 Annotations (imported on 2024-01-24#17:37:18)
Sometimes, business operations require multiple microservices to work together. If one step fails partway through, already-completed steps may need to be undone. The Saga pattern manages this by coordinating the operation as a sequence of local transactions, each paired with a compensating action that reverses it.
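A minimal orchestration-style sketch of the idea (my own, not from the whitepaper; the step names and print statements are hypothetical placeholders for real service calls): each step pairs a forward action with a compensating action, and a failure triggers the compensations of the completed steps in reverse order.

```python
# Orchestration-style Saga: each step pairs a forward action with a
# compensating action; on failure, completed steps are undone in
# reverse order. Service calls are stand-in print statements.

class SagaStep:
    def __init__(self, name, action, compensate):
        self.name, self.action, self.compensate = name, action, compensate

def run_saga(steps):
    completed = []
    try:
        for step in steps:
            step.action()
            completed.append(step)
    except Exception:
        for step in reversed(completed):  # compensating transactions
            step.compensate()
        raise

def fail(msg):
    raise RuntimeError(msg)

try:
    run_saga([
        SagaStep("reserve-inventory",
                 lambda: print("inventory reserved"),
                 lambda: print("inventory released")),
        SagaStep("charge-payment",
                 lambda: print("payment charged"),
                 lambda: print("payment refunded")),
        SagaStep("create-shipment",
                 lambda: fail("shipping unavailable"),  # simulated failure
                 lambda: print("shipment cancelled")),
    ])
except RuntimeError as exc:
    print("saga aborted:", exc)
```

Running this refunds the payment and releases the inventory (in that reverse order) when shipment creation fails. On AWS, this kind of coordination is often delegated to a managed orchestrator such as AWS Step Functions rather than hand-rolled loops.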
Microservices architecture can enhance cost optimization and sustainability. By breaking an application into smaller parts, you can scale up only the services that need more resources, reducing cost and waste. This is particularly useful when dealing with variable traffic. Because microservices are developed independently, teams can ship smaller updates and spend fewer resources on end-to-end testing: each update has to exercise only a subset of the features, whereas a change to a monolith must be retested as a whole.
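As a concrete illustration (my own sketch, not from the whitepaper), AWS Application Auto Scaling can scale one ECS service independently of the rest of the application; the cluster name, service name, and thresholds below are hypothetical.

```python
import boto3

client = boto3.client("application-autoscaling")

# Hypothetical ECS service that sees variable traffic; only this
# service gets extra capacity, not the whole application.
resource_id = "service/demo-cluster/checkout-service"

client.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking: add or remove tasks to hold average CPU near 60%.
client.put_scaling_policy(
    PolicyName="checkout-cpu-target",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```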
Optimizing costs and resource usage also helps minimize environmental impact, aligning with the Sustainability pillar of the Well-Architected Framework. You can monitor your progress in reducing carbon emissions using the AWS Customer Carbon Footprint Tool. This tool provides insights into the environmental impact of your AWS usage.
Chattiness refers to excessive communication between microservices, which can cause inefficiency due to increased network latency. Managing chattiness effectively is essential for a well-functioning system.
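A small Python sketch of the difference (the service URL and batch endpoint are hypothetical assumptions): fetching N items one call at a time pays N network round trips, while a single coarse-grained batch request pays one.

```python
import requests

BASE = "https://orders.internal.example"  # hypothetical service URL

# Chatty: one round trip per item, so latency grows linearly with N.
def get_items_chatty(order_id, item_ids):
    return [
        requests.get(f"{BASE}/orders/{order_id}/items/{item_id}").json()
        for item_id in item_ids
    ]

# Less chatty: one coarse-grained request, assuming the service
# exposes a batch endpoint that accepts a list of ids.
def get_items_batched(order_id, item_ids):
    response = requests.get(
        f"{BASE}/orders/{order_id}/items",
        params={"ids": ",".join(map(str, item_ids))},
    )
    return response.json()
```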
Microservices often use REST over HTTP for communication because it is so widely supported. In high-volume situations, though, REST's overhead can cause performance issues, in part because a new connection, with its TCP handshake, is often established for each request. In such cases, gRPC is a better choice: built on HTTP/2, it multiplexes many requests over a single long-lived TCP connection, reducing latency. gRPC also supports bi-directional streaming, allowing clients and servers to send and receive messages at the same time. This leads to more efficient communication, especially for large or real-time data transfers.
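A sketch of what this looks like in Python with grpcio (assumptions: the Telemetry service, its messages, and the modules telemetry_pb2 / telemetry_pb2_grpc are hypothetical and would be generated by running protoc on a .proto file): one channel carries many multiplexed calls, and a single bi-directional streaming RPC replaces many request/response round trips.

```python
import grpc

# Hypothetical stubs generated from a telemetry.proto along the lines of:
#   service Telemetry {
#     rpc Record (Metric) returns (Ack);                // unary
#     rpc Stream (stream Metric) returns (stream Ack);  // bi-directional
#   }
import telemetry_pb2
import telemetry_pb2_grpc

def main():
    # One channel = one long-lived HTTP/2 connection; every call below
    # is multiplexed over it instead of paying a fresh TCP handshake
    # per request.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = telemetry_pb2_grpc.TelemetryStub(channel)

        # Many unary calls share the same connection.
        for i in range(1000):
            stub.Record(telemetry_pb2.Metric(name="cpu", value=float(i)))

        # Bi-directional streaming: the client sends a stream of metrics
        # and reads acks as they arrive, without waiting per request.
        def metrics():
            for i in range(100):
                yield telemetry_pb2.Metric(name="mem", value=float(i))

        for ack in stub.Stream(metrics()):
            print(ack)

if __name__ == "__main__":
    main()
```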