Cloud-Native Applications
Status:: 🟩
Links:: Cloud-Native Computing
Metadata
Authors:: Gannon, Dennis; Barga, Roger; Sundaresan, Neel
Title:: Cloud-Native Applications
Publication Title:: IEEE Cloud Computing
Date:: 2017
URL:: http://ieeexplore.ieee.org/document/8125550/
DOI:: 10.1109/MCC.2017.4250939
Bibliography
Gannon, D., Barga, R., & Sundaresan, N. (2017). Cloud-Native Applications. IEEE Cloud Computing, 4(5), 16–21. https://doi.org/10.1109/MCC.2017.4250939
Zotero
Type:: #zotero/journalArticle
Keywords:: [Cloud Computing, Container, Microservices]
Relations
Related:: @Ramakrishnan.etal.2017.AzureDataLake
Related:: @Khan.2017.KeyCharacteristicsContainer
Abstract
Cloud-native is a term that is invoked often but seldom defined beyond saying “we built it in the cloud” as opposed to “on-prem”. However, there is now an emerging consensus around key ideas and informal application design patterns that have been adopted and used in many successful cloud applications. In this introduction, we will describe these cloud-native concepts and illustrate them with examples. We will also look at the technical trends that may give us an idea about the future of cloud applications. We begin by discussing the basic properties that many cloud-native apps have in common. Once we have characterized them, we can then describe how these properties emerge from the technical design patterns.
Notes & Annotations
Annotations (imported on 2023-06-01#15:33:50)
Properties of cloud-native:
- Operate at global scale: data and services are replicated globally
- Scale well to thousands of concurrent users
- Resilient to failures
- Continuous operation: upgrades roll out without interrupting normal operation
- Secure: Security must be part of the application architecture
After Google published a research article about their experience, Yahoo! released an open source product called Hadoop: a distributed file system and analytics tools that anybody could deploy to virtual machines in any cloud. Hadoop may be considered the vanguard of cloud-native applications.
By 2013, the first major design pattern for cloud-native applications began to emerge. It was clear that to achieve scale and reliability, it was essential to decompose applications into very basic components, which we now refer to as microservices.
Microservice paradigm design rules dictate that each microservice must be managed, replicated, scaled, upgraded, and deployed independently of other microservices.
All microservices should be designed for constant failure and recovery and therefore they must be as stateless as possible. One should reuse existing trusted services such as databases, caches, and directories for state management.
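A minimal sketch of these last few annotations might look like the following: a tiny, stateless "hit counter" microservice that keeps all of its state in an external, trusted Redis instance, so any replica can serve any request and be killed or restarted freely. Flask, Redis, the route names, and the environment variables are illustrative choices, not anything prescribed by the paper.

```python
# Hypothetical stateless microservice: all state lives in an external store.
import os

import redis
from flask import Flask, jsonify

app = Flask(__name__)

# Connection details come from the environment so each deployment can be
# configured independently; the state itself never lives in this process.
store = redis.Redis(
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=int(os.environ.get("REDIS_PORT", "6379")),
)

@app.route("/hits", methods=["POST"])
def record_hit():
    # INCR is atomic in Redis, so concurrent replicas do not race each other.
    count = store.incr("hit_count")
    return jsonify({"hits": count})

@app.route("/hits", methods=["GET"])
def read_hits():
    value = store.get("hit_count")
    return jsonify({"hits": int(value) if value else 0})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```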
The Linux kernel provided an easy solution to the encapsulation problem by allowing processes to be managed with their own namespaces and with limits on the resources that they used. This led to standards for containerizing application components, such as Docker.
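As a rough illustration of that kernel facility (namespaces only; the cgroup-based resource limits are not shown), the sketch below shells out to the standard unshare(1) utility to run a command in fresh UTS and PID namespaces. It assumes a Linux host and root privileges; the hostname chosen is arbitrary.

```python
# Run a shell in its own UTS and PID namespaces via unshare(1).
# Changing the hostname inside the namespace does not affect the host,
# and the shell sees itself as PID 1 inside the new PID namespace.
import subprocess

subprocess.run(
    [
        "unshare",
        "--uts",             # new hostname/domainname namespace
        "--pid", "--fork",   # new PID namespace; fork the child into it
        "sh", "-c",
        'hostname demo-container && hostname && echo "PID inside namespace: $$"',
    ],
    check=True,
)
```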
Google built a system, running in its data centers, that manages all of its microservice-based applications; it has now been released as open source under the name Kubernetes.
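One way to see the earlier microservice rules in practice is the sketch below, which uses the official Kubernetes Python client (the `kubernetes` package): each service is its own Deployment that can be listed, scaled, and upgraded independently of the others. The deployment name, namespace, and replica count are placeholders, and a reachable cluster plus a local kubeconfig are assumed.

```python
# Sketch: inspect and scale microservice Deployments with the Kubernetes
# Python client. Names and namespace below are placeholders.
from kubernetes import client, config

config.load_kube_config()   # reads the local kubeconfig (e.g. ~/.kube/config)
apps = client.AppsV1Api()

# Each microservice is typically its own Deployment, managed independently.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas)

# Scale one service without touching the rest of the application.
apps.patch_namespaced_deployment_scale(
    name="frontend",         # placeholder service name
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```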
In their paper in this special issue, “Processes, Motivations, and Issues for Migrating to Microservices Architectures: An Empirical Investigation,” Davide Taibi, Valentina Lenarduzzi, and Claus Pahl provide excellent insights into the application developer experience with microservice design.
Serverless computing is a style of cloud computing where you write code, define the events that should cause the code to execute, and leave it to the cloud to take care of the rest.
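As one example of that style, the sketch below uses an AWS Lambda-shaped handler: you supply only the function and declare its trigger (assumed here to be object-created events from an S3 bucket); provisioning, scaling, and teardown are left to the cloud. The event shape and names are illustrative, not taken from the paper.

```python
# Lambda-style serverless handler: the cloud invokes this function whenever
# the configured event fires; no server is provisioned or managed by the author.
import json

def handler(event, context):
    # `event` carries the trigger payload; its shape depends on the event
    # source wired to the function (HTTP request, queue message, S3 upload...).
    records = event.get("Records", [])
    names = [r.get("s3", {}).get("object", {}).get("key") for r in records]
    print(f"Processing {len(names)} new object(s): {names}")
    return {"statusCode": 200, "body": json.dumps({"processed": names})}
```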