
Preface

As companies become more agile and reactive, they are moving away from large, complex monolithic architectures that are hard to scale, toward microservice architectures based on containers, which are more flexible and easier to deploy. For example, you could have a loosely coupled fleet of microservices, each exposing an Application Programming Interface (API) for its data source and integrating with a clustered NoSQL database such as Cassandra. In such modern microservice architectures, it is common to iterate quickly and regularly deploy microservices with a RESTful API for integrating with other systems, such as visualizations, user interfaces, or even third parties.

However, there are many architectural, infrastructural, and developmental complexities to consider when deploying, maintaining, and monitoring such microservices in containers. You also need to consider the costs and scalability of the API and data store, which typically means you need a DevOps team to set up, monitor, and maintain the Continuous Integration/Continuous Deployment (CI/CD) pipelines, the Kubernetes (https://kubernetes.io/) container-orchestration platform, and the monitoring systems. Things have improved recently, as AWS has announced Amazon Elastic Container Service for Kubernetes (Amazon EKS) (https://aws.amazon.com/eks/), but developers still need to build a lot of custom integration and handle container configuration.

Back in 2014, AWS launched Lambda functions, a key component of serverless computing. They act as integration glue between services, where you only need to write the business logic code that responds to inbound events or HTTP requests. Lambda functions have become very popular because they are stateless, offer built-in event source integration, and you pay only for actual execution time. So, rather than running a fleet of microservices on containers behind a load balancer, you can build a highly scalable serverless stack that is fully managed by AWS.
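To make this concrete, here is a minimal sketch of what such a Lambda function can look like in Python. It is not code from the book; the event shape follows the standard API Gateway Lambda proxy integration, and the name query-string parameter is purely an illustrative assumption.

```python
import json


def lambda_handler(event, context):
    """Respond to an API Gateway proxy event with a small JSON payload.

    The 'queryStringParameters' field comes from the API Gateway Lambda
    proxy integration; the 'name' parameter is illustrative only.
    """
    params = event.get('queryStringParameters') or {}
    name = params.get('name', 'world')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': f'Hello, {name}!'})
    }
```

Because the handler holds no state between invocations, AWS can run as many copies in parallel as incoming traffic requires, which is what makes the pay-per-execution model work.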

This book will provide you with a solid foundation that combines the best of both worlds: the flexibility of microservices with the benefits of serverless computing, to maximize developer productivity. You will gain an appreciation for the power of deploying a full serverless stack, not only in terms of running costs, but also in terms of support, maintenance, and upgrades. This effectively allows your company to go to market much more quickly with new products, and to beat your competitors with a much smaller team. You will also be able to create, test, and deploy a scalable serverless microservice where costs are paid per usage rather than per running machine. In addition, it will autoscale based on the number of requests, while security is natively built in and supported by AWS.

We share our source code, configuration, and the personal experience we have gained from running serverless stacks in production at web scale since 2015. We guide you through the concepts with practical examples, as if you were being trained on the job. To give you a deeper intuition and understanding when starting your journey, we begin the serverless stack creation with the AWS Management Console, where most tasks are done using the user interface. We then switch to the AWS Command Line Interface and Bash and Python scripts to automate the creation, testing, and deployment of your serverless stacks, just as you would in a production environment for your organization. Now that we know what lies ahead, let's jump right into the book.
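As a small taste of that automation, the following is a minimal sketch, not code from the book, of how a Python script might create and smoke-test a Lambda function using Boto3. The function name, IAM role ARN, handler module, and deployment package path are placeholder assumptions that you would replace with your own values.

```python
import json

import boto3

lambda_client = boto3.client('lambda')

# Placeholder values -- substitute your own function name, IAM role ARN,
# handler module, and deployment package path.
FUNCTION_NAME = 'hello-world-api'
ROLE_ARN = 'arn:aws:iam::123456789012:role/lambda-basic-execution'

# Create the function from a ZIP deployment package.
with open('deployment.zip', 'rb') as package:
    lambda_client.create_function(
        FunctionName=FUNCTION_NAME,
        Runtime='python3.9',
        Role=ROLE_ARN,
        Handler='lambda_handler.lambda_handler',
        Code={'ZipFile': package.read()},
    )

# Invoke the freshly deployed function as a quick smoke test.
response = lambda_client.invoke(
    FunctionName=FUNCTION_NAME,
    Payload=json.dumps({'queryStringParameters': {'name': 'serverless'}}),
)
print(response['Payload'].read().decode())
```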
