- Cloud Native Development Patterns and Best Practices
- John Gilbert
- 794 words
- 2021-06-30 18:43:00
The cloud is the database
In the first chapter, I told the story of my first wow moment when I realized that we could run our presentation layer entirely from the edge with no servers. From that point on, I wanted to achieve the same level of scalability for the rest of the layers as well. Let's start this chapter with a continuation of that story.
Like many of you, for a significant chunk of my career, I implemented systems that needed to be database agnostic. The relational database was the standard, but we had to support all the various flavors, such as Oracle, MySQL, in-memory databases for testing, and so forth. Object-relational mapping tools, such as Hibernate, were a necessity. We built large relational models, crammed the database schema full of tables, and then tossed the DDL over the fence to the DBA team. Inevitably, the schema would be deployed to underpowered database instances that were shared by virtually every system in the enterprise. Performance suffered, and we turned to optimizing the software to compensate for the realities of a shared database model. We added caching layers, maintained more state in the application sessions, denormalized the data models to the nth degree, and more. These were just the facts of life in the world of monolithic enterprise systems.
Then came time to lift and shift these monoliths to the cloud, along with the shared database model. It did not take long to realize how complicated and expensive it was to run relational databases in the cloud across multiple availability zones. Running on a database-as-a-service, such as AWS RDS, was the obvious alternative, though this had its own limitations. A given database instance size could only support a maximum number of connections. Thus it was necessary to scale these instances vertically, plus add read replicas. It was still complex and expensive, and thus there was still an incentive to run a shared database model. It was time to start looking for alternatives. I studied the CAP theorem and sharding, evaluated the various NoSQL options, learned about the BASE and ACID 2.0 transaction models, and considered the different ways our system could support eventual consistency. NoSQL was deemed a nonstarter because it would require reworking the system. The NewSQL options promised a pluggable alternative to NoSQL, but these were just downright expensive. In the end, we chose to leave the monolith as-is, on database-as-a-service, and instead implement new features as microservices and slowly strangle the monolith. We will discuss the strangler pattern in Chapter 10, Value Focused Migration.
When we moved to microservices, we started with a schema-per-service approach on top of the shared database-as-a-service model. It was a step in the right direction, but it still had several drawbacks: we needed a way to synchronize some data between the microservices, there was effectively no bulkhead between the services at the database level, and we still needed to scale the connections for the consumer-facing services. The new feature was a consumer-facing, mobile-first application. It had a clickstream that pumped events through AWS Kinesis into a time series database running as a service. I knew this part of the new feature would scale, so we just had to repeat that pattern. We started using the event stream to synchronize the data between the microservices. This approach was very familiar because it was essentially the same event-driven pattern we had always used for Enterprise Application Integration (EAI) projects. How could we build on this to scale the consumer-facing services?
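The event-stream synchronization idea can be sketched in a few lines. This is a minimal in-memory stand-in, not the actual Kinesis wiring: the `EventStream` class, the `product-updated` event type, and the service names are all illustrative assumptions, but the shape is the same — one service publishes domain events, and another service maintains its own copy of the data it needs.

```python
from collections import defaultdict


class EventStream:
    """In-memory stand-in for a stream such as AWS Kinesis (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # A real stream would durably buffer events; here we deliver immediately.
        for handler in self._subscribers[event_type]:
            handler(payload)


stream = EventStream()

# The consumer-facing service keeps its own local copy of back-office data,
# so it never has to call the back-office database at request time.
product_cache = {}

def on_product_updated(event):
    product_cache[event["id"]] = event["name"]

stream.subscribe("product-updated", on_product_updated)

# The back-office service publishes an event when its data changes.
stream.publish("product-updated", {"id": "p1", "name": "Widget"})
print(product_cache["p1"])  # Widget
```

The point is that each service reacts to events at its own pace, which is what removes the need for shared database connections between them.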
We had a set of back-office services with low user volumes for authoring content. Those could stay on the shared database for the time being. We would use the event stream to synchronize the necessary back-office data to high-octane Polyglot Persistence dedicated to the consumer-facing components. The consumer-facing databases included S3 plus AWS CloudFront for storing and serving the images and JSON documents, and AWS CloudSearch to index the documents. We effectively created a bulkhead between the front-office and back-office components and allowed the two to scale completely independently. The documents were being served on a global scale, just like a single-page app. We didn't realize it at the time, but we were implementing the Event Sourcing and Command Query Responsibility Segregation (CQRS) patterns, but with a cloud-native twist. We will discuss these patterns in Chapter 3, Foundation Patterns, and Chapter 4, Boundary Patterns, respectively.
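The Event Sourcing and CQRS split described above can be reduced to a small sketch. Assume an in-memory event log in place of the stream, and a plain dictionary in place of the S3/CloudSearch read side; the `ItemPriced` event and the field names are hypothetical. The command side only appends events; the query side is a denormalized view projected from those events, which is what lets the two scale independently.

```python
# Event Sourcing: the source of truth is an append-only log of domain events.
event_log = []

# CQRS read side: a denormalized view, rebuilt purely from the events.
read_model = {}


def project(event):
    """Query-side projection: fold one event into the read model."""
    if event["type"] == "ItemPriced":
        read_model[event["sku"]] = {"price": event["price"]}


def handle_command(command):
    """Command side: validate, append an event, and let projections react.

    It never writes to the read model directly.
    """
    event = {"type": "ItemPriced", "sku": command["sku"], "price": command["price"]}
    event_log.append(event)
    project(event)


handle_command({"sku": "A1", "price": 9.99})
handle_command({"sku": "A1", "price": 7.99})  # later events supersede earlier ones

print(len(event_log))        # 2 — full history is retained
print(read_model["A1"])      # {'price': 7.99} — view reflects the latest event
```

In the cloud-native version, the log is the event stream, and the projection writes to purpose-built stores such as S3 and a search index instead of a dictionary.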
Along the way on my cloud-native journey, I came across two documents that drew my attention, The Reactive Manifesto (https://www.reactivemanifesto.org/) and Martin Kleppmann's excellent article, Turning the Database Inside-out (https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/). These documents formalized what we were already doing and led to another wow moment, the realization that the cloud is the database. This is where our discussion will go next.