- ASP.NET Core 2 High Performance(Second Edition)
- James Singleton
- 360 characters
- 2021-07-08 09:39:02
Scaling approach changes
For many years, the speed and processing capacity of computers increased at an exponential rate. The observation that the number of transistors in a dense integrated circuit doubles approximately every two years is known as Moore's Law, named after Gordon Moore of Intel. Sadly, this era is no "Moore" (sorry). Although transistor density is still increasing, single-core processor speeds have flattened out, and these days increases in processing ability come from scaling out to multiple cores, multiple CPUs, and multiple machines (both virtual and physical). Multithreaded programming is no longer exotic; it is essential. Otherwise, you cannot hope to go beyond the capacity of a single core. Modern CPUs typically have at least four cores (even on mobile devices). Add in a technology such as hyper-threading and you have at least eight logical CPUs to play with. Naive programming will not be able to fully utilize these.
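To make that concrete, here is a minimal C# sketch (not taken from the book; the class name `CoreUtilizationSketch` and the `Math.Sqrt` workload are illustrative assumptions) that reports the number of logical CPUs and compares a naive single-threaded loop with `Parallel.For`, which spreads the same work across the available cores:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class CoreUtilizationSketch
{
    static void Main()
    {
        // Logical processors visible to the runtime (cores x hyper-threads).
        Console.WriteLine($"Logical CPUs: {Environment.ProcessorCount}");

        const int iterations = 10_000_000;
        var results = new double[iterations];

        // Naive single-threaded loop: only one core does the work.
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            results[i] = Math.Sqrt(i);
        }
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

        // Parallel.For partitions the same work across the available cores.
        sw.Restart();
        Parallel.For(0, iterations, i =>
        {
            results[i] = Math.Sqrt(i);
        });
        Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");
    }
}
```

On a four-core machine with hyper-threading the parallel version should report a noticeably shorter time, although the speed-up is rarely a clean 8x because of scheduling and memory-bandwidth overheads.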
Traditionally, performance and redundancy were provided by improving the hardware. Everything ran on a single server or mainframe, and the solution was to use faster hardware and duplicate all components for reliability. This is known as vertical scaling, and it has reached the end of its life. It is very expensive to scale this way and impossible beyond a certain size. The future is in distributed and horizontal scaling, using commodity hardware and cloud computing resources. This requires that we write software in a different manner than we did previously. Traditional software can't take advantage of this scaling in the way that it can easily use the extra capabilities and speed of an upgraded processor.
There are many trade-offs to be made when considering performance, and it can sometimes feel like more of a black art than a science. However, taking a scientific approach and measuring results is essential. You will often have to balance memory usage against processing power, bandwidth against storage, and latency against throughput.
An example is deciding whether you should compress data on the server (including what algorithms and settings to use) or send it raw over the wire. This will depend on many factors, including the capacity of the network and the devices at both ends.
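As a rough way to quantify that decision, the following hypothetical sketch (not from the book; the repeated JSON-like payload is just an assumption for illustration) gzips a buffer in memory and prints the raw size, the compressed size, and the CPU time spent compressing:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

class CompressionTradeOffSketch
{
    static void Main()
    {
        // A repetitive, text-like payload; real savings depend entirely on the data.
        var payload = Encoding.UTF8.GetBytes(string.Concat(
            Enumerable.Repeat("{\"id\":1,\"name\":\"example\"},", 10_000)));

        var sw = Stopwatch.StartNew();
        byte[] compressed;
        using (var output = new MemoryStream())
        {
            // leaveOpen keeps the MemoryStream usable after the GZipStream is disposed.
            using (var gzip = new GZipStream(output, CompressionLevel.Optimal, leaveOpen: true))
            {
                gzip.Write(payload, 0, payload.Length);
            }
            compressed = output.ToArray();
        }
        sw.Stop();

        Console.WriteLine($"Raw:        {payload.Length:N0} bytes");
        Console.WriteLine($"Compressed: {compressed.Length:N0} bytes");
        Console.WriteLine($"CPU time:   {sw.ElapsedMilliseconds} ms");
    }
}
```

Swapping `CompressionLevel.Optimal` for `CompressionLevel.Fastest` trades compression ratio for CPU time, which is exactly the kind of knob such measurements are meant to inform; in ASP.NET Core the response compression middleware (`services.AddResponseCompression()` and `app.UseResponseCompression()`) makes the same trade-off for HTTP responses.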