- ASP.NET Core 2 High Performance(Second Edition)
- James Singleton
- 360 characters
- 2021-07-08 09:39:02
Scaling approach changes
For many years, the speed and processing capacity of computers increased at an exponential rate. The observation that the number of transistors in a dense integrated circuit doubles approximately every two years is known as Moore's Law, named after Gordon Moore of Intel. Sadly, this era is no "Moore" (sorry). Although transistor density is still increasing, single-core processor speeds have flattened out, and these days increases in processing ability come from scaling out to multiple cores, multiple CPUs, and multiple machines (both virtual and physical). Multithreaded programming is no longer exotic; it is essential. Otherwise, you cannot hope to go beyond the capacity of a single core. Modern CPUs typically have at least four cores (even on mobile devices). Add in a technology such as hyper-threading and you have at least eight logical CPUs to play with. Naive programming will not be able to fully utilize these.
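To make the point concrete, here is a minimal sketch (not from the book) contrasting a single-threaded loop with the same work expressed through PLINQ, which lets the .NET runtime partition the work across the available logical CPUs:

```csharp
using System;
using System.Linq;

class ParallelDemo
{
    static void Main()
    {
        long[] numbers = Enumerable.Range(1, 10_000_000)
                                   .Select(i => (long)i)
                                   .ToArray();

        // Naive, single-threaded: uses at most one core,
        // no matter how many logical CPUs the machine has.
        long sequential = 0;
        foreach (var n in numbers) sequential += n;

        // PLINQ: the runtime partitions the array across cores.
        long parallel = numbers.AsParallel().Sum();

        // Same result either way; only the core utilization differs.
        Console.WriteLine(sequential == parallel);
    }
}
```

Summing an array is trivially parallelizable; the wider lesson is that only code written with concurrency in mind can benefit from the extra cores at all.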
Traditionally, performance and redundancy were provided by improving the hardware. Everything ran on a single server or mainframe, and the solution was to use faster hardware and duplicate all components for reliability. This is known as vertical scaling, and it has reached the end of its life. It is very expensive to scale this way and impossible beyond a certain size. The future is in distributed and horizontal scaling, using commodity hardware and cloud computing resources. This requires that we write software in a different manner than we did previously. Traditional software can't take advantage of this kind of scaling, even though it can easily use the extra capabilities and speed of an upgraded processor.
There are many trade-offs that have to be made when considering performance, and it can sometimes feel like more of a black art than a science. However, taking a scientific approach and measuring results is essential. You will often have to balance memory usage against processing power, bandwidth against storage, and latency against throughput.
An example is deciding whether you should compress data on the server (including what algorithms and settings to use) or send it raw over the wire. This will depend on many factors, including the capacity of the network and the devices at both ends.
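In the spirit of measuring rather than guessing, here is a minimal sketch (not from the book) that times GZip compression of a payload and reports the bytes saved, so the CPU-versus-bandwidth trade-off can be judged from data:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

class CompressionTradeOff
{
    static void Main()
    {
        // A deliberately repetitive payload; real-world ratios will vary
        // widely with the data, so measure with representative content.
        byte[] raw = Encoding.UTF8.GetBytes(
            string.Concat(Enumerable.Repeat("Hello, world! ", 10_000)));

        var sw = Stopwatch.StartNew();
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            // Compress the whole payload; disposing the GZipStream
            // flushes the remaining compressed bytes into output.
            gzip.Write(raw, 0, raw.Length);
        }
        sw.Stop();

        Console.WriteLine(
            $"Raw: {raw.Length} bytes, compressed: {output.Length} bytes, " +
            $"CPU time: {sw.ElapsedMilliseconds} ms");
    }
}
```

Repeating the measurement with `CompressionLevel.Optimal` shows the other side of the trade-off: smaller output for more CPU time. Whether that is worth it depends on the network capacity and the devices involved, exactly as described above.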