- Mastering Concurrency in Python
- Quan Nguyen
Practical applications of Amdahl's Law
As we have discussed, by analyzing the sequential and parallelizable portions of a given program or system with Amdahl's Law, we can determine, or at least estimate, the upper limit of any potential improvement in speed resulting from parallel computing. With this estimate in hand, we can make an informed decision on whether an improved execution time is worth the increase in processing power.
From our examples, we can see that Amdahl's Law applies when you have a concurrent program that is a mixture of instructions executed sequentially and instructions executed in parallel. By performing an analysis using Amdahl's Law, we can determine the speedup gained from each increment in the number of cores available to perform the execution, as well as how close each increment brings the program to the best possible speedup achievable through parallelization.
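To make this concrete, here is a minimal sketch (not from the original text) of Amdahl's formula S = 1 / ((1 - P) + P / N) as a small Python function, evaluated for an assumed parallelizable fraction P and a growing number of cores N:

```python
def amdahl_speedup(parallel_fraction: float, num_cores: int) -> float:
    """Theoretical speedup S = 1 / ((1 - P) + P / N) from Amdahl's Law."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / num_cores)

# Assumed example: a program whose parallelizable portion P is 90 percent.
P = 0.90
for n in (1, 2, 4, 8, 16, 1_000_000):
    print(f"{n:>9} cores -> speedup {amdahl_speedup(P, n):.2f}x")
# As n grows, the speedup approaches the 1 / (1 - P) = 10x upper limit.
```

Note how each additional doubling of cores yields a smaller and smaller gain, which is exactly the diminishing-returns behavior the law describes.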
Now, let's come back to the initial problem that we raised at the beginning of the chapter: the trade-off between an increase in the number of processors versus an increase in how long parallelism can be applied. Let's suppose that you are in charge of developing a concurrent program that currently has 40 percent of its instructions parallelizable. This means that multiple processors can be running simultaneously for 40 percent of the program execution. Now you have been tasked with increasing the speed of this program by implementing either of the following two choices:
- Having four processors implemented to execute the program instructions
- Having two processors implemented, in addition to increasing the parallelizable portion of the program to 80 percent
How can we analytically compare these two choices, in order to determine the one that will produce the best speed for our program? Luckily, Amdahl's Law can assist us during this process:
- For the first option (four processors, with 40 percent of the program parallelizable), the speedup that can be obtained is as follows: S = 1 / ((1 - 0.4) + 0.4 / 4) = 1 / 0.7 ≈ 1.43
- For the second option (two processors, with 80 percent of the program parallelizable), the speedup is as follows: S = 1 / ((1 - 0.8) + 0.8 / 2) = 1 / 0.6 ≈ 1.67
As you can see, the second option (which has fewer processors than the first) is actually the better choice to speed up our specific program. This is another example of Amdahl's Law, illustrating that sometimes simply increasing the number of available processors is, in fact, undesirable in terms of improving the speed of a program. Similar trade-offs, with potentially different specifications, can also be analyzed this way.
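As a quick sanity check, the same comparison can be reproduced in a few lines of Python (a sketch using the figures from this example, not code from the original text):

```python
# Amdahl's Law: S = 1 / ((1 - P) + P / N), applied to both options above.
option_1 = 1 / ((1 - 0.40) + 0.40 / 4)  # four processors, 40% parallelizable
option_2 = 1 / ((1 - 0.80) + 0.80 / 2)  # two processors, 80% parallelizable
print(f"Option 1: {option_1:.2f}x, Option 2: {option_2:.2f}x")  # ~1.43x vs ~1.67x
```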
As a final note, it is important for us to know that, while Amdahl's Law offers an estimation of potential speedup in an unambiguous way, the law itself makes a number of underlying assumptions and does not take into account some potentially important factors, such as the overhead of parallelism or the speed of memory. For this reason, the formula of Amdahl's Law abstracts away various considerations that are common in practice.
So, how should programmers of concurrent programs think about and use Amdahl's Law? We should keep in mind that the results of Amdahl's Law are simply estimates that can provide us with an idea about where, and by how much, we can further optimize a concurrent system, specifically by increasing the number of available processors. In the end, only actual measurements can precisely answer our questions about how much speedup our concurrent programs will achieve in practice. With that said, Amdahl's Law can still help us to effectively identify good theoretical strategies for improving computing speed using concurrency and parallelism.
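For illustration, a rough measurement along those lines might look like the following sketch, which times a hypothetical CPU-bound task serially and with multiprocessing.Pool, then compares the observed speedup against Amdahl's estimate (the task and the assumed parallel fraction are illustrative, not from the original text):

```python
import time
from multiprocessing import Pool

def cpu_bound_task(n: int) -> int:
    """A hypothetical CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8

    # Serial baseline.
    start = time.perf_counter()
    serial_results = [cpu_bound_task(n) for n in inputs]
    serial_time = time.perf_counter() - start

    # Parallel execution across four worker processes.
    start = time.perf_counter()
    with Pool(processes=4) as pool:
        parallel_results = pool.map(cpu_bound_task, inputs)
    parallel_time = time.perf_counter() - start

    measured = serial_time / parallel_time
    # Amdahl's estimate, assuming (hypothetically) ~95% of the work is parallelizable.
    estimated = 1 / ((1 - 0.95) + 0.95 / 4)
    print(f"Measured speedup:  {measured:.2f}x")
    print(f"Amdahl's estimate: {estimated:.2f}x")
```

In practice, the measured figure will usually fall short of the theoretical estimate because of process start-up costs, inter-process communication, and memory bandwidth, which is precisely the kind of gap the discussion above warns about.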