
Programming models

In this section, we will focus on three major programming models, which are as follows:

  • The first one is a single-threaded synchronous model
  • The second one is a multithreaded synchronous model
  • The third one is an asynchronous programming model

Since JavaScript employs an asynchronous model, we will discuss it in greater detail. However, let's start by explaining what these programming models are and what they offer to their end users.

The single-threaded synchronous model

The single-threaded synchronous model is a simple programming model in which one task follows another. If there is a queue of tasks, the first task is executed first, then the second, and so on. It's the simplest way of getting things done, as shown in the following diagram:

The single-threaded synchronous model

The single-threaded synchronous programming model is one of the best examples of a queue data structure, which follows the First In First Out (FIFO) rule. This model assumes that if Task 2 is being executed at the moment, then Task 1 must have finished without errors and all of its output must be available as predicted or needed. This programming model is still well suited to writing simple programs for simple devices.
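A minimal sketch of this model in JavaScript might look like the following; the three task functions are purely illustrative, and the important point is that each call has to return before the next one starts:

```javascript
// Hypothetical tasks executed one after another on a single thread.
function task1() {
  console.log('Task 1: reading configuration');
  return { retries: 3 };
}

function task2(config) {
  console.log(`Task 2: connecting (retries allowed: ${config.retries})`);
  return 'connection';
}

function task3(connection) {
  console.log(`Task 3: using the ${connection}`);
}

// FIFO order: Task 2 only starts after Task 1 has returned,
// and Task 3 only starts after Task 2 has returned.
const config = task1();
const connection = task2(config);
task3(connection);
```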

The multithreaded synchronous model

Unlike single-threaded programming, in multithreaded programming every task is performed in a separate thread, so multiple tasks require multiple threads. The threads are managed by the operating system and may run concurrently on a system with multiple processors or multiple cores.

It may seem quite simple for multiple threads to be managed by the OS or by the program in which they are executing; in reality, it's a complex and time-consuming job that requires multiple levels of communication between the threads in order to complete the work without deadlocks or errors, as can be seen from the following diagram:

The multithreaded synchronous model

Some programs implement parallelism using multiple processes instead of multiple threads, although the programming details are different.
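As a rough, non-authoritative sketch of this model, Node.js exposes threads through its worker_threads module. The example below spawns one worker per task; the worker body is passed as a string purely so the sketch is self-contained, and the simulated CPU-bound work is illustrative:

```javascript
const { Worker } = require('worker_threads');

// The worker body simulates some CPU-bound work in its own thread
// and reports the result back to the main thread.
const workerSource = `
  const { parentPort, workerData } = require('worker_threads');
  let sum = 0;
  for (let i = 0; i < 1e7; i++) sum += i;
  parentPort.postMessage({ taskId: workerData.taskId, sum });
`;

// Each task gets its own thread; the OS schedules them, and they may
// run in parallel on a machine with multiple cores.
for (const taskId of [1, 2, 3]) {
  const worker = new Worker(workerSource, { eval: true, workerData: { taskId } });
  worker.on('message', (result) => console.log('Finished:', result));
  worker.on('error', (err) => console.error('Failed:', err));
}
```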

The asynchronous programming model

Within the asynchronous programming model, tasks are interleaved with one another in a single thread of control.

This single thread of control may interleave several tasks, and each task may consist of several smaller steps linked up one after another. This model is simpler in comparison to the threaded case, as the programmer always knows the priority of the task executing at any given moment.

Consider a case in which an OS (or an application within the OS) uses some sort of policy to decide how much time is to be allotted to a task before the same chance is given to the others. This behavior of the OS, taking control away from one task and passing it on to another, is called preempting.

Note

The multithreaded synchronous model is also referred to as preemptive multitasking; the asynchronous model is called cooperative multitasking.

The asynchronous programming model

With threaded systems, the decision to suspend one thread and put another into execution is not in the programmer's hands; it's the underlying program that controls it. In general, it's controlled by the operating system itself, but this is not the case with an asynchronous system.

In asynchronous systems, the execution and suspension of a thread is at the complete discretion of the programmer, and the thread won't change its state until it's explicitly asked to do so.
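As a rough illustration of this cooperative behavior, consider the following sketch using JavaScript's async/await; the task names and delays are purely illustrative. Each task runs until it voluntarily suspends itself at an await, and only then does the other task get a turn on the same thread:

```javascript
// Helper that resolves after the given number of milliseconds.
function pause(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function task(name) {
  console.log(`${name}: step 1`);
  await pause(10); // the task explicitly gives up control here...
  console.log(`${name}: step 2`);
  await pause(10); // ...and here
  console.log(`${name}: step 3`);
}

// Both tasks share one thread; their steps interleave only at the
// points where each task chose to await.
task('A');
task('B');
```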

Difficulties with an asynchronous programming model

For all these qualities of an asynchronous programming model, it has its difficulties to deal with.

Since the control of execution and the assignment of priorities is in the programmer's hands, he or she has to organize each task as a sequence of smaller steps that execute intermittently. If one task uses the output of another, the dependent task must be engineered so that it can accept its input piece by piece rather than all at once; this is how programmers structure their tasks and set their priorities. The real strength of an asynchronous system, where it can outperform synchronous systems quite dramatically, shows when tasks are forced to wait, or are blocked.
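As an illustrative sketch (the function and parameter names here are hypothetical), a long-running job can be organized as a series of small steps that hand control back between chunks, while the dependent task consumes the output piece by piece:

```javascript
// Process `items` in chunks of `chunkSize`, yielding control to the
// event loop between chunks so other tasks can run in the meantime.
function processInChunks(items, chunkSize, handleChunk, done) {
  let index = 0;

  function step() {
    const chunk = items.slice(index, index + chunkSize);
    handleChunk(chunk); // the dependent task receives its input piecemeal
    index += chunkSize;

    if (index < items.length) {
      setTimeout(step, 0); // give up control before the next step
    } else {
      done();
    }
  }

  step();
}

processInChunks([1, 2, 3, 4, 5, 6], 2,
  (chunk) => console.log('Got chunk:', chunk),
  () => console.log('All chunks processed'));
```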

Why do we need to block the task?

A more common reason for a task to be forcibly blocked is that it is waiting to perform I/O, that is, to transfer data to or from an external device. A normal CPU can handle data transfer far faster than any network link is capable of, and as a result, a synchronous program that does a lot of I/O spends most of its time blocked. Such programs are also referred to as blocking programs for this reason.

The whole idea behind the asynchronous model is to avoid wasting CPU time on blocking. When an asynchronous program encounters a task that would normally block in a synchronous program, it instead executes some other task that can still make progress. Because of this, asynchronous programs are also called non-blocking programs.
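A small Node.js sketch of the difference follows; the file name is purely illustrative, and the blocking variant is shown only as a comment for contrast:

```javascript
const fs = require('fs');

// Blocking style: readFileSync would not return until the disk has
// answered, so nothing else in the program could run in the meantime.
// const data = fs.readFileSync('./report.txt', 'utf8');

// Non-blocking style: readFile starts the I/O and returns immediately;
// the callback runs once the data (or an error) is available, and the
// thread is free to work on other tasks in the meantime.
fs.readFile('./report.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('Read failed:', err.message);
    return;
  }
  console.log('Read', data.length, 'characters');
});

console.log('This line prints before the file has been read');
```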

Since an asynchronous program spends less time waiting and gives roughly an equal amount of time to every task, it outperforms synchronous programs.

Compared to the synchronous model, the asynchronous model performs best in the following scenarios:

  • There are a large number of tasks, so it's likely that there is always at least one task that can make progress
  • The tasks perform lots of I/O, causing a synchronous program to waste lots of time blocking when other tasks could be running
  • The tasks are largely independent from one another, so there is little need for intertask communication (and thus for one task to wait for another)

Keeping all the preceding points in mind, they almost perfectly describe a typical busy network service, say a web server in a client-server environment, where each task represents a client requesting some information from the server. In such cases, an asynchronous model will not only improve the overall response time, but also add to the performance by serving more clients (requests) at a time.
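As a rough sketch of that scenario (the port number and the simulated delay are illustrative), the following minimal Node.js HTTP server handles each request without blocking its single thread, so new clients can be accepted while earlier requests are still pending:

```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  // Simulate a slow, I/O-like operation without blocking the thread;
  // while this timer is pending, the server keeps accepting requests.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(`Handled ${req.url}\n`);
  }, 100);
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```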

Why not use some more threads?

At this point, you may ask why not add more threads rather than relying on a single one. Well, the answer is quite simple: the more threads there are, the more memory they consume, which in turn leads to lower performance and a higher turnaround time. Using more threads doesn't only come at the cost of memory; it also affects performance, because each thread carries a certain overhead for maintaining its state. Multiple threads should therefore be used when there is an absolute need for them, not for each and every other thing.
