Tuesday, 20 January 2015

Multiprocessing

Multiprocessing, in computing, is a mode of operation in which two or more processors in a computer simultaneously process two or more different portions of the same program (set of instructions). Multiprocessing is typically carried out by two or more microprocessors, each of which is in effect a central processing unit (CPU) on a single tiny chip. Supercomputers typically combine thousands of such microprocessors to interpret and execute instructions.

The primary advantage of a multiprocessor computer is speed, and thus the ability to manage larger amounts of information. Because each processor in such a system is assigned to perform a specific function, it can perform its task, pass the instruction set on to the next processor, and begin working on a new set of instructions. For example, different processors may be used to manage memory storage, data communications, or arithmetic functions. Or a larger processor might utilize “slave” processors to conduct miscellaneous housekeeping duties, such as memory management. Multiprocessor systems first appeared in large computers known as mainframes, before their costs declined enough to warrant inclusion in personal computers (PCs).
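As a concrete illustration of this division of labour, here is a minimal sketch in Python using the standard multiprocessing module (the stage names and data are made up for illustration): each stage runs in its own process, performs one specific job, and passes its output on to the next stage through a queue.

    # pipeline.py - a minimal sketch of a two-stage processing pipeline.
    # Each stage runs in its own process, does one specific job, and
    # passes its result on to the next stage, as described above.
    from multiprocessing import Process, Queue

    SENTINEL = None  # marks the end of the stream of work

    def stage_square(inbox, outbox):
        """First 'processor': squares each number it receives."""
        for item in iter(inbox.get, SENTINEL):
            outbox.put(item * item)
        outbox.put(SENTINEL)  # pass the end marker along

    def stage_print(inbox):
        """Second 'processor': consumes and reports finished results."""
        for item in iter(inbox.get, SENTINEL):
            print("result:", item)

    if __name__ == "__main__":
        q1, q2 = Queue(), Queue()
        workers = [Process(target=stage_square, args=(q1, q2)),
                   Process(target=stage_print, args=(q2,))]
        for w in workers:
            w.start()
        for n in range(5):       # feed work into the front of the pipeline
            q1.put(n)
        q1.put(SENTINEL)
        for w in workers:
            w.join()

While the second stage prints one result, the first stage is already squaring the next number, which is the overlap in time that multiprocessing buys.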

Personal computers had long relied on increasing clock speeds, measured in megahertz (MHz) or gigahertz (GHz), which roughly correspond to the number of operations the CPU can execute per second, in order to handle ever more complex tasks. But as gains in clock speed became difficult to sustain, in part because of overheating in the microprocessor circuitry, another approach developed in which specialized processors were used for tasks such as video display. These video processors typically come on modular units known as video cards, or graphics accelerator cards. The best cards, which are needed to play the most graphics-intensive electronic games on personal computers, often cost more than a bargain PC. The commercial demand for ever better cards to run ever more realistic games, on PCs and video game systems, led IBM to develop a multiprocessor microchip, known as the Cell Broadband Engine, for use in the Sony Computer Entertainment PlayStation 3 and in a new supercomputer built from thousands of those microchips.

It must be noted, however, that simply adding more processors does not guarantee significant gains in computing power; the problem of writing programs that can exploit them remains. While programmers and programming languages have developed some proficiency at dividing work among a small number of processors, partitioning instructions across more than two to eight processors is impracticable for all but the most repetitive tasks. (Fortunately, many typical supercomputer scientific applications involve applying exactly the same formula or computation to a vast array of data, which is a difficult but tractable problem.)
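That data-parallel case is straightforward to express with a process pool. Here is a minimal sketch using Python's standard multiprocessing module, with a made-up formula standing in for the real computation:

    # data_parallel.py - applying the same computation to a large array
    # of data across several processes, the "tractable" case noted above.
    from multiprocessing import Pool, cpu_count

    def formula(x):
        # Placeholder computation; any pure per-element formula works here.
        return 3 * x * x + 2 * x + 1

    if __name__ == "__main__":
        data = range(1_000_000)
        with Pool(processes=cpu_count()) as pool:
            # Each worker process applies the identical formula to its
            # own slice of the data; the items need no coordination.
            results = pool.map(formula, data, chunksize=10_000)
        print(results[:5])

Because every data item is independent, the work splits cleanly across however many processors the machine has, which is exactly why this kind of task scales where others do not.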


IBM led one effort to address the problem of programming multiprocessor computers through an open source initiative, in which academics, nonprofit organizations, and other corporations contributed advancements. Similar proprietary research was pursued by Microsoft Corporation and Apple Inc.

Multithreading

Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to run multiple copies of the program in the computer. Central processing units have hardware support to execute multiple threads efficiently. Multithreaded cores are distinguished from multiprocessing systems (such as multi-core systems) in that their threads must share the resources of a single core: the computing units, the CPU caches, and the translation lookaside buffer (TLB). Where multiprocessing systems include multiple complete processing units, multithreading aims to increase the utilization of a single core by exploiting thread-level as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and in CPUs with multiple multithreading cores.
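A minimal sketch of this idea in Python, using the standard threading module (the "requests" and the handler are hypothetical stand-ins): several threads serve requests concurrently inside one running program, and because they share the process's memory, a lock guards the shared counter.

    # threads.py - one running program handling several requests at once.
    # All threads share the same process memory, so a lock guards the
    # shared counter; the "requests" are simulated stand-ins.
    import threading
    import time

    completed = 0
    lock = threading.Lock()

    def handle_request(user, delay):
        """Hypothetical handler: simulate I/O, then update shared state."""
        global completed
        time.sleep(delay)              # stand-in for disk or network wait
        with lock:                     # threads share memory, so serialize
            completed += 1
            print(f"served {user} (total {completed})")

    threads = [threading.Thread(target=handle_request, args=(f"user-{i}", 0.1))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Note the contrast with the multiprocessing examples above: here there is one copy of the program and one address space, which is cheap but obliges the threads to coordinate their access to shared data.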

Multiprogramming

A multiprogramming operating system is one that allows end-users to run more than one program at a time. The development of such systems, the first to allow this functionality, was a major step in the evolution of sophisticated computers. The technology works by allowing the central processing unit (CPU) of a computer to switch between two or more running tasks whenever the CPU would otherwise sit idle.
Early computers were largely dedicated to executing one program, or, more accurately, one task initiated by a program, at a time. Understanding the concept of tasks is key to understanding how a multiprogramming operating system functions. A "task" is a small sequence of commands; taken together, these sequences make up the execution of a running program. For example, if the program is a calculator, one task of the program would be recording the numbers being input by the end-user.

A multiprogramming operating system acts by analyzing the current CPU activity in the computer. When the CPU is idle, between tasks, the operating system can use that downtime to run tasks for another program. In this way, parts of several programs are executed in an interleaved fashion. For example, when the CPU is waiting for the end-user to enter numbers to be calculated, instead of sitting entirely idle it may load the components of a web page the user is accessing.
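A minimal sketch of that example using Python's asyncio, with both the user's typing and the network simulated by sleeps: whenever one task is waiting, the event loop hands the CPU to the other instead of letting it idle.

    # interleave.py - using one task's idle (waiting) time to run another.
    # Both waits are simulated with sleeps; a real system would be waiting
    # on keyboard input or a network socket.
    import asyncio

    async def wait_for_numbers():
        print("calculator: waiting for user input...")
        await asyncio.sleep(2)          # simulated slow human typing
        print("calculator: got input, computing")

    async def load_web_page():
        for part in ("header", "images", "scripts"):
            await asyncio.sleep(0.5)    # simulated network wait
            print(f"browser: loaded {part}")

    async def main():
        # While the calculator waits, the page-loading task gets the CPU.
        await asyncio.gather(wait_for_numbers(), load_web_page())

    asyncio.run(main())

Running it shows the page parts arriving while the calculator is still "waiting", the downtime-filling behaviour described above.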

Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.
If the machine has the capability of causing an interrupt after a specified time interval, then the operating system will execute each program for a given length of time, regain control, and then execute another program for a given length of time, and so on. In the absence of this mechanism, the operating system has no choice but to begin to execute a program with the expectation, but not the certainty, that the program will eventually return control to the operating system.
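The timer-interrupt scheme can be simulated in a few lines of Python: each "program" below is a generator, and a toy scheduler plays the role of the interval timer, taking control back after a fixed number of steps (the time slice) and moving on to the next program.

    # round_robin.py - simulating time-sliced multiprogramming.
    # Each "program" is a generator that yields after every step; the
    # scheduler plays the role of the timer interrupt, taking control
    # back after a fixed number of steps (the time slice).
    from collections import deque

    def program(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # the point where control can switch

    def scheduler(programs, slice_len=2):
        ready = deque(programs)
        while ready:
            prog = ready.popleft()
            try:
                for _ in range(slice_len):   # run for one time slice
                    next(prog)
            except StopIteration:
                continue                     # program finished; drop it
            ready.append(prog)               # otherwise, back of the queue

    scheduler([program("A", 5), program("B", 3)])

Running it interleaves A's and B's steps, which is exactly the illusion of simultaneous execution described above.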
If the machine has the capability of protecting memory, then a bug in one program is less likely to interfere with the execution of other programs. In a system without memory protection, one program can change the contents of storage assigned to other programs, or even the storage assigned to the operating system. The resulting system crashes are not only disruptive but may also be very difficult to debug, since it may not be obvious which of several programs is at fault.
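The isolation that memory protection provides can be demonstrated with ordinary processes, each of which gets its own address space. In this small Python sketch (the "bug" is deliberately planted), the child process wipes its own copy of the data, and the parent's copy survives untouched:

    # isolation.py - each process has its own protected address space.
    # The child "corrupts" its copy of the data, but the parent's copy
    # is untouched; with threads, the same bug would clobber shared state.
    from multiprocessing import Process

    data = ["important", "records"]

    def buggy_task():
        data.clear()                   # bug: wipes this process's copy only
        print("child sees:", data)

    if __name__ == "__main__":
        p = Process(target=buggy_task)
        p.start()
        p.join()
        print("parent still sees:", data)   # unaffected by the child's bug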