Parallel Computing
Parallel computing is the use of two or more processing elements (together with the underlying infrastructure) to carry out a single computation.
These elements include:
- Multiple processors
- Multiple memories
- An interconnection network
These elements operate under the control of a parallel operating system, which includes specialized algorithms for scheduling and process handling.
The ultimate goal of parallel computing is to reduce the time a computation takes and to make feasible problems with very large resource demands:
- Achieve speedup: in the ideal case Tp = Ts/p, where Ts is the serial time, p is the number of processors, and Tp is the parallel time
- Solve problems requiring a large amount of memory
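The ideal-speedup relation above can be sketched as a couple of small helper functions (the names `ideal_parallel_time` and `speedup` are my own, introduced only for illustration):

```python
def ideal_parallel_time(ts, p):
    """Ideal parallel time on p processors: Tp = Ts / p (no overhead assumed)."""
    return ts / p

def speedup(ts, tp):
    """Observed speedup S = Ts / Tp; in the ideal case S equals p."""
    return ts / tp

# A serial job taking 120 s, split ideally across 8 processors:
tp = ideal_parallel_time(120.0, 8)
print(tp)                    # 15.0 seconds
print(speedup(120.0, tp))    # 8.0 — ideal speedup equals p
```

In practice, communication and synchronization overheads keep the real Tp above Ts/p, so measured speedup is below p.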
Logical Organization:
Logical organization refers to the way the parallel platform appears to the programmer at the application level.
Logical organization covers several control structures, classified by Flynn's taxonomy:
- SISD - Single Instruction, Single Data
- SIMD - Single Instruction, Multiple Data
- MIMD - Multiple Instruction, Multiple Data
- MISD - Multiple Instruction, Single Data
| | Single Instruction | Multiple Instruction |
|---|---|---|
| Single Data | SISD | MISD |
| Multiple Data | SIMD | MIMD |
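As a rough illustration of the SIMD idea, the pattern is one operation applied uniformly across many data elements. Real SIMD hardware performs this in a single vector instruction; the sketch below shows only the logical pattern in plain Python:

```python
# SIMD-style data parallelism: one operation (here, squaring) is applied
# uniformly to every element of the data. This models the "single
# instruction, multiple data" pattern, not actual vector hardware.
data = [1, 2, 3, 4]
squared = [x * x for x in data]   # same instruction, multiple data
print(squared)                    # [1, 4, 9, 16]
```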
Communication Models for Parallel Computing:
Clearly, when multiple processing units work on a single problem, the sub-computations need to communicate with each other. Communication is achieved mainly by two methods:
- Shared memory
- Message passing
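The two methods above can be contrasted in a minimal sketch, using Python threads as stand-ins for processing units (all names here are my own; real systems would use hardware shared memory or a library such as MPI):

```python
import threading
import queue

# Shared memory: both workers update the same variable, guarded by a lock.
total = 0
lock = threading.Lock()

def add_shared(values):
    global total
    for v in values:
        with lock:              # protect the shared variable from races
            total += v

# Message passing: workers share no state; each sends its partial
# result to the receiver as an explicit message over a queue.
def add_message(values, q):
    q.put(sum(values))

data = list(range(10))
halves = (data[:5], data[5:])

# Shared-memory version
workers = [threading.Thread(target=add_shared, args=(h,)) for h in halves]
for t in workers: t.start()
for t in workers: t.join()
print(total)                    # 45

# Message-passing version
q = queue.Queue()
workers = [threading.Thread(target=add_message, args=(h, q)) for h in halves]
for t in workers: t.start()
for t in workers: t.join()
combined = q.get() + q.get()
print(combined)                 # 45
```

The trade-off mirrors the two models: shared memory needs explicit synchronization (the lock) to stay correct, while message passing avoids shared state at the cost of explicitly moving data.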