Description
Name of Notes: Parallel Computer Architecture Lecture Notes
Introduction
Parallel computer architectures span a wide variety of machines, classified according to the level at which the hardware supports parallelism. Parallel computer architecture and programming techniques work together to use these machines effectively. The main classes of parallel computer architecture include:
- Multi-core computing: A multi-core processor is an integrated circuit with two or more separate processing cores, each of which executes program instructions in parallel. The cores may sit on a single integrated circuit die or on multiple dies in a single chip package, and may implement architectures such as multithreading, superscalar, vector, or VLIW. Multi-core architectures are categorized as either homogeneous, with only identical cores, or heterogeneous, with cores that are not identical.
- Symmetric multiprocessing (SMP): A multiprocessor hardware and software architecture in which two or more independent, homogeneous processors are controlled by a single operating system instance that treats them all equally. Every processor is connected to a single shared main memory and has full access to all common resources and devices. Each processor has a private cache, the processors may be interconnected by on-chip mesh networks, and any processor can work on any task no matter where that task's data is located in memory. A minimal code sketch of this shared-memory style follows this list.
- Distributed computing: The components of a distributed system are located on different networked computers, which coordinate their actions by communicating via pure HTTP, RPC-like connectors, and message queues. Significant characteristics of distributed systems include independent failure of components and concurrency of components. Distributed programming typically falls into client–server, three-tier, n-tier, or peer-to-peer architectures. Distributed and parallel computing overlap considerably, and the terms are sometimes used interchangeably; a message-passing sketch appears after the closing paragraph below.
- Massively parallel computing: The use of a large number of computers or processors to execute a set of computations simultaneously. One approach groups many processors into a tightly structured, centralized computer cluster. Another is grid computing, in which many widely distributed computers cooperate over the Internet to solve a single problem.
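To make the shared-memory classes above concrete, here is a minimal sketch (not from the lecture notes themselves) of multi-core, shared-memory parallelism using standard C++ threads: one thread per hardware core, all updating a single counter that lives in memory shared by every core. The iteration count and the fallback core count are arbitrary illustrative choices.

```cpp
// Minimal multi-core / SMP sketch: one thread per core, all threads
// incrementing one shared counter through an atomic operation.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // One software thread per hardware core; hardware_concurrency()
    // may return 0 on some platforms, so fall back to 4 (assumption).
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;

    std::atomic<long> counter{0};      // shared memory visible to all cores
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < n; ++i) {
        workers.emplace_back([&counter] {
            for (int j = 0; j < 100000; ++j)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto &t : workers) t.join();

    // Expected: n * 100000 regardless of interleaving, because fetch_add
    // is atomic; a plain ++ on a non-atomic long would be a data race.
    std::printf("counter = %ld (expected %ld)\n",
                counter.load(), static_cast<long>(n) * 100000);
}
```

Replacing the atomic fetch_add with a plain increment would introduce a data race on the shared counter, which is exactly the class of problem the synchronization and cache-coherence modules listed below address.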
Other parallel computer architectures include specialized parallel computers, cluster computing, grid computing, vector processors, application-specific integrated circuits, general-purpose computing on graphics processing units (GPGPU), and reconfigurable computing with field-programmable gate arrays. Main memory in any parallel computer is organized as either shared memory, with a single address space visible to every processing element, or distributed memory, in which each processing element has its own local address space.
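On a distributed-memory machine there is no shared address space, so processes cooperate by explicit message passing, as the distributed computing entry above notes. The sketch below uses MPI; this is an assumption for illustration (MPI is the de facto standard for this style, but the notes do not prescribe a particular library). Each process computes a partial result locally and sends it to rank 0, which combines the pieces.

```cpp
// Minimal distributed-memory sketch using the MPI C API.
// Build with an MPI compiler wrapper, e.g.: mpicxx sum.cpp -o sum
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

    if (rank == 0) {
        // Rank 0 receives one partial result from every other rank.
        long total = 0;
        for (int src = 1; src < size; ++src) {
            long partial = 0;
            MPI_Recv(&partial, 1, MPI_LONG, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += partial;
        }
        std::printf("sum of ranks 1..%d = %ld\n", size - 1, total);
    } else {
        // Every other rank sends its own contribution to rank 0.
        long partial = rank;  // stand-in for a locally computed result
        MPI_Send(&partial, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Launched with, for example, mpirun -np 4 ./sum, the same binary runs as four communicating processes, whether on one machine or spread across a cluster; no data is shared except what is explicitly sent.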
Modules / Lectures
- Module 1: Multi-core: The Ultimate Dose of Moore's Law
- Module 2: Parallel Computer Architecture: Today and Tomorrow
- Module 3: Recap: Single-threaded Execution
- Module 4: Recap: Virtual Memory and Caches
- Module 5: MIPS R10000: A Case Study
- Module 6: Fundamentals of Parallel Computers
- Module 7: Parallel Programming
- Module 8: Performance Issues
- Module 9: Introduction to Shared Memory Multiprocessors
- Module 10: Design of Shared Memory Multiprocessors
- Module 11: Synchronization
- Module 12: Multiprocessors on a Snoopy Bus
- Module 13: Scalable Multiprocessors
- Module 14: Directory-based Cache Coherence
- Module 15: Memory Consistency Models
- Module 16: Software Distributed Shared Memory Multiprocessors
- Module 17: Interconnection Networks
- Module 18: TLP on Chip: HT/SMT and CMP