The Parallel Programming Guide for Every Software Developer
From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.
That's where Patterns for Parallel Programming comes in. It's the first parallel programming guide written specifically to serve working software developers, not just computer scientists. The authors introduce a complete, highly accessible pattern language that will help any experienced developer "think parallel" and start writing effective parallel code almost immediately. Instead of formal theory, they deliver proven solutions to the challenges faced by parallel programmers, along with pragmatic guidance for using today's parallel APIs in the real world. Coverage includes:
- Understanding the parallel computing landscape and the challenges faced by parallel developers
- Finding the concurrency in a software design problem and decomposing it into concurrent tasks
- Managing the use of data across tasks
- Creating an algorithm structure that effectively exploits the concurrency you've identified
- Connecting your algorithmic structures to the APIs needed to implement them
- Specific software constructs for implementing parallel programs
- Working with today's leading parallel programming environments: OpenMP, MPI, and Java
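To give a flavor of the kind of code these environments support, here is a minimal sketch in Java (one of the book's three covered environments) of a data-decomposition-style parallel sum using an `ExecutorService`. The class and method names are illustrative only, not taken from the book:

```java
import java.util.concurrent.*;
import java.util.stream.IntStream;

public class ParallelSum {
    // Split the index range into chunks, sum each chunk in its own task,
    // then combine the partial sums -- a simple data decomposition.
    static long sum(int[] data, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        ExecutorCompletionService<Long> cs = new ExecutorCompletionService<>(pool);
        int chunk = (data.length + nThreads - 1) / nThreads;
        int tasks = 0;
        for (int start = 0; start < data.length; start += chunk) {
            final int lo = start, hi = Math.min(start + chunk, data.length);
            cs.submit(() -> {            // each task sums one contiguous chunk
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            });
            tasks++;
        }
        long total = 0;
        for (int t = 0; t < tasks; t++) total += cs.take().get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        int[] data = IntStream.rangeClosed(1, 1000).toArray();
        System.out.println(sum(data, 4)); // 1 + 2 + ... + 1000 = 500500
    }
}
```

The same computation maps naturally onto an OpenMP `parallel for` reduction in C, or onto MPI with each rank summing its local portion and a reduce combining the results.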
Patterns have helped thousands of programmers master object-oriented development and other complex programming technologies. With this book, you will learn that they're the best way to master parallel programming too.
About the Author
Timothy G. Mattson is Intel's industry manager for life sciences. His research focuses on technologies that simplify parallel computing for general programmers, with an emphasis on computational biology. He holds a Ph.D. in chemistry from the University of California, Santa Cruz.
Beverly A. Sanders is associate professor at the Department of Computer and Information Science and Engineering, University of Florida, Gainesville. Her research focuses on techniques to help programmers construct high-quality, correct programs, including formal methods, component systems, and design patterns. She holds a Ph.D. in applied mathematics from Harvard University.
Berna L. Massingill is assistant professor in the Department of Computer Science at Trinity University, San Antonio, Texas. Her research interests include parallel and distributed computing, design patterns, and formal methods. She holds a Ph.D. in computer science from the California Institute of Technology.
Table of Contents
1. A Pattern Language for Parallel Programming.
Design Patterns and Pattern Languages.
A Pattern Language for Parallel Programming.
2. Background and Jargon of Parallel Computing.
Concurrency in Parallel Programs Versus Operating Systems.
Parallel Architectures: A Brief Introduction.
Parallel Programming Environments.
The Jargon of Parallel Computing.
A Quantitative Look at Parallel Computation.
3. The Finding Concurrency Design Space.
About the Design Space.
The Task Decomposition Pattern.
The Data Decomposition Pattern.
The Group Tasks Pattern.
The Order Tasks Pattern.
The Data Sharing Pattern.
The Design Evaluation Pattern.
4. The Algorithm Structure Design Space.
Choosing an Algorithm Structure Pattern.
The Task Parallelism Pattern.
The Divide and Conquer Pattern.
The Geometric Decomposition Pattern.
The Recursive Data Pattern.
The Pipeline Pattern.
The Event-Based Coordination Pattern.
5. The Supporting Structures Design Space.
Choosing the Patterns.
The SPMD Pattern.
The Master/Worker Pattern.
The Loop Parallelism Pattern.
The Fork/Join Pattern.
The Shared Data Pattern.
The Shared Queue Pattern.
The Distributed Array Pattern.
Other Supporting Structures.
6. The Implementation Mechanisms Design Space.
Appendix A. A Brief Introduction to OpenMP.
Appendix B. A Brief Introduction to MPI.
Appendix C. A Brief Introduction to Concurrent Programming in Java.
About the Authors.
Most Helpful Customer Reviews
This book tries to do for parallel programming what the seminal Gang of Four book did for sequential programming. While it remains to be seen whether Mattson, Sanders, and Massingill will succeed, their book serves a vital educational role. Until the 90s, parallel programming was relatively rare, but as hardware continued to get cheaper, many practical uses emerged. The authors point out that the bottleneck has now shifted to software. How does one find concurrency in a design? And given that, how does one code it? The book tackles both issues. The latter it treats by explaining how to use MPI, OpenMP, and Java for parallel coding: MPI and OpenMP were expressly made for the task, and for Java the book discusses its concurrency classes and how they can be applied to parallel problems. The former issue, actually finding the concurrency, is harder; this is really a wetware issue, and you are the wetware. The book's core value is in showing common parallel patterns that distill the essence of much previous work in the field. And the book does not merely iterate through a list of such patterns: as with the GoF, we have a pattern language, a metalevel in which walking through the patterns and comparing them with your problem helps you find the appropriate patterns to map it to.
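One of the supporting-structure patterns the book catalogs, Master/Worker with a Shared Queue, can be sketched in a few lines of Java. This is an illustrative toy (names and the squaring "work" are invented here, not drawn from the book): a master puts tasks on a `BlockingQueue`, workers drain it, and a sentinel value tells each worker to stop:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class MasterWorker {
    private static final int POISON = -1; // sentinel telling a worker to stop

    // Master enqueues task numbers 1..nTasks; each worker repeatedly takes
    // one and "processes" it (here: squares it), accumulating a local sum.
    static long run(int nTasks, int nWorkers) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        List<Future<Long>> partials = new ArrayList<>();
        for (int w = 0; w < nWorkers; w++) {
            partials.add(pool.submit(() -> {
                long local = 0;
                while (true) {
                    int t = queue.take();       // blocks until work arrives
                    if (t == POISON) break;
                    local += (long) t * t;      // the "work" for one task
                }
                return local;
            }));
        }
        for (int i = 1; i <= nTasks; i++) queue.put(i);       // master: produce work
        for (int w = 0; w < nWorkers; w++) queue.put(POISON); // one sentinel per worker
        long total = 0;
        for (Future<Long> f : partials) total += f.get();     // combine results
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(100, 3)); // sum of squares 1..100 = 338350
    }
}
```

The queue gives automatic load balancing: faster workers simply take more tasks, which is the essential property the Master/Worker pattern trades on.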