Programming Models for Parallel Computing (Scientific and Engineering Computation)

Previous price: $60.00 Current price: $59.00
Publication Date: November 6th, 2015
Publisher: The MIT Press
ISBN: 9780262528818
Pages: 488

Description

An overview of the most prominent contemporary parallel processing programming models, written in a unique tutorial style.

With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today.

The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed-memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC), which let users describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism on multicore architectures or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations.
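To give a flavor of the task-oriented style described above: the idea is that the programmer expresses independent units of work and a runtime schedules them across processing units. None of the libraries covered in the book (Charm++, ADLB, Scioto, Swift, CnC) are shown here; this is only a minimal generic sketch using Python's standard-library process pool in place of a real task runtime.

```python
from concurrent.futures import ProcessPoolExecutor

def task(chunk):
    """One independent unit of work: sum a chunk of numbers."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the input into independent task units.
    chunks = [data[i::workers] for i in range(workers)]
    # The executor stands in for a task runtime: it assigns tasks to
    # worker processes and collects their results, so the programmer
    # never manages the individual processing units directly.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(task, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # 499500
```

Real task-oriented models go well beyond this sketch, handling data movement, load balancing, and distributed execution across nodes rather than just processes on one machine.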

Contributors
Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng

About the Author

Pavan Balaji holds appointments as Computer Scientist and Group Lead at Argonne National Laboratory, Institute Fellow of the Northwestern-Argonne Institute of Science and Engineering at Northwestern University, and Research Fellow at the Computation Institute at the University of Chicago.

William Gropp is Director of the Parallel Computing Institute and Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign.

Rajeev Thakur is Deputy Director in the Mathematics and Computer Science Division at Argonne National Laboratory.

Ewing Lusk is Argonne Distinguished Fellow Emeritus at Argonne National Laboratory.

Ian Foster is the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago and Distinguished Fellow at Argonne National Laboratory.

Barbara Chapman is Professor of Computer Science at the University of Houston.

Charles E. Leiserson is Professor of Computer Science and Engineering at the Massachusetts Institute of Technology.

Timothy G. Mattson is Senior Principal Engineer at Intel Corporation.