|Statement||Guy E. Blelloch, K. Mani Chandy, Suresh Jagannathan, editors.|
|Series||DIMACS series in discrete mathematics and theoretical computer science, v. 18|
|Contributions||Blelloch, Guy E., Chandy, K. Mani, Jagannathan, Suresh, NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science.|
|LC Classifications||QA76.642 .S68 1994|
|The Physical Object|
|Pagination||xii, 399 p. :|
|Number of Pages||399|
|LC Control Number||94030810|
Book Description

Focusing on algorithms for distributed-memory parallel architectures, Parallel Algorithms presents a rigorous yet accessible treatment of theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and essential notions of scheduling. Parallel Algorithms for Regular Architectures is the first book to concentrate exclusively on algorithms and paradigms for programming parallel computers such as the hypercube, mesh, pyramid, and mesh-of-trees.

Introduction

The subject of this chapter is the design and analysis of parallel algorithms. Most of today's algorithms are sequential; that is, they specify a sequence of steps in which each step consists of a single operation. These algorithms are well suited to today's computers, which basically perform operations in a sequential fashion.

Contents
Preface xiii
List of Acronyms xix
1 Introduction 1
Introduction 1
Toward Automating Parallel Programming 2
Algorithms 4
Parallel Computing Design Considerations 12
Parallel Algorithms and Parallel Architectures 13
Relating Parallel Algorithm and Parallel Architecture 14
Implementation of Algorithms: A Two-Sided Problem
Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely arrays, trees, hypercubes, and some closely related networks.

ICA3PP covers the many dimensions of parallel algorithms and architectures, encompassing fundamental theoretical approaches, practical experimental projects, and commercial components and systems. As applications of computing systems have permeated every aspect of daily life, the power of computing systems has become increasingly critical.

The desired output of a parallel algorithm is a composition of the results computed by the algorithm's components. Since the subproblems are solved by the components of a parallel program, called tasks, we will use the terms subproblem and task interchangeably. The process of designing a parallel algorithm consists of four steps.

Their book provides an important starting place for a comprehensive taxonomy of parallel algorithms. The authors are all in the Department of Electrical Engineering at Purdue University. Leah H. Jamieson is a professor, Dennis Gannon an associate professor, and Robert Douglass head of .
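The decompose-solve-compose pattern described above can be sketched in a few lines. This is a minimal illustration, not drawn from any of the books listed: the function names (`partial_sum`, `parallel_sum`) and the choice of summation as the subproblem are assumptions made purely for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each task solves one subproblem: summing its own chunk.
    return sum(chunk)

def parallel_sum(data, num_tasks=4):
    # Step 1: partition the input into subproblems (tasks).
    chunks = [data[i::num_tasks] for i in range(num_tasks)]
    # Step 2: solve the subproblems concurrently.
    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # Step 3: compose the per-task results into the final output.
    return sum(partials)

print(parallel_sum(list(range(100))))  # 4950
```

The composition step here is itself a reduction; in a full design methodology the partitioning, communication, and mapping of tasks to processors would each be considered explicitly.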
In reality, the world is hierarchical, and so are parallel computers. Includes, among other things, an all-pairs shortest path algorithm that requires O((log n)²) time and O(n³/log n) processors, and parallel matching of strings to context-free and regular grammars.

Although there has been a tremendous growth of interest in parallel architecture and parallel processing in recent years, comparatively little work has been done on the problem of characterizing parallelism in programs and algorithms. This book, a collection of original papers, specifically addresses that topic. The editors and two dozen other contributors have produced a work that cuts across many subfields.

The book begins by explaining how to classify an algorithm, and then identifying which technique would be appropriate to implement the application on a parallel platform. It provides techniques for studying and analyzing several types of algorithms: parallel, serial-parallel, non-serial-parallel, and regular iterative algorithms.

PREFACE. As part of the DIMACS (the Center for Discrete Mathematics and Theoretical Computer Science) special year on massively parallel computation, a three-day workshop on Specification of Parallel Algorithms was held in May at Princeton, New Jersey. This workshop was undertaken in collaboration with CRPC (the Center for Research on Parallel Computation).
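The O((log n)²)-time all-pairs shortest path bound mentioned above typically comes from repeated min-plus matrix squaring: O(log n) squarings, each computable in O(log n) parallel time with enough processors. Below is a minimal sequential sketch of that idea, not the book's actual algorithm; the names `min_plus_square` and `apsp` and the example graph are assumptions for illustration. The parallelism comes from the fact that every entry of each squaring can be computed independently.

```python
INF = float('inf')

def min_plus_square(d):
    # One (min, +) matrix squaring: d2[i][j] = min_k d[i][k] + d[k][j].
    # Every entry is independent, so this step parallelizes trivially.
    n = len(d)
    return [[min(d[i][k] + d[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp(weights):
    # All-pairs shortest paths by repeated squaring: after s squarings,
    # d covers all paths of at most 2**s edges, so ceil(log2(n-1))
    # squarings suffice.
    d = [row[:] for row in weights]
    n, steps = len(weights), 1
    while steps < n - 1:
        d = min_plus_square(d)
        steps *= 2
    return d

# Hypothetical 4-node weighted digraph (0 on the diagonal, INF = no edge).
W = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
D = apsp(W)
print(D[0][3])  # 6, via the path 0 -> 1 -> 2 -> 3
```

With n³/log n processors sharing the n³ min-plus operations of each squaring, the overall parallel running time works out to the O((log n)²) bound cited in the blurb.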