Is Parallel Programming Hard, And, If So, What Can You Do About It?

The purpose of this book is to help you program shared-memory parallel machines without risking your sanity. We hope that this book’s design principles will help you avoid at least some parallel-programming pitfalls. That said, you should think of this book as a foundation on which to build, rather than as a completed cathedral. Your mission, if you choose to accept it, is to help make further progress in the exciting field of parallel programming, progress that will in time render this book obsolete. Parallel programming is not as hard as some say, and we hope that this book makes your parallel-programming projects easier and more fun.

In short, where parallel programming once focused on science, research, and grand-challenge projects, it is quickly becoming an engineering discipline. We therefore examine specific parallel-programming tasks and describe how to approach them. In some surprisingly common cases, they can even be automated.

This book is written in the hope that presenting the engineering discipline underlying successful parallel-programming projects will free a new generation of parallel hackers from the need to slowly and painstakingly reinvent old wheels, enabling them to instead focus their energy and creativity on new frontiers. We sincerely hope that parallel programming brings you at least as much fun, excitement, and challenge as it has brought to us!

  • This book is a handbook of widely applicable and heavily used design techniques, rather than a collection of optimal algorithms with tiny areas of applicability. You are currently reading Chapter 1.
  • Chapter 2 gives a high-level overview of parallel programming.
  • Chapter 3 introduces shared-memory parallel hardware.
  • Chapter 4 then provides a very brief overview of common shared-memory parallel-programming primitives.
  • Chapter 5 takes an in-depth look at parallelizing one of the simplest problems imaginable, namely counting; a minimal per-thread-counter sketch appears after this list.
  • Chapter 6 introduces a number of design-level methods of addressing the issues identified in Chapter 5.
  • Chapter 7 covers locking, which in 2014 is not only the workhorse of production-quality parallel programming, but is also widely considered to be parallel programming’s worst villain; a short pthread-mutex sketch also follows this list.
  • Chapter 8 gives a brief overview of data ownership, an often overlooked but remarkably pervasive and powerful approach.
  • Chapter 9 introduces a number of deferred-processing mechanisms, including reference counting, hazard pointers, sequence locking, and RCU.
  • Chapter 10 applies the lessons of previous chapters to hash tables, which are heavily used due to their excellent partitionability, which (usually) leads to excellent performance and scalability.
  • Chapter 11 covers various forms of testing. It is of course impossible to test reliability into your program after the fact.
  • Chapter 12 follows up with a brief overview of a couple of practical approaches to formal verification.
  • Chapter 13 contains a series of moderate-sized parallel-programming problems. The difficulty of these problems varies, but they should be appropriate for someone who has mastered the material in the previous chapters.
  • Chapter 14 looks at advanced synchronization methods, including non-blocking synchronization and parallel real-time computing.
  • Chapter 15 covers the advanced topic of memory ordering.
  • Chapter 16 follows up with some ease-of-use advice.
  • Chapter 17 looks at a few possible future directions, including shared-memory parallel system design, software and hardware transactional memory, and functional programming for parallelism.
  • These chapters are followed by a number of appendices.
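
Chapter 5's counting problem is concrete enough to sketch here. The following is a minimal illustration, not the book's own code (which builds on its own per-thread primitives): each thread increments only its own cache-line-padded slot, so updates never contend, and a reader obtains the total by summing all the slots. The identifiers, thread count, and padding size are illustrative choices.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* One counter per thread, padded to a cache line to avoid false sharing. */
struct counter {
	unsigned long count;
	char pad[64 - sizeof(unsigned long)];
} counters[NTHREADS];

static void *worker(void *arg)
{
	long id = (long)arg;

	for (int i = 0; i < 1000000; i++)
		counters[id].count++;	/* update path: cheap and uncontended */
	return NULL;
}

static unsigned long read_count(void)
{
	unsigned long sum = 0;

	for (int i = 0; i < NTHREADS; i++)
		sum += counters[i].count;	/* read path: sum every slot */
	return sum;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	printf("total: %lu\n", read_count());	/* exact only after all threads have joined */
	return 0;
}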
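
By contrast, here is a minimal lock-based sketch in the spirit of Chapter 7, using a POSIX mutex to serialize updates to a single shared counter; again the names and thread count are illustrative, not taken from the book.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long shared_count;	/* protected by count_lock */

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&count_lock);	/* enter the critical section */
		shared_count++;
		pthread_mutex_unlock(&count_lock);	/* leave the critical section */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	printf("count: %lu\n", shared_count);	/* always exactly 4000000 */
	return 0;
}

The trade-off between these two sketches previews much of the book: the per-thread version makes updates cheap but yields only an approximate total while updaters are running, whereas the mutex version gives an exact count at the cost of every updater contending for the same lock.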

Is Parallel Programming Hard, And, If So, What Can You Do About It?

by Paul E. McKenney (PDF) – 529 pages