Jun 14, 2011
 

Parallel programming consists of four distinct phases: finding the parallelism, writing the code, debugging the code, and optimizing the code. In my opinion, frameworks like Apple's Grand Central Dispatch and Intel's TBB only help with writing the code; they do not help with finding the parallelism, debugging, or optimization (e.g., false sharing, thread waiting, etc.). I think the difficulty of finding the parallelism, which can be an insurmountable barrier for many inexperienced parallel programmers, is often underestimated. In this post, I try to explain this challenge using a couple of parallel programming puzzles.
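
To make the distinction concrete, here is a minimal sketch (my own illustration, not code from the post) of the "writing the code" phase using TBB's `parallel_for`. The framework expresses the parallel loop cleanly, but deciding that the iterations are independent, i.e., finding the parallelism, was entirely the programmer's job before this line was written.

```cpp
#include <tbb/parallel_for.h>
#include <vector>

// Scale every element of a vector in parallel.
// TBB handles the thread pool and work splitting ("writing the code"),
// but the insight that each iteration is independent came from the programmer.
void scale(std::vector<float>& data, float factor) {
    tbb::parallel_for(std::size_t(0), data.size(), [&](std::size_t i) {
        data[i] *= factor;  // independent iterations: the parallelism that had to be "found"
    });
}
```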

Continue reading “Parallel Programming: Why new frameworks only solve a part of the problem?” »

May 20, 2011
 

Multi-cores are here, and they are here to stay. Industry trends show that each individual core is likely to become smaller and slower (see my post to understand why). Improving the performance of a single program on a multi-core requires that the program be split into threads that can run on multiple cores concurrently. In effect, this pushes the problem of finding parallelism in the code onto the programmers. I have noticed that many hardware designers do not understand the challenges of multi-threaded (MT) programming, since they have never written MT apps. This post shows them the tip of this massive iceberg.
Update 5/26/2011: I have also written a case study for parallel programming which may interest you.
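
As a rough illustration of what "splitting the program into threads" means in practice, here is a small sketch (my own, under assumed names like `sum_parts`, not code from the post) that divides one loop across plain `std::thread` workers:

```cpp
#include <thread>
#include <vector>

// Sum a vector by giving each thread a contiguous chunk.
// Each thread writes only to its own slot in `partial`, so no locking is needed.
void sum_parts(const std::vector<int>& v, std::vector<long long>& partial,
               unsigned nthreads) {
    std::vector<std::thread> workers;
    std::size_t chunk = v.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == nthreads - 1) ? v.size() : begin + chunk;
        workers.emplace_back([&, begin, end, t] {
            long long s = 0;
            for (std::size_t i = begin; i < end; ++i) s += v[i];
            partial[t] = s;  // private result slot per thread
        });
    }
    for (auto& w : workers) w.join();
}
```

Even this tiny example hides a performance trap: adjacent entries of `partial` can land on the same cache line, so the threads may false-share it, which is exactly the kind of issue that frameworks do not find or fix for you.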

Continue reading “What makes parallel programming hard?” »