Jul 24, 2011
 

In this post, I describe pipeline parallelism, a powerful method to extract parallelism from loops that are difficult to parallelize otherwise. Pipeline parallelism is underrated because of a common misconception that it only applies to "streaming" workloads, a claim that is both vague and misleading. I discuss the uses, trade-offs, and performance characteristics of pipelined workloads.

Continue reading “Parallel Programming: Do you know Pipeline Parallelism?” »

Jul 22, 2011
 

I feel very excited writing this 51st post on the Future Chips blog today. I started this blog two months ago knowing very little about how blogs work, and I must admit that the results have far exceeded my expectations (thanks to all the readers). I just want to share some stats to (1) show how it's been a very encouraging start, and (2) inspire other computer scientists in academia and industry to share their thoughts on the net more freely. I will keep it very brief.

Continue reading “50 posts and …” »

Jul 21, 2011
 

The comments on my post criticizing Amdahl's law made it clear that awareness needs to be raised about the different types of serial bottlenecks that exist in modern parallel programs. I want to highlight their differences and explain why different bottlenecks ought to be treated differently, from both a theoretical and a practical standpoint.

Continue reading “Parallel Programming: Types of serial bottlenecks” »

Jul 11, 2011
 

I am sorry for the hiatus. I had some business to take care of, which is why I was unable to write for a few days. I will be writing regularly again. As a comeback post, I decided to create a small quiz on the microprocessor industry. It has a few questions about the recent history of microprocessors. I am hoping that you will enjoy the questions and learn from them at the same time. Let us know how you did in the comments!

Continue reading “Quiz: How well do you know CPUs? (Fixed)” »

Jul 08, 2011
 

Many professors talk about parallelism in the context of HDLs. Having learned both Verilog and Pthreads, I have always felt that we can apply some of the lessons learned in hardware design (which is inherently parallel) to parallel programming. ParC is based on this insight and is an impressive piece of work. I learned about ParC through Kevin Cameron's comments on Future Chips. After some (healthy) debate with Kevin, I felt that ParC is a concept we should all be familiar with. I am hoping that Kevin's insights will trigger some interesting debate.

When people say parallel programming is hard, they are correct, but to say it is a new problem would be wrong. Hardware designers have been turning out parallel implementations of algorithms for decades. Back in the 1980s, designers moved up from designing in gates to using RTL descriptions of circuits with synthesis tools for the hardware description languages (HDLs) Verilog and VHDL. In recent years, other methodologies like assertion-driven verification and formal methods have been added to help get chips working at first silicon.

Continue reading “Guest Post: ParC – Absorbing HDLs into C++” »

Jul 05, 2011
 

Hardware prefetching is an important performance-enhancing feature of today's microprocessors. It has been shown to improve performance by 10-30% without requiring any programmer effort. However, it is possible for a programmer to throw away this benefit by making naive design decisions. Avoiding the common pitfalls only requires the programmer to have a 10,000-foot view of how a hardware prefetcher works. Providing this view is the goal of my post.

Continue reading “What programmers need to know about hardware prefetching?” »

Jul 04, 2011
 

Writing parallel code is all about finding parallelism in an algorithm. What limits parallelism are the dependencies among different code portions. Understanding the dependencies in a program early on can help programmers (a) determine the amount of available parallelism, and (b) choose the best parallel programming paradigm for the program. In this post, I try to lay out a taxonomy of dependencies in loops and how it plays into parallel programming.

Continue reading “Parallel Programming: On Using Dependence Information” »

Jul 02, 2011
 

A comment on Hacker News argues that our self-assessment quiz for computer scientists is in fact a self-assessment for computer engineers, since computer science is about computational complexity, not programming. At the same time, a popular article at the Elegant Code blog argues that software development and traditional engineering are fundamentally different. Now I am confused, because apparently programming is neither a science nor an engineering discipline. Then what is it? It has to be an art. Continue reading “Is Programming an Art or a Science?” »