Mar 04, 2014
 

This post originally appeared on the Flux7 blog at http://flux7.com/blogs/benchmarks/littles-law/. It discusses the relationship between throughput and latency summarized by Little’s Law, which applies to any system with throughput and latency, whether a memory subsystem or an industrial assembly line. Read the original article below:
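As a quick illustration of the relationship the post discusses, here is a minimal sketch of Little's Law; the numbers are hypothetical and not taken from the original article:

```python
# Little's Law: L = lambda * W
#   L      = average number of items in flight (concurrency)
#   lambda = average arrival rate (throughput)
#   W      = average time an item spends in the system (latency)

def littles_law_concurrency(throughput, latency):
    """Return the average number of in-flight items implied by Little's Law."""
    return throughput * latency

# Hypothetical example: a memory subsystem completing 10 requests/ns
# with an average latency of 30 ns must sustain 300 outstanding requests.
print(littles_law_concurrency(10, 30))  # 300
```

The same identity can be rearranged: given a fixed latency, the only way to raise throughput is to keep more requests in flight.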

Continue reading “Understanding Throughput and Latency Using Little’s Law” »

Jul 16, 2012
 

Raspberry Pi, Mele A1000, MK802, and more: the market is filling up with these low-priced geek toys. I personally see a lot of potential here. These “devicelets” can do to hardware what apps did to software. Some readers may remember that I posted a tutorial last year on building a simple evaluation board out of an iPhone 3GS. Back then, the Pandaboard was the only way to get an ARM computer on the market, and it was never in stock. Now there are so many vendors and sellers that it has become difficult to choose. This post is a concise summary of all the available choices I have come across so far.

Continue reading “Which little PC should I buy? Raspberry Pi? Mele A1000? or …” »

Jun 30, 2012
 

Yet another hiatus. Sorry, I have been very busy with my job as a performance architect at Calxeda. I will try to post regularly again.

I have recently been interviewing candidates at Calxeda, my new employer. There are a few fundamental concepts I expect every engineer or CS major to understand, regardless of the position they are applying for. One of them is the difference between a channel’s throughput and its latency. It is surprising how many candidates get it wrong. In this post, I will not only explain latency and throughput using a simple analogy, but also hypothesize why, in my opinion, most people confuse the two.
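One way to see that the two are independent is a toy channel model; this is my own illustrative sketch, not the analogy from the full post:

```python
# A toy channel: 'width' items cross in lock-step, each taking
# 'transit_time' seconds. Latency is per-item and does not depend
# on width; throughput scales linearly with width.

def latency(transit_time, width):
    # One item still takes the full trip, no matter how wide the channel is.
    return transit_time

def throughput(transit_time, width):
    # Items completed per second.
    return width / transit_time

# Doubling the width doubles throughput but leaves latency unchanged.
print(latency(4.0, 1), throughput(4.0, 1))  # 4.0 0.25
print(latency(4.0, 2), throughput(4.0, 2))  # 4.0 0.5
```

This is why adding lanes to a highway moves more cars per hour without making any single car's trip shorter.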

Continue reading “Clarifying Throughput vs. Latency” »

Aug 07, 2011
 

Big-O gives us a tool to compare algorithms’ efficiency and execution times without having to write the code and run experiments. However, I feel that many people misuse the analysis unknowingly. Don’t get me wrong: I am also a big fan of analytic models and understand that they provide insights that cannot be found empirically. In the case of Big-O, however, I feel that it rests on underlying assumptions that were true when it was formulated but have since become false. Thus, we need to revise how Big-O is taught. This post explains why…

Continue reading “Why Big-O needs an update” »

Jul 02, 2011
 

A comment on Hacker News argues that our self-assessment quiz for computer scientists is in fact a self-assessment for computer engineers, since computer science is about computational complexity, not programming. At the same time, a popular article on the Elegant Code blog argues that software development and traditional engineering are fundamentally different. Now I am confused, because apparently programming is neither a science nor an engineering discipline. Then what is it? It has to be an art. Continue reading “Is Programming an Art or a Science?” »

Jun 29, 2011
 
Loop control flow

I got into a debate with a computer science professor a few months ago when I made the controversial blanket statement that “the code inside loop bodies is the only code that matters for performance.” Some context: I was discussing how multi-threading is about speeding up loops, and I do not care about straight-line code that is only going to execute once (or just a few times). My argument is that programmers do not write billions of lines of straight-line code; it is the repetition of code (via loops or recursion) that makes a program “slow.” In fact, I can argue that any time we wait on a computer program to do something useful, we are in fact waiting on a loop (e.g., grep, loading emails, spell checking, photo editing, database transactions, HTML rendering, you name it). It is a rather silly argument, but I would like to see some counterarguments and counterexamples. Question: Is parallel programming all about loops and recursion, or are there cases where code that executes only once is worth optimizing?

Please note that if code that executes once calls a function containing a loop, that counts as a loop, not straight-line code. Comments would be great, but at least take the time to cast your vote below.

Are loops the only code that matters for performance?


Jun 28, 2011
 

While talking to Owais Khan, a friend studying communication systems, I mentioned that multicore systems are becoming memory-bandwidth limited even though the bandwidth of the latest chips exceeds several GB/s. He was puzzled and then corrected my terminology, pointing me to a common mistake made by computer scientists. I decided to write about it and collect opinions from computer scientists here.

Continue reading “Quick Post: Memory Bandwidth? (or data rate)” »

Jun 26, 2011
 

After reading my post on the shortcomings of Amdahl’s law, Bjoern Knafla (@bjoernknafla), a reader of the Future Chips blog, suggested on Twitter that I add a discussion of Gustafson’s law. Fortunately, I had the honor of meeting Dr. John Gustafson in person when he came to Austin in 2009. The following are my mental notes from my discussion with him. It is a very simple concept that should be understood by all parallel programmers and by computer architects designing multicore machines.
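For reference, Gustafson's scaled-speedup formula can be sketched as follows; the example numbers are hypothetical:

```python
# Gustafson's law: S(N) = N - alpha * (N - 1)
#   N     = number of processors
#   alpha = serial fraction of the *scaled* workload
# Unlike Amdahl's fixed-workload view, the problem size grows with N,
# so the serial fraction stays a small share of the total work.

def gustafson_speedup(n_procs, serial_fraction):
    """Scaled speedup on n_procs processors."""
    return n_procs - serial_fraction * (n_procs - 1)

# Hypothetical example: 64 processors with a 5% serial fraction
# yield a scaled speedup close to 61x.
print(gustafson_speedup(64, 0.05))
```

The key design difference from Amdahl's model is the assumption that users scale the problem up to fill the machine, rather than solving a fixed problem faster.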

Continue reading “Parallel Programming: Amdahl’s Law or Gustafson’s Law” »

Jun 26, 2011
 

I received an interesting comment from ParallelAxiom, a parallel programming expert, on my post titled “When Amdahl’s law is inapplicable?” His comment made me rethink my post: I need an example to drive my point home. Thus, I have added an example to the original post, and I am adding this new post so that RSS subscribers can see the update as well. Please read the original article first if you have not already.

Continue reading “Parallel Programming: An example of “When Amdahl’s law is inapplicable?”” »

Jun 25, 2011
 

I see a lot of industry and academic folks use the term Amdahl’s law without understanding what it really means. Today I will discuss what Gene Amdahl said in 1967, what has become of it, and how it is often misused.
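For readers who want the formula itself on hand, here is a minimal sketch of the speedup bound commonly attributed to Amdahl; the example parameters are hypothetical:

```python
# Amdahl's law (fixed workload): speedup = 1 / ((1 - p) + p / s)
#   p = fraction of execution time that can be parallelized
#   s = speedup applied to that fraction (e.g., number of cores)

def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by s."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical example: with 95% of the work parallelizable,
# the overall speedup approaches 20x no matter how many cores are added,
# because the 5% serial fraction dominates.
print(amdahl_speedup(0.95, 16))
print(amdahl_speedup(0.95, 1_000_000))
```

Note that this is the popular fixed-workload formulation; the post discusses how it relates to what Gene Amdahl actually argued in 1967.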

Update (6/25/2011): On a suggestion from Bjoern Knafla, I have added a closely related article on Gustafson’s law.

Continue reading “Parallel Programming: When Amdahl’s law is inapplicable?” »