Apr 30, 2013

Sorry for the delay in this post; I wanted to make sure it was well-researched. The final post in this series compares the hardware support for virtualization in the ARM and x86 worlds. As mentioned in the previous post, the biggest reason for ARM to add virtualization to its architecture is to be viable against x86 in the server market, so I think a comparison of x86 and ARM hardware support for virtualization is warranted.

Continue reading “ARM Virtualization – ARM vs x86 (Part 5)” »

Apr 8, 2013

In the last few posts we discussed the hardware support needed to provide virtualization. In this post, we look at how virtualization can empower the user. We’ll discuss the use cases we already see in the server and desktop space, as well as mobile-specific applications like big.LITTLE and lowering production costs for handsets.

Continue reading “ARM Virtualization – Applications (Part 4)” »

Aug 5, 2011
Source: http://smartincomeblog.com/what-i-learned-from-creating-an-iphone-app

In a weak moment last July, I paid $99 for an Apple Developer Account with the intent of learning iPhone app development. However, I didn’t use it for 11.5 months. When I realized two weeks ago that I was about to lose my investment, I decided to salvage it. It has actually been a great experience, and I don’t regret spending the week playing with iPhone apps. I have not become an expert by any means, but I think I have learned enough to have some opinions. I am writing this article to share what I learned, as it may interest other “traditional” computer scientists in exploring iOS.

Continue reading “iPhone App Development (for Old School Coders)” »

Jul 21, 2011

From the comments on my post criticizing Amdahl’s law, I got a very clear indication that awareness needs to be raised about the different types of serial bottlenecks that exist in modern parallel programs. I want to highlight their differences and explain why different bottlenecks ought to be treated differently, from both a theoretical and a practical standpoint.

Continue reading “Parallel Programming: Types of serial bottlenecks” »

Jul 8, 2011

Many academic professors talk about parallelism in the context of HDL languages. Having learned both Verilog and Pthreads, I have always felt that we can apply some of the lessons learned in hardware (which is inherently parallel) to parallel programming. ParC is based on this insight and is an impressive piece of work. I learned about ParC through Kevin Cameron’s comments on Future Chips. After some (healthy) debate with Kevin, I felt that ParC is a concept we should all be familiar with. I am hoping that Kevin’s insights will trigger some interesting debate.

When people say parallel programming is hard, they are correct, but to call it a new problem would be wrong. Hardware designers have been turning out parallel implementations of algorithms for decades. Back in the 1980s, designers moved up from designing in gates to using RTL descriptions of circuits, with synthesis tools for the hardware description languages (HDLs) Verilog and VHDL. In recent years, other methodologies like assertion-driven verification and formal methods have been added to help get chips working at first silicon.

Continue reading “Guest Post: ParC – Absorbing HDLs into C++” »

Jul 4, 2011

Writing parallel code is all about finding parallelism in an algorithm. What limits parallelism are the dependencies among different portions of the code. Understanding a program’s dependencies early on can help programmers (a) determine the amount of available parallelism, and (b) choose the best parallel programming paradigm for the program. In this post, I try to lay out a taxonomy of dependencies in loops and explain how it plays into parallel programming.

Continue reading “Parallel Programming: On Using Dependence Information” »

Jun 26, 2011

After reading my post on the shortcomings of Amdahl’s law, a reader of the Future Chips blog, Bjoern Knafla (@bjoernknafla), suggested on Twitter that I add a discussion of Gustafson’s law. Fortunately, I had the honor of meeting Dr. John Gustafson in person when he came to Austin in 2009. The following are my mental notes from my discussion with him. It is a very simple concept that should be understood by all parallel programmers and by computer architects designing multicore machines.
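For reference, the standard statement of the law (textbook formulation, not a quote from my notes): if a fraction s of the execution time on N processors is serial, the scaled speedup over running the same (scaled) workload on one processor is

```latex
% Gustafson's law: scale the problem size with the machine.
% s = serial fraction of the parallel execution time,
% N = number of processors.
S(N) = s + (1 - s)\,N = N - s\,(N - 1)
```

Unlike Amdahl’s formulation, the speedup here grows linearly in N, because the parallel portion of the work is assumed to grow with the machine.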

Continue reading “Parallel Programming: Amdahl’s Law or Gustafson’s Law” »

Jun 26, 2011

I received an interesting comment from ParallelAxiom, a parallel programming expert, on my post titled “When Amdahl’s law is inapplicable?” His comment made me re-think my post: I realized I must show an example to hammer home my point. Thus, I have added an example to the original post, and I am adding this new post just so the RSS subscribers can see the update as well. Please read the original article first if you have not already.

Continue reading “Parallel Programming: An example of “When Amdahl’s law is inapplicable?”” »

Jun 25, 2011

I see a lot of industry and academic folks use the term Amdahl’s law without understanding what it really means. Today I will discuss what Gene Amdahl actually said in 1967, what has become of his argument since, and how it is often misused.
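For reference, the formula usually attributed to that 1967 argument (the standard textbook statement, not a direct quote from Amdahl’s paper): if a fraction p of the execution time is parallelizable across N processors, the speedup is

```latex
% Amdahl's law: speedup with parallelizable fraction p on N processors.
S(N) = \frac{1}{(1 - p) + \frac{p}{N}},
\qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

The limit is the point people usually quote: even with unlimited processors, the serial fraction (1 − p) caps the speedup. The post below is about when this model does, and does not, describe a real program.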

Update: 6/25/2011: On a suggestion from Bjoern Knafla, I have added a closely related article on Gustafson’s law.

Continue reading “Parallel Programming: When Amdahl’s law is inapplicable?” »