Vector Fabrics Blog

Software parallelization is next to impossible - myth or reality?

Multiprocessing is marketed as the next silver bullet for higher performance. So argues a paper from ST-Ericsson by Marco Cornero and Andreas Anyuru, summarized in an excellent excerpt by David Manners. Discussing processor makers' rationale for introducing multicore processors, the paper claims the move is pure marketing, since mobile software won't benefit beyond a dual core. It also repeats the old myth that software parallelization is hard, or even impossible. We'd like to bust this oft-repeated myth once and for all:

Software can be parallelized, and actually it's not that hard.

In the paper, Cornero and Anyuru reiterate the myth that parallelizing existing sequential software is virtually impossible. "Too bad that software parallelization is still one of the toughest challenges in computing in general, for which no satisfying general-purpose solution in terms of productivity has been found yet, despite the initial enthusiasm and for many intrinsically serial applications there are no solutions at all", says the paper. This sentiment gets repeated a lot in the software industry. Let's test the myth against two application domains the authors themselves mention: gaming and multimedia.

Indeed, many applications out there are sequential and were not written with multiprocessing in mind. The general story that "parallelization is hard" further sustains this status quo. But is it really true? Can you take a serial application and turn it into a parallel implementation that actually improves performance?

Well, that's exactly what we wanted to find out. We took the IdTech4 engine that underlies the once-again popular Doom3 PC game: half a million lines of code, almost all sequential, optimized for single-core processing. Our engineer Maurice Kastelijn achieved a 15% speedup on a quad core by parallelizing just a few critical loops in the rendering engine. A 15% improvement is equivalent to a generation upgrade of PC video cards. Did Maurice have to do extensive refactoring? No, he changed a few hundred lines of code: roughly one part in a thousand, or 0.1%, of the codebase. Did it take a team of game development and parallelization experts months of effort? No, it took just three weeks for one engineer with no prior experience with the code.

Is this an isolated case? Actually, it isn't. We also parallelized Bullet, a widely used physics engine of 90k lines of code. Similar statistics: one engineer, not previously familiar with the code, roughly two weeks of effort, a 50% performance increase on a 4-core, and about one hundred lines of code changed.

But parallelization is not just for gaming. We built an Android app to showcase OpenCV's "inpainting" algorithm (removing scratches from old photographs). You can run it yourself to see the results: a 3.6x speedup on a 4-core phone. The effort? Two weeks of analysis and a little bit of coding.

Myth busted.

So is multiprocessing a pure marketing stunt? No, code can really benefit from multiple cores, and parallelization is not as hard as you might think. But let's be frank: it's not a silver bullet either. Not all code is parallelizable. Your calendar and chat applications just won't benefit.

The myth that parallel programming is too difficult permeates the software industry, discouraging people from even trying to benefit from multiprocessing. Our experience shows it is indeed only a myth, and one that was long overdue for busting.

What are your experiences where parallelism paid off?

Posted in category: Company News on Wednesday, January 23, 2013 - 17:09
