Friday, November 9, 2007

So much for Optimization

Before I start, I would like to define the word "optimization" in this context. Optimization is such an overused word that without properly specifying the context or defining its meaning, it would be impossible to have a meaningful discussion on it.

I find there is no better way to clear up the ambiguity than by quoting the exact definition from an encyclopedia or dictionary. Optimization, in the context of this post, is

"the process of modifying a system to make some aspect of it work more efficiently or use fewer resources" -- Wikipedia, Optimization

So I won't be talking about other kinds of optimization, such as compiler optimization. It is the optimization of source code, by programmers, that I will mainly talk about.

Let me get this clear first: I am not against optimization per se. What I oppose is blind optimization without a real understanding of the whole behavior of your program, and second-guessing where your program's bottlenecks are. Why do developers need to optimize? Because they, or the users, feel that the program is slow, or consumes too many computer resources. Optimization itself is a noble goal: you want to make your users happier by creating a more responsive application. I have no quarrel with that. The only problem is, more often than not, premature optimization does not lead to a faster program; worse still, it makes the program harder to read and maintain. This is a prime example of good intentions creating havoc.

How can this be? The short answer: because programmers often don't know where to optimize, optimize too early, and optimize the wrong thing, and in the process make the whole application more difficult to understand. It may sound unbelievable, but quite a number of programmers I have encountered don't even know how to use Big O analysis to determine the running time of a subroutine. Not surprisingly, they are also very allergic to changing their code, or anyone else's, because they don't know whether those changes will affect the performance of the application. As time goes by, the whole code base degrades, and they either end up spending a large amount of time fixing their own problems, or passing those problems on to coworkers to fix.

Let me tell a little story about myself. I was responsible for designing and implementing a mesh quality checking algorithm. For those of you who don't know what a mesh is in the context of finite elements, here is a brief explanation. To obtain realistic calculations when analyzing the behavior of real-world objects, you break those objects into many small elements, calculate the characteristics of each, and then sum all the results to obtain the macro-level behavior. Each element is defined by nodes: a line element contains 2 nodes, a triangular element contains 3 nodes, and so on. One criterion for a good meshing algorithm is that the node list must not contain duplicated nodes. In other words, all the nodes must be unique.

How do you check whether there are duplicates inside your list? Easy: you just compare every element in the list against every other. Although this is the most obvious choice, it is also the most time-consuming algorithm, because the running time increases quadratically with the number of nodes! So if you need 1 minute to compare 10 nodes, you will need 4 minutes for 20 nodes, and 16 minutes for 40 nodes. It's pretty clear that as your list grows, the performance takes a drastic hit.
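To make the quadratic behavior concrete, here is a minimal Python sketch of the all-pairs check. The node representation (a plain coordinate tuple) and the function name are my own illustration, not code from the actual project:

```python
def has_duplicate_nodes_naive(nodes):
    """Compare every pair of nodes: about n*(n-1)/2 comparisons, O(n^2)."""
    n = len(nodes)
    for i in range(n):
        for j in range(i + 1, n):
            if nodes[i] == nodes[j]:
                return True
    return False
```

Double the node count and you quadruple the number of pairs, which is exactly the 1 minute / 4 minutes / 16 minutes progression above.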

Or you can use another method with a better running time. One way is to sort the list first and then compare each element only with the one adjacent to it. The running time in this case is much faster: the sort costs O(n log n), and the adjacent comparison is a single O(n) pass, instead of the O(n^2) all-pairs comparison.
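A sketch of that idea, under the same assumed tuple representation as above (in real mesh code the coordinates are floating point, so equality would normally be tested within a tolerance; exact comparison keeps the sketch short):

```python
def has_duplicate_nodes_sorted(nodes):
    """Sort first (O(n log n)), then one O(n) pass comparing neighbors."""
    ordered = sorted(nodes)  # tuples sort lexicographically
    for a, b in zip(ordered, ordered[1:]):
        if a == b:
            return True
    return False
```

The trick is that after sorting, any two equal nodes must sit next to each other, so one linear scan finds them all.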

The point is, mesh generation and verification algorithms are complicated; this is not your usual simple inventory application. You will hit a performance bottleneck when you have a large list. If you don't know how to optimize your code, you can wander here and there, tweaking your code so that it runs a tiny fraction faster at the expense of making it vastly harder to maintain, but at the end of the day you still get scolded by customers who are not satisfied with the poor performance of your application, and cursed by the coworkers who are forced to take over your code. Maximum effort, minimum results.

But if you know how to optimize, and you delay optimization until it's absolutely necessary, you will gain maximum benefit with minimum effort. Back to the example above: by changing just one function, I reduced the running time of a medium-sized project from 30 minutes to 10 minutes or less, without sacrificing the clarity of the program.

So, if someone came to tell me that I couldn't change any part of the program for fear of performance issues, then perhaps he needed to go back to the basics and relearn some Big O notation.

4 comments:

Philibert said...

We had a project where it took about 40 minutes to generate the output of the program (XSLT translations).

So we set out to optimize it, and with a few modifications, namely introducing a software cache of objects among other things, the total time came down to about 1 minute or so.
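For readers wondering what such an object cache might look like, here is a minimal Python sketch; the function and its body are hypothetical stand-ins, not Philibert's actual code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # repeated inputs are served from the cache
def translate(document_id):
    # Hypothetical stand-in for the expensive XSLT translation step.
    return "<output for %s>" % document_id
```

The payoff comes only when the same inputs recur often, which is exactly the kind of fact a profiler, rather than intuition, reveals.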

The important thing here is that we spent time on optimization when it hit us hard. And we optimized what had become important for the user at that point.

That does not mean writing sloppy code, but more often than not clarity and maintainability have to take precedence. And seemingly optimized code can sometimes be even worse. It is easy to underestimate what a CPU can do in a 1 ms time slice nowadays.

With experience you can sometimes sense ahead of time where optimization will be needed. But the thing is, running a profiler is the best way to know for sure, and most of the time your first guess is wrong. This can be because the language or the processor is different, so the same optimizations do not give the same results.

And this is where, I think, good programmers get the upper hand. They do not write sloppy code, but they know that only a small part of their work will need optimization. They get even better when they can tell which part while writing the code.

Anonymous said...

Interesting post... I am currently writing a library where performance will be important (along with functional correctness and maintainability, that is). But I only write the library; the bottlenecks will show up as soon as the customer (who is luckily in-house) uses it - so can I do the profiling and optimization only when the customer starts using it? And as soon as another team wants to use the lib as well, I have to redo the profiling and optimization again...
OTOH, "speculative" optimization seems evil as well.

So I wonder what's the correct way and time to optimize such a library?

Soon Hui said...

Philibert, I agree with your observation: no optimization should come before you run a profiler to determine the real bottlenecks.

But I have seen programmers write their code with optimization in mind; they wrote it in an obscure way because they believed they could gain some performance out of it. It happened that I needed to work side by side with them later in the project, and they justified their convoluted coding style in the name of optimization. This grieved me a lot. Did their optimization work? I don't know; no one knows.

It would be better to have a programmer who writes code with total disregard for optimization than one who always strives for it, because for the former we can always optimize as needed, whereas for the latter it is hard even to maintain the code, since the result of premature optimization is often obscure code.

Brent Miller said...

Measure everything. I wrote the same naive function 11 different ways and measured the results.

The execution time varied from 7 seconds to 34 seconds.
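As a rough illustration of that kind of measurement, here is a minimal Python sketch using timeit; the two variants and the data are my own stand-ins, not Brent's eleven versions:

```python
import random
import timeit

# Illustrative data: random 3-D nodes, so duplicates are effectively
# absent and both checks must scan everything (their worst case).
nodes = [(random.random(), random.random(), random.random())
         for _ in range(2000)]

def naive(ns):
    # O(n^2) all-pairs comparison
    return any(ns[i] == ns[j]
               for i in range(len(ns)) for j in range(i + 1, len(ns)))

def sorted_scan(ns):
    # O(n log n) sort plus one linear pass over adjacent pairs
    s = sorted(ns)
    return any(a == b for a, b in zip(s, s[1:]))

# Same workload, same number of runs: only the measured numbers
# settle the argument.
print("naive:  %.3f s" % timeit.timeit(lambda: naive(nodes), number=3))
print("sorted: %.3f s" % timeit.timeit(lambda: sorted_scan(nodes), number=3))
```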