2010-06-28 01:13 Rapid and Reliable Releases

Aivo @ Cybenetica gave me a link to the Rapid and Reliable Releases talk a few days ago. At around 37 minutes, Rolf Russell explains one thing he learned from observing a team:

I really love this approach. This is actually what we have done with the VSR installation scripting. First there were some random installations here and there. Later we started documenting the installation in the wiki, in a form that a human can copy-paste the instructions into a shell. On the first round the installation took over 4 hours. Then, as the documentation got better and errors were fixed, it was down to something like 1.5 hours. Finally, now that we have turned the copy-paste instructions into scripts, I was able to install VSR on a vanilla Debian in 6 minutes. In 8 minutes I had visualizations up and running in my browser, using selected public sources. (I wasted a few minutes because I forgot to work around the fact that the host name did not have a DNS record.)
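
To make the idea concrete, here is a minimal sketch of what such a script can look like once copy-paste wiki steps are turned into code. The package list, download URL, and paths below are hypothetical placeholders, not the real VSR steps; the point is only that each manual step becomes one command that is echoed and fails loudly.

```python
#!/usr/bin/env python3
"""Sketch of an install script built from former copy-paste wiki steps.

Package names, the release URL, and target paths are illustrative
placeholders, not the actual VSR installation procedure.
"""
import subprocess
import sys

# Hypothetical packages the service might need on a vanilla Debian host.
PACKAGES = ["postgresql", "nginx", "python3-venv"]


def run(cmd):
    """Echo a command, run it, and stop at the first failing step."""
    print("+ " + " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit("step failed: " + " ".join(cmd))


def main():
    run(["apt-get", "update"])
    run(["apt-get", "install", "-y"] + PACKAGES)
    # Fetch and unpack the application (placeholder URL and path).
    run(["wget", "-q", "https://example.org/vsr-release.tar.gz",
         "-O", "/tmp/vsr.tar.gz"])
    run(["tar", "-xzf", "/tmp/vsr.tar.gz", "-C", "/opt"])
    # Further steps (config templating, database setup, service start)
    # would follow the same pattern: one small step per command.


if __name__ == "__main__":
    main()
```

The gain over the wiki page is not the language, it is that every step is executable, ordered, and repeatable, so a fresh host goes from zero to running in minutes instead of hours.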

Taking a step back from deployment, the approach is more general. To keep focus and to survive in a world of hundreds of requirements flying left and right, we tend to search for more and more evidence that something is worth implementing. Jukke has preached for years about premature optimization, and during the past 6 months it has finally started to stick with me too.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. -- Donald E. Knuth, "Structured Programming with go to Statements", ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974 (p. 268)

Every now and then I ponder whether we are doing premature optimization with our architectural flexibility. While seeking critique of Knuth's argument I stumbled onto this:

I'll conclude that, at a minimum, we are optimizing developer productivity, which will yield easier changes and scalability over time.

-- jani 2010-06-27 22:21:19

