2010-06-28 01:13 Rapid and Reliable Releases
Aivo @ Cybenetica gave me a link to the Rapid and Reliable Releases talk a few days ago. At around the 37-minute mark, Rolf Russell explains one thing he learned from observing a team:
one is the approach to getting to the automation
... i found really interesting and really valuable...
What they did first was they wrote Conan the deployer...
First it was just a shell script which printed the install instructions...
One nice thing about Conan the deployer was that they could automate in priority order
For a while the deployer was a mixture: sometimes it told you "go do this thing", sometimes it would do it for you
In the end they got... I'm not sure if they got to 100% automation, but they got really close
That was a real eye-opener for me, to first focus on repeatability, understanding the deployment, making it work reliably. Then second, focusing on automation.
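The pattern described above can be sketched in a few lines of shell. This is a hypothetical illustration, not code from the talk: each deployment step is either already automated or merely printed as an instruction for the operator, so steps can be automated one at a time, in priority order, while the script stays the single source of truth for the whole procedure.

```shell
#!/bin/sh
# Sketch of the mixed manual/automated deployer idea (hypothetical example).
set -e

manual() {
    # Not yet automated: print the instruction for the operator to perform.
    # (A real tool would pause here, e.g. with `read`, until the step is done.)
    echo "MANUAL STEP: $1"
}

automated() {
    # Already automated: announce and run the command.
    echo "RUNNING: $*"
    "$@"
}

# The deployment is one ordered list of steps; over time, `manual` calls
# get rewritten as `automated` ones, highest-value steps first.
automated mkdir -p /tmp/demo-app
manual "Copy the release tarball to /tmp/demo-app and unpack it"
manual "Edit the application config to point at the database"
automated touch /tmp/demo-app/deployed
```

The nice property is that repeatability comes first: even at 0% automation the script documents the exact procedure, and every step that gets automated shrinks the manual work without ever breaking the overall flow.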
I really love this approach. This is actually what we have done with the VSR installation scripting. First there were some ad-hoc installations here and there. Later we started documenting the installation in the wiki, in a form that a human could copy-paste into his shell. On the first round the installation took over 4 hours. Then, as the documentation got better and errors were fixed, it was down to something like 1.5 hours. Finally, now that we have turned the copy-paste instructions into scripts, I was able to install VSR on a vanilla Debian in 6 minutes. In 8 minutes I had visualizations up and running in my browser, using selected public sources. (I wasted a few minutes because I forgot to work around the fact that the host name did not have a DNS record.)
Taking a step back from deployment, the approach is more general. To keep our focus and to survive in a world of hundreds of requirements flying left and right, we tend to search for more and more evidence that something is worth implementing before building it. Jukke has preached for years about premature optimization, and during the past 6 months it has finally started to stick with me too.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. -- Donald E. Knuth, "Structured Programming with go to Statements", ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974 (see p. 268)
Every now and then I ponder whether we are doing premature optimization with our architectural flexibility. While seeking critiques of Knuth's argument I stumbled onto this:
Some computer scientist by the name of Donald Knuth once said,
- "Premature optimization is the root of all evil (or at least most of it) in programming."
Well, speed of course! At least that is the optimization Knuth refers to, and it is what developers typically mean when they use the term "optimize". But there are many factors in software that can be optimized, not all of which are evil to optimize prematurely. The key positive optimization that comes to mind is optimizing developer productivity. I hardly see anything evil about optimizing productivity early in a project. It is most certainly a healthy thing to do, hence the misleading title of this post.
I'll conclude that, at minimum, we are optimizing developer productivity, which should yield easier changes and better scalability over time.
-- jani 2010-06-27 22:21:19