Past Topics

Full Entries

Congratulations Iceland!


Iceland had the fourth lowest infection rate of the period following a long period of improvement. - Microsoft Security Intelligence Report, Volume 14, p. 41.

We, as well as CERT Finland and F-Secure, have been spreading the word about "Mostly Harmless Finland" for a while now. It is time to start looking at the results in other countries that have adopted the Finnish feeder-proxy-cleaner model. The Microsoft Security Intelligence Report provides interesting data about infection rates in different countries. The Microsoft Security Intelligence Report (SIR) analyzes the threat landscape of exploits, vulnerabilities, and malware using data from Internet services and over 600 million computers worldwide.

Depressing Starting Point


Picture: Malice is among us.

The world is full of abuse. Are we beyond hope?

SIR data plotted

It's a Journey, Not a Destination - we can't get rid of all the malice, but it seems that we are heading in the right direction, and at a nice speed.

The graph below shows a few countries that have adopted the feeder-proxy-cleaner model. Please note the significant drop in Iceland, one of the countries that adopted the model around 2011. The graph also contains the worldwide average for comparison.

For comparison, let's have a look at some South American countries. Please notice that the scale is a bit different from the previous graph.

What We See

The data is based on public sources. Adding a few non-public sources, such as ShadowServer, would yield 10-100x more events for analysis. Surprisingly, the data rarely overlaps.

The visualization shows the number of unique IPs in the reports, grouped by GeoIP country code and type of malicious activity. The time window is 7 days.

Picture: Where are your bots, Iceland?

Few Example South American Countries

During the past 7 days, South America has had a wider variety of malicious activity types. Furthermore, issues come in greater numbers.


Picture: Bots like to live in sunny South America.

Once a country has a good process for fire department work, all sorts of other benefits start to emerge. For example, the country is better prepared for larger-scale issues. See the DNSChanger blog entry for an example.

-- jani 2013-04-18 13:19:57

“Time flies like an arrow; fruit flies like a banana.” -- Groucho Marx


Staying on top of the current Internet abuse situation mandates that you follow your sources as close to real time as possible. On the left you can see a spanking new ZeuS binary, observed by a source on 2012-10-01. They do not report the actual time of their observation, only a date, which is why we record an accurate timestamp for each observation we make. This is recorded as the observation time in our abuse tracking system. Since time is an important asset not to be wasted, we can stream the data in real time to the endpoint recipient. We don't want to sit on it or mull it over; rather, we package the observation with additional observations through automatic augmentation. This action in turn is recorded as the attribution time. Some sources are meticulous about time and do inform you about the exact time of their observations, which we record as the source time. Since time is a difficult thing to handle properly, we have adopted an ISO8601-ish time format of YYYY-MM-DD HH:MM:SS UTC.
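For illustration, producing that format in Python is a one-liner; the function name below is ours, not the tracking system's:

```python
from datetime import datetime, timezone

def observation_timestamp(moment=None):
    # Format a moment in the ISO8601-ish "YYYY-MM-DD HH:MM:SS UTC"
    # form; defaults to the current wall-clock time in UTC.
    moment = moment or datetime.now(timezone.utc)
    return moment.strftime("%Y-%m-%d %H:%M:%S UTC")
```

Keeping everything in UTC sidesteps the usual daylight-saving and timezone ambiguities when correlating observation, attribution and source times.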


ZeroAccess, anyone?

Following the times of course gives you the opportunity, or challenge, of evolving the sources you follow to suit the need. For a while I've seen everybody hyping the ZeroAccess malware and producing map views detailing infections. I knew we had some observations on the issue from the public sources, so I dug up the AbuseHelper source bot data and modified the bot to classify the findings with this evolved threat. On the left, you can see the latest hour's worth of sightings of ZeroAccess infections by this single source. So in essence, knowing your sources and what they provide is the key to gaining Abuse Situation Awareness in a timely manner.
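The bot modification boils down to a tagging rule of roughly this shape. This is plain Python for illustration; the actual AbuseHelper bot API and event field names are not shown here:

```python
def tag_zeroaccess(event):
    # Mark events whose free-form description mentions the family,
    # so downstream views can filter on malware=ZeroAccess.
    if "zeroaccess" in event.get("description", "").lower():
        event["malware"] = "ZeroAccess"
    return event
```

Because events are free-form key-value pairs, adding a new classification is just adding a new key; existing consumers ignore keys they don't know.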

I would like to take this opportunity to thank two of the numerous public sources which AbuseHelper uses to gather information about abuse, namely and AutoShun. Without their efforts, the anti-abuse work of many a CSIRT team would be much more difficult.

-- svimes 2012-10-01 09:58:21

2012-07-13 20:31 DNSChanger servers down - no biggie says CERT Finland


DNSChanger malware took over hundreds of thousands of machines so that criminals could redirect the victims to fraudulent sites. The malicious DNS servers were taken down, and the FBI was permitted to temporarily host legitimate replacement services to keep the victim machines working. When the permit expired, hundreds of thousands of machines were still infected and could potentially stop operating properly. Nobody knew exactly how many critical infrastructure services were running on infected machines. While the security community was a bit concerned and the press was predicting doomsday, CERT Finland was not worried. Why was that?

What is DNSChanger?

In case you haven't run into DNSChanger previously, it was a piece of malware which changed the host's name server settings so that the host used malicious DNS servers to look up where it should connect. The idea was that the criminals could redirect victims to fraudulent sites as they pleased. Luckily, the malicious servers were taken down. The bad news was that without those servers available, infected machines would fail to work. The temporary solution was to allow the FBI to take over and run the service so that infected hosts could be disinfected. The final deadline was 2012-07-09, after which supposedly some minor things like the Doom of the Internet could occur. Well, the Internet survived again, so we can get back to the original topic.

Global Situation when FBI Lost its Permit

So, how did the cleanup go? Here is the data:


So after several years' time to clean up infected machines, DNSChanger was still present on hundreds of thousands of computers. Over time, the number of infected machines was cut in half.

Zeroing in to Finland

CERT Finland on DNSChanger's national impact:

  • DNSChanger didn't cause problems for Finnish users. CERT Finland has received information about DNS traffic from Finland to the malicious DNS servers through its international security community contacts. This information is processed and redistributed to the ISPs with CERT Finland's AutoReporter service. Initially Finland produced 300 observations about infected routers and computers. At the moment (January 2012) Finland produces about 100 observations. On a global scale, 350 000 observations are made every day.

    Original text from CERT Finland's web pages and Google's translation.

Over time, infections declined from the initial 300 to a pitiful 20 or so (source). On top of that, CERT Finland could say with fair confidence: ''Shutting down the DNSChanger name servers might be a problem - but not in Finland''. So compared to the global drop to approximately 50%, I'd say the Finns did a pretty good job with their drop to approximately 7% of the initial numbers. They've totally deserved their stickers:


But that was not my main point. The main points are:

  • CERT Finland's AutoReporter service dealt with the problem in a routine manner: information about DNSChanger infections just started flowing in from the security community, and out to hundreds of network owners.

  • They had the capability to observe whether the number of infections was increasing or decreasing.
  • When the so-called doomsday was getting closer, they could estimate what the impact would be if the remaining infected hosts stopped working.

All because of a systematic approach to collect and forward abuse information from the ones who know to the ones who need to know.

If you are a national actor tasked with protecting your national critical infrastructure or citizens, a good place to start is to ask yourself these questions:

  • Do you systematically collect the information provided by the security community?
  • Can you automatically redistribute this information to the ones who can mitigate the problem?
  • Do you have the data to observe if the cleaners are doing their part?

  • Do you have the data to estimate the long term trends in your country?

If the answer to all of those is currently no, don't worry. CERT Finland, being the first, has been building the capability since 2006. Nowadays there are tools and services available from yours truly to get you started in a few months. Just drop us a mail and we'll show you how.

-- jani 2012-07-13 20:29:10

2012-05-10 12:59 Gone Phishing

Categorilla view of Phishing data over a three hour period

Since I'm not big on fear, uncertainty and doubt, I wanted to contribute some actual observed facts related to phishing and enterprise targets. The picture above represents three hours' worth of phishing data retrieved from Phishtank and categorized through Categorilla. Categorilla is my favorite VSRoom visualization, since it enables you to quickly assess and refine your observations through a dynamic matrix. The X axis in this picture contains Phishtank phishing events organized by country code, and the Y axis represents the phishing targets for the same data. The numbers are the number of events for each country code versus target over the three-hour period. Apart from the throwaway category Other, many of your bigger and smaller brand names are constantly present in the Phishtank data. Naturally, the most lucrative targets for identity theft seem to be related to services which move money in one form or another, such as PayPal, Habbo Hotel (Sulake Corporation) or Santander UK. My intention here is not to play the blame game, but rather to point out the fact that the Internet is full of great non-profit organizations which publish this information. Our job, on the other hand, is to help organisations benefit from this data through the community project AbuseHelper, as well as commercial offerings such as AbuseSA, Abuse Situation Awareness.

-- svimes 2012-05-10 13:33:04

2011-12-27 10:06 Open Data (Peto-Media) and Open Source (VSRoom) equals basic level situation awareness

Thanks to a few critical-infrastructure-related open data sources, establishing basic Finnish situation awareness is not such a difficult task. Below is what was needed to produce the information presented (there is actually more data; the screenshots below are just examples).

  • Some prior work, such as by Peto-Media to get basic info to the web

  • A generic information collection and visualization platform, namely Virtual Situation Room

  • A non-profit Finnish group called JKRY (organized net users, free translation), who took the platform and set it up to monitor rescue services and Finnish national railway-related announcements.

  • 30 minutes for me to write this blog post.

Now, what if we threw Elisa's service disruption map and Sonera's service disruption info into the mix?

Rescue service events

The number of rescue service events started rising at 2011-12-26 00:00, reaching its peak at 08:00. Towards the night the number of events fell, only to rise back up the following morning at 2011-12-27 08:00:


Geographical distribution

As of 2011-12-27 10:00, the events were mostly in southern Finland. Northern Finland got some scattered hits over the period:


As of 2011-12-27 12:00 there were a number of events in Kokkola/Oulu/Kajaani region:


Types of events and severity/size

Most events were classified as small rescue service operations (vahingontorjunta, i.e. damage control):


One medium-sized rescue service event in Inkoo:


-- jani 2011-12-27 10:14:19

2011-11-24 15:32 Well hello there

I have returned. Let's breathe some new life into our analyzer.

-- slougi 2011-11-24 15:33:48

2011-08-11 20:40 helping to mitigate website hacks

The RTBF news channel covered a story in Belgium about an increased number of hacked Belgian websites. Watch it here, fast forward to about 26 minutes. is helping Belgian key resources, critical information providers and the Belgian public to protect their IT infrastructure. presentation.

We got additional kicks from this story, as David from mentioned that a tool familiar to us is shown in the story. Congrats to AbuseHelper on its TV premiere! However, it looks like AbuseHelper was not the only familiar piece in the story:


Christian being interviewed.

They are using some sort of encryption in the interview,
so I have no clue what they are talking about. Thus, I'll settle for some additional pics:


David hacking at Belnet.

David was the first one to join the AbuseHelper community. By now, he has done AbuseHelper trainings and presentations on various occasions. On the left there is an open shell, probably some bot being fine-tuned. The right window looks like an XMPP client, which displays a number of friendly bots working on behalf of


This looks like a diff between AbuseHelper's default runtime config and a new config David is setting up.

The people at are out to make a difference. While the rest of us are busy being the superheroes of our own lives, doing advanced and ground-breaking whatnot, CERTs such as are doing their part in keeping the Internet clean, in the spirit of the true Bicycle Repair Man. Just because someone has to.

-- jani 2011-08-11 18:40:55

2011-07-14 Code Snippets 3000 Turbo: Subnets of Subnets

Juhani Eronen from CERT-FI was doing some Python coding, and he XMPP'd (Jabbered? Asked?) me whether I had something that would give out subnets inside subnets. For example, all /24 subnets contained in I didn't, but Jani was no doubt concentrating on something very important instead of guarding my productivity.

    import struct
    import socket

    def subnets(ip, original_bits, final_bits):
        # Only sensible prefix lengths: 0 <= original <= final <= 32.
        if not 0 <= original_bits <= final_bits <= 32:
            return

        # Dotted quad -> 32-bit integer.
        ip_num, = struct.unpack("!I", socket.inet_aton(ip))

        # First address of the /original_bits network, one past its last
        # address, and the distance between consecutive /final_bits subnets.
        ip_start = ip_num & (((1 << 32) - 1) ^ ((1 << (32 - original_bits)) - 1))
        ip_end = ip_start + (1 << (32 - original_bits))
        ip_jump = 1 << (32 - final_bits)

        while ip_start < ip_end:
            yield socket.inet_ntoa(struct.pack("!I", ip_start)), final_bits
            ip_start += ip_jump

As always, comments and documentation are for fools! And IPv6 support! And probably correctness, too! Juhani didn't ultimately need this code, but there you have it.

Addendum: Turns out Juhani needed something that checks whether an IP address is contained by a subnet. When all you have is a hammer everything becomes a nail:

    def check(ip, subnet_ip, subnet_bits):
        # An address is inside the subnet iff masking both down to
        # subnet_bits yields the same network.
        return (set(subnets(ip, subnet_bits, subnet_bits))
                ==
                set(subnets(subnet_ip, subnet_bits, subnet_bits)))

Kids, please use something more to the point instead of... this. And stay in school.

-- jviide 2011-07-14 11:16:21

2011-07-13 The Tale of Two Visualizations


Today F-Secure's Mikko Hyppönen stepped on the stage of the TEDGlobal 2011 conference to give a talk (see the video) about the next computer-virus-assisted end of the world. Judging by the Twitter response, the talk was well received. No surprises there - Mikko has received the "Best Educator in the Industry" award, given every ten years.

Jani has long had this silly dream of being part of the TED phenomenon, so he contacted Mikko and offered my lifeblood our expertise to create a visualization to provide some whiz-bang for the talk. Mikko was intrigued. At the time none of us had any idea what the visualization should be ABOUT, but what the hey. Last Friday we finally started actually creating something visible. As luck would have it, Juhani Eronen from CERT-FI had some interesting anonymized data produced with AbuseHelper, yearning for visual flair: March-June 2011 data about network abuse, such as phishing, spamming and malware activity, directed to and from Finland. This data was collected from various sources, and from it we could infer approximately when the incidents were finally solved.

A Python Walked into a Bar... Chart

Our first idea was to create a map visualization. As always. The animation was supposed to show how incidents were popping up around the world and how fast they were taken care of, somehow. On the other hand, at least Jani and I have been talking a lot about how map visualizations are, like, so 2010. In this case especially we felt that throwing the stuff on a map wouldn't provide added value, as the geographical incident distribution was pretty limited. We wanted something fresh and new! Like bar charts!

Indeed, the resulting visualization, created with Python and PyQt4, is a pretty basic bar chart where the horizontal axis represents time (the beginning of March on the left, the end of June on the right) and the vertical axis represents the count of new incidents that appeared to do something nasty at that point in time. One bar is about two days; one line across the vertical axis is about 1000 incidents. Now, when the animation starts running, you can see how unhandled incidents (red) are detected and then turn into handled ones (grey). In the end we also show the cumulative amount of work still left at each point in time. A sort of "incident debt", if you will.

The original, higher definition video can be downloaded here.

The process of creating the video actually went pretty smoothly. It's probably the first time ever that a visualization turned out exactly like I first imagined it in my head. But, alas, the above video isn't quite finished, missing things like labels and legends. That's because...

2011 Is the New 2009

On Monday we presented the bar chart video to Mikko. Turns out there were two factors that made the animation unsuitable: it would probably take some time to explain what's actually happening, what with the weird double time dimensions and the sudden shift to cumulative charting. The other, arguably worse, problem was that it had nothing to do with Mikko's talk. Bugger.

So we sent Mikko a link to this old visualization that had been hanging around unpublished since 2009. This "ping pong view", created from data provided by Hillar Aarelaid from CERT-EE, depicts internet criminals moving their servers around the world and from jurisdiction to jurisdiction when they are about to get caught. Simple, fun, effective, and immediately illuminating even if you don't know the nitty-gritty technical details. Mikko liked it, Jani dug up the original data, and I produced a remastered 720p version. BEHOLD:

Check out the original HD video here.

Apparently Mikko showed this during his talk. I hope. Otherwise this blog entry is just sad.

Now What?

We actually liked the bar chart, and have gotten some great feedback about it. It certainly needs more work, but its double time aspect shows the changing situation in a nice way. You can divine some interesting stuff from it (left as an exercise for the reader or Jani), especially if you consider that Finland is supposedly "mostly harmless" in the global ranking of network doomness.

This was also a great reminder for us to keep it simple. In this case a map was a perfectly fine option; it tells a story and reinforces the point Mikko was making. That's all we could ask for, really.

-- jviide 2011-07-13 21:13:56

Media Tracking

2011-04-29 09:45 AbuseHelper added to ENISA's Clearinghouse list


ENISA has added AbuseHelper to their Clearinghouse for Incident Handling Tools list.

This is a pilot site for a proposed collection of tools, and guidelines for their use, intended for incident handling teams. Information on this site reflects the experience of a number of European CSIRTs, working together as a project in the framework of TERENA's Task Force TF-CSIRT. With this, the project aims to create a repository of information about tools that are actively used and supported by active CSIRTs.

For us, Clearinghouse is not just a list of tools. As you might remember, one of our goals has always been to bring the incident handling community even more tightly together. Previously a lot of abuse reporting has happened with in-house built tools, and that has made it difficult to share contributions within the community. With AbuseHelper, we sought ultimate modularity, so that the tool would fit each team's process, not the other way around. With common tools, incident handlers can focus more on their work, instead of writing tools from scratch to be able to do it. Obviously, Clearinghouse is one effort towards the same goal, so our AbuseHelper team gets some extra kicks from being part of the list. :)

-- jani 2011-04-29 07:55:45

2011-03-31 21:09 VSRoom under-the-hood stuff + a couple of videos


A colleague from another company suggested that we should have material which also describes how VSRoom actually works. While waiting for the release of such information to the public web, I wrote an email. Then I thought - why not share this info with our blog readers:


That for sure is not informative but it was fun. :)

This is a bit more informative, a brief info on the UI:

As a side note, a quick glimpse of what happens under the hood, just for the fun of it.

The UI just joins a selected XMPP room and visualizes the events it sees, if it can. Events are just free-form key-value pairs, allowing any sort of data to flow around. The events look like this:

tix.sanitizer 9:55:43
customer=RIA 1, sector=comm, end=1301597743, longitude=24.7544715, service=data, area=Tallinn, inmbs=49, description=2011-03-31 21:55 - TIX - utilization: in/out %: 5/0 - RIA 1 , outmbs=907, port=GigabitEthernet1/0/6, subtype=exchange, asset=port: RIA 1, status=0, oututilization=0, inutilization=5, latitude=59.4388619, organization=TIX, start=1301597743, type=utilization, id=8b7d944c1725958889eace886153ad2a, source=

(tix.sanitizer is a bot which sanitizes the output of the actual tix bot; tix is a bot which reads public Tallinn information exchange statistics.)
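Parsing such a line back into key-value pairs is nearly a one-liner. This naive sketch assumes values contain no ", " sequences, which the description field above actually violates, so treat it as illustration only:

```python
def parse_event(line):
    # Split "k1=v1, k2=v2, ..." into a dict of free-form pairs.
    pairs = (part.split("=", 1) for part in line.split(", ") if "=" in part)
    return {key.strip(): value.strip() for key, value in pairs}

event = parse_event("customer=RIA 1, sector=comm, inmbs=49")
```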

Then see the classification view in the second video. In that example the user creates a new classification view, which classifies events by sector & service (X) and type (Y). The color can be just a counter: the more events fall into a specific category (cell), the more colorful the cell is. You can also assign value functions for the visualization (color density), like max(inmbs) or sum(status), etc. (status being the collector bot's opinion on how healthy the state is, 0 being normal and higher values implying a more critical issue).
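The classification described above amounts to a pivot with a pluggable value function. A sketch of the idea (the helper names here are ours, not VSRoom's):

```python
from collections import defaultdict

def classify(events, x_key, y_key, value_fn=len):
    # Group events into (X, Y) cells and reduce each cell with
    # value_fn; by default the cell value is a plain event count.
    cells = defaultdict(list)
    for event in events:
        cells[(event.get(x_key), event.get(y_key))].append(event)
    return {cell: value_fn(group) for cell, group in cells.items()}

events = [
    {"sector": "comm", "type": "utilization", "inmbs": 49},
    {"sector": "comm", "type": "utilization", "inmbs": 907},
]
# Cell colour as a plain count:
counts = classify(events, "sector", "type")
# Or a value function such as max(inmbs):
peaks = classify(events, "sector", "type",
                 value_fn=lambda group: max(e["inmbs"] for e in group))
```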

That is one example of how extremely flexible VSRoom is regarding what data flows in the XMPP network and how.


BTW, that event example was just a copy-paste from an XMPP chat client. As we use XMPP, everything from bot presence to command & control to event streams is observable via your favourite chat client.

-- jani 2011-03-31 19:16:56

2011-03-25 11:46 Malware Analysis with Clarified Analyzer

Have a look at a nice video where Lari Huttunen from Codenomicon uses Analyzer for malware analysis. I just read a nice (Finnish) summary about different types of scams at Sulava's blog, by Antti Savolainen. It made me realize that the video Lari made also has some educational value for the general public - it shows in practice how one Fake-AV product works. What pushed me over the line to blog was Mikko Hyppönen's (F-Secure) tweet highlighting Lari's work (with Analyzer ;).

In the video, Lari pretty quickly confirmed the dropsite, the malware domain in use, and the potential objective of the scam, just by observing the network behaviour. This piece of software happened to utilize a "Fake AV" scam to collect credit card information.

-- jani 2011-03-25 11:54:07

2010-11-01 08:35 Bredolab botnet takedown


Picture: A screenshot from Dutch mainstream
television. Clarified Analyzer is used to help with botnet analysis.

A couple of days ago there was a huge Bredolab botnet takedown in the Netherlands, and Clarified had the honor of helping the THTC (Team High Tech Crime) by providing Clarified Analyzer for botnet analysis. This is yet another example of how Clarified Analyzer can help when you have to discover what is actually going on in the network - and do so in a really neat way. :)

The Bredolab takedown was also a great example of the power of co-operation. The THTC took down 143 C&C servers with the help of the Dutch Forensic Institute, the internet security company Fox IT and GOVCERT.NL, the Dutch computer emergency response team, and with the complete cooperation of LeaseWeb, the largest hosting provider in the country, on whose IP space the servers were hosted.

-- turmio 2010-11-01 08:36:43

2010-10-12 16:22 How do we organize ourselves?


We have been quite busy with deployments and thus have omitted some updates on our web pages, such as the personnel list. Then something happened:

  1. MikaR forced Mikko to add Lauri Pokka (all live in Oulu) to the list.

  2. That forced me to complete the long-pending task of adding Sauli to the list (we are in Helsinki).

  3. That made me want to highlight the fact that we seem to have been able to avoid a tall vertical organization (marketing, sales, development, post-sales, etc.). Instead we have teams, and owners for cases (sales/deployment).
  4. After writing that, I felt the urge to document the basic team structure also.
  5. Now the People page also reflects team membership.

  6. Finally, I decided to do a small analysis on who-is-doing-what. (You know, we are such a large organization that its structure definitely demands constant evaluation.)

Here are the results. Do you like ordered or unordered better?



-- jani 2010-10-12 13:32:41

2010-06-28 01:13 Rapid and Reliable Releases


Aivo @ Cybenetica gave me a link to the Rapid and Reliable Releases talk some days ago. At around 37 minutes, Rolf Russell explains one thing he learned observing a team:

  • One is the approach to getting to automation

  • ... which I found really interesting and really valuable...

  • What they did first was they wrote Conan the deployer...

  • First it was just a shell script which printed the install instructions...

  • One nice thing about Conan the deployer was that they could automate in priority order

  • For a while the deployer was a mixture: sometimes it told you "go do this thing", sometimes it would do it for you

  • In the end they got... I'm not sure if they got to 100% automation, but they got really close

  • That was a real eye-opener for me, to first focus on repeatability, understanding the deployment, making it work reliably. Then second, focusing on automation.

I really love this approach. This is actually what we have done with the VSR installation scripting. First there were some random installations here and there. Later we started documenting the installation in the wiki, in a form that a human can copy-paste into his shell. On the first round the installation took >4h. Then, as the documentation got better and errors were fixed, it was something like 1.5h. Finally now, as we turned the copy-paste instructions into scripts, I was able to install VSR on a vanilla Debian in 6 minutes. In 8 minutes I had visualizations up and running in my browser, using selected public sources. (I wasted a few minutes because I forgot to work around the fact that the host name did not have a DNS record.)

Taking a step away from deployment, the approach generalizes. To keep focus and to survive in a world of hundreds of requirements flying left and right, we tend to demand more and more evidence that something is worth implementing. Jukke has preached for years about premature optimization, and during the past 6 months it has finally started to stick with me too.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. -- DONALD E. KNUTH. Structured Programming with go to Statements, ACM Computing Surveys, Vol 6, No. 4, Dec. 1974 (see p.268),

Every now and then I ponder if we are doing premature optimization with the architecture flexibility. While seeking critique to Knuth's argument I stumbled onto this:

  • Some computer scientist by the name of Donald Knuth once said,

    • "Premature optimization is the root of all evil (or at least most of it) in programming."
    Bah! What did he know? Of course we all know what he meant, but when you take his statement at face value, the claim is a bit vague. What exactly is it that is being optimized?

    Well speed of course! At least that is the optimization that Knuth refers to and it is what developers typically mean when they use the term optimize. But there are many factors in software that can be optimized, not all of which are evil to optimize prematurely. The key positive optimization that comes to mind is optimizing developer productivity. I hardly see anything evil about optimizing productivity early in a project. It is most certainly a healthy thing to do, hence the misleading title of this post.

    --Premature Optimization Considered Healthy

I'll conclude that at minimum we are optimizing developer productivity, which will yield easier changes and scalability over time.

-- jani 2010-06-27 22:21:19

2010-05-17 21:55 Experiences from the Baltic Cybershield Exercise


Picture: A screenshot from Swedish mainstream
television. Mikael Wedin explains the live events of the
game using Clarified's topology view running on a
video wall.

We got the opportunity to participate in Baltic Cybershield Exercise 10-11.5.2010:

Tallinn – 3 May, 2010. An International Cyber Defence Exercise on 10-11 May the Baltic Cyber Shield will give its participants a practical hands-on experience in defending computer networks. The event is jointly organised by the Cooperative Cyber Defence Centre of Excellence and several Swedish governmental institutions.

Needless to say, it was a blast. Our main objective was to bring the event closer to the media, observers and visitors through visualizations. Our secondary objective was to facilitate communication between teams and observers, using CollabHosting services. The primary objective was a success, in the sense that the visualizations also showed up in the mainstream media, such as Swedish TV. The communication services worked fine as well. So, even though there is always room for improvement (if only there was unlimited time), we are happy. :)

In the middle of the fun, we had our share of sweating. We were overwhelmed by the number of users tapping into a single recording server. Luckily our partners at FOI were prepared, and they quickly added more recording resources as the game activity started to increase significantly. As every proper exercise has to include incidents, as dictated by Murphy's law, we got a nice crisis simulation of our own. For example, despite a clean shutdown, one maintenance task left the RAID array of one recording server's disks in an inconsistent state, triggering an immediate rebuild of the array and killing half of the I/O on that specific server. All this time our team was able to keep the services responsive for the different defending teams - the impact was mostly on the services for the green team, who were observing all the traffic. So a big hand for the people at FOI and our own team. All in all, with some intensive care, the visualizations served their purpose - providing a communication tool for the people observing the event.

We had 3 basic visualizations:

  • Earth View presenting real internet traffic, mainly showing connections from the participant countries to Swedish virtual gaming environment.
  • Fake Earth view, where different private address spaces of different blue teams were mapped to certain countries.
  • Topology view - which documented the networks of different blue teams (the defenders), red team (attackers) and green team (generic gaming infra)

I'm not sure I can mention specific names here, so I would like to express my respect to all the people who worked hard (long days and weekends) to make the exercise happen. You made an exercise which seems to be extremely rare in its level of practicality. Thanks also to Mika, Mikko and Marko for a good job under simulated chaos. :)

-- jani 2010-05-17 19:23:58

2010-04-05 19:15 Programming Problems in Disguise


Picture from the referred article

Jukke found a nice article about Programming Problems in Disguise. We faced this issue with AbuseHelper. While configuring AbuseHelper for production use for two CERT teams, the configuration was very much a programming problem. So we decided to scrap the .ini files and started using Python for configuration. First experiences: very neat from a programmer's perspective.

The first argument against .py vs .ini is often 'ease of use (of configuration)'. In my not-so-humble opinion, the claim that .inis and .cfgs are simpler than .py is not valid. What I do agree with, however, is that with .py it is easy to come up with a configuration file that a regular maintainer does not understand, now that we have power beyond a regular configuration file.

At the moment I believe that the question is more about the required flexibility than the syntax of the configuration file.

Configuration is a thing that would be nice to make simpler and simpler, up to the point that there is no configuration at all. :) (For example, you don't need to configure Clarified Analyzer at all.) With AbuseHelper, we will probably never get completely rid of configuration - it is a toolkit, after all. We have now reached the level of flexibility we thought was needed - it should accommodate the varying needs of different CERTs. The simplicity side is now handled with a confgen script, which gets users started without a need to understand Python. But if you want to unleash the full power of AbuseHelper, you should understand a little bit of our beloved snake.
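To illustrate why Python configuration appeals to us, here is a purely hypothetical sketch of what such a configuration might look like (the bot and room names are invented for illustration; this is not AbuseHelper's actual configuration schema). The point is that a loop can generate what would be many repetitive .ini sections:

```python
# Hypothetical Python-based bot configuration sketch. The bot and
# room names are illustrative only, not AbuseHelper's real format.

def bots():
    # One feed bot per source: a loop replaces repeated .ini sections.
    for source in ["shadowserver", "dshield", "phishtank"]:
        yield {"bot": source + "-feed", "room": "sources." + source}
    # A mailer bot delivering the combined feed to one customer.
    yield {"bot": "mailer", "room": "customers.example",
           "to": "abuse@example.com"}

config = list(bots())
```

Adding a tenth source is a one-word change here, whereas a plain configuration file would need a whole new copy-pasted section.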

I promise to write another blog entry when we discover how to make configuration both ultra easy and ultra flexible. :)

-- jani 2010-04-05 16:45:24

2010-02-07 16:50 Experiences from the First AbuseHelper Training


About a week ago, we were at FIRST Symposium, giving a hands-on class on AbuseHelper. AbuseHelper took its first steps towards the community.

The Class

We held two 4-hour hands-on classes on AbuseHelper. The class introduced the context of automated abuse handling and lessons learned from a total of 7 automation generations at CERT-EE and CERT-FI. It also briefly covered processes, challenges, workflows, architectures, terminology, and the context of abuse fighting. The audience received process building blocks and supporting software, and practiced hands-on with the AbuseHelper toolkit - modular, scalable and robust software designed to help organizations automate part of their Internet abuse handling process. The toolkit and documentation are available to the participants under a permissive open-source license. To demonstrate the extensibility of the toolkit, the authors wrote new proof-of-concept components during the class for sources suggested by the audience: Project Honeypot (implemented in 10 minutes) and MalwareURL (some time behind the curtains writing the parser, then 20 minutes on the projector explaining how to insert it into a code template).
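To give a feel for how small such a component can be, here is a toy parser of the kind one drops into a bot code template. The CSV field layout below is invented for illustration; it is not the actual Project Honeypot or MalwareURL feed format, nor AbuseHelper's bot API:

```python
import csv
import io

def parse_feed(text):
    # Turn hypothetical CSV feed rows (ip, type, time) into event
    # dicts that a bot could forward onward.
    reader = csv.DictReader(io.StringIO(text),
                            fieldnames=["ip", "type", "time"])
    return [{"ip": row["ip"], "type": row["type"], "observed": row["time"]}
            for row in reader]

events = parse_feed("192.0.2.1,bot,2010-01-30\n"
                    "192.0.2.2,scanner,2010-01-30\n")
```

Most of the work in a new component is exactly this kind of parsing; the surrounding template handles the plumbing.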

The (Not So) Hidden Agenda

The time and money we invested in this course were considerable, compared to the size of our company. Why did we provide free training for the already open-sourced AbuseHelper?

In the long run, we want to establish an AbuseHelper community. We had an opportunity to introduce AbuseHelper in practice to over 40 people from its target audience. The introduction also worked as a test - can we convey AbuseHelper's benefits to people who hear about it for the first time? The ideological reason is to pull the already existing communities together by taking the next step in fighting Internet abuse. The business reason is that we are creating a market in a field we love and think we have something to give to. If we succeed in establishing systematic workflows for abuse handling, we have an opportunity to sell

  • AbuseHelper tailoring services,
  • deployment assistance, and
  • AbuseHelper support for organizations that need it.

Work done on those fronts will grow AbuseHelper further, bringing benefits to the whole community and our customers. Everybody should win, so ideologically it's a no-brainer. The following years will show how it plays out commercially.

Too small a market, you say? Let's just say for now that AbuseHelper is a toolkit, and the same tech bends to a lot of other stuff too. :) Basically we are building our second-generation collaboration platform, partially killing two birds with one stone.

Postmortem Analysis on Planning the Class


Picture: Hacking at hotel room, writing examples for the next
day's class. Jussi has his famous poker face on.

Planning the training was a bit challenging as we didn't know our audience. We identified that there could be three AbuseHelper interest groups present:

  • Process - interested in the process side of automated abuse handling
  • AbuseHelper User - interested in automating Internet abuse handling
  • AbuseHelper Developer - interested in developing AbuseHelper further

Our method of survival was to prepare to serve all of those audiences. That meant some sleepless nights prior to the training, reviewing all the material we had documented over the past couple of months. The next step was to throw in a number of people on site, as we guessed there would actually be people from all the interest groups. So we sent Joachim, Sebastian and Jani to Hamburg.

The technical environment also had to be considered. We had several plans for surviving the challenges set by an unknown training infrastructure. We knew that there would be Internet connectivity and a local net inside the classroom, but we had no idea how reliable they would be. So we had the following plans:

  1. Reliable Internet connectivity - use Clarified's readily installed infrastructure for the exercises, and just keep local copies of big software components to make sure that people do not kill the Internet connectivity with large simultaneous downloads. Training documentation also lives in the AbuseHelper collab environment.

  2. Unreliable Internet connectivity - Jani takes a local copy of the AbuseHelper collab and provides the documentation and XMPP server for the class.

  3. No Internet connectivity - people work with their VMware machines.

What we didn't consider was:

  • D. Unreliable local network - the classroom network had some issues with the combination of local connectivity and our laptops.

People had a hard time connecting to our laptops in the classroom. Sebastian and Joachim had the same issue. In retrospect, I suspect a combination of running VMware Fusion and some weirdness in the local net. Unfortunately we did not have time to debug properly. The only thing we checked was that the OS on the target laptop never saw the packets sent by the source.


Overall, we got very positive feedback on the class. We scored quite nicely on topics such as 'did the class meet your expectations', 'would you attend a hands-on class again', and 'I would take another class from this instructor'. What was slightly below the other scores was the teaching skills of the instructor. I know at least a couple of things we could have done better to improve that score:

  • We should have reserved more time for personal assistance.
  • Anticipate that local connectivity might not be available either. That way we would not have been preoccupied during the session with figuring out what we could and could not do due to this technical limitation.
  • Structure some of the basics better. We explained some basic things on the fly. (However, I don't believe all content could or should be forced into some structure. Ad-hoc stuff allows you to concentrate on the items the audience is most interested in. Not to mention it is more fun. :)

-- jani 2010-02-07 15:02:10

2009-11-13 16:28 Abraham Takes Step Toward Integrating Penetration Test Tools


During the past few years, in our customer engagements, we have been using our Collab infrastructure to support synthesis of results and to create templates and workflows for systematic, practical security analysis. Our approach in short:

  1. Collect data (with Clarified Analyzer and different third party tools, such as Nmap, Nessus etc.)

  2. Parse the data using different OpenCollab scripts, splitting it into semantic pieces so that the data can be better mapped to the real environment (hosts, the client's network documentation, etc.)

  3. Upload data using OpenCollab SDK + XMLRPC interface to a centralized collaboration environment (see CollabHosting for more details).

  4. Analyze
    • Create semantic visualizations using graphingwiki

    • Create tabular presentations, which are tied to the customer's systems.
  5. Provide living reports for always up-to-date view to problems and their fixes.
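As a flavor of step 2, here is a minimal sketch of splitting Nmap's grepable output (-oG) into per-host pieces before upload. This is a toy, not one of the actual OpenCollab scripts, and it only extracts open port numbers:

```python
import re

def parse_grepable(text):
    # Map each host IP in Nmap -oG output to its list of open ports.
    hosts = {}
    for line in text.splitlines():
        m = re.match(r"Host: (\S+).*Ports: (.*)", line)
        if m:
            ip, ports = m.groups()
            hosts[ip] = [p.split("/")[0] for p in ports.split(", ")
                         if "/open/" in p]
    return hosts

sample = "Host: 10.0.0.1 ()\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
open_ports = parse_grepable(sample)
```

Once the data is in per-host dictionaries like this, attaching it to wiki pages per host over XML-RPC (step 3) becomes straightforward.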

I stumbled upon an article which describes how others are now also working on a solution for penetration testers that is a first step toward ultimately integrating and correlating data among different types of penetration-testing products.

The problem, Abraham says, is that pen testers using multiple pen-testing tools have to manually examine and correlate their findings, a laborious and error-prone process. "I run into this all the time," he says. "A lot of different types of tools run on different systems and usually aren't integrated...We're providing a way for the penetration tester to extract information from a lot of different tools to leverage when performing a pen test."

-- jani 2009-11-13 13:32:13

2009-11-09 14:23 Winter is here!


Christmas madness is coming to town, and we are going to finish our Summer Campaign and immediately start a new Winter Campaign. Go and evaluate the Clarified Analyzer! :)

Make analyzing your traffic fun and go to: Clarified Analyzer

Since this is my first blog entry, I think this is a good chance to introduce myself. My name is Mikko and I work at Clarified on all kinds of jobs related to the Clarified Analyzer. I am interested in everything related to network security, and I also love to go social. In my free time I play bass in a cover band, fly flight simulators and paint my friends with a paintball marker in the forest.

-- Mikko "turmio" Kenttälä 2009-11-09 14:24:10

2009-10-22 13:43 HAPPY CAPS LOCK DAY!


Picture from



-- jani 2009-10-22 10:47:34

2009-09-23 11:14 One Reason Why It Is Good To Audit Actual Traffic (Not Just Access Control Lists)


Link to the quoted Matasano article.

Once every year or so, big companies commission small companies like ours to do the “annual external pen-test”, in which testers try to break in through the perimeter firewall. Even though I don’t do a lot of network pen-testing, I’ve done a couple. And on all of them, some stale old Win2k host gets left exposed or some branch network has 445/tcp open, because there are 20,000+ lines of firewall rules and rules only get added, never removed.

This quote is from a Matasano article. The original article is on the 'Joel on Software' blog.

Auditing 20 000 lines of firewall rules is unrealistic, especially with modern features, which require digging deeper than just the actual firewall rule lines. In cases like this it is much more cost-effective to combine active scanning and passive traffic analysis to unravel critical errors and outdated rules.
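The core of that idea fits in a few lines: take the destination ports the ACL permits and subtract the ports actually seen in passive capture; what remains are candidates for review. This is a deliberately simplified toy (real rules match on far more than ports), but it shows why observed traffic beats rule reading:

```python
def stale_permits(allowed_ports, observed_flows):
    # Ports the firewall permits but which never appear in captured
    # traffic are prime suspects for stale rules - like that forgotten
    # open 445/tcp on a branch network.
    seen = {flow["dst_port"] for flow in observed_flows}
    return sorted(set(allowed_ports) - seen)

candidates = stale_permits(
    [22, 80, 443, 445],
    [{"dst_port": 22}, {"dst_port": 443}, {"dst_port": 22}],
)
```

With 20 000 rules the principle is the same; only the matching logic grows.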

-- jani 2009-09-23 08:17:09

2009-09-09 11:37 (UI) From Finland With Love :)


I had a very interesting opportunity, as I was invited to talk to 40 highly technical individuals, gathered from all over the world to Amsterdam. I have not given presentations to such a group for a while, so I was excited.

My typical concern is that the audience just sits silent. I love vivid discussions, to the point that I even like to be provoked. Having carefully dosed emotional content in the discussion keeps the audience awake. Striking a good balance between clear presentation and emotional content is something I haven't quite mastered yet. I think with this presentation I went just slightly over the top: I ended up ranting about libpcap bugs and about the times when it was extremely laborious to analyze multipoint captures taken from complex systems. Ranting too much takes time away from the actual point of the presentation. Fortunately the audience was smart, and they picked up most of the points, ending up explaining to each other in other words what I was talking about.

Something slightly unfortunate was that the color scheme in the projected image was completely fubarred. That affected the Analyzer demo a bit. For example, selections were bright yellow, burning the audience's eyes; different shades of gray were completely missing; the earth view was dark, etc. As a result I got feedback that the UI looked like it was made by a typical Finnish startup company. :D Typically we get exactly the opposite feedback among Finns - 'hey, great, for once a UI which does not look like it was made by an average Finnish company'. :)

Thanks for the hospitality and the opportunity. It was fun and even therapeutic! :)

-- jani 2009-09-09 09:25:02

2009-09-08 06:51 Screw you guys I'm going to America!


Tomorrow morning I will be flying out into the land of stars and stripes and individual freedom and Walmart. I'd like to thank everyone at Clarified Networks for a terrific time - you guys are the best! But the time has come to say goodbye, adieu, and arrivederci, so without further ado, in the words of the great and noble Eric Cartman: "Screw you guys, I'm going to America!"

-- slougi 2009-09-08 07:03:49

2009-08-14 14:07 When and Where the Spam Comes From

Ben from the Australian Honeynet Project has been analyzing spam and blogging about his findings. In a recent blog entry he describes how Logster helped him get the time perspective into his analysis. He was kind enough to share the video using Vimeo.

Ben's Notes.

  • The problem with his earlier heatmaps was that they combined months of data without any respect to time.
  • Blue Banana can be seen in spam logs too.

  • People don't turn their infected hosts off at night. SPAM, it seems, is 24x7.

My Comments

A really nice video, Ben! You are one of the first people I know of who actually used Logster for something other than Apache access logs. We've been waiting for that to happen. :)

We earlier pondered how to visualize the passing of time in 2-dimensional space (images). It was really hard to find anything intuitive which compresses 3 dimensions (x, y, time) into 2 dimensions (x, y). Then we realized: why not just add the time perspective... by using the time dimension (= video)! Logster was born.

-- jani 2009-08-14 11:55:23

2009-07-15 19:41 Clarified Now 20% More Social


As everybody knows, vacation is the only time you have enough time for work.

So, as a therapy project, I decided to code a couple of small macros for GraphingWiki. I tried to keep them as small and simple as possible - I don't want to open the Pandora's box of 'how many ways there are to shoot yourself in the foot with web apps'.

I proudly present: <<TwitterBadge(twitteruser)>> and <<SocialShareLinks>>.

TwitterBadge prints the latest tweets from a selected user. SocialShareLinks prints the usual share links, familiar from numerous sites in the Internet.

Why? Well, I needed the TwitterBadge for another therapy project. A festival, called Jyrkkä \o/ (the link points to a Finnish-only page, sorry). I found a nice tool for audioblogs, called Audioboo. Audioboo supports Twitter notifications, and Twitter supports Facebook notifications. So the only thing missing was a way to announce the Boos on the Jyrkkä wiki page. TwitterBadge did the job. And as I was getting overly confident with TwitterBadge, I decided to throw in the SocialShareLinks for the same price.

Now I have a nice toolchain to promote Jyrkkä festival, diagram follows. ;)


Now, how Social Web 2.0 marketing is that!

If my experiment with Jyrkkä produces good results, we will apply the same methods at Clarified. :)

Oh, and the macros in action:

Jani's Tweets:



-- jani 2009-07-15 17:00:54

2009-06-15 10:52 Logster vs. Wikipedia, Round 1


We wanted to try out how much data our Logster code can conveniently visualize. So we downloaded the whole English Wikipedia's revision history, available as easily parseable XML. Processing and sorting it by time was a breeze. Thanks, Wikipedia people.

The idea was to show the edits on a world map. The history dump contains IP addresses for each anonymous edit, and usernames for non-anonymous ones. There was no way for us, as outsiders, to get our hands on the locations of non-anonymous Wikipedia editors. Which is a good thing, really. But we could use the IP addresses to geolocate the anonymous edits. All in all, there were about 50 000 000 entries we could use.

To spice things up a bit we added German and Spanish Wikipedias to the mix, as they seem to be quite active. The resulting video shows how the English, German and Spanish edits are distributed geographically over time, indicated by color. If the area is colored red, it's dominated by English edits; blue, German; green, Spanish. The coloring is also proportional, so if some area is e.g. purple, it's roughly divided between English and German edits.
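The proportional coloring can be sketched as a simple blend (our description of the idea; the actual rendering code may differ): each area's English, German and Spanish edit counts map to the red, blue and green channels in proportion.

```python
def edit_color(en, de, es):
    # Proportionally blend edit counts into an RGB triple:
    # red = English, blue = German, green = Spanish.
    total = en + de + es
    if total == 0:
        return (0, 0, 0)  # no edits, leave the area dark
    return (255 * en // total,   # red channel
            255 * es // total,   # green channel
            255 * de // total)   # blue channel
```

An area with only English edits comes out pure red, and an even English/German split comes out purple, exactly as in the video.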

2 seconds in the video represent roughly one month of Wikipedia history. There you can pretty accurately pinpoint events, for example the moments the German and Spanish Wikipedias are born. Check it out at or here:

  • red = English
  • blue = German
  • green = Spanish
  • purple = red + blue = English + German

(Or download the higher quality version.)

The nice thing is that we now have the pruned and sorted data and the code for handling it. Should the need arise we can try all kinds of other visualizations based on this. Any good ideas? Contact us at for great justice!

-- jvi 2009-06-15 09:17:10

2009-04-29 09:42 Logster is out! I'm scared!


Finally. Logster is out for everyone to download and use. Logster is an easy tool for quickly visualizing where the traffic to your web server comes from. All you need is your access logs in the Common (or Combined) Log Format, which e.g. Apache produces by default. I've also heard that a certain someone created a quick script to convert their firewall logs into something vaguely resembling Common Log Format, and that it worked. Logster isn't picky.
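For reference, a Common Log Format line can be picked apart with a single regular expression. This is a sketch of the format itself, not Logster's actual parser:

```python
import re

# Common Log Format: host ident user [time] "request" status size
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326')
m = CLF.match(line)
```

The host field is what gets geolocated, and the timestamp is what drives the slider - so anything that can be massaged into this shape (firewall logs included) will do.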

We've also launched our web shop, where you can buy full licenses for Logster. With a license, Logster won't nag you every 60 seconds and interrupt your totally rad visualization X-perience.

And that's not all! In fact, Logster was available for everyone a week ago. We just haven't been very vocal about it. That's because we thought it would be interesting to actually see how the word gets out. Therefore we have devised The Great Logster Social Experiment. The idea is to phase our advertising efforts. Instead of going all-out straight away, we started out small by informing only our immediate social circle. Now we're at week 2, so it's the time to hit the blogosphere. Then, week after week we'll bring out bigger and bigger guns by blogging, contacting the media etc. Along the way we intend to visualize the web traffic for Logster with Logster and put the results up for everyone to see.

Jani made a visualization from pre-kickoff and kickoff, just to get a baseline:

So, download Logster, spread the word, and nobody gets hurt! ;)

WARNING: Technical rambling


Logster in itself is pretty neat, but the coolest feature (in my opinion) is easy to miss. At least it was the most fun thing to implement: random access to (seekable) gzipped files. To save disk space, Apache often compresses access logs with gzip, and this posed an interesting problem.

With Logster you can quite freely seek through your access logs (and time itself!) with a slider. This is easy with normal uncompressed access logs, as one can always start reading from the just-seeked offset and get out more or less coherent data. Not so with gzipped files. If you want to uncompress a piece of a gzipped file, the uncompressor often needs to know how the data preceding that point uncompresses. And to have that knowledge, it may need some knowledge of the data preceding even that. And so on. And the data may be bit-packed. My writing skills fail me, so just remember that to our eyes a gzipped file looks like an impenetrable jumble of bits, where to make anything out of a piece of data you must usually know everything that came before it.

The easiest thing for us, but not for the user, would have been to say "no bonus, Logster supports only uncompressed stuff". That would have been a drag. The next obvious option would have been to automatically uncompress the whole file into memory or a temporary directory. This would again make things easy and fast for us, as seeking through uncompressed data is a solved problem. But it could also potentially eat up a lot of memory or disk space. At least I usually tend to be a bit short on both. Python (yeah, we use Python) also has a library for gzipped data which emulates seeking within such data. But that is done by starting from the beginning of the data and uncompressing until the seek point has been reached. Slow.

It turns out that zlib, the library for handling gzipped data, has a mechanism for taking a snapshot of the state of the uncompressor. Hmm. Would it be feasible to just run through the data once, uncompressing as we go, and keep only one snapshot per uncompressed megabyte or so? Then when we want to seek to a certain place in the file, we can just check which is the closest snapshot preceding that point, push that snapshotted state back into the machinery, and uncompress until the desired data offset is reached. But that can't be fast, can it?

It turns out that if we cache the last accessed uncompressed megabyte, seeking and reading can indeed be quite fast - sometimes even faster than roaming around pure uncompressed data, as gzipped files are smaller and disk caches do their tricks. The initial step of going through and indexing the data is also bearable; often faster than uncompressing it to disk, in fact.

It took a day or two to dig up the necessary info and to implement a basic Python module that emulates random access files closely enough. Then it Just Worked(TM) with our existing log parsing code. It was a good moment. Coder's nirvana.

This does have some drawbacks, of course. Snapshotting consumes some memory, but not that much, and it can be controlled by decreasing the snapshotting frequency. And there's the wait one has to endure while the initial snapshots are created. But that's life.

-- jvi 2009-04-29 08:03:34

2009-04-28 23:09 This is the story of their access_log

It is an open secret that we have ties to the seedy underworld of people referred to as "geeks".

On March 25th last year, a group of these "geeks", known only as KKS, decided to direct their energies to late-night hacking instead of more manly sports like disco dancing. That night they were to peer into Sampo Bank's then newly revamped web bank system. The system ran on the Java virtual machine on the client computers. "Geeks" love all things virtual (and cyber), so this was a match made in heaven.

For a couple of hours KKS looked into what the Java packages had eaten and what they actually did on the client computer, and so on. They were in the same room - but it was a virtual cyber room called an IRC channel - so to boost their communication they decided to document their findings as they went. The place where they put their findings was their publicly accessible wiki at

What KKS hadn't taken into account was that bits are the fastest things alive. It didn't take many hours for the wiki link to start spreading, virtually (cyberifically?). For a couple of days their web server took in an impressive amount of traffic, which was dutifully chronicled in their access logs.

So, a KKS agent gave us those access logs and asked us to use our Logster tool to visualize the spreading of the link, starting from a short while before the incident and a short while after. Here it is. With disco music.

-- jvi 2009-04-28 21:46:05

2009-04-24 06:04 Sweeten the emotional impact with eye candy


Let's refresh our memory on Clarified's priorities in software development.

  1. It needs to look good - eye candy does not make you grow bigger. :)

  2. It should be usable - this is the typical 'ease of use' thing.
  3. It should have a purpose - the feature/view/tool should have a reason for its existence.

Yes, most readers would like this order of priorities reversed, at least at first thought. With the help of Stephen Anderson, I will now try to convince you otherwise.

There is a nice article about how eye candy also affects usability:

We want our products to have some emotional impact. There is no need for software to be dull. Let's take the iPhone, for example. (No, I'm not an Apple fanboy. OK, I am - but I still claim this example is objective.) I get kicks from using my phone even though I have had it for... at least over half a year. I pick up the phone, press the green phone icon, surf the contact list, make a call, and get a beautiful menu during the call for features such as 'add call'. Getting kicks out of making a call? I must have gone insane. Emotional impact. Our products should have that (too).

Stephen Anderson writes: By making intentional, conscious decisions about the personality of your product, you can shape positive or negative responses...

  • People identify with (or avoid) certain personalities.
  • Trust is related to personality.
  • Perception and expectations are linked with personality.
  • Consumers “choose” products that are an extension of themselves.
  • We treat sufficiently advanced technology as though it were human.

Let the software entertain us. Not the other way around. (Greets to my friends in the gaming industry. ;)

-- jani 2009-04-24 06:24:59

2009-04-15 12:22 1, 2, 3, Clean!


We decided to combine some of our best products into a single affordable package, available for a limited time only. The package contains two days of collaborative traffic audit, a license to use our Analyzer software for one year, and a hosted secure collaboration environment for one year. As a bonus, during the traffic audit you will learn how to get the most out of our tools and collaboration environment.

Read more on the campaign page (sorry, only in Finnish).

-- cooz 2009-04-15 12:48:15

2009-04-15 09:48 Tracking the State-of-the-art


We have two versions in our ponderings on why we track our areas of interest publicly. Both are true.

The official version

As we plan, deliver and integrate our task-specific and collaboration-oriented services, we must follow the state of the art. We used to track these different concepts, products, services and other pointers in private, but often ended up proxying this information to our clients and partners. Now, by sharing this information here, we hopefully bring some added value to you as well.

The unofficial version

Aren't we linking to our competitors? Mostly not, as we focus on providing the best possible combination of tools and services. And even for the small parts that we could see as competition, we don't care. :) We see more benefit in boosting our Google ranking and having a centralized place for the state of the art in our areas of interest than in being scared that some customer might learn about different solutions.

Maybe this attitude comes from the era when the security-through-obscurity slogan emerged. We don't believe obfuscating the market does any good. We should be able to stand out from the market anyway. For a healthy business, we should be able to justify our existence with more sensible mechanisms. It should be simple: bring more value to the customer than your cost.

2009-04-02 17:34 Open Cloud Manifesto

We decided to sign the Open Cloud Manifesto today. Snippets below:

1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.

2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.

4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.

5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.

6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.

-- jani 2009-04-02 17:36:31

2009-04-01 13:00 April Fools


We got an interesting request today to our sales-list. :)

Dear Sir / Madam,

I am a representative of a academic group in Lagos, Africa . We spesialise at computer security and related phenomenon. I have seen your visualisation of Dan Kaminski, and I am now suggesting a business opportunity with you. We have seen your Loggster visualising software system package, and we would like to boy many lisenses for it. Unfortunately our institution is a very poor one therefore we can not pay yet (you can locate our headquarters in Google Maps,+africa&um=1&ie=UTF-8&split=0&ei=BFjTSfaLK46N_Qa137HkBQ&sa=X&oi=geocode_result&resnum=1&ct=image but we will pay you doubly if you can give us 1000 lisenses for tree months. Both the software system and it's developer seems to be so sex we anticipate we can sell all these units in less than that time.

Yours faithfully,
Aria Pill
The University of Lagos Correspondence and Open Studies Unit

This e-mail (and any attachment/s) contains confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

-- jani 2009-04-01 13:01:07

2009-02-25 14:39 They Accidentally the Whole License


You know that you have coded a winning license nagger when it pisses even yourself off.

-- ?jvi 2009-02-25 12:42:05

2009-02-20 18:41 Monty Python style consulting

My friend Jussi pointed out this nice Monty Python video. I must have been working too much lately, as even that reminded me of the infosec consulting scene. :) Enjoy, and see if you get the same effect. :)

-- jani 2009-02-20 20:15:52

2009-02-11 20:47 Intelligence Cycle Meets Semantic Web


Click to enlarge the picture.

Oh boy, I stumbled into a great paper about the semantic web. I'll do another of my cheap blog tricks and just copy-paste some snippets. Recommended reading for anyone analysing complex networks (whether they are technical or social):

With the rise of the netwar paradigm new tools are needed to support intelligence collection and analysis. The Semantic Web uses information online in which data is defined in machine-readable terms, allows for the creation of flexible, adaptable knowledge bases that can be used collaboratively. This paper discusses how the Semantic Web facilitates research on terrorist organizations, particularly how a variety of useful features – such as network visualization and data attribution – can be used.

This paper has focused on how the Semantic Web can be applied to researching terrorism, but its functions could be adapted for analysts examining innumerable other issues both in the public and private sector. The Semantic Web will allow many intelligence community stakeholders to examine and sort the same data – using a variety of computing tools. This both facilitates human intuition and the exchange of ideas, which is at the core of successful intelligence, while bringing into play the data processing strengths of the computer. These tools can help the analysts structure the data, a crucial tool in the current information saturated environment. Bringing numerous different perspectives and tools to bear on a problem is essential in the complex netwars that challenge modern intelligence agencies.

-- jani 2009-02-11 20:47:52

Hey nice! Very much along the lines we have been thinking when working with the intelligence cycle in more everyday situations such as security audits or just good old plain investigation-of-everything. Furthermore Jukke pointed out the term Entity Attribute Value model that pretty much describes our multi-valued, schema-free object database (read: RDF-triple kind of) approach for analysis and synthesis.
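To make the comparison concrete, the triple-style approach Jukke pointed at can be sketched as a tiny in-memory store; this is a toy illustration with hypothetical names, not our actual database:

```python
class TripleStore:
    """Schema-free store of (entity, attribute, value) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, entity, attribute, value):
        # Multi-valued by construction: one entity may carry any number
        # of attributes, each with any number of values.
        self.triples.add((entity, attribute, value))

    def query(self, entity=None, attribute=None, value=None):
        """Return matching triples; None works as a wildcard."""
        return [t for t in self.triples
                if (entity is None or t[0] == entity)
                and (attribute is None or t[1] == attribute)
                and (value is None or t[2] == value)]


store = TripleStore()
store.add("10.0.0.1", "country", "FI")
store.add("10.0.0.1", "asn", "AS1234")
store.add("10.0.0.2", "country", "SE")
```

Analysis and synthesis then reduce to pattern queries: `store.query(attribute="country")` lists every attributed host without any schema having been declared up front.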

-- fenris 2009-02-11 20:57:03

2009-02-04 09-21 Visualizing for computer or for humans?


I recently met a prospect with whom we discussed visualizations. Somehow I was unable to explain well enough what we did. After the meeting it occurred to me: while the client was talking about people drawing for computers, I was talking about computers drawing for people.

'Why would people draw for computers', one might ask. Well, there are some scenarios, for example an analyst might want to have an intuitive interface for describing relationships between objects in a database. I think both approaches have their place. If I only had realized this terminology collision in the meeting, we would have had a much better basis for the discussion. Why wouldn't you have both?

Another thing was that I noted some disbelief when I said: "If simplicity is a key factor, why don't we just create a form and throw in 20 lines of Python to pass it forward". Obviously my ego is not developed enough, as I was motivated enough to dig up an example. Thanks to Erno Kuusela for doing the actual digging of the code I was thinking about when making my claims. 6

BTW, I found the discussions very refreshing. I love it when someone starts challenging our ideas. It gives an opportunity to rant about what we (=I) think is wrong with the industry. On the other hand it is counter-productive from the sales point of view. But hey, we need the fun every once in a while? :)

-- jani 2009-02-04 09:22:06

2009-02-02 11-12 I Can Has Testimonial?


Our Summer Campaign came and went, and now, 6 months later, it's the perfect time to blog about it. A respectable number of people participated, and it turns out that the most active campaign cookie user was a fella named Orac. They were gracious enough to give us the following nice testimonial:

  • "The analyzer has been used in helping us analyze attempted SQL injection and Remote File Inclusion attacks upon our servers. The tool has also been used to good effect whilst penetrating botnet command and control sites.

    It allowed us to automatically trace the links involved in SQL injection attempts, work that was previously done manually, thereby saving time and allowing us to start getting the numerous sites involved closed down in a more timely manner."

Thanks, Orac, and thanks to everybody who participated!

-- jvi 2009-02-02 11:27:52

In my humble opinion we should have waited 12 months before blogging about it.

-- jani 2009-02-02 11:43:35

2009-02-02 10-57 Jani giving a presentation at Finlandia-talo, Helsinki


There is an interesting event coming up on 2009-02-26 in Helsinki ( ...

  • Mikko Hyppönen - F-Secure
  • Jacques Erasmus - Prevx
  • Rasa Siegberg - SafeNet Inc

  • Sami Petäjäsoja - Codenomicon
  • Jari Kenttälä - Clarified Networks

The regular price for participation is 690 EUR + VAT, but by participating with us you will get a considerable discount. Ask Jani for further details.

-- jani 2009-02-02 10:57:18

2009-01-02 14-16 Hans Rosling: New insights on poverty and life around the world


An excellent example of how visualizations help presentations and convey the message. Hans also has great presentation skills, so this video is enjoyable to watch.

-- jani 2009-01-02 14:21:15

2009-01-01 18-15 Flashy Botnet is Flashy

Update (2009-05-06): We have released the tool, Logster, we used for creating this visualization! Check it out at

Update (2009-01-09): Updated the downloadable higher resolution (640x370) video, it should now work better with several video players. Get the new version here. Also added an even higher resolution (1280x720) version, available here.

2008-12-17 07-29 Collaborative collaboration


We ended a biggish project last Friday. For some reason I felt especially good about it. I haven't yet been able to deduce the main reason for that. But as pondering this publicly is also good marketing, I'll blog about my thoughts. :) I will kill two birds with one stone.

Collab statistics

We were coordinating activities in 3 target countries. We had a project team which consisted of collaborators from 5 different countries and 6 different companies. We had 6 different audit tasks that produced results in real-time, allowing co-ordinators and the client to actively participate during the whole project.

With this kind of crew and number of tasks, a project could easily fall into the trap where everyone is fiddling in their own corner, producing mediocre results. Then at the end everyone would deliver their results, and the customer would start pinpointing the inaccuracies, perhaps commenting that the results were not what they were looking for. As delivering the results would be left to the last minute, the project would suffer further delays while we sorted them out together.

Yes, there is the iterative model with milestones/checkpoints and so forth. However, I see a significant benefit when this is done almost in real-time. Everyone has the details fresh in their memory, and adjustments to plans can be made before the project crew takes those expensive steps in the wrong direction.

Now the disclaimers

I'm not claiming that everything went perfectly. We still had to work in some areas to get everyone on the same page and to provide real-time situational awareness. But if I compare this project to the expectations I would have for a project without the Collab ideology, I would say it would have been close to impossible to manage/coordinate/facilitate this kind of project crew. Thanks to this approach, we can actually bring the best brains (as my friend named it) to the project. We are freer from the constraints that geolocation and organizational borders typically create.

We are the collab in collaboration

Hmm.. now that I reflect these ponderings against the reasons we had to decline a couple of projects in the past few days, I notice a common denominator. The main reason for declining a couple of collaborative projects was that we have seen, historically, that such projects typically end up in the situation I just described: everybody does their part independently and in the end the results are cut&pasted into the final report. In these cases we wouldn't have been in a position to insist on real collaboration.

Thus I would like to end this blog with a corny slogan: we are the collab in collaboration. (Remember Sun's old: we are the "dot" in ".com".) The term collaboration is overheating, so this kind of satire would have its place. Sadly it sounds a bit too corny and arrogant to be used as a standalone slogan, so I'll probably let it live only in this blog, which puts it in context.

Oh, and we got a nice CollabStats macro to provide real-time statistics about the usage of our Collab environment. Here it goes:

Currently hosting 212 collaboration instances for 1609 collaborators from 380 organisations.

-- jani 2008-12-17 09:31:43

And who is the oration in collaboration? Ke ke ke.

-- jviide 2008-12-17 12:34:42

2008-10-20 16-32 T2 - we'll be back!


We had an opportunity to participate in the T2 conference this year. Clarified has a special relationship with T2. Our early prototype from our research days was presented at T2 2005. A not-so-pretty hack, usable only by nerdish research scientists, is nowadays an extremely usable and beautiful piece of software. That makes me wonder: will the technologies, workflows and ideologies presented this year be in a similar state in 2011? ;) Our presentation was titled: Iceberg Incorporated - A Peek Under the Surface of the Criminal Enterprise.


T2 is not a marketing event. I've often found myself frustrated (=mad) at many marketing events disguised as seminars. Thus I was extremely cautious not to put forward anything from our business-as-usual category which might be interpreted as marketing. Still, being proud of what we do, I wanted to include the technologies and our daily propaganda in our presentation. What a puzzle.

Then it hit me: let's talk about collaboration! And to demonstrate collaboration, let's bring in the collaborators of the year: Jussi from CERT-FI and Lari from Codenomicon. That sounded fun, and fun it was!


The presentation was experimental in many ways. First of all, we introduced new technologies (although we didn't admit that our presentation was about technology ;). We introduced agent technology. Agents download your tasks from the collaboration environment, run the task in whatever part of the world they happen to be, and return the result back to the Collab environment. We also pushed some of the collaboration features in Clarified Analyzer further. You can now attribute IP addresses in Collab with search results and passive DNS information. Last but not least, we introduced a concept of 3 different speakers. We had a similar objective, similar working habits and slightly different agendas. Lari wanted to talk about icebergs, Jussi about unsystematic data collection&analysis, and me about collaboration. I'm eagerly awaiting the feedback to hear if we managed to convey our message. :)


Picture: When you label an IP in Analyzer, a new page is born in the Collab environment. Our agents notice it and deliver attribution, such as country code, AS number and whois output. Then you can upload your Analyzer packet content search results to the same page (http-get (dst)) and so forth.


The only thing I regret is that I set a bit too ambitious a goal for my part of the presentation. I wanted to make the traffic audit demo as understandable as possible. Thus I ended up fine-tuning things to the last minute and could not personally get the most out of the other interesting presentations. A feeling of impoliteness shadowed my good time during the presentations, as I mindlessly stared at my laptop, only randomly snapping out of the fiddling. The upside is that during the breaks I got to talk to a lot of interesting people. I'm still halfway through following up those conversations. ;)

If you enjoy an intimate conference with lots of good security-minded people to network with, T2 is for you! Thank you, T2 team! You put your heart into the event. You also take good care of the presenters - although I must regretfully say that I didn't have the naked people in my hotel room that Ivan had. :)

-- jani 2008-10-20 19:27:25

2008-09-29 08-43 We are moving to new premises


We are moving. Hopefully someone will blog the sad story of why we had to leave our lovely city apartment office. I just wanted to inform you that as we are moving, our second-priority infra (namely the virtual laboratory and demo recorders) is out of reach during Monday. Naturally the secure hosted customer infra is up and running normally. Restoring our second-priority infra is our first priority, so you should be able to use the demo environment soon.

-- jani 2008-09-29 08:51:58

2008-09-25 19-46 Simplify IT


Writing blogs is getting easier. Nowadays I can just copy&paste material from other articles. It's getting hard to be rebellious. :Z

Security professionals also need to keep security and IT simple. According to Oulette, too many organisations overcomplicate things.

“What we have found is that organisations that spend more than seven percent of the IT budget on security are actually less secure because they use reactionary approaches. They end up with point solutions where there’s no overarching theme and no integration.

Security professionals need to qualify threats that are reasonably anticipated, and dispel those which are pure myths, misconceptions, or based on paranoia of the unknown.

Original Article

-- jani 2008-09-25 19:50:52

2008-09-21 12-56 Internet Superhero Society


We have been ranting for 1-2 years about how much the Internet community would benefit if all the good work done by different Internet Superhero groups could be shared in a systematic manner. We know a lot of smart superheroes who are developing their own analysis platforms for analysing Internet traffic, related for example to malware. Sometimes I get the feeling that people are so busy developing these platforms that no-one has time to think about how to utilize the end result. The data/information should be easily available to other Superheroes.

I spotted a blog posting today which was one of the good signs I've been seeing lately. The malware analysis world is moving towards more systematic collaboration:

Collaborative Analysis Efforts with Simple to use Interfaces

-- jani 2008-09-21 13:15:27

2008-08-26 16:15 Evaluate IT


Good news everybody, we are celebrating the end of summer (okay, we are not exactly celebrating) and we would like to invite you to participate in a fun summer campaign!

But here it is: FREE COOKIES FOR EVERYBODY! Not just that, but you will also get access to new experimental features. So that is a good amount of free cookies! No strings attached. Even more: we are rewarding the most active evaluation user (give us feedback, success stories, epic failures, feature requests) with an iPod Shuffle and extremely nice Clarified Networks ecological yoga pants made out of hemp and recycled material. So you can look and feel cool while analysing your network.

Get Clarified Analyzer

  • Read about Clarified Analyzer's wonderful features.

-- charli 2008-08-26 13:15:30

2008-08-07 19-59 Marketing with 0 EUR

We are income-based, with no huge marketing budget. We have decided to do useful things for problems that are hot topics at a given time. The first execution has been a success so far.

Screenshot from YouTube statistics:


And full text:

#1 - Most Viewed (Today) - Science & Technology
#1 - Most Viewed (Today) - Science & Technology - Germany
#2 - Most Viewed (Today) - Science & Technology - Spain
#3 - Most Viewed (Today) - Science & Technology - New Zealand
#3 - Top Favorites (Today) - Science & Technology
#4 - Most Viewed (This Week) - Science & Technology - Germany
#4 - Most Viewed (Today) - Science & Technology - Netherlands
#4 - Most Viewed (Today) - Science & Technology - Poland
#6 - Most Viewed (Today) - Science & Technology - Australia
#6 - Most Viewed (Today) - Science & Technology - Canada
#7 - Most Viewed (Today) - Science & Technology - France
#7 - Most Viewed (Today) - Science & Technology - Italy
#10 - Most Viewed (Today) - Science & Technology - Ireland
#11 - Most Viewed (Today) - Science & Technology - India
#13 - Most Viewed (Today) - Science & Technology - United Kingdom
#19 - Most Viewed (Today) - Science & Technology - Mexico
#28 - Most Viewed (Today) - Science & Technology - South Korea
#34 - Most Viewed (Today) - Science & Technology - Brazil
#35 - Most Viewed (Today) - Science & Technology - Russia
#37 - Most Viewed (This Week) - Science & Technology - Netherlands
#37 - Top Rated (Today) - Science & Technology
#40 - Most Viewed (Today) - Germany
#43 - Most Viewed (This Month) - Science & Technology
#44 - Most Viewed (This Week) - Science & Technology - New Zealand
#45 - Top Favorites (This Week) - Science & Technology
#62 - Most Viewed (Today) - Science & Technology - Hong Kong
#64 - Most Viewed (This Week) - Science & Technology - Australia
#65 - Most Viewed (This Week) - Science & Technology
#71 - Most Viewed (Today) - Science & Technology - Japan
#73 - Most Discussed (Today) - Science & Technology
#88 - Most Viewed (This Week) - Science & Technology - Canada
#89 - Most Viewed (This Week) - Science & Technology - France
#89 - Most Viewed (Today) - Science & Technology - Taiwan
#95 - Most Viewed (This Week) - Science & Technology - Ireland
#98 - Most Viewed (This Week) - Science & Technology - Spain

Sites linking to this video (5)


-- jani 2008-08-07 20:01:30

2008-08-04 10-32 Kaminsky DNS View & Black Hat Campaign Special


Greetings Black Hat / DEFCON visitors. I'm sorry to say we couldn't make it this year. But don't worry, we have not forgotten you! Actually, we have cooked up something special to offer for all you people with black hats, and you don't even have to be in Las Vegas to participate!

Jukke came up with a brilliant new view for Clarified Analyzer called the DNS Randomness View 7. It helps address the DNS vulnerabilities found and illustrated by Dan Kaminsky (be sure to catch his presentation at Black Hat). This issue has gathered a lot of press and is now widely referred to as the Kaminsky DNS flaw / bug / vulnerability / cock-up. So, we decided to name the view after Mr. Kaminsky (with his kind permission).

The Kaminsky DNS View monitors network traffic (either from a pcap file, or traffic captured by probes) and deduces the port and transaction ID deviations from the DNS flows. With this information it evaluates the IP addresses like this:

  • Get all the DNS packets for host X and sort them by time. Check ports and transaction IDs
  • Postprocess the port list: count the differences between consecutive port numbers (first query port 1000, second query port 1005, so the difference is 5)
  • Calculate the standard deviation of the differences. Note: population standard deviation, not sample standard deviation. We would rather underestimate the deviation, although statisticians will now probably explode from sheer horror

  • Rank the hosts according to their deviations as described in
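The deviation steps above can be sketched roughly as follows; this is a back-of-the-envelope illustration, not the actual view code, and the function name is made up:

```python
import math

def port_randomness_score(ports):
    """Population standard deviation of consecutive port differences.

    A score near zero suggests predictable, sequential port allocation
    (first query port 1000, second 1005: difference 5), i.e. a resolver
    that is easier to spoof.
    """
    if len(ports) < 2:
        return 0.0
    # Differences between consecutive port numbers, in time order.
    diffs = [b - a for a, b in zip(ports, ports[1:])]
    mean = sum(diffs) / len(diffs)
    # Population (not sample) standard deviation: we would rather
    # underestimate the deviation than overestimate it.
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))


# A strictly sequential resolver scores 0.0; randomized ports score high.
sequential = port_randomness_score([1000, 1005, 1010, 1015])
randomized = port_randomness_score([1000, 5000, 1200])
```

Hosts would then be ranked by this score, and the same calculation applies to the transaction IDs.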

-- jani 2008-08-04 11:26:57

2008-07-31 12-35 Should have used my own script


Okay, this probably won't increase our share price or make us look like those really cool people (that we actually are), but here's a story from the real world. Jani and I were talking about our Summer Campaign, and it seemed that Apple's didn't send the email to all the intended recipients. So, we decided to complete the mission with a quick script. But which one of us could do it quicker? I started modifying a Python script, and Jani decided it would be a better idea to make a simple shell script. A few seconds (oh well, okay, it was minutes) later Jani won with this:

myerr() {
 echo $* 1>&2
 exit 1
}

[ $# -eq 2 ] || myerr "Usage: $0 <emailaddress-file> <content>"

while read emailaddress; do
 (echo To: $emailaddress; cat $2) | sendmail $emailaddress
done < $1

I came a good second with this one:

import smtplib

me = ""

text = open("test-what").read()
addresses = open("test-who").readlines()

for line in addresses:
    # readlines() keeps the trailing newline; strip it so the headers stay valid
    line = line.strip()

    msg = ""
    msg += 'Content-Type: text/plain; charset="iso-8859-1"\n'
    msg += 'MIME-Version: 1.0\n'
    msg += 'Content-Transfer-Encoding: 8bit\n'
    msg += 'Subject: Summer Campaign at Clarified Networks\n'
    msg += 'From: %s\n' % me
    msg += 'To: %s\n' % line
    msg += 'Cc: %s\n' % me
    msg += 'Reply-To: %s\n' % me
    msg += "\n" + text
    print "sending mail to " + line

    # smtplib.SMTP() without a host never connects; talk to the local MTA
    s = smtplib.SMTP("localhost")
    s.sendmail(me, line, msg)
    s.quit()

So, I sent the group email (also known as spam) with Jani's script, but I forgot to edit the default Subject: line which was hardcoded in my own script. So now a good number of people will receive email from Clarified Networks titled "final test". How professional is that! Well, maybe it's the final test before our big release of Clarified Analyzer (artist previously known as Tia) in two months.

Anyway... The campaign is really cool! Please send us email to if you want to be a part of it. Let's just say that the winner will get utterly cool prizes that will make you look and feel good while analysing your network.

Disclaimer: We are not part of the coder team, they actually know how to do this stuff.

Have fun,

-- jyrki 2008-07-31 12:56:11

No no no, it was actually the 'final test' before sending the actual spam. :) We take everything seriously. We don't even send spam before testing it thoroughly. We will soon send the actual mail. We wanted to test our script with as realistic material as possible, so our actual mail will just have a different subject.

-- jani 2008-07-31 13:08:51

This is a perfect example of what happens when a corporation does not concentrate on its core competence. This service could easily have been outsourced to Russia.

-- jani 2008-07-31 13:20:35

2008-07-31 10-59 Collaborative malware analysis

I've had this video footage lying around on my non-backed-up hard drive for a while now. I just wanted to get it out of my system, so I did a quick edit. A better version with more easily digestible content will follow later on. So go ahead and feed your imagination:

-- jani 2008-07-31 11:25:49

2008-07-02 10-40 All about our 'Strategies'


In close reference to what Jani was speaking of two blogs ago, we now have a devotee to the Sales and Marketing of Clarified Networks (that would be me). To introduce myself: I am Charli, and I have been residing comfortably under the Clarified Networks roof for a couple of months now. Throughout this time a picture has formed in my head which I would like to put forward to you all, including a brief background of the company, meanwhile explaining this little attractive picture you see next to my blog. This picture is how we now categorise and describe our newly productised solutions (most commonly known as our 'Strategies') for selling not only in greater Europe but globally:

Clarified Networks has existed commercially for the past two years, and for four years before that was in research and development funded by TeliaSonera and Nokia, through the world-renowned Oulu University Secure Programming Group (OUSPG), which has also launched the company Codenomicon (with whom we have been partnering on joint offerings that combine our Network Analysis and their Robustness Testing).

Now for an overview of Clarified Networks solutions more specifically. Our tools and services are used for Collaborative Network Analysis: simply, we allow you to see the actual traffic flows in your networks and read the packet flows, which gives you the ability to pinpoint exact issues in troubleshooting and traffic auditing.

Our product offerings are structured around the 3 'Strategies' we have implemented:

Strategy A - Standalone

This is our Clarified Networks Analyzer tool, which we researched and developed for 4 years, and it can now be bought as a standalone tool. It can be downloaded to a laptop, and essentially it is the window into the network: it shows all of the flows and has extremely attractive visualisations which make problem solving and analysis a lot easier and more time-efficient. In many cases it has cut half an hour a day out of analysis work due to its simple manner.

Strategy B - Infrastructure

This is our Clarified Networks Recorder; we like to think of it as a VCR for your network. This is a box that runs 24/7 and records all of the traffic for bigger networks with multiple servers and locations, so that you are constantly recording and keeping track of what is happening and who is talking to whom. It is great in bigger enterprises and operators, as there can be times when issues aren't noticed until after a compromise has occurred; by recording the traffic you can go back in time and pinpoint the problem straight away, rather than spending countless hours or even weeks trying to reproduce it.

Strategy C - Services

These are turnkey solutions that include our Tools, Services and Managed Service Infrastructure. These tend to be more customised packages for the more demanding enterprises and operators; when we deliver one, we also provide training to give the end customer the knowledge to take care of their systems with our tools. The main focus of the Collab packages is on Traffic Audits and Troubleshooting, but we can customise packages for larger, more demanding organisations.

I hope this clarifies our current offerings!

-- charli 2008-07-02 10:43:17

2008-07-02 08-58 Dave on complexity

My thought is this, to avoid getting into the specifics that annoy everyone: People tend to think they can "manage" their networks or their application security, but their management skills are scaling linearly but the problem is scaling exponentially. They can only throw money at it for a while. When people talk about a "self-healing network" what they mean is "we can't afford to manage exponentially growing problems - those problems have to manage themselves".

Who is Dave?

-- jani 2008-07-02 09:01:47

2008-07-02 06-44 Focus


We are an income-based company. It has its pros and cons. A pro, definitely, in the sense that we can challenge the traditional way corporates work. We can continue to be rebel... revolutional. For example, being completely open about what we do, how and when. Including the screwups, such as in one of my previous blogs. Promoting real and open collaboration between the client, us, and non-commercial or commercial third parties.

The con is that we are constantly working shorthanded. My personal pain is that we already know how the next-generation analyzer would work. (No, Jukke, there are no specifications.) We also know a little below a gazillion ways to improve the current UI even more. We know a number of features and new views that could be implemented without too much trouble. Actually, we already have ten views which we call experimental merely because they lack some basic features, such as 'Export to Wireshark', or there has been no time to test them properly yet.

This begs the question: why? We have something that our clients say is the best tool out there in its field (troubleshooting and traffic audits). Still we have to work around the clock at times to accomplish our goals.

Our business-oriented readers already know the reason: we don't put enough effort into the commercial side. We have been ignoring basic things in sales and marketing. In a bizarre way I can be proud of this: it is an accomplishment in itself that we have survived with this effort. Our ratio of successful deals has been very good. Anyway, we are not able to grow with this 'strategy'.

Well, that is going to change now. During the spring we have sharpened our focus and recruited our first Sales/Marketing person, Charli, who has done an excellent job with the material she was given. Now we are working on implementing our sharpened focus to make us scale to worldwide deliveries. Most of the technical components are in place already. Some announcements will be made in September, and so forth. 2008 will be the race to scalability. I'm looking forward to 2009.

As part of our strategy planning, we did a small calculation. While being much more, we are also a good supplement to Wireshark. We did one conservative calculation of what it would mean if 1% of Wireshark downloads resulted in a customer relationship with the downloader. You can find it on this page.

-- jani 2008-07-02 08:26:29

P.S. I promised earlier to reveal the reason behind why we are so open about ourselves. This entry gave a hint in the beginning. To summarize: because we can.

2008-06-25 20-43 Curiosity kept the cat alive


Hello from Starbucks on Oxford Street. Some time has passed since my previous entry. I've been running around for some weeks now, as we had a sort of rush hour on audits. After a couple of weeks doing strictly consulting, we headed off to London and switched to sales/marketing mode.

On these trips you don't immediately know how well they've gone sales-wise; that can be assessed at the point the fat lady sings. For me, the most interesting thing learned on this trip has been more on a personal level. We've met some pretty high-level people; some of them typically start talking deals the size of our expected turnover for the year 2009. Then there have been some people in smaller positions. One of the senior guys from a veeeery large organisation was extremely helpful: he had not eaten all day and still found time to meet us, after office hours at his company. He was curious to hear what we do. He also genuinely wanted to help us: he drew an organisation chart of his company and discussed with us where we would fit in.

On the other hand, we met a couple of people who did not have any curiosity. They formed an opinion in the first 3 minutes and stuck with it. I started wondering. It is great if we can establish quickly whether we have something to offer a customer or not. (Ok, now I revealed more of our weaknesses: our stuff is not perfect for everybody everywhere.) Why was it, then, that I felt a little bit sorry at first?

Then it hit me: some of the people we met were not curious. They operate based on the assumption that they know everything already. One can notice this attitude by observing whether the listener is willing to understand, or would rather hear himself talk. Of course this goes two ways: both the salesperson and the customer need curiosity and the ability to listen to each other for a successful collaboration. (I tend to get carried away when explaining our technology etc.)

I found out that in the end I didn't feel bad. One thing that helps in these kinds of situations is that you have to know your focus and you need to know who your customers are. That builds your confidence, and you are able to make a more balanced assessment of, for example, whether:

  1. You could help your potential customer but he does not want to listen
  2. You could help your potential customer and you should just learn more about explaining how
  3. You cannot help your potential customer, and you should not waste your time or your not-to-be customer's time.

I am finally beginning to realize why all of our helpers have preached about the importance of knowing your focus and your ideal customer. This of course is just one of many reasons.

Ok, this was my 2 cents on personal development. :) From the company perspective, we met great people who are willing to help us set foot in the UK and also some prospects who are interested in using our technology. Thank you! We'll be seeing you soon!

-- jani 2008-06-25 21:13:50

I would like to point out that, contrary to what Jani said, our software is perfect for everyone everywhere. Furthermore, we never make misteaks.

-- jviide 2008-06-26 18:20:50

2008-06-06 23:58 Virtual Reflections in the Real World


The Internet truly brings the world together.

Perhaps some explanation is in order. Using GeoIP we can check where connections originate and plot them on a world map. Bigger highlights mean more traffic.
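The idea above — geolocate each connection's source address and scale the map highlight by traffic volume — can be sketched roughly like this. Note that the `GEO` table and the `highlight_sizes` helper are made-up stand-ins for a real GeoIP database lookup, not Clarified's actual code:

```python
from collections import Counter

# Hypothetical lookup table standing in for a real GeoIP database;
# in practice one would query a MaxMind-style database instead.
GEO = {
    "192.0.2.10": (60.17, 24.94),    # Helsinki
    "198.51.100.7": (51.51, -0.13),  # London
}

def highlight_sizes(connections, lookup=GEO.get):
    """Aggregate connection counts per (lat, lon) so that
    busier origins get bigger highlights on the map."""
    counts = Counter()
    for src_ip in connections:
        coords = lookup(src_ip)
        if coords is not None:  # skip addresses we cannot locate
            counts[coords] += 1
    return counts

sizes = highlight_sizes(["192.0.2.10", "192.0.2.10", "198.51.100.7"])
print(sizes[(60.17, 24.94)])  # 2 connections plotted at Helsinki
```

The per-coordinate counts would then drive the highlight radius when drawing the world map.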

Also note that since Clarified Analyzer is a Qt app, we can pick up native styles. Prepare for oxygenized network analysis once we start to ship with Qt 4.4. ;)

Update: Adjusted looks slightly.

-- slougi 2008-06-07 00:00:55

2008-04-30 08:42 Infosec 3rd day - Anti Cognitive Dissonance


Picture: A dead view with that familiar
NASA earth view.

Contrary to what I thought in my last blog, Cognitive Dissonance is actually ...a psychological state that describes the uncomfortable feeling when a person begins to understand that something the person believes to be true is, in fact, not true. (Wikipedia). What I experienced at Infosec was just the opposite. I had thought it meant a state where one pays attention only to those observations of the world which support one's existing beliefs. That latter state is what I actually had, as I heard some things which could have come straight from a Clarified Networks TV-shop commercial, if there were one. Here are some samples.

"I don't want to pay for the whole year"

I overheard an argument where a customer was telling a vendor that he uses the software only periodically, not throughout the whole year. He didn't want to pay for an annual license. Instead, he would like to pay only for the time he is actually using the software. Sounds like Cookie Licensing to me.

"You can't have balanced vulnerability assessments without understanding the traffic flows"

There was this one company called IDSec whose arguments were exactly the same as ours when we talk about TrafficAudit. Their approach to solving clients' problems was just different. We monitor the actual traffic of the system, as that is the most authoritative source for seeing how an organisation's policy is implemented in practice. This approach is also independent of the components used in the target network. IDSec imports configuration files from network devices. Not a bad idea either. In our audits we typically use the existing configuration to gain more insight into the client's security policies.

P.S. Now I've advertised our competitor and admitted on our blog that we are not perfect. Why? I'll answer in one of my next posts.

-- jani 2008-04-30 09:45:51

2008-04-24 21:10 Infosec, 2nd day

Picture: Having a phone. What a sweet feeling.


My second morning almost turned out to be a disaster. I lost my phone. I had a creeping feeling that I must have dropped it on the tube or at the tube station, because my last mental image of the phone was from when I left my hotel for the conference hall. I had my other phone with a UK prepaid SIM card in it, so I kept calling my 'real' phone, wishing someone would answer. No luck. To add insult to injury, this was also the moment the prepaid SIM's operator chose to inform me that I was running out of juice. Fortunately I found a free public access point at the Infosec conference hall. The downside was that only http and https connections were allowed, so I could not use any of my standard instant messaging options to contact people. By the way, I thought that hindering users' internet connections to achieve a false sense of security went out of style a few years ago already. Well, the only thing left to do was to fill in a lost property form for London transport. They wanted the IMEI and SIM references of the lost phone. They also advised that if I didn't know my IMEI or SIM references by heart, I should call my carrier and ask for them. Right, roger that.

And the story doesn't even end there. The form required me to leave my address, but according to the form there is no such country as Finland. So I decided not to call my carrier about the lost phone, as surely the London transport lost property office can't ship a phone to a non-existent country.

To be honest, at that point I just had to decide which is more expensive: a phone worth a few hundred euros, or me wasting hours of good social networking time at Infosec. I tried to remember whether there were any text messages on my phone that could, in the wrong hands, throw off governments (it could happen in Finland as well). The answer I meditated my way to was 'no', so I decided that my work at Infosec was more important.

The day turned out to be good after all

I met a friend of mine who is alpha testing our products. He really likes Clarified Analyzer, so he has mentioned it to some of his contacts, and we had a nice chat with them. We also visited Codenomicon's booth and, what do you know, someone gave me my phone back (I don't remember who it was because I was so stunned). A good citizen had called the recently dialed numbers, and one of Codenomicon's people had answered and arranged for my phone to be brought back to their booth to wait for me. Thanks Ashley and the anonymous guardian angel I never met!

Later I attended a panel discussion arranged by Sophos. The panel was actually much better than I expected. However, it was mostly about dodging bullets (how to avoid being infected), while I'm more interested in helping the authorities catch the actual criminals more efficiently. But it was good fun anyway, as the panelists put on a good show.

In the evening there was a social event at the hall, and after that we went to a restaurant with people from an SLA company. I had talked to one of these guys at their stand and got a lukewarm reception. At the evening meal, as we had more time to chat, things were totally different. We had a good time. I lost my voice, and we called it a day a little before midnight. The Thai food was excellent, by the way.

Tomorrow is going to be busy but I will try to write about day three as well, the day of cognitive dissonance.

-- jani 2008-04-24 21:53:37

2008-04-23 08:09 Infosec, 1st day


Picture: FUD still alive and kicking

My first impression of the Infosec conference this year is: FUD (Fear, Uncertainty and Doubt) and lots of snake oil. People are selling threats, complete solutions and so on. Fortunately I got the chance to talk to some new people, which has helped amend some of that first impression. The main action point for me was to meet old friends and business partners, and by the end of the day I had already met a lot of them. Another thing I noticed was that most of the user interfaces still have this really old-school Windows look, and 'we need more buttons to impress our clients' is alive and kicking. I guess it will still take a couple of years for things to catch up, although many companies have already noticed the keep-it-simple principle. It just takes some time until it manifests itself in these huge mainstream events.

My second objective was to test how easy it is to turn the discussions upside down, so that I would end up selling Clarified Analyzer to the salespeople in the stands without forcing the discussion in that direction. It worked pretty well, although I don't have high expectations for these exercises; there are so many steps from getting someone excited at a conference to an actual deal. But it was good practice anyway.

-- jani 2008-04-23 08:19:35

2008-04-23 08:19 Charlene at Govsec Conference in Washington April 23-24 2008


Clarified is covering the globe this week, spreading our knowledge on how to simplify and understand the harsh realities of modern-day complex networks. We are not only making an appearance at Infosec in London, we are also going to be in Washington DC for the GovSec 2008 conference. So come and see me at booth #1813. I will be there to discuss with you all the joys of 'collab'oration, our award-winning tools, and how we will tailor packages to your specific needs.

Looking forward to seeing you.

-- charli 2008-04-23 08:19:35

2008-04-19 13:54 Miracle-based business model

Our developers (well, actually Joachim most of the time) often joke that Clarified's software development model is to come up with small miracles on a regular basis. This picture is related; we found it from:

-- jani 2008-04-19 13:57:30

2008-04-19 13:29 Jani at Infosec, London 22 - 24 of April


Clarified will be present at Infosec. Unfortunately we won't have our own exhibition stand there, but I will be on the move, catching up with old friends and trying to find new ones. If you are around, give me a call or send an SMS (+358451224601).

If I have any energy left during the conference days, I'll share some of my observations here, so stay tuned.

-- jani 2008-04-19 13:33:35

2008-03-25 14:05 Safra


Today I imported safra into KDE svn. That alone probably does not tell you a whole lot, so I'll talk a bit about what the project is about and where I want to go with it.

Read more on my personal blog.

-- slougi 2008-03-25 14:07:58

2008-03-17 15:04 Information Security Turned Against its Creator



Congratulations and condolences on your latest baby. Nice to see you got that material finally out of your system!

Oh, how is the topic related to the news? The 'baby' (archive format test material) also happens to break a lot of software that is considered to increase security. Our long-standing rant has been that it is hard to achieve better security by constantly adding network components and exposing more and more lines of code to external ... inputs.

-- jani 2008-03-17 15:19:58

2008-03-14 12:48 Friday night link dump

Our director of non-human resources Peppi is already feeling the Friday effect.


Sometimes we surf the web, too. Oh, don't act so shocked, everyone does it.

These links have been obtained by clicking around the great

  • Quintina apparently refers to a weird fifth voice emerging when four throat-singers harmonize in just the right way. There are some audio samples here. Kinda creepy.

  • Two actually funny LOLCAT-pictures (that, incidentally, don't contain cats): abort! abort! and EMO BATH IS PENSIVE.

  • I'm not even interested in fashion related things, but The Sartorialist is strangely hypnotic. Apparently it is by a guy who strolls around with a camera and when he spots an interesting person he asks if he can take a picture.

  • Photoshop disasters chronicles, well, Photoshop disasters such as the mutant Lady Guinevere. Speaking of which, here's a page about Photo Tampering Throughout History.

This TED talk by David Gallo reminded me why one shouldn't eat and surf at the same time. I almost choked on my bread during the last clip. The talk includes, quoting, "a shape-shifting cuttlefish, a pair of fighting squid, and a mesmerizing gallery of bioluminescent fish that light up the blackest depths of the ocean".

There's also a documentary about lemurs, starring John Cleese as himself, at Google Videos.

If this wasn't enough, then destroy the remaining fragments of your Friday productivity with some Macintosh Stories or Halcyon Days: Interviews with Classic Computer and Video Game Programmers. Or The Impossible Quiz (click the "PLAY THIS GAME!" link).

-- jviide 2008-03-14 12:02:28

Okay, here are my Friday effect links:

  • I love science fiction, and this place Hulu seems to have a lot of the more obvious shows like Firefly and Battlestar Galactica, but they can only be enjoyed online if you have access to an American IP address.

  • You can also catch the not-so-obvious science fiction shows like Lexx and Sky Girls from Veoh

  • For movie gossip I occasionally read the clever news blog

  • Last but not least, I would like to share something (almost) totally unrelated to science fiction: there's this interesting new social networking website that has spread quite nicely .. or should I say hereditarily .. in my circle of trust

Have a fun weekend everybody!

-- jyrki 2008-03-14 12:54:52

2008-03-12 14:25 In the year 2000


Can you guess what this does?

Available in your nearest cvs now, production use... some time later, if ever.

-- jviide 2008-03-12 12:32:22

I don't know, but it looks sweet!

-- jyrki 2008-03-12 12:50:33

I'm stunned. (They really didn't tell me this feature is already being prototyped.)

-- jani 2008-03-12 12:58:07

2008-03-09 13:41 Baking Sweet Licenses


Jyrki already wrote about the Cookie Monster in Blog/2008-03-04 11:17. That was an introduction to our licence ideas from an end user's point of view. I wanted to share the same topic from a slightly more technical side, so here we go.

We have a love/hate relationship with enforcing licences. Even if you'd like to think so, licence checks and copy protection do not prevent people from cracking and copying software - they just usually decrease the usability of the software.

So why use licensing at all?

How I see it, there are three benefits to having a licence check in software. First of all, it helps honest people stay honest (people with dishonest intentions can always circumvent the security measures). Second, software licence checks help the ones responsible for managing licences. I've encountered people in my work who are utterly stressed out because they have a hard time tracking how much their employees are using which software. This leads to the third point: when negotiating new annual deals, our clients sometimes have a hard time making purchase orders because they have no solid idea of how much the software is actually being used and what kind of benefits it has reaped. As a result, in some cases they are also not motivated to buy the software again. In our case, it is actually not our clients' responsibility to track how much our software is being used. With Bottomless Cookie Jar Licencing, for example, we need to take care that our clients are optimising their Tia usage in order to seal the next annual deal. That takes some extra effort, but we believe that selling software that no-one uses is not going to be a viable business anyway.

So what we need is to tackle the annoyances.

Licence strings are a little dull. From a vendor's point of view, you end up building a framework for managing the licences. From a customer's point of view, you end up copy-pasting licence strings. That might be okay if you assume that there is hardly ever a need to reinstall the software or the operating system. In the unfortunate event that enterprises do need to support software and operating system reinstallations, there needs to be some kind of network-based licence daemon, usually one especially designed for enterprise use. Those are typically maintained by the ICT department and, to make a long story short, let's just say that we software vendors usually advertise that we should be reducing the workload of ICT professionals, not increasing it.

Well, how about dongles then? Personally, I keep losing mine, and even when I manage not to misplace them, I have noticed that many times I cannot complete a small task on the road because I had decided to travel light and leave all the "extra" items at the office. Thank you very much, that was very handy.

So, no wonder users hate licence checks.

We have spent a lot of time discussing smarter ways to handle licence checks, and we've had a really nice response after introducing Cookie Jar Licencing (Blog/2008-03-04 11:17). We have a user base (currently about 100 users and increasing) who are already utilising our Collab Portal. So why not integrate Tia licences with our Collab Portal? Now our users can simply type in their Collab credentials, pick the number of days they want to use Tia, and that's it. No hassle, no copy-pasting. No integration with enterprise licence management. Just log in and go.

By the way, see a demo at YouTube.

-- jani 2008-03-09 14:43:06

2008-03-07 22:27 HAHA U LOOSE


I would like to chip into the conversation started in the previous blog entry, because we, the code squad, actually know the name for the described phenomenon. (8) It's called "the syndrome of trivial in-betweens" ("triviaalien välivaiheiden syndrooma" for those in the know). It usually manifests itself, e.g., in situations where two people make seemingly simple plans while being blissfully unaware of the reality. Left untreated, it may lead to epic fail and bitterness.

"Hey, let's climb Mount Everest! We'll just walk uphill until we reach the top! Piece of cake!"

"Hey, Mr. CTO, you want Tia to be able to sniff whether you have Wireshark installed in your path and then be able to launch it? Piece of cake!"

Piece of cake. (9)

-- jviide 2008-03-07 21:04:44

2008-03-07 08:49 It Will Be Done In Two Days...


Have you ever found yourself in a situation where you have made a promise and, before you know it, bitten off a little bit more than you ORIGINALLY planned to chew, so to speak? Well, I found myself in that place about two weeks ago. I had an assignment that I assumed was going to take me two days. Now it seems that either there has been some sort of time dilation effect, or I have misled myself.

So let me impart some new found knowledge on to you with my hindsight bias about planning projects and assignments:

  1. Don't be fooled by the simplicity of the goal. Behind every corner lurk the many trivial intermediate steps ready to bite you, just like trolls waiting on a bridge harassing you to complete more silly quests before they allow you to cross.
  2. Don't make promises before you have assessed the entire situation.....PLAN PLAN PLAN!

Abide by these rules and you will not be left alone in unknown waters, trying to find a way back to shore while already a week overdue.

-- mikko 2008-03-07 09:43:22

2008-03-04 21:01 Rest in peace, Dungeon Master


Earlier today Gary Gygax, 69, co-creator of D&D, stepped on to the next level. I know that, as avid gamers, several Clarified people have been influenced by his work.

From D&D we learned that wizards can't wear any armor, which has proven time after time to be directly applicable information also in real life.

-- jviide 2008-03-04 21:01:46

2008-03-04 15:04 Exponential nag-off


The guys have been nagging me to write about the new cookie system. I took a taoist approach, and finally Jyrki wrote about it instead. Tao abides in non-action, yet nothing is left undone.

Speaking of nagging, I was coerced to sell the last bit of my soul and write a nagging screen for when your Tia license expires mid-session. Yeah. I'm the one to whom you should send warm feelings when this pops up in your face.

The nag dialog isn't the end of the world and won't kill your session or anything like that. Except if there's a bug, which of course is unpossible. You can always push your luck, and the nag will be postponed until some later time. Keep pushing, though, and the delay between nags will get shorter.

I'm calling this behaviour exponential nag-off. The term has everything: there's the fancy-pants, highly scientific word "exponential"; "nag-off" sounds rude for some reason I can't really put my finger on; etc. In short, I think my contribution to future generations will be this very term.

And a nagging dialog.
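A minimal sketch of how such an exponential nag-off schedule might work - the function name and parameters here are my own illustration, not Tia's actual implementation:

```python
def nag_delays(initial_minutes=60.0, factor=0.5, floor_minutes=1.0):
    """Generate successive nag delays: each postponement shrinks
    the wait until the next nag geometrically, down to a minimum."""
    delay = initial_minutes
    while True:
        yield delay
        # push your luck: the next reprieve is only half as long
        delay = max(delay * factor, floor_minutes)

gen = nag_delays()
print([next(gen) for _ in range(4)])  # [60.0, 30.0, 15.0, 7.5]
```

The floor keeps the dialog from degenerating into a continuous nag, while the geometric shrink makes endless postponement increasingly uncomfortable.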

-- jviide 2008-03-04 15:04:40

2008-03-04 11:17 Cookie Monster


I believe it is Adam Savage from Mythbusters who likes to say "I reject your reality and substitute my own" (originally from the movie The Dungeonmaster). Well, we here at Clarified like to reject reality on a regular basis as well. Lately we have been discussing licencing models a lot. Especially concerning the Clarified Analyzer licences, we began with the more typical ones: annual licences per user, then per usage, then some customisations for special cases. The problem is that we want something that is really simple, and we want something that is totally fair to use (we have seen some really outrageous licence schemes). There are some key questions that need to be addressed:

  • Who is using the software and are they also the ones who are paying for it?
  • Once you have found a good software tool, what are you actually paying for?
  • How do you control access rights and different types of licences?

Since we didn't find anything that suited our needs in the current reality, we decided to substitute it with our own. Enter the Cookie Jar Licence Model - instead of buying a costly annual licence that you are unsure anyone will actually use, you can buy some Cookies and use them whenever you feel the need. One Cookie gives you access to Clarified Analyzer (and the corresponding collaborative features) for 24 hours. You can have a few Cookies for free (send us an email), or you can buy a bunch for your company and give them to your IT support / security / auditing / administration guys (and girls) to be used whenever there is a good case for trying out our next-generation, user-friendly analysis features.

In addition to this intuitive, easy way to start using our tools, there is also something we call the Bottomless Cookie Jar Licence, which is the perfect solution for bigger enterprises. The Bottomless Cookie Jar is an annual licence where you can use as many Cookies as you like, and the next annual licence is based on the previous year's consumption. This gives you hassle-free access to all the powerful features of our Collaborative Network Analysis tools for all staff (and even for your business partners), and gives management good visibility into what is being done to advance your network infrastructure and administration, and by whom.

In a nutshell, with a Cookie Jar Licence you can make a purchase order very easily, regardless of whether you are part of IT support or general management. Also, you can rest assured that you will only pay for actual use of the software, and you will have good visibility (supported by our collaborative features) into who is using the software and for what.

So, beware the Cookie Monster and please contact us for feedback and inquiries (we would also like to know how you like our new blog):

Hallituskatu 9 A 21 (for real cookies with coffee)
90570 Oulu


-- jyrki 2008-03-04 11:17:59

2008-02-25 12:26 We've been YouTubed


While we have had success in our struggle against complex networks, our struggle with complex video codecs has not been so successful. The problem is that we would need an army of mad scientists to systematically test all the possible codec combinations and how they work on different platforms.

But fortunately there is YouTube, which is so widely used that most of our computers are already able to view videos from there. As an additional bonus, YouTube has lots of bandwidth and good availability. Of course, being on YouTube will hopefully also give us more exposure. The only downside is that the video quality is still quite poor, although there are some tricks that can be used to enhance it a little bit.

I just uploaded the older videos to YouTube. We'll post more videos with better authoring later. Have a look and feel free to comment!

-- jani 2008-02-25 12:26:43

2008-02-24 10:47 Casting our screens


I've been searching for a decent screencasting software for OS X for a year now. I've been using Snapz Pro which has been good enough. However, Snapz Pro lacks some features I would need. For example, it would be nice if I could zoom to the key areas of Clarified Analyzer user interface while showcasing the features. Our friends at Codenomicon recommended Camtasia but unfortunately that is only for Windows.

I think today I found what I've been looking for. And I discovered it while sleeping! (Yes, we are working 24/7 to achieve the best possible results.) I listen to podcasts as bedtime stories, and just as I was going to doze off I heard the fellows at MacBreak Weekly recommend ScreenFlow. It sounded so good that I woke up and downloaded the evaluation version of the software. I decided to postpone testing until the next morning, because there is no better way to start a Sunday than with a fresh cup of coffee and a promising new piece of software to evaluate. Maybe now we will be able to quickly create great Clarified Analyzer examples for you people out there!

ScreenFlow was extremely simple, and yet it had the features I've been looking for. In half an hour I had my first test screencast, which included zooming in and out of key elements and having the presenter's face attached to the screencast. After another hour of testing I'm really optimistic about what we can do with this nice piece of software. The result can also be found here (my apologies for my Sunday morning look in the video).

-- jani 2008-02-24 10:47:08

2008-02-19 08:45 Big Challenges. Simple Solutions.


Kudos to Dan Cohn and Tom Killalea for the title and their impressive keynote presentation at the NANOG meeting this month. It gave us a glimpse into their amazing computing infrastructure, its design principles and some (visualization) tools they use to manage it. What stunned me, however, was how familiar their challenges were and how easily I could sympathize with their advice. My background is in managing considerably smaller systems, but it seems the problems I have encountered scale even to a planetary-scale distributed system. :) From now on my checklist will explicitly include: massive-scale services need to be simple; policy that is not enforced by tools is useless; automate everything that does not need a human operator; use common metrics for all parts of the operation; hire good people. That doesn't do justice to the actual content, however, so a couple of verbatim quotes follow:

Simplicity isn't achievable as a passive goal. It’s a force that must be actively applied -- Charles Moore, father of FORTH

Always anticipate the next order of magnitude of growth, even if it’s a struggle.

The network is the only authoritative resource that exists.

It can take multiple iterations to get to “simple”

Applications meet the Network - Complementary instrumentation on both sides can contribute to automated recovery

It is reassuring to hear that simple is beautiful even in the planetary scale. :)

-- fenris 2008-02-19 08:45:15

2008-02-18 02:47 Hello

Picture: Louai Al-Khanji (slougi)

Since we now have this new-fangled Blog thing I thought I might introduce myself. My name is Louai. I go by the nick slougi just about everywhere in cyberspace, and lately (somewhat disturbingly) quite often in real life as well. I like KDE, X, and unixy kinds of things. I listen to music ranging from classical to gangsta rap to heavy metal to trance. Right now I would really like to have some kibbeh. I guess that about sums me up. Hello world!

As I already started writing I might as well talk about the things I am currently working on. During the last summer I incrementally wrote the Topology View we have in Tia today. It has served us pretty well - apart from some teething problems and a few small latent bugs it does what it set out to do. But time flies and at Clarified Networks six months is a really, really, really long time. In a nutshell, we have for a while now wanted something with more bang, preferably a few metric tons more.

On Friday I started writing a replacement. The plan is to use GraphingWiki metadata keys to construct topologies dynamically. For every container a wiki page is created, which includes its coordinates for each topology it belongs to, as well as all the IP addresses it contains, along with documentation and whatever else one likes to put there. Because each container page includes this information, we no longer even need topology pages! At the same time, support for customizable container pixmaps will be implemented - you can now make your router actually look like a router, not a white spot. (10) So we get more dynamic content, more bling, and personally I get a chance to clean up the codebase, which I have been meaning to do for a while now. Putting all this metadata in the wiki and adding the containers to the same category will also enable us to show networks graphically in the browser as well, similar to this. GraphingWiki is really really cool.

Another, currently somewhat more researchy, thing I am working on is breaking out certain functionality into a C++ library. (11) In some modules, most noticeably the graphs, but the topoview as well, we are hitting certain Python-related performance limits. We looked into this with Jukke and found that PyQt proxy object creation overhead consumes a lot of time in fast paths, such as painting graphs. Another place that could use a speed boost is the physics-based spring graph layout engine, which is basically just number crunching. The plan is to rewrite performance-critical parts in C++ and wrap them using PyQt as needed. Stay tuned, I might some day actually get it to work - currently there doesn't seem to be a reason why it shouldn't.
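To illustrate the kind of number crunching such a spring layout engine does, here is a minimal force-directed relaxation step in plain Python. The names and constants are my own; this is not Tia's actual engine, just a sketch of the technique:

```python
import math

def spring_step(positions, edges, rest=1.0, k=0.1, repulse=0.5):
    """One relaxation step of a force-directed (spring) layout:
    connected nodes are pulled toward the rest length, while
    all node pairs repel each other slightly."""
    forces = {n: [0.0, 0.0] for n in positions}
    nodes = list(positions)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            ax, ay = positions[a]
            bx, by = positions[b]
            dx, dy = bx - ax, by - ay
            dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero
            f = -repulse / dist ** 2           # pairwise repulsion
            if (a, b) in edges or (b, a) in edges:
                f += k * (dist - rest)         # spring toward rest length
            fx, fy = f * dx / dist, f * dy / dist
            forces[a][0] += fx; forces[a][1] += fy
            forces[b][0] -= fx; forces[b][1] -= fy
    return {n: (positions[n][0] + forces[n][0],
                positions[n][1] + forces[n][1]) for n in positions}

pos = {"a": (0.0, 0.0), "b": (3.0, 0.0)}
new = spring_step(pos, {("a", "b")})
# the spring contracts toward the rest length, so the nodes move closer
```

Iterating this over every node pair each frame is exactly the O(n²) inner loop that benefits from a C++ rewrite.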

On a side note, I've recently been working on an IRC bot in my spare time. It doesn't do much yet, but it has been very therapeutic to write. Check it out.

-- slougi 2008-02-18 02:47:56

2008-02-15 08:54 Corporate Website meets OpenCollab ideology

Jani Kenttälä

We decided to start using our Collab infrastructure for our corporate website as well. This is the first setup available to the public. It has been a challenge to make the MoinMoin infrastructure meet our security standards. It's not quite there yet, so we need to put some extra effort into monitoring it and isolating it from our production infrastructure.

On the positive side, in just a couple of days there have been more edits and updates to our web page than in the past 1.5 years combined. And we are collaborating on the content! (12)(13) Thanks to the Collab infrastructure, I instantly get a message informing me that my blog page has changed, and I can check the changes like this. We (and our clients) can also check recent changes easily from the RecentChanges page. It is interesting how small things like these make a huge impact on the productivity of the people working on and with the content. I would 'never' go back to the old world of dead views; welcome the living views!

-- jani 2008-02-15 08:54:14

2008-02-14 12:45 Keep it simple

Picture: Jani Kenttälä

Dr. Stuart Cheshire from Apple gave a talk at Google: Zero Configuration Networks with Bonjour. Understanding Bonjour was nice. But the thing I liked most was that he was ranting about how we should keep protocol designs as simple as possible.

From time to time it seemed that he was almost ashamed about how simple the protocol is. I just wonder: how many nowadays think complexity and sophistication are synonyms? As Antoine de Saint-Exupéry famously put it: Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.

-- jani 2008-02-14 12:45:46

  1. With the help of the DNS Changer Working Group

  2. Proudly as in 'proud as a 2-year-old is when she manages to put the shoe on the wrong leg'.

  3. In reality the software should have all three of the properties listed above

  4. Original image from:

  5. Unfortunately, as Erno phrased it, that Wikipedia article on the EAV model has been written with RDBMS-tainted glasses

  6. I didn't test the code after making some trivial(tm) edits; the original example was 25 lines. :)

  7. Experimental

  8. We're on a roll! Remember the previous, by now ubiquitous, expression "exponential nag-off" coined in this blog?

  9. Right. We had a long-standing problem with the LD_LIBRARY_PATH environment variable when launching external programs. Slougi fixed that just a while ago with a kludge-hack. Why a kludge-hack? Because fixing it without either a kludge or a hack would be a mathematical impossibility! Grarh!

  10. Admittedly, a very sexy white spot with gradients.

  11. After working with Python for so long, writing C++ code is quite a nightmare. It's like building a house out of wet spaghetti. I do like having the compiler shout at me when I do the Wrong Thing, though.

  12. Slougi just informed me that he had fixed the typos in my previous blog.

  13. I just fixed some typos here as well ;) -slougi