I am still sitting in the airplane, and the next article in the November 2005 ISSA Journal that catches my attention is “Log Data Management: A Smarter Approach to Managing Risk”. I have only a few comments about this article:
- The author demands that all log data be archived, and archived unfiltered. Well, here is a question: What is the difference between not logging something and logging it, but later filtering it out? What does that mean for litigation-quality logs?
- On the same topic of litigation-quality data, the author suggests that a copy of the logs be saved in the original, raw format while analysis is done on the other copy. I don’t agree with this. I know, in this matter my opinion does not really count and nobody is really interested in it, but I will have some proof soon that this is not required. I am not a lawyer, so I will not even try to explain the rationale behind allowing the processing of the original logs while still maintaining litigation-quality data.
- “Any log management solution should be completely automated.” While I agree with this, I would emphasize the word should. What does that mean anyway? Completely automated in the realm of log management? Does that mean the log is archived automatically? Does it mean that the log management solution takes action and blocks systems (like an IPS)? There will always need to be human interaction. You can automate a lot of things, including the generation of trouble tickets, but at least then an operator will be involved.
- Why does the author demand that “companies should look for an appliance-based solution”? Why is that important? The author does not give any rationale for it. I can see some benefits, but there are tons of drawbacks to that approach too. I have yet to see a compelling reason why an appliance is better than a custom install on company-approved hardware.
- In the section about alerting and reporting capabilities, the author mentions “text-based alerts”, meaning that rules can be set up to trigger on text-strings in log messages. That’s certainly nice, but sorry, it does not scale. Assume I want to set up a trigger on firewall block events. I can define a text-string of “block” to trigger upon. But all the firewalls that call this not a “block” but a “Deny” will not be caught. Have you heard of categorization or an event taxonomy? That’s what is really needed!
- “… fast text-based searches can accelerate problem resolution …” Okay. Interesting. I disagree. I would argue that visualization is the key here. But I am completely biased on that one 😉
- Another interesting point is that the author suggests that “… a copy [of the data] can be used for analysis”. Sure. Why not, but why? If the argument is litigation-quality data again, why would compression, which is mentioned in the next sentence, be considered a “non-altering” way of processing the data? If that is the argument, I would counter that I can work with the log data by normalizing it, and even enriching it, without altering the original.
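To make the taxonomy and the "enrich without altering" points concrete, here is a minimal sketch of what I mean. Everything in it is hypothetical: the category names, the vendor keywords, and the log lines are made up for illustration; real products use their own schemas. The idea is simply that vendor-specific action words map to one shared category, while the raw message rides along untouched.

```python
# Hypothetical event-taxonomy sketch. The TAXONOMY table and the
# category names are invented for illustration only.

# Map vendor-specific action words onto one shared taxonomy category.
TAXONOMY = {
    "block": "firewall.connection.denied",
    "deny": "firewall.connection.denied",
    "drop": "firewall.connection.denied",
    "accept": "firewall.connection.allowed",
    "permit": "firewall.connection.allowed",
}

def normalize(raw_line: str) -> dict:
    """Return an enriched event carrying the normalized category
    alongside the raw log line; the original text is not modified."""
    lowered = raw_line.lower()
    category = "unknown"
    for keyword, cat in TAXONOMY.items():
        if keyword in lowered:
            category = cat
            break
    return {"raw": raw_line, "category": category}

# Two vendors, two spellings, one category: a rule written against
# the category catches both, where a text-string rule on "block" would
# miss the second line entirely.
events = [
    normalize("Oct 3 12:01:02 fw1 block tcp 10.0.0.5 -> 1.2.3.4"),
    normalize("Oct 3 12:01:03 fw2 Deny udp 10.0.0.9 -> 5.6.7.8"),
]
assert all(e["category"] == "firewall.connection.denied" for e in events)
assert events[1]["raw"].split()[4] == "Deny"  # raw line preserved verbatim
```

A rule engine then triggers on `category == "firewall.connection.denied"` regardless of what each firewall vendor calls the event, and the untouched `raw` field is what you would hand over if the original record is ever needed.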
I’ve taken a quick look at your postings, which are very interesting. Lots of material and ideas! Congrats on being so focused!
Comment by David — November 6, 2006 @ 10:14 am