August 25, 2007

Event Processing – Normalization

Category: Log Analysis,Security Information Management — Raffael Marty @ 6:15 pm

A lot has happened over the last couple of weeks, and I am really behind on a lot of things that I want to blog about. If you are familiar with the field I am working in (SIEM, SIM, ESM, log management, etc.), you will fairly quickly realize where I am going with this blog entry. This is the first of a series of posts in which I want to dig into the topic of event processing.

Let me start with one of the basic concepts of event processing: normalization. When dealing with time-series data, you will very likely come across this topic. What is time-series data? I used to blog and talk about log files all the time. Log files are a type of time-series data: data that is collected over time, where each entry is associated with a time stamp. This covers anything from your traditional log files to periodic snapshots of configuration files or of the output of tools that are run on a regular basis (e.g., capturing your netstat output every 30 seconds).
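
As a quick illustration, something as simple as the following loop turns netstat output into time-series data (just a sketch; pick whatever output file and netstat flags you like):

# capture a time-stamped netstat snapshot every 30 seconds
while true; do
    date >> netstat-snapshots.log
    netstat -an >> netstat-snapshots.log
    sleep 30
done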

Let's talk about normalization. Assume we have some data that reports logins to one of our servers, and we would like to generate a report showing the top ten users accessing that server. How would we do that? We'd have to identify the user name in each log entry first. Then we'd extract it, for example by writing a regular expression. Finally, we'd collect all the user names and compile the top ten list.
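
To make that concrete, here is a rough sketch of the quick-and-dirty approach, assuming OpenSSH "Accepted" messages in /var/log/auth.log (the pattern and the field number will differ for other log formats):

# pull the user name out of ssh login messages and compile the top ten list
awk '/sshd.*Accepted/ { print $9 }' /var/log/auth.log | sort | uniq -c | sort -rn | head -10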

Another way would be to build a tool that picks the entire log entry apart and puts as much information from the event as possible into a database, as opposed to capturing just the user name. We'd have to create a database with a specific schema; it would probably have these fields: timestamp, source, destination, username. Once we have all this information in a database, it is really easy to run all kinds of analyses on the data that were not possible before we normalized it.
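
As a sketch of what that looks like (using sqlite3 purely for illustration; the field names are the ones from above, the values are made up):

# a minimal normalized schema and the kind of query it enables
sqlite3 logins.db 'CREATE TABLE login_events (timestamp TEXT, source TEXT, destination TEXT, username TEXT);'
sqlite3 logins.db "INSERT INTO login_events VALUES ('2007-08-25 06:15:00', '10.0.1.23', 'server1', 'raffy');"
sqlite3 logins.db 'SELECT username, COUNT(*) AS logins FROM login_events GROUP BY username ORDER BY logins DESC LIMIT 10;'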

The process of taking raw input events and extracting individual fields is called normalization. There are other processes that are sometimes classified as normalization as well; I am not going to discuss them here, but normalizing numerical values to fall into a predefined range, for example, is generally referred to as normalization too.

The advantages of normalization should be fairly obvious. You can operate on the structured and parsed data. You know which field represents the source address versus the destination address. If you don’t parse the entries, you don’t really know that. You can only guess. However, there are many disadvantages to the process of normalization that you should be aware of:

  • If you are dealing with a disparate set of event sources, you have to find the union of all their fields to make up your generic schema. Assume you have a telephone call log and a firewall log, and you want to store both types of logs in the same database. What you have to do is take all the fields from both logs and build the database schema from them. This results in a fairly large set of fields, and if you keep adding new types of data sources, your database schema gets fairly big. I know of a SIM that uses more than 200 fields, and even that doesn't come close to covering all the fields needed for a good set of data sources.
  • Extending the schema is incredibly hard: When building a system with a fixed schema, you need to decide up front what your schema will look like. If, at a later point in time, you need to add another type of data source, you will have to go back and modify the schema. This can have all kinds of implications for the data already captured in the data store.
  • Once you have decided to use a specific schema, you have to build your parsers to normalize the inputs into this schema. If you don't have a parser for a data source, you are out of luck and you cannot use it.
  • Before you can do any type of analysis, you need to invest the time to parse (or normalize) the data. This can become a scalability issue: parsing is fairly slow, as it generally applies regular expressions to each of the data entries, which is a fairly expensive operation.
  • Humans are not perfect and programmers are not either. The parsers will have bugs and they will screw up normalization. This means that the data that is stored in the database could be wrong in a number of ways:
    • A specific field doesn’t get parsed. This part of the data entry is not available for any further processing.
    • A value gets parsed correctly but assigned to the wrong field. Any analysis relying on that field could then be wrong.
    • Breaking up the data entry into tokens (fields) is not granular enough. The parser should have broken the original entry into more specific fields.
  • The data entries can change. Oftentimes, when a new version of a product is released, it either adds new data types or changes some of the log entries. This has to be reflected in the parsers: they need to be updated to support the new data entries before the data source can be used again.
  • The original data entry is not available anymore, unless you spend the time and space to store the original entry along with the parsed and extracted fields. This can cause quite some scalability issues as well.

I have seen all of these cases happen, and they happen all the time. Sometimes the issues are not that bad, but other times, when you are dealing with mission-critical systems, it is absolutely crucial that the normalization happens correctly and on time.

I will expand on the challenges of normalization in a future blog entry and put it into the context of security information management (SIM).

[tags]SIM, SIEM, ESM, log management, event normalization, event processing, log analysis[/tags]

August 16, 2007

BaySec – Next Meeting August 20th

Category: Uncategorized — Raffael Marty @ 8:35 pm

We have another BaySec meeting scheduled for this coming Monday: 7pm at O'Neills, at 3rd and King Street. Right around the corner from my work 😉

August 7, 2007

Turning off mDNSResponder

Category: Uncategorized,UNIX Security — Raffael Marty @ 12:58 am

I thought I'd already disabled mDNSResponder when I did some basic hardening of my laptop. Turns out that when Marty (no, I am not referring to myself in the third person) asked me whether I had disabled it and I checked again, it really wasn't. Maybe I had just killed the process, but here is how to really disable that service:

Run the following command:

sudo launchctl unload /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist

The next step is turning off mDNSResponder at startup. And where do you do that? Since I am not really comfortable getting online here at BlackHat, I decided to just look around on the hard drive, and what I found is that you can probably just change an entry in the /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist file:

<key>OnDemand</key>
<false></false>

Replace false with true. Do you notice something? Someone really knew XML. Darn it. Two elements. One being the key, the other one being the value. Ever heard of attributes in XML? To whoever built this, this is how I would write the entry:
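
Something along these lines, with the value as an attribute (just a sketch; the exact element name is beside the point):

<option name="OnDemand" value="false"/>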

Or even better, re-architect the entire XML file to actually make sense!

I just now found the real way to actually disable the service: use the -w flag on the launchctl command from above. That will turn the service off permanently. A good reference is here.
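
That is, the command from above becomes:

sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist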

August 6, 2007

Mac OS X – Really Just FreeBSD?

Category: UNIX Security — Raffael Marty @ 10:24 am

No! OS X is not FreeBSD! And I am not sure I'd like OS X better if it were just FreeBSD on steroids.

I am sitting at BlackHat. Yes, I turned my laptop on, but the network interfaces are turned off! I was going to configure my firewall to lock everything down and then go online. First shock: <b>ipfw</b> is the firewall OS X uses. There is some history between me and ipfw. I am a big fan of OpenBSD, and when Daniel wrote the pf firewall to replace ipf, I was delighted. I started using pf and even fiddled around with the source code. I am no expert on all the features anymore, but I had a pretty good handle on that beast at some point. Now I have to learn ipfw… Okay. Let's do that and face the challenge.
First things first. Where's the configuration file for it? Hmm… There is a GUI. Let me play with that. I am shocked. By default, UDP traffic is allowed in and out, even if you turn off all your services in the main tab. Only if you use the advanced tab can you turn UDP off. Logging is not turned on either (what a surprise). Alright, I turned that on too. How do the rules look now? OMG! Ridiculous. It allows ports 5353, 137, 427, and 631 inbound! Why? Turn that off! Lesson learned: don't use the default config. Again, show me the configuration file. But where is it?

I still haven't found it. I am just going to write a script which uses the <b>ipfw add</b> command to add ipfw rules one by one. That's really the same thing I am doing with iptables on my Linux boxen. But before doing so, I wanted to see how ipfw log entries look. To test that, I added the following rule:
<code>deny log ip from any to any</code>
I just wanted to see how a log entry looks when I telnet to some port on my box. Well. Surprise, surprise. Right after adding that rule, not much worked anymore. <b>sudo</b> was not functioning anymore. After some digging around, I realized that the <b>/etc/passwd</b> file is not used for authentication! It's some service that uses the loopback interface. Not really sure what to do without sudo and a bit frustrated, I closed the laptop to resume later. Well, later, the laptop did not wake up anymore. Authentication gone! It just hung. A reboot was necessary. Darn. At this point I am really frustrated!
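
For what it's worth, here is roughly what such an ipfw add script might look like (just a sketch with placeholder rule numbers; note the loopback rule, which would have saved me the sudo trouble):

#!/bin/sh
# build the ipfw rule set one rule at a time
ipfw -q flush                                           # start from a clean slate
ipfw add 100 allow ip from any to any via lo0           # loopback -- local services (and authentication) need this
ipfw add 200 check-state                                # match established dynamic rules
ipfw add 300 allow tcp from any to any out keep-state   # outbound TCP, stateful
ipfw add 400 allow udp from any to any out keep-state   # outbound UDP (DNS etc.), stateful
ipfw add 65000 deny log ip from any to any              # log and drop everything else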

I think my next step is to go out and take Jay’s Bastille Linux scripts to see what they are going to do to my box. I actually hope Jay is going to show up here in Vegas so I can bug him about some of my OS X things 😉
[tags]OS X,ipfw[/tags]