January 16, 2006

Wireless Access – Linux

Category: UNIX Scripting — Raffael Marty @ 6:40 pm

Sitting down at a cafe around the corner from where I live, I realize that some of the scripts I wrote a while back might actually benefit others too. This one is to connect to the first available access point:

#!/bin/bash
# Scan for access points, pick the first one with encryption
# disabled, associate with it, and request a DHCP lease.

scan=/tmp/$$
iwlist ath0 scan > "$scan"

# In this driver's iwlist output, the cell address appears 5 lines
# before "Encryption key:off" and the ESSID 4 lines before it.
ap=$(grep -B 5 "Encryption key:off" "$scan" | head -1 | sed -e 's/.*Cell.*Address: \(.*\)/\1/')
essid=$(grep -B 4 "Encryption key:off" "$scan" | head -1 | sed -e 's/.*ESSID:"\(.*\)".*/\1/' -e 's/ //g')
rm -f "$scan"

echo "Trying AP:$ap / SSID:$essid"

iwconfig ath0 ap "$ap"
iwconfig ath0 essid "$essid"
iwconfig ath0 nick test
killall -9 dhclient
dhclient ath0

Not sure whether there is a simpler solution natively supported by Linux…

Shoki – Packet Hustler

Category: Visualization — Raffael Marty @ 5:05 pm

I haven’t looked at Shoki in a while. Today I downloaded a version again and tried to compile it on my Fedora Core 4 installation, only to find out that it would not compile. Well, I dug around in the code for a bit and, after some searches on the Web, realized that gcc 4 is stricter about C conventions, and Shoki contains some non-standard declarations. The fix was to set the CC variable in the Makefile to gcc32 instead of gcc.
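If gcc32 is installed (on Fedora Core 4 it comes from the compat-gcc-32 package, if I remember correctly), you don't even have to edit the Makefile; overriding CC on the command line should work, since command-line variables take precedence over Makefile assignments:

```shell
# Build with the older compiler instead of the default gcc 4.
# gcc32 and the package name are assumptions about your installation.
make CC=gcc32
```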
Playing with this tool, I somehow have the impression that I just don’t get it. I can redefine the axes and play with that, but I can’t even seem to zoom into a selection. And then there is all this extra stuff like fast Fourier transforms, etc. While I know what that is, I just don’t quite understand how it all works in Shoki. Maybe I have to spend an afternoon with the documentation 😉 Or maybe there are people out there who have some tips or hints for me?
What I am really interested in is whether someone has managed to analyze a dataset and can show me what they found and with which feature. Do all the bells and whistles (some of the advanced features) really help? Help me out!

January 12, 2006

Conferences

Category: Visualization — Raffael Marty @ 1:41 am

Conference season is kicking in again. It looks like this will be a busy year for me. I will be speaking at the RSA Conference in mid February in San Jose. I also just received notice that I got accepted to EuSecWest06. At both I will be talking about security event visualization. The EuSecWest presentation is going to be more technical and AfterGlow-driven, while the RSA presentation is at a higher level, about visualizing security data and attaching a workflow to it.
I have been interested in the workflow aspect of security monitoring for a long time. It kind of started about three years back with a presentation on intrusion management I gave at ETH Zurich, where I tried to outline that the incident response and security event monitoring processes need to be tightly integrated into the other IT processes. I guess over time this has become quite apparent, but I still don’t see it completely implemented in many places.

December 28, 2005

Tools for Visualization II

Category: Visualization — Raffael Marty @ 9:15 am

I will keep posting the answers to my Focus-IDS post where I asked people what they use to visualize their log files. Here is another homegrown solution for visualizing pf logs: Fireplot. It’s basically a scatter plot of ports over time.

December 27, 2005

Bayes’ Theorem in Security Posture Analysis

Category: Security Article Reviews — Raffael Marty @ 8:15 am

I was reading this article in the ISSA Journal from December 2005 that talks about Bayes’ Theorem and its application in security posture analysis. I love math and was very interested in what the article had to say. However, when I reached the end, I was not quite sure what I had learned and why Bayes would help me with any analysis. The examples given in the article are very poor. If department A has 90% of all the faults, it probably needs a reorganization. I don’t need any mathematician to tell me that, not even Bayes.
Sometimes people are blinded by math. Just because it’s a nice theorem, don’t abuse it. In the entire article, the probabilities are never discussed. They are chosen completely arbitrarily, with no guidance on how to pick them, which makes the entire analysis useless! And the outcome of all the math and complexity is entirely obvious. Again, there is no added value in using Bayes for this.
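To make the complaint concrete, here is Bayes’ theorem with made-up numbers (every probability below is an invented assumption, which is exactly the problem): the posterior swings from 0.18 to 0.90 depending entirely on a prior that an analysis like the article’s never justifies.

```shell
# P(A|F) = P(F|A) * P(A) / P(F)  -- Bayes' theorem
# Every probability below is invented; watch the result track the prior.
awk 'BEGIN {
  p_f_given_a = 0.9          # P(fault | department A): asserted, not measured
  p_f         = 0.5          # overall fault probability: also just picked
  for (i = 1; i <= 5; i += 2) {
    p_a = i / 10             # prior P(A): 0.1, 0.3, 0.5
    printf "P(A)=%.1f  ->  P(A|F)=%.2f\n", p_a, p_f_given_a * p_a / p_f
  }
}'
```

Garbage priors in, garbage posterior out; the theorem itself adds nothing if nobody can defend the inputs.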
Maybe I am missing the point of the article, but I did not learn anything. Maybe I should retake statistics…

December 24, 2005

Information Security Magazine December 2005

Category: Security Article Reviews — Raffael Marty @ 6:48 am

Yes, I am not just reading old issues of magazines … So here is a jewel I found in the December issue of Information Security magazine:

if you work with large, high-performance networks, make sure you are using systems such as Windows 2000 or Linux kernels 2.1.9 or later.

2.1.9? I am not even sure whether kernel.org still has those around 🙂 Does he mean 2.4.9? Maybe. I think he’s another author who has not used Linux, at least not in a while. The quote stems from an article called “The Weakest Link” by Michael Cobb. Another author coming up with new terminology. This time it’s the Application Level Firewall (ALF). While I have definitely heard this term, the author manages very well to confuse me:

Where IDS informs of an actual attack, IPS tries to stop it. IPS solutions tend to be deployed as added security devices at the network perimeter because they don’t provide network segmentation.
ALFs provide the application-layer protection of an IPS by merging IDS signatures and application protocol anomaly detection rules into the traffic-processing engine, while also allowing security zone segmentation.

That’s a long one, and reading it I don’t really see the difference between ALFs and IPSs. Network segmentation? Hmm… interesting. Is that really the difference? I have to admit I don’t know, but it seems like a “lame” difference. I bet the IPSs out there can do network segmentation.

The article manages to omit something that I think is quite important. When the author talks about decision factors for buying ALFs (by the way, this reminds me of the brown creature ALF from the TV series…), he does not mention that logs need to be monitored! And that requires that the logs applications produce actually be useful. What a concept.

Information Security Magazine August 2005 – NBAD

Category: Security Article Reviews — Raffael Marty @ 6:48 am

I am not sure in what century this article was written. I know, August is almost half a year back, but still: have you not heard of anomaly detection?

The article triggers another question that constantly bugs me: why do people have to, over and over, invent new terms for old things? I know, most of the time it’s marketing that is to blame, but please! Have you heard of NBAD (network-based anomaly detection)? Well, I have, and I thought that was the term used in the industry. Apparently there is another school of thought calling it NAD (network anomaly detection). That’s just wrong. How long have we had anomaly detection? I remember some work written around 5 years ago that outlined the different types of IDSs [btw, I learned from Richard at UCSB that it’s not IDSes, but IDSs]: behavior-based and knowledge-based ones. Or, as others call them, anomaly-based and signature-based. I will try to find the link again for the paper, which originated at IBM Research in Zurich. [Here it is: A Revised Taxonomy for Intrusion-Detection Systems by H. Debar, M. Dacier, and A. Wespi.] So when the author says:

… it helps to understand the differences between it [NAD] and a traditional IDS/IPS.

What does he mean by traditional? Anomaly-based systems are IDSs and are among the first ones that were built. So where is the difference?

Let’s continue and see what else is confusing in this article that is trying to explain what NAD is. The main problem with the article is that it’s imprecise and confusing, from my point of view. Let’s have a look:

NAD is the last line of defense.

Last line? I always thought that host-protection mechanisms would be the last line or something even beyond that, but network based anomaly detection? That’s just wrong!

NADS use network flow data…

Interesting. That is still my definition of NBAD. Maybe the author is confusing NBAD with anomaly detection in general, because anomaly-based IDSs use a whole range of input sources, not necessarily network flow data.

NAD is primarily an investigative technology.

Why is that? If that’s really the case, why would I use one at all? I could just use network flow data by itself and not buy an expensive solution. NAD (I am sticking with the author’s term), combined with other sources of information, is actually very useful in early detection, etc. Correlate these streams with other streams and you will be surprised what you can do. Well, just get a SIM (security information management) to do it for you!

Another thing I love is that _the_ use case called out in the article is detecting worms. Why is everyone using this example? It’s one of the simplest things to do. Worms have such huge footprints that I could write you a very simple tool that detects one. Just run MRTG and look at the graphs. Huge spike? Very likely a worm! (I know I am simplifying a bit here.) My point is that there are harder problems to solve with NAD and much nicer examples, but the author fails to mention them. The other point is that I don’t need a NAD for this; even SIMs can do that for you (and they have been doing that for about three years now, although the author claims that SEMs (as he calls SIMs) are just starting to do this. Well, he has to know.)
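To illustrate just how cheap volume-based worm detection is, here is a toy version of the MRTG-spike approach; the byte counts are made up and the 3x threshold is an arbitrary choice of mine:

```shell
# Flag any per-interval byte count that exceeds 3x the average
# of all the samples seen before it (made-up data, arbitrary threshold).
printf '%s\n' 100 120 90 110 105 2500 115 | awk '
  NR > 1 && $1 > 3 * (sum / (NR - 1)) { print "spike at sample " NR ": " $1 }
  { sum += $1 }'
# -> spike at sample 6: 2500
```

That is the whole trick; anything with a footprint that large does not need a dedicated product to be noticed.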

I love this one:

Security incident detection typically falls into two categories: signature- and anomaly-based. These terms are so overused …

So NAD is not overused? I actually don’t think these terms are very overused. There are very clear definitions for them. It’s just that a lot of people have not read the definitions, and if they have, they don’t really understand them. (I admit, I might not understand them either.) [For those interested, google for: A revised taxonomy for intrusion detection systems]

NAD’s implied advantages include reduced tuning complexity, …

This is again not true. Have you ever tried to train an anomaly detection system? I think that’s much harder than tuning signatures. The author actually contradicts his own statement in the next sentence:

NADS suffer from high false-positive rates…

Well, if it’s so easy to tune them, why are there many false-positives?

What does this mean:

Network anomaly detection systems offer a different view of network activity, which focuses on abnormal behaviors without necessarily designating them good or bad.

Why do I have such a system then? That’s exactly what an anomaly detection system does. It learns normal behavior and flags anomalous behavior. Maybe not necessarily bad behavior, but certainly anomalous!

The case study presented is pretty weak too, I think. Detecting unusual protocols on the network can be done very nicely with MRTG or just netflow itself. I don’t need a NAD (and again, I think this should really be NBAD) for that. By the way, a signature-based NIDS can do some of that stuff too: you basically feed it the network usage policy and it can alert if something strange shows up, such as the use of FTP to your financial servers. So is that anomaly detection? No! This goes along with the article claiming that NADs check for new services appearing on machines. I always thought that was passive network discovery. I know things are melting together, but still! Oh, and protocol verification is anomaly detection? No, it’s not. Where is the baseline that you have to train against?
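As a sketch of what I mean by feeding a NIDS your usage policy, a Snort-style rule along these lines would flag FTP toward a sensitive subnet; the variable name and the subnet it would stand for are hypothetical:

```
# Hypothetical policy rule: alert on any FTP control connection
# to the financial servers ($FINANCIAL_NET is a made-up variable).
alert tcp any any -> $FINANCIAL_NET 21 (msg:"Policy: FTP to financial servers"; sid:1000001; rev:1;)
```

No baseline, no training: just a signature engine enforcing a written policy.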

Finally, why would a NAD, or NBAD for that matter, only be useful in an environment that is, quote, “stable”? I know of many ISPs that are using these systems, and they for sure don’t have a stable environment!

Well, that’s all I have…

Information Security Magazine August 2005 – ProvinGrounds

Category: Security Article Reviews — Raffael Marty @ 6:47 am

It is probably a sign that I travel too much that I have already seen all the movies they show on the airplane (a total of three). But at least it is a good opportunity to read some of the many computer security magazines that have piled up on my desk over the past months.

I have an old issue of Information Security magazine in front of me, the August 2005 issue. There is an article by Joel Snyder entitled “ProvinGrounds” where he writes about setting up a test lab for security devices. I like the article, but one quote caught my attention:

LINUX is a useful OS for any lab equipment, but it’s best kept on the server side. Its weak GUI and lack of laptop support makes it difficult to use as a client.

I don’t know how much experience the author actually has with Linux, but I am typing this blog entry on a Linux system running on my laptop. I don’t know what the problem is. In fact, my GUI is probably even nicer than some of the Windows installations (I know, this is personal taste ;). Why would someone write something like that?

In the same issue of the magazine, there is another article by the same author, this one about VLAN security. While the article does not reveal anything new and exciting (if you actually follow Nicolas Fischbach’s work, you might even be disappointed with it), there is one thing in it that makes me think the author never had to configure a firewall. He recommends doing the following on a switch:

Limit and control traffic. Many switches have the ability to block broad types of traffic. If your goal, for example, is to enable IP connectivity, then you want to use an ACL to allow IP and ARP Ethernet protocols only, blocking all other types.

Firstly, since when are IP and ARP Ethernet protocols? But that’s not what is wrong here. Have you ever configured your switch like this? Do you want to know what happens if you do? If you have ever had to set up a firewall (and I am not talking about one with a nice GUI and a wizard and stuff), then you know that this is going to break quite a lot of things. Dude, you need some of the ICMP messages! Path MTU discovery, for example. What about ICMP unreachables? Without them you introduce a lot of latency in your network, because your clients have to wait for timeouts instead of getting negative ACKs.
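To make that concrete: on a Linux packet filter (the switch ACL equivalent varies by vendor), you would at minimum want to permit the unreachable messages before any blanket ICMP drop, roughly like this:

```shell
# Permit the ICMP messages that Path MTU discovery and fast
# connection failures depend on, before dropping the rest.
# Sketch only; chain choice and policy details will differ per setup.
iptables -A FORWARD -p icmp --icmp-type fragmentation-needed -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type destination-unreachable -j ACCEPT
iptables -A FORWARD -p icmp -j DROP
```

Filter by message type, not by protocol; an "IP and ARP only" ACL throws the baby out with the bathwater.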

December 20, 2005

Security Through Obscurity

Category: UNIX Security — Raffael Marty @ 2:44 pm

While I am not at all a fan of the “security through obscurity” paradigm, I think in some cases it has its benefits, for example in preventing automated scripts (i.e., worms) from compromising your box. I found this page about “port knocking” which only opens port 22 if you connect to a series of other ports beforehand. What I like about this solution is its simplicity: it just uses iptables.
The solution uses the

-m recent --rcheck

feature of iptables to open port 22 only if a certain other port has been connected to first.
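A minimal single-knock sketch of the idea using the recent module (the knock port 1234 and the 30-second window are my own choices, not necessarily the ones from the page):

```shell
# Knock: touching port 1234 records the source address in the
# "knock" list (and the probe itself is silently dropped).
iptables -A INPUT -p tcp --dport 1234 -m recent --name knock --set -j DROP
# SSH is accepted only for addresses that knocked within the last 30 seconds.
iptables -A INPUT -p tcp --dport 22 -m recent --name knock --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

A worm scanning for port 22 only ever sees it closed, while you can still get in with something like `nc -w1 host 1234; ssh host`.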

RAID 2006

Category: Uncategorized — Raffael Marty @ 1:41 am

The RAID (Recent Advances in Intrusion Detection) conference next year will be held in Hamburg. I will be on the program committee for the conference.
Make sure you submit a paper and attend the con!