January 17, 2011

Mid January Roundup

Category: Links,Log Analysis,Visualization — Raffael Marty @ 9:00 am

The last couple of months have been pretty busy. I have been really bad about updating my personal blog here, but I have not been lazy. Among other things, I have been traveling a lot to attend a number of conferences. Here is a little summary of what’s been going on:

  • I posted a blog entry on secviz about my security visualization predictions for 2011. It’s a bit of a gloomy forecast, but check it out.
  • The security visualization predictions post was motivated by a panel I was on at the SANS Incident Detection Summit in D.C. in early December. Here are the slides from my panel discussion.
  • One of the topics I have been talking about lately is Cloud Security. The slides linked here are from a presentation I gave in Mexico.
  • The topic of cloud security and also cloud risk management is one that I have been discussing on my new blog over at Infoboom.
  • I have also recorded a couple of podcasts over the last few months. One was the CloudChaser podcast, where we talked about Logging Challenges and Logging in the Cloud.
  • The other podcast I recorded was together with Kord and Gary for The Cloud Computing Show. We talked about all kinds of things, mainly about Loggly and logging in the cloud. Here is the mp3.
  • I also dug out the log maturity scale again. After mentioning it at the SANS logging summit, I got a lot of great responses to it.
  • The other day, one of my Google alerts surfaced this DefCon video of me talking about security visualization. It’s probably one of my first conference appearances. Is it?
  • And finally, 2011 started with a trip to Kauai where I presented a paper on insider threat visualization. Unfortunately, the paper is not publicly available. Email me if you want a copy.

As you are probably aware, you can find my speaking schedule and slides on my personal page. That's a good way of tracking me down. And in case you haven't found it yet, I also have a slideshare account where I share my presentations.

November 8, 2010

November Logging Updates

Category: Log Analysis,Security Market — Raffael Marty @ 11:02 am

It’s time for a quick re-hash of recent publications and happenings in my little logging world.

  • First and foremost, Loggly is growing and we have around 70 users on our private beta. If you are interested in testing it out, sign up online and email or tweet me.
  • I recorded two podcasts lately. The first one was about Logging As A Service. Check out my blog post over on Loggly's blog for the details.
  • The second podcast, which I recorded last week, was on the topic of business justification for logging. It is part of Anton Chuvakin's LogCast series.
  • I have been writing a little lately and got three academic papers accepted at conferences. The one I am most excited about is the Cloud Application Logging for Forensics one. It is really applicable to any application logging effort; if you are developing an application, you should have a look at it. It covers logging guidelines and a logging architecture, and gives a number of very specific tips on how to go about logging (a small illustrative sketch follows this list). The other two papers are on insider threat and visualization: “Visualizing the Malicious Insider Threat”
  • I will have some new logging and visualization related resources available soon. I am going to be speaking at a number of conferences over the next few weeks: Congreso Seguridad en Computo 2010 in Mexico City, DeepSec 2010 in Vienna, and the SANS WhatWorks in Incident Detection and Log Management Summit 2010 in D.C.
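For what it's worth, here is a tiny sketch of the kind of structured, key-value application logging the paper argues for. This is my own illustration, not code from the paper, and the field names (user, action, status) are made up for the example:

    import logging

    # Minimal sketch of key-value application logging (not code from the paper;
    # field names like user/action/status are purely illustrative).
    logging.basicConfig(format="%(asctime)s %(name)s %(message)s", level=logging.INFO)
    log = logging.getLogger("webapp")

    def log_event(**fields):
        # Emit stable key="value" pairs so downstream parsers don't have to
        # guess at free-form message text.
        log.info(" ".join(f'{k}="{v}"' for k, v in sorted(fields.items())))

    log_event(user="alice", action="login", status="success")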

See you next time.

June 28, 2010

All the Data That’s Fit to Visualize

Last week I posted the introductory video for a talk that I gave at Source Boston in 2008. I just found the entire video of that talk. Enjoy:

Talk by Raffael Marty:

With the ever-growing amount of data collected in IT environments, we need new methods and tools to deal with it. Event and Log Analysis is becoming one of the main tools for analysts to investigate and comprehend the state of their networks, hosts, applications, and business processes. Recent developments, such as regulatory compliance and an increased focus on insider threat, have increased the demand for analytical tools to help in the process. Visualization offers a new, more effective, and simpler approach to data analysis. To date, security visualization has mostly failed to deliver effective tools and methods. This presentation will show what the New York Times has to teach us about effective visualizations: visualization for the masses, not visualization for the experts. Insider Threat, Governance, Risk, and Compliance (GRC), and Perimeter Threat all require effective visualization methods, and they are right in front of us – in the newspaper.

June 14, 2010

Common Event Expression – CEE

Category: Log Analysis,Security Information Management — Raffael Marty @ 10:45 am

A rehash of an old blog post from February 2008. I thought it would make sense to give a quick update on CEE and put the link to the public discussion archives here again:

Well well well… I get so many questions from people about CEE. Where is it at, when does it come out, what will it cover? To be honest, I don’t quite know, but I have some answers. We have been working really hard on getting a syntax and a taxonomy working draft written up. I think it’s more than just a working draft; it will be a really well-thought-through starting point for the final standard around log syntax and taxonomy. We have been working on this for years now (I wish that weren’t literal, but it is). It took quite some time to get everyone on the CEE board to pull in the same direction. I can’t promise any timeline for publication, but I hope it’s close.

In the meantime, if you are interested in the public discussions around CEE, the public discussion archives are available online.

I have also been working on an application logging paper that I just submitted to USENIX. If you are interested in how we implemented logging at Loggly and want to look at the paper, drop me a line; maybe I will share it.

June 7, 2010

Maturity Scale for Log Management and Analysis

Category: Log Analysis,Security Information Management,Security Market — Raffael Marty @ 11:02 am

The following blog post was originally posted in December 2008. I updated it slightly to fit current times:

This blog post has turned into more than just a post; it’s more of a paper. In any case, I am trying to capture a number of concepts that define the log management and analysis market (as well as the SIEM or SEM markets).
Any company or IT department/operation can be placed along the maturity scale (see Figure 1). The further to the right, the more mature the operation is with regard to IT data management. A company generally moves along the scale over time. A move to the right does not just involve the purchase of new solutions or tools; it also needs to come with a new set of processes. Products are often necessary, but they are not a must.
The further one moves to the right, the fewer companies or IT operations can be found operating at that stage. Also note that the products used by companies on the left side of the scale are called log management tools. In the middle, security information and event management (SIEM) products are being used, and on the right side, companies have to look at in-house tools, scripts, or in some cases commercial tools from markets other than security. Some SIEM tools offer basic advanced analytics capabilities, but they are very rudimentary. The reason why there are no security-specific tools and products on the right side becomes clear when we understand a bit better what the scale encodes.


Figure 1: IT Data Management Maturity Scale.

The Maturity Scale

Let us have a quick look at each of the stages on the scale. (Skip over this if you are interested in the conclusions and not the details of the scale.)

  • Do nothing: I didn’t even explicitly place this stage on the scale. However, there are a great many companies out there that do exactly this. They don’t collect data at all.
  • Collecting logs: At this stage of the scale, companies are collecting some data from a few data sources for retention purposes. Sometimes compliance is the driver for this. You will mostly find things like authentication logs or maybe message logs (such as email transaction logs or proxy logs). The number of different data sources is generally very small. In addition, you mostly find log files here, and no more specific IT data, such as multi-line application logs or configurations. A new trend that we are seeing here is the emergence of the cloud. A number of companies are looking to move IT services into the cloud and have them delivered by service providers. The same is happening in log management. It doesn’t make sense for small companies to operate and maintain their own logging solutions. A cloud-based offering is perfect for those situations.
  • Forensics / Troubleshooting: While companies in the previous stage simply collect logs for retention purposes, companies in this stage actually make use of the data. In the security arena, they are conducting forensic investigations after something suspicious was noticed or a breach was reported. In IT operations, the use-case is troubleshooting. Take email logs, for example. A user wants to know why he did not receive a specific email. Was it eaten by the spam filter, or is something else wrong?
  • Save searches: I don’t have a better name for this. In the simplest case, someone saves the search expression used with a grep command. In other cases, where a log management solution is used, users save their searches. At this stage, analysts can reuse their searches at a later point in time to find the same type of problem again, without having to reconstruct the search every single time.
  • Share searches: If a search is good for one analyst, it might be good for another one as well. At some point, analysts start sharing their ways of identifying a certain threat or analyzing a specific IT problem. This greatly improves productivity.
  • Reporting: Analysts need reports. They need reports to communicate findings to management. Sometimes they need reports to communicate among each other or to communicate with other teams. Generally, the reporting capabilities of log management solutions are fairly limited. They are extended in the SEM products.
  • Alerting: This capability lives in somewhat of a gray zone. Some log management solutions provide basic alerting, but generally you will find this capability in a SEM. Alerting is used to automate some of the manual troubleshooting that is done by companies on the left side of the scale. Instead of waiting for a user to complain that there is something wrong with his machine and then looking through the log files, analysts set up alerts that notify them as soon as known signs of failure show up. Things like monitoring free disk space are use-cases that are automated at this point (see the sketch after this list). This can save a lot of manual labor and help drive IT towards a more automated and proactive discipline.
  • Collecting more logs and IT data: More data means more insight, more visibility, broader coverage, and more uses. For some use-cases we now need new data sources. In some cases it’s the more exotic logs, such as multi-line application logs, instant messenger logs, or physical access logs. In addition, more IT data is needed: configuration files, host status information (such as open ports or running processes), ticketing information, etc. These new data sources enable a new and broader set of use-cases, such as change validation.
  • Correlation: The manual analysis of all of these new data sources can get very expensive and too resource-intensive. This is where SEM solutions can help automate a lot of the analysis. Use-cases include correlating trouble tickets with file changes, or correlating IDS data with operating system logs (note that I didn’t say IDS and firewall logs!). There is much, much more to correlation, but that’s for another blog post.
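To make the alerting stage a little more concrete, here is a minimal sketch of the free-disk-space use-case mentioned above. The 10% threshold and the print-based notification are placeholders for whatever paging or ticketing mechanism an environment actually uses:

    import shutil

    def check_disk(path="/", min_free_ratio=0.10):
        # Sketch only: the threshold is arbitrary, and a real deployment would
        # page someone or open a ticket instead of printing.
        usage = shutil.disk_usage(path)
        free_ratio = usage.free / usage.total
        if free_ratio < min_free_ratio:
            print(f"ALERT: only {free_ratio:.1%} free on {path}")

    check_disk()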

Note the big gap between the last step and the next one. It takes a lot for an organization to cross this chasm. Also note that the individual milestones on the right side are drawn fairly close to each other. In reality, think of this as a log scale: these milestones can be very, very far apart. The distance here is not meaningful anymore.

  • Visual analysis: It is not very efficient to read through thousands of log messages to figure out trends or patterns, or even to understand what the log entries are communicating. Visual analysis takes the textual information and packages it into an image that conveys the contents of the logs. For more information on the topic of security visualization see Applied Security Visualization.
  • Pattern detection: One could view this as advanced correlation. One wants to know about patterns. Is it normal that when the DNS server does a zone transfer, you will also find a number of IDS alerts along with some firewall log entries? If a user browses the Web, what is the pattern of log entries that is normally seen? Pattern detection is the first step towards understanding an IT environment. The next step is to figure out when something is an outlier and not part of a normal pattern. Note that this is not as simple as it sounds. There are various levels of maturity needed before this can happen. Just because something is different does not mean that it’s a “bad” anomaly or an outlier. Pattern detection engines need a lot of care and training.
  • Interactive visualization: Earlier we talked about simple, static visualization to better understand our IT data. The next step in the application of visualization is interactive visualization. This type of visualization follows the principle of “overview first, zoom and filter, then details on demand.” This type of visualization, along with dynamic queries (the next step), is incredibly important for advanced analysis of IT data.
  • Dynamic queries: The next step beyond interactive, single-view visualizations is multiple views of the same data. All of the views are linked together: if you select a property in one graph, the selection propagates to the others. This is also called dynamic queries. This is the gist of fast and efficient analysis of your IT data.
  • Anomaly detection: Various products are trying to implement anomaly detection algorithms in order to find outliers, or anomalous behavior, in the IT environment. There are many approaches that people are trying to apply. So far, however, none of them has had broad success. Anomaly detection as it is known today is best understood for closed use-cases. For example, NBADs are using anomaly detection algorithms to flag interesting findings in network flows. As of today, nobody has successfully applied anomaly detection across heterogeneous data sources (a toy sketch of the basic idea follows this list).
  • Sharing views, patterns, and outliers: The last step on my maturity scale is the sharing of advanced analytic findings. If I know that certain versions of the Bind DNS server tend to trigger a specific set of Snort IDS alerts, it is something that others should know as well. Why not share it? Unfortunately, there are no products that allow us to share this knowledge.
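As a toy illustration of the kind of outlier check these last stages hint at, here is a naive z-score over per-host event counts. This is nowhere near a real anomaly detection engine (no baselining, no seasonality, no context), and the host names and counts are made up:

    from statistics import mean, stdev

    # Made-up per-host event counts; mail01 is the injected outlier.
    counts = {"web01": 10, "web02": 12, "web03": 11, "db01": 9, "db02": 10, "mail01": 90}

    mu, sigma = mean(counts.values()), stdev(counts.values())

    for host, n in counts.items():
        # Flag hosts that deviate strongly from the mean. A real engine needs
        # baselining and training, as mentioned above.
        if sigma and abs(n - mu) / sigma > 1.5:
            print(f"outlier candidate: {host} ({n} events, mean {mu:.1f})")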

While reading the maturity scale, note the gaps between the different stages. They signify how quickly after the previous step a new step sets in. If you were to look at the scale from a time-perspective, you would start an IT data management project on the left side and slowly move towards the right. Again, the gaps are fairly indicative of the relative time such a project would consume.

Related Quantities

The scale could be overlaid with lines showing some interesting, related properties. I decided not to do so in favor of legibility. Instead, have a look at Figure 2. It encodes a few properties: the number of products on the market, the number of customers / users, and the number of data sources needed at that stage of maturity.


Figure 2: The number of products, companies, and data sources that are used / available along the maturity scale.

Why are there so few products on the right side of the scale? The most obvious reason is one of market size. There are not many companies on the right side; hence there are not many products. It is sort of a chicken-and-egg problem: if there were more products, there might be more companies using them – maybe. However, there are more reasons. One of them is that in order to get to the right side, a company has to traverse the entire scale on the left. This means that the potential market for advanced analytics is the set of companies that linger just before the advanced analytics stage itself. That market is a very small one. The next question is why there are not more companies close to the advanced analytics stage. There are multiple reasons. Some of them are:

  • Not many environments manage to collect enough data to implement advanced analytics across heterogeneous data. Too many environments are stuck with just a few data sources. There are organizational, architectural, political, and technical reasons why this is so.
  • A lack of qualified people (engineers, architects, etc) is another reason. Not many companies have the staff that understands how to deal with all the data collected. Not many people understand how to interpret the vast amount of different data sources.

The effects of these phenomena play yet again into the availability of products for the advanced analytics side of the scale. Because there are not many environments that actually collect a diverse set of IT data, companies (or academia) cannot conduct research on the subject. And if they do, they mostly get it wrong or capture just a very narrow use-case.

What Else Does the Maturity Scale Tell Us?

Let us have a look at some of the other things that we can learn from/should know about the maturity scale:

  • What does it mean for a company to be on the far right of the scale?
    • In-depth understanding of the data
    • Understanding of how to apply advanced analytics (such as visualization theory, anomaly detection, etc.)
    • Baseline of the behavior in the organization’s environment (needed for example for anomaly detection)
    • Understanding of the context of the data gathered, such as what’s the network topology, what are the properties of the assets, etc.
    • The need to employ knowledgeable people; these experts are scarce and expensive.
    • Collecting all log data, which is hard!
  • What are some other preconditions to live on the right side?
    • A mature change management process
    • Asset management
    • IT infrastructure documentation
    • Processes to deal with the findings/intelligence from advanced analytics
    • A security policy that defines what is allowed and intended and what is not. (Have you ever put a sniffer on the network to see what traffic there is? Did you understand all of it? This is pretty much the same thing: you put a huge sniffer on your IT environment and try to explain everything. Wow!)
    • Understand the environment to the point where questions like: “What’s really normal?” are answered quickly. Don’t be fooled. This is nearly impossible. There are so many questions that need to be answered, such as: “Is a DNS server that generates ICMP messages every now and then an anomaly? Is it a security problem? What is the payload of the ICMP message? Maybe an information leak?”
  • What’s the return on investment (ROI) for living on the right-side of the scale?
    • It’s just not clear!
    • Isn’t it cheaper to ignore than to discover?
    • What do you intend to find and what will you find?
  • So, what’s the ROI? It’s hard to measure, but you will be able to:
    • Detect problems earlier
    • Uncover attacks and policy violations quicker
    • Prevent information leaks
    • Reduce down-time of infrastructure and applications
    • Reduce labor of service desk and system administration
    • Run more stable applications
    • etc. etc.
    • What else?

May 25, 2010

Recent Blog Posts on Django, Security, Cloud, and Visualization

Category: Links,Log Analysis,Programming,Visualization — Raffael Marty @ 5:17 pm

I thought you might be interested in some blog posts that I have written lately. I have been doing quite a bit of work on Django and Web applications. That might explain the topics of my recent blog posts. Check them out.

Would love to hear from you if you have any comments. Either leave a comment on the blogs, or contact me via Twitter at @zrlram.

March 1, 2010

RSA Security Conference – Cloud the Logging Killer App?

Category: Log Analysis — Raffael Marty @ 2:40 pm

Logging - Cloud Killer App

I am attending the RSA conference this week. The first session I attended was the Cloud Security Alliance (CSA) meeting. Reading some of the accompanying material and listening to some of the presentations and panels, I couldn’t help but notice that the terms auditing and logging were everywhere.

Here is my attempt at an explanation. It seems that one of the reasons is the nature of the cloud. Think about it: you are in an environment where you don’t control much, and where you cannot trust most of the infrastructure pieces. For example, if you are using AWS like we are doing at Loggly, you should generally not trust your AMIs (the OS images). Now, what do you do if you don’t trust someone? You observe them, you monitor them. That’s exactly what is happening, and needs to happen, in the cloud: you don’t trust the service, so you monitor it.

And to make this not just my explanation, here is what some panelists during the CSA meeting said:

“Loss of visibility in the cloud” – Scott Chasin, CTO McAfee SaaS Unit
“Lose control and still maintain accountability” – Ken Biery, Verizon Business.

Is the cloud the killer app for logging? And if that’s the case, how do you manage your logs in the cloud? There are hardly any cloud logging solutions out there. I think you see where I am going with this.

December 18, 2008

Comments on the Syslog Protocol Internet Draft

Category: Log Analysis — Raffael Marty @ 6:08 pm

I am really late to the game, but I finally read draft-ietf-syslog-protocol-23. This is the new draft for revising the syslog protocol.

Here are some of my comments that I also submitted officially:

  • Let me say this first: I really like some of the changes that have been incorporated.
  • Syslog message facility: Why keep this at all? The only reason I see people using the facility is to filter messages, and there are better ways to do that. Some of the pre-assigned groups are fairly arbitrary and not even really implemented in most OSs. UUCP subsystem? Who is still using that? I guess the reason for keeping it is backwards compatibility? If possible, I would really like this to be gone.
  • Priority calculation: The whole priority field is funky. The priority does not really have any meaning, and the order does not imply importance. Why have this at all?
  • Timestamps: What’s the reason for having the “T” in the timestamp? Having looked at hundreds of different log formats, I have never seen anything like that. Why do this?
  • Hostname: I am not comfortable with the whole hostname spec. I like that there is an ordering and people are supposed to use FQDNs, but there are many questions about this. To start with, in a lot of UNIX configurations, /etc/hosts contains an entry like
    127.0.0.1   localhost.localdomain  localhost
    The second column is technically the FQDN. Is that a value that can be used? Can you make it clear that this is not what should be used? The same goes for 127.0.0.1, or the loopback address in general. How does a machine know whether an IP address is static or dynamic? How does a logging application know? I don’t think you will ever know that. Did you mean a private versus a public address? That might be interesting. Furthermore, the draft should specify which interface’s IP address to use. The interface that the message is sent out on? (A small sketch after this list shows how little a logging application can actually determine here.)
  • Under the section on PROCID: The text is imprecise. This number is not the process ID of the syslog process; it’s the ID of the writing process. The third paragraph talks about detecting restarted applications and somehow mixes in the syslog process (“might be assigned the same process ID as the previous syslog process“). This is not clear at all and very, very confusing.
  • MSGID: Make clear that this ID is local to the application. It’s not a global ID at all.
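To illustrate why the hostname question above is so murky in practice, here is a small sketch of what a logging application can actually find out about its own name and address. The results depend entirely on /etc/hosts and the resolver configuration; on a box with the entry quoted above, getfqdn() may well return localhost.localdomain, which is presumably exactly what the draft does not want:

    import socket

    # What a logging application can determine about "its" identity:
    print(socket.gethostname())   # configured host name, not necessarily an FQDN
    print(socket.getfqdn())       # best-effort FQDN via the resolver / /etc/hosts

    # Picking "the" IP address is just as ambiguous on a multi-homed host;
    # this resolves the host name rather than naming a specific interface.
    print(socket.gethostbyname(socket.gethostname()))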

The biggest issues I have are around the SD-ID field:

  • I like that the user can extend the set of registered IDs.
  • Why is this structure so complicated? Why not go with a simple set of key-value pairs? This whole structure thing is so complicated: parsing it, you need to keep state! You need to remember the SD-ID for each SD-PARAM. Why introduce this? Just stick with simple key-value pairs. That makes parsing easier. Much easier. And it makes the events easier to produce as well. (A quick parsing sketch follows this list.)
  • By keeping an explicit message field (the unstructured part), you encourage people to still log in that way. I recommend using an explicit field (or parameter) that can be used to include human-readable text. Instead of this:
    <165>1 2003-08-24T05:14:15.000003-07:00 192.0.2.1 myproc 8710 - - %% It's time to make the do-nuts.
    use:
    <165>1 2003-08-24T05:14:15.000003-07:00 192.0.2.1 myproc 8710 - message="%% It's time to make the do-nuts."
    or really:
    1 2003-08-24 05:14:15.000003-07:00 host=192.0.2.1 process=myproc procid=8710 message="%% It's time to make the do-nuts."
  • I definitely like the consideration of some of the special fields (structured data IDs). However, they should be used as simple keys (or parameters) that have special meaning.
  • Parameter – origin: What does it mean to have multiple origin IPs? Is that a syslog forwarding chain? The document does not say anything about that. Also, we already have the host field at the beginning of the syslog message. What’s the relationship to that? Or is origin something completely different?
  • Parameters – I would really like to see some use-cases for all of the IDs, especially the sequenceId. I am assuming this is something that the syslog daemon assigns, not the logging application. Right? I think that needs to be clearer. For the sequenceId, what happens with forwarded messages? Are these IDs local? Are they forwarded along with a message? Also, how does the logging application know about the timeQuality? Or if that is something that the syslog daemon assigns, how does it know?
  • I would really like to see the parameters go away in favor of a generic key-value extension. In addition, IANA should have a set of allowed/defined keys, and the parameters should be part of those. Each key has a special meaning (semantics). There should be a whole lot of them: src_ip, user_name, etc. Each producer should be free to add additional keys, realizing that not all consumers would understand their semantics. However, the consumers could still read them.
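To make the parsing argument concrete, here is a minimal sketch of how the flat key-value variant (the last example above) can be parsed without keeping any state. The regular expression only covers the simple key=value and key="value" forms used in the example; it is not a parser for the draft’s structured-data syntax:

    import re

    # The plain key-value variant from the example above.
    msg = ('1 2003-08-24 05:14:15.000003-07:00 host=192.0.2.1 process=myproc '
           'procid=8710 message="%% It\'s time to make the do-nuts."')

    # Stateless extraction: no SD-ID bookkeeping, just key=value or key="value".
    fields = {}
    for m in re.finditer(r'(\w+)=(?:"([^"]*)"|(\S+))', msg):
        fields[m.group(1)] = m.group(2) if m.group(2) is not None else m.group(3)

    print(fields)
    # {'host': '192.0.2.1', 'process': 'myproc', 'procid': '8710',
    #  'message': "%% It's time to make the do-nuts."}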

That’s it for now… Let’s see what some of the reactions are going to be.


October 29, 2008

VizSec 2008 and Ben Shneiderman’s Keynote

Category: Log Analysis,Visualization — Raffael Marty @ 3:06 pm

VizSec is a fairly academic conference that brings together the fields of security and visualization. The conference had an interesting mix of attendees: 50% came from industry, 30% from academia, and 20% from government. I had the pleasure of being invited to give a talk about DAVIX and also to participate in a panel about the state of security visualization in the marketplace.
The highlight of the conference was definitely Ben Shneiderman’s keynote. I was very pleased with some of the comments that Ben made about the visualization community. First, he criticized the same thing that I call the “industry – academia dichotomy”. In his words:

“[There is a] lack of applicability of research.”

I completely agree, and if you have seen me talk about the dichotomy, you know that I outline a number of examples where this becomes very obvious.
The second quote from Ben that I would like to capture is the following:

“The purpose of viz is insight, not pictures”

Visualization is about how to present data. I am not always sure that people understand that.
Unfortunately, I wasn’t prepared to capture what Ben said about my book (Applied Security Visualization). He brought the copy that I had sent him. He talked about the book for quite a while and specifically mentioned all the treemaps that I used to visualize a number of use cases. I felt very honored that Ben actually looked at the book and had such great things to say about it. The lunch with Ben that followed was a great pleasure as well, filled with some really interesting visualization discussions.

August 20, 2008

Applied Security Visualization Press

Category: Log Analysis,Security Article Reviews,Visualization — Raffael Marty @ 12:20 pm

I recorded a couple of podcasts and did some interviews lately about the book. If you are interested in listening in on some of the press coverage:

More information about the Applied Security Visualization book is on the official book page. I am working on figuring out where to put an errata page; people have reported some minor issues and typos. If you find anything wrong or have any general comments, please let me know!