August 2, 2019

The Need For Domain Experts and Non Trivial Conclusions

In my last blog post I highlighted some challenges with the research approach of a paper that was published at the IEEE S&P co-located “Deep Learning and Security Workshop (DLS 2019)“. The same workshop featured another paper that piqued my interest: Exploring Adversarial Examples in Malware Detection.

This paper highlights the problem of needing domain experts to build machine learning approaches for security. You cannot rely on pure data scientists, without a solid security background or at least a very solid understanding of the domain, to build solutions. What a breath of fresh air. I wholeheartedly agree with this. But let’s look at how the authors went about their work.

The example used in the paper is in the area of malware detection; a problem that is a couple of decades old. The authors looked at binaries as byte streams and initially argued that we might be able to get away without feature engineering by just feeding the byte sequences into a deep learning classifier – which is one of the premises of deep learning: not having to define features for it to operate on. The authors then looked at some adversarial scenarios that would circumvent their approach. (Side bar: I wish Cylance had read this paper a couple of years ago.) The paper goes through some ROC curves and arguments to end up with some lessons learned:

  • Training sets matter when testing robustness against adversarial examples
  • Architectural decisions should consider effects of adversarial examples
  • Semantics is important for improving effectiveness [meaning that instead of just pushing a binary stream into the deep learner, carefully crafting features is going to increase the efficacy of the algorithm]

Please tell me: which of these three is non-obvious? I don’t know that we can set the bar any lower for security data science.

I want to specifically highlight the last point. You might argue that’s the one statement that’s not obvious. The authors basically found that, instead of feeding simple byte sequences into a classifier, there is a lift in precision if you feed additional, higher-level features. Anyone who has looked at byte code before or knows a little about assembly should know that you can achieve the same program flow in many ways. We must stop comparing security problems to image or speech recognition. Binary files, executables, are not independent sequences of bytes. There is program flow, different ‘segments’, dynamic changes, etc.
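To make the contrast concrete, here is a minimal sketch (in PyTorch, purely for illustration – this is not the paper’s actual architecture, and the class and function names are mine) of the two input representations: a MalConv-style model that consumes raw byte sequences, next to a small model over hand-crafted features. The byte histogram and entropy used here are just placeholders; real ‘semantic’ features would capture things like imports, section structure, or program flow.

```python
# Illustrative sketch only: raw-byte model vs. feature-based model.
# Assumes PyTorch; the feature set (byte histogram + entropy) is a stand-in
# for richer, semantic features (imports, sections, control flow, ...).
import math
import torch
import torch.nn as nn

MAX_LEN = 2 ** 20  # truncate/pad binaries to 1 MiB of bytes


class RawByteClassifier(nn.Module):
    """Embed each byte and run a 1-D convolution over the sequence (MalConv-like)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(257, 8, padding_idx=256)  # 256 byte values + padding token
        self.conv = nn.Conv1d(8, 128, kernel_size=512, stride=512)
        self.fc = nn.Linear(128, 1)

    def forward(self, x):                      # x: (batch, MAX_LEN), dtype int64
        e = self.embed(x).transpose(1, 2)      # (batch, 8, MAX_LEN)
        h = torch.relu(self.conv(e))           # (batch, 128, MAX_LEN / 512)
        h = torch.max(h, dim=2).values         # global max pooling over the sequence
        return self.fc(h)                      # malware logit


class FeatureClassifier(nn.Module):
    """Small MLP over engineered features instead of raw bytes."""
    def __init__(self, n_features=257):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)


def simple_features(data: bytes) -> torch.Tensor:
    """Byte histogram + entropy; placeholders for real semantic features."""
    counts = torch.zeros(256)
    for b in data:
        counts[b] += 1
    probs = counts / max(len(data), 1)
    entropy = -sum(float(p) * math.log2(float(p)) for p in probs if p > 0)
    return torch.cat([probs, torch.tensor([entropy])])
```

The point is simply that the second model only ever sees what the feature engineering exposes – which is exactly where domain expertise enters the picture.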

We should look to other disciplines, like image recognition, for inspiration, but we need different approaches in security; we have to understand the nuances and differences of the cyber security domain. We need to add security experts to our data science teams!

July 30, 2019

Research is “Skewing up”

Over the weekend I was catching up on some reading and came across the “Deep Learning and Security Workshop (DLS 2019)“. With great interest I browsed through the agenda and read some of the papers / talks, just to find myself quite disappointed.

It seems like not much has changed since I launched this blog. In 2005, I found myself constantly disappointed with security articles and decided to outline my frustrations on this blog. That was the very initial focus of this blog. Over time it morphed more into a platform to talk about security visualization and then artificial intelligence. Today I am coming back to some of the early work of providing, hopefully constructive, feedback to some of the work out there.

The research paper I am looking at is about building a deep learning based malware classifier. I won’t comment on the fact that every AV company has been doing this for a while (but learned from their early mistakes of not engineering ‘intelligent’ features). I also won’t discuss the machine learning architecture that is introduced. What I will take issue with is the approach that was taken and the conclusions that were drawn:

  • The paper uses a data set that has no ground truth. Which, in network security, is very normal, but it needs to be taken into account. Any conclusion that is made is only relative to the traffic that the algorithm was tested on, at the time of testing, and under the configuration used (IDS signatures). The paper doesn’t discuss adaptation or changes over time. It’s a bias that needs to be clearly taken into account.
  • The paper uses a supervised approach leveraging a deep learner. One of the consequences is that this system will have a hard time detecting zero days. It will have to be retrained regularly. Interestingly enough, we are in the same world as the anti-virus industry when it does binary classification.
  • Next issue. How do we know what the system actually captures and what it does not?
    • This is where my recent rants on ‘measuring the efficacy‘ of ML algorithms come into play. How do you measure the false negative rate of your algorithms in a real-world setting? And even worse, how do you guarantee those rates in the future? (See the sketch after this list.)
    • If we don’t know what the system can detect (true positives), how can we make any comparative statements between algorithms? We can make a statement about this very setup and this very data set that was used, but again, we’d have to quantify the biases better.
  • In contrast to the supervised approach, the domain expert approach has a non-zero chance of finding future zero days due to the characterization of bad ‘behavior’. That isn’t discussed in the paper, but is a crucial fact.
  • The paper claims a 97% detection rate with a false positive rate of less than 1% for the domain expert approach. But that’s with domain expert “Joe”. What if I wrote the domain knowledge? Wouldn’t that completely skew the system? You have to somehow characterize the domain knowledge, or quantify its accuracy. How would you do that?
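To make the measurement problem concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, using made-up labels and predictions): detection rate, false positive rate, and false negative rate can only be computed against a labeled test set, under one configuration, at one point in time – which is exactly what real-world traffic does not give you.

```python
# Hypothetical evaluation sketch: these rates are only measurable on a labeled
# test set. Without ground truth, the false negative rate in production is unknown.
import numpy as np
from sklearn.metrics import confusion_matrix


def evaluate(y_true, y_pred):
    """Return detection rate (TPR), false positive rate, and false negative rate."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "detection_rate": tp / (tp + fn),        # needs labeled malicious samples
        "false_positive_rate": fp / (fp + tn),   # needs labeled benign samples
        "false_negative_rate": fn / (fn + tp),   # invisible without ground truth
    }


# Toy example with made-up data: a classifier that is right 95% of the time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                # pretend ground truth
y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)
print(evaluate(y_true, y_pred))
```

Quoting a 97% detection rate without stating whose labels, which traffic, and which point in time produced it is exactly where the bias creeps in.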

Especially the last two points make the paper almost irrelevant. The fact that this wasn’t validated in a larger, real-world environment is another flaw I keep seeing in research papers. Who says this environment was representative of every environment? Overall, I think this research is dangerous and actually conveys wrong information. We cannot make a statement that deep learning is better than domain knowledge. The numbers for the detection rates are dangerous and biased, but the bias isn’t discussed in the paper.


January 17, 2018

Virtual Reality in Cyber Security

Category: Security Article Reviews,Visualization — Raffael Marty @ 6:17 pm

I just read an article on virtual reality (VR) in cyber security and how VR can be used in a SOC.



[Image taken from original post]

The post basically says that VR helps make the SOC less of an expensive room you have to operate, by letting a company take the SOC virtual. Okay. I am buying that argument to some degree. It’s still different to be in the same room with your team, but okay.

Secondly, the article says that VR helps tier-1 analysts look at context (I am paraphrasing). In essence, they are saying that VR expands the number of pixels available. Just give me another screen and I am fine. Just having VR doesn’t mean we have the data to drive all of this. If we had that data, it would be tremendously useful to show the contextual information in the existing interfaces; we don’t need VR for that. So overall, a non-argument.

There is an entire paragraph of nonsense in the post. VR (over traditional visualization) won’t help with monitoring more sources. It won’t help with the analysis of endpoints, etc. Oh boy, and “.. greater context and consumable intelligence for the C-suite.” For real? That’s just baloney!

Before we embark on VR, we need to get better at visualizing security data, and probably invest in some more advanced cyber security training for employees. Then, at some point, we can see if we want to map that data into three dimensions and whether that will actually help us be more efficient. VR isn’t the silver bullet, just like artificial intelligence (AI) isn’t either.

This is a gem within the article; a contradiction in itself: “More dashboards and more displays are not the answer. But a VR solution can help effectively identify potential threats and vulnerabilities as they emerge for oversight by the blue (defensive) team.” – What is VR other than visualization? If you can show it in three dimensions within some goggles, can’t you show it in two dimensions on a flat screen?

August 20, 2008

Applied Security Visualization Press

Category: Log Analysis,Security Article Reviews,Visualization — Raffael Marty @ 12:20 pm

I recorded a couple of podcasts and did some interviews lately about the book. If you are interested in listening in on some of the press coverage:

More information about the Applied Security Visualization book is on the official book page. I am working on figuring out where to put an errata page. There were some minor issues and typos that people reported. If you find anything wrong or have any general comments, please let me know!

August 14, 2008

First Amazon Review for Applied Security Visualization Book

Category: Log Analysis,Security Article Reviews,Visualization — Raffael Marty @ 11:21 am

I just saw the first Amazon review for my book. I just don’t understand why the person only gave it four stars, instead of five 😉 Just kidding. Thanks for the review! Keep them coming!

August 13, 2008

Applied Security Visualization Book is Available!

Category: Compliance,Log Analysis,Security Article Reviews,Visualization — Raffael Marty @ 12:38 pm

The Applied Security Visualization book is DONE and available in your favorite store!

Last Tuesday when I arrived at BlackHat, I walked straight up to the book store. And there it was! I held it in my hands for the first time. I have to say, it was a really emotional moment. Seeing the product of 1.5 years of work was just amazing. I am really happy with how the book turned out. The color insert in the middle is a real eye-catcher for people flipping through the book, and it greatly helps make some of the graphs easier to interpret.

I had a few copies to give away during BlackHat and DefCon. I am glad I was able to give copies to some people that have contributed by inspiring me, challenging me, or even giving me very specific use-cases that I collected in the book. Thanks everyone again! I really appreciate all your help.

People keep asking me what the next project is now that the book is out. Well, I am still busy. secviz.org is one of my projects. I am trying to get more people involved in the discussions and get more people to contribute graphs. Another project I am starting is to build out a training around the book, which I want to teach at security conferences. I have a few leads already for that. Drop me a note if you would be interested in taking such a training. Maybe I will also get some time to work on AfterGlow some more. I have a lot of ideas on that end…

During DefCon, I recorded a podcast with Martin McKeay where I talk a little bit about the book.

April 1, 2008

Applied Security Visualization – I Have a Book Cover!

Thanks to the design department at Addison Wesley, I have a proposal for a cover page of my upcoming book:

Applied Security Visualization

This is really exciting. I have been working on the book for over a year now and finally it seems that the end is in sight. I have three chapters completely done and they should appear in a rough-cuts program, as an electronic pre-version, very soon (next three weeks). Another three chapters I got back from my awesome review committee and then there are three chapters I still have to finish writing.

Applied Security Visualization should be available by Black Hat at the beginning of August. I will do anything I can to get it out by then.


March 21, 2008

Follow me on Twitter: @zrlram

Category: Security Article Reviews — Raffael Marty @ 9:02 am

I was quite surprised when I heard that Twitter had already been around for a couple of years. I jumped on the band wagon about 2 weeks ago, just before SOURCEBoston. What’s Twitter? It’s a micro-blog. It’s IM that can be read by everybody that you authorize. It’s broadcast. You subscribe to people’s feeds and they subscribe to yours. It’s fairly interesting. There is an entire following of security twits who twitter all day long about more or less interesting things.

What I find very interesting are the RSS-like twitter feeds from, for example, conferences. We had a feed for @SOURCEBoston. There is also one for the RSA Blogger Meetup. I hope to see you there!

Follow me: @zrlram

March 13, 2008

2nd Keynote at SOURCEBoston – Dan Geer

Category: Security Article Reviews — Raffael Marty @ 9:13 am

Dan Geer just gave his keynote at SOURCEBoston. Have you heard Dan Geer speak? If not, I highly encourage you to watch the video of his talk as soon as it is online. I will have to go back and listen to his talk a few more times to absorb some more of it. Dan throws out so many thoughts and concepts that it is hard to follow him without knowing some of this stuff already. I am sure those of you who have been following Dan were able to retain much more of his talk. I mostly know about Dan’s work from his postings on the security metrics list.

Risk management is a topic that Dan discusses often. “Risk management is about affecting the future, not explaining the past,” says Dan. To do effective risk management we need to measure things as best as we can. We need security metrics; we can’t make much progress in security without good metrics. We’ve exhausted what we can do with firefighting. Dan has a slide deck of over 400 slides on security metrics that is an incredibly interesting resource for reading up on metrics and risk management.

Do you need security analogies from other fields? Read the transcript of Dan’s talk as soon as it is up on the SOURCEBoston site. It’s really worth it.

March 12, 2008

AirForce Recruiting For Cyber Offense

Category: Security Article Reviews — Raffael Marty @ 10:47 am

Richard Clarke, during his keynote at SOURCEBoston, talked about the non-public 2007 Presidential cybersecurity directive. One part of the directive is rumored to talk about building an offensive cyber capability (see also Jennifer’s post). Is the fact that the Air Force has changed its recruiting commercial to include cybersecurity aspects a first sign that they are looking for talent that can execute on those objectives?
– Live from SOURCEBoston!
