March 16, 2020

Use of AI for Cyber Security in the Intelligence Community

Category: Artificial Intelligence — Raffael Marty @ 6:47 am

This post captures my talking points from the recent conference panel on the “Use of AI for Cybersecurity” at the Intelligence and National Security Alliance (INSA) conference. You can find my musings on the term AI in my previous blog post.

Building an AI Powered Intelligence Community (Click image for video)

Here is the list of topics I injected into the panel conversation:

  • Algorithms (AI) are Dangerous
  • Privacy by Design
  • Expert Knowledge over algorithms
  • The need for a Security Paradigm Shift
  • Efficacy in AI is Non-Existent
  • The Need to Learn How to Work Across Disciplines

Please note that, in the vein of the conference, I won’t define specifically what I mean by “AI”. Have a look at my older blog posts for further opinions. Following are some elaborations on the different topics:

  • Algorithms (AI) are Dangerous – We allow software engineers to use algorithms (libraries) without understanding the results they produce. There is no demand for oversight – imagine the wrong algorithms being used to control industrial control systems. Also realize that it’s not about adopting the next innovation in algorithms. When deep learning entered the arena, everyone tried to apply it to their problems. Guess what; barely any of them could be solved by it. It’s not about the next algorithm. It’s about how these algorithms are used – the process around them. Interestingly enough, one of the most pressing and oldest problems that every CISO is still wrestling with today is ‘visibility’: visibility into what devices and users are on a network. That has nothing to do with AI. It’s a simple engineering problem, and we still haven’t solved it.
  • Privacy by Design – The entire conference day didn’t spend enough time on this. In a perfect world, our personal data would never leave us. As soon as we give information away, it is exposed and can, and probably will, be abused. How do we build such systems?
  • Expert Knowledge – is still more important than algorithms. We have this illusion that AI (whatever that is) will solve our problems by analyzing data with cloud-based software systems, instead of using “AI” to augment human capabilities. In addition, we need experts who really understand the problems: domain experts, security experts, people with the experience to help us build better systems.
  • Security Paradigm Shift – We have been doing security the wrong way. For two decades we have engaged in the security cat-and-mouse game. We need to break out of that. Only an approach based on understanding behaviors can get us there.
  • Efficacy – There are no established approaches for describing how well an AI system works. Is my system better than someone else’s? How do we measure these things?
  • Interdisciplinary Collaboration – As highlighted in the ‘expert knowledge’ point above, we need to focus on people, especially domain experts. We need multi-disciplinary teams – psychologists, counterintelligence people, security analysts, systems engineers, etc. – collaborating to come up with solutions to combat security issues. There are dozens of challenges with such teams, starting with something as simple as terminology or a common understanding of the goals pursued. And this is not security specific; every field has this problem.
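The efficacy point above can at least be made concrete for detection systems: given labeled ground truth, precision, recall, and F1 give a common yardstick for comparing two systems. This is a minimal sketch; the data and system names are entirely hypothetical, and real-world evaluation is much harder because labeled ground truth is rarely available at scale.

```python
# Hypothetical illustration: comparing two detection systems on labeled data.

def precision_recall_f1(predictions, labels):
    """Compute precision, recall, and F1 for binary alert decisions."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Ground truth: 1 = actual attack, 0 = benign (made-up data)
labels   = [1, 0, 1, 1, 0, 0, 1, 0]
system_a = [1, 0, 1, 0, 0, 1, 1, 0]  # alerts raised by system A
system_b = [1, 1, 1, 1, 1, 1, 1, 1]  # system B alerts on everything

print(precision_recall_f1(system_a, labels))  # (0.75, 0.75, 0.75)
print(precision_recall_f1(system_b, labels))  # (0.5, 1.0, 0.666...)
```

Note how system B achieves perfect recall simply by alerting on everything, which is exactly why a single metric is never enough; the analyst workload behind every false positive has to be part of the comparison.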

The following was a fairly interesting point made during one of the other conference panels. This is a non-verbatim quote:

AI is one of the poster children of bipartisanship. Ever want to drive bipartisanship? Engage on an initiative with a common economic enemy called China.

Oh, and just so I have written proof when the time comes: China will win the race on AI! Contrary to what some of the other panels claimed. Why? Let me list just four thoughts:

  1. No privacy laws or ethical barriers holding back any technology development
  2. Availability of lots of cheap and, in many cases, very sophisticated resources
  3. The already existing vast and incredibly rich amount of data and experiences collected; from facial recognition to human interactions with social currencies
  4. A government that controls industry

I am not saying any of the above are good or bad. I am just listing arguments.


March 9, 2020

No Really – What’s AI?

Category: Artificial Intelligence — Raffael Marty @ 7:30 pm

Last week I was speaking on a panel about the “Use of AI for Cybersecurity” at the Intelligence and National Security Alliance (INSA) conference on “Building an AI Powered Intelligence Community”. It was fascinating to listen to some of the panels with people from the Hill talking about AI. I was specifically impressed with the really educated views on issues with AI, like data bias, ethical and privacy issues, and bringing Silicon Valley software development processes to the DoD. I feel like at least the panelists had a pretty good handle on some of the issues with AI.

The one point that I am still confused about is what all these people actually meant when they said “AI”; or how the “Government” defines AI.

I have been reading through a number of documents and reports from the US government, but almost all of them fail to define what AI actually is. For example, the American AI Initiative One Year Annual Report to the President doesn’t bother defining AI.

The Summary of the 2018 Department of Defense Artificial Intelligence Strategy – “Harnessing AI to Advance Our Security and Prosperity” – defines AI as follows:

Artificial intelligence (AI) is one such technological advance. AI refers to the ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems.

Seems to me that this definition could use some help. NIST’s AI page doesn’t have a definition front and center, and the documents I browsed through didn’t have one either.

The Executive Order on Maintaining American Leadership in Artificial Intelligence defines AI as:

Sec. 9. Definitions. As used in this order: (a) the term “artificial intelligence” means the full extent of Federal investments in AI, to include: R&D of core AI techniques and technologies; AI prototype systems; application and adaptation of AI techniques; architectural and systems support for AI; and cyberinfrastructure, data sets, and standards for AI;

I would call this a circular definition. Or what do you call it? A non-definition? Maybe I have focused on the wrong documents. What about the definition of AI by the Joint Artificial Intelligence Center (JAIC), a group within the DoD? The JAIC Web site does not seem to have a definition, at least not one I could find.

One document that seems to get it is the Artificial Intelligence and National Security report, which has an entire section discussing the different aspects of AI and what they mean by the acronym.

In closing: if we are going to have policy, legislative, or regulatory conversations, we must define what AI is. Otherwise we have conversations that go in entirely the wrong direction. Does 5G fall under AI? How about NLP, or automating the transcription of a conference presentation? If we don’t get clear on this, we will write legislation and put out bills that do not cover the technologies and approaches we actually want to govern, but will instead put roadblocks in the path of innovation and the so fiercely sought-after dominance in AI.