Protecting the Hidden Layer in Neural Networks
Chris 'Tito' Sestito and John Kindervag on Securing Machine-Learning Assets
An ever-increasing number of production systems include machine learning and artificial intelligence - a development adversaries aren't letting go unnoticed. "I've never seen a technology get this far in terms of adoption without considering security," says Chris "Tito" Sestito, the co-founder and CEO of HiddenLayer, a company that seeks to provide that security.
The company's name comes from the place in an artificial neural network learning algorithm where inputs to the network undergo nonlinear transformations. "Adversarial machine learning as a service is certainly available on the dark web," as well as via red-teaming tools and on GitHub, Sestito says, and his company seeks to protect machine-learning models "just like any other cybersecurity product."
HiddenLayer's service fits into the zero trust framework, according to John Kindervag, creator of zero trust. "People want to damage, destroy, steal your machine-learning models," he says, and by focusing specifically on securing the algorithm and the database it runs on, HiddenLayer "expands our understanding of the things that we need to protect."
In this episode of "Cybersecurity Unplugged," Sestito and Kindervag also discuss:
- How HiddenLayer conducts machine learning detection and response;
- The large number of AI incidents not disclosed and how regulations are playing catch-up;
- Use cases driven by AI and ML, and the role MLOps - a set of practices for deploying and maintaining ML models in production - can play in making models even better.
Sestito is the co-founder and CEO of cybersecurity startup HiddenLayer. He has over a decade of experience leading global threat research, intelligence, engineering and data science teams and has focused on security products at companies such as Cylance, Qualys and Agari. Sestito has also delivered cybersecurity and data science training for Fortune 500 companies and government agencies.
Kindervag is considered one of the world's foremost cybersecurity experts. He is best known for creating the revolutionary zero trust model of cybersecurity. He currently advises both public and private sector organizations on the design and building of zero trust networks and other cybersecurity topics. He has a practitioner background, having served as a security consultant, penetration tester and security architect.
Steve King: [00:13] Good day, everyone. This is Steve King. I'm the managing director of CyberTheory. Today's podcast is going to feature Tito Sestito, the co-founder and CEO of HiddenLayer, a cybersecurity startup that is in the business of preventing adversarial machine learning attacks. And he is joined by John Kindervag, who you all know is the father of zero trust and a friend to the firm that Tito has started here called HiddenLayer. We welcome John, his background in zero trust, as part of this discussion today, as you'll see in a minute or two. Tito has over a decade of experience leading global threat research, intelligence, engineering and data science teams. His focus has been on security products at companies like Cylance and Qualys and Agari and he's delivered cybersecurity and data science training for Fortune 500 companies and government agencies. So with that, I'd like to welcome you to the show, and John as well. Thanks for taking the time.
Tito Sestito: [01:28] Thank you. I'm looking forward to the conversation.
King: [01:30] So let's jump right in. And if I'm not mistaken, HiddenLayer refers to neural networks and those layers that are located between the input and the output sides of the algorithms. And my impression is that when we talk about AI, we're talking about artificial neural networks here. Are we?
Sestito: [01:54] Yeah, that's right. And that was the inspiration for our name. And I think that's exactly what we do. We want to protect those neural networks, we want to protect artificial intelligence and machine learning of all kinds. When we look at the artificial intelligence and machine learning that have been deployed in production systems today, they come in many different flavors, all of which are vulnerable and certainly a focus for us. But yeah, that's exactly how we named ourselves. It's based on the hidden layer there. And it's a little bit of a play on words, because we want to be a hidden layer in your defensive stack as well, where we can defend those assets for you without necessarily impacting anything in terms of your production line.
King: [02:35] Yeah, sure. Why is this stuff suddenly becoming important? It feels to me like we've been talking about this for a long time, and yet nobody has come forth. Is that because the bad guys are using more of it? Or because it leads to automation of manual labor tasks? Or because it doesn't require a lot of predictive analytics? Why now for the market?
Sestito: [02:38] It's an important question. And it's one that I can answer in a couple of different ways. We started developing this technology back in 2019. And we knew it was a bit early for the market at the time, at least in terms of the willingness to embrace the need for a solution like this. To answer your question about the bad guys, they're absolutely taking advantage of this. I think the most important way I can answer "why now" is that if you think back to parallels in traditional cybersecurity, we started seeing attacks take off when automated attack tools like Metasploit became available. And we are at that point now in adversarial machine learning. There are over 26 attack tools available on GitHub today. So what that creates is a scenario where you don't need to be a data scientist, you don't need to be an exploit developer, you just need motivation to conduct these types of attacks against machine learning. And you can go download tools to do that for you. More generally, I would describe it this way: I've never seen a technology get this far in terms of adoption without considering security and the different defenses that are required, when you think about all the ways artificial intelligence is being deployed at the edge - in web applications, mobile applications, hardware, software, products, open-source solutions. Deployment is everywhere, the adoption is everywhere, and no specific security measures are being taken. And that's the void that we want to fill. And the time is right - even earlier would have been okay - but this is a real risk for organizations today.

King: So I can go out and buy an exploit kit for 100 bucks or something on the dark web, and I don't have to know anything about this stuff, and I can run it and attack General Motors?

Sestito: That's exactly right.
When you think about all of the different machine learning models that are deployed at the edge - and what I mean by "at the edge" is that the power unlocked by these models comes from letting the public interact with them, letting your customers interact with them - what that does is expose that same path to malicious actors. So, adversarial machine learning as a service is certainly available on the dark web. And you don't even have to go hunting for it and pay for it if you don't want to; you can use all the different academically driven attack and red-teaming tools that are available on GitHub today. Some of them were developed by major players: there's Counterfit, developed by Microsoft, and there's the Adversarial Robustness Toolbox, developed by IBM, all of which can be used for legitimate red-teaming exercises or to conduct attacks. So those are freely available to the public as open-source tools, and the bar of entry for conducting these types of attacks is low. That's the same catalyst we saw in the early 2000s for an enormous step up in the frequency of these types of attacks in traditional cybersecurity, and we're seeing that parallel in adversarial machine learning.
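To make the low bar of entry concrete, here is a minimal sketch - with an entirely invented detector, feature names and weights, not code from Counterfit, ART or HiddenLayer - of the kind of black-box evasion these tools automate. The attacker sees only the model's verdict for each query, yet can nudge a sample until detection fails:

```python
# Illustrative only: a toy detector and a query-based evasion loop.
# The classifier, features and weights below are invented for this sketch.

def toy_malware_model(features):
    """Stand-in detector: flags a sample whose weighted score is too high."""
    weights = {"entropy": 0.6, "imports": 0.3, "packed": 0.8}
    score = sum(weights[k] * v for k, v in features.items())
    return "malicious" if score > 1.0 else "benign"

def greedy_evasion(model, sample, step=0.1, max_queries=200):
    """Nudge features downward one at a time until the model's label flips.

    The attacker never sees weights or training data - only the verdict
    returned per query, which is the interface any exposed model offers.
    """
    sample = dict(sample)
    for _ in range(max_queries):
        if model(sample) == "benign":
            return sample  # evasion succeeded
        for key in sample:
            trial = dict(sample)
            trial[key] = max(0.0, trial[key] - step)
            if trial[key] != sample[key]:
                sample = trial  # keep the perturbation and re-query
                break
    return sample

original = {"entropy": 0.9, "imports": 0.8, "packed": 0.9}
evaded = greedy_evasion(toy_malware_model, original)
print(toy_malware_model(original), "->", toy_malware_model(evaded))
```

Real attack toolkits are far more sophisticated (gradient estimation, transfer attacks and so on), but the economics are the same: each probe is just one more API call against a publicly reachable model.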
King: [06:01] Nobody that I'm aware of has ever characterized this space as the joke fest that you've just described. But when you think about it, if I can go to a grocery store and I can grab some stuff from produce that happened to have been created by the NSA, or Microsoft, or whomsoever, and then I can use that as an exploit kit to go after General Motors. And I don't have to know anything about anything in order to do that. That, to me, sounds like a comic book circus. How can that even be real? Who created this? And how did we get here?
Sestito: [06:42] It's crazy to think about it that way. But it's an established pattern. It's the idea behind all of the technology sharing that we do in the red team-blue team relationship, using these well-known attack tools. And you think about legitimate organizations that build things like Cobalt Strike - those get used by the bad guys. It's a pattern that we've seen, and one we need to embrace when it comes to protecting new technologies.
King: [07:12] It's crazy. We could talk for hours about that, too. But who is your ideal customer profile? What does that company look like? And what's your market differentiator? Is there a bunch of people in this space?
Sestito: [07:26] It's a great question, and I can answer both of those. There are some other organizations in this space; it's certainly a recognized problem. I would say before us, there were more academic-style responses to this problem, which involve things like helping you build your machine learning models to be more robust or complex, which essentially creates some sort of hardening for them. We didn't take that approach, for a few reasons. We protect models just like any other cybersecurity product would. We've evolved things like endpoint detection and response, managed detection and response, or XDR into what we call machine learning detection and response. And that allows us to protect these machine learning models without having to make them bigger, without having to make them more expensive, without having to get inside them and gain access to them. So it's a much less invasive approach. We don't need access to any raw data, or even to the algorithms themselves, to be able to protect them - all lessons that we've learned in the EDR space. So there are some organizations in that space, a little bit more services-driven, a little bit more academic. But to answer your original question about who the ideal organization is: I think mature data science teams are well aware of these vulnerabilities. Data scientists have been publishing white papers on adversarial machine learning since 2013. So when I approach an organization with a mature data science team and talk to them about the real threat that exists for their organization, they're not surprised. Now, the security side of the house has not had that level of exposure. So, ideal customers for us, at the moment, are those with more mature data science teams that have an understanding, at least in one area of their organization, of what's going on here and what can happen. But to be honest, it's a universal problem.
It's a problem for every industry, it's a problem for every organization of every size. I think we see that 93% of United States businesses either use homegrown machine learning internally or use it through a third-party vendor. So it's every segment: the enterprise, small and medium businesses, even mom-and-pop shops are using machine learning. So we need to solve this problem more globally. To be even more specific about the perfect scenario for early-stage users of this product: we see a lot of adversarial activity in things like fraud. If you look at actors trying to avoid fraud detection and evade the classification models that are looking for that malicious behavior, we see quite a bit of that today. But every company is a big data company, and that provides a lot of opportunity for adversarial threats in the machine learning space. So there are not that many organizations out there that don't need a solution like this.
John Kindervag: [10:02] Yeah, sure. Pharmacies too.
King: [10:04] I just want to clarify what your go-to-market is around company size. I know you're not saying this, but if a company has a large volume of data, is that a way to parse through and find the right prospects, like how big is your data lake? Or do you care?
Sestito: [10:28] I think, if we're speaking more tactically about organizations we need to work with, it's organizations that are applying machine learning to that data. So the size of the data isn't quite as important as how they're engaged in applying machine learning to their problem space, especially those who are deploying it in their products and in their data pipelines - those are the ones who are most exposed. There are many examples of that: there's fraud, as I mentioned, there's algorithmic trading - there's a lot of that in financial services - and there's a good amount happening in healthcare and insurance. So I would say those with mature data science teams who are putting machine learning models into production systems: that's our market.
King: [11:07] Yeah, got it. As we've said, John Kindervag is here; he goes back with you guys 20 years or so. You work with him because, I'm assuming, your product fits somewhere within the zero trust framework? If that's true, can you elaborate a little on what that looks like? Maybe, John, you can do that?
Kindervag: [11:28] Yeah, sure. I'll do it, because some of the people there I've known for a long time - they're ex-Cylance folks. And when we started talking, the question to me was, do I think this fits into zero trust? And it was clear to me that it did, because if we look at the five-step model that you and I have talked about a lot, Steve, the first step is to define your protect surface. What do I need to protect? Well, I need to protect my machine learning or artificial intelligence algorithm, I need to protect the database that it runs on, I need to protect a whole lot of things that are happening. So that fits very well in there. And then the second step is to understand the transaction flows: how do the pieces work together as a system? I need to know that if I'm going to protect the machine learning system. How would I protect it if I don't know how it works? That's always baffled me throughout my whole career. We started out as young pups in cyber together, going through lots of training courses and stuff - we both live in Dallas - and at that time, we weren't looking at things holistically. We were very tactical: I'd see you in a training course for some scanning product, or some firewall or some IDS product. We were very tactical about the products, but we weren't thinking about the system. Now that we can start doing systems thinking, we can start thinking about it differently. And by understanding it as a system, that will tell us how to protect it. Quite frankly, this is the only technology that I've seen - maybe not the only technology in the world, but of the things that I've seen - that is focused on this specifically. There are other organizations that would say they have some ability to do this within some other product, but to focus on this specifically is unique. So they have architectural controls for step three. Step four is policy.
And they can define the policy: who should be allowed to have access to the machine learning algorithm or the dataset or whatever it is. And then finally, the fifth step is monitor and maintain. Cylance was good at that - understanding what was coming in, getting some visibility, and then helping you understand what you needed to do to fix it. So it absolutely fits into zero trust as a five-step model, and it expands our understanding of the things that we need to protect. Because I would say that most CISOs are going to ask, "Protect my machine learning algorithm? Isn't that something done by the application security team? Isn't that just built into the app?" I think we're all at an early understanding of this unless you're a data scientist, and that's the struggle when you're trying to secure something. Tito said, "Well, I don't think I've seen anything that's this mature that we've waited this long to secure." And I would argue that that's the way it always works: it gets a lot of maturity before anybody thinks about the security. The network is the great example. You know, ARPANET and the internet were around for years before anybody even considered adding the first bit of security. So I'm glad we're getting ahead of this now, before it becomes a big deal. And the question that I would have for Tito is this: suppose I'm a skeptical CISO, or a director of security who's got to find the budget because the data team wants it, but it has to come out of the security team's budget, because it says security in it. And I say, "I haven't heard of any of these attacks." Are there documented things that people can look at - an article, a report or a case study - to see that it's a real problem?
Sestito: [15:32] Oh, absolutely. First of all, well said, John, and I just want to throw out that it's so important to view this particular problem space through a CISO's lens, and to apply it to the important framework that you've created, because it does make this a little more tangible and part of the CISO's workflow. So I think that comparison and that thought path are highly important in this space. And there are documented cases. The AI Incident Database is an enormous wake-up call for those who visit it for the first time and start to see how many artificial intelligence incidents there are - some of which are exposures, some of which are malicious attacks from external attackers, and some of which are insider threats. And when you start to see some of the organizations that are up there, how many incidents there are and how many organizations were affected, you see that this is not a problem for tomorrow; this is a problem for today. Now, regulation is significantly behind in terms of requiring organizations to publicly divulge these types of breaches, threats and problems with machine learning assets, so you're not seeing any of these organizations volunteer to expose this type of data. But I would say the incident database is a great spot to start. You'll also see that MITRE has formed their ATLAS framework - for those of you who are familiar with MITRE ATT&CK, they've built a brand-new framework called ATLAS, which is dedicated to adversarial machine learning. We work closely with them to help grow and adapt that framework to the types of attacks that we're seeing. They also document use cases covering both real-world attacks and red-teaming exercises, so you can stay up to date on what's going on. And I would just add, from my experience so far in this space, that it's real.
I like to say that in security, we've gotten used to ransomware, which calls attention to itself immediately - nobody doesn't know they've been hit with a ransomware attack. But if you think back not too long ago, when we were looking at things like rootkits and backdoors and hidden shells, you had to go looking for those. That's a lot more of the mindset we need with these types of attacks, because if you're not looking for them, an inference attack, an evasion attack or data poisoning can happen right under your nose. So I would say that there's a lot of information out there; you just have to be a little proactive and understand it while we wait for regulation to catch up. But it's just around the corner. In fact, October was an enormous month for that. We saw the AI Bill of Rights come out of the White House. The Bank of England called for a framework around this type of security, because they've seen how exposed we are in the financial sector. So I think we're going to start seeing more of this regulation play catch-up. But in the meantime, it's going to be up to those CISOs to be a little proactive and understand how exposed their organization is and what steps need to be taken. And I think the zero trust framework that John created has shifted the way we look at the adversarial landscape since its inception; we need to take that lens and look at this problem. Because this is not a new pattern, it's just a new technology.
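Of the quiet attack classes Sestito lists, data poisoning is perhaps the easiest to illustrate. The following toy sketch - invented data points and a deliberately simple nearest-centroid "fraud" classifier, not any real system - shows how a handful of mislabeled training points can drag a decision boundary until a chosen fraudulent input is waved through:

```python
# Illustrative only: training-data poisoning against a toy fraud classifier.
# All samples, labels and the target transaction are invented for this sketch.

def centroid(points):
    """Coordinate-wise mean of a list of feature vectors."""
    return [sum(coord) / len(points) for coord in zip(*points)]

def train(samples):
    """samples: list of (features, label). Returns one centroid per class."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

clean = [([0.1, 0.2], "legit"), ([0.2, 0.1], "legit"),
         ([0.9, 0.8], "fraud"), ([0.8, 0.9], "fraud")]
target = [0.7, 0.7]  # a fraudulent transaction the attacker wants cleared

print(predict(train(clean), target))  # caught on clean training data

# Poisoning: inject mislabeled points near the target - e.g. through a
# feedback loop the attacker controls - so the "legit" centroid drifts.
poison = [([0.75, 0.75], "legit")] * 5
print(predict(train(clean + poison), target))  # now slips through
```

The numbers are contrived, but the mechanism is the one that matters: if an attacker can influence what a model trains on, they can quietly move where it draws its lines, and nothing in the serving path looks unusual.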
King: [18:07] When you describe your product to prospects, do you emphasize the zero trust framework as part of your solution? Or do you mention it as, "by the way, it also fits within the zero trust framework"?
Sestito: [18:54] We absolutely lead with it. I think CISOs understand that as the framework they need to use, but we tell them they can't claim to be applying it fully if they're not protecting their ML assets - and machine learning is certainly part of that supply chain. If you're not looking at it as an element that you're securing, then you have a gap to address.
King: [19:12] Yeah. So how do you know that CISOs are thinking about AI and ML adoption today? Would you agree that modern CISOs have zero bandwidth for new systems or tools or software, and that they don't trust anybody's marketing messages anyway? And if you agree with that, how do you plan to go to market? How are you going to get a CISO to pay attention to you?
Sestito: [19:37] Yeah, great question, Steve, and one that I've thought quite heavily about over the last year. There is certainly a bandwidth issue among CISOs today, and there's no discounting that, but I personally believe that 12 months from now, hearing a CISO say they don't protect their ML assets is going to sound as crazy as hearing a CISO say they don't protect their network today. It's a problem. They are going to be responsible for it - they already are responsible for it - and we can be an ally in helping them move forward there. But I would also say to any of the CISOs: think about the patterns you've seen over the last decade. Any new technology comes alongside new, specific protection mechanisms. If you think back not too long ago, even to our migration to the cloud, we had a similar scenario where we required new technologies that are now openly embraced. Every new attack vector comes alongside a new need on the security side, and most CISOs have gone through that 20 times in their career. So there's not much difference in terms of pattern; it's just a technology that they may not be as familiar with today.
Sestito: [21:30] Yeah, absolutely. And I think if I'm putting a CISO cap on, I'm thinking about all of the different vendors that I'm currently working with and how they're using machine learning. Even in threat detection, like we did at Cylance, all of the different next-generation technologies that exist in cybersecurity today are a good example of AI at the edge, where these machine learning models can be interacted with through those products - to scan an asset or an artifact, whatever it happens to be, whether it's a file, some network data or EDR behaviors. All of those are using machine learning, and attackers want to bypass that machine learning detection. So they're actively interested in performing these types of inference attacks and adversarial ML attacks on those pieces of software to be able to build universal bypasses against them. And that's exactly what we experienced at Cylance in 2019. So, as a CISO, I would think a lot about your third-party risks. I would think about all the different tools that you're using to secure your environment and how many of them rely on machine learning to protect you. If those machine learning models themselves are not protected, that's a risk that's transferred directly to you. So I think that's one example. There's also fraud - there are significant examples of machine learning being used in the prevention of adversarial activity that are only one step removed from malicious attacks being successful. So I think that's probably the closest example of what CISOs are interacting with on a day-to-day basis.
King: [22:53] Yeah, and fraud detection is a good example. But what is it that I can do today in that domain that I couldn't do before AI and ML?
Sestito: [23:04] Yeah, absolutely. I think fraud, in all of its forms - whether it's account takeovers, financial transaction fraud like credit card fraud, or banking fraud - is an area where machine learning is a very effective tool for identifying fraudulent transactions, because of the wide breadth of data that's available there. And it's something you can do on the back end that doesn't interrupt the user experience - you can't necessarily put out a two-factor text message every time somebody wants to swipe their card. So machine learning can be used to identify these types of behaviors without interrupting that user experience. But it's also something that attackers are interested in understanding. If I'm an attacker, I might buy some stolen credit cards on the dark web and start understanding how Bank X uses machine learning to detect fraudulent transactions. As an attacker, I want to understand where that model is making its weakest decisions, and which factors I have control over - like the frequency of the transactions, the amount of the transactions, maybe where those transactions are coming from - to learn how to manipulate that model, go completely undetected, and commit as much fraud as I'd like without being classified as fraudulent. That's an inference attack that leads into an ML evasion attack, and it's one that's real and actively happening today. It's just one example, but I think it's a similar format in terms of the adversary attempting to understand how these machine learning models make their decisions so that they can either poison them or avoid them altogether - for example, escaping classification by a fraud model. So it's certainly something that I think CISOs can easily interpret based on a lot of the technology they've worked with in the past. It's a newer way of detecting the same type of threats.
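The probing Sestito describes can be sketched very simply. Assume, purely for illustration, a fraud model that flags transactions above some dollar threshold unknown to the attacker; seeing only allow/deny responses, the attacker can binary-search that boundary and then keep every fraudulent charge just beneath it:

```python
# Illustrative only: inferring a fraud model's decision boundary from the
# outside. The model and its $500 threshold are invented; real models weigh
# many features, but the query-only probing idea is the same.

def bank_fraud_model(amount):
    """Black box to the attacker: flags transactions over a hidden limit."""
    return amount > 500.00

def infer_threshold(model, low=0.0, high=10_000.0, queries=40):
    """Binary-search the flagging boundary using only allow/deny responses."""
    for _ in range(queries):
        mid = (low + high) / 2
        if model(mid):
            high = mid  # flagged: the boundary is below mid
        else:
            low = mid   # allowed: the boundary is above mid
    return low

threshold = infer_threshold(bank_fraud_model)  # converges to roughly 500
# The attacker now structures each fraudulent charge just under the boundary.
safe_amount = threshold * 0.99
print(round(threshold), bank_fraud_model(safe_amount))
```

Forty probes pin the boundary to within a fraction of a cent - and against a production model, each probe is just an ordinary-looking transaction. This is the inference step; the evasion step is simply transacting below the learned limit.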
King: [24:54] Yeah, sure. That makes sense. I'm conscious of the time, so I think my final question today will be around MLOps - a set of practices for deploying and maintaining machine learning models in production. In a perfect world, how would zero trust impact an MLOps architecture or delivery system?
Sestito: [25:25] So yeah, I can start by saying MLOps is a fantastic step forward in the applied machine learning space, simply because of all the steps that we need to take with every other technology: we need to version our datasets, we need to version our models, we need to be able to roll back technology if we find problems, we need to be able to track some explainability across our data pipelines. So it codifies a lot and allows us to do root-cause analysis when something goes wrong, and to make our products and our models even better. It helps every step of what we now call the MLOps pipeline, in terms of not only making those pipelines better, but also aligning them with something like zero trust and understanding where along that MLOps pipeline - at which of the five steps John mentioned earlier - we can test and make sure that we're compliant. I think step five, the continuous monitoring, is especially well aligned with zero trust, but I'll pass it to the expert and see what you think. John?
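One small, concrete control sitting at the intersection Sestito describes - MLOps versioning plus zero trust's monitor-and-maintain step - is fingerprinting model artifacts so a swapped or tampered model is caught before it serves traffic. The registry below is a hypothetical sketch, not HiddenLayer's product or any standard MLOps tool:

```python
# Hypothetical sketch: fingerprinting model artifacts so tampering
# (e.g. a swapped or poisoned model file) is caught before deployment.

import hashlib

registry = {}  # model name -> list of (version, sha256 digest) entries

def register(name, version, artifact_bytes):
    """Record the fingerprint of a model artifact at release time."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    registry.setdefault(name, []).append((version, digest))
    return digest

def verify(name, version, artifact_bytes):
    """Return True only if the artifact matches its registered fingerprint."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return (version, digest) in registry.get(name, [])

weights_v1 = b"...serialized model weights..."
register("fraud-model", "1.0", weights_v1)

print(verify("fraud-model", "1.0", weights_v1))         # intact artifact
print(verify("fraud-model", "1.0", weights_v1 + b"!"))  # tampered artifact
```

This only covers artifact integrity - one slice of the problem - but it shows how the same versioning discipline MLOps already demands doubles as a zero trust checkpoint: every model that reaches production can be verified against a known-good fingerprint rather than trusted implicitly.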
Kindervag: [26:25] Yeah, all these things are things that people have to be aware of, because attackers don't sit still; they're always innovating. Just because we don't have any idea of how to do something doesn't mean they don't. And your example - as a former QSA trying to secure credit card environments - of learning how the system works, what the triggers are: it's the same thing you would do in physical security. Where's the alarm system? How do I get into the compound where the fence isn't electrified? These things are adversarial, and cybersecurity is an adversarial business, just like the military and law enforcement. So we're going to find smart people doing creative things, and we're going to have to try to stop that creativity, because from our perspective, it's malicious; from their perspective, it's how they make their money. Sometimes I just look at how good the attackers are and I want to applaud: wow, that was an amazing thing you just did. And I think this is one of those spaces where these are not the cybercriminals of 20 years ago, the script kiddies and all that. Even if they can download stuff off the dark web, they have to know what to do with the data. They have to understand data at some different level in order to create value from the attacks, because they need to get value from them. They could do it sometimes just to be malicious, but there's always going to be some outcome that they want. What is the goal of the attacker? Is the goal to just bring down the machine learning system so they can get in real quick? Is the goal to learn how it works, so they can figure out how to do an end run around it? What is that goal? By understanding that, you can use technology to solve this - you can't solve a problem like this by training your people. Security awareness training for machine learning?
If anybody ever comes out with that, it's going to be the silliest thing of all time. No, this is what computers are made for now. They're made to crunch these numbers. This is all math - it's applied mathematics. Use a computer to do this, figure out that this is an important thing that you have, and then protect it. People want to damage, destroy or steal your machine learning models. And one thing that we haven't even talked about is research and development. If I can steal your algorithm, that's a whole lot of machine learning R&D that I didn't have to do. So if I know that you're doing it in a cool way, I want to get in there and steal it so that I can do it even better - and I don't have to catch up.
King: [29:35] Yeah, the Chinese have the lowest-cost R&D on the whole planet. Amazing. Hey, John, not everybody knows what a QSA is. So, for the benefit of our audience, tell us what that means when you say you're a former QSA.
Kindervag: [29:53] QSAs are the people who assess credit card security systems. There's the Payment Card Industry Data Security Standard, and these people called QSAs - qualified security assessors, I think it stands for; it's been so long that I'm trying to pull that out of my brain - are validated against it. Essentially, you learn how to protect cardholder databases, which is a cool thing, because you learn how to protect a single binary data string: the PAN, the primary account number. That's the thing people need to get in order to commit credit card fraud, so they're always trying to steal that information. And it's valuable even today. People say it's not valuable - well, an individual credit card isn't valuable, but 5 million of them are. And we've seen breaches of that size. So these adversaries can make a lot of money by stealing this kind of stuff and selling it on the dark web.
King: [30:59] Yeah, for sure. And before I run off here and log into GitHub to find myself some red team tools, I wanted to thank you, Tito, for spending a half an hour or so with us today - and you, John, as well. I know you're both super busy. I'd love to talk to Tito offline about perhaps putting together an introduction to AI and ML and today's use cases for the education initiative that I'm trying to put together here. And John has developed some introductory coursework around zero trust as well - but I'll call you about that later. And perhaps we can revisit, maybe mid-January or mid-February, and see how things are going from a market point of view. We're all aware of the headwinds that we're now up against as a result of the last few weeks' worth of reporting, and God willing, the recession won't have the kind of impact on this market that it will on all the other markets. So I wish you the best. Sounds like a cool company, and I'm looking forward to seeing you guys in the news.
Sestito: [32:17] Thank you so much, Steve. Thank you, John. I appreciate the conversation today. And I would love to look into anything on the educational side; that's something we believe in a lot here at HiddenLayer - how to spread the word and educate people on applied AI and the cybersecurity implications of it. So I'm looking forward to chatting about that and catching up in the future.
King: [32:37] Yeah, that's great. All right. I appreciate that. Thank you to our audience as well for spending some time here. And I hope that this was useful to you all. And until next time, I'm Steve King, your host signing off.