Harnessing AI in Cybersecurity: Revolutionizing OT Protection

Episode 4 | February 20, 2024 | 00:57:21

PrOTect It All

Show Notes

Hosted by: Aaron Crow

Guest: Clint Bodungen

Clint Bodungen is a globally recognized cybersecurity professional and thought leader with 25+ years of experience (focusing primarily on industrial cybersecurity, red teaming, and risk assessment). He is the author of two books, "Hacking Exposed: Industrial Control Systems" and "ChatGPT for Cybersecurity Cookbook." Clint is a United States Air Force veteran, has worked for notable cybersecurity firms like Symantec, Booz Allen Hamilton, and Kaspersky Lab, and is currently the co-founder and CEO of a cybersecurity training startup, ThreatGEN. Renowned for his creative approach to cybersecurity education and training, he has been at the forefront of integrating gamification and AI applications into cybersecurity training, creating his flagship product, "ThreatGEN® Red vs. Blue", the world's first online multiplayer computer game designed to teach real-world cybersecurity. His latest innovation is AutoTableTop, which uses the latest generative AI technology to automate, simplify, and revolutionize IR tabletop exercises. As AI technology continues to evolve, so too does his pursuit to help revolutionize the cybersecurity industry using generative AI and large language models (LLMs).

Summary

In this conversation, Clint and Aaron discuss the value of tabletop exercises in cybersecurity and the development of AutoTableTop, an AI-based tool for facilitating incident response tabletop exercises. They highlight the limitations of traditional tabletops and the benefits of using AI to enhance engagement and flexibility. They address concerns about AI in cybersecurity, such as data privacy and security, and emphasize the use of local large language models to mitigate risks. They also discuss the future of AI in the industry and the workforce, emphasizing the importance of learning generative AI and prompt engineering for future job prospects. Clint then discusses the automation of tasks using AI and the benefits of using AI as a tool to enhance human creativity. He explores the future of AI and its potential for accelerating technological advancement, acknowledges concerns about the potential misuse of AI, and emphasizes the importance of using it for good. He highlights the role of AI in reducing barriers to innovation and its significance in cybersecurity. Overall, the conversation highlights the transformative power of AI and its impact on various industries.

Takeaways

- Traditional tabletop exercises are scripted, resource-intensive, and often run only once a year, which limits real learning and improvement.
- AI-facilitated tabletops can generate scenarios instantly, adapt to any action participants take, and be run repeatedly, so teams improve measurably between sessions.
- Data privacy concerns are addressable: API and enterprise tiers are handled differently than consumer ChatGPT, and local open-source LLMs keep data entirely on premises.
- Fine-tuned open-source models can rival top commercial models within a specific domain.
- Learning generative AI and prompt engineering is becoming essential for future cybersecurity job prospects.

Connect with Clint Bodungen:

 

Connect with Aaron Crow:

Learn more about PrOTect It All:

To be a guest, or suggest a guest/episode please email us at [email protected]

Show notes by NMP.

Audio production by NMP. We hear you loud and clear.


Episode Transcript

[00:00:00] Speaker A: You're listening to PrOTect It All, where Aaron Crow expands the conversation beyond just OT, delving into the interconnected worlds of IT and OT cybersecurity. Get ready for essential strategies and insights. Here's your host, Aaron Crow. Hey, Clint. Welcome to the show, buddy. But go ahead and why don't you tell us who you are, what you do, all that kind of fun stuff.

[00:00:25] Speaker B: Yeah.

[00:00:25] Speaker A: Cool.

[00:00:26] Speaker B: Thanks for having me. And yeah. So about me, I'm a 25-plus-year veteran in cybersecurity, United States Air Force veteran as well. It's where I got started, but I've been doing the cybersecurity thing for more than 25 years. Been specializing in OT/ICS, before we even called it OT and ICS, since about 2003 or something like that. I mean, I kind of got introduced to it in the 90s, but really formally got into it in the early 2000s, and I've been doing pretty much OT/ICS cybersecurity solid, nonstop since then. Never looked back. Anyway, I focused on the offensive side of things, mostly writing code, pen testing, development. But throughout consulting, as you know, you can't get away from the compliance and all the good stuff. So been doing pretty much that. So now I am the co-founder and CEO by default of ThreatGEN, a cybersecurity gamification and simulation company. And so, if it matters, some of you may know me from my previous works as one of the principal authors of "Hacking Exposed: Industrial Control Systems" and my upcoming book, which is "ChatGPT for Cybersecurity." It's not just ChatGPT; that's just a buzzword. But, yeah, so we'll just start there and go from there.

[00:01:49] Speaker A: Awesome. Yeah. Clint and I have a very similar background in a lot of ways. Been doing this a long time. I've been fortunate enough to have some amazing conversations with people like you. You and I hanging out at Black Hat and talking about the art of the possible and AI and cybersecurity and OT and all these different things. So, man, let's dig in. You and I were on a YouTube stream yesterday doing a tabletop exercise where we were on the Death Star, and we were cyber response, and we were responding to an incident where the rebels were trying to attack us. It was a lot of fun. But why don't you talk a little bit about what that is and why that's important and why it's valuable and why it's cool, beyond just that we were the bad guys or the good guys, or whichever side you want us to talk about.

[00:02:40] Speaker B: Yeah. We were using a platform that I just recently developed called AutoTableTop. And it was really cool because it was the very first, that I know of, live-streamed tabletop exercise. And the product that I developed... this all started way back, I guess, when large language models and generative AI first became a thing in late 2022. And really, the beginning of that... let me just backtrack a little bit more. So I've been working with AI for quite a long time anyway, and even early OpenAI products, before anybody knew, really, before the general public knew who OpenAI was.

[00:03:32] Speaker A: Sure.

[00:03:33] Speaker B: And this is because of the gamification product that we'd been working on at ThreatGEN. So a lot of AI work there and stuff, and I've done a few presentations at conferences on AI and OT and stuff like that. It all started when I was with my kids working on, hey, can we do a D&D, a Dungeons & Dragons-like kind of thing with AI? Can we have it automate this and all that?
And then that naturally went into, wait a second, this is working pretty good. Can I use this for tabletops? Because, well, that's what a tabletop is, right? I mean, when you talk about IR tabletops, it's very much in the spirit and fashion of the D&D tabletops, Dungeons & Dragons, and it's role playing. I developed an application of that through different techniques and different large language models: how can I turn this into something that can facilitate an IR tabletop automatically? And the reason why that's important, going back to why tabletops are important and some of their limitations, is that anytime you have an incident response plan, which you should have all the time, whenever you get into the thick of it, whenever the bad thing happens, that's not the right time to see if it works, right? And so a lot of regulatory agencies, or even non-regulatory, but just compliance and standards, are saying, hey, you should test your incident response plan. You need to exercise this. And the general recommendation is annually, which I am highly against. I mean, it's better than nothing, but you're not really going to get that much out of it annually. All that's doing is checking a box for compliance. You're not actually going to remember what your after action was. You're not going to remember what needs to be changed. Like anything, practicing once a year isn't good enough. This needs to be practiced regularly, and you need to see actionable items and change. The problem is that with regular tabletops, the standard way of doing things, a lot of people use PowerPoints and slides and Excel spreadsheets and those sorts of things, and they take a lot of time and resources to plan properly. And then you have to have somebody who's an expert come out and facilitate that. And all this ends up being time and money and all these things. So most companies don't have the capacity and the resources to do it more than once a year.

[00:06:16] Speaker A: Right.

[00:06:16] Speaker B: So now with the advent of generative AI and large language models, we can build tools like AutoTableTop that have very human-like analytical capabilities, narrative capabilities, and, for the most part, the entirety of human knowledge wrapped up into one model that this thing can automatically reference instantly. And so that's what we did. We were using AutoTableTop to, for the most part, instantly and dynamically create a scenario based on some simple settings that we gave it to start, and it played out the entire story, in which, you know, we were called upon to defend the Death Star from a Rebel Alliance cybersecurity attack. And that was the first time I'd done it live online streaming. I've done it at a couple of conferences, but it even impressed me. It was really cool. It was scary how good it is. And then we also gave it voice capability. So not only does it generate the scenario and the injects and facilitate and run and keep up with everything, it was also narrating it to us via audible voice, in a very, I would say, convincing accent and tone that sounds like somebody who could have been a commander on the Death Star. And so it was really engaging and it was quick and it was easy and it was very accurate, I think. I mean, I don't know what your impression was, but it was very accurate.
[00:07:54] Speaker A: Yeah, the cool thing was, there's a lot of things, and I want to dive into this because I think it's super important. You were able to do this on the fly. A, it made it fun, right? So we did this exercise, we did it in Star Wars land. It was a hypothetical scenario, right? So it wasn't real, but it was fun. So not only was it engaging, like, the four of us were there, we weren't prepared. I didn't have a clue what was going to happen. You didn't have a clue what was going to happen. But we were able to go beyond just the way that I've done this before, and you've seen me do these at conferences, where we have a ten-step scenario that we want to run people through in a tabletop, and it's predetermined: step one has these three options, and step two has these three options, and step three... you get the point. Right. And really, it's about, especially for those that I've done multiple times with ICS Village, it's more about conversation. It's more about team building. It's more about giving people that have never done one, that maybe they're dipping their toe into OT or they're dipping their toe into IT, an understanding of, wow, I didn't even think about that. Right. And we got some really amazing answers and perspectives based on that. But because of the constraints of the tabletop being in a PowerPoint, and we had to have all the answers, and we had an hour's worth of time to do this in, we had to really have it be analog. Right. There were only certain things that we could do, so it limited the conversation. We had a lot of good conversations, but we couldn't dive down that path. So the thing that I loved about what we did yesterday was, every step, as we're communicating amongst ourselves, we say, hey, let's do this. So we tell the ChatGPT or the AutoTableTop.

[00:09:47] Speaker B: Yeah, it's not ChatGPT. Let's not get that confused.

[00:09:52] Speaker A: We tell the AutoTableTop, the AI model, hey, we want to lock down the Death Star, and we want to send stormtroopers out to this area, and we want to do all these things, and then it gives us a response based on our actions. So it's very dynamic. It's a role-playing game, very similar, like you said, to D&D. But all along the way, even though it was this fake scenario in the Death Star, we were still doing cyber hygiene things: we were still having incident response, we were still communicating to our leadership, we were still sending physical response folks out there because we were afraid that there was maybe a response required. So it's really powerful to be able to do that, because when you do a tabletop for a large organization, how many people actually get to sit at the table? How many people actually get their voice heard? And like you said, if you do it once a year... if I go work out once a year, I'm not going to have a six pack. Right. I'm going to have to do it more consistently than that. It's the same reason why you have fire drills. You go through these procedures and you test those things. Where do I go? What is the muster point? It should be second nature. As soon as those things go off, people aren't looking around like, I don't know what to do. We did this once three years ago, but I don't remember what to do. You should be trained. It should be repetitive. It's like CQB in the military. It's like why the SEALs train so much. It's all about understanding what the next step is, because you're going to stub your toe, you're going to make mistakes.
But when you do this on a regular basis, and you could do this for everybody because you lowered the barrier of entry... like, I don't have to have a McKinsey, I don't have to have a Booz Allen or one of these consultants come in and do this big orchestrated thing that costs time and materials and bringing in all this stuff. I can do this monthly, quarterly. It really opens the floodgates to be able to provide this understanding to the masses.

[00:11:50] Speaker B: Yeah. And you hit on something also, which was the lack of limitations. Right. And I think that's what makes it truly valuable and truly just... I keep saying truly, but that's what really makes it, I guess that's what increases the value. Because with a prescripted set of injects or a scripted scenario, you have those limitations. And I think that limits the learning in a way, because you have a certain set of things that you are exercising. And if the participants try to go outside of those boundaries, you don't have an answer for it, or you have to guide them back, or your answer may not be accurate to their situation. Or what if you didn't plan it exactly properly? And you always have that engineer, you always have that IT person, that manager that says, well, that would never happen in our situation because this and this and this. Oh, that can't happen because our system doesn't have that. So if you have a system where we can now utilize AI to generate everything, there are no limits now. Now somebody can try to throw you a curveball and you're okay. Somebody literally did. Somebody said, hey, well, where is Darth Vader during all of this? And we said, well, let's ask it. And it answered. So you can't stump it, you can't trip it up. And then if you do have a situation where somebody says, well, our system doesn't do that... so, going back, okay, let's say we're not doing a Star Wars-themed exercise. Let's say we are doing a theme that is more realistic to somebody's systems, which it can do as well. We were just having fun yesterday. But if we get into a situation where somebody says, well, our system doesn't look like that, it doesn't act like that... well, then you just feed that information to the AI. The AI makes the adjustments and says, okay, we'll move forward accordingly. It takes away the limits of conversation and the injects. It takes away all limits, and there are no limitations now on what kind of questions you can ask, what can be answered. And so I think that is the true value of where we are today. Can you hear my dog barking, by the way? Because I know you're going to have to cut this, but that's all right. Could you hear the dog barking?

[00:14:18] Speaker A: I could, but we'll get it out in post, don't worry.

[00:14:21] Speaker B: Hold on, real quick. I know you're going to have to edit this. Hold on, real quick.

[00:14:23] Speaker A: That's fine.

[00:14:27] Speaker B: I really need her to stop barking.

[00:14:32] Speaker A: I'm doing my best.

[00:14:43] Speaker B: All right, getting back to what I was saying.

[00:14:47] Speaker A: Ready?

[00:14:49] Speaker B: Yeah. Forgot what I was saying.

[00:14:56] Speaker A: So it's really deep diving into how you take the gloves off. There's no scenario that you can't do. I can tell it to check logs. We went through and told it to check logs. We told it to check video surveillance for physical in these areas. We sent stormtroopers out. Again, you could do physical security when it's not a hypothetical situation.
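For readers who want a feel for the mechanics being described, here is a minimal sketch of an AI-facilitated tabletop loop. It uses a local model served through Ollama's documented REST API; the model name, system prompt, and sample actions are illustrative assumptions, not AutoTableTop's actual implementation.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

# Hypothetical facilitator prompt; the real product's prompting is not public.
history = [{
    "role": "system",
    "content": (
        "You are facilitating an incident response tabletop exercise. "
        "Invent a scenario, deliver injects, and respond realistically to "
        "whatever actions the participants describe."
    ),
}]

def facilitator_turn(player_action: str) -> str:
    """Send the participants' latest action and return the AI's next inject."""
    history.append({"role": "user", "content": player_action})
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama2", "messages": history, "stream": False},
        timeout=300,
    )
    reply = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # keep full context
    return reply

print(facilitator_turn("Begin: ransomware alert on a plant HMI."))
print(facilitator_turn("We isolate that VLAN and pull the historian logs."))
```

Because the full history is re-sent each turn, the model can answer curveball questions ("Where is Darth Vader during all of this?") in context, which is the flexibility discussed above.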
I think you even said that you can feed it architecture diagrams, so it actually is using your environment. Again, you're not going to want to potentially put IP addresses or things like that in there, and you don't need to, but you can put a general hierarchical architecture drawing in there that is more realistic to your environment. So you have less of those managers saying, well, that's not how our environment works. Right. We don't have that problem because we've done X, Y, and Z. And then you can just give it that feedback saying, well, we don't have that problem. We've turned off that secure remote access, or we don't have Active Directory, or we don't have that vulnerability, so that's not a viable attack vector.

[00:16:00] Speaker B: Yeah, exactly. And I think that's where the problem with tabletops really kind of starts: they're so scripted and they're so structured and they take so long to plan that there are limitations in the discussions that you can have, because there are limitations in the questions you can ask, or in the knowledge, and you have to have that expertise not only from the staff participating, but from the person running it as well. And so this takes that away. And you said it earlier, the barrier to entry. Because now the team itself can just say, well, here's what we're going to do. Here's what we're going to run, whether it's we want to run this specific scenario or surprise us, and here's our environment. Now go. And even from an IR expertise perspective, you can have the AI itself make sure that you are following the identify, classify, isolate, eradicate, recover steps of IR, and even explain that to you. It can help you along. And it is an expert. It does know all the things that a human would know about IR and cybersecurity and such. And so you have expertise built in.

[00:17:22] Speaker A: I know when we first started, we were like, I don't know what to say. I don't know where to start. So you kind of helped guide that, but you can just ask it: I don't know where to start. What are some ideas? There's all sorts of things that you can do to get the ball rolling. For me, that means I can give it to my junior people, I can give it to my mid-tier people, I can give it to my super experienced people. Right? And they're all going to get value from it. That's another problem that you have in a tabletop exercise. Your junior people are going to sit in the back. They're probably not going to say anything. Your mid-level people may say something, but they may get overrun by the guy that knows everything and talks all the time. Whereas if you do this in the right scenario, and depending on how you put the teams together, you can take your junior folks and let them run it on their own and practice it and run through it a couple of times and that kind of thing. Right? And they're able to get value and improve their skill set because they're putting in reps. It's just like anything, the more of these things I do... and the other thing is, every time I do it, it's a different experience. So it's not like I'm just going to run back through it and have the same thing again. I can run through it with a different scenario, with different outcomes, with different perspectives, a different attack vector. This one's malware, that one's a phishing attack, this one is social engineering. Really, again, going back to the sky's the limit, you can really be as flexible as you want: the teams,
the scenarios, the amount of times that you run it. It really opens the door to get a lot of value out of this for entities that, again, like you said, maybe do one a year to check the box for compliance, and that's good enough. But that's because maybe they don't see the value in it from this perspective, because it's not the same.

[00:19:09] Speaker B: Right. And that's one of the benefits of using AI-based tabletops. And the reason I'm not just going to sit here and say AutoTableTop, AutoTableTop, ThreatGEN is because I'm not trying to make this a sales presentation. I'm an advocate for the use of AI technologies in cybersecurity, and AutoTableTop just happens to be the product that I developed based off of it. But the benefits of using AI for tabletops are like what you just said. But to add to that a bit: all of these things that you can do, this limitless capability, can be done instantly, and you can start a new one instantly. So I've done this for customers where we would do one, it would take an hour or so, and then we would spin up a completely different one right afterwards. And then one day we would do one for the junior people and the engineers, and then another day, or that afternoon, we would do one tailored for management. And so it takes away that barrier, because it doesn't take any time to set up, because it's unlimited potential, you could run one right after the other. And one of the things that I noticed that I'd never seen before: we ran a couple of tabletops in the morning on one day, ran one in the afternoon, then the next day we ran a couple more. And by the time we were done, by the time we got to the last one, which we kind of called the final test, right, we threw the kitchen sink at them, and they had already improved. They were using lessons learned and techniques that they had gotten bitten by in the first and second ones the day prior. They had already improved. You are not going to see that at all with a once-a-year annual tabletop. And that is the big thing. That is why I hate annual tabletops, because you don't have noticeable improvement. There's really no gain from it other than saying, okay, our IR plan works in theory. But being able to just do boom, boom, boom, one after the other, and change up scenarios... I have literally seen it 100% of the time that people see noticeable improvement from the beginning to the end, because you can do them right in a row and you can exercise so many things. Another benefit of it is, well, what if you're not doing a cyber IR? For example, I have a friend of mine who wants to do this for emergency response, incident response for pipelines. Not even cybersecurity related.

[00:21:48] Speaker A: Sure.

[00:21:48] Speaker B: So you're not limited to cybersecurity. Basically, you can test the efficacy of any process, any procedure that you have, using generative AI technology.

[00:22:00] Speaker A: Yeah, man, it's really... the sky's the limit. I know you and I have talked about a lot of things that aren't there yet, but the sky's the limit of where we can take this. Right? And the overarching AI... I wrote an article the other day and posted it around what OT is going to be like, and how do we incorporate AI, and how do we make people feel comfortable? Because it's just like anything. It's the difference of, oh, I don't want to have a car, I'd rather just stay with my horse. Right. Those people are long gone. Right.
And I grew up riding horses, but I was not doing it for transportation. I was doing it because I liked riding horses. I definitely would rather get in my dad's '77 Chevy pickup to go to the store, because it's a heck of a lot faster and I can haul a lot more stuff and it's a lot more comfortable ride. Even in that old truck, it was a lot more comfortable than my horse. I love my horse, but it's not the same. So even with this, like in OT, there's no doubt in my mind that AI is going to get brought into OT. It's just a matter of how do I do it in an intelligent way. And I know you and I are looking at some of that and working on some of that, but some of that is making it be on-prem, making it have confirmation by a real operator. So I'm not necessarily taking action blind, especially in the beginning, until I believe it, until I've grown this thing to a place that the operator feels comfortable. Or maybe never. Maybe never is a relatively long term, but in the foreseeable future, maybe that's not even my focus. Maybe I'm just saying, how can I get decisions and ideas and data in front of my operator faster? AI and the ability for it to look at this data and make a decision or narrow things down, which I know you and I have been working on a lot. Right? How can I just get it to narrow down to, it's one of these five things? Instead of the list of 8,000, just look at these five. Tell me which one of these you think it is, right? That's hugely powerful for you to be able to make a decision as a human, because I know what I know about my plant, and now it's narrowed it down, so it helped me make a decision and choose from a smaller list. And that's hugely valuable.

[00:24:16] Speaker B: Yes. Real quick, I'm going to record what I'm going to say next. Okay, hold on just a second. Let me bring that up, because what I'm going to say next, I need to record my notes. Come on, hurry up, hurry up. I need to record my notes.

[00:24:33] Speaker A: I'll give you a transcript of all this, and I'll give you the audio of this or the video of this as well.

[00:24:37] Speaker B: Okay. All right. And there we go. Okay, so let's pivot real quick and talk about that aspect, which is, look, AI is coming. It's inevitable whether you want it to or not, right? Just kind of like IT/OT convergence, right? It's coming whether you like it or not. SCADA in the cloud... it's all these things that people don't want. They're going to happen. Virtualization, it's going to happen. It's happening. So let's talk real quick about some of the misconceptions that people have with AI in the industry, and the pitfalls. And I'm not talking about the existential threat of, this is how you get Terminators. But there are some misconceptions that people have that are causing some unsubstantiated fears of AI in their industry, in their companies. There are some valid concerns. Let's talk real quick about what some of the concerns are, whether they're valid or not, and then how we can protect ourselves against adversaries using AI, or how we can use AI effectively and safely in our companies, organizations, and our industry. And so I think one of the big things is, right now, a lot of organizations are scared of using AI because of the claims, and some of these are valid, that their data was used in training the OpenAI models. And so we're like, well, I don't want my sensitive information exposed. So company policy: you don't use ChatGPT.
Okay, so let's talk about this for a minute. First of all, people need to understand that ChatGPT is the consumer-based web interface for the OpenAI models. That information, while you can opt out, is saved by default. That's how, whenever you go to the ChatGPT interface, you can click on your conversations: because it's saved out there. And the data, even if you opt out, can be saved for up to 30 days. And then if you don't opt out, that data can be used for training. And that's where people start to see their private data used in the models. But you should know that, ChatGPT aside, OpenAI, the back end, the LLMs and building apps on top of them using the API, they are audited for SOC 2 compliance and they are GDPR compliant. And if you're using the Teams version of ChatGPT, if you are using the enterprise version, or if you're using the API, that data is not used for training. If you are building an application on top of the API, that data is not saved anywhere unless the app builder saves it. That data is not used for training whenever you make inferences. Meaning, if I have a prompt and I put data into the prompt and I send it to the OpenAI large language model for inference, to query it, yes, it is going across the Internet, encrypted, and it hits the cloud, it hits the model, and then it makes that inference, and then you get the information back. That data disappears into the ether. That prompt is not saved anywhere. So there is a slight risk when you take that data and send it across the Internet to make the inference, and then it disappears, but it is encrypted. So what does that mean? That means that you have the same data in-transit security and privacy that you have whenever you're storing all that data in your SharePoint, or anything that you have in your organization that has a cloud-based infrastructure, like AWS, Azure, SharePoint, if you're storing that data anywhere other than on premises. And by the way, if you have a site-to-site setup, if I have an office in Georgia and an office in Houston, guess what? That data is still going out there somewhere if I'm sharing it. So that's what we're seeing.

[00:29:44] Speaker A: Now...

[00:29:45] Speaker B: The difference is that if I'm not saving that data anywhere, it's being transmitted, but then it disappears, and I'm only saving it locally. So it's actually more secure than if you're using any type of file-sharing services, data transmission, things like that. So if you are a company that is using any type of cloud infrastructure, it's no different. It really is no different. And so that's what you have to be aware of from a data inference, data storage, data transit perspective when you're talking about these large language models. Now, how much can we trust what OpenAI is saying about your data not being stored, not being used for training, or whatever? Well, that's the question, isn't it? But we sign that trust contract when we use cloud services anyway, anywhere. So I don't know, do you have any comments or anything to say on that?

[00:30:43] Speaker A: Yeah, I mean, obviously that's a good point to talk about, right? And that's just basic cyber hygiene. It's basic data exfiltration. Where do I put my data? It's no different than putting data in GitLab or GitHub, and what data can I put there? Just because it's there in my environment doesn't mean I don't have to be careful with it.
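To make the distinction Clint draws above concrete: using the API rather than the ChatGPT web interface looks roughly like the sketch below, using the official openai Python client (the model name and prompt are placeholders). As discussed in the conversation, the prompt travels encrypted for a single inference and, per OpenAI's stated API terms, is not used for training; verify the current terms yourself.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The prompt below is sent over TLS for a single inference; nothing is
# retained client-side unless this application chooses to save it.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user", "content": "List the first three steps of IR triage."}
    ],
)
print(response.choices[0].message.content)
```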
As a consultant, or whatever my role is, my customer may not want their data in that location, for obvious reasons, for the same purpose. Right. It's the same concept we've done with GitLab. We're sharing content, we're sharing code, we're sharing all this different stuff. We just don't want to put customer proprietary information up there. Same thing you wouldn't want to do in GitLab, same thing you wouldn't want to do in OpenAI. Right? But then the other piece to that, and you and I have been working on some of this, right, is there are also local large language models. You can pull those things out of the cloud and run them on your machine, right? So with GPUs, with just normal computers, not even something that's got some crazy amount of GPUs, on my laptop I can run Ollama and all these different language models right on my machine. So I'm not exfiltrating the data. The stuff is local, it's running on my site, on-prem. So I know a lot of the OT folks are like, oh yeah, well, we don't put stuff in the cloud anyway, so that doesn't apply to me, Clint. Okay, but this is another way you can do it. You can do that same thing with those same language models. Now, it's not the same for same, right. Obviously you get the benefit of the larger training data set on a large model like OpenAI's. If I'm using GPT-4, it's going to have this huge body of training data from all these people that it's been gathering, scouring the Internet, and it's got DALL·E so I can create images, and it can search the Internet, all that kind of stuff. I'm going to be limited. It's not the same for same, but I can still do local stuff and create products like AutoTableTop and others where I can actually do things locally. So I'm not sending my data to the cloud. It's all on my computer. Right. It's not going anywhere.

[00:32:51] Speaker B: Right. And that's where I was going to go next: yes, you can use large language models that are local, open source. And this technology is moving so fast right now. The general consensus is that open source is about six months behind the best models at OpenAI. What that means is that, yes, the quality of GPT-4 is better than, say, your open-source Llama, Llama 2, whatever. But that's going to change, and that's going to change very fast. And as time goes on, you will have that same quality, and it's very close anyway, you'll find. But you can also fine-tune local models. And what that means is that I can take question-and-answer sets, I can basically create data that is specific to what I'm going to use the model for, I can fine-tune it, and create another variation of that open-source local model. And what research is showing is that fine-tuned versions of the top-quality open-source models are actually as good as the best-quality models, like GPT-4, in that particular domain. And that's because... well, basically, long story short, what happens is that the number of weights, the number of transformers... let's just call them the parameters. The open-source models weren't trained with as many GPUs, with as many resources, and so they're not as good at looking at the broad scale, that broad knowledge base, that data set within the model, and making the proper inferences. So their weights aren't as good. Long story short, for that reason, they're not as good.
However, when you fine-tune a model for a specific task, it doesn't have to comb through all that general data to find the answer you're looking for. So the open-source models are just as good as the top-quality models once they're fine-tuned for a specific task. And that's really what you're going to be doing when it comes to OT and cybersecurity and stuff like that. So, yes, you can absolutely benefit from artificial intelligence, generative AI, large language models, completely private, locally, with your data never leaving your site.

[00:35:41] Speaker A: Right? Yeah. And that's hugely powerful. I mean, the sky's the limit with that. Again, I go back to the scenario we were talking about before, whether it be AutoTableTop, where I can feed it architecture diagrams. Well, I don't want to feed that across the Internet to OpenAI. Okay, so don't. You can do that locally. You can start feeding it the architecture, the build types and configurations. Maybe even, when we're doing these tabletops, you're actually able to give it actual log exports from a type of device, and PCAPs. Maybe you can feed it a PCAP of, this is some data that we have in our environment, this is what the inject looks like. All these different things become possible. And you're not worried about your data getting out there. You're not worried about my architecture, or China or some bad actor getting my data and knowing what my attack vectors are because now they saw my tabletop exercise, so now they know where all my weaknesses are. Right? So you can do this in a powerful way, get the benefit without the risk. So it's just a matter of how you want to do it and what's important to you. But to your point, on the IT side of the house, you're already doing this. You already have all your stuff in the cloud. You already are using all these cloud-based services. Maybe OT is a little different, but on the IT side, we're already doing these things, and we have been for years. We've been in cloud and online and edge and all this type of stuff forever, and it's not going away. So I think we're more comfortable on the IT side of the house than the OT side. But I can see both of them being valuable.

[00:37:16] Speaker B: Yeah. As time goes on... look, there are people already doing things like SCADA in the cloud. I mean, technically, if you think about it, the nature of SCADA is, unless you're using point-to-point frame relays or point-to-point satellite and things like that, in a lot of cases that data is traveling over the Internet. Right. I mean, remote communications. But there are people that are moving to that. And so I think at some point, there is going to be a certain amount of data from OT that is accepted in cloud infrastructure.

[00:38:00] Speaker A: Yeah. And I think that's going to be more so around, how do I get comfortable? Right. A lot of this is fear of the unknown. Right? I'm scared of what I don't know. I'm scared of... I don't understand how this AI thing works. I remember, it must have been 2010, when one of the control vendors was rolling out VMware virtualization, having a VM instance be an engineering workstation. And I was trying to explain to an operator and an engineer, a plant control system engineer, the concept of virtualization and where the server lives. And even though the server is over here, I'm going to access it over there. But it's not remote access. It's different.
And that whole concept was beyond what they could understand, because they'd never been exposed to virtualization. Now virtualization is used all over the place in OT. Everybody understands it. It's not scary. They have virtual control processors and field I/O and all this different stuff. It's become a thing. But even at that point, and this was again like 2010, virtualization was not new. Virtualization has been around for a long time; IT has been using it for 30 years. But in OT, we're probably 20 years behind them, right? So I think the same adoption rate is probably going to apply here. And obviously there's a reason for that caution in OT, right? Obviously the impacts are bigger. I'm dealing with death, I'm dealing with life, I'm dealing with safety, I'm dealing with availability of my lights, our Internet, our infrastructure. There's a reason we've got to go a little slower and make sure that we fully understand the ramifications of the actions that we're taking. But it doesn't mean it's not coming. So you can't just say, never going to do that, because it is coming. You need to start thinking about it. You need to start thinking about, what would make me feel comfortable? What are the checkboxes that they would have to hit for me to feel comfortable doing this in my space? And how could I do it and phase it in, instead of it just getting rammed down my throat in ten years because that's my only option?

[00:40:14] Speaker B: Right? Yeah. In essence, I'll echo what you said. We're worried about it, and rightfully so. And of course, I'm only speculating. Obviously, I'm not saying it's going to happen whether you like it or not; I'm saying, more than likely, history would show that technology will evolve into these things that we're uncomfortable with, but we'll find a way to do it. But that's where we are with AI right across the board, in that everybody's sort of afraid of, what does this AI mean? What are the risks? And there is a risk, and we probably should not put this fully in scope here, but in terms of prompt injection, large language model injection, right, to get data: you don't have to worry about that if you're using open-source private models locally only. And so that's the solution to that. I think that once you get into the conversation of, we can use local models, completely private, nothing going out anywhere, then the risk of prompt injection, remote access, our data being exposed significantly diminishes, in some cases completely, depending on how well you secure your on-site data from any remote access. But at that point, that's where we can start to fully take advantage of the benefits of generative AI, large language technology. And I think that everyone should. I think the ability of AI to enhance human capabilities exponentially, the analytics capabilities, the reasoning capabilities, the search capabilities, can actually make our industry organizations more efficient. And we're not even going to get into the conversations of, well, it's going to replace jobs and da da da da. No, that's a ways off. We're not even going to get into that. Would I trust it? Everybody says... well, not everybody, but I've had people tell me, I wouldn't trust these large language models to make decisions concerning human life, to make a split-second process decision or whatever. You're already doing that, by the way. What is process automation in the first place?
I would say that once you have a really fine-tuned model, and once you have tested it and trained it, the large language model capabilities, generative AI... the AI is actually less likely to make a mistake than a human would. Humans are prone to mistakes too. And the difference is that AI doesn't have emotions. AI doesn't get sleepy, AI is not hungover, AI doesn't get sick. We'll get to the point where we are trusting AI to make a lot of decisions and do these things more so than humans. If we can protect that data, if we can protect the proprietary stuff and the sensitive data and all of that, then I would say that using the capabilities of AI, where it is today and where it's going to be a year from now, six months from now, is a benefit. I think that everybody should be preparing to use AI to make things not only more productive and more efficient, but safer.

[00:44:02] Speaker A: And I think they can. Absolutely. I mean, on a day-to-day basis, I use it. I'll write an article, but instead of worrying about sentence structure and fragments and any of that, I just flow. Like, I'll go through and I'll say, this is what I want to talk about. And I just start writing out notes as they flow out of my head, this topic and that topic and this thing and that thing, because I don't have to worry about whether this goes before that or any of that kind of stuff. And then I can take that and kind of build it into a structure of what I want to say, and then I can post that to the ChatGPT. I'm not getting ChatGPT to write anything for me. I'm writing it; I'm getting it to reword it, to put it in a structure that makes sense. I can say, hey, make sure that we're using this language. So I've trained it. I've built a custom GPT for me using the language the way that I speak, the words that I use, because I don't want to come across using words that I never use in conversation. And it's getting better. Sometimes it does a first draft, right?

[00:45:03] Speaker B: I use it all the time to give me first drafts of things. But the thing is, and I would say that this is scary for the workforce, generative AI can do those things that I would normally have an intern do, or a junior person, a junior programmer, a junior engineer. So I think if it's going to affect anything in the workforce, it's going to make entry-level positions harder to get. So here's a clue, here's a hint, people: in order to increase your chances of getting entry-level jobs in the future, learn generative AI. Learn prompt engineering. Learn how to use these tools.

[00:45:46] Speaker A: Yeah. And it can be a powerful tool. It's just like anything, right? It's what you do with it and how you wield it; you can use it to be a powerful thing. Like, my kids are in school, and the conversations with the teachers are, how do they make them write an article or a paper or things like that? Okay, well, you know they're going to put it through AI. Okay, so that's a tool. It's no different than when I took calculus in engineering school. I had a TI-92 calculator that could do calculus problems. I still had to show my work, though. I just knew that if I showed my work and got to the right answer, I knew it was the right answer. But it wasn't good enough for me just to put an answer down; I had to show all of my work, and my calculator is not going to do that for me. Right.
It was really just a tool that I could use to verify that I got to the right place, but I still had to do the work. So I see AI as the same thing. I still have to put thought into what I want it to say and what I want it to do. I can't just say, hey, ChatGPT, write me an article. I can, but it's going to be the quality of a junior-level person, right? If I want it to actually have some meat and some value, I'm going to have to write the article, and then I allow it to do what the junior person would do. Hey, go fix the sentence structure, go fix the typos, go make the fonts the correct way, put it in the format that I want it to be in, here's my template. Doing all of those little menial tasks that I don't need to spend my time and focus my effort on. It's the same thing with this podcast. The platform that I use automatically transcribes the audio; it does all of the Speaker A, Speaker B. I can actually take this transcript and delete words in the transcript, and it cuts that part out. So I'm editing by just deleting words and sentences and paragraphs out of the transcript, and it deletes that content from the podcast. So before, I was having to go to an editor to do it. I still send it to an editor, but when I send it to them, I've already cut out the things that I don't want in there. So then they're just fixing audio tones and some of that type of stuff, and they're really polishing it. So it's not that that editor is getting no work; it's that they don't have to do as much stuff. I can outsource that to AI instead of a VA or a junior intern, that kind of thing.

[00:47:58] Speaker B: Right? Yes, exactly.

[00:48:01] Speaker A: So obviously we talked a little bit about this already, but for the next five to ten years, what is coming up over the horizon? What's something you're excited about, obviously in AI, and maybe what's something concerning that you see coming up on the horizon that we, as an industry as a whole, probably need to get a grip on before it's too late?

[00:48:22] Speaker B: Yeah, I think that I'm not worried about this existential threat that everybody... you have some conspiracy theorists, and there's two different camps, quite frankly. I think that, kind of like what I said about AI and safety, right, if you eliminate human emotions and human spite, these negative human characteristics, I think they could probably manage humans better than humans. They could probably manage the environment better. They can manage everything better, I think. Yes, there's the argument of, well, what if AI says that the way to protect humans is to get rid of humans, the way to protect the environment is to get rid of humans? Well, you know what, then so be it. I mean, if we're that terrible, then maybe we shouldn't be here. So I'm not worried about the existential threat. The things that excite me about AI: I think the level of human creativity will be enhanced. I think our capabilities will be enhanced, because one thing that we have that AI probably will never have is experiential creativity. We can create from our experiences and our emotions, right? People can create from passion and love or hate or anger and fear and excitement. We have the capacity to turn the intangible into art, into creativity. Right? AI will probably never have that ability.
And I don't know how you express emotions digitally, or in silicon and binary, but I would think that that's something that we have. If we use AI as a tool to express creativity, whether it's writing code or creating art or videos or music, and it can work so fast and efficiently, and you use your own creativity and your emotion and your experiences as the motivation, the... epidemic for... is that a word? Maybe impetus. Maybe impetus, right? No. What's the word?

[00:50:49] Speaker A: Don't ask me, I'm not the grammar guy.

[00:50:51] Speaker B: Yeah. Anyway, we got a couple of tech...

[00:50:54] Speaker A: ...guys trying to talk grammar. That's why I have GPT. It tells me what's right.

[00:50:58] Speaker B: Yeah, but if you use that, and you use the generative AI as a tool, then you're going to be able to create things that you couldn't create before. And I know the artists don't want to hear this, but if someone is inspired and they use these tools to do things that they couldn't do before, without a learning curve or skills, you can create amazing things. Right. If I have an idea but I don't know how to code, then I can get generative AI to help me with that and create these things. And so I think what happens is that the technical skills become less of a barrier.

[00:51:41] Speaker A: Sure.

[00:51:41] Speaker B: And now it's about creativity and thought and ideas, and I think that's going to accelerate human advancement. We're limited by our technology, but we have unlimited creativity and capability. And so I think that there's going to be a complete shift in the next five years, maybe a year, two years, but definitely within the next five years, and certainly in ten years, we're going to see a shift in technological evolution, and we're going to see an exponential increase in development and capabilities, because we're going to learn how to use AI, because it's going to get better and better. We're going to learn how to use AI to create things based off of our ideas a lot better. Right. And so I think you're going to see a lot of medical advancements. But on the other side, here's the fear part, right. You also have people that have negative inventions and innovations and motivations, right. And so just like we're going to be able to take this amazing technology and create something good faster and better, people are also going to be able to create bad things faster and more efficiently. But that's with every technology. Any technology that allows you to accelerate advancement can be used for good and it can be used for bad. And so we just have to be aware of that. And so when people are afraid of AI wiping out humanity, it's not because AI will make a decision to wipe out humanity. It's because humanity will use AI to wipe out humanity. That's right. And so that's where the fear is. The Spider-Man quote, right? With great power comes great responsibility. And there are some evil people out there, and there are some stupid people out there. And when you mix stupid and evil... Kim Jong... then bad things can happen. So that's the only thing I'm worried about: what people will do. The technology is exciting. I'm not worried about the technology making a decision to wipe us out. We're going to do some great things with this as it grows. But I'm worried about just how stupid and evil some people can be with it. So it's basically going to be, can we bridle ourselves?
[00:54:04] Speaker A: Well, you hit on something there. We're being hesitant to use it for good because we're afraid of what it can do, especially in OT, right? We're afraid of what it can do and how we do it safely. The bad guys aren't. They don't care. They're going to use it right now to figure out how they can build something to get into these spaces, right? So they're already using it to their advantage. To build the MVP... that barrier of entry, like we talked about with the AutoTableTop: I can build a platform, I can build a product. I don't know how to code, right? Let's say I can't write a script, I can't do anything, I'm a newbie, I can barely use my iPhone. But if I can talk to a GPT or some language model AI, and it can create some basic MVP for me, enough that it can get the message across, then I can hand it to somebody that actually knows how to build something advanced, and they can take that idea and that basic concept and grow it into an actual productive thing. It's able to bridge that gap that much faster, because sometimes that MVP is the hard part to get to, especially if you're a non-technical person and you're trying to do something technical. Then you've got to hire somebody, and you've got to get your message across, and you've got to pay them, and you've got all these things that stop you from doing it. It's probably why a lot of ideas don't happen. But this is going to remove that barrier, or at least reduce it, so that more and more people can do it. And that's good people and bad people, good products and bad products, evil and good. So we've got to be able to use it for good. We need to be safe with it. But that doesn't mean we can just avoid it.

[00:55:42] Speaker B: Go ahead.

[00:55:42] Speaker A: I was just going to say, tell folks, how do they get to the tabletop? I know you're going to do these every Tuesday, I think you said, is when you're going to live stream them. How do people get access and find out more information about it? And obviously you've got a book coming out as well, so why don't you share all that stuff? We'll put it in the show notes, too, but go ahead and talk about it a bit.

[00:55:59] Speaker B: Yeah. So real quick, if you just want to get a hold of me, the easiest way is to find me on LinkedIn. My name will be in the show notes; I'm the only one of my name in the world, so just reach out to me on LinkedIn, and that's the easiest way to get a hold of me. But yeah, I've got this new book coming out, should hit February, March, something like that, called "ChatGPT for Cybersecurity." It's not just ChatGPT, so it'll talk about a lot of how I'm leveraging, and teaching people how to leverage, this technology for cybersecurity purposes. But yeah, you can go to YouTube and search for ThreatGEN, all one word. And every Tuesday we do live streams of the AutoTableTop, where we're doing that sort of thing: IR tabletops, live streamed. With that, you can also go to threatgen.com to learn more about what we're doing with AI and the products that we have. Just letting you know, unfortunately, AutoTableTop is not an individual product. It's priced for companies, because it is for tabletop exercises and things like that. But if you want to evaluate it, or you're interested, or you want to use it for education, just talk to me. I'll work with you. If this is something that's important to you, we'll figure out how to work it out.
And then finally, my personal website is cybersuperhuman.ai, where I do live streams, and I also have some courses where I teach people how to do this for cybersecurity.

[00:57:22] Speaker A: Awesome, man. Hey, I appreciate all the work that you do. I think this is hugely valuable and beneficial to the greater good of humanity and OT and cybersecurity and all the different things. Obviously, I geek out with this stuff, as you know, and love digging into it. So it's a fun, exciting place to be. It's a fun, exciting thing that's coming up over the horizon. And I think a lot of folks are really excited to get their hands dirty and dig into this stuff, because everybody sees the writing on the wall. This is the next gold mine, gold rush, whatever you want to call it, and everybody needs to have this in their tool belt to be able to use it for whatever their role is. It's going to have an impact on all future capabilities.

[00:58:04] Speaker B: It's absolutely necessary. Absolutely.

[00:58:06] Speaker A: 100%. Well, awesome. Clint, thank you for coming. Appreciate your time today. And again, I'll put all the show notes down at the bottom. Until then, thanks, everybody, for coming. And until next time.

[00:58:18] Speaker B: Yes, thanks for having me. And take care, everybody.

[00:58:20] Speaker A: Thanks for joining us on PrOTect It All, where we explore the crossroads of IT and OT cybersecurity. Remember to subscribe wherever you get your podcasts to stay ahead in this ever-evolving field. Until next time.
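A note for readers who want to experiment with the local fine-tuning workflow discussed in the episode: much open-source fine-tuning tooling consumes prompt/completion pairs, commonly stored as JSONL. Here is a minimal, hypothetical sketch; the Q&A pairs, filename, and field names are illustrative, and real training sets run to thousands of examples.

```python
import json

# Hypothetical OT incident response Q&A pairs for domain fine-tuning.
examples = [
    {"prompt": "An HMI is beaconing to an unknown external IP. First step?",
     "completion": "Isolate the affected segment per the IR plan, preserve "
                   "logs, and notify the incident response lead."},
    {"prompt": "Which protocol commonly runs on TCP port 502?",
     "completion": "Modbus/TCP, typically between PLCs and SCADA masters."},
]

# One JSON object per line (JSONL) is a common interchange format for
# open-source fine-tuning tools, such as LoRA trainers for Llama-family models.
with open("ot_ir_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Fine-tuning on a narrow set like this is what lets a smaller local model approach a much larger general model inside that one domain, as discussed in the conversation.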

Other Episodes

Episode 2

February 05, 2024 01:02:10

Bridging the Gap: OT Cybersecurity in the Evolving Landscape of Industry and Recruitment

With a focus on the OT cybersecurity recruitment space, James is the Talent Solutions Director at NDK Cyber. NDK Cyber work with high-growth...


Episode 7

March 14, 2024 00:42:26

Securing OT: Strategies for Prioritizing Vulnerabilities

In this conversation, Bryson Bort discusses his background and the creation of Scythe, an offensive security platform. He also talks about the ICS Village...


Episode 9

April 19, 2024 01:09:10

From Basics to Quantum: A Comprehensive Dive into Cybersecurity Trends

The conversation covers various topics related to cybersecurity, including offensive security, IoT devices, hidden threats in cables, advanced hacking devices, privacy concerns with...
