Episode Transcript
Transcript is AI generated, and definitely contains errors.
Matthew: [00:00:00] Welcome to the Security Serengeti, with your hosts, David and Matthew. Stop what you're doing, subscribe to our podcast, leave us a lovely five star review, and follow us at SerengetiSec on Twitter.
David: We're here to talk about cybersecurity and technology news headlines that hopefully provide some insight, analysis, and practical application that you can take back to the office to help protect your organization.
Matthew: The views and opinions expressed in this podcast are ours and ours alone, and do not reflect the views or opinions of our employers.
David: And the tip of the day is never tell your AI girlfriend the password to your crypto.
Matthew: Are we sure that what we're talking about today is going to help people protect their organization?
I don't know if today's really...
David: Unless they're in charge of their crypto, I...
Matthew: ...or been in charge of their crypto, or their AI girlfriends.
David: If you're in charge of your corporation's AI harem, then this may be helpful for you.
Matthew: Can you imagine companies offering that as a perk? Like if you join us, we'll give you a free AI girlfriend or boyfriend of your choice.
David: Yes, [00:01:00] I'd guarantee Google or Apple or one of those Silicon Valley companies is going to be one of the first ones to do that exact thing.
Matthew: And they are going to keep track of everything you tell it. And they're going to use that against you.
First article today: Lamborghini Carjackers Lured by $243 Million Cyberheist. Just a little bit of an exaggeration. I think they were really only lured by about 41 million, but this...
David: Yeah, it depends on where you start the luring at.
Matthew: this comes from Krebs, of course, the parents of a 19 year old who has been connected with a 243 million cryptocurrency theft were kidnapped recently or else attempted kidnapping for ransom.
Six men traveled from Florida and attempted to carjack the parents while they were house hunting in their brand new Lamborghini.
David: Yeah, it's, it's not the Lamborghini you're thinking of. It's the shitty looking SUV.
Matthew: Yeah. It makes me wonder why. Like the Porsche SUVs, it's like, why?
David: It's not a great thing. I mean, you've got Porsche sedans now. Come on. [00:02:00] Porsche was never supposed to be a sedan, ever. Somebody needs to go kneecap one of their engineers.
Matthew: Lamborghini Urus? Is that the one that's the
David: Yeah,
Matthew: I gotta see this thing. I mean, it looks like a fine, fast SUV, but it doesn't look like a
David: I mean, it just looks like a Hyundai or any other SUV. You know, it doesn't stand out.
A crypto researcher named ZachXBT tracked down who he believed were the thieves in this $243 million heist and published an exposé on them on Twitter. Two of the folks he named were charged in DC, so the exposé appears to be at least somewhat accurate.
Matthew: A couple of them have recently started flashing a lot of money, buying cars, going to expensive clubs, renting charter jets, et cetera, which... blows my mind. We'll talk about that later. His exposé included a Discord chat session with a username that led to this 19-year-old. Supposedly he got 41 million for his role in the theft.
I didn't write it down, but I think it said he was the one responsible [00:03:00] for moving the crypto around.
David: I think it was initial access actually.
Matthew: Oh, okay. His father is a vice president at Morgan Stanley, and he went to private high schools and a Harvard prep program. So this is a kid that is already wealthier than the vast majority of people on this planet.
David: Yeah. I believe they live in Connecticut,
Matthew: It's funny. My
David: not known to be an inexpensive place to live.
Matthew: My wife comes from Connecticut and has lots of stories about stuck-up people there and the way they think. And when I told her about this this morning, she was like, "Oh yeah, that tracks." So first of all, no honor among thieves, huh?
David: Well, I think it's more like, you know, would a bear still steal an elk from another bear?
Matthew: You know, as long as they were just stealing from rich people, I'd be a lot more forgiving of them.
David: Don't know. Criminal stealing for criminals? Doesn't break my heart either.
Matthew: That's fair. That's fair. It's the ones that steal from like the grandmothers that just really set me off.
David: I don't know. Grandmothers, you know, they don't have much time left. So what are they doing with all that money?
Matthew: Anyways, why can't these people just quietly disappear and retire [00:04:00] somewhere? You got 40 million. Why are you buying 10 cars? Why are you renting charter jets and dropping half a million dollars in a club? What the hell does half a million dollars in a club even buy, other than like five bottles of Gregor's?
David: I have no idea. But Krebs actually mentioned this in the article. To quote Krebs, there's "a penchant among this crowd to call attention to their activities in conspicuous ways that hasten their arrest and criminal charging." And he actually suggests that the 19-year-old may have bought the car for his parents, considering it was brand new and they were also house hunting.
So I'm wondering if they were looking for a house to buy with their new 41 million in crypto. It sounds like the parents may have been neck deep in this whole thing, which is reminiscent of the rumors around the Sam Bankman-Fried stuff.
Matthew: Yeah, his parents being involved in it
David: Yeah.
Matthew: that's interesting because I mean, his [00:05:00] dad is a VP and generally speaking, most people that get to VP. Don't necessarily get there because of their kindness and what wonderful
David: saying they may have flexible ethics.
Matthew: mean, this guy was supposed to go be a lawyer. He was planning on being a lawyer and apparently doesn't know what the law is or does knows and just doesn't care.
David: You know, this would be much more understandable if you were a VP at Wells Fargo.
Matthew: Me and Wells Fargo,
David: so, I mean, either way, you know, we're, we're, we're lucky these attacks, these attackers are actually not all that smart or maybe we only are hearing about the dumb ones.
Matthew: but which ones are you talking about? The six that tried to take the folks or the ones who had the initial or both of them,
David: The flamboyantly spending their money ones who are not smart enough to say, Hey, I've just stolen millions of dollars. Maybe I shouldn't go and buy a Lamborghini for my parents with it.
Matthew: 10 cars dropped a half a million dollars in a, did you there was actually, have you seen any of the Elspeth?[00:06:00]
David: No.
Matthew: So in one of those, there was a guy who was going to marry into this new family, and the new family was apparently a criminal family. And this is all fiction, but it's interesting.
It's kind of related. The father tells the son, like, "Oh yeah, we're criminals. That's how we made all of our money. We're a law firm for criminals." And he immediately goes and confesses to a stripper that he's marrying into this family with a big secret. And it turns out the family hired the stripper to try and get the stuff out of him.
David: Hmm.
Matthew: then they kill him because he's not trustworthy.
David: No.
Matthew: It's like these, it feels like that's a pretty good strategy for some of these guys, like, you know, give them a million dollars, see what they do. If they go out and buy a car and start spending money on clubs, like, Don't work with them anymore.
David: Yeah. I mean, it's almost the opposite of Brewster's Millions.
Matthew: Is that the one where he just disappears with it?
David: No, that was a Richard Pryor movie in the eighties where he stands to inherit 360 million, I think it is.
Matthew: Oh, and he has to spend it all in a
David: So he has to spend 30 million [00:07:00] in 30 days in order to qualify to inherit the 360 million, or something like that. I don't remember the exact numbers, but good movie.
Matthew: wild, wild, wild.
David: But you know, the kids blowing all this money kind of reminds me of conversations that you and I have had about Democracy: The God That Failed by Hans-Hermann Hoppe, and the increasing rise in time preference, and the overall destructive nature of that, which of course I blame on the Fed.
And its constant destruction of our money by devaluing it, because that raises time preference. That's really what this is. These guys' time preference is so high, they can't not spend the money. They can't sit on it. They can't be patient. They just don't have the low time preference for it.
Matthew: That's interesting.
David: And I think that's, you know, we're seeing that overall throughout the entire American society, I think. And this is just like another symptom of it.
Matthew: Yeah. [00:08:00] You know, thinking about that, eh, I don't have anything intelligent to say about that.
David: You did for a second. Are you sure?
Matthew: No, I don't.
David: Okay.
Matthew: There was another article we looked at several weeks ago, but didn't discuss, I don't think, where the CEO of a cryptocurrency firm said that he was physically attacked, and the attackers made him transfer both his personal wealth and the company's complete store of crypto to them. Which I'm not sure that I believe, honestly. I think that might just be an excuse.
Or he just stole the money,
David: Yeah, that's, that's what I'm thinking. I
Matthew: downside to crypto. Someone steals money from a company via wire transfer credit. It can frequently clawed back and generally they're not able to steal the entire, you know, if your company has, you know, a billion dollars in cash, like usually they're not stealing the whole thing.
They're usually stealing a much smaller portion of it, but with crypto, someone breaks into your system, they can just. Make it disappear. Also, generally speaking, thieves in the U. S. [00:09:00] don't try to physically rob companies in such a way as to wipe them out. I mean, obviously there's people who go in and, you know, this is a stick up, give me the money in the cash register. But again, that doesn't generally kill a company or wipe them out. That's much more dangerous and small, both more dangerous for the robber and the person being robbed and also much, much smaller potatoes.
As I understand, most of the time they only get like a thousand bucks when they do that. Does not seem worth risking your life, but hey, yeah,
David: says something about wealth in the digital age. The original thieves stole 243 million from one person. You know, a multi multimillionaire, you know, anybody with that same amount of conventional wealth does not have 243 million in liquid assets. They own businesses, stocks, real estate, you know, et cetera.
So they can't be robbed of the millions in a single heist. I mean, if they do have that amount of liquid assets, then they are in a seriously protected area, [00:10:00] like a public or private vault, it would take like an oceans 11 level effort. To get away with all that,
Matthew: even, even
David: three children with a phone.
Matthew: even more though. Hold on. We talked, we did the math on this the other day. I think it was, it was, it was a year that I was talking about the weight of gold. It was, it was you. Yeah. Where like a million dollars is like 250 pounds in gold or something like that. I'm going to do the math again.
Hold on. Let me do it
David: Yeah. It's 40,000 to the pound.
Matthew: So 1 million divided by 40,000... all right, so it's 25 pounds per million. That was 250 pounds for 10 million. So trying to steal 243 million in gold: if 1 million is 25 pounds, 243 times 25 is about 6,000 pounds. So three tons of gold.
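A quick sketch of that back-of-the-envelope math, using the hosts' rough figure of $40,000 per pound rather than an exact spot price:

```python
# Rough weight of $243M in gold, using the ballpark of $40,000/lb
# from the conversation (not a precise spot price).
usd_per_pound = 40_000
heist_usd = 243_000_000

pounds = heist_usd / usd_per_pound   # 6,075 pounds
tons = pounds / 2_000                # about 3 US tons

print(f"{pounds:,.0f} lbs is about {tons:.1f} tons")
```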
David: Wow.
Matthew: Like, yeah, we're not even talking we're not even talking to notions.
11 caper talking,
David: Well, we're talking. [00:11:00] Yeah. So we're, what we're talking about is we're talking about either gold finger or the book, not the movie or diehard three.
Matthew: mm Yeah. The, the multiple, multiple trucks,
David: Yeah, the dump truck's stealing the gold from the Fed. Sounds like a good idea.
Matthew: Sounds like a good idea. Yeah. 'cause everybody talks about how gold is, so, yeah. That's interesting. That's really interesting. This, it's 240 and who makes $243 million and then leaves it in a cryptocurrency. Would've cashed out like 50% of it or more to start.
Spreading my wealth out. Like, I mean, like you said,
David: Maybe he did.
Matthew: of it in stock. Oh yeah. He's, he's got another 750 million somewhere.
David: Who knows?
Matthew: Yeah, maybe wild, wild, wild, wild. You know what? He could have just given me, you know, 40 million of it and I could have helped him out here.
David: You know, that way the heist wouldn't have been so bad. You know, imagine if you'd given me four to me, and then they would have only stole 200 from you.
Matthew: I know,
David: less would that have hurt?
Matthew: I would have helped him out and helped him secure that.
David: [00:12:00] But, you know, there are a few things that this could end up leading to, right? It may start forcing these attackers to keep a lower profile so they don't get attacked. Or we could end up with something like a digital mafia: if you do a big heist, regardless of who you hit or how much you take, and you live in a certain region, you have to pay tribute to the Don in order to stay alive and keep your money. Maybe they'll help you launder it. Or maybe we could see cops starting to leak details of thefts with the hope that other crooks will help lead them to the thieves.
Matthew: I could see that. Or corrupt cops sharing information about it, since it's all digital and so easy to transfer. I could see a cop taking like a finder's fee: "Hey, I'll tell you who we're looking at if you'll give me 10 percent of what you get." Yeah.
David: that just recalled the the FBI agent from the Silk Road investigation that got busted for stealing the crypto from dang, what's the kid's name now? It's a Dread Pirate Roberts, there's his handle
Matthew: I don't
David: can I not remember his name?
Matthew: I can't remember. That's fine.
David: But yeah, so the [00:13:00] cops stole the money there, so maybe you'll see some of that too.
Matthew: That wouldn't surprise me.
David: But as an aside, Krebs mentioned that the FCC has started enforcing new rules about SIM swap and port-out fraud, which supposedly had an implementation deadline of July 8th of this year.
Matthew: Hmm.
David: So there'll be a link in the show notes to this light reading article. but they had a, they had a good summary here that I'm going to quote in full. That describes what these new rules are. So the rules require wireless providers to adopt security methods of authentication, of authenticating a customer for redirecting a customer's phone number to a new device or provider, and they would require that providers keep records of SIM chart changes, requests and the authentication measures they use.
The rules would also adopt processes for responding to failed authentication attempts, [00:14:00] institute employee training for handling SIM swap and port out fraud and establish safeguards to prevent employees from accessing customers personal information until after that customer has been authenticated.
Matthew: Interesting. It's a lot of extra overhead.
David: Yeah, which is why this was actually slated for June; they were asked to delay, and they delayed it a month. All of this is good depending on how it's executed, because that's the real wrinkle. This may sound good, but execution is where they'll fall down.
But anything that reduces SIM swap and port-out fraud, I think, is a good thing, considering how much money is tied up in mobile handsets these days. But something else you can do personally is use a paper wallet for your crypto, or an offline hardware wallet. Don't keep your crypto on an exchange.
Matthew: Not your keys, not your crypto.
David: Yep.
Matthew: All [00:15:00] right. So number two for us: AI girlfriend site breached, user fantasies stolen. I know you were on this site, Dave. I'm just kidding.
David: Hey, I still have my fantasies though.
Matthew: Ah, they weren't stolen. Thank God. This is from Malwarebytes. Muah.AI is, quote, a platform that lets people engage in AI companion-powered, not-safe-for-work chat, exchange photos, and even have voice chat, end quote. Apparently they believe in freedom of speech, so they don't censor the prompts or replies, which is a great and high-minded ideal.
Although in this case, apparently some amount of the prompts involved horrifyingly explicit references to children.
David: Well, it's interesting. Something they also mentioned in the article is that supposedly if you complain about references to children, they will take it down,
Matthew: Yeah, I...
David: so it seems like maybe they aren't entirely open. What they're doing is just failing to build in the safeguards up front, rather than doing it on the [00:16:00] back side.
So do they really care? You know, I think that brings some questionable ethics into play for that company itself.
Matthew: Yeah. Well, honestly, it's probably really hard to build in the appropriate safeguards for stuff like that. It's probably a lot easier just to do it on the back end: if somebody complains, someone will handle it.
David: Hmm. Maybe.
Matthew: Maybe. Apparently, they save all the prompts that the users create, which makes sense to me, because in order to maintain state in the conversation and make sure that you can refer back to stuff, you probably save them all.
But the problem is that they saved all of the user prompts in plain text.
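As a purely illustrative sketch, and nothing here reflects Muah.AI's actual stack, storing prompts encrypted at rest is only a few lines with a library like Python's cryptography, assuming the key is kept in a key management service rather than next to the data:

```python
# Illustrative only: symmetric encryption of chat prompts at rest.
# Fernet provides AES-CBC encryption plus an HMAC integrity check.
# The key must live in a KMS/HSM, not in the same database as the
# ciphertext, or the encryption buys you nothing in a breach.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, fetched from a key service
box = Fernet(key)

ciphertext = box.encrypt(b"user prompt text")   # store this column
plaintext = box.decrypt(ciphertext)             # only when rebuilding chat context
```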
David: No.
Matthew: Yeah, and that prompt database was stolen, and all of those prompts were associated with the users' email addresses, which can frequently be tied back to real identities. You know, some people are dumb enough to put their whole name in their email address, but for the others, there have been so many breaches
these days that I imagine you can probably tie back pretty much any email address to someone's real identity.
David: Well, I got the impression from the [00:17:00] article also that they had a sign-in-with-Google or sign-in-with-Facebook option.
Matthew: That's funny.
David: But to call them an AI girlfriend seems to be an overstatement of the relationship that's trying to be fostered here. It was either Malwarebytes or 404 Media that called them sexual partner chatbots.
Matthew: Yeah, no, you're correct. I just checked: Continue with Google, Continue with Apple, Continue with Microsoft. They've built in all the...
David: Yeah, all the
Matthew: going to go ahead and I'm gonna go and sign in right now. Just kidding. As of October 11th, there have been reports of, well, okay, let's go back to the girlfriend thing. I mean, you know what? No, that's not, I don't think I want to dive into this actually. I don't think, I don't think it's not that kind of podcast.
David: away from that.
Matthew: As of October 11th, there have been reports of extortion, specifically of the devs, trying to get continued access.
There's a Twitter thread by Greg Linares where he mentions that he had heard two reports, and one of them had asked for a VPN. It didn't explicitly state it, but that implied to me that they were asking for [00:18:00] VPN access back into Muah. Like, why else would they be asking for a VPN?
David: Hmm. Yeah. That sounds like it.
Matthew: Yeah. He also called out that there's a new risk of false accusations, since now there's a known breach. Attackers can create fake prompts and fake responses and use that for extorting people who weren't originally involved. One thing...
David: anybody, even if they didn't have an account on the site,
Matthew: Yeah, now you can just add it into the data,
if they wanted to. Of course, once they start doing that, it becomes very easy for other people who actually are in the data to say, "Oh, no, that's not me. Look, that's a false accusation. They're just adding anybody in there." So,
David: well, that would also put pressure on the original company though, to have some kind of process to check names against the list and say, was this person a customer or not?
Matthew: I mean, it's probably better on their part to just say, "No, nobody here is in the data. This wasn't really stolen."
David: I don't think they can get away with that.
Matthew: Yeah,
David: I don't [00:19:00] know.
Matthew: And Greg Linares also calls out that teams with access to sensitive internal data, who may be at risk of having it extorted, should look at creating under-duress honey tokens. Maybe like a document that you open up when something has gone wrong, and that tells the SOC that you've been compromised. Which is interesting, because that's the same type of honey token you may want on the system anyway, in case somebody's broken into it: something with a name like, you know, "password file" or something.
David: So what this is suggesting is you would have like a document on your desktop that you never open unless you're under duress, and then you open it and that triggers an alert. Is that what he's...
Matthew: Yeah, I think that's what he's suggesting. And by itself, that is very paranoid. But on the other hand, if you wanted to put a honey token on everybody's system anyway, to detect if somebody is on the system accessing things they shouldn't be, I think you can get two birds with one stone.
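One minimal way to implement the idea, sketched here with an invented decoy filename and a simple polling loop. Real deployments would more likely use file-access auditing (Windows SACLs, Linux auditd) or a canary-token service, since access-time updates can be suppressed by mount options like noatime:

```python
# Minimal honey-token watcher sketch. Polls the access time of a decoy
# file and "alerts" when someone opens it. The filename and the alert
# action are placeholders; swap in a SOC webhook or SIEM event in practice.
import os
import time

DECOY = "passwords.xlsx"   # hypothetical decoy file on the desktop

last_atime = os.stat(DECOY).st_atime
while True:
    atime = os.stat(DECOY).st_atime
    if atime != last_atime:
        print("ALERT: decoy file accessed")   # e.g., POST to the SOC instead
        last_atime = atime
    time.sleep(5)
```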
David: Yeah, I don't know. His suggestion just seemed odd to me.[00:20:00]
Matthew: That's fair. It was a little weird. Muah.AI said that all the data was secure and encrypted. Apparently they lied. Any thoughts on your part? How many companies do you think are lying about this type of thing? Or alternatively have such mediocre controls that it might be encrypted in one place and unencrypted in another, where, you know, maybe a developer is working on it or something like that.
Or a developer maintains an unencrypted copy for their dev work or something.
David: Yeah, unfortunately the latter one probably occurs more more often than we would like
Matthew: Yeah.
David: But this makes me wonder if, you know, do we need to start teaching regular people about what a SOC 2 is and that they shouldn't trust an organization that doesn't have one?
Matthew: Hmm.
Yeah. I think, I think
David: Not just, you know, any, any company that does partners with a third party. There's always a conversation about, you know, let's, let's see your SOC 2, blah, blah, blah, et cetera, et cetera. You know, what other controls do you have in place? You know, we're going to start educating regular people about the same thing or asking them [00:21:00] to say that don't do any business with a company that doesn't have a SOC 2, because they aren't protecting your data. At least a baseline level of assurance. Cause that's really all it's talked to is a baseline level of assurance, that they're doing some security. So I don't know if how hard that would be, or if that makes sense to us even trying to start getting that out in the public attention.
Matthew: Yeah. Like kind of like the underwriter, underwriters lab seal of, you know, this has been tested and validated.
David: Yep.
Matthew: I can see that. Be interesting. How long do you think before men and women both decide they're tired of imperfect relationships with real people and just move wholesale over to artificial companions who never disagree with you or get mad or leave?
I do think that we'll probably have AI personal assistants in the near future that will arrange things in your life for you. The kind of thing it seems like AI can handle is, you know, you get an email notifying you about an event, and your AI knows that you like this particular type of event.
So it throws it on your calendar automatically [00:22:00] and then tells you. I was thinking about this this morning: God, I would love to have an AI that could review my incoming bills, pay them automatically as long as they're under a reasonable amount, and then do my budget for me. I have a very detailed budget where I go through each credit card line by line.
I'm like, all right, this goes in the food budget, this goes in the entertainment budget, this goes in the insurance budget, et cetera, et cetera. It could do all that for me. Set up date plans. I'd love for the AI to set up date night with my wife. You know, hey, my wife wants a date night this night.
It'll interface with her AI assistant, figure out what she wants to do without asking her, then make the reservations, check how busy everything is. Oh, she mentioned earlier this week that she's really in the mood for tacos? All right, let's go get tacos. Or set up a vacation based on your specifications.
I was just having this conversation the other day, trying to have a conversation about, oh, I want to go here, but I want to do this, but I don't want to do that, I want to go someplace where this is available. Being able to take all that stuff into account and then come up with a good vacation for you.
I know [00:23:00] you can go to a travel agent to do that, but they probably won't know you as well. And they're motivated to send you to the hotels they have deals with, and then you have to pay them for it.
David: Surprise.
Matthew: Yeah, yeah. That was all kind of an aside. The point is that each of these AI assistants is probably going to be programmed to act like a person, because that's what we're used to. We're used to dealing with people. And they're probably going to be programmed to act like nice people, and they're probably going to have avatars that are very attractive, and humans are going to develop emotional connections to these AI assistants and potentially fall in love.
I just saw the other day that apparently ChatGPT passed the Turing test. Did you see that? I...
David: No, I didn't.
Matthew: ...think it was ChatGPT.
David: But from what I've heard, the Turing test has been a less-than-great indicator of intelligence for some time now.
Matthew: Yeah. Okay, so [00:24:00] this is weird. I see an article from September 18th saying ChatGPT passed the Turing test. I see an article from June 14th: GPT-4 has passed the Turing test. I see an article from July 25th, 2023: ChatGPT broke the Turing test. So apparently it doesn't mean anything anymore.
But the point is still that you can mistake ChatGPT, at this point in time, for a human, maybe a slightly weird human that writes in specific ways. I've felt very strongly about this since watching Blade Runner 2049 back when it came out in 2017. The AI companion played by Ana de Armas was, in my opinion, an extremely prophetic look at the future of our relationships.
I'm sure that people are still going to have physical companions, stuff that we're not going to get into on this podcast. But having almost all of your emotional needs handled and taken care of by somebody who's unfailingly there to support you, who can look any way you want, and who, once we get VR up to a level that may be basically reality, can be tweaked to be exactly what you [00:25:00] want emotionally and socially...
I don't know. I think that people are just gonna basically abandon the real world.
David: It's possible. There have been several movies and stories about that. The surrogates come to mind with Bruce Willis and Roda Mitchell. But in theory, that's not, that sounds great. You know, basically you get the ideal partner for you. But is that what a per, but is what a person wants you know, perfection, what a person needs to be the best version of themselves.
You know, if you had a partner or, or friends who did not push back on your flaws or push you to be better and just accept you for who you are, would you do things to better yourself? Or would you just end up becoming a self centered, you know, my shit doesn't stink jerk, you know, imagine.
Matthew: so
David: Yeah. So imagine a world where you only interact with it with AI sycophants and no real people.
All interactions with other people are done through AIs. You know, I think that's terrifying. It would be like a perfect [00:26:00] prison. In the Matrix, Agent Smith said that the first Matrix was like that, and it collapsed. People would not accept it. You know, I think it's a pretty good premise for a, for a book or a movie.
Matthew: Yeah, I feel like there are a lot of people in charge of companies who already kind of have this, like really wealthy people. They're surrounded by sycophants, and we've seen that a lot of them are quite unhappy. So, yeah, I don't know.
David: it's hard to say about, you know, what that says about them or, you know, that, that aspect of it. But I think if you could have the ideal AI companion, um, you would, you would want one that you didn't see as perfect because the AI would know what you need versus what you wanted. So sometimes you'd get a cookie and sometimes you get a kick in the nuts.
Matthew: Yeah.
David: It almost be like, you know, a mistress that your wife knows and is a good on good terms with. Yeah. Yeah. Yeah.
Matthew: Yeah, no, I agree. You probably would not want them to be just a complete [00:27:00] pushover all the time. You'd have to there, there's probably a percent. It'd be interesting to see if You could choose like how much of a brat you wanted your companion to be.
David: Well, I don't think you should choose. I think, you know, we were talking, you were talking about
Matthew: Oh, it chooses
David: evaluating personalities is, you know, you go through some kind of learning period, maybe it's months, maybe it's a year or something for the AI to get to understand you, who you are, what motivates you, what motivates you.
What doesn't et cetera. And for it to understand what you need and push you for your needs and not your wants. So, you know, maybe it forces you to get up at six o'clock in the morning and work out because that's what you need, even though you don't want to, you know, things, you know, that's a very simplistic.
Concept, but that kind of thing is what I'm thinking about. Forces you to go to night school to get that degree or, you know, to learn some HVAC or whatever, because that's what you really need to do versus, you know, playing Call of Duty till two in the morning.
Matthew: That's interesting. And I could, [00:28:00] is that the same AI companion as your quote unquote girlfriend? Or is that a different one? Is that like a, cause that almost sounds like what your personal trainer is supposed to do.
David: Well, like I said, that's a simplistic example of just that. I'm thinking about in other aspects too outside of that.
Matthew: I almost feel like you'd want two separate. AI assistance for that, like one. And then I think that people I'm sure would be able to choose
David: Well, maybe you'd end up with, well, maybe you'd end up with a half a dozen. You
Matthew: Oh, I'm sure
David: your nutritionist. One would be your physical trainer, you know, maybe, you know, maybe that's the way it would work out for you
Matthew: My nutritionist is going to talk like Emeril Lagasse
David: no, I was thinking Alcazar from Futurama.
Matthew: 'cause No, no. Or nevermind. No, nevermind. I'm trying, I'm, I'm trying to think. I was, I was gonna make a stupid accent. Justin Theo, is it Justin? Theo. Who's the Cajun chef? Cajun chef Theo from the eighties. Hold on. That actually, it actually comes up from the eighties.
[00:29:00] 'cause I used to watch this guy all the time when I was a kid. Justin Wilson. I guarantee that guy, that's who I want doing
David: oh yeah, yeah, yeah.
Matthew: and then I want Arnold to be my personal trainer guy.
David: You'd have Hans or Franz instead,
Matthew: And we're here to bump you
David: op.
Matthew: That'd be amazing. That'd be, see, this is the future that I want is I want, I want, you know, I want to have like a team of AI companions that like push me and and I don't know that I would want, that I want that in the same. I don't know if I want that, the push and the pull in the same the same companion, but I don't know, maybe it does.
Maybe it's to make them seem more real if they're more complex.
David: Yeah. Well, I mean really depends on the relationships That are expected to be developed but you know having one that's just gonna kiss your ass all the time was gonna end in horror
Matthew: Probably. Yeah, that makes sense.
David: But this whole thing is a disaster, [00:30:00] you know marriages are gonna end because of this Jobs are going to be lost and people are going to die from this breach, you know, the data is going to get out or someone's going to blackmail someone for, for what's in here and they're going to kill themselves. This is not, this is not great.
But I mean, but, but what this, what this tells us is you really need to think before you interact with any AI that's outside your control. You have to assume that everything you say or type is either going to get out to your organization. Or the world depending on the context of whatever you interact, the AI you're interacting with and what the context that AI is.
So be careful what you say.
Matthew: All right. For our final here, we have a short discussion. I threw this
David: Yeah. Mm hmm.
Matthew: top of, I have no idea this is coming. Have you been keeping up with your low confidence detections from tech dot FYI? And the answer is no. I added this as a bonus article, something that I actually started thinking about about six months ago and spending the last 12 months or so thinking pretty heavily about detection as I'm sure you.
Folks, just from, we've been doing a lot of detection related articles, [00:31:00] maybe it's time I started doing something else. The author is Gary Katz, and he suggests using these low severity, low precision alerts to enrich what he calls primary detections that are of a higher severity and precision, but also narrower in scope.
He also mentions correlating low-confidence alerts on endpoints to find other activity, which is basically RBA, risk-based alerting. Dave and I have discussed RBA in the past. I still think it's a good idea, but it requires a complete reworking of how your alerting works. In addition, I've been thinking about other stuff that your IR team cares about that might not strictly be an alert.
For example, an RDP connection to or from another system, a new user created, new scheduled tasks. Nobody in their right mind would create these as alerts; scheduled tasks happen so many times in a large environment. But if you create an informational alert for these types of items, where that informational alert never bubbles up on its own, they can be correlated with the atomic alert.
So for example, if you do alert on network IDS for command-and-control traffic, then, if your SIEM or your SOAR tool or whatever you're [00:32:00] using for your ticketing system is functional, you can then see what I'm personally calling contextual alerts for other things that happened on the host that generated the traffic, other activity that might raise or lower your suspicion.
For example, the network IDS alert for command-and-control traffic comes up and you see, oh, there was a new scheduled task put in like a minute before this alert. That's super suspicious. Or, here's an unknown executable that was run on this system right beforehand and has never been seen on any other system in the environment.
Of course, sometimes there won't be anything, but sometimes there is. Either way, his point is these alerts exist and you should be using them. For example, Palo Alto's impact detection is a low-severity alert. That's wild to me. Personally, I've seen that do an excellent job finding penetration testers.
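A rough sketch of what that contextual lookup could look like, with invented field names, since neither the article nor the discussion prescribes a schema:

```python
# Given a primary (atomic) detection, pull zero-severity "context"
# events from the same host in a surrounding time window: new scheduled
# tasks, new users, RDP connections, and so on.
from datetime import datetime, timedelta

def context_for(primary, events, window_minutes=10):
    window = timedelta(minutes=window_minutes)
    return [e for e in events
            if e["host"] == primary["host"]
            and abs(e["time"] - primary["time"]) <= window]

primary = {"host": "ws-042", "time": datetime(2024, 10, 14, 9, 30),
           "name": "IDS: command-and-control traffic"}
events = [
    {"host": "ws-042", "time": datetime(2024, 10, 14, 9, 29),
     "name": "new scheduled task"},
    {"host": "ws-099", "time": datetime(2024, 10, 14, 9, 29),
     "name": "new user created"},
]
print(context_for(primary, events))   # only the ws-042 scheduled task matches
```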
David: It sounds like the idea, this is the idea of this is using RBA, but setting some alerts to add zero risk. So after the threshold cross, all the alerts that cause the threshold to [00:33:00] be crossed are going to show up to the top. But also you're going to get all these other alerts that did not add any risk, but add the context.
So I don't think it's a bad idea. So you know, as, as mentioned, you know, these are context alerts and not risk alerts. It might resourcing, you know, how much time do you have to spend creating context alerts versus how many times, how much time do you have to spend generating, creating risk generating alerts?
Okay. And maybe you end up setting all the vendor alerts to contextual only. So they're in the RBA, but they add zero risk.
Matthew: I don't know. I would, I would actually almost potentially do the opposite. There's a lot of good, there's a lot of bad vendor alerts, but there's also a lot of good vendor alerts. And I know it requires a lot of work to go through there and figure out which ones are the good ones and which are the bad ones.
David: So are you saying that some vendor events should be alert? Some should be context. Or are you saying that vendor alerts should be risk alerts and custom rules should be [00:34:00] the, the, the context alerts?
Matthew: I think, actually, I think it ends up being a mix of both. If there's going to be some custom alerts, like for example, you know, new user created a new user created, you do not care about that. 99. 99 percent of the time. The only time you care about that is when it happens, you know, within five minutes of a malware detection or something like that.
David: Or within five minutes, that user generates a ton of risk.
Matthew: Yeah, yeah, exactly. So I think, and the nice thing about this contextual alert is it does work with RPA. Like you said, you can create them as zero risk alerts, but it also works with the way that we've been doing alerting in the past as well. So it just works in both scenarios. So, and maybe what you end up doing is you end up creating, you know critical and high network IDS alerts are primary alerts as he calls them or atomic detections.
And then everything medium is a medium and low as a risk alert, and then informational as a context alert. Like you can kind of mix and match these things too.
David: Yeah. I [00:35:00] think in the, in, in, in if you're thinking about this in the RBA context, then you look at it as, you know, is it a threshold alert is a risk alert or is it a context alert? And then, so it's, you know, high risk, medium risk, zero risk.
Matthew: Yep. I don't know. Like I said, I wanted to have a very short discussion on it
David: Yeah. It's not a bad idea.
Matthew: it is interesting. And you have a point about the, I mean, the biggest thing here is the time. Do you, do you have time? To set up all these contexts because there's so many things that the IR team cares about, like maybe that's something you task the IR team with setting up because you don't, if those aren't going to be firing alerts, you don't need them to be quite as robust and maybe quite as nearly as well tuned as you know, but you called threshold alerts here that you could ask the IR team to basically say, Hey, you know, as long as you set it up with this tag here, that means context, like it'll never fire, you know, go wild, whatever you guys want to see.
Feel free to set up.
David: [00:36:00] And maybe that's something you could get an intern to do too. So you set your parameters for your intern and you say, all right,
Matthew: We want to see all scheduled tasks, all new services, all the executables I've never seen before. All new users, all RDP connections, all remote management and monitoring tools. Yeah. Give them a list of stuff and then have them put that together.
David: Yeah. So I think this could be, this could be useful.
Matthew: Just need some interns.
David: Don't we all,
Matthew: Can I borrow one?
David: Bill Clinton's got mine.
Matthew: Oh boy. Well, that looks like that's all the articles we have for today. Thank you for joining us. Follow us at Serengeti Sec on Twitter. Subscribe on your favorite podcast app and leave us a five star review.