SS-NEWS-142: GM Sharing Driving Data, Testing Detections

Episode 142 | May 06, 2024 | 00:45:37
Security Serengeti

Show Notes

This week, David and I discuss how GM is fraudulently collecting driving data and selling it to insurers, and Anton Chuvakin has another article on Detection Engineering - How to test your detections!

Article 1 - Long Article on GM Spying on Its Cars’ Drivers
Supporting Articles:
How GM Tricked Millions of Drivers Into Being Spied On (Including Me) [Non-Paywalled]
GM Shuts Down Tool That Collects Data on Driving Style

Article 2 - Testing in Detection Engineering (Part 8)

If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!


Episode Transcript


 David: [00:00:00] Welcome to the Security Serengeti. We're your hosts, David Swinger and Matthew Keener. Stop what you're doing and subscribe to our podcast, leave us an awesome five-star review, and follow us @serengetisec on Twitter. 
 Matthew: We're here to talk about cybersecurity and technology news headlines, and hopefully provide some insight, analysis, and practical applications that you can take into the office to help you protect your organization. 
 David: And as usual, the views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers. 
 Matthew: I made the mistake of getting a GM for my last car. Now my wife knows that when I say I'm going to the strip club, I'm actually playing Pokemon Go. I'm never gonna hear the end of this. 
 David: Well, I mean, it's probably just because you're using her account. That's what she doesn't like. 
 Matthew: You're not wrong. She'd be very upset if I did that. 
 David: Yeah, that's because you suck at it. 
 Matthew: You suck at Pokemon Go. 
 David: I've never played it. I've seen people play it and laughed at them, but 
 Matthew: Yeah? 
 David: I've never actually played it myself, so I'm not sure exactly [00:01:00] what it entails. 
 Matthew: It's an incredibly boring game. 
 David: I'm sure that's why it's so unpopular. 
 Matthew: You know what? It does have promise, because, I know you don't really like John Ringo, but he did release a game that's a similar kind of augmented reality game, where you actually go out and you shoot stuff. It overlays the bad guys on your real-life world, so you can, you know, use cars as cover and stuff like that. 
 As soon as somebody releases something like that, I am all in. 
 David: That would be, yeah, that's not too bad. Well, I mean, you'd have to tie that in with like a Google Glass or something like that, though, instead of holding up your phone. Unless, I mean, it might work if you got, you know, a toy gun that had a cell phone mount on the top, like a sight, where you would click the phone in and then it would act as your red dot or 
 Matthew: I think it's going to require the augmented reality glasses, but as soon as they come out with that, I am [00:02:00] 100 percent there. 
 David: Yeah, but with that kind of thing, they're still going to have to have like no-go zones, like they had to with Pokemon Go, so people don't get hit by cars and stuff because they're absorbed in whatever they're doing, not paying attention. 
 Matthew: People just running across the street in the middle of, like, firing guns at random things, yeah. And I can just see the calls to the cops from that. Oh my gosh, people with guns! 
 David: Yeah. I'm trying to remember what the name of the movie was. There was a movie in the eighties. Oh man, the name eludes me now, but it, it focused on a guy who was in college and he and his friends used to play this game where they would shoot each other with paintball guns. Like during class and, you know, just daily life throughout the college. 
 It was kind of like a game of tag almost. 
 Matthew: I don't think you can get away with that anymore. 
 David: Oh no, you 
 Matthew: Those are a no-no. No, that's not what I'm looking for. Oh my God, why can't I think of it? What do they say now? Where, like, even if you point your finger in a gun shape, no [00:03:00] tolerance. Like 
 David: Oh, zero tolerance. Yeah. 
 Matthew: tolerance. You're suspended. 
 David: that's fine. I didn't want to go to the school anyway. Alright, but maybe we should talk about what we're here to talk about. 
 Matthew: No, I thought we were going to, I 
 David: Not that we're here to talk about 
 Matthew: the 
 David: Pokemon Go. We're changing the format. 
 Matthew: Changing the format and pivoting over into Pokemon Go. 
 David: Eh, we'll probably get more listeners. Alright, first article: How GM Tricked Millions of Drivers Into Being Spied On (Including Me). That "including me" is in the title. That's not 
 Matthew: Oh, 
 David: me. Because my car, my truck is ancient and has no connectivity in it. 
 Matthew: I'd just be impressed if they managed to spy on you at all on that thing. 
 David: they just follow the rust trail as it goes down the road. All right. So automakers have started selling data about driving behavior to the insurance industry. The drivers of GM vehicles in particular weren't informed, and insurance companies used that data to make risk [00:04:00] determinations about drivers, which led to higher premiums for some. Now, the title says the article talks about automakers, but it's really solely focused on GM. There's no real reference to any other automakers in there. Well, we'll talk about a different article later on that references some other car manufacturers. 
 Matthew: We will. Exciting. 
 David: Yeah. It's a twofer, almost. Not quite. But it also does not say when this began. But if drivers bought a new car now, they may have intentionally, unintentionally, or without their knowledge opted into a driving-habits tracking program called Smart Driver. 
 Matthew: Sounds great. Why would anybody complain about this? 
 David: I mean, it's what people think of themselves. They're a smart driver. So I mean, most people would sign up. 
 Matthew: Yeah, 
 David: I think they've done studies on that. I think it's like the vast majority, like 90 percent or something. If you ask someone if [00:05:00] they're a good driver or not, they say, of course they are 
 Matthew: And then while at the same time they're raging at all the jerks on the road 
 David: right. 
 Matthew: they don't see at all the, yeah. 
 David: Yeah, it's like 90 percent of the people think they're a great driver and think the other, I think 100 percent, of the other drivers are not, which is accurate, cause I've lived in North Carolina. But this Smart Driver program is a subset of GM's OnStar, which was originally designed as a kind of built-in roadside assistance many years ago now. But this tracks a bunch of metrics about how a driver uses their vehicle, including the distance per trip and in total, the start and end times of trips, the number of times they've hard braked, and the counts of rapid acceleration and speeding events. 
 Matthew: So they did mention, the person who wrote the article said that they had no speeding events in their data. They got their data from LexisNexis, and it had information on something like 200 [00:06:00] trips, and it said they had, you know, so many hard-braking events, et cetera, et cetera. But they said they had no speeding events. How do you not have any speeding events? Even if you are a strict believer in following the speed limit, you know, you're going down a hill and all of a sudden you're like, oh shoot, I'm going too fast now. I don't know, I just thought that was very weird. 
 David: Yeah. I thought it was odd also. It was like, hmm. How is that even possible? 
 Matthew: I wonder if it's kind of like when they get you with the radar. They don't pull you over for doing 1 mile an hour over, because the radars are typically, you know, 5 to 10 miles an hour off. So they usually don't pull you over till you're 10 miles over, and that gives them some leeway, so it's difficult for you to argue, well, the radar guns are not precise. Cause then when you go into the court, if you contest it, they usually automatically drop you down to the next one down, just to kind of preempt you from doing that. So I wonder if they did something like that here, where they're like, well, we're just going to give you, you know, a cushion of nine miles an hour or something. 
 David: Oh, actually, you know what it might be, though, [00:07:00] is if the author is based in New York. You just can't go fast, because there's so much traffic. You just can't go, 
 Matthew: You just can't, 
 David: just not possible. 
 Matthew: that is totally possible. 
 David: You know, whereas in like Texas, if you did not have a speeding event, then you probably would get run over. 
 Matthew: That's fair. 
 David: But if they were under Smart Driver 2.0 and also bought GM's OnStar insurance, then they would also collect the additional counts for hard cornering, forward collision alerts, lane departure warnings, and seatbelt reminders. 
 Matthew: Seatbelt reminders. Interesting. 
 David: Yeah, if you can put up with that ding long enough. At least in my truck, if you don't put the seatbelt in and you just click the button, it goes away. I don't know about modern vehicles; I think the seatbelt reminder never goes away in those, though. I think it would ding forever. 
 Matthew: I bet it would.[00:08:00] 
 David: Should try that out. 
 Matthew: Anytime now, 
 next time we meet for lunch, by the time we get there, I'll have torn out all of my remaining hair 
 David: Both strands of it. 
 Matthew: strands of it. So I'm kind of surprised that we haven't found any apps on cell phones. Cause honestly, they don't need the car to do this, right? We all carry the snitch in our pocket when we get into our car. I'm really surprised there's not an app you can download, you know, your insurance app or even a flashlight app on the store, that gets all your permissions and then collects and sells all this information, connected to your Google account or your Apple account. 
 David: Yeah. Well, I mean, that would be much easier on Android. Apple is a lot more strict about what an app can do, and requires approval for what it does as well. So I think that might be harder on Apple. 
 Matthew: They're opening up third-party app stores, or they're going to. I don't know if they have actually done that yet, but it's about to get a lot easier. 
 David: Oh yeah, because of the lawsuit. 
 Matthew: Yep. [00:09:00] I 
 David: anything from it. I like Apple's walled garden. It protects me from myself. 
 Matthew: have no response to that. I'm sure that Google Maps and Waze and et cetera are collecting those. I guess the question is just, have they been caught selling it? So, I don't know. 
 David: Yeah, and Waze went downhill after Google bought them, too. Their data's not nearly as accurate as it used to be. 
 Matthew: Not enough people snitching on the cops. 
 David: Oh, well, that's one of the only reasons I run it. 
 Matthew: Have you seen the video of the cop who's sitting there filming himself on Waze? As soon as somebody reports it, it pops up and says, is there a speed trap here? And he hits no. He's got like one hand denying the Waze reports and the other hand out the window with the speed camera or whatever. 
 David: Interesting. I wonder how well that works. 
 Matthew: I have no idea. I have to imagine that it's not just a single thing, but I don't know. 
 David: Yeah, I would think, well, I guess you don't know exactly how the [00:10:00] program works, because if 10 people say there's a speed trap there and one user keeps saying no, they might catch on to that, but I'm not sure. 
 Matthew: It's the same user, like this guy hasn't even moved. What's going on here? 
 David: Well, I'm kind of surprised that Google hasn't partnered with law enforcement to get some of that stuff waved off or whatever, you know, so the cops would have some kind of portal to the Waze back end to say, hey, in this area, even if people report a cop there, don't accept it or don't send it out to the other drivers. 
 Matthew: Give them like a pre warning of the area they're going to be targeting. 
 David: Yeah. Well, not even a pre-warning necessarily, but just allow them to log in and pick a zone and say, don't accept any traffic cop reports in this area. 
 Matthew: Fair enough. 
 David: But we mentioned that drivers could have intentionally, unintentionally, or without their knowledge signed up for this, and that's because [00:11:00] the signup is hidden in all the massive paperwork you have to do when you buy a car, and the dealership also may or may not have been diligent about making sure you were aware of what you were agreeing to when they ran you through that paperwork. Or they may even have signed you up without telling you. 
 Matthew: What? Dealerships would never betray me like that. Car 
 David: I mean, why would they do that? 
 Matthew: bastions of trust and order. 
 David: Yep. Just like all of our institutions in America, but surprise. They do this because GM incentivizes them to do it. 
 Matthew: Incentives. Incentives. Ugh. Yeah. 
 David: I'm not saying, you know, they don't exist. But, you know, in the article there's a claim by GM that customer trust is a priority for us, quote unquote, which is of course bullshit. 
 Matthew: That's depressing. 
 It's always, it's always bullshit. The moral imperative of a company is to make money. Trust is [00:12:00] only valuable to them so long as it helps them make money. And as soon as it doesn't matter, then off it goes. 
 David: Would you call that a moral imperative? 
 Matthew: I'm riffing off the, what did they say? Someone back in the 70s said something like, the only moral, 
 David: Oh, you're talking about Friedman, 
 Matthew: yeah, 
 David: where the point of a company is, 
 Matthew: I don't remember the exact quote, but 
 David: is that the 
 Matthew: primary responsibility. So, okay, maybe morality is the wrong word. That's fair. 
 David: Yeah. 
 Matthew: And, yeah, here's the 1970 article, "The Social Responsibility of Business Is to Increase Its Profits": the responsibility is to conduct the business in accordance with their desires, which will generally be to make as much money as possible. 
 So, and I hate shareholder doctrine, so. 
 David: Yeah. 
 Matthew: No surprise. No surprise. 
 David: But you know, after this came out, GM's like, we're shutting down this program. Which is surprising, because they said it's worth annual revenue in the low millions. So you know, some [00:13:00] executives are not getting ivory backscratchers because this has been shut down. 
 Matthew: I feel very bad for them. So very bad for them. 
 David: Yeah. And apparently a salesperson's pay could be docked if they failed to sign someone up for OnStar. 
 Matthew: I think the salesperson that she talked to was incredibly forthcoming on this, and I worry about their job after this. Cause he talked a lot about how he gets around what users want in order to sign them up for this, because the company incentivizes them in a way that almost guarantees a fraudulent transaction. If you told a customer, we would like to track you and sell all of your location and speed data to insurance companies, if you told them in plain language, customers would say no, 
 David: Right. 
 Matthew: but you're going to punish the salesperson if they can't get the customer to want this. So you've just created the perfect [00:14:00] set of incentives for the salesperson to just be like, well, just silently sign them up, and if nobody knows, then nobody gets hurt. 
 But this is perfect for the company, of course, because now they get to claim, Oh, well, we would never do that. We would never tell people to do that. 
 David: You know, and that's something that kind of highlights the shortsightedness of the whole fiasco, because they're like, okay, we're going to be less than forthright about getting people to sign up for it. Then we're going to sell that information to LexisNexis, who's then going to sell it to an insurance company. 
 Who's then going to change people's premiums based on the data that we give them. How do they not realize that this was going to come out? 
 Matthew: Eventually, 
 David: not going to be hidden forever, and maybe it was just a matter of make hay while the sun shines, right? Like, you know, we're going to get found out, but in the meantime, we're going to make some millions off of this. Then we'll say, oh, we're sorry, customer privacy is very important, blah, blah, blah. [00:15:00] You know, the only thing, I mean, what we can hope for is that these lawsuits still go forward and GM does end up having to give back those low millions to the people they basically defrauded as part of this. 
 Matthew: yeah, that's the only way that I mean, if the company's only social responsibility is to make money, then the only way you can punish a company is prevent them from making money. 
 David: Yeah. I mean, we just talked about this in our last podcast, how you have to disincentivize companies through the legal system to prevent them from doing these heinous things. 
 Matthew: Murder them. 
 David: Some of them certainly deserve it. I mean, I think Microsoft should at least have some amputations. But you know, one of the other incentives from GM is that they have a mandate that dealerships have to provide reports on who gets signed up for OnStar, or the number of OnStar signups they have, and it's tracked every month. And this quote seemed very appropriate from the [00:16:00] article: GM doesn't want dealerships selling cars. They want them selling connected cars. And this really jumped out at me, because any company that sells something that collects data is now incentivized to basically become a data broker. They will, as in this case, build data collection into their products just so that they can sell that data, even though they don't actually need the data they're collecting. 
 Matthew: Yeah, I mean, extra revenue stream? 
 David: Well, obviously that's what GM got their low millions from, right? 
 Matthew: Yeah. I think they pulled this from the software industry, right? All these people creating these crappy apps on the phone that people install and now they can sell their data. 
 David: Well, I think it's just a matter of the car companies saw themselves in a position to do this, and they saw how data is a revenue stream, regardless of, you know, whether you're in Silicon Valley or in Detroit, 
 Matthew: that's fair.[00:17:00] 
 David: I mean, because once GM has sold you the car, why do they care about all this data? How far you drive, when you took your trips, the number of trips you took, and all this. I don't think it would be terrible if they were upfront about it and made it plain what you were signing up for. But I think the fact that they don't is basically fraud. I mean, data collection doesn't have to be all bad. Auto manufacturers could use this to understand how parts of the car work, or if they're functioning as they expect, or maybe save you some maintenance or breakdown problems, or perform targeted recalls based on some of the sensor data from the car. They'd certainly need fewer narrators from Fight Club if they did that. 
 Matthew: Yeah, and I'm going to take the obvious contrarian viewpoint here, just so we can argue it over. Isn't this good? Don't we want the insurance company to know how we drive, so they can accurately price us based on risk? Aren't we seeing all kinds of [00:18:00] articles about how the insurance companies have screwed up their risk equations and they're jacking up the rates on everybody? Now, this is ignoring the obvious issues of the lying and the duplicity. 
 David: Well, it depends on if they're doing it because they've got bad calculations. You know, if they're already using the data improperly or incorrectly, with their bad algorithms or bad models, more data is not going to improve their models. You could say that more data is always better, but if you're feeding it into bad systems, you're still not going to get good output from that. 
 Matthew: I guess not. Actually, we talked about this a little bit before the show, and I wish we'd saved it for the recording, but there is an interesting question here. I know some insurance companies are already doing this, where you can plug something into the port in your dashboard and it reports your driving data to them in return for hopefully better [00:19:00] rates. And I would not be surprised to see a day where the government or the insurance companies start requiring us to do that. 
 David: I actually don't think that's a bad idea, but I think they should go about it in a different way. Because say you're an insurance company, right, and you want this data in order to improve your algorithms, improve your models, and hopefully save the insurance company money by doing fewer payouts. Then you could incentivize people to put the box in in the first place. Say, you know, we will give you a 5 percent discount upfront if you install this black box, and if your driving habits are such that you are a lower risk based on our algorithms, then your premiums are also going to decrease. So you've got the good drivers being rewarded, and the bad drivers are not exactly punished, but they do pay more, because they don't get the discount up front, nor do they get the possibility of decreased [00:20:00] premiums. 
 Matthew: That 
 David: I mean, I think this could be done in a better way than it is. 
 Matthew: is always true, though. 
 David: Yeah, of course, because you and I thought of the way to do it, which makes it better automatically. 
 Matthew: We're super smart. They should hire us and pay us a bunch of money. 
 David: Yeah, exactly. So we can buy somebody else's car that doesn't have the black box in it. 
 Matthew: You know, I was actually wondering about this. Are there any manufacturers that aren't doing connected cars now, or never have? I wish I'd looked this up beforehand. 
 David: It would have to be a boutique shop, because, I think we actually may have talked about this before, you know, semis are all connected now, and they send data back to the mothership so they can adjust the engine and stuff on the fly automatically. It's just the way that they're built now. And you imagine, if we're moving to this all-electric future, there's no way, zero chance, you're going to get a non-connected electric car. 
 Matthew: That's funny. Actually, I just did a search for this, and the top article [00:21:00] is from March 11th, 2015: eight unconnected cars for tech-wary drivers. Nine years ago. There's almost nothing recent about it. 
 David: So what are they, Matt? 
 Matthew: I'm not going through that from a 2015 article. 
 David: All 
 Matthew: Tell me more about, tell me more about Kelley Blue Book. 
 David: All right. So Kelley Blue Book is the other article we mentioned at the top of the show, the one that mentions other auto manufacturers besides GM, and they say that some manufacturers actually reserve the right to collect and sell data from cars and even from the phones that connect to the cars. 
 For instance, Subaru's policy specifies that passengers consent to data collection just by entering the car. 
 Matthew: That is the worst version of those stupid agreements, where just opening it, as long as you open the door. Oh, what do they call this? The click agree, the 
 David: Click throughs? 
 Matthew: Clickwrap agreement, or 
 David: Oh, yeah, yeah. 
 Matthew: It's just opening the door. Oh my god.[00:22:00] 
 David: Well, I mean, then any Subaru driver who's an Uber driver then needs to give a disclosure for their 
 Matthew: Every time somebody else 
 David: every time you get into the car. Hey, I'm sorry, but you're going to have to also sign this waiver to ride in my Uber. 
 Matthew: Yeah 
 David: that's just litigation waiting to happen, I think. 
 Matthew: yep 
 David: but this next one's the best, which is Nissan. 
 So Nissan's privacy notice says they can collect and share data about your sexual activity. 
 Matthew: how do they what? I don't. 
 David: I think maybe it's shock compressions when the car's not in motion. Or I'm guessing it's based on GPS data, actually. I don't know. 
 Matthew: Interesting. Hold 
 David: I'm sure they're going to move into, Nissan's going to start making mattresses so they can more easily collect that. 
 Matthew: You know what? I guarantee you people would buy that. There are some people that are so into quantifying their life, they'd want to know, like, you know, I had sexual activity an average of 3.7 times last week. 
 David: Yeah, and you can bleep this out, [00:23:00] BLEEEP 
 Matthew: You can match it up with your smartwatch and your heart rate. So interesting. So, okay, the New York Post has an article about this from last year, and the Nissan spokesman said: Nissan does not knowingly collect or disclose consumer information on sexual activity or sexual orientation. Some state laws require us to account for inadvertent data collection or information that could be inferred from other data, such as geolocation. So if you go to a gay bar, or you are going with your paramour or something like that, it sounds like they could infer and imply. Or maybe shock compressions, like you said, could be used to infer. 
 David: So basically, this is just in case. 
 Matthew: Yeah, just in case they're like, oh shoot, we just figured out that this dude is hooking up right now. 
 David: Yeah, I'm skeptical of that statement [00:24:00] based on a couple of these other things in their privacy notice. Health diagnostics data, genetic information, how are they getting that? 
 Matthew: there's a little needle that, when you sit in the car, it pokes up and pokes you in the butt. 
 David: Yeah, by accident. And other sensitive personal information for targeted marketing purposes. How they plan to get all this stuff is unclear. 
 Matthew: Oh, no, no, no. They've got a camera that looks into the back seat, there's some collection gear: a little DNA sequencer, a camera, some needles to get the blood tests, 
 David: Yeah, it streams straight to Pornhub. 
 Matthew: Yeah. That's there. Oh, that's the other revenue stream. You're right, it's all about revenue streams. Do you think that companies would sell that, if they thought they could get away with it? 
 David: Oh, absolutely. They would of course. 
 Matthew: Like some motel chain putting cameras in their rooms, selling the streams from the rooms, and then offering those rooms at a discount to users. [00:25:00] There'd be people that are like, well, honey, it's 50 percent off the normal price. 
 David: Actually, they probably would 
 Matthew: Yeah. 
 David: And I'm wondering if it wouldn't be perfectly legal for them to do so, as long as they got the consent 
 Matthew: tell people. 
 David: and they do the age verification 
 Matthew: Yeah. Yeah. Definitely would not want to make a mistake there. 
 David: But why this matters: you know, if you can ever connect a device, assume everything about how you use that device is being collected and sent to the manufacturer. 
 Matthew: Including your, well, nevermind. 
 David: Yeah. Including things. But you need to pay extremely close attention when you're buying a new car for this kind of thing, for sure. 
 Matthew: And it's funny, because, honestly, you have to pay a lot of attention when buying a new car anyway. This is where they like to sign you up for those extra warranties or the higher interest rates. But this is just dark patterns all around. We talked a little bit about dark patterns last week, and this is just more of that. Capitalism, 90 percent dark patterns 
 David: Now with more dark [00:26:00] patterns, I think 
 Matthew: with more dark patterns. 
 David: that's the slogan. And if you buy an electric one, like I said, just give up. 
 Matthew: You can't turn that off. 
 David: Yeah, 
 Matthew: And frankly, they don't want you to turn it off, cause they can disable your cool stuff and your features. 
 David: And of course, you know, at least as far as GM goes, they say you can get this turned off by contacting customer service. And apparently there's some DIY advice on opening the car up and removing the OnStar module, which would most certainly void your warranty. But I'm wondering if you can't find a way to just disconnect the antenna. The problem there is that the antenna is probably used for all sorts of things, and not just reporting back to the mothership. 
 Matthew: I saw an article talking about how to disable that for at least one model. So at least one model had a separate antenna for it. I think it really does depend on how they're architected. 
 David: Yeah. Cause you'd think they would architect it so that it's all on one antenna, which would also [00:27:00] disincentivize people from doing that thing, right? 
 Matthew: I don't think they're thinking at this point about hostile customers, because remember, they're doing everything for the customer, 
 David: Well, they're not paying attention then, because all they have to do is look at the whole right-to-repair fiasco that John Deere's gone through over the last decade, which they finally caved on. And now John Deere's had to admit that people, when they buy a John Deere tractor, have a right to repair the John Deere tractor. So, 
 Matthew: were smart enough to learn from other companies, 
 it'd be really 
 David: that's the way capitalism is supposed to work is they're supposed to learn 
 Matthew: And they don't 
 David: when they don't. 
 Matthew: I might be a little sarcastic here. Not sarcastic, also sarcastic, but that's not the word I'm looking for. 
 David: Being something. 
 Matthew: Yeah. Yeah. 
 So they commented in the article about getting your reports from LexisNexis and Verisk. I actually went to those websites to try and figure out how I could get my report, and I could not figure out how to get it, although LexisNexis did have a nice spot on the front where you could ask them to delete all your information. 
 [00:28:00] So I went ahead and submitted my request to delete all my information. It took about three or four minutes; it was not too hard. I don't know if that prevents them from gathering more information about me going forward, or if it just deletes all the old stuff. I've considered some of those privacy protection services that go around and delete your data for you, but I did see a blurb from Consumer Reports that says they're not worth it. And I've heard anecdotally that requesting your information is separate for each data broker, and you basically have to go around and play whack-a-mole on all of the data brokers. Which is annoying. There's a use case here, for whenever we get full-on AI agents, for an agent that constantly searches for your data across the internet and constantly asks for it to be removed. 
 David: That's a good idea. That's a good use case. 
 Matthew: Well, I'll get right on that. 
 David: Okay. Expect it next week. 
 Matthew: You can expect it whenever you want. 
 David: That's true. No, 
 Matthew: Item number two: we are continuing our discussions of Anton Chuvakin on testing in detection engineering. This is part eight of a [00:29:00] long series on detection engineering that he's been writing, and I have eagerly consumed each article. I'm thinking of just starting a separate podcast where I review Anton Chuvakin's articles. He is joined in this one by Amin Besson and an anonymous collaborator. How mysterious. Yeah, so he's talking about how, when you're building detections, an incredibly important part of that process is testing and validating that the detections work as intended. Otherwise, you get unexpected behavior, like failure to fire or tons of false positives. 
 David: come on. 
 Matthew: I know. Why are we talking about this? Well, this is an incredibly important step, that's the second time I've used "incredibly important," hopefully it's making an impact, that a lot of folks don't do well, including some MSPs that I will not name. I will provide a short demo of what I'm talking about. I worked with one MSP that was a black box MSP. You simply fed them the logs, and then they did some magic and alerted you when bad things happened. I found out some time after I had left the organization that it turns out, oh, and first, we never got any detections from them, and we were always asking them, like, hey, we're not getting any detections. 
 Is everything all right? And they would tell us everything is fine. 
 David: You guys are outstanding. That's why we get no detections. 
 Matthew: Yeah, I found out after I had left the organization that they had undersized our black box, and more than half the logs were being dropped at the black box. And for some number of years, I don't want to say several, that's probably an exaggeration, but for some number of years, the whole thing had been broken, and they had no idea, because nobody was testing anything. 
 David: Yep. And apparently they didn't have health monitoring on the 
 Matthew: Oh yeah, no health monitoring, no nothing. I mean, and honestly, I could have done a better job on my side. Like, I could have run some tests and been like, hey, you didn't catch this. So I certainly could have done better on my own side, but the whole point of buying an organization, well, nevermind. Different conversation. 
 David: The whole point of outsourcing that is to take the burden off of you 
 Matthew: Supposedly. Supposedly. Second. 
 David: Well, this is a good point though, Matt, that I think a lot of people can take away from this: when you are choosing [00:31:00] an MSP, something you want to highlight with them is the testing that they're doing, the health monitoring they're doing on their feeds. And, you know, can you get regular reports on those metrics? 
 Matthew: Yeah. That makes sense. A second one that I have experience with, this was a better MSP than that. Multiple times we discovered that when they quote unquote tuned their rules, they would accidentally break the logic. They tested the alerts upon release, and that was at our request, they actually didn't test them until we made them test them, but they wouldn't retest them after tuning. So we found several alerts over the years where they had broken the alert and didn't even notice it, because they had not tested it. 
 David: Yeah. And they weren't getting any alerts. So it's obviously fine. 
 Matthew: Yeah, everything's cool. Everything's cool. So David and I are both highly sold on testing. [00:32:00] Otherwise, how do you validate that your expensive and expansive detection logic is actually working? We've gone into a lot of detail in the past on breach and attack simulation vendors, so I will not belabor that particular point. And we're going to talk about some other approaches to testing. 
 David: Yeah. But you can go back to our podcast catalog and listen to them all. 
 Matthew: Yeah, hopefully. I actually haven't checked to see whether you can go back and listen to them all, but it's highly likely you can. So, there are three types of testing, as I mentioned. The first one is unit testing. This is the simplest form: you perform a quote, simple regular stimulus, to check the detections still trigger. The easiest way is manual: run a command that you know should trigger it. If you know that you have a detection for PsExec being run laterally from one system to another, go find the command, I think there's one you can run using WMI that'll run PsExec on another system. Go and run that command, then go and check your SIEM and make sure that it fired or not. 
 A lot of threat [00:33:00] detection folks do testing like this on an ad hoc basis. When they're rolling out a new rule, they're trying to, you know, generate logs and validate that it works. And that's great. Like I said, it's the simplest form of testing and can be done by anybody: security engineer, SOC analyst, red or purple teamer. Anyone can do it. The next step up is programmatic: set up a script that runs that same thing each day and triggers it. You ran it once when you were developing the rule. Well, set up something to run PsExec on that other system every day. Why? Things change. Firewall settings change. New systems are added. Logging changes. People turn stuff on and off. Testing once guarantees that it works today, but it doesn't guarantee that it'll work tomorrow, a week, or a month away. That one helps, but it's still only testing the logic in one place. Although it's very useful, because you can set up an alert for when it doesn't fire, and it emails you and says, hey, this alert is no longer working. 
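(A minimal sketch of what that daily programmatic check might look like, in Python. The trigger command, the rule name, and the SIEM search endpoint are all illustrative assumptions, the query API here is hypothetical, so you would swap in your own SIEM's real API and whatever benign command your rule actually keys on.)

```python
import subprocess
import time

import requests

# Hypothetical SIEM search API; substitute your SIEM's real query endpoint.
SIEM_SEARCH_URL = "https://siem.example.com/api/search"
API_TOKEN = "REDACTED"  # load from a secret store in practice


def trigger_test() -> None:
    # Benign lateral-execution stimulus: remote process creation via WMI,
    # which PsExec-style lateral movement rules commonly key on.
    # "testhost01" is a placeholder lab host.
    subprocess.run(
        ["wmic", "/node:testhost01", "process", "call", "create",
         "cmd.exe /c hostname"],
        check=True,
    )


def alert_fired(rule_name: str, window_minutes: int = 30) -> bool:
    # Ask the SIEM whether the detection produced an alert in the window.
    resp = requests.get(
        SIEM_SEARCH_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"rule": rule_name, "earliest": f"-{window_minutes}m"},
        timeout=30,
    )
    resp.raise_for_status()
    return bool(resp.json().get("results"))


if __name__ == "__main__":
    trigger_test()
    time.sleep(600)  # allow log ingestion and the detection schedule to run
    if not alert_fired("lateral_psexec_via_wmi"):
        # Wire this into email/Slack/paging; print is a placeholder.
        print("ALERT: 'lateral_psexec_via_wmi' did not fire on today's test")
```

Scheduled with cron or Task Scheduler, the failure branch becomes the "hey, this alert is no longer working" email.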
 You can take another step up and do systematic testing. I don't even know if that's the real term, I just like the [00:34:00] word. You can buy a tool, like the breach and attack simulation tools we talked about, or, I think a lot of the vendors are now calling it continuous security validation, where it does that same unit test, but it doesn't do it in just one place. It does it everywhere you put an agent. Instead of testing it once and testing the logic, you can now test it in all of your subnets, or on all of your hosts if you want to get really wild. I think that's probably a little excessive, but 
 David: If you can afford it, 
 Matthew: Yeah. Additionally, they had a new one on me, and this one I'm kind of surprised and depressed that I didn't think of before: log injection. If something is very risky, like running something on your own domain controller, or you don't have permission to run it, then what you can do is go find fake logs that have that behavior, you know, maybe in a threat report, or generate them in a test environment, and then you can inject them into a test index and monitor that test index. It's helpful to have the test index, because then you can ignore all alerts. I'm thinking of [00:35:00] Splunk specifically, but I assume all SIEMs have something similar, where you can ignore alerts that come from a specific index and only use that index to validate logic, and not send those alerts to your analysts if you don't want to. 
 David: you know, that's one of those things that seems obvious after you've heard it. 
 Matthew: Yeah, I was like, ah, how did I not think of this? But yeah, you could just set up a script that reads off of a log file that has all of these things that should trigger detections, pours them into your SIEM, and your SIEM will then trigger on them. And while this doesn't test whether those detections work across your systems, across the environment, it does at least validate that the logic is still good, and that somebody's tuning didn't break the logic, or somebody didn't accidentally disable a rule or something like that. 
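(As a rough illustration of the log injection idea, here's a short Python sketch that replays canned events into a dedicated Splunk test index via the HTTP Event Collector. The URL, token, index name, and sample file are placeholders; any SIEM with an ingestion API would work the same way.)

```python
import json

import requests

# Splunk HTTP Event Collector endpoint and token; placeholder values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "REDACTED"


def inject(event: str, sourcetype: str = "WinEventLog") -> None:
    # Send one canned event into the dedicated test index via HEC.
    payload = {
        "event": event,
        "index": "detection_test",  # alerts from here go to validation, not analysts
        "sourcetype": sourcetype,
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=30,
    )
    resp.raise_for_status()


# Replay a file of known-bad samples (e.g., pulled from a threat report or
# generated in a lab) so the detection logic gets exercised every day.
with open("detection_test_samples.log") as f:
    for line in f:
        if line.strip():
            inject(line.rstrip("\n"))
```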
 So, yeah. They point out there are three options for where to run these types of tests. You can run them in production. That's the most realistic, but if the commands impact business function, you could cause an outage, and then people will not like you very much. 
 You can run it in a testing, I'm sorry, 
 David: No, I was just chuckling at your people [00:36:00] won't like you very much. Yeah. Well, I think what you really meant to say was people will like you even less. 
 Matthew: In a testing environment. Ideally, you will have a test environment that has the same tech stack as production, but a lot of folks don't have that, and frankly, it's tough to guarantee the settings are the same between test and production environments. Or finally, the pre-made logs that mimic the activity. 
 This one seems like a really interesting and potentially good option for after go-live. Maybe you do your initial threat detection work in the test environment, you validate it in production, and then you set up those pre-made logs that we talked about before. And then you would want to set up an alert if you don't see, like, let's say you've got 100 rules. 
 You could set up an alert for each one of the rules if you don't see the detection in your test index, or you could just set up a rule that says, we expect to see 100 notables in this test index every day. And if you don't, then maybe you could do a diff and report which ones didn't fire, or something like [00:37:00] that. 
 David: Or you could set up alerts in your SIEM to look just for that. You could have that criteria built into those, and you could have a different dashboard or something for just the testing. 
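(One way to sketch that expected-versus-fired check, with the caveat that the rule list is a placeholder and the fetch function stands in for whatever query your SIEM's API actually supports.)

```python
# Compare the rules we expect to fire on injected test logs against the
# notables actually observed in the test index, and report the gap.
EXPECTED_RULES = {
    "lateral_psexec_via_wmi",
    "encoded_powershell_download",
    # ...one entry per detection you inject test logs for
}


def fired_rules_today() -> set[str]:
    # Placeholder: in practice, query your SIEM for today's notables in
    # the 'detection_test' index and return the rule names that fired.
    return {"lateral_psexec_via_wmi"}


missing = EXPECTED_RULES - fired_rules_today()
if missing:
    print(f"{len(missing)} detection(s) did not fire on injected test logs:")
    for rule in sorted(missing):
        print(f"  - {rule}")
```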
 Matthew: Yep, there's quite a bit you can do in here, I'm sure. I still think the ideal option here is a continuous security validation or breach and attack simulation tool. Those typically sell for a hundred thousand plus, so if you can't afford one of those, then setting up a few automated scripts or some log injection is probably a good investment. 
 David: Yeah. And it's an investment in time, which you have to convince, you know, your leadership is worthwhile, which I don't think should be a hard sell, but it's harder with some people than others. 
 Matthew: This has been surprisingly hard. I'm kind of shocked. I thought the value was self-evident, and I have yet to convince the leadership team, so I'm obviously missing something here. The second method of testing that he lays out is adversary emulation. This one's a little different. This is a threat-oriented strategy, quote unquote, [00:38:00] where you have a red team emulate the tactics of an adversary that you'd like to detect, hopefully in a test environment. Then you can use that activity to validate whether you would catch them or not, and also use that activity to develop content to detect them. This is a lot more focused and time-intensive than unit testing, but it will give you a better answer if your CISO asks, you know, quote, can we detect what happened to Change Healthcare? Continuous security validation and breach and attack simulation tools advertise being able to do this, and they can, but it's simpler: automated versus manual pen testing. They do it in a pre-recorded way. It's a good starting place, but there's no variation, there's no creativity, it's just pre-recorded attacks. 
 David: They can say "kind of," to be truthful. I mean, if you were being fully honest, you'd say, we can kind of detect it, but you know, it's not 100% certain. 
 Matthew: Nothing in life is 100 percent certain. 
 David: Well, they say death and taxes. 
 Matthew: Fair enough. Another plus, like I said: if you can't detect it, now you have the activity [00:39:00] logged, so you can create the detections to catch it. This is very threat-intel informed. Most companies should not be doing this. Before you do something like this, you should know that somebody specific is targeting you, and you want to, you know, try and make sure that if they do target you, you're good. 
 Finally, the third, I have third method of testing. He describes as purple teaming. I love purple teaming similar to red teaming, but we're blue and red teams work together to both generate alerts and validate that they work. This differs from a red team because red teams are focused on breaking in and not being detected. 
 Red teams kind of just do their work and then they throw the report over the fence to the blue team days or weeks or months later and then the blue team has to try and go back and reconstruct what they did. A purple team is focused on detections. Purple team will perform an attack or an action and then they'll stop and they'll consult with the blue team and they'll say, Hey, we just did this on this server. 
 Did you detect it? They'll go and look, and they'll see if there's an alert. They'll go see if it's logged. This is a results oriented approach where [00:40:00] the result is the detection. They're trying to make sure that your detection logic is good. They're not just trying to, you know, make sure that they can break in. 
 All three types of testing are important for different types of verification. As mentioned before, you can't test a rule just once. The environment changes. You should be doing the unit testing as often as you can, hopefully again in an automated fashion. But then you should use purple teaming to get a little more advanced than the automated unit testing. 
 Because maybe there's a new method that attackers are using and the new method bypasses your detection logic. And then the Adversary Emulation is kind of the most mature which I think you'll know when you get there. . 
 Additionally, he commented on something else that I hadn't really thought of: you want to try to test as many of your detections as possible. This includes vendor-based detections. This is something else I hadn't considered. I just expected the EDR to work. 
 But your EDR probably detects privilege escalation. Have you verified that it detects all of the various privilege escalation methods you think it [00:41:00] should? I asked my EDR vendor a few years ago what MITRE ATT&CK techniques they detected, and they replied, all of them. That is obviously not true. They were super squirrely about getting me an actual map of what they detected, so I ended up just creating one by mapping the detections that had fired over the previous year. Their detections had the MITRE technique they mapped to, so I was able to pull up all the detections for the previous year in the SIEM and then, using the ATT&CK Navigator, create a map. And they didn't cover all of them. I mean, I guess it's possible that certain techniques were not used in our environment in the last year, but they only covered like a quarter of it, which frankly is pretty good, honestly. 
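(For reference, that mapping trick can be as simple as dumping the technique IDs from a year of fired detections into an ATT&CK Navigator layer file. A rough sketch: the input list is a placeholder, and the version fields may need adjusting to match your Navigator build.)

```python
import json
from collections import Counter

# Placeholder input: MITRE ATT&CK technique IDs attached to the detections
# that fired in the SIEM over the previous year.
fired_techniques = ["T1059.001", "T1021.002", "T1059.001", "T1078"]

counts = Counter(fired_techniques)

layer = {
    "name": "EDR coverage (observed, last 12 months)",
    "domain": "enterprise-attack",
    "versions": {"layer": "4.5", "navigator": "4.9.1"},  # adjust to your build
    "techniques": [
        {"techniqueID": tid, "score": count, "comment": f"{count} alert(s) fired"}
        for tid, count in counts.items()
    ],
}

with open("edr_observed_coverage.json", "w") as f:
    json.dump(layer, f, indent=2)
# Open the file in ATT&CK Navigator ("Open Existing Layer") to see the map.
```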
 David: Yeah, well, I'm not sure all of this is going to scale, though. Cause if you're talking about individual security tool detections, how many IDS signatures are you running in your IDS right now? A thousand, maybe more. I mean, it's just not [00:42:00] feasible to do that. So I would say prioritize them, and maybe anything that's tied to a SIEM event or a SIEM alert, those all get tested. That's, I think, a no-brainer. You absolutely have to do that. And then maybe after that, you say, well, we're going to do the most frequent EDR events. I'm not sure if anybody really has the time to go beyond the SIEM ones, but you're going to have to prioritize this work. You're not going to do everything. I mean, it sounds great, but I really don't think that's feasible. 
 Matthew: Yeah, no, that's fair. I think that is definitely for the folks who have a lot more time on their hands. I don't even know how I would go about testing all those things. Actually, it's funny you mention the IDS. I did that with the IDS too, cause I couldn't find what our network IDS did, so I ran the same thing. 
 It didn't have full MITRE mapping, but I ran it and compared it and it detected quite a few things too. 
 All right. Who's responsible for which testing? Well, detection [00:43:00] engineering should include unit testing as part of their life cycle. Adversary emulation should be a collaboration between the red team, threat intel, and detection engineering, if you have separate teams for that. He didn't actually mention purple teams in this last paragraph, but I've seen it led by the SOC in the past, or by threat intel. 
 I don't know if you've got any thoughts on who you think should lead purple team testing. 
 David: I mean, it's a tiger team where the outcome is supposed to be detections, so I think the blue team should lead that. And in a purple team, unless the actions the red team is taking are informed by threat intel or your threat hunt team, I don't think it makes sense for threat intel to lead the purple team either. 
 Matthew: Yeah, because adversary emulation makes sense to be led by threat intel, since you're picking a specific adversary to emulate. But purple teaming is much more focused on detections. I don't know that threat intel should even be involved at all, necessarily. So why does [00:44:00] this matter? 
 Well, many of the places I've seen treat detections as one and done. You create it, you validate it, and it just trots off into the sunset and happily continues detecting forever. But as I mentioned, I have some real-life experience that shows this almost never actually happens. So what you should do about it is test, retest, and validate your content. 
 If you aren't doing it, you probably are not detecting nearly as much as you think you are. 
 David: Ignorance is bliss, though. 
 Matthew: Truth. You know, actually, I have strong thoughts that maybe that's why leadership doesn't care about this as much as I think they should. It's entirely possible that they do not want to know. 
 David: Yeah, so they would prefer plausible deniability versus knowledge of failure. And that's not surprising, unfortunately. 
 All right, well, that's all the articles we have for today. Thank you for joining us and follow us at Serengeti Sec on Twitter and subscribe on your favorite podcast app.
