Episode Transcript
Podcast is AI Generated, and has errors. Like the amusing one concerning David's name, which I definitely saw and 100% left in.
David: [00:00:00] Welcome to the Security Serengeti. We're your hosts, David Smorgasbord and Matthew Keener. Stop what you're doing and subscribe to our podcast. Leave us an awesome five star review and follow us at SerengetiSec on Twitter.
Matthew: We're here to talk about cybersecurity and technology news headlines, and hopefully provide some insight, analysis, and practical applications that you can take into the office to help you protect your organization.
David: And as usual, the views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers.
Matthew: Speaking of, the other day my boss asked me what steps I would be taking in the event of a fire drill. Apparently the answer he was looking for was not big ones.
David: First article is on fire drills and phishing tests, and this is something we pulled from Google's blog. It's written by Matt Linton, [00:01:00] and it's a blog post comparing phishing training to the evolution of fire drills and, you know, how those were created to prepare for potential fires. And it starts off with a brief history of the concept of fire drills. So in the, you know, late 19th, early 20th century, there were some rather horrible fires. I'm trying to think of the one, I should have looked this up before, but there's a really bad one in Georgia at a textile factory.
Matthew: yeah, I know exactly the one you're talking about where they lock the doors to keep them inside and yeah, it was all women and children in there doing the weaving or whatever. I don't remember what it was called either, but I know what you're talking about.
David: But based on, you know, these huge fires that killed a whole bunch of people, they're like, hey, we need to do something about this. And one of the concepts they came up with was the idea of fire evacuation tests. And this is gonna seem weird because we're used to the way fire drills are run today, but apparently the [00:02:00] early tests were rating individual performance, you know, basically George Costanza, how skilled are you at getting out of a building that's on fire? So
Matthew: I'm loving, I'm loving my mental image of this, of like people like elbowing and tripping other people to get out because they're like, no, I must win.
David: Yeah, well, the first thing that popped into my mind was that one episode of Seinfeld where George Costanza knocks over an old lady in a walker, I think, because he thinks there's an actual fire, to get out of the building. So they were testing people based on how fast they got out, and these were surprise tests; they didn't tell them that it was a drill.
People thought these were actual fires, and they needed to get out of the building.
Matthew: I would have loved to have seen that. Like, I just, I'm just imagining like people like fleeing each other. And like, you're talking about like knocking people over and being like, get, get out of my way.
David: So, obviously this resulted in more injuries rather than an improved [00:03:00] survival rate for actual fires. And also it turns out that they were basically worthless. They did not improve survival rates at all when there were actual fires. What did increase survival rates were
improvements in fire protection: wider doors, push bar exits, fire breaks in construction, lighted exit signs, things of that nature. And survival rates for fires have steadily improved since that time also.
Matthew: I have a question for you. A lot of these things, like wider doors, push bar exits, lighted exit signs, those are things that the government put in place. I know how you feel about the government in general. Is this, in your opinion, an appropriate use of government? I'm sorry for putting you on the spot.
You can tell me to off.
David: Well, I don't think there's ever an appropriate use of government. Ancillarily, the fact that the government did this and improved survival rates, I think, is just coincidental to this.
Matthew: [00:04:00] I'm trying to imagine a world where these aren't government regulations. I can just see people interviewing for jobs and being like, what are your fire protections, you know, what does the place that we work at look like?
David: Yeah, well, see, the thing is, my perspective on this kind of goes back to the stuff we've talked about before about regulation versus law enforcement. You know, if more building owners or factory owners or whatever were held to account for creating unsafe environments where people got killed, then it would have incentivized them to come up with these kinds of concepts on their own in order to improve the survivability of the people in the factory, versus mandating it from the government.
Matthew: so you wouldn't have the government enforce it. You would have it enforced through people suing the factory owner and driving them out of business and
David: Not necessarily, not necessarily suing the factory owner. But if you're a building owner and you bar the door so people cannot get out in the [00:05:00] case of
Matthew: Oh, he should be
David: you should be hung, right? So you do a public hanging for this guy, and maybe the next guy will not chain the doors to his building, and things of that nature.
And I'm not saying lawsuits can't be on top of that, wrongful death, etc. Because if one member of the family gets killed in there, that's all the lost income from that breadwinner that the family is not going to get anymore. So they should be able to sue him for lost wages or lost income, because that guy was negligent and caused that guy's death. You know, stuff like that could incentivize people to come up with these same concepts, or maybe even better ones.
I don't know. On top of that. Yeah.
Matthew: I'm so used to the law actually applying to businesses and business owners that it was totally foreign to me. I didn't even think about it.
David: Yeah. That's right. Well, we'll get into that a little bit in the next article, actually. So, yeah. And obviously we're all used to today's fire drills, where they're announced and everything and we [00:06:00] have evacuation signs posted and everything; that's the state fire drills have evolved to today. And apparently there are actually regulations, FedRAMP, that are requiring phishing drills the same way fire drills have been required in the past.
And this is the standard thing where a security team creates a phishing email, and they track how many emails go out, what's the impact of the email, how many emails are opened. You know, and then the mandatory training if you actually click on the link or put in your credentials or whatever.
Matthew: I've heard of companies that actually fire people that click on too many phishing links.
David: seriously,
Matthew: Yeah.
David: wow,
Matthew: Yeah, it's actually kind of weird, because for a while I had heard places saying that, oh no, you know, of course we wouldn't do that. And then I've heard recently from people who have worked at companies that did in fact fire people for that.
David: How many is too many? What's the cutoff point there? [00:07:00]
Matthew: I don't know that answer.
David: I mean, I'm not saying it's not farfetched, but if you consider that, you know, the main accountant that manages your SWIFT account at a financial institution continually falls for phishing emails every time, maybe that person is not the right one to be in that job, because that's a high risk role.
Should their account get compromised, you have a huge financial loss from that. It all depends on, you know, the volume and context, but it sounds more far-fetched than anything I've seen anyway.
Matthew: Like I said, I'd never seen or heard of this before. Pretty much every place I'd ever worked has been like, you just get training. There's no real consequences for failure.
David: Right. And I'm just wondering how much of that gets back to the managers. I haven't been someplace where, as a manager, I got phishing dashboards or whatever for what my employees' [00:08:00] results were during the phishing tests, where that would influence how I rate the employee or something like that as well.
I don't know.
Matthew: I don't know.
David: But these phishing tests are equated to, or kind of the equivalent of, the old fire drills, where you're supposed to recognize that there's something dangerous going on and individually react in the appropriate way. Failure is an individual failure, just like the old fire tests, versus a systemic problem. And this is obviously what the guy who wrote this blog article is saying, but I don't think that's exactly right. Because failure to identify a phish and respond to it isn't just an individual failure; the systemic problem might be the org's inability to stop or prevent that individual failure from leading to a bigger problem.
But I would say that a phishing failure [00:09:00] should not be considered an individual failure the same way that failure to properly evacuate in a fire was an individual failure. And I think there are some notable differences in this example of how phishing drills relate back to fire drills, because phishing emails in general are not deadly, for one. And not responding to a fire evacuation order would only kill me; it would not cause the entire building to catch fire and burn down. Which could happen, not literally, obviously, in the case of a phishing email, where someone clicks on a phishing email and it ends up in a compromise of the entire enterprise. And then he goes on to list what he calls harmful side effects of modern day phishing tests. The first one being there's no evidence that test results result in fewer incidents of successful phishing campaigns. And he quotes a study that had 14,000 participants and had [00:10:00] repeat clickers that will consistently fail the test despite recent interventions.
Matthew: Yeah. And I saw a talk at RVAsec, I think, gosh, back in like 2017 or something, that had very similar results, from a security analyst working at a hospital. And he was talking about how he ran the phishing tests, and he just had folks who clicked on literally everything, no matter what it was.
David: It's just weird. I don't have the insight into that. I mean, if I could delete an email, I'd do it. I don't read it.
Matthew: Yeah. Yeah.
David: And going down the list of harmful effects, the next one is apparently FedRAMP guidance requires the bypass of existing security controls to artificially increase the likelihood that a phishing link is going to be clicked.
Matthew: This is basically kind of like a penetration test where you start inside the network. So they're basically dodging your Proofpoint or [00:11:00] whatever gateway you have for phishing.
David: You know, I'm not entirely opposed to the idea of them starting inside the network. What I am opposed to, though, is them starting inside the network with their tricked out laptop. You know, if they start inside the network with all their downloaded tools and everything already on the box,
a Kali Linux box or whatever. You know, if they're in your network, they're not going to have that. You give them a generic box off the shelf
Matthew: I'm going to download your tools.
David: and say, here, you have a login, that's all you get. You have to get your tools, you have to do all that, because one of the other things you're checking for, you know, as part of incident response is those indicators, right?
That someone's downloading tools, et cetera. And if you take that away, then that's another advantage that they have, which is unrealistic from a pentest, red team kind of perspective. [00:12:00] Mm
Matthew: Yeah, I get why they do it. The common complaint is that it takes so long to find a hole. Like, are you hiring the team to spend two to four weeks finding a way in, like a dedicated attacker might? But then they tend to treat the results like they're gospel.
We're ignoring multiple levels of something that might've stopped a bad guy. And sure, it probably would not have stopped a dedicated APT who really wanted to get into your specific environment, but those outside layers don't have to stop that most of the time. It's like the bear: you don't have to be the fastest guy.
You just have to be faster than the slowest guy. I was listening to a webinar the other day about ransomware operators. It's funny, cause we talk so much about APTs and threat intel, and the vast majority of them are apparently just opportunistic. They will hit whoever they can reach.
They don't actually target specific verticals or industries.
David: hmm.
Matthew: They're just going to make money.[00:13:00]
David: Right. Yeah, I mean, typically government, or government-adjacent businesses, are the only ones where you have attackers actively targeting them specifically, versus targeting any bank or any insurance company or just anybody who could be ransomwared.
Matthew: Yeah. So yeah, I understand. You have to treat it differently if you're in certain industries. 100 percent agree on that. But I think the majority of us, if we're not in one of those special industries, we should be focusing on different things.
David: Yeah, we should be focused on outrunning our friend and not the bear. Or outrunning our competitors and not the bear.
Matthew: Outrunning your competitors? Which you should be focused on doing anyway, so.
David: Another harm that he listed is response teams have to respond to test notifications. Employees get upset and think the security team is tricking them, which degrades trust in the security [00:14:00] team. And then the last harm he lists is
large enterprises may have multiple independent phishing products, so you have people getting phish-tested multiple times from multiple different groups. And personally, I think the entire list is really a bit of an exaggeration, if you want to call it harm.
You know, the fact that phishing training doesn't work is not directly harmful to the organization.
Matthew: just a waste of money.
David: Yeah, well, and bypassing controls, that's just stupid, but not harmful.
You know, and if your incident response team is responding to phishing training, you bought a bad phishing tool.
Matthew: Yeah, that one bugged me a lot. Especially since a lot of times the companies that offer the reporting button will also offer the phishing training, and you should be able to handle that programmatically. And sometimes there's ways around that: maybe your user doesn't know the correct way to do it, and they forward it to a cybersecurity [00:15:00] address or something like that. But the vast majority, like 98 percent of them, should be handled automatically.
David: easily.
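A minimal sketch of the auto-handling Matthew describes, assuming the phishing-simulation tool stamps its test emails with an identifying header. The header name, campaign ID, and data shapes here are hypothetical, not any specific vendor's API:

```python
from dataclasses import dataclass, field

SIM_HEADER = "X-PhishSim-Campaign"   # assumed header stamped by the sim tool
ACTIVE_CAMPAIGNS = {"2024-q3-drill"}

@dataclass
class ReportedEmail:
    reporter: str
    headers: dict = field(default_factory=dict)

def triage(report: ReportedEmail, credit: dict) -> str:
    """Auto-close simulation reports; route everything else to the IR queue."""
    if report.headers.get(SIM_HEADER) in ACTIVE_CAMPAIGNS:
        # Thank the reporter and count it toward drill metrics, no analyst needed.
        credit[report.reporter] = credit.get(report.reporter, 0) + 1
        return "auto_close"
    return "ir_queue"   # potential real phish, a human looks at it

credit: dict = {}
print(triage(ReportedEmail("alice", {SIM_HEADER: "2024-q3-drill"}), credit))  # auto_close
print(triage(ReportedEmail("bob", {}), credit))                               # ir_queue
```

With something like this in front of the shared mailbox, the drill reports resolve themselves and only the real (or unrecognized) phish reach an analyst.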
Matthew: God, can you imagine having like 10, 000 employees and then telling them like, when you get a phishing email, report it. And then sending out a fake phishing email and having them all report it.
David: Oh, that'd be horrible. That would
Matthew: Like 20 percent of people report it. So you have two thousand emails pouring into that box.
David: Well, and then you've got legitimate phishes that are in there
Matthew: Oh, that are hidden in there.
David: right?
Matthew: Yep.
David: That would just be a nightmare. Especially if all those employees are expecting an answer back, some kind of affirmation that they did the right thing or whatever. Okay, and moving on: the loss of trust. I would not categorize that as a loss of trust.
Employees are not gonna distrust the security organization because they do this phishing training. They may dislike them. But I don't think it's actually going to cause any loss of trust in the security team. I've never seen that or heard that [00:16:00] before.
Matthew: maybe before, like when this was first coming up, I feel like at this point in time, pretty much everybody understands phishing training and it's just something that you have to do.
David: Yeah, maybe 10, 15 years ago that would be a problem, but not today. The other thing he mentioned, about large orgs having multiple phishing training programs, that's an organizational failure. That is not anything against the training itself. I would say if you want to call out actual harm from the phishing training, it would be the waste of money and resources that you spend on the training tool and the people to support it,
when it does not actually reduce any risk.
Matthew: yeah.
David: That's actual harm there. Now, because of, you know, what he's calling out about the harms and the lack of relevance here, and the realization that you cannot stop 100 percent of phishing emails from being clicked, you just can't get to [00:17:00] 100 percent, he says the security industry should move towards de-emphasizing surprise and tricks and move more towards a phishing fire drill kind of scenario.
Where you educate your users about how to spot phishing emails, inform them on how to report, allow the employees to practice reporting the phishing emails, and then collect metrics on that, such as, you know, how many people completed the training, how many people got emails, how many opened it, how many forwarded it, how long did it take between the time they received it and the time they reported it, things of that nature.
And I think this is actually a pretty good concept. With typical phishing training that's sent out, most users don't report, and they may not know to report, or they just delete the email. And you don't know how many people actually recognized it as a phish, versus simply those that clicked on the link.
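A sketch of the drill metrics David just listed, computed over a made-up event log; the event names and tuple shape are illustrative, not from the blog post:

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: (user, event, timestamp)
events = [
    ("alice", "delivered", datetime(2024, 8, 1, 9, 0)),
    ("alice", "reported",  datetime(2024, 8, 1, 9, 7)),
    ("bob",   "delivered", datetime(2024, 8, 1, 9, 0)),
    ("bob",   "opened",    datetime(2024, 8, 1, 10, 2)),
]

def drill_metrics(events):
    delivered = {u for u, e, _ in events if e == "delivered"}
    opened    = {u for u, e, _ in events if e == "opened"}
    reported  = {u for u, e, _ in events if e == "reported"}
    ts = {(u, e): t for u, e, t in events}
    # Minutes from delivery to report, for everyone who reported.
    lags = [(ts[(u, "reported")] - ts[(u, "delivered")]).total_seconds() / 60
            for u in reported if (u, "delivered") in ts]
    return {
        "report_rate": len(reported) / len(delivered),
        "open_rate": len(opened) / len(delivered),
        "median_minutes_to_report": median(lags) if lags else None,
    }

print(drill_metrics(events))
# {'report_rate': 0.5, 'open_rate': 0.5, 'median_minutes_to_report': 7.0}
```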
Matthew: And I think that's true for a large part of the employee base. I've seen phishing trainings where they report, [00:18:00] that's one of the metrics they show, how many people reported it. Usually, if I'm thinking of most companies, they typically do an anti-phishing slash security training once a year.
And then they have to do these phishing tests, like quarterly, or annually, or once a month, or once a day. I don't know. And I think it'd be interesting to see the percentage of people who report the phishing email over time, whether it's going down or going up based on when they had that training. I guess the other part of it is, does the training, like, this is a common problem with training. You give somebody training at a point in time, but then you have to reinforce it. It's not enough to just show them a picture and say, oh, you should report email. How do I report it?
Where do I report it? What does that look like in my Outlook, or, you know, Google, or whatever?
David: Right. And you're expecting them to remember, remember that nine months later,
Matthew: Yeah. Yeah.
David: Also, I'm kind of thinking that, you know, it may be less important that employees can recognize a phishing email [00:19:00] than it is for them to know how to report it, since you're always going to have gaps there.
And as long as someone's able to spot the email and report it, then at least you have some kind of report that you can respond to.
Matthew: Gotcha. Yeah. That makes sense.
So, and it's funny, because this is the opposite of the common statement about how defenders have to be perfect all the time and the attackers only have to be right once. But that's not true. Attackers may not have to be perfect, but there are plenty of spots along the compromise chain where they can be caught.
Like in this case, when they send out a phishing email, they're probably not going to target one user. If it's a ransomware group that's being opportunistic, they may target hundreds of users, and you just need one user reporting it, and then you can start tracking it down based on IP address and link and all that stuff.
David: Right. Yeah. It's almost the same deal, where you only have to be right once,
Matthew: Yeah, that's
David: you only have to have one user report it.
Matthew: Yeah, because if you don't detect them on that initial phish, [00:20:00] you've got a chance to detect them when they download the malware, or when they open up their C2 channel back. If you don't detect them then, you've got a chance to catch them when they're moving laterally.
Of course, the big issue there is that most of your protections are front-loaded, guarding against that initial stage, and a lot of times when attackers get into the network, it's just a delicious, open, tasty middle. So
David: Yeah, I would say, even with this concept, though, that you may want to do advanced training for key individuals, you know, VIPs, people in accounting, maybe HR, you know, someone that is susceptible to CEO fraud. Things like that.
Matthew: yeah, if you've got one person who does all of your payments they, they need to be very well trained.
David: Yeah. And you may even want to consider additional restrictions on those types of individuals too, where, you know, they cannot get out to the internet or something like that. I mean, that may seem a bit heinous, but you know, if you have [00:21:00] somebody that can bankrupt your company by making a mistake, maybe you can say, okay, well, you're not going to be allowed out to the internet
with this account. Here's another account that can browse the internet but can't receive email, or something like that, you know, that kind of idea.
Matthew: Yeah, I agree.
David: Now, in the blog post, he gives an example of what a phishing drill email may look like. I'm not going to read this whole thing,
but to give you an idea of what he's looking for here, the subject line says, I'm a phishing email. And at the top of the body it says, you know, this is a drill; if I were an actual phishing email, you'd want to do these things, and this is how you can recognize an actual phishing email.
If you want to take a look at the example of what a phishing drill email may look like, it'll obviously be linked in the show notes.
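The blog's example isn't reproduced here, but a drill email of the shape David describes might be constructed something like this (addresses and wording are made up):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "I'm a phishing email (this is a drill)"
msg["From"] = "security-drills@example.com"   # hypothetical sender
msg["To"] = "employee@example.com"
msg.set_content(
    "This is a DRILL, not a real phish.\n\n"
    "If I were a real phishing email, I might create urgency, ask for\n"
    "credentials, or link somewhere that doesn't match the sender.\n\n"
    "Practice now: use the 'Report Phishing' button on this message.\n"
)
print(msg)
```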
Matthew: This goes back to the training thing we were just talking about. You're telling them what it is they're looking at, you're having them do the thing right there, you're having them physically report it, which is [00:22:00] good. I just like it. I don't know that that's all I would want to do, though.
Like, if you have to run this 12 months a year, I don't know if it's super helpful to do only this 12 months a year, but I think you should at least intersperse these in the middle.
David: And as he points out towards the end of the blog post, you know, we should be focused on secure-by-default systems and not relying on the phishing training that individuals receive to prevent compromises. And I would agree with that, because we need better resilience in the face of phishing failure, because there is going to be failure.
And you can't expect 100 percent detection. So, you know, we need better prevention, better detection, better response to phishing, and not totally rely on end users reporting phishing. And the reason that we're bringing this up is, obviously, phishing is still a huge deal today. It's one of the primary attack vectors for attackers getting into the network. And what I liked about this idea is that even if you have a [00:23:00] phishing training program today, you can still do this type of training interspersed with it to see how it goes.
You don't have to throw out your current phishing training program to adopt this concept and see if it's beneficial for your organization or not. You can intersperse a few of these tests in there with your regular phishing training. Obviously you want to inform your employees ahead of time that, hey, we're changing the way we're doing phishing emails.
We're going to do this kind of concept sometimes. And encourage them to do the reporting, include the reporting instructions in the body of the email, and see how many emails you send out versus how many employees actually report the email. Cause obviously what you really want to see is that when you tell them, hey, click this button to report the email, most of the people who received the email go ahead and click that button.
Matthew: Yeah, I think there's a couple interesting things here. First of all, like you said, you're gonna have to warn them you're doing something different, because we've now trained employees that if they click on a link in a phishing [00:24:00] email that says it's a phishing email, they get punished. And they're gonna be like, is this a trick?
It's telling me to click the link. Is this, is this like your latest, like,
David: Man, that just reminded me of an email we got when I was at the Army CERT. The general in charge of INSCOM, the Army Intelligence and Security Command, received an email. And the subject line of the email was, this is the Amish virus. It was a plain text email. No links, no hyperlinks.
Matthew: print it out and walk it over.
David: The subject line said, this is the Amish malware or something like that. And the body said, because we don't believe in technology and computers and everything, it's on the honor system for you to reformat your hard drive and delete everything off of your system.
So the general gets this email. So what's he do? He forwards it to his chief of staff. And what's his chief of staff do? He [00:25:00] forwards it to, I can't remember what the position was, but the main IT guy for the organization. And what's this guy do? He forwards it to us at the Army CERT and asks us if this is a threat.
Matthew: That is one of the saddest things, but you know, especially in a top-down organization like the Army, everybody's almost afraid not to. Yeah.
David: Yeah. To, to make a common sense judgment about an email that has no attachment, no links in it, and it's plain text. And it's telling you, Hey, reformat your hard drive,
Matthew: so I thought that's actually where your story was going to go. Is that like when a general forwarded it to somebody, somebody was like, well, this seems dumb, but the general said to do it.
David: It would have made the story funnier if, you know, the chief of staff or somebody had actually reformatted his hard drive.
Matthew: Yeah, we're looking over his history, and he's looking up how to reformat a hard drive, like, he spent some time on this. Oh, I almost forgot. So there's two things I wanted to come back to. First is reporting numbers. I think [00:26:00] what would actually be super interesting here, you said mix it in with your other training, but I want to be a little more specific about that. I would love to see somebody shoot out one of these one week ahead of your standard phishing training,
and then compare the response rate. Like, be like, you know, oh, 27 percent more people reported this. Or you can do A/B testing: send it out to half your folks and send out the normal one to the other half, and start directly comparing, you know, who's reporting more, who's catching more.
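A sketch of that A/B comparison, with made-up numbers: group A gets the announced drill-style email, group B the classic surprise test, and a two-proportion z-test says whether the difference in report rates is real or noise:

```python
from math import sqrt, erf

def two_proportion_z(reports_a, n_a, reports_b, n_b):
    """Compare report rates between two groups; returns rates, z, two-sided p."""
    p_a, p_b = reports_a / n_a, reports_b / n_b
    pooled = (reports_a + reports_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return p_a, p_b, z, p_value

# Hypothetical quarter: 2,000 employees per arm.
p_a, p_b, z, p = two_proportion_z(540, 2000, 380, 2000)
print(f"drill-style: {p_a:.0%}, surprise: {p_b:.0%}, z={z:.2f}, p={p:.2g}")
```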
David: And you can do that where you front-load it this quarter, but you don't do it next quarter,
Matthew: Yeah,
David: and see if the numbers change in your phishing response.
Matthew: It'd be interesting to see if it falls off, yeah. I think there's a lot of interesting things you can do with this. All right, next article. We have the CrowdStrike outage and market-driven brittleness. You know, we couldn't let this go by without talking about the CrowdStrike thing. This is
David: The CrowdStrike thing?
Matthew: the CrowdStrike thing, and someone [00:27:00] doesn't have CrowdStrike. So Bruce Schneier wrote an op-ed for the New York Times on the recent CrowdStrike outage, but it was bumped, apparently due to the Biden-Harris brouhaha.
But it did appear in Lawfare.
And the entire essay is about how brittle global internet infrastructure is. And he of course talks about how this expands beyond the internet to food and electricity as well. I mean, we can all see what happened during COVID. Supply chains fell apart and it took years to recover in some cases. And he presented a link to the Texas electrical grid.
Actually, hold on. I should say before we start, before we dive too deep in this, he wrote this with somebody else. It was a, it was a combination piece. He doesn't have it at the top, but he mentioned it at the bottom.
David: No, it's Barath Raghavan,
Matthew: Yeah, Barath Raghavan. All right, awesome. Just wanted to make sure that we credited him appropriately. All right. So they also present a link to the Texas electrical grid, and I know preppers have talked endlessly about how impossible it would be to replace transformers in the event of an EMP event, because generally there's no store of transformers.
You have to build them as needed, etc. [00:28:00] Yep. So, and he points out CrowdStrike could have completely avoided this by pushing out to 1 percent as a canary and then going to 10 percent, et cetera. Or, in my opinion, they could have prevented this by testing it on any single live Windows system. It's not like they need to test it on a hundred systems. Like, one would have figured it out.
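A minimal sketch of the staged canary rollout Matthew describes; `deploy` and `healthy` stand in for whatever real fleet-management and crash-telemetry hooks a vendor would have:

```python
import time

STAGES = [0.01, 0.10, 0.50, 1.00]   # 1% canary, then widen

def staged_rollout(update, fleet, deploy, healthy, soak_minutes=60):
    """Push `update` stage by stage, halting if any updated host goes unhealthy."""
    done = 0
    for fraction in STAGES:
        target = int(len(fleet) * fraction)
        for host in fleet[done:target]:
            deploy(host, update)
        done = target
        time.sleep(soak_minutes * 60)   # soak: wait for crash telemetry to arrive
        if not all(healthy(h) for h in fleet[:done]):
            raise RuntimeError(f"halting rollout at {fraction:.0%}: unhealthy hosts")
    return done
```

Even the 1 percent stage, or Matthew's single live Windows box, would catch a content update that blue-screens every machine it lands on.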
David: Well, maybe on a system at their office, maybe. I think they have
Matthew: Eat their own dog food, eat their own
David: they have machines that they could test with.
Matthew: Or allow end users to control it. So right now they have a sensor update policy: you can control the sensor logic, you can choose to be on like N minus one or N minus two, but you cannot do that for content. Which makes some sense; if there's something important that just came out, you want to get it out as fast as possible to folks.
But it would be nice if you could have like an hour or two delay. I think it took an hour and a half for them to figure this out and pause it, so a one hour delay wouldn't have done it, but if you could have a two hour delay, it would have saved you. [00:29:00]
David: Well, see, the thing is that they could set this up so that the default is no delay, but if a company wanted to go in and say, no, I want a delay,
you know, that's on them to adjust their risk posture, to say, we're willing to accept the risk of reduced coverage for an hour or a day or half a day or whatever, versus the risk of you pushing out a piece of crap detection which, you know, bricks our systems, should that happen.
Matthew: For instance. Or even better, companies could do the testing themselves. Like they could have, you know, a canary group themselves that gets it initially, and then have other groups that get it, you know, two hours later, four hours later, et cetera.
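What Matthew's customer-side rings might look like as a policy sketch; none of this reflects a real CrowdStrike policy surface (at the time, content updates couldn't be delayed at all), and the ring names and delays are made up:

```python
from datetime import datetime, timedelta

RINGS = {
    "canary":  {"delay": timedelta(0)},          # IT lab boxes take it immediately
    "laptops": {"delay": timedelta(hours=2)},
    "servers": {"delay": timedelta(hours=4)},    # business-critical waits longest
}

def apply_time(ring: str, published_at: datetime) -> datetime:
    """When a content update published at `published_at` reaches this ring."""
    return published_at + RINGS[ring]["delay"]

published = datetime(2024, 7, 19, 4, 9)   # illustrative publish time
for ring in RINGS:
    print(ring, "->", apply_time(ring, published))
```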
David: And that's the way I've managed McAfee before, because in ePO you could have different policies like that, where, you know, 5 percent of the enterprise was on the current version and the other [00:30:00] 95 percent of the enterprise was on, you know, two versions back, and you'd run like that for a couple of days before you rolled everybody to the current version.
Matthew: Yeah, makes a lot of sense. So, his choice of who to blame for this is market incentives. He believes market incentives push companies to run as lean as possible, move as quickly as possible, and to worm their way into the infrastructure of all companies as integrally as possible, so they can't be replaced easily.
And he does mention that there is a penalty, but the penalty is usually small enough or short enough to be ignored. For example, a stock price hit or a small fine that represents a tiny portion of your profit.
David: You know, one thing he doesn't mention in that list, though, is reputational damage and loss of customers. Cause I'm wondering how many customers right now were in the process of POCing CrowdStrike, or were considering moving to CrowdStrike, and now they're like, hmm, I think we'll hold with what we got.
Matthew: Yeah, that is a good question. I don't know the answer to that. I don't know how many folks are going to pull CrowdStrike based [00:31:00] on this and replace it, but for folks that are in the POC phase, I definitely see them being like, whoa, whoa, whoa, whoa, whoa, whoa, whoa.
David: Cause I can almost guarantee when the LastPass compromise happened, they lost enterprise customers for sure with that.
Matthew: Yeah, yeah, but I feel like that one was easier to rip and replace. I guess it depends on how extensive your usage of it was. If you,
David: Yeah. Well, like I said, it's not just the current customers, but potential customers also are looking at this and saying, yeah, I'm not willing to take that risk to go with that organization. You know, they were not going to adopt LastPass, or they're not going to adopt CrowdStrike, because of the reputational damage from what just happened.
Hmm.
Matthew: I mean, this is a personal opinion, they're not paying me for it, although they could if they wanted to: CrowdStrike seems to be head and shoulders above most other providers. If this was something where they were like the third best or the fourth best in the [00:32:00] ecosystem, I think that would make it a lot easier for people to be like, oh, well, yeah, they're not that great.
Anyways, we just needed an excuse to move off them.
David: Yeah. But are they?
Matthew: I don't know. It's a good question. If only, if only we had some way of testing and validating stuff against each other, but
David: Hmm. Hmm. Right.
Matthew: yeah.
David: But of course, Bruce is going to say this is a market failure. It's not like there are no market incentives; there are market incentives for companies to make their products better than their competitors'. You know, and he makes a bunch of claims in the article about things which are unprofitable, which is not really accurate.
They may be more expensive, they may reduce the profits, but that doesn't necessarily make those actions unprofitable. And the things he says are unprofitable are, of course, things which would improve resilience, which he says is why companies won't adopt them.
Matthew: Alright, so he makes a house analogy here. He compares the way that we are building software now to a house built by dozens of different companies who need [00:33:00] continuous access, but if one of them fails, the whole house collapses. This is how software is built. It reminds me of an XKCD, because of course there's an XKCD for everything, which I'm sure you guys have probably seen. It's the one where there's a tower
of blocks, like Jenga blocks, and it's all modern digital infrastructure.
And then at the bottom there's one little block holding things up, a project some random person in Nebraska has been thanklessly maintaining since 2003. Like SSH.
David: Yeah, I think his analogy is a bit overstated because he's assuming that, or in the analogy, there are dozens of different companies and every one of them needs continuous access. And if any one of them fails, the house collapses.
Matthew: Yeah.
David: I think that's an overstatement, that every one of those things is going to cause a total collapse.
Sure, there may be one or two or a few, but not every single one is going to lead to collapse if there's a failure.
Matthew: Only if we're lucky. They finished up the essay by stating that they want to force companies to build resilience into their [00:34:00] systems. They bring up Chaos Monkey and Netflix. They point out it requires leadership at companies to commit to long-term profits over short-term profits. And yeah, I mean, that's the case that they made.
So, I have a couple of discussion points I pulled out here. First is loosely coupled versus tightly coupled. As I was reading about this, this seemed like a tightly coupled system. A loosely coupled system is flexible, allows broad inputs with minimal dependencies, and tends to be very scalable. Whereas tightly coupled systems are rigid with strict inputs and outputs and strong interdependencies.
Where, and this is what we're talking about with the house analogy, a tightly coupled system is a bunch of stuff that is all very rigidly interconnected, and when you jostle something or break something, the whole thing falls down. Whereas loosely coupled systems handle it more gracefully.
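A toy contrast of the two coupling styles Matthew defines, in code: the tightly coupled version dies with its dependency, while the loosely coupled one degrades gracefully (the enrichment service here is hypothetical):

```python
def enrich_tight(event: dict, intel) -> dict:
    # Tightly coupled: any failure in the dependency takes this pipeline down.
    return {**event, "intel": intel.lookup(event["ip"])}

def enrich_loose(event: dict, intel) -> dict:
    # Loosely coupled: the dependency is optional; jostling it doesn't
    # collapse the house, we just ship the event un-enriched.
    try:
        return {**event, "intel": intel.lookup(event["ip"])}
    except Exception:
        return {**event, "intel": None}
```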
And I actually should have put this definition later; maybe I'll edit this out and put it down there, or actually talk about that later. So, market incentives. I would personally agree that market incentives don't reward resilience, at least not in [00:35:00] the type of resilience that he's arguing for. I mean, David and I have had lots of discussions about this in the past, but this current focus on short-term profit maximization so you can increase stock price does, I think, tend to lend itself towards who can move the fastest. Which I find vexing, personally.
David: Yeah, I wonder if that's got anything to do with stuff we've talked about in the past, about executive pay being tied to the stock price because of tax incentives and things like that, versus paying the executives in money. Because then the executives are almost entirely focused on the stock price, since that's tied directly to their compensation.
Matthew: I mean, so if we're looking at stock price, we were looking at this the other day, it was down like 40 percent or something. Let me go check real quick.
David: Yeah, I was really surprised that it was that big a drop.
Matthew: Yeah, so it looks like it's steadied out. So it's gone down 131 points, or 34 percent, over the last month, which [00:36:00] is huge. But that being said, the drop started before: it was actually on four days straight of decline before this happened on that Friday, which is interesting. I wonder why. Like the first 40-some points of decline were built in before this happened. It was at 343 when this happened, and now it's down at 256. That is a difference of 87, so that's only about 25 percent-ish, back of the envelope math.
David: I don't know. Check the news to see if there's some kind of announcement or something a couple of days before that. I don't recall anything.
Matthew: But still, this is a bigger stock price drop than we've seen for a lot of other incidents. Maybe the reputational damage is going to actually work this time.
David: Yeah. Like I said, I'm wondering how many people who were considering them are now going to back out of switching, or back out of their conversations with them.
Matthew: Mm hmm. I think that makes a lot of sense to me. All right. So his end goal is to convince companies to [00:37:00] deliberately break things and embrace inefficiencies, to make them more resilient. By embrace inefficiencies, I think what he means is, instead of those tightly coupled systems, taking the time to build it as a more loosely coupled system, taking time to do more of that testing, et cetera, et cetera.
David: Which he was saying was unprofitable in the article.
Matthew: Which we've seen companies make profitable. We've seen this a lot in fashion: you know, the fast fashion companies throw it out as fast as possible, they don't give a crap about the environment, they just do whatever. And then you've got companies like Patagonia, who, at least on the surface, I mean, I guess you can make the argument that it's all corporate whitewashing, but people seem to be willing to pay more for an article of clothing that they believe is made humanely and justly and out of recycled stuff.
I guess the question is, that applies to personal choice, where you can separate out the commercial groups. I don't know if that applies to companies. I don't think there's companies that are clamoring for, you know, [00:38:00] artisan handmade EDR.
David: Well, it's a matter of differentiation, right? So when I was taking business classes, they're like, you know, if you're going to get into a market, one of the things you need to consider is how you're going to differentiate your product from somebody else's product. And, you know, one of those differentiations in this software context might be that it's going to be deployed bug free, or something along those lines, right? It's gonna be tested, we're gonna
Matthew: It's going to be tested.
David: You know, so that could be one of the differentiators, to say that we're gonna have to do fewer updates, fewer patches or whatever, because our software development is this much better, or something along those lines. Some other way they're gonna differentiate.
And it could be along those lines of what he was saying is unprofitable. Those could be the things that make it profitable, because that's how you differentiate that business from one of your competitors.
Matthew: That'd be interesting. That'd actually make for a more interesting decision point, because frequently [00:39:00] decisions for these things are made purely on cost, or they're made on capability. But what you're suggesting is that companies have the ability to add, like, our product is a little more expensive, but we are guaranteeing that your systems will not go down.
David: Right. Yeah, exactly. Just as a simple for instance: how much time does your IT organization spend patching operating systems right now? Imagine that an OS vendor came out and said their differentiation was that you are only going to have to patch once a year,
Matthew: Hmm.
David: you know, and all those man hours you spend patching and vulnerability scanning and all this other stuff,
you simply are not going to have to do with our product, because that's how much better we are. You know, I'm not saying that's technically feasible or anything, but I'm saying that could be a differentiator, where the excess money [00:40:00] they spent building a solid product may equate to more customers, because those customers see the value in purchasing that product, since they don't have to spend X number of man hours doing patching and vulnerability assessment on that product now.
You know, they're recouping that cost somewhere else. And that's, and that's, that's the, one of the great things about the market is those kinds of things can happen. And I think this article is over catastrophizing the whole thing. Because he, you know, he's saying they should deliberately break things.
I would say this is, this situation is an example of things being broke, right? Which may lead to better resilience. It's just not taking place at the company or took place after the project was shipped. But breaking things and learning from it is something that we're doing now. Right.
Matthew: Testing, testing. Yeah, we test in prod here.
David: I mean, sometimes we look at a situation like this and we say, you know, this is a terrible failure, and we look to blame someone or something, when this may be just how [00:41:00] it should work. Maybe this is how software development should work. You know, there's a certain cost to write software.
There's a certain cost to purchase it. There's a certain cost to implement it. And when it fails, there's a certain cost to the failure and its recovery. Right. Now, hopefully we're going to learn lessons from this, and those are going to reduce the likelihood of this happening in the future. What would it look like if all those costs were precluded from happening, for this software, for all software? That cost may be too high.
And you know, we should, of course, try to do better, but we can't expect perfection for the expense that it would cost for those things not to have happened, or for those things not to be a risk.
Matthew: Yeah.
David: And one of the things that he doesn't really mention here is over-consolidation, right? Because Microsoft Windows is 73 percent of the computer OS market. You know, we can argue why, but it's certainly not [00:42:00] because Windows is the best operating system. And quite frankly, I think it's getting worse. Windows is not improving with age.
Matthew: No, no it's not. That's actually kind of interesting. So, thinking about the over-consolidation point you're making, I remember back in the day when McAfee was working on that layer that would connect various tools together,
David: yeah,
Matthew: the exchange layer or something.
David: The Threat Intelligence Exchange, I think.
Matthew: Can you imagine a situation where the signatures are divorced from the software, the detection engine, and you can choose which detection engine product to run? You could be like, all right, we're going to choose this really robust detection engine that's got a lot of error handling and a ton of testing,
and we're going to run that on our business-critical servers. But then on laptops that are less important, we're going to deploy this more mass-market [00:43:00] one that's a little faster and a little more oriented towards that. And then you feed the signatures in separately.
David: Yeah. So you have signature creation companies
Matthew: Yeah. Which we're starting to see now. We talked a little bit about Anvilogic and SOC Prime and stuff like that. They're more in analytics than detection, as we talked about last week. But I think that's one of the main points of frustration right now: every EDR company maintains their own detections, and they all maintain their own detection teams, and they don't all detect the same stuff.
David: You know, this may be one of those things where AI ends up changing the market. Where, you know, AI is doing that generic detection, and what you're doing is paying for the AI model and then subscribing to training sets or something like that to improve the AI's intelligence. And because it can consume any kind of training set, then,
you know, the AI engine is going to be completely divorced from the training [00:44:00] sets you use in order to teach it how to better prevent whatever's going on, malware or web traffic or whatever, in the environment.
Matthew: Because that's one of the only ways that I can think of for that differentiation. Like you were talking about before with the one-patch-a-year, really heavily tested thing, that would work really well for certain types of devices, like medical devices and manufacturing devices, where they're loath to interrupt anything, you know, for patching or anything like that. That's exactly what they would want.
But then when a company goes out to buy stuff, they don't want to buy, you know, three different EDR products and then try and merge all the signatures together to analyze them in a SIEM.
David: Right.
Matthew: So they end up just buying one and not installing the others, and it creates the monoculture you're talking about.
So if something goes wrong with CrowdStrike and all you've got is CrowdStrike, like, boom, everything's down.
David: Yeah. I don't, I don't know.
Matthew: Yeah, well, this is a case where the companies are never going to do that. [00:45:00] Cause they, again, this goes back to like, they want to be integral. They don't want to share the environment with two other different AV vendors.
David: Right. I mean, it's what they call, in economics, lock-in, right? They want to lock you into them because the switching cost is too high.
Matthew: Yeah. Nope. All right. Anyways, I'll back off my random speculation.
David: Yeah. We move on to your great quote.
Matthew: Oh, right. So I feel like the secretly best part of the Schneier blog is the comments. It's funny, because the worst part of YouTube is the comments, but for Bruce Schneier and some of these security blogs, there's some really good stuff in there. There was a really interesting quote here from Gerald Weinberg, who was a computer scientist at,
I didn't leave the quote up. He was a computer scientist.
David: Weinberg's law.
Matthew: Yeah, and he coined this law: if builders built buildings the way programmers write programs, the first woodpecker that came along would destroy civilization. Which is of course, you know, [00:46:00] in the theme, a wild overstatement. It would require at least a dozen woodpeckers, but it is, it
David: unless it was Woody. Cause Woody would do it all by himself.
Matthew: Yeah. What an overachiever. No, but like thinking about some of the wild things we've seen in the past couple of years, you know, like SSH under-maintained, and companies relying on open source software, and, you know, big companies making stupid mistakes like this. We didn't even talk about how the mistake happened with CrowdStrike, because I'm sure everybody knows, but it's because they didn't actually test their detection signatures.
They tested the template, and then they would populate the template, and as long as the template looked correct, they were like, boom, let's go. Although their validator failed too. So just an epic failure. Yeah, I don't know.
David: Well, I think the flip side of his law is, you know, maybe if programmers were held to account for failure, like bad builders are held to account, maybe there would be less
of it. [00:47:00]
Matthew: We'd have the housing crisis, but for software. That'd be interesting.
David: But I think there has to be a middle ground there between, you know, the move fast and break things, and being sued into oblivion for errors in code, though.
Matthew: Yeah. Well, it depends on how critical the system is. Like if it's a video game on your iPhone, move fast, break it all, and nobody cares. Nobody likes those games anyways. But if you're building EDR, if you're building, you know, deep-in-the-OS, kernel-level stuff, like, yeah,
David: right?
Matthew: be a little more careful than that,
David: Yeah. One would think,
Matthew: but I'd be wrong. So that's
David: Well, the thing is, what we're talking about here is that, because of the EULAs and all that, bad programming is not punished in any way.
Matthew: No.
David: And I think, like I said, there's, there's got to be some middle ground there because I think if you over punish it, then we're going to miss out on some great products because people are going to be afraid to build them.
But if you don't punish them at all, then you end up with [00:48:00] crap.
Matthew: We've got nothing,
David: No, that's typical for you.
Matthew: All right. So, what should you do about it? The only real comment I have here is that every company needs to make their own decision, looking at how resilient they want to make their environment and their products and all of that.
David: Now, something else to consider there is, you know, the role that third parties play in your resilience, and verifying that they are performing their due diligence to the level that you're comfortable with, if their failure to perform due diligence is going to bring down your organization. But that's all the articles we have for today. Thank you for joining us, and follow us at SerengetiSec on Twitter and subscribe on your favorite podcast app. [00:49:00]