Episode Transcript
Transcript is AI generated and has errors.
[00:00:00]
Matthew: Welcome to the security Serengeti. We're your hosts, David Schwendinger and Matthew Keener. Stop what you're doing. Subscribe to our podcast. Leave us a lovely five star review and follow us at SerengetiSec on Twitter.
David: We're here to talk about some cybersecurity and technology news headlines and hopefully provide some insight analysis and practical application that you can take into the office to help protect your organization.
Matthew: Views and opinions expressed in this podcast are ours and ours alone, and do not reflect the views or opinions of our employers.
David: You know, my PICUS was blue once and no one wrote a report about it.
Matthew: Your what was
David: My ficus, my, my, my ficus, my ficus was blue.
Matthew: Woo. Thought we were going to have to age restrict this podcast. All right, so today we are talking about the Picus, the Picus Blue Report. I actually spoke with the Picus folks at Black Hat, and they confirmed that it is pronounced "Pick-us," which makes me so sad.
David: I'm still going to say it my way. For us, it is forevermore "Pie-cus."
Matthew: So David did most of the work on this one [00:01:00] and did all of the documentation, but I am stealing all of the credit here as usual.
David: it's more like stealing the blame, really.
Matthew: So this is the Blue Report. This is the second year of the Blue Report by Picus, where they look at the attack simulations from their security validation platform, which is killing me, because first it was breach and attack simulation, then it was continuous security validation, and now it's a security validation platform.
They use this to assess the real world performance of security products over 136 million different simulations. This includes results from their Attack Path Validation and Detection Rule Validation products. Detection Rule Validation is actually something I got to talk with them about at Black Hat, because I specifically asked them about it. You'll recall, David, when we talked about breach and attack simulation before, the question was: how do you connect this with your rules?
I asked them if they worked with CardinalOps, for example, to match their validations with the content rules being provided by CardinalOps. And they were like, [00:02:00] oh, we actually provide detection rules ourselves. So apparently they're getting into content development as well, and they test their own rules.
Frankly, I think it's kind of a conflict of interest there. I don't know that I would want the same vendor that provides my content to also be the one testing my content.
David: Yeah, there's a conflict of interest there. And I guess it, it really depends on how they're doing the actual confirmation.
Matthew: Yeah,
David: You know, if, if you could peek behind the curtain to see how they're doing the validation, it might be perfectly valid validation versus something that could be, you know fudged depending on the outcome.
Matthew: Studying to the test, or teaching to the test.
David: Right. Yeah. It's like the, what the heck was it? I think it was a Microsoft problem, where Microsoft had a bug and someone had submitted the bug to Microsoft and said, hey, this is a problem you need to fix, and here's my exploit code that demonstrates how you can exploit this [00:03:00] bug. And Microsoft, rather than fixing the bug, simply defended against the exploit. And he was like, oh, but if you change this, then the exploit still works, because you didn't fix the problem. All you did was prevent my exploit from working.
Matthew: That's about right. So we talked about the Blue Report in episode 137, which wasn't too long ago, but the problem there was we talked about it about eight months after it was released. This time, we're getting to it only a month and a half after it was released. And then there's the Picus Red Report from last year.
David: Yeah. Which is an awesome report. I mean, even now, I think that report would still be valid to go back and read.
Matthew: So there actually is a Picus Red Report 2024. I don't know how I missed this. I'm gonna have to download and see if there's any difference in this one from last year's,
David: Hey, we got subject for our next podcast.
Matthew: which is going to turn this into the biggest podcast.
David: We could turn it into the, into the reports podcast where all we do is read, you know, or summarize reports for people.[00:04:00]
Matthew: I could actually see some value in that. Cause some of these reports are trash and having somebody like explicitly come out and be like, this report sucks. Don't waste your time with it.
David: An AI could summarize it, but will it tell you it sucks? No.
Matthew: Probably. Eventually we'll just be replaced by AI. You know what, we have 147 episodes. I wonder if we're at the point where you could train an AI on us and just let it go.
David: Well, you could train it. You can easily train it on our voices, but could you train it on our sarcasm and, and wit? I don't know.
Matthew: Rapier wits. Turns out it's more like a cardboard sword wit. All right. You had a comment on the report organization.
David: Oh, right. So the way the report's organized, right up front they have two sections, which are key findings and key recommendations, and those two parts fit within seven pages. So you don't have to read the full 38 pages of the report, I think, in order to really pull out some [00:05:00] useful information from it.
You can go into the report to get the details or whatever, but if you don't have time for that, the first seven pages will give you what you need to fully understand the report's findings and what you should do about them.
Matthew: So that's always nice. All right. We are going to work our way through this son of a gun and talk about the methodology first, then we're going to go through some of their individual points, and then finish up with the recommendations at the end. So, methodology. They have a couple of definitions here. Prevention effectiveness, and they have the most redundant definition here: 80 percent means that 80 out of a hundred simulated attacks were prevented. The higher the number, the better. That's amazing.
David: That's more like the definition of a percentage.
Matthew: Yeah, yeah, that's ridiculous. And then they also looked at detection effectiveness, although this one looked at both whether the attacks were logged and whether the attacks generated alerts. The generated alerts thing, well, we'll talk about it a little bit later when we talk about the alert score, but that [00:06:00] one bugs me.
So they rank the scores from zero to a hundred percent, which makes sense. They call zero to 19 percent inadequate, 20 to 39 percent basic, 40 to 69 moderate, all the way up through managed to optimized, which is 90 to a hundred percent. These seem very arbitrary. I mean, it looks like they just picked out quintiles. And I don't know that 40 to 69 percent actually represents a moderate level of alerting or logging.
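As a reference, here's a minimal sketch of those bands as a simple lookup, assuming the 70 to 89 range is the "managed" tier implied between moderate and optimized (the hosts only name the bands on either side of it):

```python
# Sketch of the Blue Report-style maturity bands described above.
# The 70-89 "managed" band is an assumption; the hosts only name the
# bands around it (moderate tops out at 69, optimized starts at 90).

def maturity_band(score: float) -> str:
    """Map a 0-100 effectiveness score to a Picus-style maturity label."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 20:
        return "inadequate"
    if score < 40:
        return "basic"
    if score < 70:
        return "moderate"
    if score < 90:
        return "managed"  # assumed band, see note above
    return "optimized"

if __name__ == "__main__":
    for s in (12, 40, 69, 85, 95):
        print(s, maturity_band(s))
```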
David: Well, what I think they're trying to do here is develop some categorization for something no one's really categorized before. Cause I was thinking, when I was reading through this, it'd be nice if they were able to sync this up with CMMI or something as far as a maturity model.
Matthew: I mean, they're using maturity model terms, for sure.
David: Right. But it's not tied to something else that's a more robust definition of maturity. So I don't [00:07:00] know. I think it's a start anyway.
Matthew: I do too, but I would like to see, cause, zero to 19 percent inadequate? Yes, absolutely. But is optimized 90 to a hundred percent? I mean, we talked about this a little bit before: do we need to detect a hundred percent of things? I know that there's been some talk in some of the capability maturity models
where you don't actually want to go all the way up to the top, because going to the top makes it too rigid. I don't remember if you have seen this, but when they're talking about, like, SOC procedures, you don't want to go all the way to a five, where everything is very rigid and hidebound and inflexible.
David: Well, I think the issue here is that they're trying to say that if you hit these percentages, then that means you're doing this, this, and this. When I think really the problem is that if you're hitting these [00:08:00] percentages, it's assumed that you're doing this, this, and this, right? It's not a definitive indicator of that. So, like, if you go into the report, the description they have for optimized is: organizations with optimized security controls continually monitor, refine, and update them to keep up with the evolving threat landscape and
Matthew: not related to a percentage. Yeah.
David: And maintain their edge in exposure management. But if you go through the report, there are organizations that are really close to that, like 85, 86, right, that are very close to being optimized, and yet their other scores are much lower in a different category. Which seems odd, because I would like to think, anyway, that your scores would go up almost in a uniform fashion, where your detection and your prevention scores would be very similar, because the organizational processes that got you those scores are very similar.
Whereas this seems to be all over the place. So I think [00:09:00] what they're trying to do is say, if you have these scores, that means you're doing this, when really it's not exactly a correct measurement. It's not exactly arbitrary, but it's not a definitive rating of how well you're doing something.
If you get that score, it's almost coincidental.
Matthew: I agree. And actually, some people have started talking about capability maturity models for detection, and maybe that's a discussion for a different day. So let's not get hung up right here on this, cause we're never going to get through this otherwise. So,
David: which everybody's dying to do.
Matthew: I know, right?
Let's really dive into the depths of this. So, overall prevention and detection effectiveness performance. They measured log score, alert score, and prevention score, so we're talking about those here. The log score increased from 37 percent to 40 percent. Now, you will remember that this is using their tool.
So the sample here is companies that are using their tool. So I think it makes sense for the log score to increase almost 50 percent, [00:10:00] because they have, ostensibly, purchased this Picus tool to help improve their security posture
David: In theory,
Matthew: In theory. And this is moving the organizations from basic into, quote unquote, moderate.
David: Well, there's, you know, in these organizations, when they cross those thresholds, they throw a party like, oh, we moved into the, into the moderate zone. So,
Matthew: Yeah. What I would love to see here, cause they talk later about attack paths, what I would love to see here is super targeted recommendations, like, sure, you're only logging 40 percent, but if you log these three new sources or these three new event types, you would cover some
David: right. And
Matthew: of our tests that are, you know, right in the middle of the attack path, like an attacker has to go through these accounts or these domain controllers to get to your crown jewels, quote, unquote,
David: Picus reports have those,
Matthew: maybe.
David: which would be outstanding.
Matthew: Yeah, it's possible they offer this, maybe they just don't talk about it in this report. The alert score fell [00:11:00] to 12 percent from 16 percent, which is really weird to me, although I can think of a couple scenarios where this happens. The most obvious scenario is that alerting has gotten worse, which is kind of implied by this, but that doesn't seem right; people's detection logic doesn't just get worse.
And again, they ostensibly bought this tool so they can improve their scores. I think what's more likely is Picus has expanded their testing. For example, maybe they doubled the number of tests they have, and the detection controls only detected a little more. Say they had 50 tests last year, and you detected 8; that's 16 percent.
This year, there's 100 tests, and you detected 12, so numerically, you detected more than you did last year, but because there are more tests, your percentage goes down.
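As a rough illustration of the denominator effect Matthew is describing, using the hypothetical test counts from his example rather than any figures from the report:

```python
# Illustrative only: the alert rate can fall even though the absolute number
# of detected attacks went up, if the number of tests grows faster.

def alert_rate(detected: int, total_tests: int) -> float:
    return detected / total_tests * 100

last_year = alert_rate(detected=8, total_tests=50)     # 16.0
this_year = alert_rate(detected=12, total_tests=100)   # 12.0

print(f"Last year: {last_year:.0f}%  This year: {this_year:.0f}%")
# Detections rose from 8 to 12, but the rate still dropped from 16% to 12%.
```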
David: Yeah, I think that's probably, that's probably exactly what happened.
Matthew: That makes the most sense to me. Cause the other thing I thought of was maybe Picus reworked their tests to be more devious and less detectable, and there might've been some of that too.
Maybe some of their tests were really obvious.
David: I mean, what that also shows, though, is that [00:12:00] the number of alerts they're creating is still stagnating, or, well, it's gotta be stagnant. You're right. It's not going to fall back unless people are disabling a bunch of alerts.
Matthew: Yeah. Yeah.
David: But I think it's just evidence of stagnation.
Every year, or every day, you're getting more and more different kinds of attacks, and the volume of attacks is increasing. But if you never create more alerts to detect the new attacks, then you're not going anywhere. You are getting worse.
Matthew: And that, I mean, and I wish that they had made that explicit. Because otherwise this almost just looks like people's alerting just got worse without that context.
David: Instead of just lazy,
Matthew: It's just lazy. Yeah, yeah. So this means though, 12 percent means that less than 1 in 8 attacks trigger an alert. I assume by alert they mean in the SIEM because later they talk about a much higher percentage of things are [00:13:00] prevented.
So, alright, yeah. Because otherwise that would make no sense. But this goes back to the percentage thing again. It makes sense not to alert every time something is blocked. Like, if your organization had a hundred percent blocked, a hundred percent logged, and a hundred percent alerted, you'd be like Target back when they got breached,
Cause they were ignoring alerts because there were so many alerts.
David: Well, the thing is, if you're blocking, and that is a prevention, then in my opinion, most of the time you should not generate an alert based on that, because there's no action for you to take. The action's been done, right? So if you've got 100 percent of stuff being blocked or prevented, then your alerts should be zero.
Matthew: I think we talked about this before too. Now that we're talking about it, I'm having deja vu. I wonder if those numbers should add up to a hundred percent, like prevented and blocked should add up to a hundred percent. Or prevented and alerted, [00:14:00] sorry.
David: I would say in theory, right? Well, ideally your prevention and alerting should add up to 100 percent, but absolutely your prevention and detection should add up to 100 percent, so that anything that's not prevented is at least detected, and you can go back and find it in the logs if you need to, even if you haven't generated an alert on it. Ideally, you know, you would have an alert on all of those, though.
Matthew: I do think, though, that there are some times you want to detect blocked things. I agree that most of the time you don't. Like, if it's scan activity from the outside, you don't want to detect that. That's boring, there's no action. But if you've got, like, command and control detections coming from an internal
David: Mm hmm.
Matthew: You do probably want to detect on that, even if it's blocked,
David: Yeah, right. Well, I would say in that scenario there are two things happening there. You've got the prevention of the devious activity, but you also have the detection of the fact that something went wrong and it got there in the first place. So I'd almost call those two different actions or two different activities, which of course are not going to [00:15:00] show up that way in the SIEM or whatever; they're going to show as one.
But I would say those are two sides of one coin there.
So everyone seems to be in the middle, or low, on logging. You know, if you look at the raw numbers, you've got 57 percent on logging and your alerting is at the bottom. So I'm inclined to say that it would actually be better if those numbers were flipped, where your alerting was at 57 percent and your detection was at 12 percent or whatever. Because the way I'm looking at it is that if you've got 57 percent alerting, then you may have fewer alerts, but at least all of them, or most of them, are detecting something. Whereas
when you've got a high number of detections and a low number of alerts, you're logging a whole bunch of stuff but you're not alerting on it, so a whole bunch of stuff is being wasted. So it would almost be an indicator that you're better off with fewer alerts but overall better [00:16:00] quality alerts that trigger more frequently, or more accurately, than with a high number of detections. I'm not saying that's what you should shoot for. I'm just saying that if you were to look at the raw numbers and flip those, comparing one organization that had detection at 57 and alerting at 12 against an organization that had alerting at 57 and detection at 12, the one with higher alerting would probably be better protected than the other, I guess is what I'm trying to say.
Matthew: Yeah, I don't think there's a direct relationship between them, because there are certain controls where it's probably more important to have logging. Let's say Active Directory.
In this example, let's say their controls cover 10 log sources, so Active Directory represents 10 percent of the logs. If you have Active Directory in there, you've got 10 percent. But 30 percent [00:17:00] of their attacks involve Active Directory, so if you've got Active Directory logs in there, you can detect and alert on 30 percent of the attacks.
Because so many attacks are privilege escalation and lateral movement and stuff like that. So I don't know that those can be directly related, because my first thought in response, when you were saying that, was we can't switch them, because if your logging percentage is so low, then you can't alert on it.
But then I realized that that's not necessarily true because those aren't related.
David: Well, that's not quite what I was trying to say either. What I'm trying to say is that, looking at it as a gross indicator of competency and protection level, if you compare one that has the higher alert percentage and the lower detection against one with those flipped, side by side, which one would you think is better off from a protective standpoint?
Matthew: I,
David: you know, so it's just like a broad indicator of quality versus [00:18:00] a definitive indicator of quality.
Matthew: Yeah, I agree. This is a weird one. Some of these things are really interesting, but just not very useful, some more than others. They did have an interesting quote here: quote, more logs do not necessarily equate to more visibility or better security outcomes, end quote.
David: Yeah, I think, and this is a holdover from the, the whole philosophy of log everything.
Matthew: Which I will admit, I'm a huge proponent of, unfortunately. I get that it's not realistic, but I just, I do wish that we could log everything.
David: yeah, well, the thing is that you can log everything, but if you're not generating alerts from those same logs, then what's the point?
Matthew: Yeah, you're not wrong. I just, I just want it all so that I know.
David: Yeah, you need to prioritize the logs and bring them in if you have use cases for those logs. Otherwise, what you've done is essentially pointless, right? You're logging all day, but you're not alerting on it. It's only really good for forensics at that point. So sure, once you eventually find out something bad has happened, you can go back and [00:19:00] find it in the logs.
But that's not great.
Matthew: Yeah. And given that my history is in IR and digital forensics, it makes sense that I am all about having all of the logs. But it does allow us to say how busy we are. We can say we have onboarded X new log sources, and then leadership can go ask the threat detection engineering team, like, hey, what rules do you have on these sources yet?
And they go, we don't know, we didn't know they were onboarded. But stuff like this report, or a tool like this, could be great for figuring out which logs to prioritize. It should be able to tell you, you know, X number of attacks use Windows OS logs, so those need to come in first.
There are some things that we have a lot of clarity around. Like, if you look online, there's tons of data; there's a whole website built around Windows operating system logs. I went looking for this a couple of years ago and found a website someone had built out that told you, like, these are the important logs to have for [00:20:00] forensics,
these are the important logs to alert on, et cetera, et cetera. I haven't seen that for, like, AWS and Azure logs, or application logs. That stuff's just not there yet, or I can't find it; it's possible it's there and I just haven't found it yet. So a tool like this, showing you specifically which attacks generate which logs, is super helpful for figuring out what logs you need to get into your SIEM and what alerts to build.
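A minimal sketch of that prioritization idea: rank candidate log sources by how many simulated attack techniques they would make visible, rather than by raw volume. The technique-to-source mapping below is invented for illustration; in practice it would come from a BAS tool's results or ATT&CK data source mappings.

```python
# Illustrative sketch: prioritize log sources by attack coverage, not volume.
from collections import Counter

# Which log source would have evidenced each simulated technique (hypothetical).
technique_sources = {
    "T1078 Valid Accounts": ["active_directory"],
    "T1021 Remote Services": ["active_directory", "windows_os"],
    "T1059 Command Interpreter": ["windows_os", "edr"],
    "T1567 Exfil to Cloud": ["proxy"],
    "T1110 Brute Force": ["active_directory"],
}

onboarded = {"edr"}  # sources already in the SIEM

# Count how many currently-invisible techniques each missing source would unlock.
coverage = Counter()
for technique, sources in technique_sources.items():
    if not any(s in onboarded for s in sources):
        for s in sources:
            if s not in onboarded:
                coverage[s] += 1

for source, n in coverage.most_common():
    print(f"{source}: would add visibility for {n} technique(s)")
```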
David: And that's in the report, whenever there's a red indicator for an attack, to say, hey, this is red because you don't have an EDR alert for this, or you're not logging this Windows event, or something else. That's in the report for these.
Matthew: It can be. I have only seen a detailed report from one of these products, and yeah, it would tell you, like, we ran this test at this time, we ran it from this IP to this IP address, and you should expect to see a log that looks something like this in there.
David: And that includes all your security controls [00:21:00] then also. So we would expect to, expected to have seen a Windows event. We would have expected to see an EDR event. We would expect to see a firewall event. All of those would be listed in there based on your control stack.
Matthew: I don't know about that. It's probable that they don't include everything. They probably don't have full awareness; the companies that make this probably can't cover, you know, every single tool. Yeah,
David: They know what tools you have, you know, because if you've done all the integrations, then it's going to know that an EDR event was not triggered. So it could say there was no CrowdStrike event for this, and you should have a CrowdStrike event for this, because that would help you remediate. Because I would think that, ideally, this would be tied into your ticket management system too, right? So you'd run the scan, it would say, okay, you've failed this, this, and this, and it would kick off tickets to the appropriate group to create the content for [00:22:00] those misses.
Matthew: that doesn't make a lot of sense.
All right, next section is real world performance of cybersecurity products. They have a comparison here, and they mentioned that a number of security products have been tested against MITRE Engenuity ATT&CK Evaluations and have achieved a hundred percent in prevention and detection coverage, but they said that they didn't actually find any tools that were able to successfully detect or prevent a hundred percent of attacks in the real world.
They have a scatter... oh, sorry.
David: I was just going to mention the scatterplot. The scatterplot diagram they have in here looks like a shotgun blast between zero and a hundred. So it's not like, sure, they didn't hit a hundred, but they were down at 90 or 95. They are all over the place as far as their failure rate.
Matthew: That diagram doesn't make a lot of sense to me. My assumption is that the x axis is the product and the y [00:23:00] axis is 0 to 100. They don't appear to be sorted. My wife, actually, we were talking about this as we were walking, and she suggested the products on the bottom were probably in alphabetical order, but then they just removed the labels on the bottom, and that explains why it's like 10, 90, 80, 70; they're not arranged in any discernible order. And if you assume that the two charts are arranged in the same order, you know, the same product that is on the far left of one is on the far left of the other, then it makes even less sense.
For example, about one third of the way through the graph, under prevention, there is a cluster of three items; one of them is near 100 percent and the others are like 90 and 85.
Matthew: But if you go directly below to the detection score, right in that level, it's all the dots are between 50 and 75%. So you're saying that there was a tool that prevented nearly 100 percent of your attacks, but only [00:24:00] detected 60 percent of the attacks? That doesn't make any sense. Your detection score should always be higher than your prevention score.
You can't prevent it if you can't detect it.
David: Well, unless it's what we were talking about before, where the prevention event simply wasn't logged.
Matthew: Yeah, I, maybe, but I feel like I'm taking crazy pills here. This makes no sense to me, but yeah,
David: Well, you are taking crazy pills, but I'm not sure that's, how that's impacting this.
Matthew: So, I mean, if you look at the detection score, it looks like the vast majority of them are between zero and 25 percent; it looks like there's a huge clustering there. And if you look at prevention, it looks like the main line is kind of clustered between 50 and 75. This whole thing is confusing, so I would love to see the data underneath this.
Cause I think their definition of detection is not the same as my definition of detection. That's the only way this makes sense.
David: You know, maybe we should go to their site and find the feedback and, you know, write a [00:25:00] dissertation.
Matthew: I would love to have someone on and really dive in. All right. Anyways, the only thing that these graphs illustrate is that these products are all over the map in terms of detection and prevention, which I guess is a good point. It's hard to take action on it in a specific way.
David: Well, I mean, based on what we're seeing here, and we also don't know what their output reports look like, so it's hard to say, but just based on the numbers between 2023 and 2024, it doesn't give me a lot of optimism that the output of these reports is really being actioned to any great degree.
Matthew: Yeah, there's that. I would love to actually see an example report; I wonder if we can get one. Cause I would love to know which of these tools scored in, like, the 10 to 25 percent range for detection. This actually seems like valuable information in and of itself, the tool detection and prevention scores. I wonder if they could start a sideline selling this data.
Ha!
David: Well, I think that's something you and I talked about before. [00:26:00] The thing about what we're looking at here is, we would like to assume that the graphical output, these scores and everything, are the result of conscious decisions by organizations about what they're going to detect and what they're going to alert on, when we're probably more likely looking at not the conscious decisions of the organizations, but the conscious decisions of the product vendors for their default configuration settings.
Matthew: Oh, no, I agree with that. That's, that's, that's the part that I'm saying is actually valuable to sell. Because like, if you're looking at trying to purchase a new EDR tool, I would want to know which one comes most secure out of the box. I think that would be interesting and worth spending money on.
David: Yep, I agree. And the thing is that if they did this correctly, you could sell it two different ways, right? You could sell it to people on, hey, the default out of the box for this EDR is better than the default out of the box for that EDR. And then they could also sell it back to those particular vendors, saying, hey, [00:27:00] you could greatly increase
the protection of your customers by changing these default configuration settings.
Matthew: Yeah that would actually be really interesting if they found a specific company, for example, was using the same tool, but had a much better rate. Yeah, that'd be interesting.
David: Well, I mean, the assumption there would be that that company has made those changes.
Matthew: Well, yeah, yeah, that's what I'm saying. That's how they find. They go talk to that company and figure out what changes. I was agreeing. I was agreeing.
David: No, I was just thinking that maybe that would be something they could get in the license agreement or something, to say that Picus can consume those config changes.
Matthew: And roll
David: They can get a report back on the config changes automatically, or something like that, from the company.
Versus having to go in and ask them, you know, having to interview them or something to get that output. You know what I mean? Cause you want to automate that as much as possible and reduce the human [00:28:00] interaction for it. So if they could not only understand that this company is running
CrowdStrike and that company is running CrowdStrike, but also that company A has got a greater success rate than company B, they could automatically consume the configs from both and do a compare and contrast to find out what is better.
Matthew: Yeah. I agree.
David: So I think a lot of companies might be, you know, reluctant at best to do that though.
Matthew: So, they explain this variability by several factors, including environmental configuration differences based on network architecture, compliance needs, and user behavior; deployment nuances, like where you're putting it and how it's interacting with other security tools; integration-specific configuration settings; and of course the fact that the threat landscape is always changing.
So they recommended continuous validation, which I am all for, and which, in fact, they deliver as a tool. That almost sounds sarcastic when I say it, but yeah, surprise. [00:29:00] And ongoing fine tuning, but of course the big question there is, with what budget? Because, as we all know, many companies do not budget for care and feeding of their tools.
They just buy them and put them in place. Yeah.
David: There's only money to buy the license. You didn't know that, Matt?
Matthew: So the next item they talked about is defensive gaps that they found with automated penetration testing, which, again, automated penetration testing is definitely being folded into this vertical. Their Picus Attack Path Validation is a cutting edge automated penetration testing solution which identifies the shortest paths that attackers might exploit to gain domain admin privileges.
They were able to successfully achieve domain admin in 24 percent of the tests, which actually is interesting. I'm going to skip a couple of items here, because they also said that in 25 percent of the tests, they were able to crack user password hashes. So if they're saying that in 25 percent of the environments [00:30:00] tested they can crack a password hash, and in 24 percent they're able to achieve domain admin status, that means they have a nearly 100 percent success ratio if they can crack even one password hash.
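A quick back-of-the-envelope check of that claim, assuming, as Matthew does, that the domain admin successes happened only in environments where a hash was cracked (the report itself doesn't state that):

```python
# Rough arithmetic behind the "nearly 100 percent" claim. This assumes the
# 24% of tests that reached domain admin are a subset of the 25% of tests
# where a password hash was cracked, which the report does not state outright.

hash_cracked_rate = 0.25   # tests where at least one user hash was cracked
domain_admin_rate = 0.24   # tests where domain admin was achieved

conditional_success = domain_admin_rate / hash_cracked_rate
print(f"P(domain admin | hash cracked) = {conditional_success:.0%}")  # 96%
```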
David: Oh, right. Yeah. And they didn't say crack an administrator password, either.
Matthew: No, they said anyone's
David: Anyone's password.
Matthew: So that is a wild claim, and actually, now this is another thing that I would love to ask somebody specifically and be like, so tell me, this seems interesting.
So, in these assessments, they found that in 40 percent there's at least one instance of domain administrator access, and then it just goes blank. It looks like that is an incomplete copy paste.
Alright.
Hmm. This doesn't make sense, actually. So the next item we had here claimed that penetration testing assessments performed by Picus APV revealed that in 40 percent of tested environments, there was at least one instance where domain administrator access was achieved.
David: Yeah. That's why I had that other note [00:31:00] is what's the difference between the 24 percent
Matthew: because they said
David: That's why I had the note in there about that because they seem to be saying the same thing with different percentages. I didn't understand what the difference was.
Matthew: I agree. Yeah, they're saying they found it in 24 percent of the tests, and they found it in 40 percent of the environment. Maybe there were multiple tests in the environment? Because one of them is
David: that's the difference. It's, it's based on the organizations.
Matthew: gotcha. Well, that's
David: So 24 percent of the tests across all organizations, but only 40 percent of the organizations, I
Matthew: We need to talk to them. We could write this report better.
They should give us access to the data and then we can write the report.
David: yeah. Well, write to them and say, Hey, can we get access to your Google doc that you used to,
Matthew: Google doc, and all the data is in a Google spreadsheet. All right. So this attack path thing is interesting, and I think this is something a lot of orgs don't look at. There are attacker tools like BloodHound that look at it. And I recently looked at some cloud tools that include this as part of their findings. It's a little bit different than the Azure AD or Active Directory related attack paths, but they will look at and call out if there is a vulnerability on a system that's exposed to the internet, and it'll kind of walk through how it's exposed to the internet, how an attacker can get there, and what vulnerabilities they'd exploit.
So I think that's a super useful concept.
So typically these attack paths begin with actions like dumping a user's hash and cracking it, followed by privilege escalation, then lateral movement using the newly obtained privileges. And it's interesting how they are doing this all automatically, leading to a full compromise of the network. I don't think we've seen, at least it hasn't hit the major news, any attackers using these types of automated cracking, privilege escalation, and lateral movement chains yet.
Have we,
David: None that I can think of off the top of my head.
Matthew: I know, right? That seems weird to me. It seems like the defense is actually ahead of the attackers [00:33:00] for a brief moment in time. Although we have seen, I know CrowdStrike semi-famously recently came out and said that the average ransomware time was down to like 36 minutes or something like that.
David: Well, I wouldn't say we're ahead, considering the scores we're talking about here. Cause it looks like they're getting away with doing this full chain, right?
Matthew: Yeah, that
David: so I don't think we're really ahead.
is fair. That is fair. So their recommendation here is organizations should focus on the critical areas of privilege escalation, credential access, and lateral movement. It's helpful that they provide all this. And I think this is a point, too, where you can use attack paths to identify and stop privilege escalation, and to find the points where the attacker can be most easily detected and stopped.
I would bet if you looked at your environment, you'd find there are a couple accounts that have admin credentials or domain admin credentials that just aren't as well secured as the others. And if you get those [00:34:00] remediated, you make it a lot harder.
David: Right. Almost like choke points or, you know,
Matthew: Choke points. I was trying to think of that word earlier today and I could not think of it. I'm getting old.
David: Getting there.
Matthew: All right. Next item was detection rule effectiveness. They listed out a bunch of different problems that they found that led to failed logging, failed monitoring, and failed detections. Improper log source consolidation was the top one at 23 percent. This was around event coalescing and reduction in event volume by bringing it all into one.
I think they're talking about data models in Splunk and stuff like Cribl, where it's a preprocessor, although that's built into Splunk now, and you can reduce the volume before it comes in. If you screw that up, I can definitely see why that would break stuff. I didn't realize even 23 percent of people were using that stuff.
They're suggesting disabling the event coalescing, which is a little bit of a wild recommendation. How about you just fix it? Log source availability combined was 15 percent, between broken log sources and unavailable log sources, either it's disabled or it [00:35:00] was erroring out. And I'm kind of shocked that these two are this low.
For a while I was a SIEM admin, way back in the day, when I was working as a government contractor. SIEM admin is probably a strong word; I was working with the SIEM admin, but he made me focus on the broken log sources, and it was just a constant headache. Every single day I was fixing broken log sources.
But maybe the automation to correct them has gotten better.
David: Oh, maybe
Matthew: Maybe. And then the next set is a bunch of different performance related problems, and I think that some of them can be dropped at the feet of Sigma. Stuff like Sigma, and Splunk content now shipping with Splunk as default rules, are in a lot of ways a huge boon. We've talked before about how content shouldn't be bespoke and how everybody should be running mostly the same detections, because attackers are using the same techniques, et cetera, et cetera.
But the problem is, a lot of these rules, if you just download them from a Sigma repository and slam them in place in your environment, they're not going to work. The Splunk ones, [00:36:00] for example, rely on all of your data models being properly filled out, and if you're not using the data models, or you don't have them set up in exactly the way that Splunk expects, the rule is just not going to work.
I don't know if that's true or not, but that was the first thing that came to mind for this.
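A minimal sketch of that concern, not any vendor's actual API: before enabling a community rule, check whether the fields it keys on actually show up in your normalized events, because a rule that references fields your pipeline never populates will load cleanly and simply never fire. The field names and sample event below are invented for the example.

```python
# Illustrative pre-flight check for community detection content.

def missing_fields(required: list[str], sample_event: dict) -> list[str]:
    """Return the rule's fields that are absent from a normalized sample event."""
    return [f for f in required if f not in sample_event]

# Fields a hypothetical downloaded rule keys on.
rule_fields = ["process_name", "command_line", "parent_process"]

# A made-up event as it actually lands in the SIEM after parsing.
event = {"process_name": "powershell.exe", "command_line": "IEX (New-Object Net.WebClient)"}

gaps = missing_fields(rule_fields, event)
if gaps:
    print("Rule will load but never fire; unmapped fields:", ", ".join(gaps))
else:
    print("All fields the rule references are present.")
```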
David: that makes sense though.
Matthew: No, I
think finally before, Oh no, there's two more, three more. Oh boy.
David: Yep. Well, this stuff is just, like, FYI, neat, not really terribly useful, but it's something that you always see in these reports: the statistics by industry, region, attack vector, MITRE tactic, and threat group. So we're not going to dig real deep into all of these; we're just going to highlight a couple of things that seem to jump out in these different sections.
Call them honorable mentions, if you will. So, in the performance by industry, in the terrible detection section, you've got transportation at 10 percent detection and government at 19. And the next worst one, when you go above government, is at [00:37:00] 45. So it's not even like they're strung across the whole gamut.
It's like, you've got these really terrible ones, then you've got mediocre, then you've got better. And when you get to the alerting, based on what we talked about before, that across the board everybody's at 12 percent there, they're just about uniformly terrible at alerting, except for aviation, which was decent.
And a couple of things to note here: banking got three times worse this year than it was last year. So they're doing more than stagnating, I think, over there.
Matthew: That blows my mind. This is the alerting, right? So this goes back to kind of what we talked about. They would have had to have made no changes to their alerting, and they would have had to
David: they would have had to
Matthew: They would have had to triple their tests, while the banking sector made no changes to their alerting, for them to have not actually gone backwards. That is wild.
David: And [00:38:00] along with the alerting, you know, we talked about transportation and government in detections; in alerting, transportation's at 2 percent and government's at 7 percent. So they're not detecting and not alerting in either of those two industries.
Matthew: Blows. My. Mind.
David: Nothing really interesting in region. You go down to attack vector, and everybody's terrible at data exfil identification, at 9 percent.
Matthew: It's not really a surprise.
David: and the next worst one is web app at 51. So that's another, you know, big jump between the differences there.
Matthew: So what's interesting about that, hold on, before we jump to the next part: web apps are one of the top vectors for attacks. So I find it super interesting that everybody's bad at that. That and known credentials are, like, the two top ones for how bad guys are getting in these days. That's just interesting. That's very, very interesting to me.
David: Now, when you get into the prevention effectiveness by operating system, macOS [00:39:00] endpoints were at 23 percent, Windows is at 62, and Linux at 65. So Macs are roughly three times worse at prevention than the other operating systems. And I can think of a couple of reasons for this. You've got the myth of Mac invulnerability, where people just aren't as concerned about the security of Macs because there's this mythology that the risk of being compromised on a Mac is low. And the thing is, it's a myth and not a myth at the same time, because the reason that myth is even there is simply market share. The share of Macs is just so much less than Windows that they aren't the focus of attacks. Another reason is there are fewer Macs, so there's a reduced focus on prevention for those Macs in any organization.
So in general, there are a lot fewer Macs than Windows systems in the organization. The last one is lack of corporate support. In a certain [00:40:00] organization I was at, they had dozens of Macs but no Mac technicians at all. When I got there, I became the Mac technician, even though I wasn't even in IT. And IT refused to support the Macs; they said Mac is not a supported operating system here, we understand that parts of the business need it to do their work, but we don't support it.
Matthew: Like, screw you guys.
David: It was just you know, unbelievable.
Matthew: Yeah. I think that makes sense. I think it's that last one. I've definitely seen that in companies I've worked with in the past, little to no support for Macs.
David: But I will say years later, they did turn around and start supporting, but it was years later, you know, finally I was able to get out of that.
Matthew: Only after you left.
David: Actually, it was right before. But the MITRE tactics, nothing really to say there. Threat group, also nothing really to say there.
Matthew: Sad day, sad day. All right, last bit here before we get to the key recommendations, assuming anybody's still listening, [00:41:00] is the spotlight on ransomware attacks and vulnerabilities. They had a list of major ransomware groups, and they would tell you how many of the organizations would successfully prevent the ransomware based on the TTPs of that group.
BlackByte was the best, or worst, depending on how you're looking at it. Best, I guess, if you're...
David: you're an attacker.
Matthew: The attacker, right. Only 17 percent of organizations were successful in preventing it. Now, what I wonder about that 17 percent is, does that mean that every single TTP was blocked, or that they blocked a single key TTP around lateral movement?
Like, how are you defining blocked here? Cause
David: Yeah. You'd have to ask them. Cause looking at that from the outside, I would say, okay, 17 percent did not end up getting encrypted, you know? Cause that's the bottom line, success or failure, when you're talking about ransomware.
Matthew: I don't know if that's true, cause attackers have the ability to do different things. If they tried something and it failed, they could just turn around and try, try, [00:42:00] try again. So, yeah, I don't know. I'm kind of shocked that some of this ransomware is so rarely caught, but I guess I shouldn't be; these are the folks that are currently most active.
They're constantly trying to keep their tactics updated.
David: Hmm.
Matthew: It doesn't make sense to me.
All right.
David: And the vulnerability section, there's nothing really to report in there. I mean, it's kind of interesting that they list the 10 least caught vulnerabilities being exploited, but other than that, nothing really to note in the vulnerability section.
Matthew: How boring.
David: And now we get to the exciting part, which is the key recommendations.
Matthew: Really? You thought this was the exciting part? I thought this part was boring. Well,
David: I was quite thrilled by it. I
Matthew: I am glad that you are happy about it.
David: Hmm. Liar.
Matthew: Caught me. So their number one recommendation is adopt a proactive security mindset.
David: Done.[00:43:00]
Matthew: Yeah, and in one of the earlier points they talk about, corporations should adopt an assume breach mindset. Honey, you're like 10 years too late.
David: Right. I've been talking about that forever now.
Matthew: Yeah. Item number two, implement continuous threat exposure management. To continually identify, prioritize, validate, and fix your exposures.
David: Yep. Good advice.
Matthew: Enhance your detection and prevention mechanisms. I prefer to enhance my calm, but
So they talk about optimizing the entire detection engineering pipeline, including log collection, performance, alert mechanisms, and regularly reviewing and updating detection rules. All great things; again, I'll wait until we double or triple our budget.
David: and of course strengthen your, your ransomware defenses.
Matthew: Eh, you know what? I'm not worried about ransomware. I'm just going to ignore that.
David: Yeah. So, and typical ransomware stuff, effective backups, endpoint detection controls, you know, and obviously regular simulations,
Matthew: Which is of course what they sell, [00:44:00] so definitely do that. Improve endpoint security configuration: make sure security controls on endpoints are properly configured, put your EDR tools in place, conduct regular audits and endpoint security assessments. I'm sensing a pattern here.
David: I'm missing it. You're going to have to spell that out for me. So, then, enhance log management and analysis. So fix all the stuff they listed above, in the same section, that was a problem.
Matthew: And prioritize password security: implement strong password policies, confirm that your password hashing methods are robust, and regularly audit and enforce compliance with password security policies. All good stuff. None of these are really...
David: It's not mind blowing,
Matthew: Yeah, yeah,
David: But I mean, it's pretty rare to see anything new. It's not like all this stuff hasn't been said a dozen times before.
Matthew: You're not wrong. This is honestly getting kind of vexing to me at this point. Like, we know how to secure stuff. None of this is a surprise. None of this is like, I can't believe it, I [00:45:00] had no idea how to do this. We know it all
David: right? Yeah. It's
Matthew: suck at doing it
David: Well, that's where these recommendations need to move to: away from saying do this, this, and this, to this is how you get this done with these resources and this budget. And because that's so hard, that's why you're not seeing a lot of that, anyway, providing feedback on how to get the things done that you already know you need to get done. But if you don't have a breach and attack simulation tool, I think you should most definitely get one.
Matthew: I Agree
David: And if you have a breach and attack simulation tool, read the reports and, you know, take the recommended actions that the report says.
Matthew: Yeah, I don't know.
David: if you're running these things and you're not doing something with them, it's kind of a waste of your money, which you could give to me
Matthew: You know, actually on that subject, one thing that I would love to see that they [00:46:00] didn't report on here, Separate out the results from folks that have had their tool for a year versus the new ones.
David: Mm. Mm. Mm. Mm.
Matthew: I think it'd be interesting to see like, oh, you know, this, these people have had the tool for a year and they have made these improvements.
David: Right, which you're, so you're probably going to see a definitive change for the first year and then from year two to three, it's like they do less and less until you get to year four and five when they do nothing,
Matthew: Ah, yeah,
David: When it's shiny and new, they start doing a bunch of work on it, and then the older it gets, the more it becomes OBE and they do less and less with it.
I think those would be good numbers to see in there.
Matthew: definitely be interesting. All right. Yeah, you had one more item here.
David: Yeah. So if you can only focus on one thing, I think you should be focusing on alert development for the logs you have, versus continually onboarding logs without the use cases to develop alerts from. [00:47:00] Take advantage of the logs you have; otherwise you're wasting your time. Like I said before, bringing on logs and not developing use cases and alerts from them is not
Making your organization safer. It'll help you with forensics, but it's not going to increase your protection.
Matthew: Yeah. And I think the biggest opportunity here is focusing on the alerts generated by your tools. As they said here, the tools prevented 76 percent of the attacks, and almost certainly those tools generated some type of log saying, hey, we blocked this thing. If your tools are blocking 76 percent of the attacks, it is very unlikely that an attacker is making their way completely through your environment without ever being detected. Now, this goes back to what David said earlier. You don't want to just turn on, like, ah, the guy on the podcast told me to just turn on alerting for everything that's blocked. That's a little wild. You want to selectively turn on alerting for blocked [00:48:00] things. We talked a bit earlier about command and control being blocked.
Maybe you want to look at failed local admin creation, or failed domain admin creation, or failed policy creation in AD, where someone is trying to escalate. Look for those critical few, which I hate saying because it's such a business term. I'm gonna have to bleep out or remove a lot of swear words in this one. But look for those blocked events that indicate something truly wrong, and not just random scanning activity,
David: Right.
Matthew: on those.
David: Yeah. If you're going to alert on something that was blocked, you want to alert on something that was blocked which is indicative that something else failed before that happened.
Matthew: Yeah.
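A minimal sketch of that selective alerting idea; the event shape and category names are invented for illustration, not any product's schema:

```python
# Illustrative sketch: alert on blocked events only when the block implies an
# earlier control or process already failed (blocked C2 from an internal host,
# a failed privileged change). Fields and categories are made up for the example.

HIGH_SIGNAL_CATEGORIES = {
    "command_and_control",   # blocked C2 means something inside is already infected
    "domain_admin_change",   # blocked or failed privileged change attempt
    "gpo_modification",
}

def should_alert(event: dict) -> bool:
    """Alert on a blocked event only if it indicates an upstream failure."""
    if event.get("action") != "blocked":
        return True  # non-blocked activity goes through normal alerting logic
    return event.get("category") in HIGH_SIGNAL_CATEGORIES

events = [
    {"action": "blocked", "category": "external_scan", "src": "203.0.113.7"},
    {"action": "blocked", "category": "command_and_control", "src": "10.2.3.4"},
]

for e in events:
    print(e["category"], "->", "ALERT" if should_alert(e) else "log only")
```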
David: All right. Which leads us into our next article, which hopefully should be fairly short, and which is not so much an article as a white paper, or a guidance document, from [00:49:00] DHS, my favorite organization.
Matthew: Oh yeah, I'm sure. I mean, this goes back to what we were just talking about, about what things to log. The DHS is here to tell us what to log.
David: Yep. So this is the guidance for implementing M-21-31, Improving the Federal Government's Investigative and Remediation Capabilities.
Matthew: We're the government and we're here to help.
David: Every day. So this is a DHS CISA document that says, and this is a quote from it, M-21-31 describes logs that agencies must capture as well as any required retention times.
It also establishes a maturity model to track agency implementation. This document provides operational guidance to assist agencies with implementation of the M-21-31 requirements, end quote. So this is dated, and that's very intuitive: it's dated December 22, 2022, but the URL seems to indicate that it was published in February of 2023.
So we're bringing this up, you know, because [00:50:00] we just got done talking about all this logging stuff in the previous report. But also, Cybersecurity Dive just published an article entitled CISA officials credit Microsoft security logging expansion for improved threat visibility. So, if you recall, just over a year ago there was a nation state linked intrusion into Microsoft Exchange Online where 60,000 U.S.
State Department emails were stolen. And that took place in the spring of '23, after this guidance document was published. And after that whole hullabaloo, if you will, Microsoft decided that they were going to provide some additional security logging from M365 for free, whereas they had previously been charging for that logging. And this lack of logging was blamed for the government's inability to detect the attack. Of course, it's never the government's incompetence. I expect them someday to blame the one armed [00:51:00] man, but they would only do that if he were Russian or Chinese, or if it would get them some additional funding for one armed man detections.
Matthew: What was the detection score that we just read in the Picus Blue Report for government? Wasn't it like 12 percent or 17 percent or something?
David: Yeah, the Detection was at 17. I think the alerting was at seven or something like that.
Matthew: Shocking, shocking. And these are the people that are telling us how to do security.
David: Anyway, so I came across this and thought it was relevant, as well as the fact that I hate the government. So here we go. This is another quote from the document: CISA recommends agencies prioritize the following event types, listed in order of priority, for collection and storage as they work to achieve full EL1 compliance.
So starting from the top here, which is the highest priority: identity, credential, and access management. So at the top we have identity, credential, and access management, followed by [00:52:00] operating systems, followed by network device infrastructure, cloud environments, Amazon Web Services,
M365 and finally Google Cloud Platform.
Matthew: Sounds perfect. Let's roll.
David: So M365 is among the last logs you should worry about, unless you're one of the 10 places that are using Google. And this is what CISA was telling government agencies before this whole breach in the spring.
Matthew: Wild. Yeah, I'm curious actually. What do you think the priority is? I know my top priority is probably Microsoft 365 and Azure AD.
David: To be honest, I would not provide a, I would not provide a recommended prioritization.
Matthew: Oh,
David: Every organization needs to look at where they're doing their business, where their critical functions are, and all that. That's up to the individual organization. So I personally would never provide a recommendation on prioritization.
Matthew: that would be funny if [00:53:00] they're like, all right, first thing we got to do is Microsoft 365, but we use Google. Nope, government says Microsoft 365.
David: So go out and buy it. So go out and buy it and then start logging it
and then get back to me. So, so even it had, they been those, these logs the whole time due to this prioritization. They probably would not have been collecting or monitoring them anyway. I'm guessing in that case, then that would have been a one armed man blaming situation. So a bunch of governments government agencies came out with another report just last week, which is the best practices for event logging and threat detection. You know, nothing earthy earth shattering in there, but couldn't hurt to take a look. I'll read through it. The link will be in the show notes. Well, it's just ridiculous that the government, for all their failures, is never the one to blame for their failure. It's always someone else's fault.
Matthew: all right. Well, that looks like that's all the [00:54:00] events we have for today. If, shit, we don't know where,
David: Our outro is missing.
Matthew: The outro is missing. Well, I'd love it. It looks like that's all the events we have for today. Follow us at SerengetiSec on Twitter, and, I don't know.
David: on your favorite podcast
Matthew: There you go.
Subscribe on your favorite podcast feed.