SS-DISC-152 - Detection Engineering Behavior Maturity Model

Episode 152 November 04, 2024 00:40:51
Security Serengeti

Show Notes

Today we discuss the Detection Engineering Behavior Maturity Model, which is a new Capability Maturity Model for Detection Engineering (surprise!) from Elastic.  It seems a little overly complicated to me (M.) but super useful despite that!

Article that we originally saw 

Direct link to Elastic Blog Post

If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!


Episode Transcript

Transcript is generated by AI, as you can see when you read the names! Errors are to be expected. 
 [00:00:00] 
 David: Welcome to the Security Serengeti. We're your hosts, David and Matthew. Stop what you're doing and subscribe to our podcast and leave us an awesome five star review, and follow us at @serengetisec on Twitter. 
 Matthew: We're here to talk about cybersecurity and technology news headlines, and hopefully provide some insight, analysis, and practical application that you can take in the office to help protect your organization as long as you are doing detection engineering this week. 
 David: And as usual, the views and opinions expressed in this podcast are ours and ours alone, and do not reflect the views or opinions of our employers. 
 Matthew: I was looking at maturity models for podcasts and then I realized they only have immaturity models, but I think that's appropriate for us. 
 David: Yeah, even so, we'd score not great 
 Matthew: We're not even good at being immature. I'm going to announce our article, but David did most of the work on documenting this article, so I'm going [00:01:00] to let him take it from here, but I picked it out this week. It is called From Zero to Expert Level Detection Engineering with Elastic's Maturity Model, from detect.fyi, which is actually a whole bunch of different authors that all come together under that banner, which I've been really enjoying. 
 This one is by R. Segan. 
 David: Is that how you pronounce that? 
 Matthew: I don't know, I don't know how you pronounce an R and a C together like that. 
 David: Yeah. It's more of a handle than a name. 
 Matthew: it's fine. Or, or their name could be like Ronald Segan or something, or Reagan Segan. I can't see from the, the picture is very small. 
 David: R. C. Egan. 
 Matthew: yeah. Security engineer with a focus on Microsoft Sentinel, 
 David: Why would you do that to yourself? 
 Matthew: So the article is not bad, but it links directly to an Elastic blog post that is, I think, what you estimated, like, 40 pages. Which we did not, I did not go through in detail. 
 David: Yeah. We skimmed it. And really this article is a very light introduction to it. And most of what we're going to be talking about today, we pulled from that blog post. [00:02:00] So what's happened is Elastic has developed what they call a Detection Engineering Behavior Maturity Model, or DEBMM. 
 Matthew: Which you should be so happy about, because the original joke we talked about for this episode involved DebMM, and it was very interesting. 
 David: yeah, she was 
 Matthew: Sign up, sign up for the Patreon if you want to hear that. You 
 David: You want to hear that joke that 
 Matthew: want to hear that joke? It costs extra for us to debase ourselves. 
 David: which we will do for money. It's time to beat around the bush here. So this maturity model is broken down into five tiers beginning at zero, like all good it people should. 
 Matthew: I wish that all the maturity models would just settle, because I still see some that are 1 to 5 and it drives me nuts. 
 David: What do you mean they should settle? 
 Matthew: Because some capability maturity models go 1 to 5 instead of 0 to 4. 
 David: Oh, okay. 
 Matthew: you're saying, like, you're saying, oh, this is a 3, it may not be the same 3 because some go from 1 and some go from 0. 
 At least they don't go backwards. At least none of them [00:03:00] go backwards. 
 David: Well, I mean, I'm sure others will start up and say, well, you're a, you're a, you're a C, 
 Matthew: Oh, God. 
 Is that a C 
 David: least, you know, there's, the alphabet can only start at A, so at least in that, you won't have any discrepancy. 
 Matthew: That's fair. 
 David: But at level zero, it's foundation, and then one is basic, two is intermediate, three is advanced, and four is expert, which I'm sure most people feel they are. 
 But in the article, it actually says most SOCs are probably going to be level zero, surprisingly enough. 
 Matthew: That is interesting. Do you think that that is because... I mean, we were going to talk later, you've got some information here about how complicated this capability maturity model is. Do you think that that's because detection engineering is kind of a new, or at least a newly described, role, or is it just that most SOCs suck at this? 
 David: I think the fact is that most SOCs [00:04:00] don't put the level of thought into it, really. It's a thing that you do, but it's not something that can be matured. You know, it's an activity you just do, you just do the thing. There's no good or bad as far as whether you're doing good detection development or not. 
 Which obviously based on our conversation here, that's not exactly accurate, but I just think people don't think about it in that, in those terms. 
 Matthew: Yeah. I think, yeah, I think, which I think is partially because detection engineering, at least the name for it has only been coming up in the last couple of years. And obviously people have been doing it in the past, but I don't think people have thought of it as a whole separate discipline with a whole separate set of things that you do. 
 I think people have just in the past, just kind of been willy nilly throwing detection rules in place based on, you know, what IR finds 
 David: Yeah. And I think that 
 Matthew: you know, yeah, 
 David: hopefully will be changing a lot in the future with the more risk based alerting stuff. I [00:05:00] remember listening to Haley Mills talk at conf in 19 and she was already a detection engineer. I can't remember where she was working at the time. 
 She's with Splunk now, but she was already doing a, she was already a dedicated detection engineer, you know, five years ago. 
 Matthew: Well, no, and I think they did have dedicated detection engineers at the time. They were just all working separately, and it's kind of like the same thing we talked about with the detection rule sets. Everybody was working separately. 
 Everybody was doing their own thing. Everybody had their own processes and now people are finally starting to come together and use shared rule sets, shared language, shared processes. So yeah, 
 David: Yeah. So basically put more collaboration. 
 Matthew: yeah, yeah. 
 David: But like all maturity models, each tier builds on the previous one, creating a structured and iterative process of enhancing behaviors and practices. 
 Matthew: I love maturity models. I wish everything in life had a maturity model to guide you through better [00:06:00] behavior. Like, let's say, for example, let's create a quick maturity model for exercise right now. Zero, foundational level: no exercise, you sit on the couch all day and play video games and watch TV. 
 Maturity model level one is random exercise when you feel like it, with an inconsistent schedule. You're like, oh, you know what, I'm going to go take a walk right now, or I'm going to do 15 pushups right now, or something silly like that. Level two would be basic exercises. You know, your squats and your bench presses and your rows, you're hitting all of your major muscle groups. 
 You're doing it on some type of regular cadence, but you're not really looking at progression. You're just like, all right, I'm going to do it twice a week, I'm going to do these six exercises. Level three is a variety of exercises planned to work out every muscle group on a regular cadence. You know, now you're talking about, I've got my leg days, 
 I've got my upper body days. You're doing some different ones to work out like your traps, your slightly different muscle groups. You might be looking at, how do I do a progression? And then for four, I don't know, I've never been there 
 David: they call it that Arnold level. 
 Matthew: The [00:07:00] Arnold level. Yeah, yeah, you're, you've got like specific, you know, I'm gonna, I'm, I'm on a bulking cycle right now. 
 And you know, I, I'm doing, you know, different weird reps and orientations to make sure that I hit like weird sub muscle groups and stuff. Yeah. 
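Matthew's improvised exercise model has the same shape as any capability maturity model: an ordered set of levels, each with a name and a description. A toy sketch in Python; the level names and one-line summaries are just the ones improvised above, not a published model:

```python
# A toy capability maturity model, using the improvised exercise example
# from the episode. Names and descriptions are the hosts' own invention.
EXERCISE_CMM = {
    0: ("Foundational", "No exercise; couch, video games, TV"),
    1: ("Basic", "Random exercise with an inconsistent schedule"),
    2: ("Intermediate", "Basic lifts hitting major muscle groups on a regular cadence"),
    3: ("Advanced", "Planned splits covering every muscle group, with progression"),
    4: ("Expert", "The 'Arnold level': structured bulk and cut cycles"),
}

def describe(level: int) -> str:
    """Return a one-line description for a maturity level."""
    name, desc = EXERCISE_CMM[level]
    return f"Level {level} ({name}): {desc}"
```

Calling `describe(1)`, for instance, renders the level one summary as a single line.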
 David: Yeah. And at that level, you're, you're probably talking about somebody who exercises really, Part of their who they are, you know, like 
 Matthew: of their identity. Yeah. 
 David: actors, certain actors who I have known or as action heroes or whatever, you know, they have to maintain that level or if they're bodybuilders, they have to maintain that level. 
 It's part of part of their, their livelihood to do. So 
 Matthew: So interesting comment about that. I was just talking about that earlier with my wife and we were commenting about actors who, when they retire, Or when they decided they made enough money and how they typically just like balloon up in weight because they're like, I have been eating steamed broccoli and broiled chicken breast for years and now I can finally [00:08:00] enjoy all of the fruits life has to offer. 
 David: it was like Michael Ironsides. Oh, he looks 
 Matthew: Yeah, yeah, yeah, he he's like, I don't have to stay in shape anymore. Let's go. 
 David: mean, but that's the opposite of like Sylvester. You know, Sylvester, I think it's just part of his genetics, you know, part of who he is to work out crazy. Like that last Rocky movie, like that last Rocky movie he was in, he was in his 60s and he was friggin ripped in that movie. 
 I can't remember if, which Rocky movie that was, but he, where he fought that actual boxer he was ripped in that. He was like 65 or something like that. Yeah, 
 Matthew: Wild. All right. So I'm surprised there isn't a blog somewhere creating maturity models for normal things. And Dave and I were talking about this beforehand. I may start this cause I actually am a very structured person and I really liked the idea of having a capability maturity model for things like exercise and eating and cleaning the house and life skills. 
 David, you brought up that there is an existing thing like this, but only for martial arts. 
 David: the [00:09:00] belt system, you could say, is a maturity model for that. 
 Matthew: I was surprised. One of my friends told me that his kid was getting a black belt. And I was like, isn't a black belt supposed to mean you're a master? And then I looked it up and he told me, and apparently it actually just means that you've mastered kind of the basics and you're ready to start, like you're prepared to start learning. 
 It's almost like getting out of like high school. Like you've completed with your basic education and now it's time for you to really start learning the stuff that you actually need to know. 
 David: Really? Has it always been that way? Or is 
 Matthew: have no idea. 
 David: on that certain martial art? 
 Matthew: When I looked it up online, there were several comments about like a black belt just means that you're like you've completed the basics, 
 David: Huh. 
 Matthew: I have no idea if that was all. 
 I used to think that the black belt meant that you're a master, but 
 David: Right. And I knew there were levels to the black belt. 
 Matthew: yeah, so an Eastern Asian martial arts, the black belt is associated with expertise, but may indicate only competence depending on the martial art. Yeah, 
 David: So there are some where that is a [00:10:00] signifier of talent or capability. Interesting. But when, when they're talking about this, this particular maturity model they say that the model could be nonlinear. I, you can perform advanced tasks soon or sooner versus doing them. One after the other. They caution against this skipping or deprioritizing criteria across the tiers and say it's not recommended, but they say you can do it if, if it's necessary. 
 That seems kind of contrary to me, but if you get into the maturity model, kind of makes sense when you see what they're defining here. Because they don't go from tier one. You don't do X and then in tier two, you don't do that same X better. There are different activities on each tier. 
 Matthew: Yeah, this is a weird maturity model. Like you just said, every maturity model that I've seen in the past, typically you perform the same activities at each level of the model. You just do them better or you add more activities. You know, tier one, you do [00:11:00] X tier two, you do X plus Y or tier two, you do X better. 
 But this one, you do completely different things at the different levels. So, and you can be, you can be doing, and well, we're going to do it later. I'm not, I'm not going to jump the gun on this. 
 David: You really want to though, 
 Matthew: I do really want to. 
 David: but this model is really focused around content for the SIM. Also when we're talking about content detection, they're not talking about EDRs or IPSs or anything like that. 
 Before we start. Discussing the models. They give some reasons why they why they say that it's important to have these maturity models for your rule set. So, but they, they list this in 2 ways. They do issues. And then, you know, how that model addresses those issues. So, under the issues they have alert fatigue lack of rule context and clear understanding about why the rules exist. 
 And then what I would call rule creation and maintenance issues. You know, you, you don't prioritize your rules. The rule quality [00:12:00] is not great. Your logic is not great either. They're complex. You don't do testing. The rules are inflexible. So they're very narrow scoped and then automation metrics and threat intelligence integration is what they say the issues are and why you should have a maturity model to improve those. 
 And they said that this model helps those by reducing SOC fatigue and optimizing the alert volumes by improving their accuracy and enhanced, it enhances detection fidelity and includes regular updates and testing for those rules. Ensures consistency and high quality detection logic they integrate contextual information and threat intelligence and automate routine processes to improve the efficiency. 
 And then, of course, that's all continually measured and this helps you stay ahead of threats and maintain the effective detection of and improve your overall security posture. 
 Matthew: So [00:13:00] that's all fun and it's all well and good. But I think the biggest reason to have a model, in my opinion, is to have a consistent direction and most importantly, have a way to measure progress. This goes back to why I love maturity models so much as it gives me a direction and a place to go and a, and how to get there. 
 That being said, this is a pretty complicated model. Trying to maintain this model is. Going to be a lot of overhead. 
 David: Yeah, it's gonna be fair amount. We'll get into the some of the complexity here in a minute. 
 Matthew: So additionally we've spoken before about how wasteful it is to have each other company develop their own content independently. The article did mention a couple of rule sets and provided links to those rule sets and I think It makes a lot of sense to use shared rule sets for most of your content because the attackers are mostly using the same techniques I think about 80 percent of our content should probably be the same and be put in place first. 
 And then we can start working on the 10 or 20 percent that's actually custom and specific to our organizations. They appreciate that they put in that [00:14:00] link in the article to those content rule sets. 
 David: Yeah, you just have to modify those based on, of course, you're the tool set, which speeds those. The SIM. 
 Matthew: Yeah. 
 David: So when they define their their tiers, so zero is foundation, which means there's no structured approach to development management and the rules are created and maintained at, in an ad hoc fashion with little to no documentation or peer review shareholder communication or personnel training related to. 
 Rule creation. Then you have BASIC, which is the tier one, which establishes a baseline of rules and there's systematic rule management and version control documentation, and there's regular reviews of the threat landscape and some initial training for personnel. Then Tier 3, the intermediate, focuses on continual rule tuning of rules, reducing the false positives, identifying and documenting the gaps, and a thorough internal testing and validation, and ongoing training [00:15:00] of personnel for rule development. 
 And then advanced which is Tier 3, is the systematic identification and ensuring legitimate threat levels are not missed which is the, which they equate to false negatives. And they engage in external validation of the rules covering advanced CTPs and advanced training for analysts and security experts. 
 Matthew: So I'm sorry, I'll let you finish. No, I'm sorry. Finish, finish all five or 
 David: And then finally, you have the expert level. So this level was characterized by advanced automation, seamless integration with other security tools, continuous improvement with regular updates, external collaboration, extensive training programs for all levels of personnel, proactive threat hunting, um, which complements the rule set enhancing on it. 
 The management process and identifying new patterns and insights that can be incorporated into the detectables. Additionally, although not commonly [00:16:00] practiced by vendors, detection development is a post phase of incident response where, you know, you take what you learned from the incident response and then develop content based on that. 
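For reference, the five DEBMM tiers just described can be kept in a small lookup table. A sketch; the tier names match the discussion, but the one-line summaries are paraphrased from the episode rather than quoted from Elastic's post:

```python
# DEBMM tiers as discussed in the episode; summaries are paraphrases.
DEBMM_TIERS = {
    0: ("Foundation", "Ad hoc rule creation; little documentation, review, or training"),
    1: ("Basic", "Baseline rules, systematic management, version control, initial training"),
    2: ("Intermediate", "Continual tuning, false positive reduction, gap documentation, internal testing"),
    3: ("Advanced", "False negative identification, external validation, advanced TTP coverage"),
    4: ("Expert", "Automation, tool integration, threat hunting feeding new detections"),
}

def tier_name(tier: int) -> str:
    """Look up a tier's name by number, e.g. tier_name(2) -> 'Intermediate'."""
    return DEBMM_TIERS[tier][0]
```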
 Matthew: I, so I, I know that you pulled these descriptions from the article and I don't know if the author of the article. really dived into it because I don't think that these quite represent. So this, this description, and this, this explains why when we first talked about this before we started recording, I thought this was more of a standard maturity model because, for example Tier 0 says rules are created and maintained ad hoc with little documentation. 
 And then Tier 1 basic said establishment of basic rules with systematic rule management. Like those are, those are That implies to me, you know, one is created and maintained ad hoc, the next one is systematic rule management and that includes documentation. Three is continuously tuning the rules and identifying and documenting gaps. 
 Like, those feel like those are all progressions of the same activity. [00:17:00] But if you actually go into it, and you go to tier zero in the thing, and I know that I'm jumping the gun a little bit here, but under Tier 0 for structured approach to rule development and management, it includes all the way up to regularly reviewed consistency and adherence to documented standards detailed documentation, but this Tier 0 description here says rules are ad hoc with little documentation. 
 So that, that implies to me that the author of the article also does not quite understand how this capability maturity model works. Differs from normal capability maturity models. 
 I've stunned you into incoherence. 
 David: No, I'm just double checking I pulled it from. 
 Matthew: You pulled it from the original article. 
 David: Yeah. So that's why I was double checking to see if the, the author of the, the detection FYI had miscategorized it versus what the authors of the [00:18:00] actual maturity model came up with, but that's straight out of the maturity model. 
 Matthew: I mean, yeah, no, you're right. Cause I can see under tier zero foundation, it says detection roles may start out being created to maintain ad hoc with only a peer review, often needing proper documentation, but then one, two, three, four paragraphs below. It talks about qualitative behaviors where defined is standardized process with rule creation for detailed, detailed documentation alignment. 
 Huh, maybe they themselves are having like a, like a double think moment where they're simultaneously in their mind thinking more of like a standard. You know what, let's move on to the next bot because the next bot explains a little more about how this differs from a standard model. Which again, I'm jumping the gun a little bit, but. 
 David: Well, this also would be the fact that this was written by three people. So there may not be 
 Matthew: yeah. People may be writing different parts of it. Yeah. 
 David: may not line up exactly with one, one things as far as the other goes. I'm sure it was peer [00:19:00] reviewed and everything before they, they published it, but still you're going to get to some level of discrepancy there. 
 Matthew: Yeah. 
 David: But each of those tiers are further broken down into criteria, which is further divided into two parts, quantitative and qualitative. And each of those then has a maturity level themselves, which goes from initial repeatable, defined, managed, and finally optimized. 
 Matthew: I hear you like maturity models. I gave your maturity model a maturity model. 
 David: Yeah. Right. It's like each 
 Matthew: Each one of these is a little minor maturity model. 
 David: Well, each well actually we'll get into the ex the exact stuff here in a second. But, so the quantitative measurements, which they say are each of the activities are broken down into is to maintain the state. So these provide the structured way to measure the activities and process and processes that maintain or improve the rule set. 
 So these are designed to be more reliable. For comparing the maturity model of the different [00:20:00] rule sets. Keep them on track over time. And then you have qualitative behaviors, which is the state of the rule set. And these are subjective assessments based on the quality and thoroughness of the rule sets. 
 Matthew: Yeah, here's a, here's 
 David: vary between people. 
 Matthew: Yeah, here's an example. One of them for, The structured approach to rule development and management, the quantitative measurement for defined is regular creation and documentation of rules with 50 to 70 percent alignment to defined schemas and peer review processes. 
 So they, they, they seem to be providing like a, most of these that I'm seeing seem to be percentage based. What percentage of your rules or, or processes are following what your goal is. 
 David: So what you have is you have tiers and then you have things to do within that tier. And then there are things you need to do to maintain that thing you did. 
 Matthew: Yeah, this is, this is wildly complicated. 
 David: So within each of those, then you have, then you have five levels for each. So if we break this [00:21:00] down into the number of things that you need to track, so you have five tiers, you have two to six activities per tier. 
 Matthew: And each one's different because they don't, like we said, this maturity model, you're not doing the same thing each tier, you're doing different things each tier. 
 David: Yep. And then you have two criteria for each, which is the quantitative or qualitative, and then you have five maturity levels. So that's a total of 180 different things to track. But technically, since you only need to track one maturity level per criteria, that would only be 36 
 Matthew: Only. 
 David: only. But I think, you know, based on the fact that levels don't differ. 
 I mean, they kind of vote on each other, but they're not the exact same activity for each tier. It's going to be, it's going to be a bit complex for you to actually track your organization's maturity using this model as a measurement. 
 Matthew: Yeah. 
 David: So I'll ask a question. You, you, you move up the tier a lot, but you don't have to. 
 And like I said, because each of these are different, I think you're you're never going to escape the necessity to jump tiers [00:22:00] with this model. So, 
 Matthew: I don't understand. So, okay. I mean, so I'm looking at one right now that is tier one basic. So not tier zero, tier one. And it is driving the feature with product owners ensuring that detection needs are on the product roadmap for things. And initial is no engagement with product owners. 
 Isn't that just the same as tier zero? Like, I don't understand why they didn't map these behaviors, just, just map these behaviors along the whole thing, and then map these sub maturity models to the actual maturity model. Like, it's tier one, and But the initial is no engagement with product owners. Why can't that be a tier zero? 
 I 
 David: right. Yep. 
 Matthew: think they tried to get too cute. 
 David: I suppose. But so, so if you want to track this, it'd be like, you know, tier zero, you'd have a 1. 5, a 2. 3, you know, so you had, you know, your activity Number [00:23:00] one, which is level rated at a five, and your tier zero activity two, which is rated at a three, and your tier zero activity four, which is rated at a five. 
 And then you'd have to also then go do the same thing for tier one. With its activities, which are, there are six for tier one. So if you want to document what your progress look like you'd have, you know, basically a big spreadsheet in order to 
 Matthew: Yeah. 
 David: have that all written down. 
 Matthew: Honestly, if you're measuring this, I would just get rid of the tiers. Like, just get rid of the tiers completely. Like, like, like looking at the, looking at another one of these guys, end to end release, testing and validation. The initial is no formal testing. That seems like a zero to me. Repeatable is basic testing with minimal validation. 
 Seems like a one defined comprehensive testing with internal validation processes, multiple gates. That's a hell of a jump from one to two, but that's still, it was like I guess I could buy that as a [00:24:00] two. I'd probably go a little bit less. Managed advanced testing with automated validation processes. 
 That seems like a three. And then number four is continuous automated testing. That seems like a four just get rid of the tears. No tears. 
 David: Well, I think what, what, rather than necessarily getting rid of the tiers, I think really what they're talking about, it, rather than a tier, it's more of a prioritization. You know, Yeah. You should prioritize this over that, so you should prioritize doing the stuff in tier 1, or tier 0 first, and then move on to tier 2, but because it's a prioritization, it's just a preference, versus, you know, you must be here before you can go there, which is what the tiering seems to imply, right, 
 Matthew: Yeah. If they phrased it that way, if they had called it priority zero, like these are the things you need, you absolutely need to do as a SOC. Then I would mostly agree with this. 
 David: Yeah, so we obviously can't go over this whole thing because it's pretty big. 
 Matthew: Got like five hours scheduled. 
 David: oh, [00:25:00] okay. All right. Well, let's start. Long, long time ago, and it's not far, far, far away. But, so what we'll do is we're just, what we're going to do is we're just going to go over the quantitative and qualitative criteria for one of the four activities in Tier 0. 
 So in Tier 0, you have structure, approach, rule, development, and management, which is Activity 1. Then you have creation and maintenance of detection roles, which is Activity 2. Activity to roadmap and document documented or plant. I'm sorry. Roadmap documented or planned. And then the final 1 is threat modeling. 
 Perform. So we're just going to talk about the very first one of those, which is the structured approach to rule development and management. So under that one, under the qualitative behaviors, which is the state of the rule set you start off at initial, which is no structured approach. Rules are created randomly without documentation and that goes through repeatable. 
 Matthew: hold on, let's, I mean, let's talk about each one of these a little bit. I can see, I can see a lot of socks doing that. That's kind of the, the one where your boss has always come into you asking what happened or you're creating [00:26:00] rules based on, you know, what you saw, what, what didn't get caught last week when the bad guys came in, you're, you're creating rules based on thread Intel articles you see online. 
 All 
 right. And I guess no discussion needed. 
 David: Well, I was expecting this long soliloquy or something. 
 Matthew: Yeah, sorry. No, I mean, I mean, that doesn't make sense. That's definitely where I started in terms of my content development is it's kind of like, what do you see the most? And this goes back to what I was talking about before, where each content engineer is working in their own silo, where they're just kind of doing it themselves without the assistance of Thread Intel or community sharing, which by the way, I've started to change my mind about Thread Intel over the last year or two. 
 I'm starting to come around to 
 David: You're feeling all right? 
 Matthew: I've started to see more useful thread Intel. I've started like some of the reports we've looked at where like it lists out. These are the most common TTPs we see. That has started to turn me around to make me think that it might actually be useful. 
 David: Mm hmm. So well, I mean really what you're saying is they're actually coming along to doing the things that we'd asked them to do years ago. It's [00:27:00] really what you're saying. 
 Matthew: Either that, or I got access to it. Maybe it was there all along and I just didn't have access to it. Cause nobody trusted me enough. 
 David: Mm hmm. That could be. You weren't read in. 
 Matthew: All right, sorry, next 
 David: So, so going back to this, so you have the, you start off the initial with the no structured approach to rule creation, and then that goes through repeatable, defined and managed up to the final, which is optimized and an optimized. You have continuous improvement based on feedback and evolving threats with automated rule creation and management processes. 
 Hmm. 
 Matthew: I, some of this is interesting, looking at the, so repeatable is some rules are created with primary documentation, defined is schemas for documentation. That's kind of interesting. I'm, I'm thinking also in terms of accessibility for analysts. I probably want something in here about, you know, That's probably a separate thing in terms of like the ticketing system, analysts having easy access to this. 
 I'm thinking about a SOC that I have worked with in the past that they documented each rule, but the rule documentation was in [00:28:00] packets like PDF packets with the rule documentation, which wasn't exceptionally helpful because when they would, let's say this unnamed SOC that may or may not exist released rules in groups of 10, rather than having a single place they updated, they would send a PDF that had the 10 rules that were being released and the documentation of that. 
 And you would just have to save that in a file somewhere versus having a, you know, wiki or something like that, where all the documentation is kept together and it's linked to relevant tools and 
 David: Well, if you, 
 Matthew: and yeah, 
 David: that all that would be included. You know, in your Git repository. 
 Matthew: and that goes back to the schema part defined schema. And, and as you said, I think you looked at this and detectionist code is not mentioned. 
 David: Now, detection as code is a tier one activity. 
 Matthew: But it's under managing and maintaining rule sets, which is similar to a standard approach to rule development and management. That's [00:29:00] interesting. But it's listed as managed. So of tiers 0, 1, 2, 3, it's tier 3 within that, which I think makes sense. 
 And this one's interesting too, because under the qualitative and quantitative measurements it says 80 to 90 percent of rules are managed using detection as code principles. Wouldn't you pretty much always be doing a hundred percent? I mean, like, 0 percent or 100 percent, except for the period of time where you're in between and you're moving from one over to the other. 
 David: Yeah, I mean, the only reason you would have less than 100 percent is during transition. 
 Matthew: Yeah, that's kind of weird. 
 David: I guess they're assuming that you get to that point, but you haven't finished your transition yet. 
 Matthew: Yeah. I don't know, or maybe they assume that you're not creating them as rules as code? Or maybe there's rogue code builders or rogue rule builders that aren't abiding by the, I don't 
 David: Yeah, it's not like rules would necessarily age out either, so you can't just wait a period of time and then all the old rules will be gone. You'd have to go through a dedicated process to convert the rules to [00:30:00] detection as code. But anyway, other than the qualitative, you then have the quantitative measure, which is the activities you need to maintain the state. 
 So that starts off at initial, which is no formal rule creation activity, obviously, and then finally at optimized, you have fully automated and integrated rule creation and management processes, with 90 to 100 percent alignment to defined schemas and continuous documentation updates, 
 Matthew: See, that's so interesting to me. Again, in the period of transition, sure, but once you've fully transitioned over to doing these schemas... that's just weird to me. I don't know. 
 David: right? I guess they're the 90 to 100 percent is assuming, assuming some level of imperfection, which you may need to deal with. 
 Matthew: Yeah, maybe 
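The coverage numbers the hosts are puzzling over (80-90 percent of rules managed as detection-as-code, 90-100 percent alignment to defined schemas) come down to validating each rule file against a schema and counting how many conform. A minimal sketch in Python; the required fields and the metric function are invented for illustration, not taken from the Elastic model:

```python
# Hypothetical sketch of "detection as code": rules are dictionaries
# (in practice, YAML/TOML files in a Git repo) validated against a
# defined documentation schema before release.
REQUIRED_FIELDS = {"name", "description", "severity", "mitre_tactics", "query"}

def validate_rule(rule: dict) -> list[str]:
    """Return a list of schema violations for one rule (empty = passes)."""
    missing = REQUIRED_FIELDS - rule.keys()
    return [f"missing field: {f}" for f in sorted(missing)]

def coverage(rules: list[dict]) -> float:
    """Percent of rules that fully conform to the schema -- the
    '80-90% managed as detection-as-code' style of metric."""
    if not rules:
        return 0.0
    ok = sum(1 for r in rules if not validate_rule(r))
    return 100.0 * ok / len(rules)

rules = [
    {"name": "Suspicious PowerShell", "description": "Encoded command",
     "severity": "high", "mitre_tactics": ["TA0002"], "query": "process:powershell*"},
    {"name": "Legacy rule", "query": "src_ip:10.*"},  # predates the schema
]
print(validate_rule(rules[1]))  # lists the three missing fields
print(coverage(rules))          # 50.0 -- mid-transition, as discussed
```

Run in CI on every commit, a check like this is what keeps the percentage from drifting below 100 once the transition is done.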
 David: But, at the end of the model, they have a list of questions and steps for improvement to help do the assessment of the organization. 
 Matthew: That makes sense. Yeah, I'm definitely [00:31:00] gonna go take this back to my mystery organization that shall forever remain unnamed. I did want to know, I didn't see this myself, but maybe you saw it. Is there a target level of maturity? And I ask that because Anton Chuvakin has discussed maturity models for SOCs in the past, and he stated that he believes the ideal, or maybe he was repeating somebody else, I don't know if he directly stated this, but I saw that 
 they were saying the ideal maturity for a SOC is not to go all the way up to a 4 or a 5, because at a 4 or 5, processes tend to be very rigid and difficult to change, and a SOC or an IR team needs to maintain some level of flexibility to respond. 
 David: I did not see anything in here that recommended a specific level. 
 Matthew: Yeah, and frankly, it's entirely likely that some of these things are very difficult to do, and the level of effort goes up, the 80/20 principle. You may have to pick and choose which ones of these are the most important to get to this level of, you know, automated [00:32:00] workflow. 
 David: Yeah. That goes back to the prioritization we were talking about. 
 Matthew: Like reviewing all the rules every year. That may be tough to do. 
 David: Well, I think in an optimized environment, you would not do that either because that would automatically happen 
 Matthew: yeah. 
 David: just as a normal course of business. Those rules will be automatically reviewed and updated as they triggered or did not trigger, 
 Matthew: I wish this end to end testing one was tier zero. I have thoughts on that one. Alright. 
 David: But I think it's helpful to have a model to follow. And of course, while this really is a good guide, I think, you know, most of us could have built our own, and it wouldn't be too bad. But who has time to do the whole thing? So, using this one as a guide, you could use some of this, or all of it. 
 I think you might be able to add your own activities and come up with your own maturity levels for those as well, where you could substitute in here, 
 Matthew: Yeah, I'm 
 David: since this is not like a NIST, 
 Matthew: I'm 
 David: this is not like a documented [00:33:00] NIST thing where, you know, everybody thinks it's gospel. I think you could take this and maybe tweak it a little bit for your own organization, and use it as a fairly good basis for your own maturity model. 
 Matthew: Yeah, the first thing that I would do is what I was talking about before with the documentation, making sure it's accessible. I would add something in here for kind of the ticketing system or the automation system, like making sure that, you know, does every rule have a drill down? 
 Do we have procedures and processes for investigating each one? I know that's technically outside of the realm of the detection engineering folks, but it's important for the usefulness of the rules. 
 David: I think that goes to process, right? So this should not just be the rules themselves, but the processes around the rules. So I think that's a valid thing to include. 
 Matthew: Yeah, before the rule is rolled out, it has, you know, a drill down, it has maybe a dashboard for helping to investigate it, it has a standard process for the SOC to follow when they get the [00:34:00] rule. If there's any special escalation process, because, you know, oh, this is a cloud alert, it has to go to the cloud team, or anything like that, all that stuff is documented and made available to the SOC. 
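The pre-rollout checks described here (drill down, investigation procedure, escalation path, dashboard) could be enforced as a simple release gate before a rule ships to the SOC. A hypothetical sketch; the artifact names and the example rule are invented, not part of the Elastic model:

```python
# Hypothetical release gate: a rule only ships to the SOC once its
# supporting process artifacts exist.
CHECKLIST = ("drilldown", "investigation_procedure", "escalation_path", "dashboard")

def ready_for_release(rule: dict) -> tuple[bool, list[str]]:
    """Return (ok, list of missing process artifacts) for one rule."""
    missing = [item for item in CHECKLIST if not rule.get(item)]
    return (not missing, missing)

cloud_alert = {
    "name": "Cloud admin role change",
    "drilldown": "link-to-drilldown",
    "investigation_procedure": "wiki/cloud-admin-change",
    "escalation_path": "cloud team",  # special routing, as discussed
    "dashboard": None,                # not built yet -> blocks release
}
ok, missing = ready_for_release(cloud_alert)
print(ok, missing)  # False ['dashboard']
```

Wiring a check like this into the same pipeline that deploys the rule keeps the process artifacts from lagging behind the detections themselves.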
 David: But one thing I really like about this model, which you don't see in other models, is that it doesn't just tell you where you should be, but it also helps you figure out what you need to do in order to maintain that maturity level once you've achieved it. You know, and I kind of think that with that in mind, it might be more important to focus on the maintenance versus the initial stuff. 
 So the quantitative, what they call the quantitative measurement, the activities used to maintain state, over the qualitative behaviors to get to the state of the rules. Because if you focus on the qualitative, it may start to drift before you get your maintenance activities in place. But depending on how this works out, it could be that once you have the maintenance activities in place, they will then guide the stuff back into [00:35:00] alignment. 
 It really depends on how well that works out. And while this is imperfect and can be complex, I think it's worth a read for detection engineers and IR analysts, to take a look at this, read through it, and maybe even give it a shot to see how their org compares to this maturity model. 
 Matthew: Yeah, and I think this is big enough, too, that I don't think you want to try and consume it all in one sitting. I think you probably want to work through the tiers one by one over the course of a week. This almost feels like a SANS class, where they, like, just shove everything down your throat. 
 And you're like, oh my god, this is too much! 
 David: It also might be an overall exercise you could take to your whole SOC and just kind of piecemeal out parts of it. Say, hey, you know, analyst Bob or whatever, he's just going to do tier zero activities one and two, and you're going to read through those. You're going to fully understand them, and then you're going to take a look at what we're doing there and [00:36:00] see where we're at on that level. That way you get your whole team involved in understanding what you're going for here. I think it would be good to have the whole team read the whole thing, but only have, you know, an analyst work on a smaller portion of it. 
 Matthew: Just kidding. 
 David: I didn't hear what you said. 
 Matthew: I said you're a sadist. 
 David: Yeah, I've been accused of that before. So, yeah, and of course this is the first pass, really. This is the only maturity model I've seen for detection engineering. Others may come up with their own, or others may improve on this one, and eventually we may end up with NIST developing a detection engineering maturity model, maybe. 
 Matthew: Yeah. I was pretty happy with this. It's definitely more interesting and way more in depth than I thought it was when I first saw the initial article, because I quickly scanned it and I was like, oh man, I think we need to get more articles for this for our thing. 
 But I think we're at 40 minutes. So I think that was quite a bit. 
 David: Yeah, once you get to that blog post from [00:37:00] Elastic, it is pretty extensive, so it's going to take you a minute. 
 Matthew: A 
 David: At the top, you know, one thing I like about the stuff you find on the web now is that they have the reading time, 92 minutes for this, which for me, because I read slow, I have to double. So this is, you know, a three hour read for me, 
 and that would be doing the whole thing. 
 Matthew: I think I spent 90 minutes on it and I only got about a third of the way through. So whoever set the timing for this reads real fast. 
 David: Yeah. Well, somebody at work one time showed me a website where it set the speed on the browser and it would auto scroll, and it would rate your education level based on how fast you could read it. You know, and I was still at middle school. 
 Matthew: That doesn't seem like that's actually associated with that, though. 
 David: Well, they were assuming that once you reach a certain level of education, you've had to read so much that your ability to read and comprehend has [00:38:00] improved over time. 
 Matthew: I guess. 
 David: And that may have also been assuming that you're actually in the active process of going to school. Cause, you know, everything probably atrophies over time. But I've always been a slow reader. 
 Matthew: I am getting slower, and I am actually having more and more trouble. I was thinking about this the other day. I'm having more and more trouble focusing. I used to be able to read for hours on end and had really good focus and really good retention on reading. And as I've gotten older, I have less and less focus and less and less retention. 
 Like this one, like I was just talking about, I spent 90 minutes reading it and going over it and I didn't even get halfway through. And I'm gonna blame the internet and phones. I think I need to start training myself to stop. Like, I don't doom scroll, I don't do much on social media and stuff, but still, you know, Reddit and Feedly sometimes and stuff like that have definitely been a problem, especially Reddit, where it encourages you to take a glimpse, a snapshot of what the headline says, and then go through a bunch of [00:39:00] comments 
 and read a bunch of small stuff. 
 David: Hmm. 
 Matthew: I need to try and spend more time reading longer form articles and books again. And try and really force myself and retrain my brain. 
 David: Yeah. Well, it's not like the old days. Like, they say that apparently Teddy Roosevelt used to read a book before breakfast, you know, at a time when that was really your only outside entertainment, unless you were going to go listen to a live orchestra. 
 Matthew: Or go attend public debates and stuff. 
 David: Oh yeah, watch him get shot. 
 Matthew: I don't know that every one of them had that, but 
 David: And those were certainly the less entertaining ones. 
 Matthew: That's fair. But I mean, I've heard, like, Abraham Lincoln used to debate people before he was elected president, and people would turn out in the hundreds or thousands just to listen to two people argue. 
 David: Well, that's actually how the whole debate thing started, with him and Stevens, where he would go and basically heckle Stevens when he was giving speeches. That's how the presidential debates actually started, with him [00:40:00] heckling his opponent. 
 Matthew: Yeah, I don't think we as a society have the 
 David: Or, I'm sorry, Stephen Douglas. Stevens wasn't his last name, it was Stephen Douglas. 
 Matthew: Douglas, alright, yeah. 
 David: yeah, yeah. 
 Matthew: Anyways, 
 David: Well, Steve and I are on a first name basis, so that's why I call him Steve. 
 Matthew: Alright, 
 David: But nope. And that is all the articles we have. So thanks for joining us, and follow us at serengetisec on Twitter, and subscribe on your favorite podcast app.
