SS-NEWS-143: Minimum Viable SOC Transformation!

Episode 140 | May 20, 2024 | 00:52:09
Security Serengeti

Show Notes

We turn back to one of my (Matthew's) favorite analysts, Anton Chuvakin and his recent article on what a Minimum Viable SOC Transformation looks like.  Then we take a few minutes at the end to discuss making self-driving cars ignore stop signs. Cheeky and fun shenanigans!

Article 1 - Baby ASO: A Minimal Viable Transformation for Your SOC

Article 2 - GhostStripe attack haunts self-driving cars by making them ignore road signs

If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!


Episode Transcript

Transcript is generated via AI and has errors. Like David's name. It's never going to get that right. 
 [00:00:00] 
 Matthew: Welcome to the Security Serengeti. We're your hosts, David Schwedeker and Matthew Keener. Stop what you're doing, subscribe to our podcast, leave us a lovely five star review, and follow us at SerengetiSec on Twitter, which I haven't logged into in quite a while. I should probably just take that out if we're not going to actually pay attention to it. 
 David: Like, if you'd put that password in your password manager, then, 
 Matthew: it isn't my password, man. I just don't log in. I don't log into any social media on a regular basis. I'm just not interested in social media. 
 David: Yeah. Well, I get emails every, it's gotta be daily now from LinkedIn saying, Hey, you've got new invites. I don't really don't want to log in and look at that. 
 Matthew: Yep. 
David: Some stranger from New Delhi saying, Hey, I want to connect with you. And I'm like, Hmm, yeah, I don't think so. We're here to talk about cybersecurity and [00:01:00] technology news headlines, and hopefully it provides some insight, analysis, and practical application you'll be able to take back to the office to help protect your organization. 
 Matthew: the views and opinions expressed in this podcast are ours and ours alone, and do not reflect the views or opinions of our employers. 
David: You know, Matt, the last SOC transformation I was involved in turned into puppets. 
 Matthew: Oh, I bet that was way more successful than your average sock transformation too. 
 David: Well, it's certainly more entertaining. 
 Matthew: Yeah. You've got a defined goal. You can say, look, it's done. I was successful. 
 David: Oh, and it's easier than Build A Bear. 
 Matthew: I haven't done either of those things, actually. Maybe I should add making the sock puppet to my to do list. 
 David: You never made a sock puppet, seriously? 
 Matthew: Not that, I mean, I might have made one back in school, back, you know, in the 80s, or 
David: I'd say maybe I'm the weirdo, because I think I've probably made at least a half dozen of them. 
Matthew: Interesting. Was it fulfilling? 
David: Maybe it is a highlight of my existence, 
Matthew: I'll make that my [00:02:00] retirement plan then. All right. All this talk of sock transformation. Our first article today is from Anton Chuvakin, and we're apparently turning this into an Anton Chuvakin reaction podcast; every time he releases an article, we will cover it here. The title is Baby ASO: A Minimal Viable Transformation for Your SOC. 
And he has spent a lot of time looking at security operations, and kind of compiled this article as a rant slash reaction, because there are a lot of folks and a lot of vendors out there talking about SOC transformation and how to do it. And one of the trends he has consistently seen has been, quote, change is hard, but transformation is harder, unquote. And he gives us two additional theorems, or addendums, on why he believes this is true. Theorem one is, quote, many who say they want to transform, really don't, unquote. And theorem two, which is related: many wish for the purported results of a transformation but cannot bear [00:03:00] many or any of the costs. 
And I feel like this describes so many businesses, and it's not just SOC transformation, it's really just business in general. They want next gen results with, you know, old and busted budget levels. 
David: Right? I mean, it's what everybody's been used to with the, what do you call it? The publicized losses, 
Matthew: Yeah. Yeah, I mean, to some extent this is just normal for business, right? You want to spend as little as possible and get the maximum result. So I guess that part is not shocking at all. It is kind of interesting, though. I actually think about the first theorem, many who say they want to transform really don't. I feel like a lot of people, a lot of business folks especially, use the word transform without any real meaning to it. 
 David: you know, with lip service. 
 Matthew: Yeah, 
David: What's difficult for the SOC, though, is they don't necessarily know why they want to transform. When you're talking about a business process, they want to transform in order [00:04:00] to gain additional profit or whatever. 
But there's no real corollary to that on the SOC side. There's an assumption that you're going to get some kind of benefit from it. But there's no documented, well, it's hard to document, evidence or proof, once your transformation is complete, that you've made it there. 
 Matthew: I think, 
 David: No, go ahead. 
Matthew: I would agree. Generally speaking, and this goes to what I was saying about the business using this as a buzzword, transformation just seems to be accepted as shorthand by everybody on the leadership side for, you know, just improvement. Like you said, they just throw it around. It almost doesn't mean anything at this point in time without, like you're talking about, actual goals or plans or concrete final expected outcomes. 
David: Yeah. Well, that goes into, you know, there's actually a third theorem here, which [00:05:00] is: many may want to transform, but don't know what that means, and thus cannot plan or have hope to get there. 
Matthew: Yeah. It's just become that accepted buzzword. Like, every company is constantly talking about transforming themselves, but none of them do. It's just the same company, but they've reduced, you know, cost by 5%. Our transformation is complete. 
David: Well, I think they're so confused. And I think, you know, this goes to what you were saying a minute ago, that they confuse transformation with improvement. So they say transform when they mean improve. When people hear the word transform, they think of, like, magic or miracles, right? Where something significantly changes for the better. 
And I think it's difficult for people to wrap their minds around what actual transformation looks like. 
Matthew: One of his comments here was that people talk a lot about transformation, but what they actually plan is minimal improvement. So there are a number of challenges to transforming to the [00:06:00] modern model of SecOps, but this really applies to darn near anything. The first challenge is sunk costs. Folks are attached to their current processes and tools. For example, years of rule development work in Splunk. And actually that's a terrible example, because you can use the Splunk content in the transformed SOC. 
Maybe I should say something like years of rule development in more of a legacy SIEM, like ArcSight, where you want to transform, but as part of the transformation you may be looking at more modern SIEM technologies or data lakes or something like that. And you look at all the effort you put into QRadar or your EDR or your something else, 
and you're like, ah, it's going to take us, we'd have to rebuild all of that. 
 David: Right, or some other homegrown tool or capability. 
Matthew: Yeah. Another one that he mentioned as well was the current processes. Getting people to change processes and change culture is really hard. [00:07:00] I've been in two different companies that have gone through, or attempted, culture change. And in one of them, the culture change was a complete failure. 
I've seen it happen where you try to change the culture. You bring in new people, you try to tell folks, you know, that your job responsibilities now involve coding, and I'll talk more about that later. 
You want to get people to stretch, you want to get them to treat their jobs differently, you want them to change their processes and their culture, and they resist. Yeah, 
David: And it's even more difficult if you bring in new people at the bottom, but the people at the top don't change. Then, you know, that's additional resistance at a more significant point in the organization. 
 Matthew: I think if you want to do culture change, you almost have to change both because if you change it at the bottom, but you don't change it at the top, then the people at the bottom don't have the power to force people at the top to [00:08:00] change. If you change it at the top, but you don't change it at the bottom, then the people on the bottom dig in their heels and resist if they're not sold on it. 
 Although, of course, you know, if you control people's salaries and performance, you'll eventually get there. I think it'll just be a long and painful process. 
 David: Yeah. I mean, you can also force them out. 
 Matthew: Yep, 
David: I mean, the thing is, if you get people in at the top that have the buy-in, they have more capability to influence those under them. Whereas those at the bottom don't have a lot of capability to influence those above them. 
Matthew: That's fair. All right. Resistance to change due to increased complexity. This one is particularly, I think, an item for modern SOC ops. Resistance to change due to lack of knowledge. This is a big one. The modern SOC tends to follow more of an engineering- and automation-led model, whereas historically security operations has been very manual. Think of your common model of a SOC: you know, a whole bunch of [00:09:00] people sitting around computers, responding to alerts one at a time, going through and investigating stuff. Moving to the modern model of security operations is a huge change, 
and there's a big knowledge gap between the two. 
David: And you can also run into a lack of desire among those who have done things the old way in the past to learn the new way and embrace it. 
Matthew: That actually reminds me of the job retraining programs that the government tried to do in West Virginia somewhat recently, after coal mines were closing down. They were like, ah, we'll retrain you, don't worry. If I remember correctly, only a tiny fraction of those retraining spots were taken by folks. 
 David: Yeah, that rarely ever works. 
Matthew: Yeah. Number one, the people have to be interested in retraining. Number two, the retraining has to be available to retrain them into something that is useful and profitable and, you know, that society [00:10:00] wants, 
David: Not only that, but the personalities have to align also. To say that you're going to take someone who's really interested in, you know, forestry or whatever and say, okay, we're going to put you behind a desk, you know, that's not a recipe for success either. Whereas, you know, if it's organic, then those people who are interested in forestry go into forestry, and those who are interested in sitting behind a desk and coding go into coding. 
But when you take somebody whose personality doesn't lend itself to either one of those things and try to put them there, it's not gonna work. 
 Matthew: That's fair. 
David: So, resistance to change due to fear. Because, well, will this thing cause us to fail? You know, during the transformation, is the organization going to fail to catch a major incident along the way? 
You know, so that's a major concern, saying, well, we're not used to doing it the new way. What if this new way is not right and we miss something while we're getting there? Or [00:11:00] we try to change too fast and we're not up to speed on the change, and that causes us to fail. 
And is the new thing going to be better than the old, you know, tried and true ways? Which, to say the old ways are true is a matter of opinion. I mean, if the old ways were true, then we would be successful with those old ways and we wouldn't be thinking about new ways to do it. 
 Matthew: you know 
David: And then one of the other things that could be part of this fear is: are you going to get undercut on your way to transforming, leading to failure? So if you have a goal in mind for what your transformation is going to be, and you start down that path, and you either lose general management support or you lose funding on the way and can't reach that goal, is that going to lead to failure? 
Which, you know, it could very well, because you're neither one nor the other. You're halfway in between. And maybe just being that way is going [00:12:00] to lead to all sorts of other problems. 
Matthew: That's an interesting one, especially because a SOC transformation, I mean, any big transformation of a major area of your business, is going to take a year or more, and you lose key personnel in the middle of that. Or, like you said, leadership: you know, your CISO leaves and the new CISO comes in, and they do the normal leadership thing of, I hate everything that the person who was here before me did. We're gonna do it all different. 
 David: Eh. Yeah, I'm gonna put my stamp on it. 
Matthew: So, because of this, frequently, management wants transformation light, quote unquote. They want all of the benefits of a transformation. They want fewer alerts. They want higher fidelity. They want lower personnel costs. All of these things. They want more automation. 
 David: Well, they claim they want more automation, but they're generally unwilling to actually automate, because you could break something if you automate 
Matthew: So [00:13:00] true. But they want none of the costs. They only want a few things to change. They want improvement to be slow and incremental. And like you said, they're unwilling to actually automate stuff. 
And this automation thing, I think I have more about this later, but I find it super fascinating, because this is one of those things where everybody talks about how they want more automation, but it is surprisingly hard to implement. For a couple of reasons that I've seen. First, not every tool provides an API to automate, which I found shocking when I learned this in like 2018, when I went looking to automate blocking of email addresses in an unnamed email product, and discovered that the product we were using for our email security gateway did not have an API endpoint to block email addresses. 
It's 2024, and if you have the product, you probably know who I'm talking about, but I just found out they're planning on releasing that capability in Q2, so sometime in the next month or so. It's [00:14:00] 2024. I've been asking for this for six years because I wanted to automate email blocking. 
 David: they, they must have put that on the short list then. 
Matthew: Same thing with a proxy that I've worked with. 
Again, it will remain unnamed, but I've worked with a proxy that does not have, and has no plans to ever create, an API endpoint for blocking malicious URLs or domains. 
 David: serious? 
 Matthew: Yeah. So 
 David: Holy cow. 
 Matthew: there's certain things that you want to automate 
David: That would put that product on the list to be gone. 
Matthew: And it might be. But even if you had that on your list to get rid of, sometimes on the SOC side you may not have the authority to get rid of it. It may be run by a completely different team who has a completely different set of requirements. 
 David: eh, 
 Matthew: they're like, you know, that is important, but it doesn't really impact us. 
David: I mean, that's something I've been doing for, [00:15:00] man, many, many years now: having that as a hard and fast requirement in the requirements document for any new 
solution to be acquired. It must have a full API. And when I say a full API, I mean an API where there's nothing you can do in the GUI that you cannot do in the API. 
Matthew: That makes excellent sense, and I would strongly encourage that. Maybe one day I too will get there. Other issues with automation: you mentioned before that people are unwilling to actually automate. This is another one where people pay a lot of lip service to automation. If you introduce a workflow, say, for your automation platform, and you're like, you know, what if we created a workflow that would automatically block email from malicious email addresses? 
They'd be like, oh, but what would happen if you, you know, blocked all the email or something like that? You accidentally put in, you know, star dot star. 
 David: Well, they don't accidentally do that, do they? Eh. 
Matthew: Typically what I've seen here is that people will add a [00:16:00] manual step where you have to verify it. And I think that's fine. I mean, that's still better than nothing. Generally speaking, like I was talking about before, let's say you want to block something at the proxy. One option is that you alt-tab out of your ticketing system or your SOAR product or whatever, log into the proxy, copy and paste the domain over, and hit enter. And then the fully automated one is, you know, if an analyst decides that this incident is malicious and marks it as a true positive, the SOAR product will automatically go out and block all the indicators related to it that are marked malicious. 
But then in the middle, you have the option of maybe hitting a button, or running a command, to block that specific IP. It's still faster than having to tab over and log in. So I think there is a midpoint here that you can take advantage of. 
David: Well, I mean, on the path to automation, that is one of the steps in the middle, right? [00:17:00] So first you fully develop your process. You vet your process manually many, many times to say that, yeah, this process is good. Then you work on automating that process. And once the process is automated, instead of the manual thing, it's a button push to execute the automation, right? 
And then once that's been running for a while, you go and say, okay, we're going to automate this so that you don't even get the button push anymore. But one of the things you can do as part of your automation, too, is say, okay, what are all the things that could go wrong? Write those down, and maybe you put those in as error handling in your automation. 
You know, if you're afraid that someone will block star dot star, you put that in the automation and say, if the entry is star dot star, don't do that thing. So you can add additional checks in there on top of that. But the thing is, if you're always afraid of doing something that's going to break something, then the likelihood of [00:18:00] you doing anything new is, you know, small. 
You're always going to be doing the old thing. At some point you have to, you know, face your fear, deal with it when things do go wrong, and say, well, this one thing went wrong, but overall this has still been better. 
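To make that middle step concrete, here's a minimal sketch in Python of a guard-railed block action with an optional confirmation prompt. The proxy endpoint URL, token handling, and payload shape are hypothetical stand-ins for illustration, not any particular product's API:

import re
import requests  # assumes a proxy that exposes a hypothetical REST blocklist endpoint

PROXY_API = "https://proxy.example.internal/api/v1/blocklist"  # hypothetical URL
API_TOKEN = "REDACTED"  # however your SOAR actually stores secrets

# Guard rails: refuse entries that would block far too much,
# like the "star dot star" example above.
DISALLOWED_PATTERNS = [r"^\*", r"\*\.\*", r"^\.?$"]

def is_safe_entry(domain: str) -> bool:
    """Reject wildcards and obviously over-broad entries before blocking."""
    return not any(re.search(p, domain) for p in DISALLOWED_PATTERNS)

def block_domain(domain: str, require_confirmation: bool = True) -> bool:
    """The 'button push' midpoint: one action instead of alt-tabbing into the proxy UI."""
    if not is_safe_entry(domain):
        print(f"Refused over-broad entry: {domain!r}")
        return False
    if require_confirmation:
        if input(f"Block {domain!r} at the proxy? [y/N] ").strip().lower() != "y":
            return False
    resp = requests.post(
        PROXY_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"domain": domain, "reason": "SOC true positive"},
        timeout=10,
    )
    return resp.ok

# Full automation is then just the same call with require_confirmation=False,
# fired when the analyst marks the incident a true positive in the SOAR.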
Matthew: And frankly, even with the manual process I just described, it's not like a person manually adding it in is faultless. People make mistakes all the time, and, you know, put the wildcards in the wrong place or accidentally block too much because they weren't paying attention. But anyways, why does management want transformation light? Well, we talked a little bit about it before, and apparently I'm skipping through my notes in inappropriate ways, but management typically sells the project for approval, where they need to show strong results, but the budgets are limited. The organization may have a limited capability to actually transform, and we'll talk about this in the next topic, regardless of whether they need to transform. And typical resistance to change. 
 David: And also I think the, the [00:19:00] failure of vision, you know, they can't actually imagine what the new thing is. They can only imagine the old thing with racing stripes on it. 
Matthew: So he brings up the concept of the DNA of the SOC, which I find very interesting, because this especially hits close to home. I've talked about this a little bit over the years. I've talked to a couple of vendors about it. I recently talked to a manager at Accenture about it. You read through the blogs on the internet, and you read about these SOCs that are doing these amazing things. You know, you listen to podcasts, and some guy's talking about how they don't even have a SIEM like you think of a SIEM. Analysts are running hunting queries out of, you know, prebuilt Jupyter notebooks, and they're writing data science queries and compiling statistics on the fly. And you see stuff like Google, 
I think it was Google, that wrote that open source endpoint forensics tool, where you could basically just log in, before CrowdStrike came out with Real Time [00:20:00] Response, where you could just log into any system and start pulling artifacts and running forensic stuff on various systems in your environment. And then you look at what's going on in the more typical SOC, where everything is still manual, and everybody is still staring at the screen, and they're still reviewing the alerts as they come in, one by one. Like, these two groups are in completely different places. And Anton put some words to it. 
He calls it SOC DNA. And basically his take on this is that if the history of the organization is a classic kind of 1970s, 1980s NOC, then your SOC is going to resemble that. So if your company is older and they did the, you know, network operations center thing, then that's just the DNA of what your company expects, 
and they're going to expect you to build a SOC in the same way. And in these organizations, change is difficult and incremental, with lots of institutional resistance. The characteristics of the org and the SOC will be a big [00:21:00] push towards shifts. You know, they want 24/7 coverage. It's critically important to them to have somebody there overnight, regardless of whether the alerts justify it or not. 
They want tiers. They want Tier 1, Tier 2. And they want formal procedures. You know, they've got a big book of procedures, and there's a procedure for everything. 
David: Yeah. And basically this means if your company is older than 20 years, this is 
what you got. 
Matthew: Ha! Yeah. The opposite is if the history of the organization is more recent, and you started as, for example, a software engineering organization; then your SOC is going to resemble that. The org is going to be a lot more flexible. There's going to be a focus on automation and engineering. 
You may have a normal nine to five or some kind of shiftless schedule, where, you know, maybe you've got a couple people that come in earlier and a couple people that come in later, but you're not going to have a formal, like, you are the 8-to-4 shift and you are the 4-to-12 shift. It'll likely be tierless. 
A lot of these modern SOCs seem to be tierless. And anything that is repetitive [00:22:00] is automated away. Some places even say, like, if you're going to do that a second time, automate it instead, which I think, two times is probably a little aggressive. I can understand the, 
David: Yeah, it should be more like, if you have had to do this twice in X period of time, then you're going to automate it. You know, if you have to do it once or twice a year, then that's the threshold. If you have to do it more than twice a year, then you're going to automate, or something like that. 
Matthew: So this is super interesting to me. This really explains it, because I've always worked for companies that have been older, with this traditional NOC DNA. And it just hurts sometimes when I read about these, I guess, fancier companies that are doing all this cool stuff. And I'm like, ah, that's not us. 
I don't know if we'll ever get there. 
 David: They come by and slap you on the back of the head and say, stop, stop fantasizing. We live in the real world, son. 
Matthew: Yeah. Yeah, I guess so. So [00:23:00] this concludes kind of the rant part of the article, and now Anton has some notes about, especially if you're part of these older companies, what kind of minimum viable transformation you should be focused on: one that fits within your DNA and the stuff that you're allowed to do, but still kind of progresses towards that mythical future transformation. 
So first he has some notes. Number one, temper your expectations. You're not going to be able to get some of the amazing results you may have heard about if you've listened to any podcasts talking about these modern SOCs. And number two, the work increases linearly, quote, but some value occurs at later stages, unquote. 
So if you don't see a ton of value immediately, that's because some of this takes a little while to occur. He divided this into a couple of different areas: people, process, and tools. So, number one, people: analysts become engineers. This is, I think, the most important one. Right now, analysts and engineers are two very different groups of folks. 
Some people do cross [00:24:00] this divide. I mean, eventually a lot of analysts get promoted into engineering roles, but they rarely come back to be an analyst. But the idea here is that analysts get involved in threat detection engineering. They get involved in creating and tuning alerts. 
I didn't write this down, but they also should be involved in automation. 
David: Yeah, well, I mean, unfortunately a lot of them are resistant to it. 
Matthew: Yeah. And I think that that is the reality. This goes back to what we were talking about before. Like, some people are just resistant to it. Maybe they don't think they can do it, or they don't want to do it. But I think that the more you can get your analysts involved in the engineering side, maybe even in the tool creation side, the better, because the people who are running the tools 
may not necessarily be thinking about how the SOC uses the tools to detect. This is something that I think I've had a lot of success doing: bringing the Splunk SIEM people together with a couple of the analysts that are really interested in threat detection. And I think even if you did something as simple as [00:25:00] a monthly call or a bi-monthly call, where you got the analysts and threat detection together to talk about the alerts, you'll see analysts making comments about alerts where the threat detection team is like, what? 
I didn't realize that was a problem. 
David: Yeah, I think it's better to do a rotation. So regardless of what tiers you have or whatever, if you have roles within the SOC, then you can say, and maybe it's a monthly rotation or something like that, but, you know, every month someone's going to rotate into the detection engineering role, and they're going to be staffed to the detection team. They're not going to do analytics for a month. 
And then you just run through that cycle, and eventually they're going to come back around and get that month again in a year or something like that, depending on how many folks you've got and everything, or maybe you make it more frequent. I think too frequent, though, and they aren't there long enough to absorb and understand. 
I think maybe a month rotation or something would be good. 
 Matthew: [00:26:00] No, 
David: If you have a smaller team, maybe you do quarterly, so they have even more time to, you know, learn from that team before they come back to do analytics. 
Matthew: I like the idea. I guess my only problem is potentially twofold. Number one, especially if you're a small team, it may be tough to free people up to do this. You know, if you've got four analysts, or two analysts, on your security operations team, are you talking about full time, or is this kind of a part-time, 20 percent type role? 
 David: No, I would do it full time. 
Matthew: Yeah, in an ideal world I would agree, but I'm just thinking of whether it's possible. I think it depends. If you've got a real small team, it can be tough, and you'd have to convince leadership that, you know, I know we hired this guy as an analyst, but they're going to go spend some time over here, and I'm going to need another analyst. 
 Yeah, yeah, yeah, 
David: Say, well, consider this training, right? If we want our people to get better, we need to train them. So if we're able to put these people through detection training, then they're going to be better analysts. So [00:27:00] this is, you know, a team improvement effort. 
Matthew: Again, I'm not disagreeing with you. I think it's a wonderful idea, and I personally would love to do something like that. I just don't know if management would accept it, simply because you almost have to have one or two extra roles to allow for it. Although, I don't know, maybe you could sell it as: sure, we have to have an extra analyst role or two, but then the engineering teams don't need as many folks, because they'll have supplementary folks from us. It's almost like loaning budget. 
David: Make your business case at budget time and say, hey, I want an additional billet, and here's what this additional billet is going to get for us. 
Matthew: I will point out here that Anton uses engineer in the sense of software engineer, but I feel like in most organizations, the "engineers" are more like product experts and operators and administrators, not engineers in the sense he uses here, in terms of people who build things. 
 David: Yeah. That's my experience as well. 
Matthew: So my next item here is [00:28:00] very similar: train analysts in the engineering skill sets, and prioritize hiring folks with those engineering skill sets. Of course, the negative here is that folks with engineering skill sets and titles usually get paid more than analysts. A lot of companies use analyst as shorthand for entry level and engineer as shorthand for mid level. So you may need to increase pay here. 
David: Well, one thing I'd note there is, if you're going to actually hire for that skill set, don't do it before you have buy-in that they're going to be able to do engineering things. Cause you certainly don't want those skills to atrophy if they come in and are only stuck doing analytical work. 
Matthew: Yeah, they'll be unhappy. Yep, that makes sense. So this is one place, too, where you could train folks with the role rotation. Maybe you could do a 20 percent job, where somebody spends eight hours a week doing a different role. That may be some way to get around some of the job-related restrictions. 
It certainly won't be as valuable as being full time [00:29:00] somewhere else, but, 
David: Well, if you're going to limit it, though, I would say try to limit it so they do spend some time every day, so that it's continually reinforced. Cause if they only do two hours on Monday and don't do anything again until two hours the next Monday, I think you're going to lose a lot there. 
Matthew: I was more thinking, if it was 20 percent time, of like an eight hour chunk, like taking a whole day. So I dunno, you could certainly argue over doing a little bit each day versus doing a big chunk more rarely. Certainly more often would be better for reinforcement of learning. But an eight hour day might be better in terms of them actually accomplishing something. 
If you try and divide something up into an hour and a half a day, you may not make much forward progress. 
 David: Yeah. I'm sure there's a sweet spot in there somewhere. 
 Matthew: Probably. As you automate items, move folks up to higher value roles. Don't just fire them, they've got a lot of information about your company. Make sure that you're taking care of your folks. 
David: Yeah. And I think one thing you need to do is, you know, indoctrinate them. What you want is for them to [00:30:00] absorb that mindset that you have for the SOC transformation, and the change that you want, so that they are also supporting this effort. And when they talk to somebody else, they're talking about the SOC transformation and they're all in on it. 
Matthew: Second item for people: continuous improvement. Learn, learn, learn. A culture of learning and improving. Make it part of your performance objectives. He has a real focus on the Site Reliability Engineering book, but I think that really anything around engineering and upskilling your people works here. 
David: Well, that ties into the concept of the detection-as-code idea that I think is pivotal to SOC transformation. 
Matthew: Yeah. Yeah. He has another article, I don't remember if we talked about it or not, about how security can learn a ton from site reliability engineering. Simply because, you know, they've got to have everything work perfectly, or, you know, five nines of reliability, 
and security definitely does not have that level of reliability. 
 David: No. [00:31:00] Right. 
Matthew: Process. First item here should not be a surprise to anyone who's listened to us so far: automation. Review the SOC stack, find automation opportunities, prioritize, and start implementing them. You don't want to do this all at once. You probably want to start with your most common use case, which will probably be malware or phishing, 
and start getting rid of the boring parts. Measure differently. This one's an interesting one: new metrics that measure how automated you are, and making those a priority for improvement. This is the idea that if you're not measuring something, you're not prioritizing it, or it's not visible. 
So if you're talking a great game about how you're automating, but you're not measuring how automated you are and showing how you're focusing on that, it's all just talk. 
David: So include that in your standard briefs, on top of, you know, we had this many incidents. You could say, we automated this many playbooks to this level. 
Matthew: Yeah. And if there's some way to track manual actions versus automated actions. Like, you know, last month we automated 8 percent of our actions, and then [00:32:00] in six months you can be like, this month we automated 15 percent of actions. We've doubled the automation, you know, 
David: Yep. And you might actually be able to tie that into the actual incidents you work, too. To say, hey, we had 10 incidents this month, and on average the response to each incident was 50 percent, or 25 percent, automated. 
Matthew: I'm trying to think of how to collect that metric, which might be tough. I think maybe the easier one is if you start automatically closing incidents. For example, here's one for phishing. You automate a phishing incident such that somebody reports something, or maybe you've got a product like Proofpoint TAP, where it tells you that somebody clicked on a malicious phishing link. Maybe you could automate that by automatically, you know, checking your proxy logs to see if anybody did click on the link. Or maybe you just trust the Proofpoint verdict, and you automatically reset their credentials, send the ticket to the service desk, and then [00:33:00] close the ticket. Do you even have to look at that on your side? 
Or maybe you do have a search, you know, to tell whether anybody weird logged into this account. Like, there are things you can do so that maybe you can fully close that ticket without somebody looking at it. And at the end of the month, that's the metric: you know, 20 percent of our phishing tickets were closed automatically, 
David: Hmm. Well, I mean, in that case, you can say that phishing is 100 percent automated. That's your thing. But I was thinking of: this type of incident came in, there are 20 steps in it, and we've automated five of those. So we're 25 percent automated on that. 
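For what it's worth, here's a toy sketch in Python of the per-incident coverage metric David describes, assuming your SOAR's run history can tell you which steps ran without a human. The Step shape and the sample numbers are invented for illustration:

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    automated: bool  # True if the SOAR ran it without a human in the loop

def coverage(steps: list[Step]) -> float:
    """Automation coverage for one incident: automated steps / total steps."""
    return sum(s.automated for s in steps) / len(steps)

def monthly_coverage(incidents: list[list[Step]]) -> float:
    """Roll-up for the standard brief: average coverage across all incidents."""
    return sum(coverage(i) for i in incidents) / len(incidents)

# Toy example matching David's numbers: 20 steps, 5 automated -> 25%.
incident = [Step(f"enrich-{i}", True) for i in range(5)] + \
           [Step(f"manual-{i}", False) for i in range(15)]
print(f"{coverage(incident):.0%}")  # -> 25%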
Matthew: On that incident. And then you can just add all those up. Yeah, because I was thinking more of the ad hoc stuff, like blocking. I was trying to figure out, if you block something in an automated fashion with your SOAR, you can probably create a metric out of how many automated actions took place. 
But then the problem is, how do you measure that against the manual actions? Like, how do you track, you know, oh, we manually blocked 27 proxies or domains this month, and we [00:34:00] automatically blocked 13, or something. I don't know. Anyway, we're getting into the weeds. Next item: create feedback loops that make continuous improvement part of the SOC DNA. 
He gives as a main example the blameless post mortem, where you talk about the failure afterwards, but nobody gets, you know, fired for it, nobody gets blamed for it. You're focused purely on, how do we stop this from occurring again? 
David: Right. And that relates a lot to Deming, where he talks about how most failures in a process are due to the system and not individuals. So you need to stress fixing the system and not blaming individuals. 
Matthew: Well, the final item under process is detection pipelines: adopt a detection life cycle process and work towards continuous development of new content and improvement of existing content. Once you build a rule, it's not set in stone. You need to constantly keep up with it, modify it, make sure it still works. 
We've talked about that a lot with breach and attack simulation. 
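As a small illustration of that continuous detection life cycle, here's a detection-as-code sketch in Python: the rule lives in version control with known-bad and known-good fixtures, so any later tuning gets re-verified automatically. The rule logic and sample events are invented for the example:

# Detection-as-code in miniature: the rule ships with regression fixtures,
# so every change is re-tested before it reaches production.

def rule_suspicious_powershell(event: dict) -> bool:
    """Toy detection: encoded PowerShell spawned by an IIS worker process."""
    cmd = event.get("command_line", "").lower()
    parent = event.get("parent_image", "").lower()
    return "powershell" in cmd and "-enc" in cmd and parent.endswith("w3wp.exe")

KNOWN_BAD = [
    {"command_line": "powershell.exe -enc SQBFAFgA...",
     "parent_image": r"C:\Windows\System32\inetsrv\w3wp.exe"},
]
KNOWN_GOOD = [
    {"command_line": "powershell.exe Get-Date",
     "parent_image": r"C:\Windows\explorer.exe"},
]

def test_rule_still_fires():
    assert all(rule_suspicious_powershell(e) for e in KNOWN_BAD)

def test_rule_stays_quiet():
    assert not any(rule_suspicious_powershell(e) for e in KNOWN_GOOD)

if __name__ == "__main__":
    test_rule_still_fires()
    test_rule_stays_quiet()
    print("detection regression tests passed")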
David: That just ties right back into [00:35:00] the last podcast, where we were talking about, you know, monitoring your detections and all that. 
Matthew: Final item was tools: drive to automation. The big thing here is deploy some type of SOAR product and start automating use cases one by one. Automating enrichment. You have a note in here, David, on automated enrichment. Typically that comes by default in most SOAR platforms, which is super helpful. 
That's usually the first step in a SOAR platform: it does the enrichment automatically. 
David: Right. Well, I'm saying that that is the lowest risk, right? You know, if some part of that script fails, it's not going to break anything. You just don't get the data. 
Matthew: Yeah. I think the next step, generally, is you automate investigation. So, for example, let's go back to that phishing review. Let's say that every time you get a phishing ticket, you know that you're going to submit the link or the attachment to a sandbox. You're going to do a search in your SIEM for all the logins for that user, to see if there are any strange logins by IP address or [00:36:00] user agent. 
Oh, and you're going to search your SIEM for the proxy logs to see if anybody clicked that link, or maybe search your EDR logs for whether anybody wrote that hash to disk. You can automate all those things. You can write those standard searches. You can have the SOAR do it, so that when a person picks up the ticket, all that information is already there for them. Which, again, has the same consideration: you are touching stuff, but you're touching fairly low risk stuff. Like, you're querying your SIEM, you're, you know, submitting to a sandbox. 
And then finally, the final step is usually automating the remediation, or automating blocking. And that's of course where people tend to get really twitchy. 
 David: Yeah. 
Matthew: Yep. Final one for the automation bit. Well, two things. First, reduce toil: find what most folks spend most of their time on and automate it. And 
David: There are two things to be automated, you know: whatever happens most frequently, and whatever takes the most time. 
Matthew: And then finally, when reviewing tools, make sure that they can be automated, which I went into in great [00:37:00] detail at the beginning. I just thought that everything had APIs these days. 
 David: One would like to think. Anyway, 
Matthew: So why does this matter? Well, NOC-style SOCs cannot scale. They rely on people, and they rely on lots of alerts and people to review those alerts. 
There are lots of things you can do to improve that, but the problem is kind of baked in. Like, you just cannot scale that way. Threats expand geometrically with your attack surface, and the attack surface is just constantly growing. The attackers' capabilities are constantly growing, so it's tough. 
David: Yeah, and that ties into, you know, the concept I think you and I talked about before, that I'm kicking around, which is the idea of operational debt, right? So you have technical debt, and I think there's also operational debt, where if you're not improving, you're really going backwards. You're accumulating debt as you remain stagnant, doing things the old way, not 
adapting new concepts and new capabilities to the way you do your operations. 
Matthew: That makes a [00:38:00] lot of sense. Would that also include, so that includes the concept that, you know, the work is increasing, and if you're not improving your procedures and policies and your methods of doing work, then you're just naturally falling behind. Does that also include the design of your operations teams, too? 
David: I mean, it could, if a design change is necessary in order for the change to happen. You know, that's something else Deming said: change is not mandatory, because neither is survival. 
Matthew: All right. Well, what can you do about it? You should evaluate your SOC DNA and figure out if a real transformation can occur. Then prioritize and set your expectations based on that. And then document where you want to go and get leadership buy-in. 
David: Next article: GhostStripe attack haunts self-driving cars by making them ignore road signs. And with a title like that, you know it came from The Register. As well, in their opening statement they mention boffins. You know, I mean, nobody else uses that term anywhere in the universe other than The [00:39:00] Register. 
But boffins have apparently written a paper that says it's possible to interfere with autonomous vehicles by exploiting the machines' reliance on camera-based computer vision, causing them to not see road signs. They dubbed this GhostStripe, and they're going to present the paper at the ACM International Conference on Mobile Systems next month. So basically, what these researchers found out is that the way self-driving cars scan signs is like in the old sci-fi movies, where a line comes down over the sign and scans it in a linear fashion, a row at a time, versus taking a snapshot of the entire image and then analyzing that. 
So what they found they could do was use LEDs to rapidly flash different colors onto the sign as the active capture line moved down the camera's rolling shutter. And once the camera rolls all the way down the sign, it creates an image and sends that image to [00:40:00] a classifier, which compares it with what it thinks a stop sign should look like. 
And since the colors are all messed up, it says, oh, well, this is not a stop sign, and ignores it. 
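A tiny Python simulation of why the striping fools a rolling-shutter camera but not a driver: each sensor row is exposed at a different instant, so a fast color cycle paints stripes into the capture, while over a full exposure window the same cycle averages out to a plain glow. The colors and row timing here are invented, not the paper's actual parameters:

# Rolling shutter: each row of the sensor is read out at a slightly
# different moment. If the attacker's LED cycles colors fast enough,
# every row samples a different color and the captured sign is striped.

COLOURS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # attacker's flicker cycle

def rolling_shutter_capture(rows: int) -> list:
    """Each row samples the LED at its own exposure instant -> color stripes."""
    return [COLOURS[r % len(COLOURS)] for r in range(rows)]

def global_shutter_capture(rows: int) -> list:
    """All rows share one exposure window -> the cycle averages toward grey/white."""
    avg = tuple(sum(c[i] for c in COLOURS) // len(COLOURS) for i in range(3))
    return [avg] * rows

print(rolling_shutter_capture(6))  # striped: R, G, B, R, G, B
print(global_shutter_capture(6))   # uniform (85, 85, 85) per row here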
Matthew: The pictures they give with the different colors, I mean, I get why they did it this way, but like you're talking about, this will be where each line of pixels is a different color. I bet it just looks absolutely riotous 
 David: mm-Hmm, 
 Matthew: to the sensor. 
David: And this can't be seen by people, either. So while they're doing this, they're fooling the camera, but they're not fooling the human eye. 
Matthew: Yeah. Cause to the human eye, it just looks like a normal light, because white light is a mixture of all the colors. So yeah, I think the only thing you would see here as a human is, you'd be like, oh, that sign is lit up. 
 David: is weird. 
Matthew: That's weird. And then maybe some places have lit up signs and that's normal, and other places don't. And right as your car drives past the sign, you may look at it and go, that's weird. 
That sign is lit up. Wow. 
David: Yeah. Well, when they start digitizing all signs, then you'll be able to attack both [00:41:00] self-driving cars and regular drivers by changing the signs. 
 Matthew: Yeah. That's so much simpler. We should just go straight to that. 
David: Oh, yeah. I mean, it's coming. So, they came up with two ways to do this attack. One does not require access to the vehicle. It uses a tracking system to monitor the vehicle's location and dynamically adjusts the LED flicker according to the vehicle's position. 
Matthew: This one was kind of weird to me. Does this occur on the vehicle? My assumption reading this was that this is an LED light that somebody has, like, on the roadside, pointed at the sign. Are they using the LEDs of the vehicle itself to flash different colors, like LED headlights or something? 
David: Because, like I said, this one does not require access to the vehicle. 
Matthew: Right. All right. Yeah, so I don't know, maybe it's just written weird. Because it seems like you don't have to track the vehicle; you can just flicker 
the LEDs. 
David: You've got to [00:42:00] be in sync with the line, though, I think, and that's where the tracking of the vehicle has to come in. Because the changing of the LED colors on the sign has to correlate with where that capture line is on the sign, I think. 
Matthew: I think you can look at whatever the standard CMOS sensor is; they probably all have very similar refresh rates and very similar rolling shutter rates. And you would just be like, oh, the average one changes lines every 360th of a second, so we're going to change our LED lights every 360th of a second. 
But yeah, I don't know. Seemed wild. Fortunately it does require being there physically, although for a high enough priority target, I think that's doable. Yeah, you can read about numerous Israeli and Russian assassinations. One comes to mind where the Israelis put an autonomous, or remotely controlled, machine gun in, like, the bed of a truck, and waited until their target, an Iranian of some sort, drove past where the machine gun was set up, and then triggered it [00:43:00] remotely. So, 
David: They stole that from a Bruce Willis movie. 
Matthew: This does seem a little uncertain for an assassination. It could be used to create a distraction. Maybe, you know, you turn off their autonomous driving as they pass a stop sign, and they drive off into traffic and crash, and then somebody walks up and shoots them. But this seems more like a Tom Clancy novel or a James Bond movie than reality. I know that autonomous taxis were used as murder weapons in at least one of Charles Stross's books, although it was by accelerating and running into things, or driving off into canals and locking the doors. 
 David: Huh. 
 Matthew: So not as useful. 
David: That's the thing about this. The attack could be practical, meaning you can do it, it's capable, and the vehicle could be subject to it, but impractical as far as getting anything valuable out of the attack. 
Matthew: Yeah, I mean, I could see it being very valuable if it was a high enough traffic stop sign or something, because then people would just be like, oh, you know, the car screwed up and didn't see the stop sign, just drove into traffic, and then they [00:44:00] died. 
 David: The thing is, there's no guarantee of death in the crash either. 
Matthew: True, true. It kind of spoils it if you have to walk up to them and shoot them in the chest after the crash. 
 David: Yeah. 
 Matthew: How likely do you think it would be for technically minded pranksters to do this to random stop signs? 
 David: Certainly, certainly possible. 
Matthew: Plausible. I don't know. Most pranksters' shenanigans tend to be cheeky and fun. This would be cruel and tragic, so really not a shenanigan at all. 
David: It could. That's one thing they didn't say in here, and I'm not sure if it's possible: rather than simply having the car not understand the sign, could you get it to misinterpret the sign as something else? Right? Like slowing down. 
Matthew: Yeah, or left turn coming up. Or, no, because the car wouldn't just drive off the road, because it can see the road, too. 
 David: Yeah. So, you know, 65 mile an hour speed limit on the interstate, you change that down to 35. 
Matthew: They'd get rear ended, yeah. Yeah, cause they only tried this on stop and yield. Oh, they did try it on speed limit signs, too. So, [00:45:00] interesting. 
David: Um. But the second method of attack, which was quite wittily named GhostStripe 2, does require access to the vehicle. 
That involves placing a transducer on the power wire of the camera to detect the framing moments, to refine the timing control. 
Matthew: So that implies to me that the rolling shutter, so, I wonder if it's drawing a different level of power each time it reads a line, so it can detect, you know, like a tiny spike in the power. Oh, interesting. They're doing weird things with power these days. 
Like, I'm sure you've seen some of those proofs of concept where they can detect which keys you're typing based on the changes in power. 
 David: Yep. Yeah, 
Matthew: Or timing attacks, and the attacks on processors based on how much power they're using. So wild. 
David: All that stuff kind of goes back to the whole idea of, you know, in the nineties the government was big on this concept called TEMPEST, 
 Matthew: Yeah, I 
David: where everything had to [00:46:00] be hardened. You know, when I was in the army, we had computers, I kid you not, that weighed 45 pounds, because they were in these metal cases with Faraday cage linings and stuff. 
It was ridiculous. 
So there are some countermeasures that would prevent this from happening. You replace the camera with a sensor that takes the whole image at once, instead of doing line scanning, or the line scans are randomized. And you could increase the number of cameras that scan the sign, which would lower the success rate and require more complex algorithms and hacking in order to make it work. 
Matthew: Or, now that this is known, they could take it into account. Although you could change this up in a bunch of different ways by using different patterns of LEDs. Or you could just take color out of it: make it black and white. 
 David: I think all signs should be black and white.[00:47:00] 
Matthew: Probably makes sense to simplify it; there are a bunch of people that are colorblind, so 
David: But why does this matter? The interesting thing, I think, why this is important, or could be of note, is that a critical piece of tech that makes self-driving cars work is computer vision. And computer vision is used in a lot of things, including face recognition. You may recall that we mentioned that when we were talking about the Your Face Belongs to Us book. 
 Matthew: Yeah, some automakers are using radar and LIDAR, but I can't imagine the resolution on those is good enough to identify signs. I think you have to use computer vision for signs. 
David: That's what I'm saying. Computer vision is critical to the thing, even though there are multiple different systems that work together to make the self-driving 
car work. Unless we start adding, like, transponders to each one. You know, a road sign that says speed limit 65 has a little radio transponder on it that's going 65, 65, 65, so that every [00:48:00] car that goes by picks it up. Or a stop sign. But that wouldn't help you identify exactly where it was. 
 Matthew: You still have to match it with computer vision or something. 
David: Well, years ago, there was an experiment out in LA where the road was lined with magnets, and the cars tracked where they were and the speed they were going by where the magnets were. 
Matthew: Interesting. Interesting. I would not be surprised about having a connected road like that eventually. I can't even imagine the costs, though. I know we can't even afford the roads we have right now. 
David: You know, well, you can't afford not to have potholes in them. But actually, I mean, it's difficult for us to imagine what the future is going to be like, but if you have LED signs, the LEDs could also broadcast what they are to cars. Then the car doesn't have to read the sign; it just receives a transmission that says the sign's a stop sign and it's 50 feet away. 
You know, there could be technology like that. Or maybe all signs will end up getting replaced with barcodes, and cars will start reading barcodes or [00:49:00] QR codes or something to determine what they are. I mean, there are all sorts of different options out there. It's only limited by the bureaucrats and the government, which means we're gonna end up with freaking garbage. But, all sorts of possibilities. But I'm wondering if any lessons from the attack and defense related to one computer vision project will be leveraged in relation to other computer vision projects. You know, we mentioned facial recognition, which uses computer vision, but other things probably use computer vision today. 
So I'm wondering how many of these attacks on computer vision for automated cars will translate into other computer vision projects, like facial recognition or whatever. 
Matthew: The funny thing about that is, I originally added a joke in here about how now we can project LEDs on our faces to confuse facial recognition. But that actually makes me wonder. Computer vision relies on kind of the dimensions of our face. What if you could project fake shadows or fake bright spots onto your face to fool it? [00:50:00] You know, cause a bright spot is your cheek, and a dark spot is maybe under your nose. 
Like, if you projected bright spots under your nose, the computer vision would be like, whoa, this doesn't match your face. 
 David: Wait, I was thinking the exact same thing. 
 So everybody wear a baseball cap that has some kind of light sensor array on it. 
Matthew: And I've seen there are people that have clothes that supposedly mess with cameras, where they'll have, like, IR LEDs around the neckline. So any camera that relies on IR, all it sees is just a big white blob for your face. 
David: Oh, nice. I'd like that to be my DMV photo. 
Matthew: But the problem is, of course, if you're using a camera that's using normal light, it doesn't see it at all. I wonder how we could mess with that. Kind of like how, when you shine this on a stop sign, you just see a light on the stop sign with your human eyes, but the computer sees so much more. I wonder if you could shine, like, hidden messages onto [00:51:00] places and stuff like that. I don't know. There'll be interesting things. 
 David: Yeah, you could offend a lot of cars. 
 Matthew: So what should you do about it, David? 
David: I have no freaking clue. Other than pay attention. Don't let your car actually do 100 percent of the driving. Pay attention when your car's doing the driving for you, I guess. 
 Matthew: Yeah, don't fall asleep. Don't take naps. Don't watch movies. 
David: Well, if you're on the interstate, you can do that. But just not, you know, anywhere where you're going to have to deal with a lot of signs. 
Matthew: Yeah. My only advice is keep a close eye on stop signs. If they're lit up, take over. Unless you think you're, you know, not important enough to be targeted, or you're not in a spy movie. 
 David: Well, everybody is important, Matt. 
Matthew: Well, that looks like that's all the articles we have for today. Thank you for listening. Follow us at SerengetiSec on Twitter, and subscribe and review on your favorite podcast app. [00:52:00]
