SS-NEWS-130: Skills Shortage and Ransomware reports to SEC

Episode 127 | November 20, 2023 | 00:53:05
Security Serengeti

Show Notes

This week we talk about a ransomware gang reporting a victim to the SEC, why the cybersecurity skills shortage is not what it seems to be, and the disconnect between threat intelligence and detection engineering.

Late breaking news article about Microsoft Defender for Endpoint adding Deception

Article 1 - Ransomware gang files SEC complaint over victim’s undisclosed breach

Article 2 - A Simple SOAR Adoption Maturity Model

Article 3 - Cybersecurity talent shortage: not the lack of people, but the lack of the right people

Article 4 - Frameworks for DE-Friendly CTI (Part 5)

If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!


Episode Transcript

 DAVID: [00:00:00] Welcome to the Security Serengeti. We're your hosts, David and Matthew. Stop what you're doing and subscribe to our podcast, leave us an awesome five-star review, and follow us at SerengetiSec on Twitter. 
 MATTHEW: We're here to talk about cybersecurity and technology news headlines, and hopefully provide some insight analysis and practical applications that you can take into the office to help you protect your organization. 
 DAVID: And as usual, the views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers. 
 MATTHEW: Did you hear that the Alpha V slash Black Cat ransomware gang just announced its latest affiliate? 
 DAVID: And I didn't know they announced those. 
 MATTHEW: Yeah, they put out press releases and everything. Also, I was just thinking, wouldn't it be really annoying if sometimes we talked really slowly and then other times talked really fast? For the people that listen to podcasts at two-times speed. 
 DAVID: Well, you know, it's funny. I listen to everything at double speed, and I have actually listened to a podcast where I'm not [00:01:00] sure it's at double speed because it is so slow. Like, man, if this is double speed, I can't imagine listening to this in regular time. 
 MATTHEW: You're not wrong. Every time I switched to a new podcast player and it was on normal speed, they were so slow I couldn't believe it. You almost have to. But then you get somewhere that people get excited and they talk faster, and then you're listening at fast and you're like, oh my God, I have to slow this down. This is too much. 
 DAVID: The only time I generally have to do that is with accents. You know, if there's a thick Australian or New Zealand accent, British to some degree. But generally everything is fine except for that. 
 MATTHEW: Before we start, I do want to add one quick thing in here. We put together the articles we wanted to talk about back on Wednesday, and we missed one that was announced between then and when we recorded this. If you're using Microsoft Defender for Endpoint, apparently you [00:02:00] can now do deception as part of your EDR. 
 You can add deception: fake accounts, fake hosts, and lures. Lures are documents, batch files, and more. So we've talked about deception a little bit in the past, but the main component that makes deception hard is doing it at scale, pushing it out across the environment, putting those honey, uh, honey badger, putting those, putting 
 DAVID: you only keep your honey badgers in the data center. In the man trap, 
 MATTHEW: putting those, putting those... and it's not honey pot. What's the word for the accounts? Honey something. 
 DAVID: honey, well, you have honey tokens. 
 MATTHEW: Honey tokens, that's what I was thinking of. Yeah. Like putting out the accounts and stuff and then triggering alerts on them. And that's now all built into Microsoft Defender for Endpoint. 
 So random, cool. Now we can proceed to our actual planned content. 
 DAVID: All right. And the first article is Ransomware gang files [00:03:00] SEC complaint over victim's undisclosed breach, and this comes to us from Bleeping Computer. So the Black Cat ransomware gang has filed a US Securities and Exchange Commission complaint against one of their victims for not complying with the four-day rule to disclose a cyber attack. 
 MATTHEW: This is so good. 
 DAVID: It is. This is something you can't make up. So Black Cat listed on their website that the software company MeridianLink had their data stolen, and they were going to leak it unless they paid within 24 hours. Now, according to databreaches.net, the gang said they breached MeridianLink on the 7th of November and stole the company's data, but did not encrypt any systems. 
 MATTHEW: Good for them. 
 DAVID: It's a kinder, gentler black cat. 
 MATTHEW: That's yeah. 
 DAVID: So when MeridianLink found out that this had happened, they contacted Black Cat, but [00:04:00] they hadn't paid yet. So Black Cat thought they would pressure MeridianLink by filing this complaint with the SEC about the fact that MeridianLink failed to disclose the incident within the four-day required window. And on Black Cat's website, they published screenshots of the form they filled out on the tips, complaints, and referrals page. 
 But apparently they weren't well versed in the actual requirement, because while this new rule had been created by the SEC, it does not take effect until the 15th of December. So they're a month early before this would even be relevant for affecting MeridianLink. 
 MATTHEW: I got too excited. 
 DAVID: Yep. They didn't read the fine print. 
 MATTHEW: Most people don't, myself included. 
 DAVID: Well, I mean, that's going to come around and bite you in the ass when you're talking about SEC and other regulators. You don't read the fine print, you could get in real big trouble. Well, I guess ransomware [00:05:00] gangs aren't, aren't really concerned with that too much. 
 MATTHEW: They should be. 
 DAVID: But Bleeping Computer contacted MeridianLink, and MeridianLink said, quote, based on our investigation to date, we have identified no evidence of unauthorized access to our production platforms, and the incident has caused minimal business interruption. So MeridianLink, you know, blew off the whole thing, saying, bah, this didn't even happen. 
 Anyway, 
 MATTHEW: Which is what they would always say, regardless of whether it was the truth or not. Companies lie. They have no morals. 
 DAVID: No, come on now. This fellow Brett Callow, who's a threat analyst at security firm Emsisoft, says that the Maze ransomware gang claimed they were going to do the same thing in the past, but there's no evidence that they followed through on it. And he also thinks this would have the opposite of the intended effect, because executives would not want to pay a blackmailer who could always [00:06:00] hold it over their heads in the future. At any point, they could rat them out to the SEC. 
 MATTHEW: That actually sounds kind of like what happened to the CISO at Uber, right? Like, the incident they covered up happened years before it was found out? 
 DAVID: Yeah, but if I recall correctly, it wasn't found out on its own. The two hackers that were involved in that ratted on them 
 MATTHEW: Oh, okay. 
 DAVID: because they tried to extort them again. 
 MATTHEW: Gotcha. All right. Yeah, I would agree with this analyst. I think that now that companies know the hackers may go directly to the SEC, it's only going to convince them more, like, we need to make sure we abide by this, now and soon. 
 DAVID: Right. Because the wrinkle there is that, sure, the [00:07:00] ransomware attack could be devastating and may put them out of business, but there's no certainty in that. But a government regulator will absolutely put them out of business. 
 MATTHEW: Yeah. Maybe we got the joke the other way around. Maybe Black Cat is an affiliate of the SEC, and maybe the SEC paid them. They're like, hey, why don't you go... 
 DAVID: Oh, that wouldn't surprise me. The SEC is a criminal organization. But the reason Matt and I find this so amusing is that we're getting to the point now where ransomware gangs have at least five different ways to start extorting people: pay the ransom to get the data back, pay to prevent the data from being leaked, pay not to tell your customers, have the customers pay, and now pay not to tell the SEC. 
 MATTHEW: So I'm trying to figure out what's coming next. We should be able to predict this, right? Like, is it pay not to put it in a time capsule? Is it pay not to give it out to a psychic organization? I'm serious. 
 DAVID: [00:08:00] block it from remote viewing. 
 MATTHEW: It's funny, actually. David and I tried to seriously come up with more; we threw a couple back and forth. But how many other ways can you get them to pay over this information? 
 DAVID: Well, if there is another way, they will find it. So we just have to wait for them to be innovative and come up with another method. 
 MATTHEW: What about, what about pay not to release fake information? 
 DAVID: Well, they could extort anybody with that. I don't think that 
 MATTHEW: Yeah, well, no, I'm thinking, because right now, when the information is revealed, there's the usual kind of sliminess that we expect out of companies. But what if they inserted something actually really unpleasant in there, like evidence of crimes that would get them investigated by the SEC or something like that? 
 DAVID: Oh, well, I actually just thought of two ideas; one of them is not mine. The first one, which is not mine, is that rather than take data, they could plant data, which is child [00:09:00] pornography, throughout the entire enterprise. 
 MATTHEW: Yeah, yeah. That's kind of what I was thinking of, like something illegal, but I didn't specifically tweak on that. 
 DAVID: The other idea I'm thinking is extort executives directly based on what they find in the data. 
 MATTHEW: Where you can find evidence that they did something specifically harmful to the organization, and be like, if you pay me a hundred thousand dollars, you don't lose your 300,000-a-year job. 
 DAVID: Well, I was thinking more along the lines of, kind of like in Mr. Robot, where all the executives had decided they were going to dump that toxic waste. You know, the ransomware gang finds internal meetings where the company deliberately decided to do something heinous. Not necessarily any particular executive, but if particular executives were implicated in that correspondence, they could do that. Kind of like when the Sony breach happened, there were all those emails that came out where Sony executives were bad-mouthing celebrities and [00:10:00] things like that. 
 Maybe they could extort executives over certain content of the data they extracted. 
 MATTHEW: Yeah. That makes sense. It makes a lot of sense. Actually, that's probably the next one. 
 DAVID: But as we said before, there are so many people giving advice on ransomware, we're not going to get into that. Just to say that, like any blackmail scheme, it's better not to ever pay and get it over with, because once you pay, it's a never-ending gobstopper for the blackmailer to keep coming back to. You pay once, and they're like, oh, well, now you pay me again, or I'm going to blackmail you some more. 
 MATTHEW: Especially like you just mentioned with the the executives. Now you can just get them every year, just hit them up for your protection money. 
 DAVID: yeah, your Christmas bonus. 
 MATTHEW: Yeah. 
 DAVID: But whatever you're going to do, make sure you have that plan up front. Cause I think it's regardless of whether you decide to pay, not pay, whatever I think [00:11:00] it's important for you to have that plan ready. Cause I think the faster you act, regardless of what your decision is, the better off you and your company are going to be in getting this, the ransomware situation resolved. 
 MATTHEW: So our second article today is A Simple SOAR Adoption Maturity Model from Anton Chuvakin; we're doing two articles from Anton. 
 He's hitting on all cylinders right now. He's developed maturity models for SOC, SIEM, and vulnerability management in the past, and now he has one for SOAR. It's amazing. The maturity is measured on the following dimensions. One is typical customizations, which ranges from maturity level one, out-of-the-box playbooks, to fully mature, complex custom playbooks. The second row is typical actions and responses; it starts at notifications and ends at acting automatically at scale, which is a phrase to strike fear into every C-level's heart. The [00:12:00] third row is typical playbook types; it starts off with enrichment playbooks, and fully mature is automatically mitigating and resolving incidents. Typical metrics starts with no formal metrics and ends with automated metrics at scale. The next line is automation coverage, which ranges from 0 percent to 90-plus percent, because you'll probably never get to a hundred percent; you can't automate everything. Frankly, 90 percent, I think, is pie in the sky. And the bottom row was typical related security processes. This was basically around the maturity of the development and integration processes: for example, using out-of-the-box integrations at the most immature level, and when you're fully mature, writing custom integrations for your own internal tools. 
 So there's actually not that much to discuss here. I saw it and I thought it was cool, but when I read it over, I was like, oh, this is actually pretty straightforward, and I generally agree with most of it. I debated swapping it out for another article, and honestly, if I'd seen [00:13:00] the Microsoft Defender for Endpoint one with enough time, I would have ripped and replaced this. 
 DAVID: Yeah. 
 DAVID: The one thing I would say about it is that it is pretty straightforward, but it's also very, I don't know, squishy, if you will. 
 MATTHEW: Yeah. 
 DAVID: There's no clear, hey, we've reached level three. It's all going to be, we think we're at level three based on this and this. There's nothing where you're going to say, yeah, we're certain we've reached this maturity level, to be able to rate yourself anyway. 
 And I guess since you're not really talking about regulation or anything, maybe that's not such a big deal. But like I said, it'll be difficult to get a good handle on where you're at, I think. 
 MATTHEW: Yeah, and I think that's true of a lot of maturity models in general. But looking over it, having been involved in an automation journey for the last five years, I like it. I think it's generally pretty good. So I do have a couple of things I wanted to discuss specifically, and that is the [00:14:00] target maturity. 
 A lot of times, I recall seeing that, for example, they recommend SOCs not get to capability maturity model level five, because that was too rigid in process; they recommended SOCs stick somewhere between three and four, because that allowed them to be somewhat flexible. So, I don't know, I looked at a few of these. For example, the one where typical actions and responses at maturity level five was acting automatically at scale. 
 That is probably a bridge too far for non-tech companies. 
 DAVID: Hmm. 
 MATTHEW: Yeah. I don't think your average bank or your average manufacturing company is going to be comfortable with that. 
 DAVID: Well, I think that's a really good point. Simply because there are five levels, it doesn't mean you have to reach level five, right? So depending on the size of your organization, the complexity of it, [00:15:00] et cetera, you may say, well, actually, we're comfortable at level three, and we think that's as high as we really want to get, because the cost of moving to level four is too high, whether it be financial, time, whatever. You say it's just not worth it to get to level four. 
 MATTHEW: Yeah. 
 DAVID: And I think that's one of the challenges with a lot of maturity models is people don't see it that way 
 MATTHEW: Yeah. They're like, ah, we got to get to five, five, five, five. 
 DAVID: Yeah, and a lot of senior leadership also don't see it that way, because five is better than four, right? So why aren't we trying to get to five? They have a hard time wrapping their minds around the idea that the trade-off isn't there to be that mature. 
 MATTHEW: That's fair. There was another one on here that was a little weird. His typical playbook type level three was automated triage. I actually think that automating triage is harder than automating response, [00:16:00] automating enrichment, and automating a number of other things. And I wasn't a hundred percent sure what he meant by automating triage. 
 DAVID: Well, I think automating triage is absolutely possible. I think you have to consider what that really means for you. And what I would say is that automating triage means that certain bad is automatically remediated, certain good is automatically excluded, and then the middle is moved on to the next step. 
 MATTHEW: All right. 
 DAVID: And I think that's where you can automate triage. Not to say that everything within the triage step is automated and done, but you drop it into three buckets, and the middle bucket is what actually moves to the next step. Those other two ends are what's automated. 
 MATTHEW: All right. 
 DAVID: That's what I would consider it, anyway. 
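David's three-bucket idea can be sketched in a few lines. The risk-score field and the thresholds here are hypothetical placeholders; in practice, the "certain bad" and "certain good" criteria would come from your own detections and allowlists.

```python
# Three-bucket automated triage: certain bad is auto-remediated, certain
# good is auto-closed, and only the ambiguous middle goes to an analyst.
def triage(alert: dict, bad: float = 0.9, benign: float = 0.1) -> str:
    """Bucket an alert by a 0-1 risk score (fields/thresholds are illustrative)."""
    score = alert["risk_score"]
    if score >= bad:
        return "auto_remediate"   # certain bad: handled without a human
    if score <= benign:
        return "auto_close"       # certain good: excluded automatically
    return "analyst_queue"        # the middle bucket moves to the next step
```

The point is that "automated triage" doesn't mean every alert is fully handled by a machine; it means the two confident ends are handled automatically and humans only see the middle bucket.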
 MATTHEW: That's fair. I did notice he has automated triage and automated enrichment on here, but he does not have automated containment or automated mitigation or remediation [00:17:00] at any of these levels. 
 It is interesting, because I would have put automated containment at like a level two or a level three, where you can use something like CrowdStrike to put something in network containment, you can pull emails, you can lock accounts. All of those are fairly simple out-of-the-box actions. 
 DAVID: And when you say that, I think that's another thing where you would want to be a little bit flexible as well, saying you do automated containment on some percent of incidents or something like that. Because you're going to have criteria, or I would think you should have criteria, where you say, we are not going to do automated containment for these reasons. You know, if this is a production-level system that affects blah, blah, blah, under absolutely no circumstances do we do automated containment on that. Or something like that, where at least you have your criteria defined to say what we would do automated containment on, and [00:18:00] say that that is what we mean by automated containment in this space, for 80 percent or whatever. 
 MATTHEW: Makes sense. All right. The only other thing on here that I saw that was kind of interesting is that level one typical customizations is using the out-of-the-box playbooks. And I actually wonder how many companies use the out-of-the-box playbooks. Typically speaking, the out-of-the-box playbook is not going to match your process. 
 Although I guess if you're really immature, you probably don't have a process, so just having some kind of process is better than nothing. I don't know. What I'm curious about here is how many companies use the out-of-the-box playbooks versus build their own very simple custom playbook that just does enrichment. 
 DAVID: Well, when they say out-of-the-box playbook, I assume what they mean is it's the out-of-the-box playbook and then you just do a couple of configurations and that's it. You don't really change any of the flow. That's what they mean by out of the box, not necessarily that you just plug it in and turn it [00:19:00] on, right? 
 MATTHEW: Well, I mean, you can. I only really have a lot of experience with one of these automation platforms, and you can make a copy of it and adjust it if you want. I don't know. 
 DAVID: Yeah, because what I would think is that any out-of-the-box playbook, if you turned it on, would wreak havoc. 
 MATTHEW: Potentially. 
 DAVID: So, I mean, I don't think any of them would work out of the box. I don't think it's really a thing. So I don't know. 
 MATTHEW: Yeah, I'm just saying there's dozens and 
 DAVID: You know more about this than I do. So, 
 MATTHEW: There's dozens and dozens of these out-of-the-box playbooks. At least, again, in the tool that I use, they come with every integration you turn on and everything you install. Like, there's 25 new playbooks. 
 Well, that's not true. There's usually like three or four new playbooks. 
 DAVID: I can see using an out-of-the-box playbook for the steps, not actually the actions that are taking place, because the organizations that are on this level one journey may not [00:20:00] have good playbooks to begin with for doing regular incident response. So maybe just following the out-of-the-box playbook, which is basically instructing them on how to do containment, you know, walking them through: okay, first thing you want to do is isolate the machine. 
 Okay. Just the steps in the playbook are what they use out of the box, not the automated actions themselves, if you understand where I'm trying to go with that. 
 MATTHEW: Yeah. In an ideal world, all these playbooks would be built out of little sub-playbooks that are basically Lego blocks, so you can stack them together. But I don't know how often that's followed. All right, I'm done. 
 DAVID: Stick a fork in you. 
 MATTHEW: Please don't. That hurts. 
 DAVID: Such a party pooper. 
 MATTHEW: Yes. 
 DAVID: All right, moving on to article three: Cybersecurity talent shortage, not the lack of people, but the lack of the right people. Which comes up, obviously, because we can't fill these roles. This comes [00:21:00] to us from Venture in Security, and I'm wondering if that's a play on the Venture Brothers. 
 MATTHEW: This is a blog that I've recently started reading. I think it's actually by a venture capitalist. Yeah, sorry. I know how much you love the Venture Brothers. 
 DAVID: I do, it's awesome. But the author of the article says, you know, there's a shortage of cybersecurity professionals. But why is that, when there are boot camps, colleges, universities, PhD programs, ISC2 has even launched a Certified in Cybersecurity entry-level certification, as well as the US government's National Cyber Workforce and Education Strategy? And he says the real shortage is not in quantity, but in quality. And he begins the article with some statistics [00:22:00] here. So apparently there are over 660,000 open roles in cybersecurity in the United States, and that's on top of the 1.1 million people already employed in cybersecurity. 
 MATTHEW: That explodes my mind. Because that means, hold on, that means a third of the roles are open. That means for every person employed in cybersecurity, there's half a role open. That's 
 DAVID: Yeah. 
 MATTHEW: wild. 
 DAVID: Because he says that would make the total demand around 1.8 million. 
 MATTHEW: Yeah. 
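The back-of-the-envelope math behind those figures works out like this. The 660,000 and 1.1 million come from the article; the rest is just arithmetic.

```python
# Quick sanity check of the workforce numbers quoted above.
open_roles = 660_000            # open US cybersecurity roles
employed = 1_100_000            # people already employed in cybersecurity
total_demand = open_roles + employed      # 1,760,000: "around 1.8 million"

vacancy_rate = open_roles / total_demand  # 0.375: over a third of all demand is unfilled
open_per_worker = open_roles / employed   # 0.6 open roles per employed person
```

So strictly it's 0.6 open roles per employed person rather than half, but Matthew's "a third of the roles are open" and "half a role open per person" are both in the right ballpark.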
 DAVID: But he gives some other interesting statistics in here, to compare that 1.8 million with other professions. He says there are only a million doctors in America; there are 1.3 million lawyers, which is about 1.2 million too many; and there are 1.4 million accountants. 
 MATTHEW: We're almost [00:23:00] outnumbering the accountants. Yeah. 
 The funny thing too, actually, is that our salaries are not quite as high as those, but they're kind of close. 
 DAVID: As those three domains, yeah. 
 MATTHEW: Yeah. Sure, specialized doctors like surgeons make a lot, but general practitioner doctors only make around 200,000, and you can make that in cybersecurity. Now, admittedly, if you're starting in cybersecurity, you're making under 100,000, and if you're starting off as a doctor, you're making 200,000. 
 So there's definitely a difference there, but it's not a difference of an order of magnitude. Cybersecurity is making almost as much as these three professions, and doctors, lawyers, and accountants are all well known as the road to the upper middle class. So I actually think it's really interesting, and really good, that cybersecurity folks can make almost as much as these guys, and more than some of them. [00:24:00] We're in the ballpark. 
 DAVID: But getting into cybersecurity is notoriously hard, with only a few companies willing to take on someone who doesn't already have three to five years of experience. And I'm not sure that's exactly accurate, but some of this is the fault of HR departments, who are enforcing requirements and screening candidates before security departments get to see them. I mean, it's getting better, but still, some of this is just a fact of the existence of HR, because with the pay that they're expected to provide security professionals, they're expecting certain requirements for education, certification, et cetera, before they'll agree to those kinds of salary bands. 
 MATTHEW: And I think that's exactly it. I think that from HR's perspective, they're looking at these salaries, which, like I just said, we're not making what a doctor makes, but we're making real close [00:25:00] sometimes. Some of us are making director-level and vice-president-level money, and they're looking at it like, to be a director you need 15 years of experience, but a malware analyst with five years of experience gets the same as this director with 15 or 20 years of experience. 
 I think that's the major heartache HR has. 
 DAVID: Yeah, I think another issue with this is that there's a pretty large problem with an actual lack of interest in the field. You have people that get into cybersecurity because they hear it's an up-and-coming field; they'll get a degree or certificate or whatever in it, but they have no real interest in the work itself. 
 MATTHEW: Yeah. 
 DAVID: And they think once they have that piece of paper, they're entitled to something. That paper, that certificate, that degree entitles them to some level of pay, [00:26:00] certain positions, et cetera, when they really aren't interested and haven't worked in the field either. 
 MATTHEW: And this is something I remember from when I went to college in '98. The big thing then was the Microsoft Certified Systems Engineer; I think that's what MCSE stood for. And everybody in college was like, oh man, we've got to get into IT. If you come out of here and you get your MCSE, that's a hundred thousand bucks a year, straight out of college. 
 And this was, of course, 25 years ago, so a hundred thousand bucks was worth more then than it is now. And if you're listening deep in the future, I know you're like, a hundred thousand dollars? I just bought a loaf of bread for a hundred thousand dollars. But 
 DAVID: All right. 
 MATTHEW: So this is not new. And this happened to lawyers too, actually. The media says lawyers make a whole bunch of money, but there are a lot of lawyers that only make like 50,000 a year. 
 DAVID: Yeah, I mean, after, after dealing with a few MCSE certified folks, I realized that that certificate was [00:27:00] completely worthless. 
 MATTHEW: but it got them a hundred thousand dollars a year. 
 DAVID: Well, I certainly hope they were not getting paid that. But the author goes on to say, I think that with few exceptions, getting a job in cybersecurity requires people to have an understanding of the thing that they're going to secure. One cannot secure what they don't understand. 
 MATTHEW: What are you talking about? Of course we can. 
 DAVID: So someone interested in doing product security should know how the product's built, someone who wants to specialize in cloud security should understand cloud infrastructure, and someone who wants to build a career securing IT infrastructure should understand how IT is provisioned, et cetera. 
 MATTHEW: You are so needy. So, I generally agree with you. I think the problem here is that colleges don't believe in this. Although, this is actually the same as what you were saying about people trying to cash in on this: this is the colleges trying to cash in as well, providing these degrees to supposedly justify their [00:28:00] 200,000 fees for a four-year program. 
 And then they crank out folks who don't have any of that background knowledge. 
 DAVID: Yeah, well, you know, it just came to me while you were going through this. You know what I think might help remedy that? If it worked more like a law degree or a medical degree, where you do undergrad in some IT discipline, right? You do programming, you do infrastructure building or whatever; that's your undergraduate. Then your master's program is where you do the security part. 
 MATTHEW: Or your 
 DAVID: As well. Because then you learn the IT part, and then you learn the security part based on what you understood from the IT portion of your undergraduate. 
 MATTHEW: I mostly like that, but I think for most people, going to six years of college is asking a lot. I would rather see the major be IT or computer science, and then the [00:29:00] minor be security, or vice versa. I think I'd like that a lot, because then you at least have a fairly significant background in whatever you're supposed to be securing. 
 DAVID: Yeah, well, I kind of think what would be even better, though, is if it worked along the same lines as special operations in the military. In general, you can't sign up to go into special forces as an MOS when you join the military, right? You have to go in, have a certain job, do that job for a while, and then apply and go through the courses to get into special operations. 
 So I think it would be better for organizations in general if they hired their security staff almost exclusively from their IT department and then backfilled the IT department. You're pulling qualified, smart IT people into the security realm; they already know IT, then you teach them security. 
 MATTHEW: That makes sense.[00:30:00] 
 DAVID: But further on in the article, he goes on to say that there's an oversupply of talent on the entry level side and a shortage of talent on the senior side. And when I first read that, my first thought was, well, then we just need to wait, right? Cause eventually the entry level people will become senior people. 
 MATTHEW: Yeah. Just wait a little longer. 
 DAVID: But I think the way that he categorizes it, though, is not exactly accurate, because it's not so much that we have a shortage of people on the senior side; we have a shortage of people in particular roles on the senior side, you know, application security folks, infrastructure security, malware reverse engineering, those particular areas. And the entry level people aren't all necessarily going in at the ground level in those specific security fields to mature up in those disciplines within cybersecurity. 
 MATTHEW: Yeah, and it goes back to what you were saying a minute ago, [00:31:00] where if they don't have the background in what they're trying to secure, even if they become seniors simply through time, are they actually getting any better? 
 DAVID: Mm. Mm hmm. Right. 
 MATTHEW: Or are they just getting more senior? 
 DAVID: Right. Yeah. 
 MATTHEW: And a generalist like that usually works their way into compliance or they work their way into management. 
 And it wasn't this article. Didn't we talk about another article the other day? Or maybe I just read it and we didn't talk about it, but I saw some article a couple of weeks ago that was talking about how there's actually a surplus of managers and compliance folks, and there's a shortage of reverse engineers, incident responders, 
 scripters, software engineers, application security folks, cloud security folks, 
 DAVID: Mm. 
 MATTHEW: the practitioners. There's a shortage of the practitioners. There's a ton of generalists; we have all the generalists we need. 
 DAVID: Oh, it makes me think of what you were talking about with the education system where there's a three to one ratio of administrators to educators. 
 MATTHEW: It's 
 DAVID: Is it like we have a three to one [00:32:00] ratio now of governance and auditors, or whatever, to practitioners? 
 MATTHEW: Man. I saw somebody on Reddit today talking about CEOs and about middle management as a, what did they call it? They called it like a charity program for mediocre people. That's not the exact words they used. 
 DAVID: Middle management is? 
 MATTHEW: Yeah. Cause they don't really do much, and all they have to do is make decisions. And frequently those decisions are bad, and they have to justify those decisions. 
 And they usually do a bad job of that. And all the layers of management are on the backs of the people doing the actual work. 
 DAVID: It's frustrating being a free market person to think that, to imagine that companies don't figure this out. 
 MATTHEW: Some do. Your favorite video game company, Valve. Aren't they nearly flat? 
 DAVID: Well, they're practically anarchical, based on the interview. There's a really good interview with their chief economist (it's interesting that a [00:33:00] non-financial institution has a chief economist), the chief economist of Valve, on EconTalk about 10 years ago. Really interesting conversation there about how Valve is structured from the inside. 
 And he gives an example in there where, the desks at Valve, and this may be different because everyone's so much more remote now, but at the time, 10 years ago, obviously everybody was in the office, but at Valve, all the desks were on wheels. And if you wanted to work on a project, you would actually unplug your desk, 
 wheel it over near someone else, plug your desk in near them, and work on the project with them. And that's the way projects were decided on. What got worked on is that more people would start clustering around a certain project, and then that project would reach critical mass, and that's the one that would be taken to market. 
 Apparently, that's how Portal was developed. It was just an idea by one guy, and then people were like, oh, that's really cool. And they kept expanding that group until eventually it became, you know, one of the [00:34:00] greatest games of all time. 
 MATTHEW: Yeah. Yep. I've heard that story. 
 I, yeah, I don't know. I don't have anything else on that. 
 DAVID: But he suggests, and I don't disagree with him in the article, that security needs to take an engineering approach. You know, have manual processes automated by security engineers. Have detection engineers tailor detection logic to the organization, automation engineers create playbooks and automate manual processes, and security engineers assemble and build the things that are needed within the security stack, tailored to the organization. 
 MATTHEW: Yeah. We've talked a little bit about this. And weirdly enough, this is kind of related to the SOAR maturity matrix that we just talked about, about automating more things. In my kind of ideal SOC model, there should be twice as many 
 engineers as there are [00:35:00] analysts, because most SOCs I've worked at have a ton of analysts, cause they have a ton of alerts coming in. But if you have more engineers than analysts, you can focus both on detection engineers to make higher quality detections, and also automation engineers to allow those fewer analysts to do more through automation. 
 We've talked about this before. I think you're in agreement. Yeah. 
 DAVID: Yeah. I wouldn't disagree with that. 
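[Editor's note: a minimal, hypothetical sketch of the kind of playbook an automation engineer in that model might build. The function and field names here are illustrative, not from any real SOAR product: the idea is just that automation enriches an alert with asset and intel context before an analyst ever sees it.]

```python
def enrich_alert(alert, asset_db, intel_iocs):
    """Attach asset criticality and IOC matches to a raw alert dict."""
    enriched = dict(alert)  # don't mutate the original alert
    enriched["asset_criticality"] = asset_db.get(alert.get("host"), "unknown")
    enriched["ioc_hits"] = sorted(
        ioc for ioc in intel_iocs if ioc in alert.get("indicators", [])
    )
    # Simple triage suggestion: auto-close if nothing matched on a low-value asset
    enriched["suggested_action"] = (
        "auto-close"
        if not enriched["ioc_hits"] and enriched["asset_criticality"] == "low"
        else "analyst-review"
    )
    return enriched

alert = {"host": "wkstn-042", "indicators": ["198.51.100.7"]}
assets = {"wkstn-042": "low"}          # hypothetical asset inventory
iocs = {"203.0.113.9"}                 # hypothetical threat intel feed
print(enrich_alert(alert, assets, iocs)["suggested_action"])  # auto-close
```

Even a toy playbook like this shows the leverage: every field it fills in is one lookup an analyst no longer does by hand.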
 MATTHEW: So now the problem, of course, is that all these security engineers are the ones that are in demand, because those are the doers that we're short on. 
 DAVID: Well, and like you were saying before, that's not what the colleges teach security people either. They don't teach them the engineering part. So he thinks they should bring software engineers into cybersecurity. But this is something he quotes in there, which I'm not sure is accurate or where he got these numbers from. 
 But according to him, a software engineer [00:36:00] out of the gate will make 20 to 40 percent more than a security engineer out of the gate. And if that's the case, how do you convince them, rather than go into software engineering, to go into security? 
 MATTHEW: Yeah. And we've talked about this before too, especially those FAANG folks. You go look at levels.fyi or something, and they just get paid a lot of money. 
 DAVID: Hmm. 
 MATTHEW: And I've actually talked to more than one person who thinks that security engineers should be paid about the same as software engineers, because they have roughly similar skill sets. 
 And generally one could be the other with some training and some time. But I haven't been anywhere that actually believes in that, at least not anywhere that pays the same. 
 DAVID: Well, I think you'd get a huge disparity in that though. You're going to have some security engineers that absolutely are on par. But I think most just aren't. 
 MATTHEW: No, but you hear that about software engineers too. There's a lot of these technology companies that have gotten really bloated with software engineers who [00:37:00] do almost nothing all day. 
 DAVID: Hmm. 
 MATTHEW: I mean, I'm sure you've seen some of those, like my day in the life of working at Google, where they wander around the campus all day and write a couple lines of code. 
 Now I'll admit, if those couple lines of code were super impactful, it could actually be truly worthwhile to pay them to, you know, wander around the campus all day. But I don't know. 
 DAVID: And how do you measure that? 
 MATTHEW: Yeah, yeah. You can't measure by lines of code, but they don't look like they're thinking about problems. They're not talking about, like, I'm sitting here pondering how to fix this as efficiently as possible. And I've just heard lots of anecdotal stories about bad coders. So I dunno, we're getting distracted. I'm getting distracted. 
 DAVID: So what he suggests is that, you know, we cannot solve this problem by hiring entry level people, and we can't solve it by getting engineers to pivot from their careers into security. And I don't really agree with that, cause I really think that is where you get your best people from, the people who pivot into [00:38:00] security from other IT teams. This is where I've actually seen the best people come from also, and schools are pumping out IT people faster than they're pumping out security people. 
 So if we steal from them, I think that's a better way to go. And once they've been in IT, they've been in the trenches, they understand how IT works. Then you just bring them over and teach them security. I think that's easier. 
 MATTHEW: I would agree. 
 DAVID: But he thinks what we need to do is upskill those who are already working inside cybersecurity. And I have a hard time believing that's scalable, cause that is ridiculously time and dollar expensive to do. 
 MATTHEW: I know it is, but I feel like we have to. 
 DAVID: Well, and maybe we do. But then that's where you come in to say the business needs to make a trade-off: we're gonna buy fewer tools, or whatever, and we're gonna spend more money on training and education. The reason you're gonna have a hard time convincing [00:39:00] businesses of that is, if you buy a tool, you own the tool; if you upskill an engineer and they quit, then all that money is lost to the company. 
 That's the business mindset on that. 
 MATTHEW: Yeah. I think that's part of how we got here. So part of the problem is we've got all these entry level folks coming in who don't know what they're protecting, and they're not really getting any better because the companies aren't investing in them. And then they leave and go somewhere else, where they don't get any better and nobody invests in them. 
 And then they leave and go somewhere else, where they don't get any better and nobody invests in them. We're almost creating the problem for ourselves. 
 DAVID: Well, I think maybe a possible solution to this is more along the lines of actual contracts, multi-year contracts. So you want to hire an entry level person: okay, you're going to come on board, you have a three year contract minimum. So they come out, you sign them up, they come on at your company, and you're like, okay, we need this [00:40:00] skillset. 
 So you're going to go over and work in IT for a year, doing nothing but that skillset, whether it be programming or infrastructure, whatever. You're going to do that for a year, and then you're going to come over to the security team and spend a year learning security, and then you're going to be productive for a year before your contract's out. 
 MATTHEW: That actually might not be too bad. I mean, I think some companies have similar stuff right now, where they'll pay for the training, but if you leave within a year, you pay it back. I don't know how much they pursue that. 
 DAVID: And I think you can get some of this through intern programs as well. But there's something else I want to mention here that he brought up in the article that I thought was a really good idea, though I'm not sure you're going to get folks to invest in this. Quote: security vendors need to help solve the security talent shortage by making their tools accessible to practitioners who [00:41:00] aren't working in large enterprises. 
 The need to qualify to meet the minimum spend and other restrictions make it impossible for many in the industry to try different products end quote. 
 MATTHEW: I don't see how this relates. 
 DAVID: But what he's saying is, you, right now, sitting at home, don't have access to an enterprise MDE deployment to understand how MDE works. He's suggesting that security vendors like Microsoft allow people who aren't part of an organization to have enterprise level access to security tools, to understand how they work at an enterprise level. 
 MATTHEW: All right. I can kind of understand that. I can kind of understand that. 
 DAVID: Because sure, maybe you can get a copy of some tool and install it on your local machine, and that may or may not give you some level of understanding of that tool, [00:42:00] but it's certainly not going to give you an understanding of that tool at enterprise scale. 
 MATTHEW: Yeah 
 DAVID: But that's a fair amount of expense for a software vendor, to provide that infrastructure for people to get access to. 
 MATTHEW: that makes sense. 
 DAVID: And he talks a lot about software engineers and equates this to software engineering. I actually think it would be more beneficial to bring on infrastructure and site reliability engineers. I mean, SRE folks know scripting; they understand CI/CD pipelines and API integrations. I think that would be hugely beneficial to security. And infrastructure folks understand the plumbing of the IT organization: how AD works, you know, how the network works, how to break it and how to get around in it, as well as 
 how to get around the controls that are present in the infrastructure. So I would actually say those two groups would be more beneficial to security than software engineers. 
 MATTHEW: I think that makes [00:43:00] some sense. 
 DAVID: So we're going to do this last article. We've been on for almost what? Close to an hour. 
 MATTHEW: Like 40 some minutes. All right, for our last article: this is one that we wanted to talk about last week. I really should have dropped the SOAR one and just talked about this, but it's fine. So this is an article from Anton Chuvakin, part of an ongoing series on detection engineering. 
 This is part five; we're not going to cover all five. But I've been thinking a lot about threat intel lately, and I had not been able to really put into words why I didn't like threat intel. You've probably heard me talking about it: I don't believe in threat intel, threat intel is not worth it, blah, blah, blah. 
 But he actually put into words exactly how I felt about it, and I'm grateful to him for that. We'd originally planned on doing an entire episode around threat intel AI, but we talked to a few vendors, and it turns out their AIs are not quite ready for primetime yet.[00:44:00] 
 DAVID: Surprise. 
 MATTHEW: Yeah. All right. So according to the detection engineering maturity model, threat intel is used by detection engineering teams at the managed and optimized levels. At defined, he doesn't include CTI, but from my perspective, I'd say that, yes, they use threat intel at that level, but it's IOC feeds, 
 mostly. They're not really digesting it, they're just feeding it in. At managed, he says, known threats are used to prioritize content development in a vague manner. And this is probably, you know, your detection engineer saw an article just posted that says this new ransomware group is doing this new thing. And then at optimized, content development is driven by the threat intelligence teams, who have identified known and active threats that are targeting your organization. 
 And that sounds like a flipping utopia. Anyways. All right. So biggest issue with thread Intel for [00:45:00] content engineering for detection engineering. So detection engineering is asking the thread Intel team what to focus on. And the thread Intel team is giving them high level items such as attack techniques. 
 So we've talked about this in the past, but the majority of attackers use the same attack techniques. So this is frequently not very helpful and any given attack techniques. So one of the attack techniques is Command and script command line or scripting interpreter and PowerShell. Like you could probably write 150 rules for just the attack technique of PowerShell. 
 So while it's. You kind of useful to know that the attacker uses PowerShell. It's not very helpful for trying to actually write content based on that. So he has a list of bad threat Intel practices. Number one, delivering a list of attack techniques takes weeks to develop into content. Only the really commonly well known ones will be covered and they will typically [00:46:00] be very general rules. 
 Because it'll be like, oh, hidden window used in this PowerShell command. Turns out that happens all the flipping time, so this will lead to high false positive rates. Next bad practice: very generic intel, not specific enough. Detection engineering (this is kind of a new term for me, still working my way through it) 
 wants extremely specific information on what the attackers did. And that's actually one of the things that I'm trying to work my way through right now. There's a spectrum here. On one side of the spectrum, you have very few rules, but they're kind of broad, and they detect a bunch of behavior. 
 So on that side, you might have a rule that triggers when somebody runs an encoded PowerShell command. Encoded PowerShell commands are used by a lot of attackers. They're also used by a surprising number of benign vendors who are trying to hide the PowerShell code they're running. David, have you ever run [00:47:00] a query for encoded PowerShell commands in your environment? 
 DAVID: I have not, no. 
 MATTHEW: You should try it. I did, for sure, like four or five years ago. I saw this randomly and I was like, oh, why don't we have a rule for that? We should have a rule for that. And I did a little exploratory hunting, and I was shocked. There were hundreds and hundreds of encoded PowerShell commands, and I was like, oh my God, we are thoroughly, totally compromised. 
 And then I would decode them in CyberChef, and it was legitimate stuff. I have no idea why the vendors chose to encode it, but it was ridiculous. Now, on the other end of that, you can make a very specific rule, where it's an encoded command and it's a hidden window and the attacker tends to run scripts out of this specific directory. 
 But that'll only fire if that specific attacker is in your environment and they don't change their TTPs. So there's a spectrum there, and you're not supposed to get too far toward either end. Too much information is another bad practice. Big, long threat intel [00:48:00] reports that are 90 plus pages are not terribly helpful, because then somebody has got to sit down and read 90 plus pages and try to digest and determine what content can be created from them. 
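[Editor's note: the encoded-command hunt Matthew describes is easy to try yourself. PowerShell's `-EncodedCommand` argument is base64 over UTF-16LE text, so a few lines of Python stand in for the CyberChef step; the sample command here is just an illustration.]

```python
import base64

def decode_powershell(encoded: str) -> str:
    """Decode a PowerShell -EncodedCommand payload (base64 of UTF-16LE text)."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Encode a benign command the way a vendor script might ship it...
cmd = "Get-Service"
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")

# ...then decode it back, the way you would when triaging hunt results.
print(decode_powershell(encoded))  # Get-Service
```

Decoding the hits is the whole game: the broad "any encoded command" rule only becomes useful once you can quickly tell vendor noise from attacker tradecraft.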
 DAVID: Unless you run it through an AI that's going to spit that out for you. 
 MATTHEW: I tried that. I couldn't get it to give me the detail. The AI was very good at giving me high level bullet points, but I couldn't get it to go deeper. That's something that I've been thinking about doing with a GPT; you can create your own GPT now. I've been thinking about trying to get a GPT to digest threat intel reports into actual actionable content chunks. 
 I don't know. Next bad practice, a lack of variety. There's tons and tons of reports around windows and office 365. There's not as many around SAS specific clouds like Google cloud, et cetera uh, no value add a lot of their Intel teams just copy stuff off the feed and forward it to the rest of the team. 
 Not helpful and slow. We see this a lot by the time you get the report the attacker, this, this occurred back in. You know, six months ago, nine months [00:49:00] ago, and they've already moved on to a new methodology. So if the detection engineering team builds content based off the report, they're too late.. 
 So, what does detection engineering need from threat intel? He actually put together kind of a mental model of almost a ticketing system, or a knowledge base, of what they want. They want threat intel to populate this knowledge base with well described knowledge items that can each lead to a piece of content. 
 Each of these knowledge items would be tagged so you could search it, you know, maybe by attacker, maybe by methodology. The attack techniques would be a good tag; the attack lifecycle would be a good tag. Each one of them would be tracked as an issue in project management software, so that they can be stuck in a backlog of detections that are prioritized. 
 And each of those knowledge items would be focused on a single threat. They would be specific to a technology, a [00:50:00] protocol or an operating system. The impact would be evaluated. They would be uniquely described. They would all be relevant to the organization. So none of this, oh, here's a really incredible piece of content for macOS. 
 We don't use macOS. They'd be short and to the point, and they'd be delivered in a timely manner, because creating a detection out of one of these pieces of content might take two or three weeks. So if threat intel doesn't get them to you until a couple of months after they're used, and then it takes a month for detection engineering to roll them out, 
 they are no longer useful. And this is specifically why I don't like threat intel. Most threat intel seems to consist of reports that are prioritized for decision makers at the strategic level. Although honestly, I don't think they even do a good job of that, cause I don't think they give the right kind of information to actually make a strategic decision off of. 
 They are not prioritized or [00:51:00] configured for detection engineering teams to write content. So IOC feeds and long form reports: not good for detection engineering, and that's what I care about the most. So that's why, when I see threat intel, I get unhappy. 
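[Editor's note: the knowledge-base model described above is easy to sketch in code. The field names below are our illustration of the properties Chuvakin lists (single threat, specific technology, evaluated impact, relevance, tags), not a schema from the article.]

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    """One threat-intel item sized to become exactly one detection."""
    title: str
    threat: str             # the single threat this item covers
    technology: str         # specific technology, protocol, or OS
    attack_technique: str   # e.g. an ATT&CK technique ID, usable as a tag
    lifecycle_phase: str    # where it sits in the attack lifecycle
    impact: str             # evaluated impact: low / medium / high
    relevant: bool          # does it apply to our environment?
    tags: list = field(default_factory=list)

item = KnowledgeItem(
    title="Encoded PowerShell launched from an Office child process",
    threat="Initial access via maldoc macros",
    technology="Windows / PowerShell",
    attack_technique="T1059.001",
    lifecycle_phase="execution",
    impact="high",
    relevant=True,
    tags=["powershell", "office", "macro"],
)

# A backlog query a detection engineer might run: relevant, high-impact first.
backlog = [i for i in [item] if i.relevant and i.impact == "high"]
print(len(backlog))  # 1
```

The point of the structure is the filtering: a macOS-only item with `relevant=False` simply never enters this team's backlog, which is exactly the triage a 90-page report can't do for you.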
 DAVID: Well, I mean, that's the so-what, right? You know, what's the point of generating threat intel unless you're going to take some kind of action that's going to mitigate a threat or a risk? 
 MATTHEW: Yep. Yep. And those are the two audiences for threat intel, right? There's the strategic audience and then there's the tactical audience. It seems like everybody's really focused on the strategic side. And I've seen this a lot myself, where a VP really wants to have a threat intel program, and they set up the threat intel program to feed them, 
 for whatever reason. 
 DAVID: Well, that's all the articles we have for today. So thank you for joining us and follow us at Serengeti Sec on Twitter and subscribe on your favorite podcast app.[00:52:00]
