Episode Transcript
[00:00:00]
David: Welcome to the Security Serengeti. We're your hosts, David Swinger and Matthew Giener. Stop what you're doing and subscribe to our podcast, leave us an awesome five star review, and follow us at SerengetiSec on Twitter.
Matthew: We're here to talk about cybersecurity and technology news headlines and hopefully provide some insight, analysis, and practical applications that you can take to the office to help you protect your organization.
David: And as usual, the views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers.
Matthew: We were all surprised when we saw the recent Chinese intelligence leak included wall hacks for CSGO.
David: Actually, we weren't surprised, but we are happy that we have them now
Matthew: We're happy we have that evidence. People have been saying that Chinese hackers have been ruining these games for years and we finally have the proof in hand
David: and now we can use it.
Matthew: that we can use it. Yeah. Everybody's a wall hacker.
David: All right. First thing we have today is the Blue Report from Picus Security. And I have a soft spot in my heart for [00:01:00] Picus simply because of Eliza Kazan. So we reviewed the Picus Security Red Report last year, which talked about the top 10 MITRE ATT&CK techniques targeted by attackers, and it was a really good article. If you have not gone back and listened to that one, you should; Matt will put the link to it in the show notes. I mean, it had a ton of information in it. You would almost say too much. 'Cause what's the length of this year's Red Report?
Matthew: Was it like 240
David: Like 214 pages or something. So yeah, that report's pretty hefty, but last year's was really good, and I expect this year's is probably as good. But when Matt was looking for the Red Report, he found the Blue Report, so we're going through that one, which is considerably smaller at 24 pages.
Matthew: Yeah. So it was such a good report, I don't think I ever finished processing it. There's almost too much information in it. I think you could occupy a threat detection [00:02:00] engineering team for probably three months going through the Red Report and actually creating content on all of it, there's so much.
David: Yeah. But the expectation would be that that would be beneficial. So.
Matthew: Yeah, no, yeah. I mean, there's a lot of information there that you may already have covered yourself, so maybe it won't take all three months, but it's so huge. It is so much information.
David: It's a way of saying, okay, we've got this need covered, you know, so it makes sense. But what Picus Security does is continuous threat exposure management, which they equate to the program that's supported by breach and attack simulation tools. So CTEM would be to breach and attack simulation what vulnerability management is to, you know, Tenable's vulnerability assessment kind of thing.
Matthew: Maybe this is new. This is a new term to me, I don't know. And I feel like we've seen a lot of this. I know when we originally talked to, who's that vendor [00:03:00] we talked to, they were kind of pioneering breach and attack simulation. And now we're getting into this, and Pentera.
And there's all these other platforms that are moving into the automated pen testing space, and they're all kind of interrelated. And I don't think that it's really settled down into a single product yet.
David: And hopefully it won't, because then you have some diversity in what these tools do, and you can pick one that best suits whatever your goals and objectives are.
Matthew: Yeah. I don't want them to be carbon copies of each other, but it is annoying when I'm reading this and I'm like, I don't know what this does. So there's something to be said for having a consistent vocabulary, even if the tools themselves are not.
David: Right, because then it also makes it difficult for you to actually find competitors to what they do, if you can't group them in some way.
Matthew: you know, just
David: But this report is based on data from their customers who were running their tool from January to June of 2023, over the course of 14 [00:04:00] million simulations. So just a couple.
Matthew: Yeah, but they don't describe much in terms of what those simulations represent, right? Those are not 14 million unique simulations, it's 14 million simulation runs, right?
David: Right. And we don't know the actual number of customers either. Picus says they have over 300, right? So somewhere between 300 and 400 customers, to give you an idea of how many unique environments those 14 million simulations are run through. So they found the prevention effectiveness, as demonstrated by these simulations, and that's basically the number of simulations that were executed and then stopped by security tools in whichever environment the simulations were run in. The detection effectiveness is a combination of two [00:05:00] scores: the log score, whether a simulation was identified and sent to a log, and the alert score, whether the attack was identified and actually triggered an alert that was then viewed by an analyst.
Matthew: We assume it was viewed by an analyst.
David: Well, that, yeah, yeah. That's the, that would be the assumption for why you generate an alert, I guess.
Matthew: Yeah,
David: Yeah, and they gave a passing score, quote unquote, to anything higher than 70 percent in either of those two categories.
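A rough sketch of how scores like these could be computed from simulation results. The report's exact formula isn't spelled out here, so the field names and the simple ratio math below are illustrative assumptions, not the vendor's method.

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    prevented: bool   # a security control blocked the simulated attack
    logged: bool      # the activity produced a log in the SIEM
    alerted: bool     # the activity triggered an alert an analyst could see

def effectiveness(results: list[SimResult]) -> dict[str, float]:
    total = len(results)
    return {
        "prevention": sum(r.prevented for r in results) / total,
        "logging": sum(r.logged for r in results) / total,
        "alerting": sum(r.alerted for r in results) / total,
    }

def passing(score: float, threshold: float = 0.70) -> bool:
    # anything above 70 percent in a category counts as a "passing" score
    return score > threshold

results = [SimResult(True, True, False), SimResult(False, True, True),
           SimResult(False, False, False), SimResult(True, False, False)]
print({name: (round(score, 2), passing(score)) for name, score in effectiveness(results).items()})
```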
Matthew: I wonder why they picked 70 percent. I wonder what orgs would say. Like, if you asked your CISO and said, hey, we stopped 70 percent of the attackers' activities, is that passing? I wonder what they would say.
David: I don't know. I think they picked passing because that's generally the passing grade in school.
Matthew: I don't know. Yeah. I'm not disagreeing with you on how they picked it. I'm just wondering, you know, what, what management would say. And then
David: They would probably say anything less than 100 is a fail, [00:06:00] which is not realistic either.
Matthew: Yeah, that's kind of what I was getting at. Leadership and management are going to be like, no, you know, 99 percent is the bar for passing, and that's just not actually ever going to happen.
David: Right. It's kind of like I've had the arguments with people about what do you constitute as full coverage when you're deploying agents, right? Because you can't, you can never really get full 100 percent agent deployment. So you have to say, okay, we get 90 percent coverage, then we're going to say that we are fully deployed as far as our agents go or something like that.
Matthew: Same thing with like logging. You're never going to get a hundred percent of stuff logged.
David: Right.
Matthew: Nope. I'm there with you. Just, just imagine, just imagine what leadership would want to say. That's all.
David: Well, when you're a CISO, Matt, you can deal with that.
Matthew: Any day now, I'm, I'm waiting for the callback,
David: All right. So the overall numbers that they came up with based on these [00:07:00] simulations were that, on average, 59 percent of attacks were stopped, on average 37 percent were logged, and on average 16 percent were alerted on. Now, the numbers are kind of weird there, because that doesn't add up to a hundred percent.
Matthew: And frankly, I don't think it should. I had some questions around this, whether it should add up to 100 percent or to more than 100 percent. For example, if it's prevented, should it be logged? If that's true, and let's say 59 percent of them are prevented, then you would also see those 59 percent logged, and then you'd have 118 percent. If it's prevented, should it be alerted on?
Maybe, maybe not. I could see an instance where you would want 59 percent of them prevented and then the other 41 percent alerted on, and those two should add up to 100 percent, maybe. The other option is, like I just said, they should be either prevented or alerted, and [00:08:00] then logging should be 100 percent either way, because you want 100 percent of it logged whether it's prevented or not. But that would total 200 percent, because you'd have 100 percent logged, 59 percent prevented, and 41 percent alerted, and that adds up to 200.
I don't know. I don't know that percentages adding up to 100 percent is the exact right way to measure this.
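A quick back-of-the-envelope check of the accounting Matthew is walking through, using the report's averages. The three combinations below are his hypotheticals about how the numbers could relate, not anything the report itself claims.

```python
prevented, logged, alerted = 0.59, 0.37, 0.16

# If prevented and alerted were mutually exclusive and covered everything:
print(prevented + alerted)              # 0.75, short of the 1.00 you'd hope for

# If everything prevented were also logged, logging couldn't trail prevention:
print(logged >= prevented)              # False: 37% logged vs 59% prevented

# If everything were logged regardless of outcome, the ideal total would be 2.00:
print(1.00 + prevented + (1.00 - prevented))   # 2.00 ideal
print(logged + prevented + alerted)            # 1.12 observed
```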
David: Yeah, I came up with a different way that I think would have been a better way to measure this versus the percentages. You break it down into categories, right? And then you score the category. So the first one is undetected, that gets a zero. Then you get one point for detected, you get two points for detected and logged.
Then you get three points for detected, logged, and alerted. You get four points for detected and prevented. And then you get five points for detected, logged, and prevented.
Matthew: All right, and then you say the max score you can get is this.
David: Right.
Matthew: you measure it based on that. That makes a lot of sense.
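A minimal sketch of the point scheme David proposes, assuming one score per simulation and an overall score normalized against the maximum possible, as Matthew suggests. The category-to-points mapping follows what David lays out; the function names and normalization step are illustrative.

```python
def simulation_points(detected: bool, logged: bool, alerted: bool, prevented: bool) -> int:
    if not detected:
        return 0          # undetected
    if prevented and logged:
        return 5          # detected, logged, and prevented
    if prevented:
        return 4          # detected and prevented
    if logged and alerted:
        return 3          # detected, logged, and alerted
    if logged:
        return 2          # detected and logged
    return 1              # detected only

def overall_score(results) -> float:
    # results: list of (detected, logged, alerted, prevented) tuples
    earned = sum(simulation_points(*r) for r in results)
    return earned / (5 * len(results))

runs = [(True, True, False, True),     # logged and prevented -> 5
        (True, False, False, False),   # detected only        -> 1
        (False, False, False, False)]  # undetected           -> 0
print(f"{overall_score(runs):.0%} of the maximum possible score")  # 40%
```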
David: Yeah. Because[00:09:00]
Matthew: Yeah. I think, I
David: That way you've got a more definitive idea about what really, you know, what actually happened, because unless you detect, you can't do anything. Right?
Matthew: Yeah. And some of these almost imply the others, like, for example, well,
David: if you don't log, you can't alert. So I think it could be more finely adjusted here to make it more plain exactly what the true state of the architecture is. So as far,
Matthew: Yeah, I agree.
David: You know, is that good, based on what our expectations are, that you would prevent almost 60 percent?
Matthew: I don't know.
David: But consider a statement that they made in there that we'll get to later. If this is assuming no fine-tuning of alerts, 60 percent prevention for a tool out of the box, just shoved into the network, that's actually pretty decent.
Matthew: Yeah. I'd agree with
David: Not [00:10:00] great, not perfect, but still pretty decent for out of the box.
Matthew: Yeah, I think I agree. There probably is some tuning involved, but given that this is an average across multiple companies, I don't think you can assume there's heavy tuning involved. So yeah, I think it's actually surprisingly good, considering I would not have expected them to block this percentage with just the defaults or whatever exists right now.
David: Yeah, but only 37 percent logging of attacks? That seems pretty sad.
Matthew: I think that's because most tools, when you put them in, are configured in a reasonably decent way in terms of what they're logging, I'm sorry, not logging, preventing, talking over myself now, but in terms of logging, you have to set up most of that on your own, unfortunately. So having gone through this exercise before, figuring out what you should log in order to generate alerts, there's all kinds of [00:11:00] guidance online, but very little of it is configured by default. You've got to set it up, and you've got to have your admins go in and turn on various log types, et cetera, et cetera.
It's a pain.
David: And I think this might be also a bit misleading as well, because the only thing that they're looking at is security tools, right?
Matthew: I don't know if they are or not. They weren't terribly clear about that.
David: Well, based on my work with attack sim vendors, that's really what they're looking at, because what they're doing when you deploy an attack sim tool is the attack sim tool integrates with your SIEM, your log management tool, and the consoles for your different security tools, right?
So it wants to know, did a security tool alert on it? Did it get sent to the log, and did the log generate an alert, right? That's where the integration with the attack sim tools is at. Versus looking at application and system logs that are being sent to the SIEM, which could be used to alert [00:12:00] without the attack simulation tool seeing that a log was generated from it.
And when you're talking about living-off-the-land binaries, most of those events are going to be in application and system logs; they're not going to be found in security tools that are going to alert and say, hey, this is suspicious LOLBin activity.
Matthew: So I'm trying to think. I've seen a couple of tools that do allow you to look at some of those other sources. They're not specifically... I guess the question is,
David: I'm not saying you couldn't. You could probably configure your breach and attack simulation tool to do that,
Matthew: okay.
David: but I don't think you get that out of the box. When you deploy a breach and attack simulation tool, unless you go that extra mile, I don't think you're going to get that. Unless the breach and attack simulation tool that you're deploying specifically says, hey, we need system logs,
and when we run this attack, we're expecting to see this system event show up in your log [00:13:00] management system. You know what I mean? And you're never going to have a prevention if that's where your detection is at, but you are going to be able to say, at least, that you detected it and it went to your log management system, and then maybe you generate an alert from that.
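A rough sketch of the verification loop David is describing: after each simulated technique, a breach and attack simulation tool checks its integrations for evidence that the attack was prevented, logged, or alerted on. The EDR and SIEM interfaces below are entirely hypothetical stand-ins; real products each have their own connectors.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    prevented: bool   # a security tool reported blocking the simulated attack
    logged: bool      # at least one related event reached the log management tool
    alerted: bool     # the SIEM raised an alert tied to the simulation

class StubEDR:
    def __init__(self, blocked_ids):
        self._blocked = set(blocked_ids)
    def blocked(self, sim_id):
        return sim_id in self._blocked

class StubSIEM:
    def __init__(self, logged_ids, alerted_ids):
        self._logged, self._alerted = set(logged_ids), set(alerted_ids)
    def has_events(self, sim_id):
        return sim_id in self._logged
    def has_alerts(self, sim_id):
        return sim_id in self._alerted

def verify_simulation(sim_id, edr, siem) -> Evidence:
    # ask each integration whether it saw the simulated technique
    return Evidence(edr.blocked(sim_id), siem.has_events(sim_id), siem.has_alerts(sim_id))

print(verify_simulation("credential-dump-test",
                        StubEDR({"credential-dump-test"}),
                        StubSIEM({"credential-dump-test"}, set())))
```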
Matthew: That's fair. Yeah, I don't know how much customization they require for this. But I know, for example, a lot of these attack simulation tools come with banks of pre-built attacks, because one of the things I've been looking at is how to set these up for custom content. And depending on what you're trying to do, that's good.
Like, if you're trying to test against things that attackers might run, that's great. But if you're trying to test your content, you may have to create custom tests that match your content, because you may have something where, you know, maybe you detect six of the options used by Impacket, but this particular attack uses a different one.
And it's like, oh, you can't detect Impacket. Well, I can, I just can't detect the specific one you used.
David: Right.
Matthew: So there's a lot of [00:14:00] weirdness and nuance in these types of things.
David: Yeah. Well, breach and attack simulation is, you know, imperfect, but it's still great, I think,
Matthew: Oh yeah, oh
David: even with those kinds of shortcomings. So the 16 percent alert rate, though, I'm not terribly shocked at that. It's kind of funny, when I was reading this, I had actually just rewatched the movie Dredd.
Matthew: I love that
David: Yeah, it's a great movie.
But at the very beginning, Judge Dredd says there are 17,000 criminal actions a day, and they're expected to be able to respond to 6 percent. So he asks his partner, okay, which call do we go to? And this is the exact same kind of thing you're looking at with the 16 percent.
Matthew: Yeah, versus what you can't detect at all. And this goes back to, you're actually really right on the LOLBin stuff, because every time I've looked at detections for LOLBins, they're always incredibly noisy. You've got a ton [00:15:00] of admins in your company that are all using these tools, and you cannot simply have an alert every time somebody uses a LOLBin, because you will drown.
David: Right. Those events have to be extremely well curated in order to get value out of them.
Matthew: You have to combine them with some other bit of weirdness. If this other kind of minor weird thing happens, you don't alert; if a LOLBin is used, you don't alert; but if those two things happen together, you do.
David: Yeah. Well, that's why risk-based alerting is so important.
Matthew: Yeah.
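A minimal sketch of the risk-based alerting idea, assuming per-host risk scores that accumulate from individually noisy observations (like LOLBin use) and only produce an alert once the combined score crosses a threshold. The event names and weights are made up for illustration.

```python
from collections import defaultdict

RISK_WEIGHTS = {
    "lolbin_executed": 10,           # certutil, mshta, etc.: noisy on its own
    "new_outbound_destination": 15,
    "off_hours_admin_logon": 20,
}
ALERT_THRESHOLD = 40

def score_events(events):
    """events: iterable of (host, event_type) tuples; returns hosts worth alerting on."""
    risk = defaultdict(int)
    for host, event_type in events:
        risk[host] += RISK_WEIGHTS.get(event_type, 0)
    return {host: score for host, score in risk.items() if score >= ALERT_THRESHOLD}

observed = [
    ("hr-laptop-07", "lolbin_executed"),
    ("hr-laptop-07", "new_outbound_destination"),
    ("hr-laptop-07", "off_hours_admin_logon"),   # 45 total -> alert
    ("build-server-02", "lolbin_executed"),       # 10 total -> stays quiet
]
print(score_events(observed))
```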
David: All right, so moving on to industry differences. This was something that was kind of interesting. In the first one, for prevention, you've got the lowest at 56 percent, which was healthcare, and then the highest, which is the services industry, at 81 percent. So that's a difference of, what, [00:16:00] 25 percent?
Matthew: Yeah. So it's pretty wide, but even the lowest is still 56 percent. I think that's pretty good.
David: Well, 56 percent is only 3 percent below the average for everybody, so it's not like healthcare was way behind the average. They were almost average, right on the edge there. And it kind of makes sense that in a healthcare setting you would not want to do a lot of prevention, which is stopping things, which could cause a production impact when you're talking about, you know, potential lives on the line. I haven't worked in this industry, so I'm making an assumption here also.
And it could be off base, but I can imagine that that would be why you would see reduced prevention in that industry.
Matthew: I think so. I think that if you screw up your detections and you stop a surgical robot in the middle of surgery, you're going to have a bad day in terms of your performance review.
David: Yeah. [00:17:00] And it's also going to raise your malpractice insurance,
Matthew: Yeah. Oh my gosh. Yeah. I can imagine. Why did this patient die? We're blaming it on the security team.
David: It's McAfee.
Matthew: I could be, oh boy
David: So the next one was alerting. For alerting, the lowest was 2 percent, which was conglomerates, and then 5 percent at retail. But the highest was healthcare, at 40 percent. And then for logging, you had the lowest at 18 percent and the highest at 60 percent.
Matthew: That one again. And I think it's basically like we talked about before: prevention comes built in. The lowest at 56 percent, that's probably the default, or somewhere around 50 percent is kind of the default, or maybe healthcare opened it up to, again, make sure that stuff worked.
Maybe the actual default is like 60 or 65 percent or something. And then for alerting and logging, someone has to put some effort into it.
David: Yeah. And who wants to do that? Right.
Matthew: Nobody. It's boring. It sucks. It's a thankless [00:18:00] task.
David: Yeah. That's why you hire an MSSP. It's like, yeah, I don't want that. But Picus said they had their own insights from the analysis they performed for the report. So the first set of insights they say they have is four impossible trade-offs that organizations have to deal with.
The first one being between prevention and detection efficacy. And this kind of bears out in the data in the report, which is interesting; it's almost a completely opposite comparison when you talk about detection and prevention. If an org did better on prevention, they did equally badly on detection, almost across the board, which was pretty interesting in the report.
It's kind of odd, but it seems to indicate that there is a direct trade-off between prevention and detection. Now, one of the questions I used to ask analysts is: given 100 percent of a budget, what percentage would you spend on prevention, detection, and response, and [00:19:00] why? To get an idea of how they thought about and approached defense in general. And this kind of shows some of that, in the fact that, you know, if you do this, you can't do that. You can't do both.
Matthew: What was the, what do people answer most often in that? Did they go for prevention or?
David: Most of the time they did the 33 percent each. Yeah, split it equally.
Matthew: Take a stand, man. Take a stand.
David: Yeah. Rare, rarely you would get anybody to say anything outside of that. Yeah.
Matthew: Thinking about it from this, given how much more successful prevention is than alerting or logging, there's a valid argument for putting the majority of your money into prevention and just sewing that up. Like, if the default is in the 50 percent range, maybe put that money into really tightening up your tools.
I don't know. The opposite argument is also that the default tooling does pretty well on its own, so maybe we need to focus on alerting and logging [00:20:00] instead to shore up the weakness. But I think both of them are valid arguments, depending.
David: There's no perfect answer here for it. I don't think.
Matthew: Well, there is. It's the answer that I said.
David: Of course it is. Sorry to have doubted you. All right, then the next one is the trade-off between logging and alerting, and I think this could be stated a different way, which is alerting on everything versus false positive volume, right? You know, if you go back to what we were talking about with the LOLBins, it's difficult to alert on that automatically.
So there's a trade-off between alerting on everything and the amount of false positives you have to deal with. I think that's really the trade-off they're stating there between logging and alerting. The next one is choosing which types of attacks to prevent. So, you know, do you focus on command and control versus lateral movement?
Basically, which part of the MITRE ATT&CK framework are you focusing your efforts on? [00:21:00] And then the last one is vulnerability management. Basically, how do you prioritize your vulnerability management? What are you going to patch, what are you going to let slide, or what are you going to delay patching?
Matthew: And both of these last two go back to things we've talked about before, like how IR and so much of security is putting band-aids on a bad vulnerability management program. If your org does IT well, and does vulnerability management well, and secure configuration and all that stuff, like 90 percent of security is not relevant to you.
David: Yeah, and we've talked about this time and time again, that security is really the hidden factory of IT.
Matthew: Yeah.
David: But a good quote that I pulled from this report, which is part of their analysis, is this: "This report finds that organizations do not consistently prevent or detect cyber attacks.
The reason is likely less about the quality [00:22:00] or capability of the security controls they have in place, but more about how effectively these organizations are utilizing these tools." So you don't need to buy the new shiny thing. You need to get the shit you have now configured correctly. That's really how I would state that.
Matthew: No way. No way. I need new tools. I need new tools to
David: I need new budget. I need more budget 'cause I gotta buy the new shiny.
All right for our second article today, we are talking about an online dump of Chinese hacking documents offering a rare window into pervasive state surveillance. And it's funny because this article and this dump has been the subject of a whole bunch of online conspiracy theories about, you know, Oh, the AT& T outage happens just after this dump of documents and stuff.
Matthew: There's been a whole bunch of, I've been seeing a whole bunch of people talk about how it's all distraction from this dump.
David: I was listening to a news podcast that played a clip from a news channel that, that said that the ATT outage was called by [00:23:00] space weather.
Matthew: There's a thunderstorm in space.
David: Oh, hilarious.
Matthew: There actually was, I read, coincidentally a solar flare at the same time, but they said that was not it. I think they said it was a software issue.
David: So what they were talking about was the solar flare, and they called the solar flare,
Matthew: the space weather.
So there's a document dump from a security contractor called i-Soon in China. Some of the places I saw this, it was a lowercase i, like an iPad or an iPhone, which is weird but hilarious. It was apparently 190 megabytes, so it's not huge, but apparently it has some really interesting information. What is less interesting is that a lot of the capabilities described within are kind of what we expect from a state actor.
But the dump included contracts, marketing presentations, product manuals, and lists of clients and employees. The presentations,
David: That actually makes the dump less impressive, because if you're talking about a hundred and ninety megabytes of text, that's a fair amount. [00:24:00] If you're talking a hundred and ninety megabytes of PDFs and PowerPoints, that's less impressive,
Matthew: PowerPoints are just information-sparse on purpose, and
David: on size
Matthew: Yeah, yeah. And we can't believe anything we read in there if it's a marketing presentation.
David: now, right? Yep, exactly.
Matthew: So the presentations and product manuals describe tools the Chinese government uses to watch their own dissidents and to watch other nations.
So they described a couple of different tools in there. There's a tool that allows you to break into a Twitter account and post as the user, even if they have two-factor enabled. One of the chats, or one of the marketing materials, said this could be used to curb illegal public opinions, because of course in China you can't have those.
They watch all their own citizens, because we don't do that in the U.S. We're free here, for now.
David: You know, we don't say that American citizens have illegal opinions, we call it misinformation here, instead of having an illegal opinion.
Matthew: Yeah. Well, to be honest, it's still better than going to jail for it. Has [00:25:00] anybody gone to jail for misinformation? Not yet.
David: No, they just crush your livelihood and have you starve to death instead. So, I suppose that's the upside. You'd be homeless
Matthew: Trade
David: instead of going to jail.
Matthew: There's a tool to determine the real-life user behind a social media account if they can get their MAC address, using something they called intel from 360. I looked up 360 security, and there is a place called 360 Security Services, which advertises as a private eye.
So apparently they have an arrangement with these guys where, I don't know, maybe they watch network traffic in other places or something. I don't know how exactly they're doing it, but supposedly they can match a MAC address to a real user.
David: Hmm. That's not a typo. They meant 365.
Matthew: I don't know, could be. All this is translation. And the guy that did the thread, when we get to that part, he does mention that he's not great at Chinese, and some of it is approximation.
David: [00:26:00] Hmm.
Matthew: There were two tools for analyzing email inboxes, one of which did keyword searches and created relationship graphs based on the information in the mailbox, and the second of which could be used to create phishing emails.
It didn't say this explicitly, but I would assume that it can create custom attack emails based on what it finds in the inboxes, because why else would it review the inbox before creating the phishing email?
David: Right. Makes sense.
Matthew: Yeah. There's malware for computers, Android, and iPhone that they say can be installed without root. And as one of the default capabilities, it dumps common Chinese chat clients like WeChat and QQ.
And it can also dump Telegram chats. Apparently they advertised APT capabilities, but that's probably just marketing speak, 'cause APT doesn't mean anything anymore. They had a cool little portable hardware VPN, and they have a couple other hardware items that I don't really want to get into. The dump did not contain these databases, but it contained references to databases that contained hacked data to be sold to the Chinese government.
That [00:27:00] one's kind of interesting to me because that implies they're going out kind of freelance and trying to find information the Chinese government wants and then turn around and sell it to the government.
David: Well, I wonder if that means they actually don't have a contract yet. Oh,
Matthew: They do have a couple of contracts. Their list of customers includes, I didn't write this down, but it was something like 40 regional Chinese governments. So they have existing contracts, but it's just like Raytheon, I'm sure, or any of the,
David: want another
Matthew: Yeah, yeah, they, they have a, they want a bunch of contracts,
David: right?
Matthew: And later, actually, there are some comments where one of their government customers comes to them and asks them to break into specific email addresses.
So it sounds like they're both pursuing the contracts and then doing, let's call it, business development work on the side.
David: Right, right. You
Matthew: That should have been our
David: director at the,
Matthew: Oh God, that should have been our
David: They're staying up late at night putting together these PowerPoints in order [00:28:00] to get more contracts.
Matthew: Yeah. They've got to bill their 40 hours to their current contracts, and then
David: Yeah, then do 20 hours of overtime doing business development. And part of that business development is hacking after hours.
Matthew: That actually wouldn't be bad. Although didn't we see last year or something where some of the employees in these companies were doing that after-hours hacking for their own personal benefit?
David: Yeah, I vaguely remember hearing something about that, I don't remember exactly. It makes sense. I mean, why not leverage that entire infrastructure that's just there? It's not being used. Might as
Matthew: They've got the skills. Yeah. The article mentioned 14 governments that were targets. I'm not going to list them all, 'cause that would be a very boring segment for the next 30 minutes. But once I went through this and the related Mastodon thread, I actually counted up 21 countries that were mentioned. So, a couple of little bits and pieces here.
This company charged $55,000 to hack Vietnam. I don't think they said which specific group in Vietnam; I don't think they were hacking the whole country. They'd need Pop-Tarts if [00:29:00] they're going to hack the planet.
David: To compromise every computer in Vietnam, $55,000.
Matthew: $55,000. Malaysia, they hit the Ministry of Foreign Affairs. Mongolia, the Ministry of Foreign Affairs. Thailand, they said MOF; I assume that's Ministry of Finance.
David: Yeah, I would assume so.
Matthew: Yeah. I see you highlighted this one, David. Hong Kong is on their list, which you mentioned earlier. I don't know if you want to go into that.
David: I mean, Hong Kong is part of China. So they may have regional contracts, to say, you know, they do Hong Kong, they do, you know, the Uyghurs or whatever.
Matthew: Another part
David: Or they have an internal division and an external division or something.
Matthew: We would never do that. I'm sure the FBI has no contracts like this on Idaho or any place that has a lot of folks that have different opinions.
David: No, never.
Matthew: Nope. Apparently there are chat logs suggesting they tried for NATO countries, which goes back to what we were talking about with the contracts, but decided that the NATO countries were too hard to break into.
There was a chat about the [00:30:00] UK foreign affairs, I guess, department, but another contractor had them. Which, does this mean the government assigns them like guards in a basketball game? Like, hey, you guard that guy, you take Malaysia.
David: Yeah, I think it's more like US combatant commands, like, okay, you're responsible for this area and you're responsible for that area, kind of thing. I'm guessing. But that also puts up my false flag antenna. You know, oh no, NATO is too hard, you know.
Matthew: So that we can all relax and be like, ooh, they're not in here.
David: Well, it's a way to make the Chinese sound terrible, but at the same time say that you're still, you know, defended against them and don't need to worry about it. 'Cause there's a guy in here, Dakota Cary, a China analyst with the security firm SentinelOne, who said that the documents appear legitimate
because they align with what we would expect from a contractor hacking on behalf of the Chinese security apparatus with domestic political [00:31:00] priorities. Okay, so this is obviously legit because we think that's what they would do. Not that there's any other corresponding evidence or whatever.
It's just a very weak way to say, oh yeah, sure, this has got to be legit because it's what we think it would be. Because fake documents would never look legit, right?
Matthew: Yeah. Yeah. Yeah.
David: I'm obviously not saying that I think it is a false flag or whatever, but it just sounds ridiculous, because it makes NATO sound like, you know, they're all tough and you don't need to worry about them. But from my own personal experience, I know that's not accurate.
Matthew: Well, and the other thing is, you've talked before, on a past podcast as well, about the idea that these guys might be second-stringers. These guys might be,
David: right.
Matthew: Maybe they're not the top of the line contracting company,
David: Yeah, because if you look at the list, it's like, you guys get Kyrgyzstan? Okay, I don't think you put your A-team on Kyrgyzstan.[00:32:00]
Matthew: What? I don't know. Maybe they've got a really bang up program.
David: So that's why I think, you know, this might be a dump from a less great government contractor,
Matthew: Yeah.
David: you know, as Archer might put it, the Dane Cook of Chinese hacking firms.
Matthew: All right. So I'm going to skip down here to the Mastodon thread. I went through the long Mastodon thread from a researcher whose name I should have grabbed, but I'm going to link it; they're on infosec.exchange. And he went through the whole dump, or a significant portion of the dump, and I pulled out the most interesting parts of it.
First of all, excuse me, the city of Yangcheng has a budget for cyberattacks. A city in China has a budget for cyberattacks. Their budget is $830,000 US, and $280,000 of that is devoted to attacking Taiwan, according to a chat log. Do cities in the US have budgets for cyberattacks? I don't think that most mid-sized cities [00:33:00] do, although, honestly, I'll bet New York City probably does.
David: Well, I know New York City has an anti-terrorism unit, so actually, I didn't think about it until you just mentioned it, but it actually makes sense that a large city may have its own organization like that. Like I said, you're right, I think New York probably does, because they also have an international anti-terrorism unit
for New York City itself.
Matthew: Hmm, interesting. Yeah, so there's two interesting things about this. Number one, you mentioned before, do we know that this is cyberattacks? We don't know this is cyberattacks. The researcher just said it was for cyberattacks, but maybe it was for cyber defense, like maybe it's just a budget to bring in contractors to shore stuff up.
I don't know. Interesting that the $280,000 is devoted to attacking Taiwan, though. That one's definitely an attack. And that makes me wonder if all cities in China are given kind of a fractional responsibility for attacking Taiwan.
David: Well, actually, [00:34:00] I wonder what the major industry is in Yangcheng, because it could be that the major industry there is semiconductors. So they get $280,000 to attack the Taiwanese semiconductor industry, to perform industrial espionage against Taiwan.
Matthew: Yeah. Yeah. Because cyber attacks doesn't necessarily mean degrading their capabilities. You're right.
David: And China does a huge amount of industrial espionage. That's probably primarily what China does actually.
Matthew: That makes a lot more sense, because that way each individual area can target whatever specific stuff they need to target to support their... this is their version of the US Chamber of Commerce or local chambers of commerce. All right. Most of the i-Soon workers are underpaid. There was a list of employees and how much they were paid. The very top two made 270,000. I don't even remember what the Chinese currency is. Is it,
David: The yuan.
Matthew: The yuan? All right, [00:35:00] but once you convert it to US, that says they make $35,000 US, although it doesn't say monthly or annually. $35,000 US
monthly makes sense for two people that are running a company, maybe.
David: Hmm.
Matthew: Yeah, that's about $400,000 a year. Well, that would make sense in the US, given what the average CEO in the US is paid. But most people in there were making about $1,000 US. Again, it doesn't say monthly or annually.
David: Well, I mean, that could also feed into our assessment that this is just a second-string company.
Matthew: Yeah,
Hold on, I'm looking something up. So, all right, here we go. In 2021, the last full year for which Beijing's National Bureau of Statistics offers data, the average Chinese worker earned 105,000 yuan a year, the equivalent of $16,000. So I bet this is a month. I bet this is $1,000 a month. If the average worker makes $16,000 a year, then they're paying their hackers less than average.
Yeah, like you said, that goes with them being a second-string company. And [00:36:00] that's also really interesting. I bet I could pay them more than that. We've got some projects; we could peel them off and,
David: Well, you might also think that if this is a second-string company, maybe most of their employees are straight out of college or something as well. So they're a second-string company because they also don't have the talent pool either.
Matthew: Yeah, that's possible. There was a bunch of stolen call logs and data from telecommunications providers. They mentioned they broke into Mongolia's main telecommunications provider and several other countries' telecoms, which is interesting. I don't know if that was for a contract or if that's part of the data they stole that they're looking to sell, but interesting.
They have a gamified all-in-one red teaming platform where you can, you know, get in with your friends and attack a bad guy. This is their after-hours stuff, maybe, and you can grade other people's attacks and give them bonus points for, I don't know, effective or elegant attacks.
David: and then they get a digital hat.
Matthew: [00:37:00] Yeah, they get to use the blink tag for a couple of weeks. And that's all the specific interesting stuff that I pulled out of there.
David: Yeah, I was hoping for more. I was rather underwhelmed, at least by what was reported in there.
Matthew: That's fair. I think the main things in there that are really kind of important are that they're targeting specific governments, and some of the details on the contracts were really interesting. But yeah, you're right, I guess there's nothing really shocking, like, I can't believe they did that, other than that they bother looking at Kyrgyzstan.
David: Well, they could be like the United States, where everybody does. I mean, we have something for every country on the planet, you know, regardless of how serious the threat is from that place. We have something on the shelf for them.
Matthew: Like Batman,
David: same kind of thing.
Matthew: like Batman and all of his plans for taking down all the Justice League.
David: Yeah. Except for ours are probably mostly shit. Batman's was really good.
Matthew: I don't know. [00:38:00] There's a. It was a sarcastic comment, but or no, it was it was a robot, robot chicken. There we go. It was robot chicken where Robin was like, what's your super plan for taking me down, Batman? And he just like smacks him. There was,
David: his hand done.
Matthew: There was another one where he was like, oh, the Martian Manhunter's weakness is fire.
And Batman's like, everyone's weakness is fire. Ah, Robot Chicken is a treasure.
David: I haven't watched Robot Chicken in forever, though.
Matthew: I mean, it's hit or miss. Sometimes they just have some absolute gems, and then there's segments where you're like, oh my God, why did they,
David: The whole Darth Vader "pray I don't alter the deal any further" bit.
Matthew: All right. And finally, I have a surprise article which I dropped on David. We did not discuss this beforehand. I mean, we took notes on it, but when we met to talk about this, I just couldn't believe it and wanted to talk about it. The former Google CEO gets into the AI-powered kamikaze drone business with White Stork, from [00:39:00] Gizmodo.
David: Yeah, I'm less shocked.
Matthew: Eric Schmidt, former CEO of Google, has a startup to develop self-driving kamikaze drones. If you've spent any time recently on the combat footage subreddit, or reading about the Ukraine-Russia war, you know that there's a lot of drone warfare going on right now. If you go to the combat footage subreddit, you can see a lot of drone camera shots of soldiers using first-person-view drones on the battlefield.
These are almost always built on top of existing drones. They add, you know, a little metal rod on the end to complete a circuit, and then they add some explosives and drive it into somebody. Or they'll add racks on top where you tilt the drone and grenades roll off the racks, and stuff like that.
But White Stork wants to modernize these. They want to purpose-build them in large numbers, and the goal is to build a $400 drone with a small amount of explosives that will target and attack autonomously.
David: What could possibly go [00:40:00] wrong? You know, the way that's presented, the Army has the saying, one shot, one kill. So instead it's one drone, one kill, since these seem to be designed, at 400 bucks a pop, to kill individual soldiers rather than groups of soldiers like rockets or artillery do.
Matthew: And you know what, it's funny, I was thinking that $400 is really expensive to kill a single soldier, but hold on, number of rounds to kill a soldier... if I remember right, during World War Two it was estimated a hundred thousand rounds of small arms ammunition were expended per casualty.
Given that a round of .223 is, what, 50 cents?
David: Mm
Matthew: So that's $50,000 per casualty. So $400 per casualty is a bargain.
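A quick sanity check on the arithmetic; the per-round price is Matthew's ballpark, not a sourced figure.

```python
rounds_per_casualty = 100_000          # WW2 small-arms estimate he cites
cost_per_round = 0.50                  # roughly 50 cents for a round of .223
print(rounds_per_casualty * cost_per_round)   # 50000.0 -> about $50k per casualty, vs a $400 drone
```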
David: hmm. 'Cause that's assuming you don't miss.
Matthew: I mean, I'm sure some of them will.
David: Well, some of them are going to kill civilians, they're autonomous,
Matthew: Yeah, yeah, yeah, and some of them are gonna, you know, misread [00:41:00] non enemies as enemies, you know, like
David: Pick a tree and run into a tree.
Matthew: Yeah, but still, even if it's 50 percent accurate, it's still going to save a lot of money on
David: Oh, and that's what's most important when you're talking about murdering people.
Matthew: And apparently you can dodge these, too. There's been some footage of soldiers dodging them. You have to wait until they kind of commit to their final approach, and then you dodge out of the way and hit the ground, because shrapnel rises. Of course, if you screw that up,
David: Yeah, you don't get to go back to the last save point.
Matthew: no.
David: But, you know, for the way that this is being designed, though, this might be good for, or successful in, an unconventional war setting versus a peer situation. Since in an unconventional war you're going to be seeing a lot of individual soldiers, whereas in a peer competitor scenario you're going to be dealing with groups and squads of soldiers, where a drone swarm might be easier to be [00:42:00] countered or taken down. I can expect, you know, if this becomes a big thing, then it's going to lead to shotguns being a standard issue weapon, either undermounted like an M203 or just an additional carry weapon, for counter-drone fighting.
Matthew: You know, I don't know that they would actually issue a separate shotgun, because then you've got to carry that again, and you've got to carry a different type of ammunition. But I could see a flechette-like round in a 203, because then you get a lot more,
David: Oh, yeah.
Matthew: Yeah, a lot more pieces. And again, we talked about this before the podcast, but if you've got a whole squad of people... so if you've got one person who's assigned to anti-drone duty, you need like an automatic shotgun, or a semi-auto, something that can pump out a lot of rounds in a short time. But if you've got the whole squad firing with like an M203-style canister round, maybe that's good enough.
David: Right. Well, you can put the automatic shotgun under, like I said, undermounted like a 203. You can [00:43:00] buy those today.
Matthew: really
David: Yeah,
Matthew: Lockhart Tactical UBS-12 undermount shotgun. Wonder if a civilian can buy this. $400.
David: yeah, it's not too pricey either.
Matthew: You know, I was thinking about buying a shotgun, but maybe I will just turn my AR 15 into a shotgun.
David: Well, there you go.
Matthew: Yeah, yeah, yeah.
Alright, so this reminds me of the short film Slaughterbots from 2018, where they hypothesized a future where miniature drones could be used to kill specific people using facial recognition and stuff like that. The film starts off with a military contractor talking about the capabilities of these tiny little drones that are about, you know, four inches by four inches.
And then they showed news footage of a terrorist attack. And that is what frightens me. Sure, these types of weapons, well, I should say they sound great when you're like, oh, you know, our soldiers don't have to get [00:44:00] in the fighting, we can just send these drones out and kill the bad guys, and they'll only kill bad guys. And that sounds great, until a terrorist rolls up with a semi and a stolen shipment outside of Washington, D.C. and sets about 2,000 of these loose on the city.
David: Yeah, and they don't have to do any kind of discrimination. It's like, you see a face, blow it up.
Matthew: Yeah. I mean, in Slaughterbots they talked about an attack on the Senate where it only killed people on one half of the aisle, one party over the other, which, again, with facial recognition you could do. Heck, you could use it for an assassination. You send it out to kill one specific guy, or, like, 50 of the bots are targeting one specific guy and 1,950 of them are targeting random people.
You'd have no idea they were targeted. You'd just be like, holy crap.
David: Right.
Matthew: So, I don't know. I'm
David: There's a show that ran for a few years called the blacklist
Matthew: Yeah.
David: It was better towards the beginning than it was towards the end. But the premise is that it's an [00:45:00] FBI task force that takes down unusual criminals, and in one of the episodes they took down an assassin whose modus operandi was to do exactly that: kill a whole bunch of people around the target so that you don't realize the person was targeted.
Matthew: Yeah.
David: But this is really sad and depressing. I wish that, you know, smart and wealthy people would start to look at this idea of making improvements in warfare tech and realize that improving war tools is only going to get us so far. At some point, which we may be at or near right now,
you're only increasing the rapidity with which it's going to bring about the destruction of the United States, because everyone has copious amounts of destructive technology and no one is really ahead, quote unquote, of anybody else; no country is any more ahead than any other. You know, instead, these people should spend their time and their lives
trying to establish or [00:46:00] maintain long-term peace. 'Cause I have no doubt that it is possible to do, and I'm certain that you can't do it through deterrence or better weapons.
Matthew: All right. Why does this matter? This actually has me starting to wonder what happens if these types of drones become common. And I'm actually surprised we haven't seen a terrorist attack use these, or even, like, business intelligence-style attacks using these either, 'cause we've seen drones used in smuggling weapons and drugs into prisons.
We've seen them used in warfare now. I'm just imagining a future where the physical security group at a company is responsible for the anti drone tech.
David: Yeah. I mean, you're going to have buildings that have machine guns mounted on the outside, like the Roci from The Expanse series, that just shoot a ton of ammunition in order to take down tricked-out drones.
Matthew: I mean, if there's that many lethal drone attacks, maybe. I'm more envisioning, like, security guards [00:47:00] with, you know, net launchers, and mounting anti-drone netting above sensitive facilities, or you can disrupt the EM communications.
David: That doesn't sound nearly as fun.
Matthew: That's fair. You're right. All right. What should you do about it? Nothing. We're #$%^ed.
David: Yeah. Get your will in order.
Matthew: I don't even know what to do. Like, I'm imagining, if somebody sets one of these off, what do you do? You just, like, hide in your house? All right, you just hide. How long do you hide? I don't know, until we're sure there's no drones around. What if they have drones that are like, hey, go sit on the ground for six hours and then come up? Like you've talked about before with the secondary attacks to hit the first responders,
David: I
Matthew: You could keep an area locked down for days, potentially.
David: I mean, the whole thing behind this is the law of economics, that when the cost comes down, the volume goes up. So the less costly you make it to conduct attacks and kill people, both [00:48:00] in money and blood, you know what they say, blood and treasure, right, is the cost of warfare.
If you're bringing down the cost of both of those, you don't have to spend as much and you don't have to get your own citizens killed, so you're going to see an increased use of these kinds of technologies. This is just going to make everything worse, and you're going to get more of this crap, unless people actively try to work towards peace rather than being belligerent.
Matthew: You know, and we saw that with the US. As soon as the US could just bomb other countries, we just started bombing everybody. Anyways.
David: Well, I mean, I did this math 20 years ago, so my memory is going to be faulty on this. But if you look at the percentage of soldiers killed in military conflicts versus the percentage of civilians killed in conflict, the longer time goes on, the more those elements start to skew.
Initially, when you're talking about the Napoleonic period, you're talking about 90 percent [00:49:00] military casualties and 10 percent civilian casualties. And as time goes on, they start moving the other direction. Fewer and fewer soldiers get killed, more and more civilians get killed, till now we're at the point where it's something like 85 percent of people killed in warfare are civilians versus soldiers. And this is just going to make it that much worse.
Matthew: Yeah, actually, you don't have to do the math anymore; I'm looking, and there's a civilian casualty ratio article on Wikipedia. The earliest one they have is the Mexican Revolution, which was a one-to-one civilian-to-combatant death ratio. And if you look at the Iraq War, it was almost four to one, four civilians killed for every combatant.
David: That sounds low.
Matthew: U. S. drone strikes in Pakistan, ten civilians for every combatant killed.
David: Yeah, that, that sounds more accurate.
Matthew: That's wild. I had no idea that that was
Interesting.
David: Yeah, it's terrible.
Matthew: so that's a terrible place to leave this on, but
David: There we go.[00:50:00]
Matthew: oh, Jesus. Oh, boy.
David: All right. Well, that's all we have for today. Thanks for joining us, and follow us at SerengetiSec on Twitter and subscribe on your favorite podcast app.