SS-NEWS-132: AI Drones, OAuth Abuse, and 23andMe!

Episode 132 December 18, 2023 00:55:16
Security Serengeti

Show Notes

This week we discuss Microsoft shutting down a bot network that created millions of fraudulent accounts, the coming AI Drone Overlords, OAuth Abuse, and 23andMe losing 5.5 million folks' genetic information.

Article 1 - Microsoft seizes infrastructure of top cybercrime group
Supporting Articles:
Disrupting the gateway services to cybercrime

Article 2 - A.I.-controlled killer drones become reality
Supporting Articles:
Kill Decision by Daniel Suarez

Article 3 - Threat actors misuse OAuth applications to automate financially driven attacks

Article 4 - 23andMe says, er, actually some genetic and health data might have been accessed in recent breach

If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!


Episode Transcript


Transcript is AI Generated and DEFINITELY has errors. Use with that caution. David: [00:00:00] 
 Welcome to the Security Serengeti. We're your hosts, David Schwendinger and Matthew Keener. Stop what you're doing and subscribe to our podcast, leave us an awesome five star review, and follow us at SerengetiSec on Twitter. 
 Matthew: We're here to talk about cybersecurity and technology news headlines, and hopefully provide some insight analysis and practical applications that you can take into the office to help you protect your organization. 
 David: And as usual, the views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers. 
 Matthew: However, I for one, welcome our new AI drone overlords. 
 David: I know, I can't wait to start working in the silicon mines. Alright, first article. So, Microsoft seizes infrastructure of top cybercrime group. This comes to us from CyberScoop. 
 Matthew: We haven't had an article from them before, have we? I don't recognize it. 
 David: I don't know. Maybe. You know, if our Security Serengeti, uh, ChatGPT was working, we could ask. 
 Matthew: I'll, I'll start uploading these stupid transcripts. One at a time. 
 So apparently our thing, [00:01:00] our thing will allow us to generate 10 transcripts per month. So I was kicking off transcripts for the first 10. And then, since I started using Descript to edit, I do have a bunch of older transcripts that I can copy in there. 
 So probably just a couple hours worth to add all the transcripts. Although I will say uh, one of the early transcripts did transcribe your last name as a slur. So I'm really, really hoping 
 David: My name is a slur. 
 Matthew: it transcribed it as one. 
 David: I, I can actually get behind that. 
 Matthew: You're such a swindigger. I mean, Keener, Keener is actually... it's, like, a brown noser in Canada. 
 David: Oh, nice. 
 Matthew: Yeah. 'You're a keener' is actually an insult. I'm sorry. So what was the book you were talking about though? 
 David: amazing how accurate that is then. 
 Matthew: What was the book? What was the book? 
 David: This perfect day. 
 Matthew: This Perfect Day. By Ira Levin. 
 David: Yeah, that's it. 
 Matthew: I'm going to take a look at that. Came out 13 years ago. So you know what, [00:02:00] you know what though, what we just did, uh, we were talking before about having an AI for podcasts, like that would be something where when you hear, like, whenever a podcast makes a recommendation of some sort, I'm usually driving or something. 
 I don't have the ability. Someone talks about some book they read that's so good or something. I would love to be able to have an AI bot for that podcast that I could ask after the fact, what was the book recommended in episode two or one, or what are, what can you provide a list of all the books that have been recommended in this podcast? 
 Hmm. 
 David: you mentioned that, because actually, on the Art of Manliness, there was an interview with Max Brooks, where he mentioned a book, and I was like, Oh, I gotta remember to look this book up. 
 Matthew: And you never do. 
 David: And I never did, so I needed to make a note to go back, and Look up that book. 
 Matthew: If only there was a Art of Manliness AI where you could ask it, what book did [00:03:00] Max Brooks recommend? Actually, hold on. Let's ask what book did Max Brooks recommend on the Art of Manliness? This is such great radio. 
 It, it does bring you to episode 936 of the Art of Manliness, which is the one with Max Brooks. 
 But it doesn't tell you uh, what book, but that is something that AI would be really good at, or I'm sorry, large language models would be really good at. 
 David: Mm hmm. 
 Matthew: So there's a transcript here. So maybe if you could remember a word or two, or you could just read through the transcript. 
 David: Yeah, I have to look at that when we're done here. 
 Matthew: Oh, you know what? Hold on. Hold on. I got chat GPT for, 
 David: It's your 20 bucks at work. I 
 Matthew: so I'm going to give it the link to the following says and the following transcript. And I gave it the link. And then I said, what book does Max Brooks recommend? 
 Oh, no. Oh, there we go. All right. It started off with saying, [00:04:00] searching with Bing. And I was like, no, don't search with Bing. Is it, is it In a Far Country by Jack London? 
 David: Think so, it's been it's been several weeks, but I don't I don't remember exactly I'd have to go back and actually Listen to that recommendation in context to see if that's what it was. 
 Matthew: I would be interested to hear back if that's correct, though, because again, like this is this, this, this thing drives me nuts. There's always podcasts I listen to and they always make recommendations about stuff and I'm always in the car or I'm always like taking a walk or something. And if I'm taking a walk, I can take out my phone and I can add it to my wishlist on Amazon. 
 But if I'm driving, 
 David: Yeah, I think that's probably it because Brett McKay who's the host of that? Podcasts is a big Jack London fan. So that is probably it. 
 Matthew: it's all right, let's continue actually on the subject of the podcast. 
 David: What are we here to talk about again?[00:05:00] 
 Matthew: AI, apparently. We, we were gonna be a crypto podcast and crypto dissolved, and now it's AI, until the AI dissolves, 
 David: You mean until they become our overlords and 
 Matthew: we welcome our overlords, and then, yeah, and then we're not allowed to comment about AI. 
 David: All right, so Microsoft obtained a court order from the Southern District of New York, allowing it to seize the U. S. based infrastructure of websites used by a group the company tracks as Storm 1152. Very inventive name. 
 Matthew: Yeah. 
 David: Which is funny because there's, there's a, there's another article we're talking about later, which has another storm and then a number after it, it's like, everything's a storm and then you get your number and, you know, move on. 
 Matthew: Can I be storm 13? 
 David: Well, maybe you can petition Microsoft to get your own storm designation 
 Matthew: Yeah. 
 David: on storm 69 69. 
 Matthew: Storm for 2069. Yep. What was that? Was that, there's that joke that like if somebody's name [00:06:00] ends in 70, you know, they're 53 years old, but if their name ends in 69, you know, they're 17. 
 David: Anyway, so this Storm 1152 group created about 750 million fraudulent Microsoft accounts and, and various websites. So these accounts were then used in email based attacks for, you know, the typical phishing, spam, BEC fraud, etc. And Microsoft described the group as the number one seller and creator of fraudulent Microsoft accounts. 
 And 
 Matthew: number one, 
 David: in the world. 
 Matthew: number one way to go. 
 David: But that's not all they do because they also offer services that can bypass CAPTCHA puzzles. 
 Matthew: How do I subscribe? I hate that CAPTCHA stuff. 
 David: Well, you could have used one of these three domains that, or websites that they had before Microsoft seized them. 
 Matthew: Damn it. 
 David: The court order was for hotmailbox.me, which is where they sold the fraudulent Outlook accounts. [00:07:00] And then there was 1stCAPTCHA, AnyCAPTCHA, and NoneCAPTCHA, uh, which provided CAPTCHA solve services and tools to bypass CAPTCHA. 
 So I guess they, they also had some kind of software package that they could sell you that also did the bypass for you, I guess. 
 Matthew: I hate captcha so much. Some of it's not bad, but like some of it is so obfuscated that I just get it wrong over and over again. 
 David: Well, I was saying it's gotten better, though, because that used to be really bad for me. Like, really? That's not an N? Come on. 
 Matthew: yeah, yeah. Yeah. Or it's like, is this uppercase or is this like a big or a little, Oh, like I can't tell because they're all different sizes. 
 David: You know, I think I told you about that one CAPTCHA that I got from Google where it said click all the parking meters and one was a mailbox. And it 
 Matthew: Oh yeah, 
 David: me to click the mailbox before it would let me go to the next thing 
 Matthew: I'm 
 David: before it would say it was done. 
 Matthew: those are the worst. I've definitely seen multiple ones of those where I'm like, I clicked all of the, and it doesn't work and it doesn't. Yeah. 
 David: Well, obviously you're not human. 
 Matthew: Apparently not 
 David: I've been accused of that before, but,[00:08:00] 
 Matthew: just because I have no emotions. 
 David: But on these sites also, or some of the other things that Microsoft was allowed to seize, were social media sites that marketed the services on, you know, Hotmail and the CAPTCHA sites. Now in the blog post from Microsoft, they said that there were also individuals based in Vietnam that helped develop and maintain the websites, and produced step by step videos explaining how to use their products and exploit Microsoft's accounts 
 Matthew: These are entrepreneurs. They shut down entrepreneurs, 
 David: yep, in another country, no less. 
 Matthew: Minorities 
 David: Not, not in Vietnam. I don't think the Vietnamese are minorities in Vietnam. 
 Matthew: Have to leave that out too. God damn it. 
 David: and you're, and you're complaining about me being a horrible, 
 Matthew: I didn't say you were horrible. 
 David: yes, you did. That's the way I took it.[00:09:00] And I'm all broke up about it. 
 Matthew: Oh, 
 David: And they also offered chat services to their customers. I guess that's a quote unquote in there. And I'm not sure, because they specifically called out Vietnam and they didn't mention any arrests in America or anything like that. 
 I'm wondering if it was actually the Vietnamese who were running these sites, but hadn't registered in the United States. But it seems kind of weird, because the United States has some of the harshest cybercrime laws in the world. So if you're a foreigner in another country, 
 Matthew: In the land of freedom. 
 David: in the United States? 
 Does it make sense? 
 Matthew: I would agree with that. Sorry. I made a terrible joke about the land of freedom and having the worst or most punitive cybercrime laws. 
 David: Well, we have the most punitive laws of all. I mean, we have 5 percent of the world's population, but 25 percent of the prison 
 Matthew: Prisoners. Yeah. 
 David: that's phone family number one, and we got to do that. 
 Matthew: Land of the free. 
 David: So Microsoft [00:10:00] said they used threat intelligence from Arkose Labs, which is what they call a bot management vendor, uh, which really is bot prevention, really. 
 Matthew: Kind of like pest management is also for pest prevention. 
 David: right. Yeah. Management means getting rid of it, I guess. 
 Matthew: So wait a second. 
 David: Well, I suppose that that makes sense though, because if you read Dune, I think it might be in the movie too, where they say if you can destroy a thing, you control a thing. 
 Matthew: Yeah. 
 David: So I guess that's what they're talking about by management. You know, if you can destroy it, you can manage it. 
 Matthew: It seems appropriate. 
 David: But Microsoft has also submitted a criminal referral to U.S. law enforcement. And to quote Microsoft, Storm 1152's activities not only violate Microsoft's terms of service by selling fraudulent accounts, but it also purposefully seeks to harm the customers of Arkose Labs. [00:11:00] And it's, and it's funny, because that doesn't actually specify illegality in that statement to reinforce the referral to U.S. law enforcement. But Microsoft also said that Scattered Spider, which was the group who hacked MGM in September, had used the services of Storm 1152 to perpetrate that attack. 
 Matthew: For which part? To do the spamming or 
 David: I assume so. I didn't look, I didn't go back and re look at the MGM attack. 
 Matthew: yeah, no worries. 
 David: But I assume that they got the accounts, which maybe is where they started the phish from or something. I don't remember, uh, because they didn't really get into it. They didn't provide any details outside of that. 
 Matthew: Gotcha. 
 David: But Microsoft said they had worked closely with Arkose Labs to deploy a next gen CAPTCHA defense solution. 
 So it sounds like, well, I get the impression from the article that maybe Arkose Labs came to Microsoft, and they're actually the ones that had all the information, and said, hey, look what we found. Maybe you [00:12:00] should hire us to do bot management for you. And that's maybe how this whole thing even started. But one of the reasons that I wanted to talk about this is, well, there's a couple of reasons. The first one being that bots for fraudulent account creation at scale is a huge problem. Especially for financial institutions, where they have bots attempting to create accounts within their financial services platforms, and then do nefarious things with credit and other loan type activities. 
 So, you know, bot account creation at scale is a huge issue that everybody should be aware of. The other thing is, I understand that the government is incompetent, but should Microsoft have really done this, versus turning over what they had to the cops and having the cops do the seizing? 
 I mean, do we want private organizations going directly to the courts to enforce their terms of service? And can any [00:13:00] company really do this? Or is this just Microsoft could do it because they're Microsoft? 
 Matthew: I think that is the correct answer. Microsoft could do this because it's Microsoft. 
 David: Because companies already today work with internet infrastructure organizations that take down lookalike domains that are used for credential theft and things of that nature. So I'm just thinking that, you know, this is one of those things where it bleeds over into the private sector doing what 
 is ostensibly a law enforcement, you know, government role. And is this something that everybody can do, or is this something that's limited just because they're special? You know, which leads down the path of, you know, this is a right for them, but not for you. But as far as what you can do about it, it's the bot management piece, which you should take away from this. Because if you have an internet facing website that allows users to create accounts, I think you should take a look at your account creation logs [00:14:00] and see if you're actually getting hit by a lot of bots creating, or being hit by any bots attempting to create accounts on your site. 
 And if your organization allows this and is one that could be used for fraud, which I think most of them probably can, you might want to look into a bot mitigation solution. I mean, Arkose Labs, for instance, but there's several out there. I wouldn't say Arkose Labs is the one you should go with, but that might be something you want to look at, depending on the return on investment for doing that mitigation. 
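(For listeners who want to poke at this themselves: a minimal sketch of the account creation log review David describes, assuming a simple CSV export of registration events with timestamp, ip, and email columns. The file name, column names, and threshold are illustrative assumptions, not anything from the article.)

```python
# Hedged sketch: count account creations per source IP from a CSV export of
# registration events and surface the heavy hitters for human review.
# The file name, column names, and threshold are assumptions -- adapt them
# to whatever your registration flow actually logs.
import csv
from collections import defaultdict

THRESHOLD = 20  # signups from a single IP in one export that warrant a look

def flag_signup_sources(path: str) -> dict[str, int]:
    per_ip: dict[str, int] = defaultdict(int)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):  # expects columns: timestamp, ip, email
            per_ip[row["ip"]] += 1
    return {ip: n for ip, n in per_ip.items() if n >= THRESHOLD}

if __name__ == "__main__":
    for ip, count in sorted(flag_signup_sources("signups.csv").items(),
                            key=lambda kv: -kv[1]):
        print(f"{ip}: {count} account creations -- possible bot activity")
```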
 Matthew: All right. Good times. I would say that maybe this will lead to less... Nope. Nope. It won't. All right. On to our second article of the day. AI controlled killer drones become reality. 
 David: Yay. 
 Matthew: yeah, so excited. I don't know. What was the name of that book that we read? 
 David: And they talk about 
 Matthew: Why can't I remember? Daniel Suarez, Kill Decision. 
 David: Oh [00:15:00] yeah. 
 Matthew: Yeah. Where he hypothesized about these these killer drones. 
 So apparently it's becoming a thing. Countries are currently debating putting limits on what they are calling lethal autonomous weapons in the U. N. The U. S. and China are telling them, no, we are not going to put limits on lethal autonomous weapons. Which strongly implies to me that they both either have these or are very soon expecting these to exist. 
 David: Well, I mean, it doesn't matter. The UN is a paper tiger. They get, they have no enforcement as has been said before the whole planet. Is in a state of anarchy between the nations. You know, there is no world government to say, Hey, you can't do this. You can't do this. Even if they, there was some agreement at the UN, no one's going to follow that. 
 Matthew: This has been exacerbated by the recent use of human controlled drones in Ukraine, the Middle East, and the Azerbaijan versus Armenia conflict. I recently read Seven [00:16:00] Seconds to Die, which is a military analysis of drone use in the war between Azerbaijan and Armenia. Azerbaijan used drones, specifically Turkish and Israeli built drones, to target and wipe out Armenia's emplaced positions and their anti aircraft and artillery vehicles. 
 Super short summary of that. Armenia was expecting Azerbaijan to come up into these mountainous passes and then just get totally destroyed by the fortifications and emplacements they had. And Azerbaijan was like, Why would we do that and instead use drones to destroy them? 
 David: But do you think that, that the acceleration in drone technology may have actually finally gotten to the point where Azerbaijan was actually able to kick out the Armenians in that 
 Matthew: I can't, I can't even pronounce that area. So I don't know how 
 David: no, no. I'm trying to think about the term for an isolated part of a country, [00:17:00] um, that's separate from the main country. 
 There's a term for it. It's eluding me at the moment, 
 Matthew: Oh, I don't know. 
 David: Because there was just a thing where Russia has an enclave, where the Lithuanians were preventing the Russians from running the train to their piece of land there. But I'm wondering if, you know... cause this has been like this for a long time, and I'm wondering if, just because they have drones today, that allowed them to actually be able to toss those people out, uh, whereas they didn't have the capability to do that before, but this allowed them to do it. 
 That's what I'm 
 Matthew: I don't know if I recall correctly, it's been probably six months since I read this, so maybe I'm recalling it wrong. Never mind, I'm not even gonna try and remember it wrong, so never mind. But the title came from: supposedly, if you heard the whine of the drone, you had approximately seven seconds to find cover. It's a super interesting book and I'm sure they're gonna be writing one very similar about Ukraine. 
 'Cause in Ukraine they're continuing to evolve that. In Ukraine they've [00:18:00] been using commercial drones. I recently read an article from a journalist that was there, and they're using, uh, what are those really popular ones in the US? 
 That photographers use. They're apparently using those and they're, like, 3D printing bomb holders so they can, like, tip the drone and it'll drop the bomb. 
 David: That's handy. 
 Matthew: Yeah, but humans are so slow. Like, humans controlling drones are so slow, we need to make them kill faster. And the U.S. is leaning in on this, quote, Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would field attritable autonomous systems at scale of multiple thousands in the coming two years, saying that the push to compete with China's own investment in advanced weapons necessitated that the United States leverage platforms that are small, smart, cheap, and many. 
 So that's interesting because, frankly, our philosophy over the last 20 or 30 or more years has been more big, expensive platforms: super expensive fighter jets, really expensive big [00:19:00] tanks, and gigantic, uh, aircraft carriers that house thousands and thousands of sailors and cost billions of dollars. 
 David: Yeah, she must be new. 
 Matthew: The U.S.'s drone is actually going to be like an F-35 with, like, a robotic arm hitting the stick, 
 David: Well, it may, it may not be that, but you know, it's kind of like the thing where you can have it fast, good, and cheap. Right. 
 Matthew: yeah, or pick two, 
 David: Yeah, right. So cheap is out the door, so it's never going to be cheap 
 Matthew: it's never going to be cheap. I mean, it's going to be, maybe it'll be 
 David: contractors would not go for cheap. No way. 
 Matthew: Yeah. So I'm actually curious whether you think that autonomous drones are that much more of a stretch than loitering anti radiation missiles. So there's certain anti radiation missiles, which target radars, that you fire and they just kind of hang out until somebody turns a radar on, then they go for it. 
 Or landmines, they mention landmines in the article. You can distribute landmines and they'll kill people decades afterwards.[00:20:00] 
 David: Yeah. I mean, we're still killing Cambodians, Laotians and Vietnamese today. 
 Matthew: Yeah. 50 years later. So, I've definitely read fiction books over the last 20 or 30 years. I've been talking about autonomous missiles and micro ammunitions for years. And honestly, I don't know if any of those are real or fictional. I don't pay enough attention to military strategy to actually know. 
 I've been envisioning it for quite a while. 
 David: I mean, that's the whole reason we even have DARPA, you know, it's not to give us stuff like the internet. It's, you know, that the whole research agency is designed to, to find stuff to help us kill faster, better. 
 Matthew: So, I don't know. Do you think it's moral to use autonomous weapons? 
 David: I don't think it's moral to use weapons. So I 
 War is immoral. So I don't think it, I don't think whether you've got a human pull the trigger or an AI pull the trigger, I don't think it matters. 
 Matthew: come on. What are you talking about? Killing the cream of the crop of another country is not moral. Taking their best and brightest and putting them through a meat grinder. 
 David: no, it's not. Matter of [00:21:00] fact, I was just listening to an interview with someone. They were talking about Tolkien and how Tolkien almost got killed in the First World War. I was like, how much worse off would we be without, uh, The Hobbit and The Lord of the Rings if he'd gotten killed in some trench, you know, fighting the Germans. And he had a lot of friends as part of a writing group that didn't come back from that war. So who knows how much we lost in that war just by those guys dying. 
 Matthew: So I was trying to be funny and then you go out here and get all serious. 
 David: No, I think about it too much, I guess. 
 Matthew: That's fair. Do you think this will make the use of lethal force in small conflicts more common or less common? 
 David: Oh, more common. Absolutely. It's the whole idea of the law of supply and demand, right? So if you reduce the cost of something, the demand is going to increase, and they say that war costs in blood and treasure. So if you reduce the cost of those, you're going to get more war, period. And if you just look at the scale of wars that have taken place since [00:22:00] the U.S. left the gold, since the world left the gold standard, really, because anybody can print as much money as they need to wage their wars, the scale has gone off the charts since that happened, you know, at the beginning of the last century. 
 Matthew: Yikes. Do you think it'll make more or fewer mistakes than an 18 year old fresh off the draft 
 David: That's hard to say. 
 Matthew: Yeah, this is, this is almost like the talk about AI driving. Sure. It has accidents sometimes, but it's still probably safer than your average driver. 
 David: Well, it really depends on the programming though, because you could program it to do horrible things, right? For instance, right now they do what they call signature strikes, which is based on your cell phone activity. They may decide to kill you: your cell phone talked to that cell phone, which talked to that cell phone. 
 They say, oh, well, this is probably a bad guy. So we're going to kill you based on that decision. So if those decisions are not sound, it doesn't matter if an AI or a human make those decisions. The reasoning behind the execution is bad, right? So I don't think it's, I don't think it makes a difference, really. 
 I [00:23:00] think you're probably going to get an equal amount of horribleness that takes place. Because the thing is, when they talk about AI, AI killing machines, you're still talking about bombs, right? So it's not a precise weapon. We're not talking about flying snipers that are going to see one bad guy and shoot one bad guy. 
 They're saying when they're going to drop a bomb, it's going to kill more than one person every time. So even if they were only trying to kill one person, they're going to kill many. And I think bombing itself is immoral. 
 Matthew: I don't disagree, but all right. So while we are discussing weapons and we're going to move on from the sad and depressing part of this podcast, 
 David: Why are you talking to me then? Maybe you should go. 
 Matthew: and there's an absolute difference between something that kills someone and something that doesn't, the same discussion is actually going to occur for just about everything. 
 For example: AI agents don't really exist yet, but at some point in the near future you're going to have an AI agent that you're going to be able to tell the things that [00:24:00] you like, and then it will be able to perform actions automatically based on those things you tell it. For example, do you want your AI agent to order the food in a restaurant when you walk into the restaurant, based on the stuff that you have liked in the past? That actually might be kind of convenient. 
 Maybe that's a bad example. Cause that's actually useful and convenient. 
 David: Well, I think the thing is, what's the downside risk of the AI making a mistake? You know, we're talking about killing weapons; it's a high downside risk if they do wrong. You know, if they order something that's got peanut oil on it and you've got a severe reaction, that's also something that's terribly wrong. 
 But, you know, if they order a Cobb salad when you wanted a Caesar, you know, not really that bad of a downside. 
 Matthew: Unless it kills the cook at the end of it. Another one I was thinking of that might be kind of cool is having AI automatically buy the clothes that you'll like. You know, give it a budget, tell it, I want to spend a hundred dollars a month on clothing. Tell it what clothes you have, what clothes and brands you like. 
 It keeps [00:25:00] track and replaces them as necessary. Tell it something like, I want to maintain three weeks' worth of socks at all points in time. When you throw out a pair of socks, cause it has a hole, it'll just bring in another pair. I want to have, you know, five pairs of jeans, and whenever one of them goes, it'll, like, bring me the next one. 
 David: you know, my AI would die of boredom. 
 Matthew: This might keep me from spending too much money on clothing. 
 David: Yeah. For certain people that would be beneficial. 
 Matthew: yeah. And other people would hate it too. People that like shopping would be like, no, I want to go out and pick my individual stuff. Whereas Once you like figure out exactly what you want. And you're just like, I just want this for the rest of my life. 
 David: Yeah, I'm kind of along the lines of, man, what was it? I think it was Einstein who supposedly wore the same clothes all the time, or the character from The Fly. You know, not a lot of variation in the clothes that I wear until 
 Matthew: Mark Zuckerberg. 
 David: it wears out, literally you know, which is not too often. 
 Matthew: That's fair. Oh, you know, when you go around not wearing clothes, [00:26:00] most of the time. 
 David: Are you peeping in my windows again? I told you to stop doing that. 
 Matthew: so I thought about this for security, and honestly, most of this sounded really good: like automated patching, automatic containment, automated password reset. Some places are already doing that. I'm not sure. I guess AI could add more context and maybe more, uh, more specificity to doing those things. 
 More complexity, like a more complex decision based on more factors. But honestly, all those kinda sound like any good 
 David: Well, I think we talked about this before, how AI could probably really replace your threat intel team. Cause if you had an AI agent 
 and just said, Hey, these are the things I'm concerned about and just had it monitor the web, say, Hey, when something like, it's almost like the what was it? The Google news alerts. 
 I don't know if you can still do that anymore, but that kind of thing, 
 Matthew: I have one on myself. It 
 David: don't have to do specific words. You just say, Hey, this is what we're concerned about. And it'll figure that out. 
 Matthew: Yeah. I still have a Google news alert on my own name. It goes off all the time. 
 David: A lot of brown nosers out there. 
 Matthew: [00:27:00] There's a Matthew Keener. That's actually really popular. There's a lot of stuff out there, but it's not me. 
 David: Well, you should go and take over his life and you'll be popular. 
 Matthew: There was a side note. Did you put this or did I put this, I think you put this in here. 
 David: Yeah, there was a mention in the article about it, and there's a quote: officials from China and the United States discussed a related issue, potential limits on the use of AI in decisions about deploying nuclear weapons. That is terrifying. I mean, have these guys not seen the movie War Games or Colossus: The Forbin Project? 
 This, this, that does not end well, period. 
 Matthew: I guess the question is, are they going to allow the, when they say decisions, does that mean recommendations or does that mean actually has its finger on the button? 
 David: That's a good question. I don't know. But considering who we have in our government, I think this might actually be a benefit though, now that I think about it. Um, cause they're idiots. 
 Matthew: Actually even recommendations might be bad. I'm sure you've seen the talk about [00:28:00] AI as a super persuader. 
 David: Oh yeah. So it's like the AI Milgram experiment. Please proceed. 
 Matthew: so why does this matter? Well, we're about to turn over a lot of decisions to AI in the next few years, and weapons is probably the one with the most consequences and moral complexity, but we're about to do it in a lot of places in our lives. And we need to start thinking about these limits and how AIs make decisions and how much veto power we as humans want to have over what the AI does. 
 So to keep you from getting that Cobb salad when you really wanted a Caesar. 
 David: Yeah. I think the other trouble is here, you know, government's making these kinds of decisions. It's because their incentives are completely different than the regular person's decisions. 
 Matthew: You know, this is actually a place where I could see an AI being really beneficial. People have talked before about how the perfect government is a benign monarchy or a benign authoritarian, where they allow you the maximum amount of personal autonomy but you don't have to deal with all the messiness [00:29:00] of democracy and stuff like that. 
 We can, we can argue over, you know, what type of authoritarian matches with your values and what you think they should be. But I could, I can actually see an AI if it's written correctly and it has the right limits being an effective governor. 
 David: Well, probably better. The problem is, like you said, who's going to write it. Because imagine if they write it with you know, all this moralizing in there. So you still have blue laws and stuff like that, because it decides, Oh, well, we think that's bad. 
 Matthew: Yep. 
 David: You know, so it, like anything else, it's going to boil down to the input that goes in there, you know, what the programming is and everything. 
 So. I don't know. I'm, I'm not sure if I'm a glass half full or a glass half empty on this whole AI thing, other than to say that it makes me nervous the more government gets involved in it, 
 Matthew: Definitely a glass half empty. 
 David: because the whole thing there is, is, as George Washington said about government, government is force, period. So anything they do is around that construct.[00:30:00] 
 Matthew: All right. Article three, article three. Bring me in. 
 David: All right, so this is another Microsoft article. Threat actors misuse OAuth applications to automate financially driven attacks. And this is actually from a Microsoft blog entry. Uh, so threat actors are misusing OAuth applications as an automated tool in financially motivated attacks. And OAuth, I think we've talked about OAuth before, but OAuth is an open standard that allows for access delegation, and it's commonly used for token based authentication and authorization. 
 And this allows a user to grant third party access to web resources without sharing their credentials. And it also allows applications to authenticate with each other using tokens instead of passwords. 
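(A minimal sketch of what that token based, app-to-app authentication looks like in practice, using the Microsoft identity platform's client credentials grant. The tenant and client values are placeholders; this is illustrative, not taken from the Microsoft blog itself.)

```python
# Hedged sketch of token based, app-to-app OAuth: an application trades a
# client ID and secret for a short-lived bearer token instead of ever
# presenting a username and password. The endpoint is the Microsoft identity
# platform client credentials grant; TENANT/CLIENT values are placeholders.
import requests

TENANT_ID = "<your-entra-tenant-id>"        # placeholder
CLIENT_ID = "<app-registration-client-id>"  # placeholder
CLIENT_SECRET = "<app-client-secret>"       # placeholder

def get_app_token() -> str:
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # bearer token presented on later API calls

# It's the token (never a user password) that accompanies each API request --
# which is also why adding a rogue secret to an existing application, as the
# article describes, is so useful to an attacker.
```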
 Matthew: Yeah. Oh, I've used OAuth for a couple of things and I find it to be a real pain in the butt. I don't know if I like it better... I actually don't know if it's more... I have to assume that it's more secure than using username and password. Right?[00:31:00] 
 David: Well, that's the claim. But Microsoft's Threat Intelligence said that threat actors are launching phishing campaigns or password spraying attacks to compromise user accounts that don't have MFA and that can modify OAuth applications. And what we're talking about here with Microsoft, we're talking about M365 and Azure as far as this goes. 
 So OAuth enables the attackers to maintain access to the applications even after they lose that initial access to the account that they took over in the first part here. So they've basically been doing three basic activities with these accounts once they leverage them. So crypto mining is the first one. 
 So they'll create it. 
 Matthew: Why would they do that? 
 David: I don't know. You'd think there's money in it. 
 Matthew: it's a problem. Yeah. As long as someone else is paying for the electricity, 
 David: Yeah, we'll get to that here in a second. So they'll create a single tenant OAuth application in Microsoft Entra ID, [00:32:00] which formerly was Azure AD, that's named similarly to the existing Entra ID tenant domain. And they'll add a set of secrets to the application, and then grant contributor role permissions for the application to one of the active subscriptions, and then add an additional set of credentials to the existing line of business OAuth applications. 
 And this allows them to create a set of VMs that are going to do the actual crypto mining. And when they create these VMs, they're also mimicking the organization's naming convention to kind of hide them in the group of VMs and avoid detection and suspicion. And since they've been doing this, targeted organizations have incurred compute fees ranging from $10,000 to $1.5 million, um, because of these additional VMs doing crypto mining. 
 Matthew: you [00:33:00] know, that's still kind of pocket change for a lot of organizations. I wonder, I mean, you have to be watching your cloud stuff pretty closely to spot this pop up, right? 
 David: I don't think you have to watch it too closely if you're properly watching your VM usage activity. But I'm not sure if all of this is in the same timeframe or not, cause I'll skip down here to the phishing section for a second and say that the phishing stuff ran from July to November of this year. 
 So if you ran up 1.5 million in just those couple of months, I'd say that's a fairly considerable amount of resources being used in that short period of time. 
 Matthew: That's fair. That's fair. I'm just trying to think, especially if you're a company that spins up and kills stuff pretty quickly. I guess it really depends on how you're doing your cloud. If you're doing it in the standard way, where you lifted and shifted on prem, you know, you have a hundred servers, then you would notice if somebody spun up another hundred servers. 
 David: Yeah, well, I [00:34:00] guess that also goes to the point of, do you know yourself? 
 Matthew: Yeah. 
 David: In this so that you would see that as an anomaly. So if you had 10 machines that are running at a hundred percent CPU is that unusual for your organization or not? Maybe you want to look at that. 
 Matthew: And just because, again, I'm thinking of... if you're one of those places that's constantly spinning up and killing machines based on the amount of, you know, volume your application is getting or whatever, then you might not see this unless they spin them up and keep them up. 
 And then you're like, well, actually you're right. You probably would be more likely because you're probably paying way more attention to how many VMs you have. And if you saw it come up and come down, you'd be like, oh, this is normal. But if it comes up and stays up and you're like, whoa, something's wrong. 
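(A hedged sketch of one way to keep an eye on this: pull recent Azure Activity Log entries and list who created or updated VMs, so an unexpected burst stands out. It assumes the azure-identity and azure-mgmt-monitor Python packages and a placeholder subscription ID; a Log Analytics or SIEM query against the same audit log would work just as well.)

```python
# Hedged sketch: list recent VM create/update operations from the Azure
# Activity Log so an unexpected burst of new VMs stands out. Assumes the
# azure-identity and azure-mgmt-monitor packages and a placeholder
# subscription ID.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

def recent_vm_creations(hours: int = 24):
    client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    since = (datetime.now(timezone.utc) - timedelta(hours=hours)).isoformat()
    flt = f"eventTimestamp ge '{since}' and resourceProvider eq 'Microsoft.Compute'"
    for event in client.activity_logs.list(filter=flt):
        # Keep only VM write operations (create or update); skip everything else.
        if event.operation_name and event.operation_name.value == "Microsoft.Compute/virtualMachines/write":
            yield event.event_timestamp, event.caller, event.resource_id

if __name__ == "__main__":
    for when, who, resource in recent_vm_creations():
        print(f"{when} {who} -> {resource}")
```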
 David: So of course the next attack that the attackers were leveraging is phishing. So they would send a phishing email with a malicious URL that leads to a proxy server that facilitates a genuine authentication pass through. I thought this was kind of [00:35:00] interesting, because I haven't thought about this in a while, but if you get a phishing URL and you click on it, a lot of people assume that that link is going to take you to a lookalike domain or, you know, a fake site where they're going to steal your credentials, versus passing you through a proxy that's simply going to take that traffic, suck out your credentials from there, and allow you to authenticate to the real site anyway. 
 So everything is going to look totally normal to you, because you're going to actually log in to the site. You're not going to get an error or anything like that either. So I think that's a pretty clever idea. I'm not sure how many attackers are actually leveraging that. Because what they'll also do is steal the token from the user's cookie session and later leverage that stolen token to perform a session cookie replay activity. 
 Matthew: That's definitely a lot more common now, because so many places have gone to MFA, which basically just almost [00:36:00] completely shut that off, other than the occasional person who just says yes whenever they get the MFA prompt. 
 David: Of course. Why wouldn't you? Yeah. I think we talked about that before. We used to have push notifications turned off as a 
 Matthew: Yeah. And you can, and Microsoft rolled out the ability now to do number matching, which is helpful because it pops up on the screen and says, what number do you see? And you put the number in there. So, so they've just moved on to the next thing. It's hilarious. Like we've been, we were like, ah, we've solved it. 
 And then. They just figure out the next thing. 
 David: Right. It's an ever, ever escalating battle. 
 Matthew: Yep. 
 David: So something else they were doing as part of this is opening Outlook webmail attachments that contain specific words such as payment or invoice, so that they can then start injecting BEC text in there. And in this whole phishing effort, the attackers have created 17,000 multi tenant OAuth applications across different tenants. 
 Matthew: Wow. 
 David: And they're leveraging the [00:37:00] Microsoft Graph API to read emails and send high volumes of phishing emails, both internally and externally, as part of this. And I mentioned before that this ran from July to November. And in that time period, they sent almost a billion emails. They sent 927,000 phishing emails in that period, which 
 Matthew: it is a lot. I wonder if any of these hit my organization, 
 David: Well, certainly could be when you're talking about a billion emails, that's a certainly a possibility 
 Matthew: you know? 
 David: That may have... I mean, maybe there's not an organization in America that didn't get hit by that. I don't know. Really depends on who their external targets were. And of course the last attack type that these guys were doing was spam, which, quite frankly, is just boring. 
 Not going to bother talking about that at all. 
 Matthew: what? We're not going to spend the next half hour diving into it. 
 David: Well, [00:38:00] you go ahead. I'm going to take a nap. You can wake me when you're done. 
 David: But in the blog post, Microsoft had several recommendations for things you could do in order to prevent or detect this. The first one, that Matt and I already mentioned, is monitoring the creation of VMs in the Azure Resource Manager audit logs. The next one, of course, is enabling MFA, which may have prevented the action from taking place, so they wouldn't be able to brute force the accounts. They also said you should use conditional access policies for user and sign-in risk, device compliance, and trusted IP addresses, so that depending on what devices users sign in from, and from where, access into the Azure account will be conditional. They said ensure continuous access evaluation is enabled, and what that does is revoke access in real time when changes in user conditions are triggered. So [00:39:00] again, a different device, a different user agent, a different IP: you could set up a trigger to automatically revoke access based on that. And, of course, you should already have Microsoft Defender automatic attack disruption turned on. 
 Matthew: Is that sarcasm? 
 David: You know I'm never sarcastic. 
 Matthew: This is the Microsoft recommendations, right? 
 David: Yes, it is. 
 Matthew: right. Yeah. Let me guess. That's part of E5. 
 David: Actually, I'm not sure. 
 Matthew: I think it is because I don't have E5 and I don't have it. 
 David: Okay, well, there you go. They also said you should regularly audit your apps and consent permissions. 
 Matthew: So I tried doing that once, and it didn't give me enough information to tell whether any given app was appropriate or not. I mean, none of them were obviously labeled with this is a hacker's app, please do not touch me. 
 David: I'd say that's, that's a lot easier said than done. 
 Matthew: That one's tough. 
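(A minimal sketch of what "audit your apps and consent permissions" can look like via Microsoft Graph: listing OAuth2 permission grants so you can at least see which applications hold which delegated scopes. It assumes you already have an access token with directory read permission, and, as Matthew points out, nothing will be labeled as a hacker's app; unfamiliar apps with broad scopes are what you're looking for.)

```python
# Hedged sketch: enumerate OAuth2 permission grants via Microsoft Graph so you
# can review which applications have been consented which delegated scopes.
# Assumes an access token with directory read permission (e.g., from the
# client credentials sketch earlier) and the requests package.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_consent_grants(token: str):
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{GRAPH}/oauth2PermissionGrants"
    while url:
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        for grant in payload.get("value", []):
            # clientId = service principal that received consent;
            # scope = space-separated delegated permissions it was granted.
            yield grant["clientId"], grant.get("consentType"), grant.get("scope")
        url = payload.get("@odata.nextLink")  # follow paging, if present

# Cross-referencing clientId against /servicePrincipals gives you display
# names; unfamiliar apps with broad mail or directory scopes are the ones
# worth a closer look.
```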
 David: And here's, of course, their main recommendations. 
 Matthew: E5, E5,[00:40:00] 
 David: Buy Microsoft Defender XDR, Defender for Office 365, the Defender for Cloud Apps application governance add-on, Microsoft Defender for Cloud, and, of course, Microsoft Entra ID Protection. Make sure you buy all this stuff. 
 Matthew: Each one of them a low, low $5 per person per month. 
 David: I bet it's more than five bucks, but nah. 
 Matthew: Wild. 
 David: So there you go. 
 Matthew: There we go. All right. For our last one, it's actually... there's some duplication. I was just reading that one and I'm like, wow, a lot of those same recommendations are going to apply to the next article. 23andMe says, actually, some genetic and health data might have been accessed in a recent breach. 
 This pisses me off. Because we 
 David: I could tell by your notes, like a lot of capitalization and cussing. 
 Matthew: we keep trusting these stupid companies, who do not care one bit about security, with data that's potentially almost incalculable in value. And maybe 23andMe is not a phenomenal [00:41:00] example of that, because apparently the data they have is not all that specific, at least not the data they make available to the users. 
 They have more specific data in the back. But anyways, in October there was a report that the data of up to 7 million accounts was for sale on a crime forum. Now they have filed their 8-K and we get some details. I actually didn't... I meant to go back and read the 8-K and I didn't; that was very foolish of me. 
 Attackers performed a credential stuffing attack and might've gotten access to as many as 14,000 accounts, or 0.1 percent of their total. Now, 14,000 accounts, that's bad, sure. But unfortunately they have a feature called DNA Relatives, which allows you to share your information with related people. 
 So from those 14,000 compromised accounts, they ended up with 5.5 million people's information, including health related information, DNA percentages, relatives, predicted relationships, and ancestry reports. There were an additional 1.4 million users who were [00:42:00] connected via family tree information, but had less data exposed. 
 David: So, so what this sounds like, this is partially the users' fault, by them allowing strangers with some kind of genetic relationship to get access to their data. Or am I misunderstanding what that means? 
 Matthew: That is exactly right. 
 If you're related, they're not going to 
 David: able to get in there and muck around with your information just because you, you have some, um, some genetic relationship. 
 Matthew: Yeah. Just because two people, five generations ago, did the horizontal lambada. 
 David: yeah, I was gonna say, cause it doesn't, and maybe they do have something that's more specific saying you can only be so, so deviant, if you will, from the previous person. 
 Matthew: Oh, the problem is the problem is I'm related to Genghis Khan. So now I get to share my information with 110 billion people or whatever. 
 David: I think, like, half the planet or more is related to Genghis Khan, because he was kind of busy. I don't understand how he had time to conquer with as many concubines and kids as he had. I mean, when did he make time to conquer? I don't, 
 Matthew: now the question is, what was he conquering? 
 David: Well, I mean, just think about how many people would not be alive, you know, when you take him out of the time stream. 
 Matthew: these people would still have gotten, would still have procreated, I think. 
 David: I don't know. I 
 Matthew: So, people are still vulnerable to this. Although it's funny, cause not 20 minutes ago I was complaining about CAPTCHAs, but I'm about to recommend them anyway. Anyway, why aren't companies doing a check of exposed password information using something like Have I Been Pwned? Cause then you've got the whole list of passwords from Have I Been Pwned. You've got your database with the hashed passwords, but you know what your salt is, so you should be downloading and hashing and checking those on a regular basis. 
 And if you get someone with a compromised password as a company, you should be paying for it. 
 David: I'm just saying, there's a bar. [00:44:00] Companies don't want to, you know, don't wanna go over it. 
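(A minimal sketch of the Have I Been Pwned check Matthew is describing, using the Pwned Passwords range API and k-anonymity so only the first five characters of the SHA-1 hash ever leave your server. In practice you'd run this at signup, login, or password change, when the plaintext is in hand. Assumes the requests package.)

```python
# Hedged sketch: check a candidate password against the Have I Been Pwned
# "Pwned Passwords" range API. Only the first five hex characters of the
# SHA-1 hash are sent (k-anonymity); the match is done locally. Assumes the
# requests package.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response is lines of "HASH_SUFFIX:COUNT" for every hash sharing the prefix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    if pwned_count("correct horse battery staple") > 0:
        print("Seen in a breach -- block it at signup or force a reset.")
```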
 Matthew: No. Why aren't they detecting and blocking people trying multiple logins from the same IP address? So apparently. 
 David: right there. That is, that is the first thing I thought of when I, when I read that is like, why, why, why is this a thing? 
 Matthew: They don't even have to block, just rate limit it. You know, the first try is, you know, a one millisecond wait time, and then start going up geometrically, 
 David: still, 
 Matthew: although apparently a lot of these credential stuffing networks that they used allow you to come in from hundreds of IP addresses. So you're not seeing like a million logins from one IP address. 
 You're seeing like 100 logins from one IP address 
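(A hedged sketch of the per-IP rate limiting Matthew suggests: the first attempt is essentially free and the enforced delay grows geometrically with each failure from the same address. The function names and constants are illustrative, and, as noted above, distributed credential stuffing still needs the reputation and risk scoring discussed next.)

```python
# Hedged sketch of per-IP geometric backoff for failed logins: the first
# failure costs almost nothing, each additional failure doubles the enforced
# delay. All names and constants here are illustrative, not any product's API.
import time
from collections import defaultdict

BASE_DELAY = 0.001   # seconds after the first failure
FACTOR = 2.0         # geometric growth per additional failure
WINDOW = 3600        # forget failures older than an hour

_failures: dict[str, list[float]] = defaultdict(list)

def _recent(ip: str) -> list[float]:
    now = time.time()
    _failures[ip] = [t for t in _failures[ip] if now - t < WINDOW]
    return _failures[ip]

def required_delay(ip: str) -> float:
    """Seconds to wait (or to reject outright) before processing this attempt."""
    n = len(_recent(ip))
    return 0.0 if n == 0 else BASE_DELAY * (FACTOR ** (n - 1))

def record_failure(ip: str) -> None:
    _recent(ip).append(time.time())

# Usage: before verifying credentials, sleep or reject based on
# required_delay(client_ip); after a bad password, call record_failure(client_ip).
```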
 David: I, I think you could even, you could even put a cap on it and say, you know, let's say you have a large family. 10. Right? 
 Matthew: Yeah. 
 David: More than 10 
 Matthew: Yeah. How many people, how many people from a household should be logging in? And maybe it's different for a company, but again, you should be able to look at... so, have you heard of IPQualityScore? 
 David: No, I don't think 
 Matthew: Okay. So they do [00:45:00] IP fraud intelligence. It's kind of like a bot management thing, like we were talking about before, where if you buy a subscription from them, they give you all kinds of information about the IP address. 
 They tell you how often fraud has been reported from that IP address in the past whether it's a commercial. Like a, like a data center or it's a residential IP address. And it's meant for companies to block stupid stuff like this, 
 David: Mm-Hmm. 
 Matthew: Where every time somebody tries to log in or connect to your thing, you can go there and look and be like, oh, this is high fraud related. Like, have you ever gotten a CAPTCHA when trying to log into a website and you're like, that's weird? 
 I don't normally get this. Well, the IP address you have right now was probably reported for fraud previously. 
 David: Mm-Hmm. 
 Matthew: Yeah, so you can do risk scoring for logins. You know, maybe it's a new IP address. Maybe it has a history of fraud. Maybe it's a data center. And then force security questions or MFA or, God forbid, CAPTCHAs, uh, or, if it has a fraud score that's too high, just block it straight up. 
 So at this point, if you're not prepped for and blocking this... like, this [00:46:00] is just... at this point in time, credential stuffing protection should be considered 'you must be this high to ride the internet.' 
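(A hedged sketch of that kind of login risk scoring: fold a few signals, an IP reputation score, datacenter versus residential, new device, breached password, into one number and decide whether to allow, step up to MFA or a CAPTCHA, or block. The weights, thresholds, and field names are invented for illustration; a real deployment would tune them against its own traffic.)

```python
# Hedged sketch of login risk scoring: combine a few signals into one number
# and pick an action. The weights, thresholds, and LoginContext fields are
# invented for illustration; feed them from your IP reputation provider,
# device store, and breached-password check.
from dataclasses import dataclass

@dataclass
class LoginContext:
    ip_fraud_score: int      # 0-100 from an IP reputation feed
    is_datacenter_ip: bool   # hosting/VPN ranges are riskier for consumer logins
    known_device: bool       # seen this browser/device for this user before?
    password_breached: bool  # e.g., result of a Pwned Passwords check

def risk_score(ctx: LoginContext) -> int:
    score = ctx.ip_fraud_score
    if ctx.is_datacenter_ip:
        score += 25
    if not ctx.known_device:
        score += 15
    if ctx.password_breached:
        score += 20
    return min(score, 100)

def decide(ctx: LoginContext) -> str:
    s = risk_score(ctx)
    if s >= 85:
        return "block"     # or force an out-of-band password reset
    if s >= 40:
        return "step_up"   # MFA prompt, CAPTCHA, or security questions
    return "allow"
```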
 David: yeah, 
 Matthew: So this is just incredibly disappointing. So why does this matter? Well, what could you possibly do with genetic data? 
 It turns out it's not very specific genetic data. So my first thought was, uh, creating custom viruses or bacteria that target specific people. But it doesn't sound like they maintain that level of data. And I know that sounds super sci fi right now, and it is super sci fi right now. 
 But your genetic data is not going to change. What if somebody steals your actual DNA sequence now and then in 20 years? Somebody decides that you're a part of a group of people that they want to kill. 
 David: no, they already claimed that they're trying to limit AI from Working with biological material, right? 
 Matthew: yeah, 
 David: You have your AI overlord decide that you're too, you're too nosy 
 Matthew: Yep. You know, they might just decide to kill all people. But a more realistic one is [00:47:00] finding and killing people who are in a specific group. Right now, with Hamas and Israel in their big fight, what if someone, what if a Hamas terrorist bought the information on these 7 million people and looked up people that were Jewish and decided to kill them, or an 
 David: that would not be a good 
 Matthew: decided to do the opposite? 
 David: Well, that would not be a good idea for Hamas, because there's a lot of Jewish blood in the Middle East overall, because not all the 
 Matthew: Jews have been living there for, 
 David: a lot of intermarriage and procreation and everything. So I'd say probably most of the Palestinians have Jewish blood in them to some degree anyway. 
 Matthew: And the other way 
 David: out for, well, for them. 
 Matthew: Yeah, probably. We actually, Dave and I were joking in our pre call about this, like almost like reverse blackmail, figuring out from genetic information that something is wrong with somebody and then trying to force them to pay you to tell them like, you're going to die in five years. 
 Do you want to know how? It's a bullet. 
 David: Well, that would be, you're gonna die in five minutes. What's it gonna be from? Well, I was just thinking, you know, [00:48:00] that's gonna be one of the clickbait ads on websites now. It's gonna pop up on the side, like, hey, know when you're gonna die. 
 Matthew: Oh my God. Yeah. Upload... just, just prick your finger here in the little port on your computer. So, and this has been pointed out: with the growth of AI, all of a sudden we're going to be having more and more data collected about us. We already have tons of data collected about us, our location data on our phones, but as we expand, there's going to be more AI agents. 
 There's going to be more companies. There's going to be even more data. And it started getting me thinking about what types of other information, if it was exposed... what the consequences might be for me. So, my Spotify data: I don't want anyone knowing my number one artist this year was Debbie Gibson. 
 David: Wait, Debbie's the bomb. 
 Matthew: actually, actually I've started listening to her again. She's really good. Like Tiffany. I don't want to listen to Tiffany again. 
 I'll, 
 David: She, you know, she, I mean, she was only big for a very short period of time. 
 Matthew: Debbie Gibson, surprisingly talented. Anyways, there's [00:49:00] location data. Like, what if you like visiting adult stores? There was actually a story recently where somebody was anti prostitution, and they bought location data from one of those companies that sells it, and then they mapped the data to... I don't know how they got the location of the prostitutes, I couldn't tell you. And then they figured out all the people that went to those prostitutes, and then they threatened to expose all of them to their friends and family 
 David: Except Hunter Biden, you know, everybody already knew. 
 Matthew: with as much money as he spent on it, you're probably right. There's no way he would have been missed by that. But yeah, if you go to gay bars, adult stores, if you do anything that is in the least bit embarrassing, although maybe legal, that location data is already out there. 
 This is a problem we already have. Sales data from stores: ever buy anything embarrassing? Anything that ever gets shipped in a plain brown envelope? 
 David: Nope, never. Nope. 
 Matthew: Nope. I'm, I'm surprised that, like, [00:50:00] a lot of those tube sites aren't getting hit quite a bit as well, because this one's even worse in some ways. You ever watch something really embarrassing, maybe even accidentally, and somebody finds a record of that and threatens to send that to your wife or your boyfriend or whatever. 
 David: Oh, that's funny. Did you listen to this, this week's Smashing Security with Graham Cluley? 
 Matthew: yeah, I've not, 
 David: They're talking about the compromise of the credentials to a site that's for balloon fetishes. 
 Matthew: Oh, I'm not so about that, 
 David: Oh, hilarious. 
 Matthew: So as tech keeps increasing its hold over us, there's going to be more and more and more data. There's going to be personalized data of exactly what we like, so the AI assistant can negotiate with stores as we walk by, maybe see if there's sales. 
 There's constant monitoring of ourselves. Right now, I mean, I was actually just at a basketball game for my daughter and there were like [00:51:00] five people recording the basketball game, um, on their phones. Like, all this data is going into the cloud. It's all going to be incorporated. If we have these AI agents, they're going to be listening a hundred percent of the time, like Siri, waiting for us to say, you know, Hey, Becky. 
 And bring up our agent so that it can do stuff. And all that data, if somebody breaks into your agent or the cloud that your agent is based off of... I mean, who doesn't say anything embarrassing over the course of a day? Imagine, in the past we used to be concerned about hacked webcams in our bedrooms, but now it's going to be our glasses. 
 We're going to have these, you know, artificially intelligent or augmented reality glasses with cameras in them. We're going to have AI psychologists that we share all of our deepest, darkest secrets with and all that's going to be available. And you know, maybe you've got an AI girlfriend, but I don't think I'm going to go there. 
 David: I don't trust her. 
 Matthew: Trust her, trust her as much as you trust your real girlfriend.[00:52:00] 
 David: Well, you know, I think we talked about this before, that once we get this far down this path, either every AI agent is going to have to come with its own defensive agent, or every individual is going to have to have their own defensive AI to counter some of this stuff. 
 Matthew: Yeah, I agree. I think that your McAfee or your Malwarebytes or whatever is going to expand, and it's going to connect to APIs for all of your different services. Cause all these services generate things like login prompts. Can you imagine if your personal AI could connect to all these other services' APIs, monitor for strange logins, and warn you? It could, kind of like a password manager, also control your API credentials to all of your stuff. Ideally, in an ideal world, this would be like Linux, where you own your own API and that API lives, or the AI, I'm sorry, the AI lives on a device on your belt, like your phone. But I don't think that's ever going to happen. 
 That's too, [00:53:00] too lucrative for the companies to own your data, so they can sell more stuff to you, so that you can ask it for movie recommendations and they can just slip in, you know, whatever Netflix's hottest show this month is. 
 David: I don't think this is controllable, though. It may take a while, but I don't think they're going to be able to control it. I think it is going to get out, where you are going to have your own thing. I'm not sure it will be limited to that; you know, that's not going to be the only thing there. 
 They will still have theirs, but I don't think they're going to be able to prevent you from having your own and going around them. I don't think 
 Matthew: Well, I mean, but I think it's going to be the same, kind of the same thing as right now, where people buy, you know, Spotify, and then other people set up, what's the name of it, Plex or something, where you can set it up in your house. So yeah, some people will set up their own and then other people are just going to buy the corporate version. 
 David: Yeah, I mean, most people are going to use like iMessage, right? But other people are going to have their own chat tools that they use instead, [00:54:00] uh, for that same purpose. So I think it's going to be more like that where the majority of people are going to use the corporate thing, but the availability is going to be there for your own individual thing, if you want it. 
 Matthew: That's fair. All right. What should you do about it? Don't give companies your data. I say that hypocritically, 
 David: Yeah. If they say, Hey, swab your cheek and mail this to me. I would advise against it. 
 Matthew: as long as it's only my cheek, 
 David: Now, and with that, like, that's all the articles we have for today. Thank you for joining us, and follow us at SerengetiSec on Twitter and subscribe on your favorite podcast app.
