SS-NEWS-001 - Introducing the Security Serengeti!

Episode 1 March 14, 2021 00:53:41

Show Notes

Hosted by David Schwendinger and Matthew Keener, welcome to the Security Serengeti!

Please join us for our introductory episode where we take a look at the most recent news impacting the Information Security landscape.

The Register - Just 2.6% of 2019's 18,000 tracked vulnerabilities were actively exploited in the wild

IT Security Guru - International law firm Jones Day hacked with data posted on dark web

The Hacker News - SolarWinds Blames Intern for 'solarwinds123' Password Lapse

Medium: Anton on Security - Stop Trying to Take Humans Out of SOC … Except … Wait… Wait… Wait…


Episode Transcript

[00:00:14] Speaker A: Welcome to the Security Serengeti. We're your hosts, David Schwendinger and Matthew Keener. [00:00:19] Speaker B: We're here to talk about some recent headline news and hopefully provide some insight and analysis and practical applications that's going to help you in the office protect your organization. [00:00:29] Speaker A: Views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers. [00:00:36] Speaker B: And you should consider whether you really want to listen to this at all. [00:00:40] Speaker A: That's fair. All right, so we're starting off with news. The first article that we have is from The Register. It is on a report from Kenna Security, who is of course trying to sell you something. But they may have a point. The summary is there were 18,000 CVEs released in 2020, but only 473 of those vulnerabilities were actually exploited in the wild. So a quick breakdown of what they found: 77% of the published CVEs had no published or observed exploit, and 21.2% of them had an exploit publicly released. But even though 21.2% of them had an exploit released, only 1.2% have published and observed exploits, and only 0.6% have observed exploits. So basically, if you summarize all of that, 1.8% were ever observed actually exploited in the wild, which is absolutely crazy, especially given that 20% of them had the exploit publicly released. Nobody's weaponized them. We know about the exploit, but there must be some reason it's too hard to use, or maybe the software is not pervasive enough. Anyways. [00:01:55] Speaker B: So one of the questions I think you should ask yourself when you're looking at this is that it says 77% had no published or observed exploit.
Now, of that 77%, how many of those are in the top tier that your vulnerability management team is actually looking at and saying, hey, this is above an eight or nine or whatever your threshold is to say, this has to be done first. [00:02:19] Speaker A: I don't know the answer to that. I don't recall seeing that specifically on the graph. They do discuss using rules like that later in the paper to drive your vulnerability management program, so we can get to that in a little bit. They had some more information. This is about a 20 page report, and this is only part one. I only read part one. There's a bunch of other parts that come a little bit later, but that was 120 pages of reading. It was a little bit much for me this week. So, other tidbits in here: 50% of the eventually exploited vulnerabilities had proof of concept out by the time the CVE was published, and within one month of the CVE publishing, 70% of the exploits were released. So there definitely is a speed factor here. And there is no relationship between when the exploit is published and when it's actively exploited. They had a pretty steady line, it looked pretty steady for about two and a half years after the exploit was published, before it was actively exploited, before there was a dip. So that's interesting. If you had asked me, I would have imagined that there'd be a huge spike when the exploit is published, and then it would gradually drop over time as it was patched. [00:03:38] Speaker B: Well, it really depends on maybe how it's being leveraged though, because if you're talking about something where there's a lot of mass scanning going on, attackers might be just letting their tools run, and occasionally they're going to find something and take advantage of that. And there's probably some threshold where after 2.5 years there's just diminishing returns or something like that.
Or that's typically when attackers know that there's no value to continue down, or they've moved on to the next best thing or something like that. [00:04:13] Speaker A: On the opposite side, for vulnerabilities that were exploited before publication, the numbers were very low. It's a log chart, so it was in the tens to hundreds, not in the hundreds to thousands like after. I guess the hundreds after. So it's super rare that they're exploited before publication, except for maybe one instance that we'll talk about in a few articles. The big thing here, and this is their selling point, is they wrote a machine learning model to prioritize remediation efforts. And then they compared it to other strategies. And actually I think looking at some of the other strategies was really interesting. They had sort of simple ones like remediate everything over CVSS 5, remediate everything over CVSS 10. They had some strategies like remediate the big ones first, like Microsoft first, et cetera, et cetera. And all of them were not terribly efficient. For example, remediating everything with a CVSS score of seven plus was only 32% efficient. I'm assuming here efficiency is how many of the vulnerabilities you patched were actively exploited, because the ideal here would be, of course, that you patch the 473 that were actively exploited and you don't patch anything else, because that's being as efficient as possible. [00:05:36] Speaker B: Well, not necessarily that you don't patch those others, but you have those prioritized. Because if you get those 473 knocked out, I don't think that means you're done patching, right? It's just a matter of what you do first in order to make it timely and get the most bang for your buck right away. [00:05:53] Speaker A: Yeah, and the other part of that too is, as they said, it took up to 2.5 to three years for the active exploitation to start after the exploit was published. So there may be quite a bit of delay there.
So even though only 473 were exploited now, given their chart earlier, that actually suggests that 473 of the exploits released last year, around 500, will be exploited in 2021 and another 500 in 2022. [00:06:27] Speaker B: So it means you have a longer lead time than you think you have. [00:06:31] Speaker A: Generally, for many patches. For many patches, but not all. But not all. All right, so what are our takeaways from this article? Well, first of all, there's a lot of wasted effort if your goal is to simply patch everything, and you probably shouldn't just patch everything. What can you do about it? What should you do about it? You probably want to front load more time in your vulnerability management program rather than focusing on blasting the patches out. You probably want to spend a little more time figuring out which patches need to go out and prioritizing them. Unless of course you're in the government, in which case they just tell you to patch everything within 30 days. [00:07:11] Speaker B: Right, because it doesn't matter, patch them all. [00:07:15] Speaker A: The other thing that came up was for external items, you need to move faster. For stuff that's externally vulnerable, they had an average of two weeks or less for exploit code to be released for 50% of the exploited bugs. So that one is a little bit different. And then frankly, again, there's some examples in a couple of articles on Accellion where it was being exploited four or five days before the patch was released. And being externally vulnerable, they could sweep the Internet and move in at any time. Internal might be a little different. Internal, you might be able to wait and see, you might be able to actually track the threat and tell that people are exploiting it, and then prioritize your patches based on that. That's definitely a risk decision. [00:08:00] Speaker B: Right.
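The "efficiency" idea from the discussion above can be sketched as a small calculation. This is our own illustration with made-up CVE names and scores, not Kenna's actual model or data: efficiency here is the fraction of CVEs a strategy told you to patch that were actually exploited, and coverage is the fraction of exploited CVEs the strategy caught.

```python
# Sketch of the remediation-strategy "efficiency" metric discussed above.
# All CVE names, scores, and exploitation flags below are hypothetical.

def efficiency(patched: set, exploited: set) -> float:
    """Fraction of patched CVEs that were actually exploited in the wild."""
    if not patched:
        return 0.0
    return len(patched & exploited) / len(patched)

def coverage(patched: set, exploited: set) -> float:
    """Fraction of exploited CVEs that the strategy told you to patch."""
    if not exploited:
        return 0.0
    return len(patched & exploited) / len(exploited)

# Hypothetical vulnerability list: (cve_id, cvss_score, was_exploited)
vulns = [
    ("CVE-A", 9.8, True), ("CVE-B", 7.5, False),
    ("CVE-C", 5.0, False), ("CVE-D", 8.1, True),
    ("CVE-E", 7.2, False), ("CVE-F", 4.3, False),
]

exploited = {cve for cve, _, was in vulns if was}
# The "remediate everything CVSS 7+" strategy from the report:
strategy_7plus = {cve for cve, score, _ in vulns if score >= 7.0}

print(f"efficiency: {efficiency(strategy_7plus, exploited):.0%}")  # 50%
print(f"coverage:   {coverage(strategy_7plus, exploited):.0%}")    # 100%
```

With these toy numbers the CVSS 7+ rule patches four CVEs of which two were exploited, so it is 50% efficient but 100% covering, which mirrors the trade-off the hosts describe: low efficiency isn't wasted risk reduction, it's wasted prioritization.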
And that actually might depend also on whether that's something that is likely to be exploited via phishing email or not, because it's almost as if your workstations are externally facing at that point. [00:08:12] Speaker A: Yeah, I definitely dread the day when the next Microsoft Exchange or Microsoft Outlook bug with no interaction, reading preview, comes out. That's going to be terrifying. [00:08:26] Speaker B: Right. [00:08:27] Speaker A: All right, I think that's all we have to talk about for that article. I think the next one is yours, right? [00:08:33] Speaker B: So this article is primarily based around an article from IT Security Guru, but there are also parts of this I'm going to touch on from an article in SecurityWeek, and actually some is from Accellion's own documentation. This article is talking about the compromise of a law firm through the Accellion File Transfer Appliance. So back in December, Accellion had four zero-day vulnerabilities released at the same time for their File Transfer Appliance, which people are typically staging in their DMZ. Two of those were OS vulnerabilities, there was one SQL injection vulnerability, and one server side request forgery vulnerability. At least in the instance of this law firm, the SQL injection vulnerability was what was leveraged for the initial access, and then the attackers set up a web shell for persistent access. So you can see with four of these vulnerabilities together that you could easily do attack chaining in order to get farther and deeper into the organization. In this instance, the attackers used that just to get into the appliance where the data was stored, so they didn't even really have to go deeper into the organization. They set up that web shell to download files from that site, and then they posted those on the dark web, attempting to blackmail the organization into paying them not to release the rest of them. They released this, but it's not exactly public. Your average person is not going to go there.
Someone's going to have to be going onto the dark web specifically looking for this kind of information. That could be other attackers, or it could be news journalists who are actually looking for stories, which is probably not a lot of them. Krebs is probably in there all day long, but I'm not sure how many other journalists are actually digging through the dark web looking for this kind of information. So if they were not going to release this stuff onto the regular Internet, it's a wonder exactly how broad that exposure would be. Well, for this article in particular, they're talking about this law firm, so probably all sorts of interesting stuff was pulled out of there. Because one of the things that I was thinking about was, if this law firm, and I'm not sure exactly what their customer base is, but if they're a defense attorney and someone gets in, pulls all this stuff and releases it, and law enforcement gets a hold of it, what does that do to their clients' ability to defend themselves in a court of law? Some interesting implications there from a legal standpoint about whether this is going to not only financially impact individual customers, but could end up putting them in jail because they are not able to mount an adequate defense, because their defense documentation has been released to law enforcement. But as far as the overall vulnerabilities that we're talking about here with the FTA appliance, this is where going in and actually looking at some Accellion documentation comes in. They released a note, and I could not find a date for when it was posted, saying that the FTA is end of life 30 April 2021. So this product was on the outs. But when you dig farther into that, it also states that it's based on CentOS 6, whose end of life date was 30 November 2020.
So I thought that was not terribly surprising, but disappointing: their software is based on an OS that went end of life six months before their software is going to be end of life. It would have made more sense for them to correspond the end of life of their product with the end of life of the operating system on which it resides. And in that same press release or article, they said that for the past three years, they've been attempting to convince their customers to move off the FTA platform onto their new product, Kiteworks. What that makes me think is, okay, for the past three years, they've been saying, don't use this old product, go on to our new shiny product. Where do you think Accellion has been putting their development dollars? [00:13:31] Speaker A: Yeah, that makes absolute sense. [00:13:33] Speaker B: Yeah. So this product has probably been really neglected for at least three years, which is probably why you're seeing this stuff right now before the end of life; it's kind of built up over time. Now, along the angle of the defense attorney thing, what I was also thinking about was, depending on what they get from this, they could forego extorting the victim, the law firm, and go straight for the customers of that victim and start extorting them, depending on what they found in those attorney documents. So there are, like, third order effects for this kind of compromise. And if they're able to actually extort those customers, the likelihood of even that exposure coming to light would probably be much lower, because you're talking about individual people on a one by one basis, versus a major corporation or a major law firm that has the financial resources to decide to fight that kind of extortion.
And it kind of goes back to both the Ashley Madison and the Sony breach, where, as I heard someone else couch it, there were individual tragedies for the customers of those breaches. Because with Ashley Madison, obviously, because of the nature of their business, people who were customers of that, having that data exposed caused all sorts of harm to those people. And then with Sony, there were emails released for people within Sony that did not put them in the ideal light, if you will, which also would cause them personal harm from the release of that data. As well as, years ago, the Army used to use anonymous FTP for the Army Corps of Engineers in order to accept proposals. So that would allow people to go in and just peruse whatever was in there from everybody else that was posted as well. [00:15:59] Speaker A: Really? [00:16:00] Speaker B: Yeah. And it was actually the lawyers that said, well, you can't change that either, because of the bidding methodology for Army Corps of Engineers projects with contractors. And this is for building embassies and things like that. They would post proposals to this anonymous FTP. [00:16:25] Speaker A: That is ridiculous. [00:16:27] Speaker B: Right, exactly. Because regardless of what organization you are or what your organization does, it has to exchange information with other organizations and customers. So there are a zillion ways to do this. And a lot of, I mean, small companies, they just use email, right? And hopefully they're encrypting whatever they're sending before they're sending it, or they're using built in PGP or GPG in order to do that encryption. But larger organizations are probably using services like Dropbox or Zix secure messaging, that kind of thing. And the assumption would be that they thought that having their own control over that platform would make it more secure than using something like Dropbox, which is in the cloud, and that data being stored there.
And it kind of shows that just because you're in possession of your appliance doesn't necessarily mean that that's actually more secure than using a third party to do your data exchanges. Now, what you should think about when you're talking about this is really the broader scheme of how you do data exchange, because if you have this FTA appliance sitting in your DMZ and you're storing data there to exchange with your customers, you have to consider what data should you be exchanging there and how long should that data reside there? There are implications for allowing stale data to reside on that platform. So if you're interacting with a customer and it's a short term contract or something where, I don't know if dormant is exactly the right term, but that transaction is over within 30 days, it may not make sense to maintain that data at that data exchange site past that mark. Well, it really depends on the nature of the business, how long it makes sense to make that data available. Ideally, it would be a situation where you're like, hey, we're going to post this, you've got 24 hours to download it, and then it's being removed. Because I know with credit card companies, you'll log into their site and you'll be able to see your statements for the last six months or whatever, but if you want anything older than that, they've already archived it and you have to submit a request to go and get it. So that stuff is no longer residing on an Internet facing system. They've got it on a back end, and then there's some automated process, I'm assuming, to move that forward for you to get it if you need it again. And this kind of ties into the whole GDPR thing, where customers can say, well, you need to get rid of my data. This is actually where companies should be considering data limitation as well, like, do you really need to even keep this in an archive once it's done?
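The "you've got 24 hours to download it, then it's removed" idea above can be sketched as a periodic sweep of the exchange area. This is a minimal illustration only; the directory layout and the 24-hour window are our assumptions, not any vendor's behavior.

```python
# Sketch of a retention sweep for a data-exchange directory: anything older
# than the retention window gets removed. Window and layout are hypothetical.
import time
from pathlib import Path
from typing import Optional

RETENTION_SECONDS = 24 * 60 * 60  # assumed 24-hour exchange window

def sweep(exchange_dir: str, now: Optional[float] = None) -> list:
    """Delete files older than the retention window; return removed names."""
    now = time.time() if now is None else now
    removed = []
    for f in sorted(Path(exchange_dir).iterdir()):
        if f.is_file() and now - f.stat().st_mtime > RETENTION_SECONDS:
            f.unlink()
            removed.append(f.name)
    return removed
```

In practice you'd run something like this from a scheduled job, and archive to a back-end store rather than delete outright where a record is still needed, which is the credit-card-statement pattern described above.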
And talking about putting limits on, like I said, the whole data management process, and limits on data freshness or data retention, saying this type of data we're only going to keep for x period, and then we're going to archive it or it's going to be removed altogether. A lot of organizations don't have that detailed data management plan, and that's actually one of the good things about GDPR, it's kind of forcing companies to even start thinking about that. Log management as well. So you bring on this FTA, right? If you have a log management process that said, we need to get these logs from the tool, then maybe you're seeing SQL data transactions in your logging tool to indicate, hey, this data was queried from the database on this date. And then you could figure out, well, that's when the attackers got it and what they got. Vendor appliances and SaaS solutions, right? So getting logs out of those two things is, like, ridiculously hard, especially when you're talking about SaaS solutions, because I've talked to numerous SaaS vendors and they have not considered it at all. So if you look at the shared responsibility model for SaaS, right, or for the cloud in general, you've got that nice diagonal, and at the bottom it says, hey, the customer is responsible for access and data, right? Yeah, but the SaaS provider is not giving you the intelligence you need to know about that. You don't know who logged in, when they logged in, where they logged in from, what data they accessed, what they did with that data. That is not generally in a logging source which you can get access to and send to your SIEM. [00:21:27] Speaker A: I think you just gave me nightmares for the week. [00:21:31] Speaker B: Well, maybe not necessarily. You make custom content for them, at least as far as when you're talking about, like, Splunk. You need to make them CIM compliant so they match your data models. [00:21:40] Speaker A: Yeah, but depending on what kind of weird logs and weird formats.
Yeah, I guess that's where I was going with that. [00:21:46] Speaker B: It's a parsing problem. That's really the wrinkle there. [00:21:51] Speaker A: It's two problems. It's two problems. It's a parsing problem, and it's a knowledge problem, because if they do their logs in a weird way, you don't know what logs it has. Like for this Accellion, and we're picking on them because they're the public punching bag right now until the next big exploit, do they maintain the normal Apache web server logs, or Nginx, or whatever they use, or do they use something proprietary? And is it buried somewhere deep where you can't even find it? There's the knowledge problem of what logs they store and where they store them. And then there's the parsing problem, and then there's potentially the custom content problem, depending on what it does. Like for the Accellion, sure, maybe we can put some default web server monitoring content on here, and we would have then hopefully caught the SQL injection test. But if they don't provide that web server content and they put the web shell on there? Yeah, I don't know. And then the specific needs related to GDPR and other sorts of sensitive data things may require custom content for this. [00:23:03] Speaker B: As well, because you're also talking about that kind of wrinkle, depending on what they write. Is it even going to make sense? I mean, have you ever looked at mainframe logs? [00:23:15] Speaker A: I have not. [00:23:17] Speaker B: You need a translator. [00:23:19] Speaker A: Got you. [00:23:20] Speaker B: Because the mainframe log is going to say gibberish, gibberish, gibberish. And you're going to need a mainframe guy to say, oh, well, that means that this thing happened. [00:23:30] Speaker A: Oh, I see. Yeah, I see what you mean. I see what you mean, in terms of, like, it just uses codes to refer to events rather than spelling them out. Or maybe it doesn't include field names and it's just a string of numbers.
[00:23:44] Speaker B: And sometimes the terminology, at least for the mainframe, isn't the same terminology as leveraged in distributed systems. I see, so the mainframe may say it did x, and in the distributed system, you may say, oh, well, that means this, when in the mainframe it really means something else. Probably not too far off, but it still does not mean the exact same thing, which is why you need a mainframe translator to give you that context so that you can understand what means what. And that may be true of some of these custom built appliances, but when you're talking about, like, SQL and common web interfaces and stuff like that, hopefully there's nothing crazy in there that you look at and say, I don't understand what this is telling me. [00:24:31] Speaker A: Can you imagine if they used, like, a private fork of Apache or something? They're like, we wrote our own web server. [00:24:39] Speaker B: Oh, you mean like South Africa, who's writing their own Chrome browser for their IRS? Their own version of Chrome? [00:24:48] Speaker A: Yeah, exactly. Or North Korea writing their own version of Linux. [00:24:52] Speaker B: I think it was China. [00:24:54] Speaker A: Was it? [00:24:55] Speaker B: Yeah, China said that they're going to make their own. I think everybody's talked about doing it. [00:24:59] Speaker A: I mix up all of my totalitarian countries. It's hard to tell these days. On to the next one. Yeah, it's yours, too. Two in a row. [00:25:11] Speaker B: That's because I picked the best ones. All right, that's fair. And this is from The Hacker News. And we're going to pile on to the SolarWinds thing, because if you are podcasting at this point in time, or if you're in computer security, and you're not talking about the SolarWinds crisis, then what are you going to do? [00:25:31] Speaker A: How do we make money off it? We have to position ourselves as experts. [00:25:38] Speaker B: Well, that's why we're even talking, right?
But this article here is talking about the whole solarwinds123 password thing that was supposedly posted to GitHub, like, two years ago. So this is based on, I guess, testimony before Congress from a couple of different people within the organization, as well as some comments from people outside of the organization. So the former CEO of SolarWinds said that this was a password an intern used back on one server in 2017, that it was reported to our security team, and it was immediately removed. Whereas the new CEO... see, where are my notes on this? Oh, I'm sorry, this is a spokesperson, not the actual CEO, says it was determined that the credentials were for a third party vendor application and did not access the SolarWinds IT system at all. Furthermore, the third party application did not connect to SolarWinds IT. So this password has nothing whatsoever to do with the SUNBURST attack. And what struck me about these two statements is really something that other people talk about in the realms of, how do you maintain good faith with your customers after something like this has happened? And everybody talks about transparency, but I think in addition to transparency, what you need is uniformity of message, right? So you've got the previous guy saying, here is the problem. And then you've got a spokesperson saying, well, this is the problem. And then you've got this guy, Vinoth Kumar, who said back in December that this was leaked FTP credentials that could be downloaded from the website in the clear, adding that a hacker could use the credentials to upload a malicious executable to the SolarWinds update server. So there's a lot of confusion about really where this password resides in the whole SolarWinds debacle. And is it really relevant? Or is this the crux of the problem of where SolarWinds security fell down, which caused the compromise to occur in the first place? And to me, it's almost not relevant, because I'm looking at this in the context of, organizationally,
why did a password of that type even show up? Why was it even being utilized? Now, if you're trying to blame it off on an intern, saying, oh, those stupid interns, they don't know anything, they're wet behind the ears, they're still in community college, we're not paying them anything to come in and do work on our critical systems, and we allow them to create a password for a critical account that could compromise our data store for our code, what does that say about the culture of how that organization manages passwords? And for that matter, how much access a non senior engineer or capable individual has within their systems? Because I'm looking at this and saying, all right, solarwinds123? That does not strike me as a password that an intern would come up with. That is a password that most likely was given to them or told to them by another engineer who said, oh, this is our standard default password. Or maybe this was on a sticky note on the rack, for all we know. But the overall culture, I mean, if you have a password policy, which they say that this violated, I guess it really depends on whether you're relying on Microsoft's GPO to enforce your password policy. Just looking at this password, you could have prevented that, because there are no uppercase characters in there. Because I think you have to have, what, three of four? So it's uppercase, lowercase, number, special character, and I think you have to have three of four in GPO, if I remember correctly. So they could have prevented that from occurring through GPO, through that policy. So either they're not enforcing their policy through a technical means, which they should, or that is not actually the policy. Let me take that back. Or this is not actually a violation of their policy. [00:30:48] Speaker A: Yeah.
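The "three of four" rule described above can be sketched as a quick check. Note this is an approximation for illustration: the actual Windows GPO complexity rule counts slightly different categories, but the idea is the same.

```python
# Sketch of a "three of four character classes" complexity check, as
# discussed above (approximating, not reproducing, the Windows GPO rule).
import string

def meets_complexity(password: str, required: int = 3) -> bool:
    classes = [
        any(c.islower() for c in password),             # lowercase
        any(c.isupper() for c in password),             # uppercase
        any(c.isdigit() for c in password),             # number
        any(c in string.punctuation for c in password), # special character
    ]
    return sum(classes) >= required

# "solarwinds123" only hits lowercase + number: 2 of 4 classes, so it fails.
print(meets_complexity("solarwinds123"))  # False
print(meets_complexity("S0larW!nds"))     # True
```

Which is the hosts' point: a technically enforced policy would have rejected this password at creation time, regardless of who typed it in.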
[00:30:51] Speaker B: Depending on the size of your organization, what you'd want to do is have a centralized team for account management and account creation. So if the engineering team needed an account created, they would put in a request to this team, and maybe you could automate this whole thing, I'd be surprised if you couldn't, through the ticketing system, to say, I need an account that has this access, and put in a ticket and have that done automatically, maybe with a check in there for approval, and have that created with a randomly generated password. Now, of course, that's not to say that once they had that password, they couldn't go and change it, but that's where your password policy enforcement comes in. But if you're a small organization, you should be able to at least do that in a manual fashion, where you have a team that's responsible for account management and credential distribution. So ideally, you'd want to automate that part, where someone says, hey, I need an account, I need this access for it, and it's created, it's dropped into the right AD groups, it's given a randomly generated password. And if this system is automated well enough, with a password vault, you could then take those credentials and put them in the correct password vault for that engineer to access, or for that team to access, depending on how your ticket permissions are set up. And then you're also not exposing that password through email, which is the way passwords are generally distributed. And I've seen multiple organizations have that bite them in the ass for password distribution, because they're creating the password and they're emailing it out, and the pen test team comes in, gets into email, and then they have these passwords. And also, if you're not using a password generator and you're using a standard, or not exactly a standard, but an easily figured out password like solarwinds123 — if pen test teams know this, then hackers know this.
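The randomly-generated-password step in the provisioning workflow described above can be sketched with the standard library. This is a minimal illustration: the 20-character length and full printable alphabet are arbitrary choices, not any vendor's defaults, and in a real deployment the result would go straight into the vault rather than be printed or emailed.

```python
# Sketch of random credential generation for the automated account-creation
# workflow discussed above. Uses the stdlib secrets module (CSPRNG).
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-roll until all four character classes are present, so the
        # result also clears a "three of four" style complexity policy.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

A password like this is unguessable by the seasonal-password trick the hosts mention next, which is the whole point of taking the choice away from humans.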
And I've seen them come in with winter 2021 or spring; I've seen that several times. And they know these things, so they just guess those and are able to figure that out as well. Which is why that random password generator, at least up front, is a check against these kinds of mistakes. And if you're using a password vaulting solution, depending on your level of integration with that as well, if it's integrated with a web browser that automatically fills in stuff, or integrated with the operating system and automatically puts this stuff in — yeah, if you have a password vaulting solution that's well integrated into your other systems, then you're disincentivizing your engineers from using these kinds of simple passwords, because it makes using the password, which is maintained in the vault, easier. And really, that's what you want to do, is reduce that friction as much as possible. And that's the kind of thing that prevents or reduces the likelihood of this kind of problem from occurring. [00:34:09] Speaker A: I have to wonder, if that password was publicly accessible, it was almost certainly cracked. And how long was it available? [00:34:17] Speaker B: Well, they say it was posted to GitHub. It may have been just posted as is. [00:34:21] Speaker A: Yeah, I wonder what system it was for. If it was, like, an externally available system, or an internal one where you had to break in first before you could use it. [00:34:28] Speaker B: I don't know. My guess would be that if it was posted to GitHub, though, it had to have something to do with code. [00:34:34] Speaker A: Yeah. [00:34:35] Speaker B: Now, it would be really sad if it was posted to GitHub and was a GitHub password. [00:34:44] Speaker A: But... [00:34:47] Speaker B: I don't know. It's hard to say, because like I said, the messaging here is not consistent across the organization.
I mean, the previous CEO said one thing, the spokesman said another thing, and the current CEO said a different thing. So it's hard for customers to say, oh, well, these guys have their stuff together and this was just an honest mistake, when their messaging is not uniform either. And I think, at least as far as SolarWinds goes, because they're so pervasive and this was so bad, this could be the cyber Pearl Harbor, depending on how the government decides to go with this. Because if the government decides that SolarWinds' software can no longer be trusted, how many other organizations are going to take that as gospel and also dump SolarWinds? And an organization that big, if they take an 80% cut in customer adoption, are they going to be able to survive? [00:35:50] Speaker A: That's actually my question on Accellion, too. How many of these companies are going to move to Kiteworks? And how many of them are going to blow off Accellion? Is this a death rattle for them? We haven't seen that yet, really. [00:36:03] Speaker B: No. I mean, we hear it all the time. I mean, it was going to be Target, and then it was going to be Sony, but it hasn't actually happened. It's happened on a smaller scale. I know of at least one small company who was doing background checks for people, for the government, who got compromised. And the government's like, oh, you're out. And of course, that was their own customer, the only customer they had. So they had to close up shop altogether. [00:36:27] Speaker A: That kind of makes sense for the government to be able to do that, because that's kind of a commodity service that a number of people can provide. But like SolarWinds, you have long term contracts with them, you've got infrastructure set up with them. It's not so easy. [00:36:43] Speaker B: Not only that, but SolarWinds, with, what was the number of customers? I forget, it was a large number. But they got into that position where they had those customers because their software is pretty good.
From a functionality perspective, obviously, they may have some challenges in securing their code, but from a customer perspective, they have a good, functional piece of software. Now, if the government says, hey, we are not going to work with this company any longer, their software is not trustworthy, and a bunch of other customers pile onto that, the company could go bankrupt. And when a company goes bankrupt, its assets are sold off to someone else who can use them. So let's just hypothetically say they do go bankrupt. The bankers come in to sell off the company's assets. What is the primary asset that SolarWinds holds? Its software. So what you have is an asset that's toxic, irradiated. Who is going to take responsibility for that asset and buy it in a bankruptcy? So it could be that even in bankruptcy, the company is completely destroyed, because no one is willing to take that code on, because it's not trusted. Because if you're going to buy that code, you have to say, okay, I'm willing to pay for the code, and I'm also willing to spend the man hours on some kind of assurance that's going to make people trust that code going forward, as well as being able to show that we are not going to fall prey to the same problem SolarWinds had. And that's why I think this might be the one. Though I hate to say it, it's almost entirely dependent on what the government decides to do. Speaking of the government, there were a couple of statements in this article that just bothered me. Representative Katie Porter of California said, you and your company were supposed to be preventing the Russians from reading Defense Department emails. And I was like, that's not what SolarWinds was supposed to be doing. That's not what their job was.
And it's kind of like, well, these are the people that are going to be writing the laws, and they don't even fundamentally understand the situation, what happened here, and what the real problem was. And they're hauling these people in front of Congress and making statements like this. How many people outside the government are hearing this who don't have the understanding that people within the IT or security industry have of SolarWinds' actual role within an organization? It's like, holy cow, obviously these guys are just terrible overall. Additionally, there was a very interesting statement in there: the hackers launched the hack from inside the United States, which further made it difficult for the US government to observe their activity, Deputy National Security Advisor Anne Neuberger said in a White House briefing last month. This is a sophisticated actor who did their best to hide their tracks; we believe it took them months to plan and execute this compromise. So the Deputy National Security Advisor for the United States government is saying that because the attack took place within the United States, they couldn't observe the activity. And that just doesn't make any sense, considering what we know: cybercriminals operate within the United States, believe it or not. This is not just about foreign attackers compromising systems in the United States. And besides that, to say that foreign attackers are not leveraging systems within the United States to attack other systems in the United States is just ridiculous. It flies in the face of everything I've seen working in the security industry; these things happen all the time.
And besides, that's even before going into what Snowden revealed about what the NSA is doing inside the country. [00:41:28] Speaker A: Yeah, but they've got to say it, right? They can't just admit it. [00:41:34] Speaker B: They have to say something. But to say something like this, which just does not make any sense at all? They could have couched that better, or. [00:41:42] Speaker A: Just not said anything at all. You know what, though, as they say, never let a good crisis go to waste. Maybe they're going to use this to argue that they should loosen up those restrictions on watching what goes on in the US. [00:41:54] Speaker B: Could be, and this could be the shot across the bow for that. It'll be interesting to see. But this article actually ends with a statement saying that SolarWinds is going to start a secure by design process. [00:42:08] Speaker A: Are they? [00:42:10] Speaker B: Well, to deploy additional threat protection and threat hunting software across its network. [00:42:17] Speaker A: Just zero trust networking. Zero trust networking. Boom, problem solved. [00:42:21] Speaker B: No. Well, why didn't they just say that then? Because obviously that is going to solve everybody's problems. [00:42:27] Speaker A: Yeah, that's exactly what I heard. [00:42:29] Speaker B: And not only that, it's simple to implement. [00:42:33] Speaker A: Yeah, it'll be done in like a month. [00:42:36] Speaker B: Yeah, this is probably a six month project, especially if we agile it, right? [00:42:45] Speaker A: You hurt me. [00:42:48] Speaker B: Be done in no time. [00:42:49] Speaker A: Yeah, just like security. We can all be done and take the rest of the year off. [00:42:53] Speaker B: Oh yeah. Well, if we were to implement zero trust networking, then we could fire our security staff and just have one guy. [00:42:59] Speaker A: There, he just flips the switch on and off when he leaves. [00:43:04] Speaker B: He just has to hit the red button when it lights up. [00:43:07] Speaker A: Yeah, exactly.
All right, speaking of hitting the red button and automation, I'm actually going to skip an article because we have been going on quite a bit, and move on to our fourth article. This is on Medium; it's Anton Chuvakin talking. I hope I pronounced his last name right. It's possible I did. He posted a little mini rant on stop trying to take the humans out of the SOC. The one line summary of the article is: you can't automate everything and have it perform well. So why does this matter? Well, there's a couple of reasons. To quote the article quoting a vendor, so we're able to do a quote-ception: you have an adversary problem, not a malware problem. He also points out that bad automation kills, and I have some recent experience with this. It turned into only a minor issue, but his point stands: bad automation kills. There was a bug, and first of all, we weren't intentionally trying to do what the automation ended up doing. This comes down to automation being more like software development than a lot of more traditional security work. Do you know how the function works? Are you able to dive in there? Do you know how the thing that you're automating works? There were a lot of unknowns in this. And automation makes it seem so easy to do things that in the past you would have needed a programmer for, somebody that knew more about what was being automated. Which is great, because I'd never automated anything before, I'd written a couple of Python scripts here and there, and now I'm able to automate way more stuff way faster. And that includes bad automation. [00:44:53] Speaker B: Well, I think one of the things you need to consider when you're going to automate is breaking whatever you're doing down into its constituent parts.
You may automate one part or all parts, but force a manual execution of each, which forces an individual to accept responsibility as it goes. And then once you hit that tipping point of saying, we've done this X thousand times or X hundred times or whatever, it's like, okay, now let's take the button away and just have it run through the whole thing. That's a possibility. [00:45:26] Speaker A: Definitely, that's what I typically do when I develop a new automation. There are manual steps in between, or I develop it one step at a time: put the first step in, run it, and try to do some validation there. [00:45:42] Speaker B: I mean, that might also be a general shortcoming in some of these automation products, or something you need to understand about the automation product: are they doing input validation? [00:45:52] Speaker A: Yeah. What are their built in limits? And that's something every automation vendor should probably be doing, because what you're doing is abstracting the expertise to the automation vendor. So the automation vendor needs to put those guardrails in: how can somebody break this, and how can we try to prevent that? It's what every company does, right? You pay for somebody to handle the little stuff. [00:46:20] Speaker B: And you just end up really having to decide what your level of trust is going to be. Do you know the right questions to ask that automation vendor, like whether they're doing input validation for every step? And the thing is, if you're new to doing automation, you have no idea if you're asking the right questions or not. And I'm not sure the vendor is even going to say, because a lot of times when I talk to a vendor, depending on my level of experience with whatever we're talking about, one of my last questions is: what should I have asked?
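The staged approach described above, with a human approving each part until the pipeline has proven itself some number of times, could be sketched like this. Everything here is illustrative: the step names, the threshold, and the approval callback are all made up, not from any SOAR product:

```python
def run_pipeline(steps, approve, run_count, auto_threshold=100):
    """Run automation steps; until the pipeline has been exercised
    auto_threshold times, require a human approval before each step."""
    results = []
    for name, step in steps:
        if run_count < auto_threshold:
            # Manual gate: a human accepts responsibility for this step.
            if not approve(name):
                break  # human declined, stop the pipeline here
        results.append((name, step()))
    return results

# Example: a two-step playbook where the analyst approves enrichment
# but declines the blocking step, so only enrichment runs.
steps = [("enrich", lambda: "ok"), ("block", lambda: "blocked")]
partial = run_pipeline(steps, approve=lambda n: n == "enrich", run_count=5)
```

Once `run_count` passes the threshold, the gates disappear and the whole playbook runs end to end, which matches the "take the button away" tipping point.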
What did I fail to ask you that I should have? And unfortunately, a lot of times when I ask that question, I get no answer. Either I'm awesome and never miss anything, which seems highly unlikely, or these guys are not taking an adversarial view of themselves, looking from the outside in and saying, this is what customers should be asking, or will be asking. [00:47:35] Speaker A: Mr. Chuvakin starts the first half of the article talking about why you can't automate away everybody, but then he spends the second half talking about the future, and why we're going to have to use more and more automation regardless of these initial problems. He presented a couple of scenarios that are frankly kind of horrifying. The first one is that down the road, AI and machine learning will eventually get to the point where they can auto generate new exploits based on fuzzing and testing, which of course would totally destroy most signature based detection logic. Not that that's been phenomenal in the last decade or so. And then attackers are potentially going to be using worms more commonly. Both of these just accelerate the attacker's timescale. Instead of days or months between initial reconnaissance and entry and breaking into a system, we're down to hours, which frankly is already happening. I remember reading a couple of months ago about a ransomware group where, I think, it was 5 hours between their initial entry via phishing email and ransomware across some number of servers in the environment. That's not a realistic timescale for response for any but the most mature security organizations. [00:48:54] Speaker B: Yeah, well, as far as automation to auto generate exploits goes, DARPA has a project for that already. [00:49:02] Speaker A: Yeah, I saw there was something about 2016.
They ran a contest for that. [00:49:07] Speaker B: Yeah. So this is already being worked on. And I have something rolling around in my head about an article I read fairly recently, it may have been Google, talking about automating fuzzing and attacks. But I'd have to do some research to find that article. It only makes sense, though, because there's a disparity between attack and defense. The attackers know specifically what they're trying to do, so they have a very narrow focus and can automate very tightly, whereas you have to look at a broader scope to defend against that narrow beam attack. That's why it's easier for them to automate than it is for defense. [00:50:04] Speaker A: It's true. So what can we do about it? There's a couple of things. The first one is look for the 80/20 problems, the places where 80% of your misery comes from 20% of the work, and try to automate away that 80% that sucks. So, big places for automation: most SOAR tools that I've seen now come with automatic enrichment, because that's easy for them to do. Additionally, you can automate steps of the investigation. For example, if you get a report into your SOC that says there's a suspicious network connection to this IP, there's probably a couple of searches you're going to do every time. You're going to look up in your inventory who owns the IP and what the system is supposed to do; hopefully you have a software inventory and all that. You're probably going to do a SIEM search of the traffic to and from that IP over a period of time, and maybe look at the running processes at that time. That's definitely a big gain for SOAR products: if you can automate all those investigation steps and then present the information to the analyst, that's a big time saver.
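The routine lookups just listed for a suspicious-connection alert could be bundled roughly like this. Every callable here is a hypothetical placeholder for your own inventory and SIEM APIs, not a real SOAR product's interface:

```python
def enrich_ip_alert(ip, inventory_lookup, siem_search, process_snapshot):
    """Run the searches an analyst would do by hand for a suspicious
    network connection, and bundle the results for presentation."""
    return {
        "ip": ip,
        "owner": inventory_lookup(ip),       # who owns this system, and what is it for?
        "traffic": siem_search(ip),          # traffic to/from the IP over a window
        "processes": process_snapshot(ip),   # what was running at the time
    }

# Example with stand-in callables; in practice these would hit your
# asset inventory, SIEM, and EDR respectively.
report = enrich_ip_alert(
    "10.0.0.5",
    inventory_lookup=lambda ip: "db-team",
    siem_search=lambda ip: ["flow-1", "flow-2"],
    process_snapshot=lambda ip: ["sshd"],
)
```

The payoff is that the analyst opens the ticket with all of this already attached, instead of running the same three searches for every alert.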
And then finally, remediation. A lot of the SOAR products will provide things like removing emails from mailboxes, automated blocking, things like that. And of course that's a little touchy. [00:51:29] Speaker B: Look for those areas where you can automate where it's safe to do so, where a failure is not going to break something, it's just going to revert you back to the manual step. At least initially, enrichment is maybe where you want to start your automation. [00:51:47] Speaker A: That's definitely the easiest place to start, for sure. Also, another thing you mentioned before that fits in here for remediation: it might be nice to have a pop up box that confirms, this is what you're about to do, are you sure? Just so you get some final eyes on what's going on. Here are the remediation options: would you like to reset the account? Would you like to network contain the host? Would you like to shut down network traffic to the host? I think that's probably the way to go, at least in the short term. [00:52:21] Speaker B: So choose your own adventure. [00:52:23] Speaker A: Basically, choose your own adventure automation, where, yeah, it does the enrichment for you ahead of time, it may do some of the investigation for you, and then it presents you with a menu of other investigation steps you may want to take, and at the end a menu of remediation steps. [00:52:40] Speaker B: Right. And it'd be helpful if you could scope those based on your incident categories as well, only exposing certain remediations to certain categories. Then, when picking which automation steps to build first, start with the ones that span the broadest number of categories. [00:53:01] Speaker A: Yeah, that makes sense. And unfortunately, like I said, every product is a little bit different. Well, it looks like that's all the articles we have today.
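The confirm-before-remediating menu discussed above could be sketched like this. It's a toy illustration, assuming a remediation map and a confirmation callback of our own invention; a real SOAR product would render this as a UI, and the menu would be scoped by incident category:

```python
REMEDIATIONS = {
    "1": "reset the account",
    "2": "network contain the host",
    "3": "shut down network traffic to the host",
}

def confirm_remediation(choice, confirm):
    """Echo the chosen action back to the analyst, and only return it
    for execution if they explicitly confirm; the final eyes on it."""
    action = REMEDIATIONS.get(choice)
    if action is None:
        return None  # unknown menu option, do nothing
    prompt = f"This will {action}. Are you sure?"
    return action if confirm(prompt) else None

# Example: the analyst picks option 1 and confirms it.
approved = confirm_remediation("1", confirm=lambda prompt: True)
```

Because the dangerous side effect only fires after the explicit yes, a fat-fingered menu choice degrades to a no-op rather than a containment you didn't want.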
Thank you for joining us. Follow us at Serengeti Sec on Twitter and subscribe on your favorite podcast app.
