Episode Transcript
[00:00:13] Speaker A: Welcome to the Security Serengeti. We're your hosts, David Schwinniger and Matthew Keener.
[00:00:19] Speaker B: We're here to talk about some recent news headlines and hopefully provide some insight and analysis and practical application that you'll be able to take back to your company to better protect your organization.
[00:00:29] Speaker A: Views and opinions expressed in this podcast are ours and ours alone and do not reflect the views or opinions of our employers.
[00:00:36] Speaker B: And if you're listening to this in a browser, the NSA has not added you to their list yet.
[00:00:40] Speaker A: You to their list yet, not until they break TLS.
[00:00:44] Speaker B: All right, so the first story we're going to talk about is an article from the Register which references a Sophos report, and the title is "Half of Q1's malware traffic observed by Sophos was TLS encrypted, hiding inside legitimate requests to legit services."
So, in summary, this article is summarizing a Sophos report, and we'll get into some of the information contained in that report as well. The link to that report will be in the show notes. But what they're saying here is that 46% of Q1 2021's traffic from malware, as seen by Sophos anyway, was TLS encrypted. So if you're not aware of Sophos, Sophos is a British security company primarily known, or at least it used to be known, mainly for its antivirus products.
So in this article and the report, they say that what they've seen so far in Q1 is up from what they saw in 2020, which was only 23% of traffic encrypted with TLS, and that's 23% for the entire year. So if we're already at 46% of what they see in Q1, it's unlikely that that number is going to go down, and it may actually go up the farther along we get in 2021.
So I wouldn't say exactly it's a turning point, but it's a significant difference from the previous year. And what they categorize as TLS encrypted traffic is traffic which is coming from the malware itself, or, as they state in the report, something that's getting activated in the browser. So it sounds like they're also including man-in-the-browser attacks or man-in-the-browser malware or infections.
[00:02:32] Speaker A: So it's interesting they conflate that, because I actually wonder if they're conflating two separate things as well. In the title there, you mentioned the TLS encryption, but then they also mention the hiding inside legit requests to legit services, one example of which I don't see in the notes here, so hopefully I'm not stealing this from you from a later discussion point. But one example they mentioned in the article was malware that reached out to Google Cloud and accessed a PowerShell command that was in a cell in a Google Sheets document. So did they mention what percentage of those are just TLS encrypted command and control traffic, or TLS encrypted malware downloads, versus where they're using the legit services and it's just TLS because Google encrypts everything they do?
[00:03:30] Speaker B: They do break it down by the different reasons for the different types of malware traffic, if you will.
They break it down into, and we'll be talking about this a little bit more later, the initial malware download, exfiltration of data, and then C2. But what you're referencing there is what they mentioned in the report: a couple of reasons they think they're seeing an increase in TLS traffic are that attackers are increasingly using legitimate web services and the TLS that those services provide to obfuscate their traffic.
And the ones that they called out in the report are Google, Discord, Pastebin, and GitHub. But I'm sure those are certainly not the only ones. And there are a couple of reasons to use those: one, it's already TLS by default, so they don't have to worry about rolling that themselves.
And two, those sites already have established reputations, which will get them past most reputation-based proxy services or categorizations. But one thing that is not exactly a, well, I guess it really depends on your perspective, a legitimate web service is the use of Tor and TLS-based proxies, which are available on the Internet. So basically they're trying to leverage existing infrastructure rather than rolling their own crypto in order to obfuscate what they're doing with their malware, their command and control, or the other aspects of their attack.
[00:05:17] Speaker A: Yeah, but I wish more would use Tor because that one's fairly easy to identify versus some of the others.
[00:05:26] Speaker B: You can't leverage Tor without knowing where the Tor proxies are at. So those are generally well known. But of course you have to stay on top of that because they do fluctuate.
[00:05:36] Speaker A: Yeah, but third parties will handle that for you. Like your firewalls may be able to flag Tor traffic, or I know Microsoft does. Microsoft will generate an alert that says someone logged in from a Tor proxy node. So there's kind of pre-built stuff for Tor. Some of the other proxies, not so much. So I know that's a struggle that I've had in the past for sure, trying to identify logins via proxy. And that's made even more painful by the sheer number of people that use proxies day to day. Even for corporate assets, they'll install their own private proxy server. And on one hand, that's great, protect your privacy, keep the government and keep the ISPs from peeking through your stuff. But then when you have an alert that says, let me know when somebody logs in from a proxy, because the bad guys use proxies, it lights up like a Christmas tree, right?
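A minimal sketch of the Tor side of that check, assuming you can export logins as a CSV of username and source IP (that log format is an assumption for illustration). The exit-node list URL is the one the Tor Project publishes; since the list fluctuates, it should be refreshed regularly rather than cached forever.

```python
# Minimal sketch: flag logins whose source IP is a current Tor exit node.
# The login log format (CSV with username,source_ip columns) is assumed.
import csv
import urllib.request

TOR_EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def fetch_tor_exit_ips():
    """Download the current set of Tor exit node IPs."""
    with urllib.request.urlopen(TOR_EXIT_LIST_URL, timeout=30) as resp:
        return {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

def flag_tor_logins(login_csv_path):
    """Yield (username, ip) pairs where the source IP is a known Tor exit."""
    exits = fetch_tor_exit_ips()
    with open(login_csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: username, source_ip
            if row["source_ip"] in exits:
                yield row["username"], row["source_ip"]

if __name__ == "__main__":
    for user, ip in flag_tor_logins("logins.csv"):
        print(f"Tor exit login: {user} from {ip}")
```

The harder problem the hosts describe, generic or private proxies, doesn't have a neat public list like this, which is why it stays noisy.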
[00:06:28] Speaker B: And you can also, if you're expecting people to be using a workstation that's within your boundary, within your border, and going out, you can set the proxy which your workstations go through to block the proxy or proxy-avoidance category by default so they can't get out to it. So there are some advantages, at least from the attacker perspective, of leveraging this existing cloud infrastructure, in that they can basically stand up a C2 node which has that already-established reputation, use it only for a single campaign or a single attack, and then just burn the infrastructure and create a new one for the next campaign. And they may even have all that automated, for all we know. And there's probably some that do. And this is kind of one of the things that you have to consider when you're using your threat intelligence: threat intelligence has a shelf life. And the more often they use these types of tactics, the shorter that shelf life really gets.
And also because they're using existing infrastructure, they don't have to build their own. They can use different aspects of different infrastructure for different parts of the attack.
So maybe the dropper will be downloaded from here, the malware will be downloaded from Google, but the data will be exfilled to Box or Dropbox or something else like that. So it actually adds a lot of flexibility to the attacker's toolset when they're leveraging those existing cloud systems.
[00:08:18] Speaker A: So I have a question for you, because I don't know how this works.
If you're doing man-in-the-middle interception and decryption of the traffic for your organization, and you've installed kind of your own certificates on each of your systems, so it goes out using your certificate, the proxy intercepts it, the proxy decrypts it because it's got your certificate, then the proxy re-encrypts it against the real Google certificate or whatever, and it's encrypted outside of your boundaries. How does that apply to this malware? Does the malware come with its own built-in certificates? I mean, it would have to if they're doing their own custom TLS solution for command and control. But if they're using an existing solution like Box, they probably have to use the existing certificate. So if you're doing man in the middle and they're using existing services, it will be decrypted, correct?
[00:09:17] Speaker B: It depends, because some of those services still function with that man in the middle and some of them don't. If you're using a Chrome browser accessing a Google property, that's a real problem. And generally for those kinds of things where they're doing certificate pinning, it is just not going to work, and you're going to have to put in an exception to allow that to go through without interception.
So it really depends on what service you're talking about. Some of them will work, some of them won't, and for some of your customers or vendors that you're interacting with, depending on what you're doing, you're going to have to put in exceptions for those interactions as well.
[00:10:03] Speaker A: Interesting.
So for at least some of these, where they're going out to their own infrastructure and they're not using a cloud account, but they're still encrypting it with TLS, they're going to have to include certificates. I wonder if that's... no, there are certificates all over the box, you can't just detect based on a new certificate being added to the machine.
[00:10:21] Speaker B: Yeah, that's unlikely. That's something you could try out though, and just see what it looks like. Yeah.
[00:10:28] Speaker A: Maybe looking for uncommon certificates or something like that. Like certificates where it's the only time you've ever seen it.
Although depending on how successful they are with their ransomware, it won't be the only time you've seen it, will it?
[00:10:41] Speaker B: Right. It depends on how big the potential infection is. But you could use it kind of like a top-100 prevalence list or something like that, to say that this certificate only exists on 1% or X percent of the systems in the enterprise or something like that.
[00:10:59] Speaker A: Or maybe like a new one, like this certificate was seen for the first time today. I wonder how noisy that would be. At a big company, that'd probably be pretty noisy.
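A minimal sketch of what that prevalence-plus-first-seen idea might look like, assuming you can export a certificate inventory from your endpoints as a CSV of hostname, thumbprint, and first-seen date. The file format, the column names, and the 1% threshold are all illustrative assumptions, not anything from the episode or the Sophos report.

```python
# Minimal sketch: flag certificates that appear on only a small fraction of
# hosts, or that were first seen today. Inventory format is assumed to be a
# CSV with hostname,thumbprint,first_seen (ISO date) columns.
import csv
from collections import defaultdict
from datetime import date

PREVALENCE_THRESHOLD = 0.01  # flag certs present on fewer than 1% of hosts

def load_inventory(path):
    """Return hosts per thumbprint, earliest first_seen per thumbprint, host count."""
    hosts_by_cert = defaultdict(set)
    first_seen = {}
    all_hosts = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tp = row["thumbprint"]
            hosts_by_cert[tp].add(row["hostname"])
            all_hosts.add(row["hostname"])
            seen = date.fromisoformat(row["first_seen"])
            first_seen[tp] = min(first_seen.get(tp, seen), seen)
    return hosts_by_cert, first_seen, len(all_hosts)

def rare_or_new_certs(path, today=None):
    today = today or date.today()
    hosts_by_cert, first_seen, total_hosts = load_inventory(path)
    for tp, hosts in hosts_by_cert.items():
        prevalence = len(hosts) / total_hosts
        if prevalence < PREVALENCE_THRESHOLD or first_seen[tp] == today:
            yield tp, len(hosts), prevalence, first_seen[tp]

if __name__ == "__main__":
    for tp, count, prev, seen in rare_or_new_certs("cert_inventory.csv"):
        print(f"{tp}: {count} hosts ({prev:.2%}), first seen {seen}")
```

As the hosts note, how noisy this is depends entirely on the environment; the threshold is something you would tune rather than trust on day one.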
[00:11:07] Speaker B: I don't know, because most of the certificate vendors, the big ones, DigiCert or whatever, those already come with the Windows certificate store.
So if they're doing a...
Well, I guess Let's Encrypt is probably the most popular, though. So what we're talking about is really probably moot as far as who signed the certificate in the first place.
[00:11:39] Speaker A: Actually, no, but that would be a good filter too, because you would filter only for free certificate providers, because the bad guys probably aren't going to pay for it. And if they do break into Comodo, say, and sign certificates for themselves, then we're never going to catch that anyways.
[00:11:58] Speaker B: Yeah, it depends on the level of sophistication and, I don't know if it relates to the skill set, but how careful the attackers are, because they may avoid Let's Encrypt for that very reason.
But you had mentioned before about the breakdown between the different aspects of an attack. So in the article, or in the report from Sophos, they say that 80% of the traffic seen by Sophos could be linked to droppers. And this makes a lot of sense, because what you're inspecting for in network traffic is that malware, and the dropper is how it's going to pull that malware down to the box. So it would make the most sense to have that encrypted in order to prevent that malware from being detected. Whereas command and control, it's pretty hard to identify what command and control is doing, because you could really obfuscate it almost any way that you want; you're just telling whatever your malware is to do whatever you want it to, and you could say that in any number of ways.
[00:13:13] Speaker A: Yeah, you could say black means pull all the data and send it up, and red means something completely different. You could do both encoding and encrypting, right?
[00:13:25] Speaker B: It's not like you have the Enigma machine, right? Where you can say, oh, well, we have the Enigma machine, we know exactly what they're saying.
So when we're talking about legitimate services, according to the article, Google and the various aspects of Google account for 9% of this malicious TLS, with Cloudflare coming in second, mainly because Cloudflare hosts Discord's content delivery network, which is apparently a large source for the malware delivery as well. And if you're talking about geographic locations, half of the instances seen were either US- or India-based organizations, or the servers were located in one of those two countries.
[00:14:16] Speaker A: Is that really a surprise though? I mean, aren't most of the server farms?
[00:14:21] Speaker B: Not at all. I mean, it's not shocking whatsoever, because you're talking about AWS is American, Google is American.
All the really large tech providers are American and house a lot of their servers within the United States, but most of them are also international. So they're probably hosting some of these things outside the country as well. But the level of effort to go through to ensure that your malware is coming from a Google property located in another country is probably well beyond the scope or the capabilities of most attackers.
So I think one of the other reasons, which they didn't really call out in the article, was that attackers tend to try to hide in plain sight. And when the majority of the Internet is shifting over to TLS from HTTP, shifting to an encrypted infrastructure, basically the attackers are going to just blend into that same infrastructure which everybody else is leveraging. And speaking specifically of Google, they recently came out with Chrome version 90, and that now defaults to HTTPS versus HTTP. So if you put in a website name or a URL and you don't include which protocol, they're going to assume that it's HTTPS and append that to the front before attempting to contact whichever domain or whichever site you're trying to get to, and eventually you're going to get there.
My assumption would be that Safari and Firefox and Opera and the other major browser providers are not going to be far behind on this configuration as well.
Yeah, because as of March of this year, 98% of web traffic is TLS encrypted. So it's almost as if being unencrypted now is the outlier. I mean, it is the outlier. So you certainly don't want to be caught out in the open like that. Now, there is something else that they mentioned in the article which I found really surprising, actually, which is that of this TLS traffic that they were seeing, 49% was using non-standard web ports. So they weren't using 443, 80, or 8080 to send this traffic, which I found pretty surprising because, as I was saying a minute ago, they would just want to hide in plain sight. They don't want to be caught out or seen as odd. So it's kind of weird that you saw that high a percentage using those non-standard ports. Well, one thing they didn't specify in there is whether that's 49% of the non-cloud infrastructure traffic, or 49% of the entire Q1 data that they were looking at.
[00:17:16] Speaker A: Because that makes me wonder how many people are trying the whole security-by-obscurity thing and putting various websites that they don't want easily found on port 4043 or something like that.
[00:17:32] Speaker B: Yeah, I'm not sure, but I think the longer this goes on or just in the future, period, that percentage is going to start shrinking because what they're really doing is they're making themselves stand out and that's the last thing that they want to happen when defenders are looking at network traffic.
But the bottom line is, why is this important? To realize that this is happening is that if you can't identify or detect that malicious traffic, you can't stop it.
[00:18:11] Speaker A: Even more than that, if you are not intercepting that, you're not even getting the full data. Palo Alto, their firewalls have a proxying capability, but if it's TLS traffic and you're not doing interception, you can't get the whole URL. All you get is the domain. So like all of your proxy-based strategy, any threat intel you're doing where you're looking for known malicious domains, it just doesn't work.
[00:18:37] Speaker B: Yeah, because they're truncating that. Known malicious domains still works.
[00:18:41] Speaker A: But like known malicious URLs, where it's a kit that uses the same URL or whatever, malicious queries or command and control via queries, like all that's gone.
[00:18:51] Speaker B: And as you're saying, without that TLS interception, forget it. You might as well... it's getting to the point where it's almost not worth looking at the border traffic without the TLS interception now, because pointing an IDS at encrypted traffic isn't going to yield you anything.
[00:19:11] Speaker A: Yeah, like I said, you still get the domain, so you can use threat intel on the domains. You can still do statistical-based monitoring for C2 connections. There's things you can do, but they're not...
[00:19:22] Speaker B: Right, there are better tools for doing that, though, than an IDS, like NetFlow or just firewall data.
So what can you do about this? At the bottom of the report, the author points out that TLS can be implemented over any assigned IP port, and once that TLS handshake has taken place, it looks like any other traffic. So don't rely on the ports to determine what you're going to monitor and what you're not. I agree with that to a certain extent, but that should not be the only thing you do; I think you still should do it. So at least for your workstations, you should have your workstations and servers separated for one thing, so that your workstation traffic, you can say, can only go out to known ports, typical web ports, because the average workstation should not need to go out to non-standard ports, and you can have an exception process in place to deal with those when they come up. But only allowing your workstations to go out to 443, 80, and 8080, for instance, could help drop some of this off. It's obviously not a panacea, but it's certainly a step you can take in order to minimize that attack surface a little bit.
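On the monitoring side of that same idea, a minimal sketch of auditing flow or firewall logs for workstation egress to anything other than the expected web ports. The flow-log format (CSV of src_ip, dst_ip, dst_port), the workstation subnet, and the allowed-port set are all assumptions for illustration; the actual blocking would still live in your firewall or proxy policy, this just surfaces the outliers and exception candidates.

```python
# Minimal sketch: surface flows where a workstation talks to the Internet on
# a port outside the expected web set. Flow log format and subnet are assumed.
import csv
import ipaddress

WORKSTATION_NET = ipaddress.ip_network("10.10.0.0/16")  # assumed workstation range
ALLOWED_PORTS = {80, 443, 8080}                          # "standard" web egress ports

def odd_egress(flow_csv_path):
    """Yield (src, dst, port) where a workstation egresses on a non-standard port."""
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: src_ip, dst_ip, dst_port
            src = ipaddress.ip_address(row["src_ip"])
            dst = ipaddress.ip_address(row["dst_ip"])
            port = int(row["dst_port"])
            if src in WORKSTATION_NET and not dst.is_private and port not in ALLOWED_PORTS:
                yield row["src_ip"], row["dst_ip"], port

if __name__ == "__main__":
    for src, dst, port in odd_egress("flows.csv"):
        print(f"non-standard egress: {src} -> {dst}:{port}")
```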
[00:20:49] Speaker A: And I would love to find out why so many legitimate providers use non-standard ports. We see this all the time internally, at various places I've worked, where for some reason the developer or the software uses port 49000 instead of 80. Is there an actual good reason for that?
[00:21:09] Speaker B: Inside the network? There can be, because if you have a lot of...
Well, I guess I don't know if that's true anymore. It used to be you'd have servers that had many, many applications on them, several applications on a single box, and they couldn't all share one port. So I'd say that's probably really a holdover from the old days. But there are some servers, I know at least for like McAfee stuff.
ePO listens on three or four ports for different reasons: agent communication, management, et cetera. So you are going to get some non-standard ports in there internally, but at least crossing the boundary to the Internet, it should be pretty rare that you're going to see those non-standard ports.
[00:22:04] Speaker A: Like a web server serving a web page should not be serving on 8080 instead of 80.
[00:22:09] Speaker B: Exactly, which is why you should be able to do that limiting on the workstations. Whereas servers, you put them on a different network, you segment them off, and depending on what their function is, obviously they may need to connect out on non-standard ports, but that is less of a concern, because servers should only be doing server things for the purpose of the application installed on there, so you can whitelist their access to the Internet. Whereas workstations, you can't really do that. It's not feasible from a destination perspective. From a port perspective, you could probably do it, but not destination, where servers should have very limited, very narrow-scope Internet interaction.
[00:22:54] Speaker A: Get because you see a lot of this when it comes to content. You're like, well, just look for whatever's not normal and you found the bad guys and well, unfortunately nobody seems to write software that's normal anymore, or ever. I guess I shouldn't say anymore. People have never written software that's obeyed the rfcs.
[00:23:11] Speaker B: Yeah, just look at Internet Explorer. There are tools, I remember, you know, it's gone the way of the dodo now, but I remember you'd have to use Internet Explorer because Internet Explorer didn't care what the RFC said and neither did the tool. That's why you had to use Internet Explorer to connect; you couldn't use Firefox or whatever, because they more tightly adhered to what the RFC actually said for how to do that.
[00:23:41] Speaker A: That's ridiculous.
[00:23:43] Speaker B: So some of the other things you can do: have a proxy, and the idea is you'd have the proxy configured to treat servers and workstations differently to do that filtering, so that servers' Internet destinations are whitelisted and workstations' aren't. You can block Tor nodes with that.
If your proxy does website categorization, which absolutely is a must in my opinion, you can have it block the proxy or proxy-avoidance categories.
Something else you can do is use a CASB, and what the CASB is going to do for you is help you determine which cloud services your organization is using and which ones you aren't, and you can then manage the access to those cloud services with that CASB. And we mentioned earlier about doing the TLS inspection. Now, if you're doing the TLS inspection, whatever you're using to do that interception either has to have an intrusion detection system or some kind of security tool built into it to do that inspection, or has to have the capability to pipe that unencrypted traffic over to a tool to perform the inspection on that traffic. There's no point doing TLS interception if you aren't actually doing the inspection; you're just breaking it to reassemble it. Now, the challenge with doing that TLS inspection is sometimes it works and sometimes it doesn't. We were talking a minute ago about how you're going to have to have exceptions, because for some sites or some companies, it is simply not going to be allowed for you to do man in the middle for the traffic that you're sending to them. And of course you're going to have more overhead to manage those challenges, where you have an exception procedure or policy in order to deal with those when they come up. And if you have somebody managing your proxies, a reasonable portion of their time is going to be spent dealing with those exceptions. Now, something else to consider is standardizing your collaboration tools so you can then block the encrypted messaging and chat as well, because those are encrypted and used for C2 a lot of times.
So basically, if you're not using Slack, don't allow Slack.
They mentioned Discord in here. I'm not sure a lot of corporations are using Discord for official business, so that's something you can stop at the edge as well.
Any final thoughts on that, Matt?
[00:26:29] Speaker A: Nope. Just not doing the TLS interception makes detection awful hard, makes your analyst's life a lot harder, and means you're going to miss a whole lot more stuff. All right, for our second article, we are going to talk about a Krebs on Security article about the SolarWinds backdoor. Now, that being said, we're not going to dive into the SolarWinds backdoor itself. I'm sure everybody listening to the podcast has found it done to death. But the title of the article is, "Did someone at the Commerce Department find a SolarWinds backdoor in August 2020?" So last month, in May, Microsoft and Mandiant released their reports where they announced they found a fourth backdoor in the SolarWinds supply chain hack. And it turns out that if you look in VirusTotal, the very first sample of that fourth backdoor was uploaded in August 2020, months before the attack was first discovered by Mandiant in December. And if you go and look at other files that were uploaded by the same user, there were files that contained email addresses and usernames from a government agency. There was a reply to an email about FISMA compliance that attached data from the last quarter. There were a lot of interesting things uploaded by that person.
[00:27:42] Speaker B: I thought that was kind of amusing, that the report talking about the Q4 FISMA compliance was, almost to the day, a year before this malware was uploaded to VirusTotal. Thought that was kind of ironic.
[00:27:58] Speaker A: So in terms of discussion points, there's a bunch here. I think the first one is going to be that this person uploaded this in August 2020, and I'm sure that VirusTotal came back with zero out of 69 AV engine detections, probably said that it was perfectly fine, because at that time it hadn't been discovered yet. And if it had been flagged by one of the AV scanners, I'm sure somebody would have looked at it a little bit before now.
[00:28:25] Speaker B: What's interesting about that, what you just said there, and it makes a whole lot of sense, but when you look at the report that's in VirusTotal for that hash value that Krebs has in the article, it comes back and says when it was first submitted and when it was last submitted, which was the 22nd.
And of the 69 scan engines, there are now 51 that report it as malicious. And let's just talk about a couple of them in there. I picked out three of the big ones that most people are familiar with and aware of: Symantec, McAfee, and Kaspersky. The Symantec one calls it out as Backdoor.Goldmax, which is a name that was invented for the SolarWinds event, and it gives a date on there of March. But McAfee and Kaspersky... now, McAfee calls it, and this is what McAfee titles this detection, BehavesLike.Win64.BackdoorAgent.tc. I couldn't actually find the McAfee malware library online to get a better definition for this BackdoorAgent.tc, but that indicates that McAfee detected it through heuristics and not through an exact detection like Symantec's indicates. And because I couldn't find their signature library, I don't know when it was changed in order to ensure that it would detect that. But Kaspersky calls it Backdoor.Win64.Agent.iqj, and we'll have a link in the show notes to the Kaspersky page on this detection.
But the first date of detection for this Kaspersky signature is July 1 of 2016. And they call Backdoor.Win64.Agent a malware family, and then they give a specific variant indicator of that, iqj. So that seems to also be heuristics based.
So my assumption, based on that, would be that when this guy did submit it in 2020, Kaspersky would have identified it as such at that time because of the heuristics.
[00:31:08] Speaker A: You just gave me an idea for a conference talk.
[00:31:12] Speaker B: Okay. 10% of profits.
[00:31:15] Speaker A: No, because now I'm sitting here wondering, how quickly does it spread? Like, if you upload something to VirusTotal, and it says, like, one out of 69 or two out of 69, and then a week later, you go back and it says four out of 69.
Because right now, even still, like you pointed out, even a month and a half after this was released as an actual bad-guy thing, only 51 out of 69 AV engines detect it. I'm curious how the graph of when each one detects it varies against the timeline of release, et cetera, et cetera.
[00:31:50] Speaker B: I'm wondering if that would actually be stable as well, because that would give you an indication that Kaspersky, within 20 days, they generally change theirs, McAfee within 30, et cetera, et cetera. And maybe that curve is going to be kind of standard for all the vendors because they have a standard practice for how they do these things.
[00:32:11] Speaker A: Yeah, that's interesting. I don't know, that conference talk probably isn't as strong, but now I'm kind of like, that would make an interesting white paper.
[00:32:19] Speaker B: Yeah, I haven't logged into VirusTotal, so I'm not sure how detailed the information is there, whether you can pull that out or not.
[00:32:28] Speaker A: I don't know.
I've only used the UI, so I'm wondering. It's not available in the UI, because I went and looked with this one; I went to see for myself if I could find, for those submission dates, how many AV engines detected it at each submission. But it might be in the API.
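For what it's worth, a minimal sketch of pulling that report programmatically from the VirusTotal v3 file endpoint. The API key and hash are placeholders; as far as we can tell, the standard file report exposes the first and last submission dates plus the latest per-engine results, not the detection count at each historical submission, so the timeline the hosts want would likely need premium access or repeated polling over time.

```python
# Minimal sketch: pull submission dates and current per-engine verdicts for a
# hash from the VirusTotal v3 file report. API key and hash are placeholders.
from datetime import datetime, timezone
import requests

API_KEY = "YOUR_VT_API_KEY"           # placeholder
FILE_HASH = "<sha256-of-the-sample>"  # placeholder

def file_report(sha256):
    url = f"https://www.virustotal.com/api/v3/files/{sha256}"
    resp = requests.get(url, headers={"x-apikey": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

if __name__ == "__main__":
    attrs = file_report(FILE_HASH)
    for key in ("first_submission_date", "last_submission_date"):
        ts = attrs.get(key)
        if ts:
            print(key, datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    stats = attrs.get("last_analysis_stats", {})
    print("malicious:", stats.get("malicious"), "of", sum(stats.values()))
    # Per-engine names, like the Symantec/McAfee/Kaspersky ones discussed above.
    for engine, result in sorted(attrs.get("last_analysis_results", {}).items()):
        if result.get("category") == "malicious":
            print(f"{engine}: {result.get('result')}")
```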
Yeah, so that point is interesting. The other point you make about the heuristic detections is interesting as well. I would love to see that initial scan. I was making the assumption that nothing detected it, and that's why it wasn't further investigated. But if it was detected, and this group removed the backdoor and then did their own internal IR and then didn't tell anybody about it...
[00:33:18] Speaker B: That's something where you'd have to figure out what their internal practices are for incident response, because if you get a malware detection, I can see where you would submit it to VirusTotal and you would get some heuristics hits on Bob's AV scanner and be like, okay, yeah, that's great, but the big guys aren't detecting this, so I'm not going to be overly concerned about it. But you have Kaspersky and McAfee both potentially detecting this through heuristics. That should have been a red flag. And if they are detecting it, that would indicate that this organization is not running those two, and that would mean, and this is of course just my opinion, that part of their process should have been to take that malware and submit it to their antivirus vendor to create a signature for it.
[00:34:10] Speaker A: Yeah, or they just ignored it. I mean, that happens. There's a lot of software that triggers the heuristic and machine-learning detections.
Lexmark printer software is what this was hiding as. But I know a lot of Chinese software triggers the living crap out of machine-learning and heuristic detections, probably because the government asked them to build in some backdoors and surveillance technology. But I don't know, depending on the types of responses they got back, if all they got were heuristic and machine-learning responses and no specific signatures, they may have blown it off.
[00:34:48] Speaker B: Right. And it could be just a matter of their actual process: step one, submit it to VirusTotal; step two, look at what the number comes back as, and if it's less than ten, okay, brush it off.
[00:35:04] Speaker A: Yeah, I've seen that before, especially with automation products, where it even allows you to set a threshold in there for how many vendors at VirusTotal call it malicious before you agree with them. And usually it's between five and ten, specifically for that reason. Usually it's like Bob's no-name antivirus, like you're saying.
[00:35:24] Speaker B: Yeah, because one of the other things I was thinking about, considering when this was submitted, is, I know other threat intel people or incident response people, particularly in the government, are reticent to submit stuff to VirusTotal, because threat actors can monitor VirusTotal to see when their malware is discovered.
So it seems, considering that the last upload for this was just a couple of days ago, that the attacker didn't cease use of that malware.
[00:36:05] Speaker A: Yeah, I wonder what they thought. If they were watching that and they saw it come in and all of a sudden they hunker down, they're like, oh, crap, this whole thing is about to blow wide open, and then nothing happens until December.
[00:36:17] Speaker B: Right. Now, either that's going to give them an ego boost, or they're going to be like, man, these guys, their IR is terrible or something. Just wondering what the heck goes on in the offices over there when this kind of thing happens.
[00:36:34] Speaker A: I mean, most people that do IR and most analysts are looking for a reason that it's good. Like most SOCs don't have the time to look for every possible reason it could be bad.
They get an alert every ten minutes, an alert every 15 minutes, alert every hour. You don't have time to deep dive into this unless you've got on site malware reverse engineering that's capable. Because let's hypothetically say that this came back and it had a few heuristic detections but nothing else. And it's called Lexmark printers. Maybe they have Lexmark printers, and maybe there's one in the office, and that makes it plausible that somebody installed some Lexmark software. Like an analyst might just look at that and be like, you know, all right, there's nothing particularly bad about it. It's weird. It's weird, but it doesn't look malicious.
[00:37:27] Speaker B: Right. And what's interesting about the heuristics aspect of this, I just thought of this just now, is that McAfee and Kaspersky have at least, this is kind of my reasoning anyway, looked at this and said, we're not going to create a custom signature for this. Our heuristics is good enough on that.
[00:37:43] Speaker A: This is like the defining attack of this last year, and they're using heuristic detections, which is great; if their heuristic detection already caught it, that's a feather in their cap for sure. But again, heuristic detections maybe don't get quite as much import as specific detections when people are looking at them.
[00:38:04] Speaker B: Yeah, I'm trying to remember if you can turn that off, even.
[00:38:10] Speaker A: I know you can. In some places you can turn off the machine learning heuristic detection for sure.
[00:38:15] Speaker B: Yeah. So it could be that it simply was not turned on, because McAfee and Kaspersky, you would think, would have caught this much sooner. Of course, you can't use Kaspersky in the United States anymore, well, at least not in the government, but McAfee is heavily used in there, so I'm kind of surprised that that wasn't identified sooner.
[00:38:46] Speaker A: Yeah, but that actually goes to the next discussion point I have down here.
I don't think I looked specifically at this backdoor. I think I looked at the other three that were released initially, and a couple of them were really hard to find. One of them was bog simple to find and looked like it wasn't even written by the same group. But some of the way these guys did the command and control... I bet they did have Lexmark printers, because in one of the other backdoors, they mentioned that the backdoor was hidden as an executable in a folder named after an existing system management product in the environment. So these guys went to a lot of effort to hide it in there. So I bet there were Lexmark printers in there; that would match with the TTPs they used elsewhere of naming it after existing software. This was a tough attack to find. Nobody found it until Mandiant did. And Mandiant only found it because they were compromised.
[00:39:42] Speaker B: Yeah, well, Mandiant really found it because the attacker got sloppy by using their tools. It was a VPN, if I remember correctly, a VPN login attempt, which was the initial flag for FireEye.
I'll have to go back and validate that, but that's what my memory is telling me anyway.
[00:40:06] Speaker A: Gotcha. All right. And finally, the last discussion point is what kinds of things are being uploaded to VirusTotal, and this is one that I have had experience with in the past working with SOC analysts.
A lot of SOAR tools want you to upload everything.
You can buy API subscriptions to VirusTotal and various sandboxes. This isn't just VirusTotal, this is all of the free sandboxes that you can upload things to, and they want you to upload everything to them. And my experience has been, with phishing emails especially, more than anything else, people will submit the craziest things.
I have analyzed phishing emails submitted where people submitted their taxes, people submitted legal paperwork about their divorces, people have submitted financial statements about their holdings.
I think we've seen submitted salary lists of everybody in a department.
People will submit anything to the phishing mailbox. And you cannot just willy-nilly upload stuff to free services, because as we all know about free services, if you are not paying, then you are the product. And VirusTotal, they're making money off everything you upload into there.
They're selling it to the attacker, because the attackers check in to see when their stuff gets uploaded. They're also looking for sensitive information that companies are uploading in here, maybe audit results and things like that. AV companies are paying because they're looking for the next suite of malware, which is kind of why I'm surprised it took this long to find it, again, because I know AV companies are paying VirusTotal because they want the latest types of malware so they can keep their signatures up to date.
Intel companies are paying so that they can get a handle on the threats that are going out. It's just everybody's looking at this data and it blows my mind.
This article came out in April. It took eight months for them to figure out that somebody uploaded this to VirusTotal.
[00:42:14] Speaker B: And don't forget, just because you're paying VirusTotal doesn't mean you're exempted from this giant bucket that they're throwing everything in, either.
[00:42:21] Speaker A: Some of the cloud services do exempt you. Like if you pay for some of the sandbox services, they'll give you a private sandbox and they'll promise not to share your data. But VirusTotal does not do that.
[00:42:31] Speaker B: Right. And that's a double-edged sword. It helps the overall community, but can be challenging for individual organizations, because of what you were just talking about, where you can't necessarily automate the submission of malware, because who knows what's going to go up there. But I would say if your automated submission platform, whatever that is, can look at that file and say, only submit executables,
that might be something you could certainly automate even better.
[00:43:03] Speaker A: I know that at least one SOAR platform does this. Maybe they all do, it's possible. Some of them, you can restrict it to hash only, and then if the hash doesn't exist, you can move to a manual step and have somebody decide.
[00:43:15] Speaker B: Oh, that's even better.
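A minimal sketch of that hash-first flow, the kind of step a SOAR playbook could run before anything leaves the environment. The API key, the five-vendor threshold, and the manual-review verdict are illustrative assumptions; the point is simply that only the hash is looked up and the file itself is never auto-uploaded.

```python
# Minimal sketch: look up the file's hash at VirusTotal; if the hash is
# unknown, route to an analyst instead of uploading the file automatically.
import hashlib
import requests

API_KEY = "YOUR_VT_API_KEY"     # placeholder
MALICIOUS_THRESHOLD = 5         # how many engines must agree before trusting the verdict

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(path):
    sha256 = sha256_of(path)
    resp = requests.get(f"https://www.virustotal.com/api/v3/files/{sha256}",
                        headers={"x-apikey": API_KEY}, timeout=30)
    if resp.status_code == 404:
        # Hash unknown: do NOT upload the file; hand it to an analyst instead.
        return "manual_review", sha256
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    verdict = "malicious" if stats.get("malicious", 0) >= MALICIOUS_THRESHOLD else "likely_benign"
    return verdict, sha256

if __name__ == "__main__":
    print(triage("suspicious_attachment.bin"))
```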
[00:43:18] Speaker A: Yeah.
So it's accounted for. You just have to be careful. You have to make sure you're using the correct settings. So why does this matter? Well, first of all, just uploading files to VirusTotal to find the bad guys only works if you are not first, if you're not in the first list of targets. It only works after the attack has been discovered, which for most people, most of the time, is fine; a lot of the ransomware and stuff that's going on today is using the same ransomware families they used a year ago, two years ago, three years ago. But if you are an APT target and you are in that first list, everything they send you is going to be undetectable, usually.
[00:43:57] Speaker B: Yeah. It kind of has to build up a head of steam, if you will.
[00:44:00] Speaker A: Yeah. And that's why I'm kind of curious about that timeline thing. I'm curious how many days after the attack is publicly disclosed is VirusTotal consistently telling you that it's malicious? Interesting.
Second of all, again, I actually already mentioned this, but remember, if you're not paying for the product, you are the product.
If you need that privacy, you need to pay for it. You need to find a provider, a cloud sandbox provider maybe, that will promise to keep your files private. All right, what can you do about it? Well, first of all, don't automatically, and I mean that both literally in terms of SOAR and figuratively in terms of analysts following standard procedure, upload your files to the cloud sandboxes, VirusTotal, any of it. VirusTotal will allow you to remove files if they have sensitive info, but you have to submit a manual ticket, and they'll have been up there for some period of time, which means somebody will have downloaded them. I've definitely gotten notifications in the past that say, hey, we found a file that has your company's sensitive information in VirusTotal, you should probably take that down. Whoops.
[00:45:08] Speaker B: That was awfully courteous of them.
[00:45:10] Speaker A: It was courteous of them. It was someone we'd had a business relationship with in the past, and it was very courteous of them, and I appreciate it, truly. And then shrugging your shoulders and going, I don't know why it's doing that, is a terrible way to do analysis. That's kind of my mental model of what happened with this one, and the reason that it wasn't discovered in August of 2020. Again, I'm just picturing the analyst being like, well, I don't know what it's doing, but VirusTotal doesn't say it's bad, and I can't think of a reason why it's malicious. And I think this is just a general... I feel like I see a lot of analysts, especially junior-level analysts, that kind of lean this way. And I understand why: you're super busy, you've got a lot on your plate, stuff is coming in, things are moving faster and faster, and it's hard to take the time to really do a deep dive on everything that comes in, especially in this case, if they had Lexmark printers or something like that.
[00:46:08] Speaker B: I'm actually curious about what brought it to this guy's attention in the first place.
[00:46:12] Speaker A: Yeah, he had to go in there and probably manually upload it.
[00:46:16] Speaker B: Yeah. So something triggered him to go and look at it. I wonder what that was. It must not have been a really strong detection mechanism.
[00:46:23] Speaker A: Yep. Well, maybe they did have McAfee and it came up with a heuristic detection. He uploaded it to see what everybody else had.
[00:46:31] Speaker B: Could be. And then he said, oh, the Russians are saying it's malware, so forget it.
[00:46:35] Speaker A: I can't trust.
[00:46:36] Speaker B: Can't trust them.
[00:46:38] Speaker A: Can't trust them. All right, do you have any final thoughts on this one?
[00:46:41] Speaker B: I do not. I think I've expressed them all.
[00:46:44] Speaker A: All right, well, that looks like that's all the articles we have today. Thank you for joining us. Follow us at SerengetiSec on Twitter. Remember the other week where I said we had two followers? We have seven now. So join us.
[00:46:57] Speaker B: Let's double that.
[00:46:58] Speaker A: Let's get up to 14. And subscribe on your favorite podcast app.