The beginning
It’s the year 2018, somewhere around the end of October, on one of the most beautiful islands in the world. The weather is cold and rainy, and I am just about to finish my talk about the research we do at MRG Effitas. I throw a lot of technical words at the audience. Some follow my talk, others can only think about their next coffee. The city is surrounded by boring geysers, glaciers and volcanoes, but luckily the “conference” is about super interesting and exciting standards, procedures and templates on how to test antivirus products in a fair way.
After my talk ends, some rush to their daily dose of coffee, others are still processing what I just said. And suddenly someone familiar greets me: Dr. Hyrum Anderson, who works for Endgame. I can see he is very enthusiastic about what he is going to tell me. He shares his idea of organizing a machine learning evasion contest. As we both work with malware on a daily basis, we both know this is not about sticking our heads in the sand so we can avoid talking about ML in the future. I like the idea. I mean, I love the idea. Hyrum’s team can provide ML detection models for the competition, and together we can hunt for samples for the test. MRG Effitas can host the malware samples, and I can create a submission platform where contestants can submit their modified malware samples in the hope of bypassing the chosen ML models.
Development
It is easy to modify a sample in a way that it is not detected by machine learning models. It is a bit more challenging when the sample is a Windows executable file, because the modifications can change the behaviour of the modified malware sample. Therefore, we have to make sure that this does not happen. Luckily, there are already solutions to this problem.
I still remember the second day of Christmas. Instead of playing with my presents, I am already checking the API of VMRay to see how I can use it to achieve our goal. In February, a colleague of mine shows me Flask Admin, which is exactly the framework I am looking for: a simple, clean webserver with templates, developed in Python. Flask and Flask Admin are new to me, and it is both challenging and sometimes frustrating to work with a new framework. You know, the love and hate relationship. Love it when it works, hate it when it does not. In March, we decide to announce the competition at DEF CON 27 in August. In May, I already have a working site where the core functionality is in place. I try to keep things simple everywhere I can. Like, who wants to deal with user registration, lost passwords, multiple registrations for bypassing limits and stuff like that, when you can simply use Google sign-in?
Time flies, code does not. August approaches fast, so I do what every coder does in these cases: write code faster! Spoiler alert: it does not work. In July, I spend a lot of time finishing all the functionality. The prod environment is deployed to Amazon, and I put NGINX and gunicorn in front of the app. Not because I have to, just because I read this is best practice. To make TLS easy and the website fast, I put it behind Cloudflare, and I perform some tests to make sure that the CDN does not affect the application.
DEF CON
At DEF CON, Hyrum presents the competition to the people at AI Village. People are excited. Both because this is a unique challenge, and because the prize for the winner is a pretty nice GPU card. Handy when you are into Machine Learning. Or gaming. Or both. The competition starts, it is on.
The contest
When I get back to Budapest through Toronto (note to self: never fly Air Canada Rouge again), I am already greeted with valid complaints from the machine learning evasion contest participants on our Slack channel, saying that some things do not work. Around sixty commits and three weeks later, the framework more or less works. During these three weeks, the framework does a lot of things to drive the competitors crazy. Valid samples are marked as invalid, invalid samples are marked as valid, upload limits are reached. Some people think they have achieved the maximum score, when in fact they have not.
Finally, on August 28, 15:25 UTC, William Fleshman uploads his final piece of the puzzle and achieves the maximum 150 points. But on the same day, just some hours later, another contestant does the same. Some days later, both Hyrum and I verify the winning submissions.
The solutions
Looking at the solutions, contestants took the following routes:
- appending extra data to the executable, also known as overlay
- adding new sections to the executable, ideally sections taken from known benign files
- packing the samples with a packer
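The first route is the simplest to show in code. Below is a minimal, hypothetical sketch (my own illustration, not the contest tooling) of appending overlay bytes to a file; the “executable” here is just a fake MZ stub so the demo stays self-contained and ships no real binary:

```python
import os
import tempfile

def append_overlay(pe_path: str, extra: bytes) -> int:
    """Append bytes past the end of a PE file (the 'overlay').

    Windows loads only the parts described by the headers, so the
    appended bytes do not change the program's behaviour, but they do
    change the file's size and hashes. Returns the new file size.
    """
    with open(pe_path, "ab") as f:
        f.write(extra)
    return os.path.getsize(pe_path)

# Demo on a stand-in file (a fake MZ stub, not a loadable executable),
# just to show the mechanics.
path = os.path.join(tempfile.mkdtemp(), "sample.exe")
with open(path, "wb") as f:
    f.write(b"MZ" + b"\x00" * 62)
new_size = append_overlay(path, b"benign-looking padding" * 100)
```

The point of the sketch: the original bytes of the file are untouched, only data after them changes, which is exactly why this route is so attractive against hash- and ML-based checks.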
If you have read William’s post, you know that things are never as simple as they seem. Adding sections to certain malware files rendered the executable useless.
Also, if you have read Jakub’s post, you can see that packing files works on some of the samples, but not on all.
So, let’s start the bits and bytes section. How is it possible to detect these samples? Well, it turns out it is easy, because traditional AV signature scanning still detects files where data is simply appended as an overlay: the original bytes that the signatures match remain untouched.
When it comes to appending new sections to the PE, things get complicated. Some samples are detected by fewer AV engines. But why? I suspect certain AV engines take shortcuts for performance reasons and check simple things, like the number of sections, before a signature test is performed. Fun fact: this can even fool certain production ML engines. The original sample is detected by the ML engine, but not the one with the benign sections mentioned by William. Clearly, the bypass against ML engines works because the sample contains a lot of known benign sections, not because the modification changes the number of sections.
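To see how cheap such a shortcut is, here is a minimal sketch (an illustration of the idea, not any vendor’s actual code) that pulls NumberOfSections straight out of the COFF header, the way a performance-minded pre-filter might. The synthetic header at the bottom is a toy built just for the demo:

```python
import struct

def number_of_sections(data: bytes) -> int:
    """Read NumberOfSections from an in-memory PE image.

    e_lfanew, the offset of the PE signature, lives at 0x3C in the
    DOS header; NumberOfSections is the second COFF header field,
    6 bytes past the signature.
    """
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    (n_sections,) = struct.unpack_from("<H", data, e_lfanew + 6)
    return n_sections

# Synthetic header claiming five sections (a toy, not a loadable PE).
hdr = bytearray(0x40 + 24)
hdr[0:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)   # e_lfanew -> 0x40
hdr[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", hdr, 0x40 + 6, 5)  # NumberOfSections = 5
```

Two struct reads and a signature check: a filter this cheap is exactly the kind of thing an engine might consult before paying for a full signature pass, and exactly the kind of thing a section-adding modification perturbs.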
When it comes to packed files, most AV engines already have a solid unpacker engine in place. Nevertheless, packers are still the number one bypass technique against static AV signatures, because even slight modifications to the packer algorithm can break the unpacker engine. When it comes to most ML engines, things are a bit different. As most ML engines do not unpack the files, they mostly flag packed files as malicious, no matter what is inside. See for yourself:
https://www.virustotal.com/gui/file/ab63fe3355304293e22988a124e6c1bbbd169153f51511bc3c98275228d7c810/detection
Pack Windows calc.exe with Themida with a valid Taggant, and it is still flagged:
https://www.virustotal.com/gui/file/9b6a0037bbcd6bf7a697ddde550c76f3cbfd93f4a1d04129b9bf5fe58dafc5c0/detection
Pack Windows calc.exe with VMProtect and OMG happens:
https://www.virustotal.com/gui/file/d8c0820c44aaf23df13be3e960b6b211a4a95de100dcb1081fd6aacbc575547d/detection
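One reason packed files light up static ML engines is byte entropy: compressed or encrypted packer output looks close to random, and entropy is a classic feature in these models. A minimal sketch of the idea (an illustration, not any specific engine’s implementation):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Repetitive data scores near 0; a uniform byte spread (a crude model
# of compressed or encrypted packer output) scores the maximum 8 bits.
low = shannon_entropy(b"A" * 4096)
high = shannon_entropy(bytes(range(256)) * 16)
```

So a model trained mostly on unpacked benign files and packed malware learns “high entropy = suspicious”, which is exactly why a packed calc.exe can end up with verdicts like the ones in the links above.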
Moral of the story?
The more techniques are used to detect the samples, the harder it is for attackers to evade them all. Combine AV signatures with ML, and combine those with behaviour analysis and heuristics.
Is it still possible to bypass them? Yes.
Is it more difficult? Yes.
Will it produce more false positive alerts? Probably yes.
Footnote on SSDeep hashes
While checking the SSDeep hashes of the submitted files, I found a fun comparison between the SSDeep hashes of the original malware and those of the modified samples. Can you spot which sample was the original and which one was generated just to bypass the ML detection?
6144:E/R8QsWAXY1iBp0sixrdikpD3O3BBu5zFFZDE2x:IR80APONJvD3ORBuRFTE2x
49152:lOctKPaSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS:c2O
3072:9dCllRVeKvyR42egiZepm9EQ6SJzgOwi7mOmJY:9dA3OOLEQ5dIZ2
6144:9dA3OOLEQ5dIZHlxBM/lxBM/lxBM/lxBMe:9u3O+EQ5dIrMpMpMpMe
768:eMuijtHf5g7/IIG3bGcYDBSvFIWuePQDGEsgRMdd5rdZ4guUd4Fxnvx:7NW71rcYDAWeoDrsEud5rdqgGjnv
12288:I8Mr88Mr88Mr88Mr88Mr88Mr88Mr88Mr88Mr88MrZ:Ilr8lr8lr8lr8lr8lr8lr8lr8lr8lrZ
6144:E/R8QsWAXY1iBp0sixrdikpD3O3BBu5zFFZDE2x:IR80APONJvD3ORBuRFTE2x
49152:IOctnPjppprOctnPjppprOctnPjppprOctnPjppprOctnPjppprOctnPjppp:J2P02P02P02P02P02P
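If you want a hint: an SSDeep hash has the shape blocksize:chunk:double_chunk, and samples padded with repeated junk produce chunks with long repeated runs. A small sketch of a heuristic for spotting them (my own toy, not part of the contest framework), using two of the hashes above as test data:

```python
import zlib

def parse_ssdeep(h: str):
    """Split an SSDeep hash into (blocksize, chunk, double_chunk)."""
    blocksize, chunk, double_chunk = h.split(":")
    return int(blocksize), chunk, double_chunk

def compressibility(h: str) -> float:
    """Crude repetitiveness score for an SSDeep hash body.

    Hashes of samples padded with repeated data contain long repeated
    runs, which zlib squeezes far better than the hash of an
    'organic' file. Lower ratio means more repetitive.
    """
    _, chunk, double_chunk = parse_ssdeep(h)
    body = (chunk + double_chunk).encode()
    return len(zlib.compress(body)) / len(body)

original = "6144:E/R8QsWAXY1iBp0sixrdikpD3O3BBu5zFFZDE2x:IR80APONJvD3ORBuRFTE2x"
padded = "49152:lOctKPaSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS:c2O"
```

Run compressibility on each pair above and the padded impostors give themselves away immediately: their hash bodies compress to a fraction of their length, while the originals barely compress at all.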