Recently Tyler Hudak (@SecShoggoth) tweeted:
Oh Tyler, you had me at #Ubuntu! Tyler provided a link to the files and I grabbed them. Here’s the included readme.txt, just to set the scene:
This Ubuntu Linux honeypot was put online in Azure in early October with the sole purpose of watching what happens with those exploiting CVE-2021-41773.
Initially there was a large amount of cryptominers that hit the system. You will see one cron script that is meant to remove files named kinsing in /tmp. This was my way of preventing these miners so more interesting things could occur.
Then, as with many things, I got busy and forgot about it. Fast forward to now (early December) and I remembered it was still up. I logged on and saw CPU usage through the roof. Instead of just shutting it down, I grabbed a disk snapshot, memory snapshot, and ran a tool named UAC (https://github.com/tclahr/uac) to grab live response. The results of this are in this directory.
There are three files:
- sdb.vhd.gz - VHD of the main drive obtained through an Azure disk snapshot
- ubuntu.20211208.mem.gz - Dump of memory using Lime
- uac.tgz - Results of UAC running on the system
Items were obtained in the order above - drive was snapshotted, memory was grabbed, then UAC was run.
Please feel free to share this. All I ask is that if you do any analysis to share it with the community.
If anyone would like to offer a more permanent home for the files, please let me know.
Thanks!
Tyler Hudak
Before going any further, I wanted to find the cron job that Tyler mentions, just so I wouldn’t confuse his cleanup tool with actual intruder activity. There is an entry in /var/spool/cron/crontabs/root that invokes /root/.remove.sh every minute. /root/.remove.sh is simple enough:
#!/bin/bash
for PID in `ps -ef | egrep "kinsing|kdevtmp" | grep "/tmp" | awk '{ print $2 }'`
do
    kill -9 $PID
done
chown root.root /tmp/k*
chmod 444 /tmp/k*
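For reference, the one-minute schedule described above would look something like this in root’s crontab (the exact entry isn’t reproduced here, so treat it as illustrative):

* * * * * /root/.remove.sh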
We find a large number of /tmp/kinsing_* files and a couple of /tmp/kdevtmp* files. I did a quick verification that these were Kinsing and XMRig coin miners, respectively, and then forgot all about them. There’s much more interesting stuff to look at in this image!
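For anyone repeating that quick verification, a minimal sketch might look like the following (illustrative commands, not my exact steps; prefix the paths with your mount point if working from the image):

sha256sum /tmp/kinsing_* /tmp/kdevtmp*                    # hashes to check against threat intel / VirusTotal
strings /tmp/kdevtmp* | egrep -i 'xmrig|stratum|monero'   # XMRig builds embed miner/pool strings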
Other Strange Files in [/var]/tmp
While looking at Tyler’s cron job and its impact on the system, I couldn’t help noticing a couple of other interesting artifacts in the /tmp and /var/tmp directories.
- /var/tmp/dk86 was created 2021-11-11 19:09:51 UTC. The file is owned by user “daemon”, which is no surprise: this is the user the web server on the machine runs as. I’ll dive into this file in more detail in a future blog post.
- /tmp/Mozi.a and /tmp/Mozi.tm were both created on 2021-10-13. Mozi.a has a creation time of 13:45:20 and is owned by the root user. Mozi.tm appears at 13:45:48 and is owned by “azureuser” (UID 1000). Looking at /home/azureuser/.bash_history, I think these files were intentionally created by Tyler during some of his early research into ongoing attacks on the machine (correct me if I’m wrong, Tyler!). So I chose to ignore them. A sketch for pulling this kind of file metadata from the image follows below.
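Here’s that metadata sketch, assuming the VHD is mounted read-only at /mnt/honeypot (a hypothetical mount point):

stat /mnt/honeypot/var/tmp/dk86 /mnt/honeypot/tmp/Mozi.*   # owner, mode, and timestamps
# stat doesn't always report ext4 creation times; debugfs can pull crtime
# straight from the inode (loop device backing the mount assumed):
debugfs -R 'stat /var/tmp/dk86' /dev/loop0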
Looking into UAC
I’ve never used the UAC tool before, so I decided to start my investigation with that data and see how much useful information I could extract. The short answer is I found it very useful, particularly the process information collected by the tool in the …/liveresponse/process output directory.
lsof is one of my favorite Linux forensic tools, so I started with the “lsof_-nPl.txt” file. In particular, I began by looking at the current working directories of processes, searching for any that looked abnormal. Here’s a subset of the output (the second grep drops the many processes whose CWD is the root directory, inode 2):
# grep cwd lsof_-nPl.txt | grep -v '2 /'
cron 1029 0 cwd DIR 8,17 4096 68440 /var/spool/cron
bash 4205 1000 cwd DIR 8,17 4096 527081 /home/azureuser/src/LiME/src
sleep 6388 1 cwd DIR 8,17 0 528743 /var/tmp/.log/101068/.spoollog (deleted)
uac 6445 0 cwd DIR 8,17 4096 528610 /root/uac
uac 7755 0 cwd DIR 8,17 4096 528610 /root/uac
lsof 7978 0 cwd DIR 8,17 4096 528610 /root/uac
lsof 7984 0 cwd DIR 8,17 4096 528610 /root/uac
sudo 9303 0 cwd DIR 8,17 4096 527081 /home/azureuser/src/LiME/src
su 9314 0 cwd DIR 8,17 4096 527081 /home/azureuser/src/LiME/src
bash 9331 0 cwd DIR 8,17 4096 528610 /root/uac
sh 15853 1 cwd DIR 8,17 12288 4059 /tmp
sh 20645 1 cwd DIR 8,17 0 528743 /var/tmp/.log/101068/.spoollog (deleted)
sh 21785 1 cwd DIR 8,17 12288 4059 /tmp
python3 27968 0 cwd DIR 8,17 4096 1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3 27968 28623 0 cwd DIR 8,17 4096 1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3 27968 28625 0 cwd DIR 8,17 4096 1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3 27968 28627 0 cwd DIR 8,17 4096 1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3 27968 28630 0 cwd DIR 8,17 4096 1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
PIDs 20645 and 6388 are running from the deleted /var/tmp/.log/101068/.spoollog directory, so they are immediately of interest. I also noted shell processes (PIDs 15853 and 21785) running from /tmp, which also looks a bit strange to me. Note that all of the suspicious processes are running as UID 1, the “daemon” user; /etc/passwd from the system disk image confirms this, as shown below.
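A quick check against the image (again, the /mnt/honeypot mount point is hypothetical) shows the stock Ubuntu entry for UID 1:

grep '^daemon:' /mnt/honeypot/etc/passwd
# daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin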
What else is running as “daemon”? Let’s take a look at the “ps_-ef.txt” file created by UAC:
# awk '$1 == "daemon"' ps_-ef.txt
daemon 1003 1 0 Oct09 ? 00:00:00 /usr/sbin/atd -f
daemon 1693 801 0 Nov18 ? 00:00:48 /usr/sbin/httpd -k start
daemon 1813 801 0 Nov18 ? 00:00:40 /usr/sbin/httpd -k start
daemon 2539 801 0 Nov18 ? 00:00:39 /usr/sbin/httpd -k start
daemon 2632 801 0 Nov18 ? 00:01:23 /usr/sbin/httpd -k start
daemon 6388 20645 0 18:50 ? 00:00:00 sleep 300
daemon 6803 21785 0 18:51 ? 00:00:00 sleep 30
daemon 6830 15853 0 18:51 ? 00:00:00 sleep 30
daemon 15851 1 0 Nov30 ? 00:00:00 /bin/bash
daemon 15853 15851 0 Nov30 ? 00:25:04 sh
daemon 20645 1 0 Nov14 ? 03:01:59 sh .src.sh
daemon 21783 1 0 Nov30 ? 00:00:00 /bin/bash
daemon 21785 21783 0 Nov30 ? 00:25:02 sh
daemon 24330 1 49 Dec05 ? 1-16:41:54 agettyd -c noresetd
We see the web server on the system running as “daemon”. Unless the attackers bring along a privilege escalation tool, their exploits are likely to end up running as this user. /usr/sbin/atd running as “daemon” is typical for this Linux distribution, so I’ll ignore that process. But there’s an interesting story being told by the other processes in the listing above.
PID 20645 was started on November 14, and it is the parent of sleep PID 6388 (observe the PPID on PID 6388). These are the processes we saw above that were running from the deleted /var/tmp/.log/101068/.spoollog directory. Also note that PID 20645 was apparently started as “sh .src.sh”, which is definitely a suspicious command line.
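A quick way to double-check parent/child links like these is to pull the relevant rows back out of UAC’s ps output, matching on either the PID or PPID column (a sketch against the standard ps -ef column layout):

# ps -ef columns: UID PID PPID C STIME TTY TIME CMD
awk '$2 ~ /^(20645|6388)$/ || $3 ~ /^(20645|6388)$/' ps_-ef.txt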
UAC also captures some data from /proc for each process. The …/proc/20645/environ.txt file contains some noteworthy details. I’ve extracted and reordered the most relevant entries below:
REMOTE_ADDR=116.202.187.77
REMOTE_PORT=56590
HTTP_USER_AGENT=curl/7.79.1
HOME=/var/tmp/.log/101068/.spoollog/.api
PWD=/var/tmp/.log/101068/.spoollog
OLDPWD=/var/tmp
PYTHONUSERBASE=/var/tmp/.log/101068/.spoollog/.api/.mnc
REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh
SCRIPT_NAME=/cgi-bin/../../../../bin/sh
SCRIPT_FILENAME=/bin/sh
CONTEXT_PREFIX=/cgi-bin/
CONTEXT_DOCUMENT_ROOT=/usr/lib/cgi-bin/
The request URI is typical of the CVE-2021-41773 RCE. We see the IP address and port used by the requestor, though this is probably a VPN tunnel endpoint or Tor node rather than the attacker’s actual IP address. We also have a user agent string indicating that this was likely a scripted attack: curl is a command-line web client. The directories referenced in the environment variables tie back to the deleted /var/tmp/.log/101068/.spoollog directory that was the CWD of these processes. So these are definitely worth digging deeper into in a future blog post.
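If the encoded request URI is hard to read at a glance, a quick one-liner makes the path traversal obvious (assuming python3 on the analysis machine):

python3 -c 'from urllib.parse import unquote; print(unquote("/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh"))'
# -> /cgi-bin/../../../../bin/sh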
There are two different, but very similar, process hierarchies starting on Nov 30. Bash process 15851 starts sh process 15853, which runs sleep process 6830. Similarly, bash process 21783 starts sh process 21785, which runs sleep process 6803. The environ.txt files for these processes are nearly identical. PID 15851 was triggered by a request from 5.2.72.226:47374, while PID 21783 was started by a request from 104.244.76.13:36748. All of the other data is the same, so likely the same exploit was used, possibly by the same attacker:
HTTP_USER_AGENT=curl/7.79.1
REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash
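These requests use double URL-encoding: %%32%65 decodes to %2e, which in turn decodes to a dot. Applying the same one-liner twice makes the traversal explicit (shortened here to two traversal segments for readability):

python3 -c 'from urllib.parse import unquote; print(unquote(unquote("/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/bin/bash")))'
# -> /cgi-bin/../../bin/bash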
That leaves our mysterious agettyd process from Dec 5. Using the “running_processes_full_paths.txt” data dumped by UAC, you can see that this process is running from the deleted /tmp/agettyd binary, which is very abnormal. But when we look at the “environ.txt” data, it’s easy to see that this process is related to the PID 15851 process hierarchy from Nov 30.
REMOTE_ADDR=5.2.72.226
REMOTE_PORT=47374
HTTP_USER_AGENT=curl/7.79.1
REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash
The IP address, port, user agent, and all of the other request details match perfectly with the information for PID 15851. Clearly we will need to drill into this in more detail in a future blog post.
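As an aside: while a process like this is still running, its deleted executable can usually be recovered from /proc on the live system, since the kernel keeps the file contents alive (a sketch; UAC’s live response may collect similar data for you):

ls -l /proc/24330/exe                       # symlink reads "/tmp/agettyd (deleted)"
cp /proc/24330/exe /root/agettyd.recovered  # copy the still-open binary for analysis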
Coming Soon
Based on the triage I’ve done so far, my investigation has three main threads:
- Where did /var/tmp/dk86 come from, and what is it? (analysis in Part Two)
- What is the origin of the processes running from the deleted /var/tmp/.log/101068/.spoollog and how did the directory end up getting deleted? (analysis in Part Three)
- Can we tell whether the requests from 5.2.72.226 and 104.244.76.13 came from independent actors or from the same attacker using multiple IPs? How did the /tmp/agettyd process get created? (analysis in Part Four)
We’ll investigate these questions more deeply in upcoming blog posts.
Do you have a Volatility profile, or which Volatility profile did you use to read the LiME memory dump of this Ubuntu machine?
Thanks in advance,
I don’t have a profile for the memory image. But I found that the UAC data gave me everything I would typically pull from the memory image, so “good enough”.