XFS (Part 5) – Multi-Block Directories

Life gets more interesting when directories get large enough to occupy multiple blocks. Let’s take a look at my /etc directory:

[root@localhost hal]# ls -lid /etc
67146849 drwxr-xr-x. 141 root root 8192 May 26 20:37 /etc

The file size is 8192 bytes, or two 4K blocks.

Now we’ll use xfs_db to get more information:

xfs_db> inode 67146849
xfs_db> print
core.size = 8192
core.nblocks = 3
core.extsize = 0
core.nextents = 3
u3.bmx[0-2] = [startoff,startblock,blockcount,extentflag] 

I’ve removed much of the output here to make things more readable. The directory file is fragmented, requiring multiple single-block extents, which is common for directories in XFS. The directory would start as a single block. Eventually enough files will be added to the directory that it needs more than one block to hold all the file entries. But by this time, the blocks immediately following the original directory block have been consumed– often by the files which make up the content of the directory. When the directory needs to grow, it typically has to fragment.

What is really interesting about multi-block directories in XFS is that they are sparse files. Looking at the list of extents at the end of the xfs_db output, we see that the first two blocks are at logical block offsets 0 and 1, but the third block is at logical block offset 8388608. What the heck is going on here?

If you recall from our discussion of block directories in the last installment, XFS directories have a hash lookup table at the end for faster searching. When a directory consumes multiple blocks, the hash lookup table and “tail record” move into their own block. For consistency, XFS places this information at logical offset XFS_DIR2_LEAF_OFFSET, which is currently set to 32GB. 32GB divided by our 4K block size gives a logical block offset of 8388608.
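We can sanity-check that arithmetic with a couple of lines of Python (a sketch; XFS_DIR2_LEAF_OFFSET is the kernel's name for the 32GB constant):

```python
# XFS_DIR2_LEAF_OFFset is fixed at 32GB: the logical offset where
# multi-block directories keep their hash/leaf information.
XFS_DIR2_LEAF_OFFSET = 32 * 1024**3    # bytes

block_size = 4096                      # from the superblock
leaf_block = XFS_DIR2_LEAF_OFFSET // block_size
print(hex(leaf_block), leaf_block)     # 0x800000 8388608
```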

From a file size perspective, we can see that xfs_db agrees with our earlier ls output, saying the directory is 8192 bytes. However, the xfs_db output clearly shows that the directory consumes three blocks, which should give it a file size of 3*4096 = 12288 bytes. Based on my testing, the directory “size” in XFS only counts the blocks that contain directory entries.

We can use xfs_db to examine the directory data blocks in more detail:

xfs_db> addr u3.bmx[0].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xe3a7892d (correct)
dhdr.hdr.bno = 38872696
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0x220
dhdr.bestfree[0].length = 0x8
dhdr.bestfree[1].offset = 0x258
dhdr.bestfree[1].length = 0x8
dhdr.bestfree[2].offset = 0x368
dhdr.bestfree[2].length = 0x8
du[0].inumber = 67146849
du[0].namelen = 1
du[0].name = "."
du[0].filetype = 2
du[0].tag = 0x40
du[1].inumber = 64
du[1].namelen = 2
du[1].name = ".."
du[1].filetype = 2
du[1].tag = 0x50
du[2].inumber = 34100330
du[2].namelen = 5
du[2].name = "fstab"
du[2].filetype = 1
du[2].tag = 0x60
du[3].inumber = 67146851
du[3].namelen = 8
du[3].name = "crypttab"

I’m using the addr command in xfs_db to select the startblock value from the first extent in the array (the zero element of the array).

The beginning of this first data block is nearly identical to the block directories we looked at previously. The only difference is that single block directories have the magic number "XDB3", while data blocks in multi-block directories use "XDD3", as we see here. Remember that the value xfs_db labels dhdr.hdr.bno is actually the sector offset of this block, not its block number.

Let’s look at the next data block:

xfs_db> inode 67146849
xfs_db> addr u3.bmx[1].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xa0dba9dc (correct)
dhdr.hdr.bno = 38905568
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0xad8
dhdr.bestfree[0].length = 0x20
dhdr.bestfree[1].offset = 0xc18
dhdr.bestfree[1].length = 0x20
dhdr.bestfree[2].offset = 0xd78
dhdr.bestfree[2].length = 0x20
du[0].inumber = 67637117
du[0].namelen = 10
du[0].name = "machine-id"
du[0].filetype = 1
du[0].tag = 0x40
du[1].inumber = 67146855
du[1].namelen = 9
du[1].name = "localtime"

Again we see the same header information. Note that each data block has its own "free space" array, tracking available space in that data block.

Finally, we have the block containing the hash lookup table and tail record. We could use xfs_db to decode this block, but it turns out that there are some interesting internal structures to see here. Here’s the hex editor view of the start of the block:

Extent Directory Tail Block:

0-3     Forward link                        0
4-7     Backward link                       0
8-9     Magic number                        0x3df1
10-11   Padding                             zeroed
12-15   CRC32                               0xef654461

16-23   Sector offset                       38883440
24-31   Log seq number last update          0x2200008720
32-47   UUID                                e56c3b41-...-dd609cb7da71
48-55   Inode number                        67146849

56-57   Number of entries                   0x0126 = 294
58-59   Unused entries                      1
60-63   Padding for alignment               zeroed

The “forward” and “backward” links would come into play if this were a multi-node B+Tree data structure rather than a single block. Unlike previous magic number values, the magic value here (0x3df1) does not correspond to printable ASCII characters.

After the typical XFS header information, there is a two-byte value tracking the number of entries in the directory, and therefore the number of entries in the hash lookup table that follows. The next two bytes tell us that there is one unused entry– typically a record for a deleted file.
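A Python sketch of parsing this 64-byte header with the struct module follows. The field names are descriptive labels taken from the table above, not official kernel structure names, and the UUID is zeroed as a placeholder:

```python
import struct

# Layout of the v5 leaf block header, per the byte offsets above:
# forward/backward links, magic, padding, CRC32, sector offset ("bno"),
# LSN, UUID, owner inode, entry count, stale (unused) count, padding.
LEAF_HDR = struct.Struct(">IIHHI QQ16sQ HHI")

def parse_leaf_header(raw):
    (forw, back, magic, _pad, crc,
     bno, lsn, uuid, owner,
     count, stale, _pad2) = LEAF_HDR.unpack_from(raw, 0)
    return {"forw": forw, "back": back, "magic": magic,
            "bno": bno, "owner": owner,
            "count": count, "stale": stale}

# Synthetic header built from the values shown above:
raw = LEAF_HDR.pack(0, 0, 0x3DF1, 0, 0xEF654461,
                    38883440, 0x2200008720, b"\x00" * 16, 67146849,
                    294, 1, 0)
hdr = parse_leaf_header(raw)
print(hex(hdr["magic"]), hdr["count"], hdr["stale"])   # 0x3df1 294 1
```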

We find this unused record near the end of the hash lookup array. The entry starting at block offset 0x840 has an offset value of zero, indicating the entry is unused:

Extent Directory Tail block 0x820

Interestingly, right after the end of the hash lookup array, we see what appears to be the extended attribute information from an inode. This is apparently residual data left over from an earlier use of the block.

At the end of the block is data which tracks free space in the directory:

Extent Directory Tail Block 0xFFF

The last four bytes in the block are the number of blocks containing directory entries– two in this case. Preceding those four bytes is a “best free” array that tracks the length of the largest chunk of free space in each block. You will notice that the array values here correspond to the dhdr.bestfree[0].length values for each block in the xfs_db output above. When new directory entries are added, this array helps the file system locate the best spot to place the new entry.
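A short Python sketch shows how this tail data would be parsed. The data here is synthetic, but the 0x8 and 0x20 best-free values are taken from the dhdr.bestfree[0].length output above:

```python
import struct

def parse_leaf_tail(block):
    # Last 4 bytes: count of directory data blocks.
    (nblocks,) = struct.unpack_from(">I", block, len(block) - 4)
    # Preceding them: one 2-byte "best free" length per data block.
    start = len(block) - 4 - 2 * nblocks
    bests = struct.unpack_from(">%dH" % nblocks, block, start)
    return nblocks, list(bests)

# Synthetic 4K block: two data blocks whose largest free chunks
# are 0x8 and 0x20 bytes, matching the xfs_db output above.
blk = bytearray(4096)
blk[-8:] = struct.pack(">HHI", 0x8, 0x20, 2)
print(parse_leaf_tail(bytes(blk)))   # (2, [8, 32])
```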

We see the two bytes immediately before the “best free” array are identical to the first entry in the array. Did the /etc directory once consume three blocks and later shrink back to two? Based on limited testing, this appears to be the case. Unlike directories in traditional Unix file systems, which never shrink once blocks have been allocated, XFS directories will grow and shrink dynamically as needed.

So far we’ve looked at the three most common directory types in XFS: small “short form” directories stored in the inode, single block directories, and, as in this case, multi-block directories tracked with an extent array in the inode. In rare cases, when the directory is very large and very fragmented, the extent array in the inode is insufficient. In these cases, XFS uses a B+Tree to track the extent information. We will examine this scenario in the next installment.


XFS (Part 4) – Block Directories

In the previous installment, we looked at small directories stored in “short form” in the inode. While these small directories can make up as much as 90% of the total directories in a typical Linux file system, eventually directories get big enough that they can no longer be packed into the inode data fork. When this happens, directory data moves out to blocks on disk.

In the inode, the data fork type (byte 5) changes to indicate that the data is no longer stored within the inode. Extents are used to track the location of the disk blocks containing the directory data. Here is the inode core and extent list for a directory that only occupies a single block:

Inode detail for directory occupying a single block

The data fork type is 2, indicating an extent list follows the inode core. Bytes 76-79 indicate that there is only a single extent. The extent starts at byte 176 (0x0B0), immediately after the inode core. The last 21 bits of the extent structure show that the extent only contains a single block. Parsing the rest of the extent yields a block address of 0x8118e7, or relative block 71911 in AG 2.

We can extract this block and examine it in our hex editor. Here is the data in the beginning of the block:

Block Directory Header and Entries

The directory block begins with a 48 byte header:

0-3      Magic number                       XDB3
4-7      CRC32 checksum                     0xaf6a416d
8-15     Sector offset of this block        39409464

16-23    Last LSN update                    0x20000061fe
24-39    UUID                               e56c3b41-...-dd609cb7da71
40-47    inode that points to this block    0x0408e66d

You may compare the UUID and inode values in the directory block header with the corresponding values in the inode to see that they match.

The XFS documentation describes the sector offset field as the “block number”. However, using the formula from Part 1 of this series, we can calculate the physical block number of this block as:

(AG number) * (blocks per AG) + (relative block offset)
     2      *    2427136      +         71911   =   4926183

Multiply the block offset 4926183 by 8 sectors per block to get the sector offset value 39409464 that we see in the directory block header.
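The formula can be checked with a few lines of Python (blocks_per_ag comes from the superblock; this 4K-block file system uses 8 sectors per block):

```python
# Recompute the dhdr.hdr.bno sector offset for this directory block.
blocks_per_ag = 2427136        # from the superblock
ag, rel_block = 2, 71911       # decoded from the inode's extent

abs_block = ag * blocks_per_ag + rel_block
sector = abs_block * 8         # 8 sectors per 4K block
print(abs_block, sector)       # 4926183 39409464
```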

Following the header is a “free space” array that consumes 12 bytes, plus 4 bytes of padding to preserve 64-bit alignment. The free space array contains three elements which indicate where the three largest chunks of unused space are located in this directory block. Each element is a 2 byte offset and a 2 byte length field. The elements of the array are sorted in descending order by the length of each chunk.

In this directory block, there is only a single chunk of free space, starting at offset 1296 (0x0510) and having 2376 bytes (0x0948) of space. The other elements of the free space array are zeroed, indicating no other free space is available.

The directory entries start at byte 64 (0x040) and can be read sequentially like a typical Unix directory. However, XFS uses a hash-based lookup table, growing up from the bottom of the directory block, for more efficient searching:

Block Directory Tail Record and Hash Array

The last 8 bytes of the directory block are a “tail record” containing two 4 byte values: the number of directory entries (0x34 or 52) and the number of unused entries (zero). Immediately preceding the tail record will be an array of 8 byte records, one record per directory entry (52 records in this case). Each record contains a hash value computed from the file name, and the offset in the directory block where the directory entry for that file is located. The array is sorted by hash value so that binary search can quickly find the desired record. The offsets are in 8 byte units.

The xfs_db program can compute hash values for us:

xfs_db> hash 03_smallfile
0x3f07fdec

If we locate this hash value in the array, we see the byte offset value is 0x12 or 18. Since the offset units are 8 bytes, this translates to byte offset 144 (0x090) from the start of the directory block.
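The hash algorithm itself (xfs_da_hashname() in the kernel source) is simple enough to reproduce. Here is a Python sketch that confirms the hash value for "03_smallfile":

```python
def rol32(x, n):
    # 32-bit rotate left.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def xfs_da_hashname(name: bytes) -> int:
    # Consume the name four bytes at a time, as the kernel does,
    # then handle the 1-3 byte remainder.
    hashval = 0
    while len(name) >= 4:
        hashval = ((name[0] << 21) ^ (name[1] << 14) ^
                   (name[2] << 7) ^ name[3] ^ rol32(hashval, 28))
        name = name[4:]
    if len(name) == 3:
        return ((name[0] << 14) ^ (name[1] << 7) ^ name[2] ^
                rol32(hashval, 21)) & 0xFFFFFFFF
    if len(name) == 2:
        return ((name[0] << 7) ^ name[1] ^ rol32(hashval, 14)) & 0xFFFFFFFF
    if len(name) == 1:
        return (name[0] ^ rol32(hashval, 7)) & 0xFFFFFFFF
    return hashval

print(hex(xfs_da_hashname(b"03_smallfile")))   # 0x3f07fdec
```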

Here are the first six directory entries from this block, including the entry for “03_smallfile”:

Directory Entry Detail

Directory entries are variable length, but always 8 byte (64-bit) aligned. The fields in each directory entry are:

     Len (bytes)       Field
     ===========       =====
          8            Inode number
          1            File name length
          varies       File name
          1            File type
          varies       Padding for alignment
          2            Byte offset of this directory entry

64-bit inode addresses are always used. This is different from “short form” directories, where 32-bit inode addresses will be used if possible.

File name length is a single byte, limiting file names to 255 characters. The file type byte uses the same numbering scheme we saw in “short form” directories:

    1   Regular file
    2   Directory
    3   Character special device
    4   Block special device
    5   FIFO
    6   Socket
    7   Symlink

Padding for alignment is only included if necessary. Our “03_smallfile” entry starting at offset 0x090 is exactly 24 bytes long and needs no padding for alignment. You can clearly see the padding in the “.” and “..” entries starting at offset 0x040 and 0x050 respectively.
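The field layout and alignment arithmetic can be sketched in Python. The sample bytes below are synthesized from the "03_smallfile" values shown elsewhere in this series (inode 0x0417979f, tag 0x90), not read from disk:

```python
import struct

def parse_dirent(block, off):
    # Layout: 8-byte inode, 1-byte name length, name, 1-byte file type,
    # optional padding, and a 2-byte offset tag in the last two bytes.
    inum, namelen = struct.unpack_from(">QB", block, off)
    name = block[off + 9: off + 9 + namelen]
    ftype = block[off + 9 + namelen]
    entry_len = (12 + namelen + 7) & ~7    # round up to 8-byte alignment
    (tag,) = struct.unpack_from(">H", block, off + entry_len - 2)
    return inum, name, ftype, tag, entry_len

# Synthetic entry mirroring "03_smallfile": 24 bytes, no padding needed.
entry = (struct.pack(">QB", 0x0417979F, 12) + b"03_smallfile" +
         bytes([1]) + struct.pack(">H", 0x90))
inum, name, ftype, tag, entry_len = parse_dirent(entry, 0)
print(hex(inum), name, ftype, hex(tag), entry_len)
# 0x417979f b'03_smallfile' 1 0x90 24
```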

Deleting a File

If we remove “03_smallfile” from this directory, the inode updates similarly to what we saw with the “short form” directory in the last installment of this series. The mtime and ctime values are updated, and the CRC32 and Logfile Sequence Number fields as well. The file size does not change, since the directory still occupies one block.

The “tail record” and hash array at the end of the directory block change:

Tail record and hash array post delete

The tail record still shows 52 (0x34) entries, but one of them is now unused. If we look at the entry for hash 0x3F07FDEC, we see the offset value has been zeroed, indicating an unused record.

We also see changes at the beginning of the block:

Directory entry detail post delete

The free space array now uses the second element, showing 24 (0x18) bytes free at byte offset 0x90– the location where the “03_smallfile” entry used to reside.

Looking at offset 0x90, we see that the first two bytes of the inode field are overwritten with 0xFFFF, indicating an unused entry. The next two bytes are the length of the free space. Again we see 0x18, or 24 bytes.

However, since inode addresses in this file system fit in 32 bits, the original inode address associated with this file is still clearly visible. The rest of the original directory entry is untouched until a new entry overwrites this space. This should make file recovery easier.

Not Quite Done With Directories

When directories get large enough to occupy multiple blocks, the directory structure gets more complicated. We’ll examine larger directories in our next installment.

XFS (Part 3) – Short Form Directories

XFS uses several different directory structures depending on the size of the directory. For testing purposes, I created three directories– one with 5 files, one with 50, and one with 5000 file entries. Small directories have their data stored in the inode. In this installment we’ll examine the inode of the directory that contains only five files.

XFS Inode with Short Form Directory

We documented the “inode core” layout and the format of the extended attributes in Part 2 of this series. In this inode the file type (upper nibble of byte 2) is 4, which means it’s a directory. The data fork type (byte 5) is 1, meaning resident data.

Resident directory data is stored as a “short form” directory structure starting at byte offset 176, right after the inode core. First we have a brief header:

176      Number of directory entries                   5
177      Number of dir entries needing 64-bit inodes   0
178-181  Inode of parent                               0x04159fa1

First we have a byte tracking the number of directory entries to follow the header. The next byte tracks how many directory entries require 64 bits for inode data. As we saw in Part 1 of this series, XFS uses variable length addresses for blocks and inodes. In our file system, we need less than 32 bits to store these addresses, so there are no directory entries requiring 64-bit inodes. This means the directory data will use 32 bits to store inodes in order to save space.

This has an immediate impact because the next entry in the header is the inode of the parent directory. Since byte 177 is zero, this field will be 32 bits. If byte 177 was non-zero, then all inode entries in the header and directory entries would be 64-bit.

The parent inode field in the header is the equivalent of the usual “..” link in the directory. The current directory inode (the “.” link) is found in the inode core in bytes 152-159. The short form directory simply uses these values and does not have explicit “.” and “..” entries.
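Parsing this header is straightforward; here is a Python sketch using the values shown above (5 entries, no 64-bit inodes, parent inode 0x04159fa1):

```python
import struct

def parse_sf_header(data):
    # Byte 0: entry count; byte 1: count of entries needing 64-bit
    # inodes; then the parent inode (4 bytes if byte 1 is zero, else 8).
    count, i8count = data[0], data[1]
    if i8count == 0:
        (parent,) = struct.unpack_from(">I", data, 2)
        hdr_len = 6
    else:
        (parent,) = struct.unpack_from(">Q", data, 2)
        hdr_len = 10
    return count, i8count, parent, hdr_len

hdr = bytes([5, 0]) + struct.pack(">I", 0x04159FA1)
count, i8count, parent, hdr_len = parse_sf_header(hdr)
print(count, i8count, hex(parent), hdr_len)   # 5 0 0x4159fa1 6
```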

After the header come a series of variable length directory entries, packed as tightly as possible with no alignment constraints. Entries are added to the directory in order of file creation and are not sorted in any way.

Here is a description of the fields and a breakdown of the values for the five directory entries in this inode:

      Len (Bytes)      Field
          1            Length of file name (in bytes)
          2            Entry offset in non short form directory
          varies       Characters in file name
          1            File type
          4 or 8       Absolute inode address

Len    Offset     Name            Type      Inode
===    ======     ====            ====      =====
12     0x0060     01_smallfile    01        0x0417979d
10     0x0078     02_bigfile      01        0x0417979e
12     0x0090     03_smallfile    01        0x0417979f
10     0x00a8     04_bigfile      01        0x0417a154
12     0x00c0     05_smallfile    01        0x0417a155

First we have a single byte for the file name length in bytes. Like other Unix file systems, there is a 255 character file name limit.

The next two bytes are based on the byte offset the directory entry would have if it were a normal XFS directory entry and not packed into a short form directory in the inode. In a normal directory block, directory entries are 64-bit aligned and start at byte offset 96 (0x60) following the directory header and “.” and “..” entries. The directory entries here are all 18 or 20 bytes long, which means they would consume 24 bytes (0x18) in a normal directory block. Using a consistent numbering scheme for the offset makes it easier to write code that iterates through directory entries, even though the offsets don’t match the actual offset of each directory entry in the short form style.
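We can verify the offset arithmetic with a short Python sketch; the name lengths come from the table above, and the 0x60 starting offset and 8-byte alignment rule are as just described:

```python
def block_dir_offsets(namelens, start=0x60):
    # Each block-directory entry is 12 bytes of fixed fields plus the
    # name, rounded up to 8-byte alignment.
    offsets, pos = [], start
    for n in namelens:
        offsets.append(pos)
        pos += (12 + n + 7) & ~7
    return offsets

print([hex(o) for o in block_dir_offsets([12, 10, 12, 10, 12])])
# ['0x60', '0x78', '0x90', '0xa8', '0xc0']
```

These computed values match the Offset column in the table above.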

Next we have the characters in the file name followed by a single byte for the file type. The file type is included in the directory entry so that commands like “ls -F” don’t have to open each inode to get the file type information. The file type values in the directory entry do not use the same number scheme as the file type in the inode. Here are the expected values for directory entries:

    1   Regular file
    2   Directory
    3   Character special device
    4   Block special device
    5   FIFO
    6   Socket
    7   Symlink

Finally there is a field to hold the inode associated with the file name. In our example, these inode entries are 32 bits. 64-bit inode fields will be used if the directory header indicates they are needed.

Deleting a File

When a file is deleted from (or added to) a directory, the mtime and ctime in the directory’s inode core are updated. The directory file size changes (bytes 56-63). The CRC32 checksum and the logfile sequence number fields are updated.

In the data fork, all directory entries after the deleted entry are shifted downwards, completely overwriting the deleted entry. Here’s what the directory entries look like after “03_smallfile”– the third entry in the original directory– is deleted:

Short form directory entry after file deleted

The four remaining directory entries are highlighted above. However, after those entries you can clearly see the residue of the entry for “05_smallfile” from the original directory. So as short-form directories shrink, they leave behind entries in the unused “inode slack”. In this case the residue is for a file entry that still exists in the directory, but it’s possible that we might get residue of entries deleted from the end of the directory list.

When Directories Grow Up

Another place you can see short form directory residue is when the directory gets large enough that it needs to move out to blocks on disk. I created a sample directory that initially had five files and confirmed that it was being stored as a short form directory in the inode. Then I added 45 more files to the directory, which made a short form directory impossible. Here’s what the first part of the inode looks like after these two operations:

Extent directory with short form residue

The data fork type (byte 5) is 2, meaning an extent list after the inode core, giving the location of the directory content on disk. You can see the extent highlighted starting at byte offset 176 (0xb0). But immediately after that extent you can see the residue of the original short-form directory.

The format of directories changes significantly when directory entries move out into disk blocks. In our next installment we will examine the structures in these larger directories.

XFS (Part 2) – Inodes

Part 1 of this series was a quick introduction to XFS, the XFS superblock, and the unique Allocation Group (AG) based addressing scheme used in the file system. With this information, we were able to extract an inode from its physical location on disk.

In this installment, we will look at the structure of the XFS inode. Since we will want to see what remains in the inode after a file is deleted, I’m going to create a small file for testing purposes:

[root@localhost ~]# echo This is a small file >testfile
[root@localhost ~]# ls -i testfile
100799719 testfile

To save time, we’ll use the xfs_db program to convert that inode address into the values we need to extract the inode from its physical location on disk. Then we’ll use dd to extract the inode as we did in Part 1.

[root@localhost ~]# xfs_db -r /dev/mapper/centos-root
xfs_db> convert inode 100799719 agno 
0x3 (3)
xfs_db> convert inode 100799719 agblock
0x429c (17052)
xfs_db> convert inode 100799719 offset
0x7 (7)
xfs_db> ^D
[root@localhost ~]# dd if=/dev/mapper/centos-root bs=4096 \
                         skip=$((3*2427136 + 17052)) count=1 | 
                    dd bs=512 skip=7 count=1 >/home/hal/testfile-inode

Looking at the Inode

We can now view the inode in our trusty hex editor:

XFS Inode with Extent Array

XFS v5 inodes start with a 176 byte “inode core” structure:

0-1      Magic number                              "IN"
2-3      File type and mode bits (see below)       1000 000 110 100 100
4        Version (v5 file system uses v3 inodes)   3
5        Data fork type flag (see below)           2
6-7      v1 inode numlinks field (not used in v3)  zeroed
8-11     File owner UID                            0 (root)
12-15    File GID                                  0 (root)

16-19    v2+ number of links                       1
20-21    Project ID (low)                          0
22-23    Project ID (high)                         0
24-29    Padding (must be zero)                    0
30-31    Increment on flush                        0

32-35    atime epoch seconds                       0x5afdd6cd
36-39    atime nanoseconds                         0x2467330e
40-43    mtime epoch seconds                       0x5afdd6cd
44-47    mtime nanoseconds                         0x24767568

48-51    ctime epoch seconds                       0x5afdd6cd
52-55    ctime nanoseconds                         0x24767568
56-63    File (data fork) size                     0x15 = 21

64-71    Number of blocks in data fork             1
72-75    Extent size hint                          zeroed
76-79    Number of data extents used               1

80-81    Number of extended attribute extents      0
82       Inode offset to xattr (8 byte multiples)  0x23 = 35 (x8 = 280)
83       Extended attribute type flag (see below)  1
84-87    DMAPI event mask                          0
88-89    DMAPI state                               0
90-91    Flags                                     0 (none set)
92-95    Generation number                         0xa3fd42cd

96-99    Next unlinked ptr (if inode unlinked)    -1 (NULL in XFS)

/* v3 inodes (v5 file system) have the following fields */
100-103  CRC32 checksum for inode                  0xb43f0d10
104-111  Number of changes to attributes           1

112-119  Log sequence number of last update        0x2100006185
120-127  Extended flags                            0 (none set)

128-131  Copy on write extent size hint            0
132-143  Padding for future use                    0

144-147  btime epoch seconds                       0x5afdd6cd
148-151  btime nanoseconds                         0x2467330e
152-159  inode number of this inode                0x60214e7 = 100799719

160-175  UUID                                      e56c3b41-...-dd609cb7da71

XFS inodes start with the 2 byte magic number value “IN”. Inodes also have a CRC32 checksum (bytes 100-103) to help detect corruption. The inode includes its own absolute inode number (bytes 152-159) and the file system UUID (bytes 160-175), which should match the UUID value from the superblock. Whenever the inode is updated, bytes 112-119 track the “logfile sequence number” (LSN) of the journal entry for the update. The inode format has changed across different versions of the XFS file system, so refer to the inode version in byte 4 before decoding the inode. XFS v5 uses v3 inodes.

The size of the file (in bytes) is a 64-bit value in bytes 56-63. The original XFS inode tracked the number of links as a 16-bit value (bytes 6-7), which is no longer used. Number of links is now tracked as a 32-bit value found in bytes 16-19.

Timestamps include both a 32-bit “Unix epoch” style seconds field and a 32-bit nanosecond resolution fractional seconds field. The three classic Unix timestamps– atime, mtime, ctime– are found in bytes 32-55 of the inode. File creation time (btime) was only added in XFS v5, so that timestamp resides in bytes 144-151 in the upper portion of the inode core.

File ownership and permissions are tracked as in earlier Unix file systems. There are 32-bit file owner (bytes 8-11) and group owner (bytes 12-15) fields. File type and permissions are stored in a packed 16-bit structure. The low 12 bits are the standard Unix permissions bits, and the upper four bits are used for the file type.

The file type nibble will be one of the following values:

   8   Regular file
   4   Directory
   2   Character special device
   6   Block special device
   1   FIFO
   C   Socket
   A   Symlink

The 12 permissions bits are grouped into four groups of 3 bits, and are often written in octal notation– in our case we have 0644. The first group of three represents the “special” bit flags: set-UID, set-GID, and “sticky” (none of these are set for our test file). The remaining three groups represent “read” (r), “write” (w), and “execute” (x) permissions for three categories. The first set of bits applies to the file owner, the second to members of the Unix group that owns the file, and the last group to everybody else. The permissions on our test file are 644 or 110 100 100 aka rw-r--r--. In other words, read and write access for the file owner, and read-only access for group members and for all other users on the system.
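Decoding the packed mode word takes only a couple of operations; here is a sketch using the 0x81A4 value from bytes 2-3 of our sample inode:

```python
mode = 0b1000000110100100     # bytes 2-3 of the inode core: 0x81A4

file_type = mode >> 12        # upper nibble: 8 = regular file
perms = mode & 0o7777         # low 12 bits, including the special bits
print(file_type, oct(perms))  # 8 0o644
```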

The remaining space after the 176 bytes of inode core is used to track the data blocks associated with the file (the “data fork” of the file) and any extended attributes that may be set. There are multiple ways in which data and attributes may be stored– locally resident within the inode, in a series of extents, or in a more complex B+Tree indexed structure. The data fork type flag in byte 5 and the extended attribute type flag in byte 83 document how this information is organized. The possible values for these fields are:

   0   Special device file (data fork only)
   1   Data is resident ("local") in the inode
   2   Array of extent structures follows
   3   B+Tree root follows

Currently XFS only uses resident or “local” storage for extended attributes and small directories. There is a proposal to allow small files to be stored in the inode (similar to NTFS), but this is still under development. The data fork for our small test file is type 2– an array of extent structures. The extended attributes are type 1, meaning they are stored locally in the inode.

The data fork starts at byte 176, immediately after the inode core. The start of the extended attribute data is found at an offset from the end of the inode core. This offset is byte 82 of the inode core, and the units are multiples of 8 bytes. In our sample inode, the offset value is 0x23 or 35. Multiplying by 8 gives a byte offset of 280 from the end of the inode core, or 176+280=456 bytes from the beginning of the inode.

Extent Arrays

The most common storage option for file content in XFS is data fork type 2– an array of 16 byte extent structures starting immediately after the inode core. Bytes 76-79 indicate how many extent structures are in the array. Our file is not fragmented, so there is only a single extent structure in the inode.

Theoretically, the 336 bytes following the inode core could hold 21 extent structures, assuming no extended attribute data. If the inode cannot hold all of the extent information (an extremely fragmented file), then the data fork in the inode becomes the root of a B+Tree (data fork type 3) for tracking extent information. We will see an example of this in a later installment in this series.

The challenging thing about XFS extent structures is that they are not byte aligned. They contain four fields as follows:

  • Flag (1 bit) – Set if extent is preallocated but not yet written, zero otherwise
  • Logical offset (54 bits) – Logical offset from the start of the file
  • Starting block (52 bits) – Absolute block address of the start of the extent
  • Length (21 bits) – Number of blocks in the extent

If you think this makes manually decoding XFS extent information challenging, you’d be correct. Let’s break the extent structure down into individual bits in order to make decoding a bit easier. The extent starts at byte offset 176 (0xb0), and I’ll use a little command-line magic to see the bits:

[root@localhost ~]# xxd -b -c 4 /home/hal/testfile-inode | 
                       grep -A3 0b0:
00000b0: 00000000 00000000 00000000 00000000  ....
00000b4: 00000000 00000000 00000000 00000000  ....
00000b8: 00000000 00000000 00011000 00001000  ....
00000bc: 00001111 00100000 00000000 00000001  . ..

Flag bit (1 bit): 0
logical offset (54 bits): 0
absolute start block (52 bits): 
    0 00000000 00000000 00000000 00011000 00001000 00001111 001

    0000 0000 0000 0000 0000 0000 0000 1100 0000 0100 0000 0111 1001
      0    0    0    0    0    0    0    C    0    4    0    7    9

    block 0xC04079 aka relative block 0x4079 (16505) in AG 3

block count (21 bits): 1

Let’s check and see if we decoded the structure correctly:

[root@localhost ~]# dd if=/dev/mapper/centos-root bs=4096 
                       skip=$((3*2427136 + 16505)) count=1 | xxd
0000000: 5468 6973 2069 7320 6120 736d 616c 6c20  This is a small 
0000010: 6669 6c65 0a00 0000 0000 0000 0000 0000  file............
0000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
[... all zeroes to end ...]

Looks like we got it right. Note that XFS null fills file slack space, which is typical for Unix file systems.
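The same bit-slicing can be done programmatically. Here is a small Python helper (a sketch; the 22-bit relative block width in the AG split is specific to this file system's AG size):

```python
def decode_extent(raw: bytes):
    # 128-bit big-endian packed record: 1-bit flag, 54-bit logical
    # offset, 52-bit start block, 21-bit block count.
    v = int.from_bytes(raw, "big")
    flag = v >> 127
    logical = (v >> 73) & ((1 << 54) - 1)
    startblock = (v >> 21) & ((1 << 52) - 1)
    count = v & ((1 << 21) - 1)
    return flag, logical, startblock, count

# The 16 bytes at offset 0xb0 of our sample inode:
raw = bytes.fromhex("000000000000000000001808" "0f200001")
flag, logical, startblock, count = decode_extent(raw)

# This file system needs 22 bits for the relative block number.
ag, rel = startblock >> 22, startblock & ((1 << 22) - 1)
print(flag, logical, hex(startblock), ag, rel, count)
# 0 0 0xc04079 3 16505 1
```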

Extended Attributes

XFS allows arbitrary extended attributes to be added to the file. Attributes are simply name, value pairs. There is a 255 byte limit on the size of any attribute name or value. You can set or view attributes from the command line with the “attr” command.

If the amount of attribute data is small, extended attributes will be stored in the inode, just as they are in our sample file. Large amounts of attribute information may need to be stored in data blocks on disk, in which case the attribute data is tracked using extents just like the data fork.

As we discussed above, resident attribute information starts at a specific byte offset from the end of the inode core. In our sample file the offset is 280 bytes from the end of the inode core or 456 bytes (280 + 176) from the start of the inode.

Attributes start with a four byte header:

456-457  Length of attributes               0x34 = 52
458      Number of attributes to follow     1
459      Padding for alignment              0

The length field is in bytes and includes the four-byte header. Our sample file only contains a single attribute.

Each attribute structure is variable length, to allow attributes to be packed as tightly as possible. Each attribute structure starts with a single byte for the name length, then a byte for the value length, and a flag byte. The rest of the attribute structure is the name followed by the value, with no null terminators or padding for byte alignment.

Breaking down the single attribute we have in our sample inode, we see:

460      Length of name                     7
461      Length of value                    0x26 = 38
462      Flags                              4 
463-469  Attribute name                     selinux
470-507  Attribute value                    unconfined_u:...

This attribute holds the SELinux context on our file, “unconfined_u:object_r:admin_home_t:s0”. While extended attribute values are not required to be null-terminated, SELinux expects its context labels to have null terminators. So the 38-byte value length is 37 printable characters and a null.

The flags field is designed to control access to the attribute information. The flags byte is defined as a bit mask, but only four values appear to be used currently:

   128   Attribute is being updated
     4   "Secure" - attribute may be viewed by all but only set by root
     2   "Trusted" - attribute may only be viewed and set by root
     0   No restrictions
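To tie the structures together, here’s a little Python sketch that walks a shortform attribute fork as described above. Since I haven’t reproduced the full hex dump, the fork bytes are reconstructed from the byte-level breakdown of our sample inode:

```python
import struct

def parse_shortform_attrs(buf):
    """Walk an XFS shortform attribute fork: a 4-byte header (total length,
    count, pad) followed by tightly packed (namelen, valuelen, flags,
    name, value) entries with no padding or null terminators."""
    totsize, count, _pad = struct.unpack(">HBB", buf[:4])
    pos, attrs = 4, []
    for _ in range(count):
        namelen, valuelen, flags = buf[pos], buf[pos + 1], buf[pos + 2]
        pos += 3
        name = buf[pos:pos + namelen];   pos += namelen
        value = buf[pos:pos + valuelen]; pos += valuelen
        attrs.append((flags, name, value))
    return totsize, attrs

# Rebuild the attribute fork from the byte-level breakdown above:
fork = (struct.pack(">HBB", 0x34, 1, 0)      # header: 52 bytes, 1 attr
        + bytes([7, 38, 4]) + b"selinux"     # entry header + name
        + b"unconfined_u:object_r:admin_home_t:s0\x00")
totsize, attrs = parse_shortform_attrs(fork)
```

Running the parser on the reconstructed fork returns our single “selinux” attribute with flags value 4 and the 38-byte null-terminated context label.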

The Inode After Deletion

When a file is deleted, changes are limited to a small number of fields in the inode core:

  • The 2 byte file type and permissions field is zeroed
  • Link count, file size, number of blocks, and number of extents are zeroed
  • ctime is set to the time the file was deleted
  • The offset to the extended attributes is zeroed
  • The data fork and extended attribute type bytes are set to 2, which would normally mean an extent array
  • The “Generation number” field (inode bytes 92-95) is incremented–more testing is required, but it appears this field may be a usage count for the inode
  • The CRC32 checksum and the LSN are updated

No other data in the inode changes. So while the number of extents value is zeroed and so is the offset to the start of the extended attributes, the actual extent and attribute data remains in the inode.

This means it should be straightforward to recover the original file by parsing whatever extent data exists starting at inode offset 176. The XFS FAQ points to two Open Source projects that appear to use this idea to recover deleted files, and a little Google searching turns up several commercial tools that claim to do XFS file recovery.

I have not had the opportunity to test any of these tools.

In limited testing it also appears that the data fork and the extended attribute information are not zeroed when the inode is reused. This means there is the possibility of finding remnants of data from a previous file in the unused or “slack” space in the inode.
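As a proof of concept, here’s a Python sketch of pulling the surviving first extent out of a deleted inode image, reusing the extent bit layout we decoded earlier. It only handles the simple case– a single-extent, extent-format data fork– and a real recovery tool would also need to deal with multi-extent files and B+Tree data forks:

```python
import struct

def first_extent_of_deleted(inode):
    """Given a raw 512-byte inode image, return (startblock, blockcount)
    for the first extent record at offset 176, or None if this doesn't
    look like a deleted extent-format inode."""
    if inode[0:2] != b"IN":                    # inode core magic number
        return None
    mode = struct.unpack(">H", inode[2:4])[0]
    if mode != 0:                              # type/permissions not zeroed
        return None                            # => inode not deleted
    hi, lo = struct.unpack(">QQ", inode[176:192])
    start = ((hi & 0x1FF) << 43) | (lo >> 21)  # 52-bit start block
    count = lo & ((1 << 21) - 1)               # 21-bit block count
    return (start, count) if count else None
```

From there you would convert the start block to a physical offset and dd out the blocks, just as we did when verifying the extent decode above.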

Using xfs_db to View Inodes

xfs_db allows you to quickly view the inode values, even for inodes that are currently unallocated:

[root@localhost ~]# xfs_db -r /dev/mapper/centos-root
xfs_db> inode 100799719
xfs_db> print
core.magic = 0x494e
core.mode = 0
core.version = 3
core.format = 2 (extents)
core.nlinkv2 = 0
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 0
core.gid = 0
core.flushiter = 0
core.atime.sec = Thu May 17 16:41:15 2018
core.atime.nsec = 821506703
core.mtime.sec = Thu May 17 16:41:15 2018
core.mtime.nsec = 821506703
core.ctime.sec = Thu May 17 22:10:07 2018
core.ctime.nsec = 163429238
[... additional output not shown...]

xfs_db even converts the timestamps for you, so that’s a win.

What’s Next?

XFS does not store file name information in the inode, which is pretty typical for Unix file systems. The only place where file names exist is in directory entries. In our next installment we will begin to examine the different XFS directory types. Yes, it’s complicated.

XFS (Part 1) – The Superblock

The XFS file system was originally developed by Silicon Graphics for their IRIX operating system. The Linux version is increasingly popular– Red Hat has adopted XFS as their default file system as of Red Hat Enterprise Linux v7. Unfortunately, while XFS is becoming more common on Linux systems, we are lacking forensic tools for decoding this file system. This series will provide insights into the XFS file system structures for forensics professionals, and document the current state of the art as far as tools for decoding XFS.

I would like to thank the XFS development community for their work on the file system and their help in preparing these articles. Links to the documentation, source code, and the mailing list are available from XFS.org. I wouldn’t have been able to do any of this work without these resources.

A Quick Overview of XFS

XFS is a modern journaled file system which uses extent-based file allocation and B+Tree style directories. XFS supports arbitrary extended file attributes. Inodes are dynamically allocated. The block size is 4K by default, but can be set to other values at file system creation time. All file system metadata is stored in “big endian” format, regardless of processor architecture.

Some of the structures in XFS are recognizable from older Unix file systems. XFS still uses 32-bit signed Unix epoch style timestamps, and has the “Year 2038” rollover problem as a result. XFS v5– the version currently used in Linux– does have a creation date (btime) field in addition to the normal last modified (mtime), access time (atime), and metadata change time (ctime) timestamps. XFS timestamps also have an additional 32-bit nanosecond resolution element. File type and permissions are stored in a packed 16-bit value, just like in older Unix file systems.

Very little data gets overwritten when files are deleted in XFS. Directory entries are simply marked as unused, and the extent data in the inode is still visible after deletion. File recovery should be straightforward.

In addition, standard metadata structures in XFS v5 contain a consistent unique file system UUID value, along with information like the inode value associated with the data structure. Metadata structures also have unique “magic number” values. These features facilitate file system and data recovery, and are very useful when carving or viewing raw file system data. Metadata structures include a CRC32 checksum to help detect corruption.

One interesting feature of XFS is that a single file system is subdivided into multiple Allocation Groups– four by default on RHEL systems. Each allocation group (AG) can be treated as a separate file system with its own inode and block lists. The intention was to allow multiple threads to write in parallel to the same file system with minimal interaction. This makes XFS quite a high-performing file system on multi-core systems.

It also leads to a unique addressing scheme for blocks and inodes that uses a combination of the AG number and a relative block or inode offset within that AG. These values are packed together into a single address, normally stored as a 64-bit value. However the actual length of the relative portion of the address and the AG value can vary from file system to file system, as we will discuss below. In other words, it’s complicated.

The Superblock

As with other Unix file systems, XFS starts with a superblock which helps decode the file system. The superblock occupies the first 512 bytes of each XFS AG. The primary superblock is the one in AG 0 at the front of the file system, with the superblocks in the other AGs used for redundancy.

Only the first 272 bytes of the superblock are currently used. Here is a breakdown of the information from the superblock:

XFS AG0 Superblock

0-3      Magic Number                       "XFSB"
4-7      Block Size (in bytes)              0x1000 = 4096
8-15     Total blocks in file system        0x942400 = 9,708,544

16-23    Num blocks in real-time device     zeroed
24-31    Num extents in real-time device    zeroed

32-47    UUID                               e56c3b41-...-dd609cb7da71

48-55    First block of journal             0x800004 = 8388612
56-63    Root directory's inode             0x40 = 64

64-71    Real-time extents bitmap inode     0x41 = 65
72-79    Real-time bitmap summary inode     0x42 = 66

80-83    Real-time extent size (in blocks)  0x01
84-87    AG size (in blocks)                0x250900 = 2,427,136 (c.f. 8-15)
88-91    Number of AGs                      0x04
92-95    Num of real-time bitmap blocks     zeroed

96-99    Num of journal blocks              0x1284 = 4740
100-101  File system version and flags      0xB4B5 (low nibble is version)
102-103  Sector size                        0x200 = 512
104-105  Inode size                         0x200 = 512
106-107  Inodes/block                       0x08
108-119  File system name                   not set-- zeroed
120      log2(block size)                   0x0C (2^^12 = 4096)
121      log2(sector size)                  0x09 (2^^9 = 512)
122      log2(inode size)                   0x09
123      log2(inode/block)                  0x03 (2^^3 = 8 inode/block)
124      log2(AG size) rounded up           0x16 (2^^22 = 4M > 2,427,136)
125      log2(real-time extents)            zeroed
126      File system being created flag     zeroed
127      Max inode percentage               0x19 = 25%

128-135  Number of allocated inodes         0x2C500 = 181504
136-143  Number of free inodes              0x385 = 901

144-151  Number of free blocks              0x8450dc = 8,671,452
152-159  Number of free real-time extents   zeroed

160-167  User quota inode                   -1 (NULL in XFS)
168-175  Group quota inode                  -1 (NULL in XFS)

176-177  Quota flags                        zero
178      Misc flags                         zero
179      Reserved                           Must be zero
180-183  Inode alignment (in blocks)        0x04
184-187  RAID unit (in blocks)              zeroed
188-191  RAID stripe (in blocks)            zeroed

192      log2(dir blk allocation granularity)         zero
193      log2(sector size of externl journal device)  zero  
194-195  Sector size of external journal device       zero
196-199  Stripe/unit size of external journal device  0x01
200-203  Additional flags                             0x018A
204-207  Repeat additional flags (for alignment)      0x018A

/* Version 5 only */
208-211  Read-write feature flags (not used)          zero
212-215  Read-only feature flags                      zero
216-219  Read-write incompatibility flags             0x01
220-223  Read-write incompat flags for log (unused)   zero

224-227  CRC32 checksum for superblock                0x0A5832D0
228-231  Sparse inode alignment                       zero
232-239  Project quota inode                          -1

240-247  Log seq number of last superblock update     0x19000036EA
248-263  UUID used if INCOMPAT_META_UUID feature      zeroed
264-271  If INCOMPAT_META_RMAPBT, inode of RM btree   zeroed

Rather than discussing all of these fields in detail, I am going to focus in on the fields we need to quickly get into the file system.

First we need basic file system structure size information like the block size (bytes 4-7) and inode size (bytes 104-105). XFS v5 defaults to 4K blocks and 512 byte inodes, which is what we see here.

As we’ll discuss below, the number of AGs (bytes 88-91) and the size of each AG in blocks (bytes 84-87) are critical for locating data on the storage device. This file system has 4 AGs which each contain 2,427,136 blocks (roughly 9.9GB per AG, or just under 40GB for the file system).

The superblock contains the inode number of the root directory (bytes 56-63)– this value is normally 64. We also find the starting block of the file system journal (bytes 48-55) and the journal length in blocks (bytes 96-99). We’ll cover the journal in a later article in this series.
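If you’d rather not count byte offsets by hand, the handful of fields we care about can be pulled out with a short Python function. The offsets match the table above; the field names are my own shorthand rather than the official structure member names:

```python
import struct

def parse_sb(sb):
    """Extract the navigation-critical fields from a raw 512-byte
    XFS superblock (all values are big-endian)."""
    if sb[0:4] != b"XFSB":
        raise ValueError("bad superblock magic")
    return {
        "blocksize": struct.unpack(">I", sb[4:8])[0],     # bytes 4-7
        "dblocks":   struct.unpack(">Q", sb[8:16])[0],    # bytes 8-15
        "logstart":  struct.unpack(">Q", sb[48:56])[0],   # bytes 48-55
        "rootino":   struct.unpack(">Q", sb[56:64])[0],   # bytes 56-63
        "agblocks":  struct.unpack(">I", sb[84:88])[0],   # bytes 84-87
        "agcount":   struct.unpack(">I", sb[88:92])[0],   # bytes 88-91
        "logblocks": struct.unpack(">I", sb[96:100])[0],  # bytes 96-99
        "inodesize": struct.unpack(">H", sb[104:106])[0], # bytes 104-105
        "agblklog":  sb[124],                             # byte 124
    }

# Usage: parse_sb(open("/dev/mapper/centos-root", "rb").read(512))
```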

While looking at file system metadata in a hex editor is always fun, XFS does include a program named xfs_db which allows for more convenient decoding of various file system structures. Here’s an example of using xfs_db to decode the superblock of our example file system:

[root@localhost XFS]# xfs_db -r /dev/mapper/centos-root
xfs_db> sb 0
xfs_db> print
magicnum = 0x58465342
blocksize = 4096
dblocks = 9708544
rblocks = 0
rextents = 0
uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71

“xfs_db -r” allows read-only access to mounted file systems. The “sb 0” command selects the superblock from AG 0. “print” has a built-in template to automatically parse and display the superblock information.

Inode and Block Addressing

Typically XFS metadata uses “absolute” addresses, which contain both AG information and a relative offset from the start of that AG. This is what we find here in the superblock and in directory files. Sometimes XFS will use “AG relative” addresses that only include the relative offset from the start of the AG.

While XFS typically allocates 64 bits to hold absolute addresses, the actual size of the address fields varies depending on the size of the file system. For block addresses, the number of bits in the “AG relative” portion of the address is the log2(AG size) value found in superblock byte 124. In the example superblock, this value is 22. So the lower 22 bits of the block address will be the relative block offset. The upper bits will be used to hold the AG number.

The first block of the file system journal is at address 0x800004. Let’s write that out in binary showing the AG and relative block offset portions:

     0x800004   =    1000 0000 0000 0000 0000 0100
AG# in upper 2 bits---/\---22 bits of relative block offset

So the journal starts at relative block offset 4 from the beginning of AG 2.

But where is that in terms of a physical block offset? The physical block offset can be calculated as follows:

(AG number) * (blocks per AG) + (relative block offset)
     2      *    2427136      +         4   =    4854276
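Using the geometry from our superblock (log2(AG size) = 22, and 2,427,136 blocks per AG), the same conversion in Python looks like:

```python
AGBLKLOG  = 22        # log2(AG size), superblock byte 124
AG_BLOCKS = 2427136   # blocks per AG, superblock bytes 84-87

def fsblock_to_physical(fsblock):
    """Split an absolute block address into (AG number, relative offset)
    and compute the physical block offset from the start of the device."""
    ag  = fsblock >> AGBLKLOG
    rel = fsblock & ((1 << AGBLKLOG) - 1)
    return ag, rel, ag * AG_BLOCKS + rel

print(fsblock_to_physical(0x800004))   # journal start: (2, 4, 4854276)
```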

We could perform this calculation on the Linux command line and use dd to extract the first block of the journal:

[root@localhost XFS]# dd if=/dev/mapper/centos-root bs=4096 \
       skip=$((2*2427136 + 4)) count=1 | xxd
0000000: 0000 0021 0000 0000 6901 0000 071a 4dba  ...!....i.....M.
0000010: 0000 0010 6900 0000 4e41 5254 2800 0000  ....i...NART(...

Inode addressing is similar. However, because we can have multiple inodes per block, the relative portion of the inode address has to be longer. The length of relative inode addresses is the sum of superblock bytes 123 and 124– the log2 value of inodes per block plus the log2 value of blocks per AG. In our example this is 3+22=25.

The inode address of the root directory isn’t a very interesting example– it’s just inode offset 64 from AG 0. For a more interesting example, I’ll use my /etc/passwd file at inode 67761631 (0x409f5df). Let’s take a look at the bits:

     0x409f5df   =    0100 0000 1001 1111 0101 1101 1111
  AG# in upper 3 bits---/\---25 bits of relative inode

So the /etc/passwd file uses inode 0x9f5df (652767) in AG 2.

Where does this inode physically reside on the storage device? The relative block location of an inode in XFS is simply the integer portion of the AG-relative inode number divided by the number of inodes per block. In our case this is 652767 div 8, or block 81595. The inode offset in this block is 652767 mod 8, which equals 7.
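The inode address arithmetic can be sketched in Python using our file system’s geometry (log2(inodes per block) = 3 from superblock byte 123, log2(AG size) = 22 from byte 124):

```python
INOPBLOG = 3                      # log2(inodes per block), sb byte 123
AGBLKLOG = 22                     # log2(AG size), sb byte 124
INO_BITS = INOPBLOG + AGBLKLOG    # bits in the AG-relative inode number

def decode_inode(ino):
    """Split an absolute inode number into (AG, AG-relative inode,
    AG-relative block, inode slot within that block)."""
    ag    = ino >> INO_BITS
    agino = ino & ((1 << INO_BITS) - 1)
    return ag, agino, agino >> INOPBLOG, agino & ((1 << INOPBLOG) - 1)

print(decode_inode(67761631))   # /etc/passwd: (2, 652767, 81595, 7)
```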

Now that we know the AG and relative block number for this inode, we can extract it as we did the first block of the journal. We can even use a second dd command to extract the correct inode offset from the block:

[root@localhost XFS]# dd if=/dev/mapper/centos-root bs=4096 \ 
                              skip=$((2*2427136 + 81595)) count=1 | 
                      dd bs=512 skip=7 count=1 | xxd
0000000: 494e 81a4 0302 0000 0000 0000 0000 0000  IN..............
0000010: 0000 0001 0000 0000 0000 0000 0000 0000  ................

Note that the xfs_db program can perform address conversions for us. However, in order to use xfs_db it must be able to attach to the file system so that it has the correct length for the AG relative portion of the address. Since this may not always be possible, knowing how to manually convert absolute addresses is definitely a useful skill.

Here is how to get xfs_db to convert the block and inode addresses we used in the examples above:

[root@localhost XFS]# xfs_db -r /dev/mapper/centos-root
xfs_db> convert fsblock 0x800004 agno
0x2 (2)
xfs_db> convert fsblock 0x800004 agblock
0x4 (4)
xfs_db> convert inode 67761631 agno
0x2 (2)
xfs_db> convert inode 67761631 agino
0x9f5df (652767)
xfs_db> convert inode 67761631 agblock
0x13ebb (81595)
xfs_db> convert inode 67761631 offset
0x7 (7)

The first two commands convert the starting block of the journal (xfs_db refers to absolute block addresses as “fsblock” values) to the AG number (agno) and AG relative block offset (agblock). We can also use the convert command to translate inode addresses. Here we calculate the AG number, AG relative inode (agino), the AG relative block for the inode, and even the offset in that block where the inode resides (offset). The values from xfs_db match the values we calculated manually above. You will note that we can use either hex or decimal numbers as input.

Now that we can locate file system structures on disk, Part 2 of this series will focus on the XFS inode format. I hope you will return for the next installment.

Advice to Recruiters

Like many tech workers, I regularly get inquiries from recruiters. Lately, these inquiries seem to be coming to me via LinkedIn for the most part… and let’s just say that the quality of most of these leads is extremely dubious. Judging by feedback I’ve received on Twitter, my colleagues in the tech industry are just as frustrated by this as I am.

When I suggested trying to educate recruiters to help them do a better job, my friends pointed out to me that recruiting tends to be a high-turnover business. We could spend a significant amount of time educating one batch of recruiters, only to have to do it all over again later. So I thought I might jot down some notes to recruiters here on my blog, if only so that I have to say these things just once.

It’s not just about matching keywords. I’m known for my Perl programming and I have the keyword “Selenium” on my LinkedIn profile. But even a casual glance at my profile would tell you that I’m neither interested in nor a good fit for a Senior SQA position on your decades-old Perl-based web framework. Similarly, it’s clear from my profile that I’ve been an independent consultant for 15+ years, so I’m unlikely to be interested in full-time employment with your gigantic software company.

Do your homework. Please respect my time, and take a moment to really understand the position you’re trying to fill and the people you’re trying to put there. The best recruiters I’ve worked with understand their own business and the industry they’re working in, and are looking to build relationships for the long haul.

No job description = no response. If you contact me about an “exciting” job opportunity with your firm, but don’t include the job description (or link to one), I’m just going to assume you’re trawling for resumes. I need to evaluate for myself if I think the opportunity is “exciting”. To expect me to respond sight unseen is again disrespectful of my time.

If you just want to leverage my Rolodex, tell me. I get it. I’ve been in the industry a long time, and do work that tends to bring me into contact with lots of different people. And I’m perfectly happy to refer interesting jobs to friends who I think the position is suitable for. But don’t play games with me. Be up-front and say, “This job isn’t right for you but I was hoping you might know somebody who it is appropriate for.” That’s a reasonable and professional request. And I will honestly consider it, and try to fire it off to my “network” of friends, and let you know that I’ve done so.

I’m not going to do your job for you. But if you keep coming back to me over and over again for referrals (especially for positions unrelated to my fields of expertise), or keep bothering me for follow-up after I’ve put your opening out to my network, I’m going to start blocking your messages. If I wanted to be a recruiter, I’d be doing it right now. Again, respect my time. Say, “Thanks for the referral!”, and start following up on those leads yourself.

I believe recruiting is an honorable profession, and a benefit to our industry if done well. Many of my colleagues would love to build a relationship with a recruiter who could help them through all phases of their careers. So please consider the advice above in a constructive frame of mind. I welcome feedback from both recruiters and candidates (and employers!) in the comments.

Getting Started in InfoSec… Or Any Other Career

Lately I’ve received several requests for advice on breaking into the InfoSec field.  I find myself repeating the same advice over and over, so I thought I’d post my thoughts here on Righteous IT to save time (at the risk of turning this into a career advice blog).

What Others are Writing

“Breaking into InfoSec” has been a hot topic in the community lately, and several authors are writing eloquently on this topic.  Rather than repeating their good advice, let me just throw out some important links to read.

Every Tuesday, Lee Kushner and Mike Murray provide solid InfoSec career guidance in “Career Advice Tuesday” at the Information Security Leaders blog.  One oft-repeated piece of advice in their blog is to develop a “career plan” for where you want to be with at least a five-year time horizon.  While no plan survives contact with the enemy, having a plan means that you’re moving forward in a purposeful direction rather than just wandering at random.

Bruce Schneier recently posted “So You Want to Be a Security Expert” on his blog.  I’m a firm believer in his “Study… Do… Show” mantra.  Bruce gives a specific shout-out to security certifications, which are indeed useful for demonstrating a certain level of knowledge in a general discipline.  But I wish that more people starting their careers put at least as much effort into doing research in their own areas of interest and writing blog posts, talks, and code to document what they’ve done.  This is how we grow as an industry and incidentally it also shows potential employers something that distinguishes you from all the other “highly certified” professionals you’ll be competing against for jobs.

That Bruce Schneier article is part of a larger series of interviews with various InfoSec professionals on how to break into the InfoSec field, which is being created by Brian Krebs over at Krebs on Security.  Brian’s blog is normally some great coverage of recent happenings in the Cyber Crime world, but these (often first-person) accounts of how to get started in InfoSec have been really interesting.

Similarly, Eric J. Huber has been running a series of enlightening interviews with leading lights in the field of Digital Forensic Investigation on his Fistful of Dongles blog.  Somehow he became momentarily confused and also included me in this series.  But apart from that oversight, these interviews always include interesting information on how to get started in the field.

If you’re paying attention, you’ll notice that all of this advice is equally applicable to getting into any field.  There are no magic tricks for getting started on an InfoSec career path that are different from any other career path.  The corollary to that realization is that any of the classic career guidance books (from “What Color is Your Parachute” to now) can be helpful when you’re getting started in InfoSec or any other career.

It’s All About Your Network

When people ask me for career guidance, the one point that I emphasize repeatedly is that personal connections– your “network” of friends and colleagues– control your career destiny more than any other single factor.  Every good job I’ve ever had, whether as a full-time employee or as a consultant, has come through personal connections.

When you’re just starting out your career, you’re also starting to create your professional network.  This process begins during your educational history.  The contacts you cultivate during college and grad school– both fellow students as well as faculty and administration– are at least as important as what you learn from your books and professors.

Many of you reading this may not have been fortunate enough to attend college, or your college days are long past.  And even the people who did start to build their network in school need to continue building their networks after they leave their educational womb.  You need to constantly be on the lookout for opportunities and venues to meet other people and create a robust, living network.

An important part of your personal network comes from your on-the-job friends and co-workers.  If your employer sends you for training, part of your job at that training event is to make useful contacts with other people in the room.  If they’re at the same training event with you, they’re almost certainly part of the same field and will be great people to interact with in the future– whether that’s getting help with a problem you’re stuck on or finding a new job.

But also look around your area for regular meetings of different groups and invest the time to attend the meetings.  This could be anything from a Security BSides event, to a SAGE or LOPSA local group, or an ISACA or ISSA chapter meeting, or even Toastmasters.  InfraGard may have an active chapter in your area.  SANS often has a “community night” associated with its conferences which you can attend for free and network with other people in your area.

Don’t have a local group in your area?  Go start one!  Try using LinkedIn to search for other IT and InfoSec professionals in your area and reach out to them.  It doesn’t have to be anything formal.  Just meet for dinner/drinks every month and talk about your experiences and research projects.

Social networking has become an extraordinary resource for reaching out and networking with other InfoSec professionals.  While it will never fully replace face-to-face interactions, “knowing” somebody by interacting with them first via Twitter, LinkedIn, or Facebook can get you past the awkward chit-chat phase when you finally do meet them in real life.  And it can help you engineer those meetings when you’re in the same geographic region.

When you come into an established group for the first time, I urge you to sit back and just listen for the first couple of meetings.  Figure out who the “players” in the group are and get a feel for the “social norms” and nuances in the new group.  You’ve probably had the experience of boorish newcomers coming in and making a pain of themselves in groups that you’re already a member of.  Don’t be “that guy”.

Instead you want the group to recognize your positive contributions.  That could be anything from providing helpful summaries of information provided at the meeting, to helping with setup and tear-down at meetings, to providing food and beverage, to providing additional links that are relevant to the meeting’s focus, to contributing your own research and presentations.  Even just making new people (like you) feel welcome and accepted is a valuable contribution!

Small Fish, Big Pond

If there aren’t currently any gatherings for professional InfoSec people in your area, and you’re having trouble tracking people down on LinkedIn to start your own gathering, then this may be a sign that you’re in the wrong geographic location.  Being the biggest fish in your small pond may be comfortable, but you need to put yourself in an uncomfortable situation in order to grow.

You need to be in a situation where you’re constantly being exposed to new information and new ways of doing things.  You might think you’re getting this from reading articles and blogs on the Internet.  But you really need people around you who will push you to improve your game.  If you’re on your own reading about new technology on-line it’s easy to think, “That’s cool, I should look into that.” But meeting up with your InfoSec pals every month will do more to push you into actually doing that research than anything else.

When you’re learning on your own it’s easy to have “blind spots” and miss out on important information.  While social media can help with this somewhat, it’s not a replacement for being in a room with a group of like-minded folks who are bouncing ideas and solutions off one another at a rapid rate.

Being in the right geographic location also provides more job opportunities, which also translates to more “interesting” job opportunities.  Feel like you’ve topped out at your current job and aren’t being challenged?  Things are much easier if your next job doesn’t require you to move your home.

But how do you get moved to the “big pond”?  In my case, I took a pretty lousy job for a year because the job was willing to relocate me to the Silicon Valley.  Remember that advice about having a “career plan”?  It’s a lot easier to take a lousy job for a year if you view it as a step on the road to the career you want.  During that year, I was busily getting plugged into various tech groups in the local scene, and by the end of the year it was almost embarrassingly easy for me to step into my next job, which was a lot of fun. The things I learned during my 12 years in the Silicon Valley were instrumental in shaping my career and massively increasing my knowledge-base.  And the friends and contacts I made during that period are still with me today.

So pay your dues if you have to, but get yourself to one of the big high-tech centers: Silicon Valley, New York, Washington D.C., or Seattle.  You may never be a “big fish” in any of these places, but you’ll be better for having had the experience.

Consulting (Part 9) – Knowing When to Say When

Taking time off is important for maintaining your health and sanity.  But as a consultant, it’s easy to feel that time not spent billing is wasted time.  And forcing yourself to take time off when you feel like you should be billing– or when you feel like you should be looking for your next assignment– is almost as bad as not taking time off at all.

The trick that I use is to set a reasonable billing goal for the year, and when I reach that goal I simply stop billing.  Instead I shift over to “fun” projects that I’ve had to put off because of my work and travel schedule.  Or Laura and I travel to fun places together as a vacation, which is very different from the business travel I do normally.  Or I just “veg out” and read a book or play computer games.  The timing on this strategy usually works out well, since I commonly meet my billing goal late in the year when it’s typically hard to drum up new business: both because of the holiday schedule and the lack of budget at the end of the year for my potential clients.

Setting the Goal

The key is that when you reach your billing goal you have to be at a place where you don’t feel like you need any more money to get you through the remainder of the year.  That means you have to have billed enough money so that you can pay yourself enough to cover your annual expenses for a year and the standard taxes that accrue on that income.  You should also have billed enough to cover any overhead costs related to running your business for the year.

One mistake that I made early in my consulting career was forgetting to factor in large annual costs like my property taxes and annual homeowners insurance bill.  One option would be to pay these costs on a monthly basis so that you can more easily factor them into your monthly expenses.  However, there’s usually an extra fee for doing so.  My solution is to plan as if the year were 13 months long instead of 12 and use the “extra” month of salary to cover the heavy annual expenses that appear at the end of the year.

So now you’ve hopefully got a figure that covers your gross salary needs and business expenses, but what about retirement planning?  You’d better be saving some money unless you plan on working until the day you die (hint: this is not a good plan).  Look into ways that you can invest some of your gross billing into a pre-tax retirement plan, and be sure to talk to your accountant about how much your company is allowed to contribute in “matching funds” in order to maximize the amount you are allowed to invest each year.  Then build the maximum allowable amount into your billing goal.

Have you been forced to eat into your “six months of burn rate” savings plan?  If so, then you’d better build your reserves back up to full before you quit billing for the year.  Downtime comes when you least expect it.  And you might need that float to carry you for a little bit when you start billing again in the new year.

Similarly, if you’ve been deferring any maintenance on big ticket items, like your automobile or property, make sure you plan on funding those repairs.  While some work can be delayed for months during bad times, putting these issues off too long will eventually cost you more money.

Now how about funding what you’re going to do during your downtime?  Maybe you want to take a trip someplace nice.  Factor in the extra cost for travel and/or any special expenses you’re planning to accrue during your time off.  Make sure your billing goal covers those costs.

So your billing goal then is the sum of several factors:

  1. Gross salary to cover personal expenses (including large annual costs)
  2. Money to cover expenses associated with your business
  3. Retirement funding
  4. Any money necessary to rebuild your “rainy day” savings
  5. Deferred maintenance costs
  6. Special costs associated with vacation or other plans

If you get to a place where you can cover all of the above costs, then that’s a good place to stop billing.  And you’ll be able to enjoy your downtime and not stress about needing to make more money.
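The six factors above are just arithmetic once you’ve estimated each one.  Here’s a minimal Python sketch of the calculation, folding in the “13-month year” trick for heavy annual expenses.  Every figure below is invented purely for illustration; plug in your own numbers.

```python
# Hypothetical billing-goal calculation -- all dollar amounts are made up.

monthly_expenses = 6_000       # personal living costs per month
months = 13                    # plan for 13 "months" to absorb big annual bills
gross_salary = monthly_expenses * months

tax_rate = 0.35                # rough combined tax estimate on that salary
taxes = gross_salary * tax_rate

business_overhead = 9_000      # insurance, accountant, data service, etc.
retirement = 17_000            # pre-tax retirement contribution target
rainy_day_refill = 12_000      # rebuild the "six months of burn rate" savings
deferred_maintenance = 4_000   # car repairs, house projects put off earlier
vacation = 5_000               # special downtime plans

billing_goal = (gross_salary + taxes + business_overhead +
                retirement + rainy_day_refill +
                deferred_maintenance + vacation)

print(f"Annual billing goal: ${billing_goal:,.0f}")
```

Divide the result by your billing rate and you know how many hours you need to bill before you can stop for the year.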

The only other factor to consider is the long-term financial outlook.  Right now, I’m personally very pessimistic about the global economy and am expecting another significant downturn in early 2013 (based on my assumption that nobody’s going to let the economy splatter before the US Presidential elections conclude in November of 2012).  So I’m going to bill as much as I can in the next year in order to have a “war chest” against future bad times.

Advantages of Pre-Planning

There’s an additional benefit to setting a billing goal besides getting yourself to a place where you feel OK taking time off.  The billing goal clearly focuses you on how many hours you need to bill and at what rate in order to get to your “happy place”.  One of the side-effects of this process is often the realization that you need to find a way to increase your billing rates.

I strongly recommend you sit down and plan a billing goal at the beginning of each fiscal year.  It focuses your efforts by giving you a target to shoot for.  And it improves your mental health by allowing you to take time off without stress.

Wrapping Up This Series

With this article I’ve covered everything that I wanted to say based on my experience as an independent consultant.  I hope you’ve found the advice useful in your own decision making and planning.  Thanks for sticking with me!

If you have questions that you feel I haven’t fully addressed, please feel free to leave them in the comments and I’ll be happy to respond.  Who knows?  Perhaps your question will prompt me to add another full blog post in the series.

Consulting (Part 8) – Avoiding Overhead

Earlier in this series of articles on consulting, I mentioned that my wife and I run our business from our home.  This prompted one reader to ask why we didn’t have separate office space.  The short answer is because office space is pure overhead.  Every month that office space is going to cost me the same amount of money, whether I’m actually billing or not.  And if I’m in the middle of a lease, I may not be able to shed that overhead as rapidly as I’d like.

During bad periods, extra overhead is the weight that drags you down.  I’ve previously discussed having six months of “burn money” saved up to help you through these periods.  One way to make your savings go further is to take a hard look at what recurring costs are necessary, and which can be done away with.

When I think about the absolute necessities for keeping our business going, the list is really very short:

  • Internet/Data Service — Obviously, we run a high tech business and Internet and data connectivity are a must.  Over the years, the most cost-effective solution that I’ve found is to use an inexpensive residential data plan for our home offices and have a dedicated server at a colocation facility acting as the “public” face of Deer Run.  I get better availability and more throughput by having the server at the colo, and “residential service + colo” is still less per month than the expensive “business level” service plans offered by my local data providers.
  • Telephony — These days, cell phones are a must.  We also have a POTS line from our local telco which is the main number for Deer Run.  Extravagantly, we also have a separate POTS line for our fax machine.  While I enjoy having the POTS lines around as a backup, the reality is that they’re not at all necessary.  If we ever do another move that requires us to print new business cards and letterhead, I would likely drop this service.
  • Insurance — An earlier article in the series talks about insurance issues.  We carry our general liability coverage and an extra rider on our homeowners policy to cover the replacement cost of our computer systems in the event of a catastrophic event like a fire.  Do not try to skimp on this.
  • Accountant/Attorney — Also do not skimp on legal and financial advice.  The good news about these costs is that they only tend to accrue when you’re actively working.  If business drops off, then you won’t need these services as much, other than perhaps year-end tax preparation.
  • Taxes — Related to the above, make sure you pay all of the taxes that you owe.  The out-of-pocket and opportunity costs related to dealing with an audit, back taxes, and penalties are significant.

From a business perspective, anything else falls into the “unnecessary overhead” category in my world.  Think long and hard before taking on any additional recurring costs besides those listed above.

We’ve managed to keep our business going through two major economic downturns where business was scarce for 6-12 months.  In both cases, we weathered the storm by dialing our expenses down to the bare minimum and deferring maintenance that was not absolutely necessary until the economic outlook improved.  In this “hibernation” mode, we were able to turn our “six months” of savings into enough money to get us through an entire year.

Office Space

Since I started this post by mentioning the office space issue, I did want to note that even though we work from home we do have dedicated office space.  In fact, three of the four bedrooms in our home comprise the “World Headquarters” of Deer Run Associates.  Laura and I each have separate offices with doors that close– a must for when we’re both working from home.  We also have a third “overflow office” for visitors which also holds our server and networking equipment along with our paper files and other storage.

You are allowed to claim a deduction for the office space used for a home-based business.  You must be careful to only use the office space for business and not for personal reasons.  I will also note that the IRS has sent unannounced representatives to visit both our California and Oregon offices to verify that the office space was being used as we claimed.  While the IRS agents were unfailingly polite, I was also glad that our office space looked as professional as could be when they arrived.

Consulting (Part 7) — Work? What Work?

The last two installments in this series of articles on consulting have focused on how to go about finding work.  But one of the important questions every consultant should consider is who their target customer is.  The answer to this question affects how you go about selling your services.  For example, if you decide that providing services to the legal industry is the way you want to go, then you should look at writing articles for Bar Association journals and speaking at legal conferences.  Also, when you get inquiries about possible consulting engagements, having a clear answer to the “Where do I want to be working?” question can help you decide which engagements are worth pursuing and which you should no-bid.

However, the simple question of how to target your services has several facets that should all be considered.  Let’s walk through the major ones.

What Are You Selling?

In the last installment I talked about picking an area of expertise.  But even within that specialty there are sub-disciplines and specializations that you should consider when trying to determine the “sweet spot” for your perfect consulting engagement.  For example, consider the field of Digital Forensics that I’m currently working in.  Under that broad umbrella are people who do incident response, traditional hard drive forensics, media exploitation, mobile forensics, malware analysis, e-discovery, and other specialties.  Having a clear focus on which area you prefer to work in allows you to more clearly articulate your marketing message.  It will guide the sorts of publications and presentations you want to be known for and help you narrow down the 30 second “elevator pitch” you want to give to potential clients.

Which is not to say that you should only do work in a specific niche market.  It pays to take on jobs outside of your comfort zone which can stretch your capabilities and force you to learn new skills.  Expertise in a particular area will get you a job, but a broad base will allow you to have a long-term and prosperous consulting career.

What Level of Work?

The other aspect of homing in on your specific offering as a consultant is determining what level of work you are targeting.  For example, when I first started my practice doing general IT and InfoSec operations, there seemed to be essentially infinite amounts of work for basic day-to-day system and network administration.  But I wanted to do more interesting/challenging “big” infrastructure-level architecture and deployment work.  There were far fewer of those jobs to be found, and actually landing them took more work.

You can visualize this as a pyramid.  At the base of the pyramid is a large group of potential clients who need basic “block and tackle” type services.  Since there’s little differentiation in service offerings and a larger pool of potential suppliers, rates are lower.  But since there’s a large amount of available work, there’s less overhead for “downtime” between jobs.  As you move up the pyramid the jobs get more interesting, and the number of people who can provide the service goes down, so billing rates go up.  But the number of interesting, high bill rate jobs becomes smaller and smaller.

You’re trying to find the spot that maximizes your billing rate while minimizing the time you spend looking for your next engagement– and, of course, one that provides work you’re interested in doing.  There are plenty of people making large amounts of money as PCI assessors, but that’s not work that I would personally ever want to do.

What Industry Do You Want To Work In?

Some people like the fast-paced, high-pressure Wall Street environment.  Others like working with Law Enforcement.  Some find the Federal Government a comfortable niche.  The question of who you work for is intimately tied up with the services that you offer.  Some consultants identify the industry they want to work for first– because a given sector may be perceived as more stable and/or have more money to spend– and then try to figure out what services to offer to that industry and how to sell them.

When you’re first starting out, it’s typically easiest to provide consulting services to the industry where you got your experience as a full-time employee.  You will have a better “network” of contacts in that industry and be more familiar with the problems and needs of your potential clients.

But once you get your legs under you as a consultant, it may be worthwhile to investigate other industries and see whether you might find interesting work and higher billing rates elsewhere.  Start inviting people in your target industry to lunch, and really listen to where their pain points are.  Be frank about asking what service offerings you could provide that would most help them.

Where’s the Work?

Once you’ve identified an industry and a service offering to provide, it’s worth considering where the highest concentrations of that kind of work are located.  Wanting to do IT infrastructure work for high-tech companies, I moved to the Silicon Valley, which had the highest density of that kind of work available.  This increased my pool of potential jobs and reduced the amount of overhead I needed to invest in travelling to my work site.

But it’s worth noting that, twelve years after moving to Silicon Valley, I ended up moving to a smaller community in Oregon for “quality of life” reasons.  There are all kinds of criteria that go into the decision about where to live– family ties, cost of living, access to healthcare, recreation, etc.  Consulting has always been a “lifestyle” business for me, so the demands of your business shouldn’t be the overriding factor in determining your location.

Of course, your location may limit the choices of available work and the industries you work in.  Or you can pursue the path that I have and spend a lot of time on the road travelling to various client sites.  You may have to balance your desire to live in a particular location against the prospect of spending most of your time going someplace else to find work.

Who Are You Selling To?

The next question is where to direct your sales pitch.  Some services get sold to C-level staff in the Boardroom– audit, compliance, and data mining are examples.  Technical services get sold to technicians and technical management at a lower level of the hierarchy.  The higher up the food chain you go, the higher billing rates you can typically command, but the longer the sales cycle is going to be, meaning more overhead.

Different level sales require different language and presentation.  Your target market also determines where you spend your time getting noticed.  C-level execs read very different media and frequent different meeting venues from technical folks in the trenches.

Ultimately, this one often comes down to your comfort zone.  I like being hands-on with technology and talking about technical topics with other like-minded people.  So it’s most natural for me to sell specialized technical services to my peer group.  But perhaps I might command a higher billing rate if I sold my services to their CEOs.

Short Term or Long Term?

Are you excited about taking on a lot of smaller, tactical jobs in rapid succession or tackling a bigger project with a longer time-frame?  The one obvious advantage to longer-term engagements is that there’s less overhead involved in finding work and getting up to speed on your role.  Plus you get a chance to do “bigger” projects and really get into watching the life-cycle and evolution of your work.

Or boredom could set in.  Or you could find the technical skills you’re not using starting to atrophy.  The nice thing about shorter-term contracts is you can really get a wide variety of marketable experience in a relatively short amount of time.

The type of work you’re doing may determine the length of a typical contract.  For example, if you’re doing Incident Response work, there are no long-term engagements.  You ride in, clean up the town, and then hand things over to the local sheriff.  Unless you want to become the local sheriff, of course.  But at that point you won’t be doing IR consulting anymore, you’ll be doing operational InfoSec work.

You Don’t Have To Have All The Answers!

The more you can think about these questions before starting your consulting practice, the less overhead you’ll tend to have looking for work.  But you don’t have to have specific answers nailed down for each question.  And you shouldn’t limit yourself when you’re first starting out anyway.  Stumble around a little bit and try out different types of engagements.

But every year sit down and think about your past work history.  Which engagements were the most fun and interesting? Which were the most comfortable working environments? Which were the most lucrative?  Then think about the best engagements in the context of the questions discussed above and try to home in on the best kinds of work for you and the right industries, locations, and people to sell that work to.