xorl %eax, %eax

HFS+ read_mapping_page() DoS


Eric Sesterhenn found this vulnerability and reported it to the Linux Kernel Mailing List in August 2008. The bug affects Linux kernels up to the 2.6.28-rc1 release and is located in fs/hfsplus/bitmap.c. The following code snippets are taken from Linux kernel 2.6.27; here is the vulnerable function:

18 int hfsplus_block_allocate(struct super_block *sb, u32 size, u32 offset, u32 *max)
19 {
20         struct page *page;
21         struct address_space *mapping;
22         __be32 *pptr, *curr, *end;
23         u32 mask, start, len, n;
24         __be32 val;
25         int i;

As its name implies, this routine is used to allocate space for HFS+ data blocks. Since we control the contents of the filesystem, we can to some extent also control the block size that is allocated each time. The function continues like this:

27         len = *max;
28         if (!len)
29                 return size;

So, if the given maximum length is zero, the function immediately returns size. Note the small signedness issue here: the value being returned (size) is a u32 (unsigned 32-bit integer), but the function's return type is a signed int! I haven't checked whether this can trigger any vulnerability, but it might. Anyway, next we have:

31         dprint(DBG_BITMAP, "block_allocate: %u,%u,%u\n", size, offset, len);
32         mutex_lock(&HFSPLUS_SB(sb).alloc_file->i_mutex);
33         mapping = HFSPLUS_SB(sb).alloc_file->i_mapping;

Just a debugging printout, the acquisition of a mutex lock, and then the mapping pointer is initialized with the allocation file's address space mapping. Remember, these contents are more or less user controlled. Following:

34         page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS, NULL);
35         pptr = kmap(page);
36         curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32; 

The call at line 34 is the buggy one. But why? Let's examine it. It tries to read a page from the previously retrieved mapping, using the offset divided by the page cache bits as the page index. This function can be found at include/linux/pagemap.h:

246 static inline struct page *read_mapping_page(struct address_space *mapping,
247                                              pgoff_t index, void *data)
248 {
249         filler_t *filler = (filler_t *)mapping->a_ops->readpage;
250         return read_cache_page(mapping, index, filler, data);
251 }

It's just a wrapper around read_cache_page(). The latter is located at mm/filemap.c and is used to read a page into the page cache. If the page exists it fills it and waits for it to be unlocked; otherwise it returns an I/O error (-EIO). Now, go back to hfsplus_block_allocate() and think about what we just said. Line 34 can return an erroneous, negative value which is never checked! In addition, the next function using the page variable, which may contain an invalid value (because of the error code), is kmap(). This routine from arch/x86/mm/highmem_32.c is used to map a highmem page into memory, but of course it fails on an invalid page pointer. As you may have noticed, E. Sesterhenn's call trace was:

[15840.675016] BUG: unable to handle kernel paging request at fffffffb
[15840.675016] IP: [<c0116a4f>] kmap+0x15/0x56
[15840.675016] *pde = 00008067 *pte = 00000000
[15840.675016] Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC

If you're still not sure why it tried to access 0xfffffffb (you should be pretty sure by now), just print out the -EIO that read_mapping_page() would return, as I explained earlier. Here is a tiny dummy program that demonstrates what this strange page that kmap() tried to access in the above trace is:

#include <errno.h>
#include <stdio.h>

int main(void)
{
        printf("hex: %#x dec: %d\n", -EIO, -EIO);
        return 0;
}

And at runtime:

sh-3.2$ gcc io.c -o io
sh-3.2$ ./io
hex: 0xfffffffb dec: -5

It's obvious now why it crashed inside kmap(). To fix this, Sesterhenn wrote the following patch, which checks whether page holds an error code:

     page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS, NULL);
+    if (IS_ERR(page)) {
+        start = size;
+        goto out;
+    }
     pptr = kmap(page);

Of course this was a fairly trivial vulnerability, the result of common fuzzing, but I think it was worth this small post. As for exploitation, I find it really unlikely that one could achieve anything more than a denial of service. Since on every faulty allocation the return value would be -EIO, there is not much you can do.

Written by xorl

January 12, 2009 at 01:42

Posted in bugs, linux
