xorl %eax, %eax

Archive for the ‘vulnerabilities’ Category

CVE-2017-1081: FreeBSD ipfilter use-after-free


The SA-17:04 FreeBSD security advisory describes a logic flaw that results in a use-after-free situation in ipfilter. The vulnerability was reported by Cy Schubert and affects all FreeBSD releases prior to 11.0-STABLE, 11.0-RELEASE-p10, 10.3-STABLE, and 10.3-RELEASE-p19. Let’s have a look at sys/contrib/ipfilter/netinet/ip_frag.c to better understand the vulnerability.

/* ------------------------------------------------------------------------ */
/* Function:    ipfr_frag_new                                               */
/* Returns:     ipfr_t * - pointer to fragment cache state info or NULL     */
/* Parameters:  fin(I)   - pointer to packet information                    */
/*              table(I) - pointer to frag table to add to                  */
/*              lock(I)  - pointer to lock to get a write hold of           */
/*                                                                          */
/* Add a new entry to the fragment cache, registering it as having come     */
/* through this box, with the result of the filter operation.               */
/*                                                                          */
/* If this function succeeds, it returns with a write lock held on "lock".  */
/* If it fails, no lock is held on return.                                  */
/* ------------------------------------------------------------------------ */
static ipfr_t *
ipfr_frag_new(softc, softf, fin, pass, table
#ifdef USE_MUTEXES
, lock
#endif
)
    ...
	ipfr_t *fra, frag, *fran;
    ...
	/*
	 * allocate some memory, if possible, if not, just record that we
	 * failed to do so.
	 */
	KMALLOC(fran, ipfr_t *);
	if (fran == NULL) {
		FBUMPD(ifs_nomem);
		return NULL;
	}
    ...
	/*
	 * first, make sure it isn't already there...
	 */
	for (fra = table[idx]; (fra != NULL); fra = fra->ipfr_hnext)
		if (!bcmp((char *)&frag.ipfr_ifp, (char *)&fra->ipfr_ifp,
			  IPFR_CMPSZ)) {
			RWLOCK_EXIT(lock);
			FBUMPD(ifs_exists);
			KFREE(fra);
			return NULL;
		}

	fra = fran;
	fran = NULL;
	fr = fin->fin_fr;
	fra->ipfr_rule = fr;
	if (fr != NULL) {
		MUTEX_ENTER(&fr->fr_lock);
		fr->fr_ref++;
		MUTEX_EXIT(&fr->fr_lock);
	}
    ...
}

The above code snippet is from ipfr_frag_new(), which adds new fragmented packets to the fragment cache. The important part is the “for” loop that walks the hash chain of cached fragments. For each cached entry “fra”, bcmp() compares it against “frag”, the stack-based descriptor of the packet currently being processed. If an entry already matches, the code releases the lock, bumps the statistics counter, frees a buffer, and returns NULL; otherwise, after the loop, “fra” is pointed at the freshly allocated “fran” buffer and the new entry is stored there. Well, it’s hard to notice, but the free in the duplicate case is the logic flaw: it should have freed “fran” (the unused kernel buffer) rather than “fra”, which points to the live cache entry it just matched against. As you can easily guess, the patch is pretty straightforward.

 			RWLOCK_EXIT(lock);
 			FBUMPD(ifs_exists);
-			KFREE(fra);
+			KFREE(fran);
 			return NULL;

There are a few different code paths that can end up accessing the freed “fra”. One is the ipf_frag_lookup() function, which searches the fragment cache for entries matching the requested packet along with its filter result. You can see how this can cause issues below.

static ipfr_t *
ipf_frag_lookup(softc, softf, fin, table
#ifdef USE_MUTEXES
, lock
#endif
)
	ipf_main_softc_t *softc;
	ipf_frag_softc_t *softf;
	fr_info_t *fin;
	ipfr_t *table[];
#ifdef USE_MUTEXES
	ipfrwlock_t *lock;
#endif
{
    ...
	/*
	 * check the table, careful to only compare the right amount of data
	 */
	for (f = table[idx]; f; f = f->ipfr_hnext) {
    ...
	return NULL;
}

A second code path that can lead to a use-after-free is via the ipf_slowtimer() function from sys/contrib/ipfilter/netinet/fil.c. This function is used to slowly expire fragment state.

void
ipf_slowtimer(softc)
	ipf_main_softc_t *softc;
{

	ipf_token_expire(softc);
	ipf_frag_expire(softc);
	ipf_state_expire(softc);
	ipf_nat_expire(softc);
	ipf_auth_expire(softc);
	ipf_lookup_expire(softc);
	ipf_rule_expire(softc);
	ipf_sync_expire(softc);
	softc->ipf_ticks++;
#   if defined(__OpenBSD__)
	timeout_add(&ipf_slowtimer_ch, hz/2);
#   endif
}

The call of interest for this vulnerability is the one to ipf_frag_expire(), which is designed to expire entries from the fragment cache table. To achieve this, ipf_frag_expire() uses the internal ipf_frag_delete() routine shown below.

static void
ipf_frag_delete(softc, fra, tail)
	ipf_main_softc_t *softc;
	ipfr_t *fra, ***tail;
{
	ipf_frag_softc_t *softf = softc->ipf_frag_soft;

	if (fra->ipfr_next)
		fra->ipfr_next->ipfr_prev = fra->ipfr_prev;
	*fra->ipfr_prev = fra->ipfr_next;
	if (*tail == &fra->ipfr_next)
		*tail = fra->ipfr_prev;

	if (fra->ipfr_hnext)
		fra->ipfr_hnext->ipfr_hprev = fra->ipfr_hprev;
	*fra->ipfr_hprev = fra->ipfr_hnext;

	if (fra->ipfr_rule != NULL) {
		(void) ipf_derefrule(softc, &fra->ipfr_rule);
	}

	if (fra->ipfr_ref <= 0)
		ipf_frag_free(softf, fra);
}

Due to the logic error in ipfr_frag_new(), the above could end up unlinking and operating on an entry that has already been freed. An interesting vulnerability, and one that is kind of hard to catch with static code analysis.

Written by xorl

November 19, 2017 at 18:30

Posted in vulnerabilities

CVE-2017-15306: Linux kernel KVM PowerPC NULL pointer dereference


This vulnerability was reported by Greg Kurz in September 2017. It is specific to PowerPC KVM (Kernel-based Virtual Machine), whose code lives in the arch/powerpc/kvm/powerpc.c source code file. Below is the vulnerable IOCTL (Input/Output Control) handler from this file.

int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
    ...
	/* Assume we're using HV mode when the HV module is loaded */
	int hv_enabled = kvmppc_hv_ops ? 1 : 0;

	if (kvm) {
		/*
		 * Hooray - we know which VM type we're running on. Depend on
		 * that rather than the guess above.
		 */
		hv_enabled = is_kvmppc_hv_enabled(kvm);
	}
    ...
	switch (ext) {
    ...
	case KVM_CAP_PPC_HTM:
		r = cpu_has_feature(CPU_FTR_TM_COMP) &&
		    is_kvmppc_hv_enabled(kvm);
		break;
    ...
}

What Greg Kurz discovered was that when the file descriptor of the global KVM (that is, /dev/kvm) is used, the “kvm” pointer is NULL. The first “if” condition is therefore not taken, but if the IOCTL is called with “KVM_CAP_PPC_HTM”, is_kvmppc_hv_enabled() is invoked with the NULL “kvm” pointer, resulting in a NULL pointer dereference. You can see the equivalent code from arch/powerpc/include/asm/kvm_ppc.h here.

static inline bool is_kvmppc_hv_enabled(struct kvm *kvm)
{
	return kvm->arch.kvm_ops == kvmppc_hv_ops;
}

extern int kvmppc_hwrng_present(void);

Greg Kurz also included a simple PoC trigger code that basically opens the global KVM file descriptor and then calls the above IOCTL with the “KVM_CAP_PPC_HTM” extension. You can see that PoC code below.

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int
main(void)
{
    int fd = open("/dev/kvm", O_RDWR);
    ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HTM);
    return 0;
}

As you can easily guess, the patch for this vulnerability was to reuse the “hv_enabled” variable, which holds the same value but only dereferences “kvm” when it is not NULL, as we saw above. You can see the patch here.

 	case KVM_CAP_PPC_HTM:
-		r = cpu_has_feature(CPU_FTR_TM_COMP) &&
-		    is_kvmppc_hv_enabled(kvm);
+		r = cpu_has_feature(CPU_FTR_TM_COMP) && hv_enabled;
 		break;

Written by xorl

November 18, 2017 at 00:43

Posted in vulnerabilities

CVE-2017-16650: Linux kernel WWAN (3G/LTE/?) devices DoS


This is another vulnerability discovered by Google’s unsupervised kernel fuzzer (syzkaller) and reported by Andrey Konovalov to the Linux kernel project. The Linux kernel driver for WWAN (3G/LTE/?) devices uses QMI (Qualcomm MSM Interface) along with AT management commands, and its source code is located at drivers/net/usb/qmi_wwan.c.

static int qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf)
{
    ...
	struct usb_cdc_ether_desc *cdc_ether;
    ...
	struct usb_cdc_parsed_header hdr;
    ...
	/* and a number of CDC descriptors */
	cdc_parse_cdc_header(&hdr, intf, buf, len);
	cdc_union = hdr.usb_cdc_union_desc;
	cdc_ether = hdr.usb_cdc_ether_desc;
    ...
	/* errors aren't fatal - we can live with the dynamic address */
	if (cdc_ether) {
		dev->hard_mtu = le16_to_cpu(cdc_ether->wMaxSegmentSize);
		usbnet_get_ethernet_addr(dev, cdc_ether->iMACAddress);
	}
    ...
}

What you see above is the bind function for the WWAN/QMI devices. The value of “cdc_ether->wMaxSegmentSize” is derived from “hdr.usb_cdc_ether_desc”. CDC (Communication Device Class) is a USB device communication class and “usb_cdc_ether_desc” is the implementation of the “Ethernet Networking Functional Descriptor” from CDC 5.2.3.16 specification.

/* "Ethernet Networking Functional Descriptor" from CDC spec 5.2.3.16 */
struct usb_cdc_ether_desc {
	__u8	bLength;
	__u8	bDescriptorType;
	__u8	bDescriptorSubType;

	__u8	iMACAddress;
	__le32	bmEthernetStatistics;
	__le16	wMaxSegmentSize;
	__le16	wNumberMCFilters;
	__u8	bNumberPowerFilters;
} __attribute__ ((packed));

In the CDC 5.2.3.16 specification, “wMaxSegmentSize” is defined as “The maximum segment size that the Ethernet device is capable of supporting. This is typically 1514 bytes, but could be extended (e.g., 802.1d VLAN)”. However, setting this to 0 will result in a division by zero that will consequently cause a kernel crash as shown below, making this a kernel DoS vulnerability.

divide error: 0000 [#1] PREEMPT SMP KASAN
Modules linked in:
CPU: 0 PID: 24 Comm: kworker/0:1 Not tainted 4.14.0-rc8-44453-g1fdc1a82c34f #56
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: usb_hub_wq hub_event
task: ffff88006bef5c00 task.stack: ffff88006bf60000
RIP: 0010:usbnet_update_max_qlen+0x24d/0x390 drivers/net/usb/usbnet.c:355
RSP: 0018:ffff88006bf67508 EFLAGS: 00010246
RAX: 00000000000163c8 RBX: ffff8800621fce40 RCX: ffff8800621fcf34
RDX: 0000000000000000 RSI: ffffffff837ecb7a RDI: ffff8800621fcf34
RBP: ffff88006bf67520 R08: ffff88006bef5c00 R09: ffffed000c43f881
R10: ffffed000c43f880 R11: ffff8800621fc406 R12: 0000000000000003
R13: ffffffff85c71de0 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88006ca00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffe9c0d6dac CR3: 00000000614f4000 CR4: 00000000000006f0
Call Trace:
 usbnet_probe+0x18b5/0x2790 drivers/net/usb/usbnet.c:1783
 qmi_wwan_probe+0x133/0x220 drivers/net/usb/qmi_wwan.c:1338
 usb_probe_interface+0x324/0x940 drivers/usb/core/driver.c:361
 really_probe drivers/base/dd.c:413
 driver_probe_device+0x522/0x740 drivers/base/dd.c:557
---[ end trace 834034903d6f2b37 ]--- 

The vulnerability was that if “cdc_ether->wMaxSegmentSize” was zero, “dev->hard_mtu” would also be initialized to 0. Then, once usbnet_update_max_qlen() is invoked to update the queue lengths based on the device’s USB speed, it divides “MAX_QUEUE_MEMORY” by “dev->hard_mtu”, leading to a division by zero.

/*
 * Nineteen USB 1.1 max size bulk transactions per frame (ms), max.
 * Several dozen bytes of IPv4 data can fit in two such transactions.
 * One maximum size Ethernet packet takes twenty four of them.
 * For high speed, each frame comfortably fits almost 36 max size
 * Ethernet packets (so queues should be bigger).
 *
 * The goal is to let the USB host controller be busy for 5msec or
 * more before an irq is required, under load.  Jumbograms change
 * the equation.
 */
#define	MAX_QUEUE_MEMORY	(60 * 1518)
    ...
/* must be called if hard_mtu or rx_urb_size changed */
void usbnet_update_max_qlen(struct usbnet *dev)
{
	enum usb_device_speed speed = dev->udev->speed;

	switch (speed) {
	case USB_SPEED_HIGH:
		dev->rx_qlen = MAX_QUEUE_MEMORY / dev->rx_urb_size;
		dev->tx_qlen = MAX_QUEUE_MEMORY / dev->hard_mtu;
		break;
	case USB_SPEED_SUPER:
	case USB_SPEED_SUPER_PLUS:
		/*
		 * Not take default 5ms qlen for super speed HC to
		 * save memory, and iperf tests show 2.5ms qlen can
		 * work well
		 */
		dev->rx_qlen = 5 * MAX_QUEUE_MEMORY / dev->rx_urb_size;
		dev->tx_qlen = 5 * MAX_QUEUE_MEMORY / dev->hard_mtu;
		break;
	default:
		dev->rx_qlen = dev->tx_qlen = 4;
	}
}

As you can easily guess, the fix for this vulnerability was to simply check that “cdc_ether->wMaxSegmentSize” contains a non-zero value before updating “dev->hard_mtu” with that value. You can see the patch here.

 	/* errors aren't fatal - we can live with the dynamic address */
-	if (cdc_ether) {
+	if (cdc_ether && cdc_ether->wMaxSegmentSize) {
 		dev->hard_mtu = le16_to_cpu(cdc_ether->wMaxSegmentSize);
 		usbnet_get_ethernet_addr(dev, cdc_ether->iMACAddress);
 	}

Written by xorl

November 15, 2017 at 21:19

Posted in vulnerabilities

OpenSSH ‘sftp-server’ Remote Security Vulnerability


In the ChangeLog of OpenSSH 7.6 there is only one security fix, and I couldn’t find any CVE ID associated with it. It is basically a design flaw of sftp-server when running in read-only mode. In theory, read-only mode should not allow any write operations, but Michal Zalewski discovered that you can still create zero-length files.

 * sftp-server(8): in read-only mode, sftp-server was incorrectly
   permitting creation of zero-length files. Reported by Michal
   Zalewski.

The vulnerability was located in the sftp-server.c source code file and it affected all versions from OpenSSH 5.5 until 7.6, where it was fixed. If the SFTP server is started with the -R flag, it sets the “readonly” variable as you can see below. This indicates that clients are not allowed to perform any write operations.

int
sftp_server_main(int argc, char **argv, struct passwd *user_pw)
{
	...
	while (!skipargs && (ch = getopt(argc, argv,
	    "d:f:l:P:p:Q:u:cehR")) != -1) {
		switch (ch) {
	...
		case 'R':
			readonly = 1;
			break;
	...
}

File opening in the SFTP server is performed via the process_open() helper function, which implemented the read-only check as shown below.

static void
process_open(u_int32_t id)
{
	...
	if (readonly &&
	    ((flags & O_ACCMODE) == O_WRONLY ||
	    (flags & O_ACCMODE) == O_RDWR)) {
		verbose("Refusing open request in read-only mode");
		status = SSH2_FX_PERMISSION_DENIED;
	} else {
		fd = open(name, flags, mode);
	...
}

As you can see, if the “readonly” flag is enabled, the code checks whether the open flags request “write only” (O_WRONLY) or “read/write” (O_RDWR) access, and if so it errors out with “Refusing open request in read-only mode”. Otherwise, it opens the file using the open() system call. This looks correct, but it does not take into account other open() flags such as O_CREAT (create file) or O_TRUNC (truncate file), which are valid in combination with O_RDONLY. Those were the flags that M. Zalewski used to create zero-length files on a read-only SFTP server. To fix this, the OpenSSH developers changed the logic of the check to error out if the access mode is not O_RDONLY (read-only), or if the O_CREAT or O_TRUNC flags are included. You can see the patch below.

 	if (readonly &&
-	    ((flags & O_ACCMODE) == O_WRONLY ||
-	    (flags & O_ACCMODE) == O_RDWR)) {
+	    ((flags & O_ACCMODE) != O_RDONLY ||
+	    (flags & (O_CREAT|O_TRUNC)) != 0)) {
 		verbose("Refusing open request in read-only mode");

Written by xorl

November 13, 2017 at 23:08

Posted in vulnerabilities

CVE-2017-13090: wget file retrieval integer overflow


This is similar to CVE-2017-13089, with the only difference being that it leads to a heap-based buffer overflow instead of a stack-based one. Again, it was reported by Antti Levomäki, Christian Jalio, and Joonas Pihlaja of Forcepoint, as well as Juhani Eronen of the Finnish National Cyber Security Centre, and again it involves the signed integer “remaining_chunk_size”. This time, however, it is in the file retrieval code of GNU wget, located at src/retr.c.

/* Read the contents of file descriptor FD until it the connection
   terminates or a read error occurs.  The data is read in portions of
   up to 16K and written to OUT as it arrives.  If opt.verbose is set,
   the progress is shown.

   TOREAD is the amount of data expected to arrive, normally only used
   by the progress gauge.

   STARTPOS is the position from which the download starts, used by
   the progress gauge.  If QTYREAD is non-NULL, the value it points to
   is incremented by the amount of data read from the network.  If
   QTYWRITTEN is non-NULL, the value it points to is incremented by
   the amount of data written to disk.  The time it took to download
   the data is stored to ELAPSED.

   If OUT2 is non-NULL, the contents is also written to OUT2.
   OUT2 will get an exact copy of the response: if this is a chunked
   response, everything -- including the chunk headers -- is written
   to OUT2.  (OUT will only get the unchunked response.)

   The function exits and returns the amount of data read.  In case of
   error while reading data, -1 is returned.  In case of error while
   writing data to OUT, -2 is returned.  In case of error while writing
   data to OUT2, -3 is returned.  */

int
fd_read_body (const char *downloaded_filename, int fd, FILE *out, wgint toread, wgint startpos,
              wgint *qtyread, wgint *qtywritten, double *elapsed, int flags,
              FILE *out2)

The comment description of this function is more than detailed, so without further discussion let’s dive into the vulnerable part of fd_read_body().

int
fd_read_body (const char *downloaded_filename, int fd, FILE *out, wgint toread, wgint startpos,
              wgint *qtyread, wgint *qtywritten, double *elapsed, int flags,
              FILE *out2)
{
	...
  int dlbufsize = max (BUFSIZ, 8 * 1024);
  char *dlbuf = xmalloc (dlbufsize);
	...
  wgint remaining_chunk_size = 0;
	...
      if (chunked)
        {
          if (remaining_chunk_size == 0)
            {
              char *line = fd_read_line (fd);
	...
              remaining_chunk_size = strtol (line, &endl, 16);
	...
          rdsize = MIN (remaining_chunk_size, dlbufsize);
	...
      ret = fd_read (fd, dlbuf, rdsize, tmout);
	...
}

Just like in CVE-2017-13089, “remaining_chunk_size” is initialized from a user-controlled hexadecimal value, which can be negative. This value is then passed through the MIN() macro and used to initialize the signed integer “rdsize”, which in turn is the size argument of fd_read(), the call that copies data from the network into the heap-based “dlbuf” buffer. If “rdsize” has been driven to a negative value (via the integer overflow), the fd_read() call results in a heap-based buffer overflow of “dlbuf”. The patch adds a bounds check to ensure that “remaining_chunk_size” is not negative after the strtol() invocation.

               remaining_chunk_size = strtol (line, &endl, 16);
               xfree (line);
 
+              if (remaining_chunk_size < 0)
+                {
+                  ret = -1;
+                  break;
+                }
+
               if (remaining_chunk_size == 0)

Written by xorl

November 11, 2017 at 20:43

Posted in vulnerabilities

CVE-2017-13089: wget HTTP integer overflow


That’s an interesting vulnerability in GNU wget. According to the wget project, it was reported by Antti Levomäki, Christian Jalio, and Joonas Pihlaja of Forcepoint, as well as Juhani Eronen of the Finnish National Cyber Security Centre. The vulnerability is in the src/http.c source code file, more precisely in the skip_short_body() function.

/* Read the body of the request, but don't store it anywhere and don't
   display a progress gauge.  This is useful for reading the bodies of
   administrative responses to which we will soon issue another
   request.  The response is not useful to the user, but reading it
   allows us to continue using the same connection to the server.

   If reading fails, false is returned, true otherwise.  In debug
   mode, the body is displayed for debugging purposes.  */

static bool
skip_short_body (int fd, wgint contlen, bool chunked)
{
  enum {
    SKIP_SIZE = 512,                /* size of the download buffer */
    SKIP_THRESHOLD = 4096        /* the largest size we read */
  };
  wgint remaining_chunk_size = 0;
	...
  return true;
}

The description in the comment is pretty clear, but what we care about here is the “remaining_chunk_size” variable, which has the data type “wgint”. This type is defined in the src/wget.h header file depending on the architecture and operating system.

/* Pick an integer type large enough for file sizes, content lengths,
   and such.  Because today's files can be very large, it should be a
   signed integer at least 64 bits wide.  This can't be typedeffed to
   off_t because: a) off_t is always 32-bit on Windows, and b) we
   don't necessarily want to tie having a 64-bit type for internal
   calculations to having LFS support.  */

#ifdef WINDOWS
  /* nothing to do, see mswindows.h */
#elif SIZEOF_LONG >= 8
  /* long is large enough, so use it. */
  typedef long wgint;
# define SIZEOF_WGINT SIZEOF_LONG
#elif SIZEOF_LONG_LONG >= 8
  /* long long is large enough and available, use that */
  typedef long long wgint;
# define SIZEOF_WGINT SIZEOF_LONG_LONG
#elif HAVE_INT64_T
  typedef int64_t wgint;
# define SIZEOF_WGINT 8
#elif SIZEOF_OFF_T >= 8
  /* In case off_t is typedeffed to a large non-standard type that our
     tests don't find. */
  typedef off_t wgint;
# define SIZEOF_WGINT SIZEOF_OFF_T
#else
  /* Fall back to using long, which is always available and in most
     cases large enough. */
  typedef long wgint;
# define SIZEOF_WGINT SIZEOF_LONG
#endif

What is worth noting is that all of these type definitions use signed data types. This means that “wgint” variables can hold both positive and negative values. Now that this is clear, let’s move back to http.c and the skip_short_body() function.

static bool
skip_short_body (int fd, wgint contlen, bool chunked)
{
	...
    SKIP_SIZE = 512,                /* size of the download buffer */
	...
  wgint remaining_chunk_size = 0;
  char dlbuf[SKIP_SIZE + 1];
	...
  while (contlen > 0 || chunked)
    {
      int ret;
      if (chunked)
        {
          if (remaining_chunk_size == 0)
            {
              char *line = fd_read_line (fd);
              char *endl;
              if (line == NULL)
                break;

              remaining_chunk_size = strtol (line, &endl, 16);
              xfree (line);
	...
          contlen = MIN (remaining_chunk_size, SKIP_SIZE);
	...
      ret = fd_read (fd, dlbuf, MIN (contlen, SKIP_SIZE), -1);
	...
}

So, when wget processes a chunked response it enters this “while” loop (content length greater than zero, or the response is chunked). When the chunk size reaches 0, it reads the next line using fd_read_line() and then parses the remaining chunk size from it using strtol() in base 16. This value is fully controlled by the response and can be anything, including a negative number, since strtol() accepts a leading minus sign. The MIN() macro then compares that value with SKIP_SIZE (which is 512) and the result initializes the signed integer “contlen”. If “remaining_chunk_size” was negative, so is “contlen”, which is then passed as the size argument to fd_read(). Since the attacker effectively controls the size used to copy data from “fd” (the HTTP response) into “dlbuf” (a stack-based buffer of 513 bytes), this leads to a stack-based buffer overflow. The fix was relatively simple, as you can see below.

               remaining_chunk_size = strtol (line, &endl, 16);
               xfree (line);
 
+              if (remaining_chunk_size < 0)
+                return false;
+
               if (remaining_chunk_size == 0)

The fix is a simple bounds check after the strtol() call, ensuring that “remaining_chunk_size” was not set to a negative value before processing continues.

Written by xorl

November 11, 2017 at 16:32

Posted in vulnerabilities

CVE-2016-6515: OpenSSH remote DoS


It is quite interesting that until recently OpenSSH suffered from this type of DoS. Basically, a client could send extremely long passwords, causing excessive CPU usage in the crypt() hashing operation. The issue was reported to the OpenSSH project by Tomas Kuthan, Andres Rojas, and Javier Nieto. Below is the exact snippet from the OpenSSH 7.3 release notes that describes the vulnerability.

 * sshd(8): Mitigate a potential denial-of-service attack against
   the system's crypt(3) function via sshd(8). An attacker could
   send very long passwords that would cause excessive CPU use in
   crypt(3). sshd(8) now refuses to accept password authentication
   requests of length greater than 1024 characters. Independently
   reported by Tomas Kuthan (Oracle), Andres Rojas and Javier Nieto.

Here is how the OpenSSH developers patched the code to implement the above logic. All of the modifications are in the auth-passwd.c source code file. First, the constant MAX_PASSWORD_LEN was defined with a value of 1024, controlling the maximum password length allowed.

#define MAX_PASSWORD_LEN	1024

The second modification was in the auth_password() function, which now immediately returns 0 (false) if the provided password exceeds MAX_PASSWORD_LEN. You can see the equivalent code snippet from auth_password() below.

/*
 * Tries to authenticate the user using password.  Returns true if
 * authentication succeeds.
 */
int
auth_password(Authctxt *authctxt, const char *password)
{
	...
	if (strlen(password) > MAX_PASSWORD_LEN)
		return 0;
	...
	return (result && ok);
}

Written by xorl

November 10, 2017 at 23:52

Posted in vulnerabilities