From: Frantisek Hrbata <fhrbata@redhat.com>
Date: Mon, 6 Sep 2010 14:33:50 -0400
Subject: [net] ipv4: fix buffer overflow in icmpmsg_put
Message-id: <1283783630-14789-1-git-send-email-fhrbata@redhat.com>
Patchwork-id: 28159
O-Subject: [RHEL5 PATCH] BZ 601391: net: fix buffer overflow in icmpmsg_put()
Bugzilla: 601391
RH-Acked-by: Jiri Pirko <jpirko@redhat.com>
RH-Acked-by: David S. Miller <davem@redhat.com>
RH-Acked-by: Neil Horman <nhorman@redhat.com>

Bug: 601391
https://bugzilla.redhat.com/show_bug.cgi?id=601391

Brew build: 2735526
http://brewweb.devel.redhat.com/brew/taskinfo?taskID=2735526

Description:
Reading from the /proc/net/snmp file causes a buffer overflow when the
number of distinct ICMP message types with nonzero counters overruns the
internal "out" buffer: icmpmsg_put() never resets its line counter after
flushing a full line, so later stores write past the end of out[].

Upstream Status of the patch:
backport of 2.6 commit b971e7ac834e9f4bda96d5a96ae9abccd01c1dd8

Test Status:
Tested by myself on x86-64. The hping3 utility was used to verify that

$ cat /proc/net/snmp

produces correct output for more than 16 different icmp types:

$ hping3 --force-icmp -1 -C <icmp type> <host>

Notes:
Changes in the upstream patch rely on other interface modifications which
are not present in the RHEL5 kernel. The following two changes were made
to the upstream patch:
1) use fold_field instead of snmp_fold_field
2) use the global icmpmsg_statistics instead of net->mib.icmpmsg_statistics
   passed as private data in seq_file

Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index db31572..ec80a76 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -254,42 +254,44 @@ static const struct snmp_mib snmp4_net_list[] = {
 	SNMP_MIB_SENTINEL
 };
 
-static void icmpmsg_put(struct seq_file *seq)
+static void icmpmsg_put_line(struct seq_file *seq, unsigned long *vals,
+			     unsigned short *type, int count)
 {
-#define PERLINE	16
-
-	int j, i, count;
-	static int out[PERLINE];
-
-	count = 0;
-	for (i = 0; i < ICMPMSG_MIB_MAX; i++) {
-
-		if (fold_field((void **) icmpmsg_statistics, i))
-			out[count++] = i;
-		if (count < PERLINE)
-			continue;
+	int j;
 
-		seq_printf(seq, "\nIcmpMsg:");
-		for (j = 0; j < PERLINE; ++j)
-			seq_printf(seq, " %sType%u", i & 0x100 ? "Out" : "In",
-					i & 0xff);
-		seq_printf(seq, "\nIcmpMsg: ");
-		for (j = 0; j < PERLINE; ++j)
-			seq_printf(seq, " %lu",
-				fold_field((void **) icmpmsg_statistics,
-					out[j]));
-		seq_putc(seq, '\n');
-	}
 	if (count) {
 		seq_printf(seq, "\nIcmpMsg:");
 		for (j = 0; j < count; ++j)
-			seq_printf(seq, " %sType%u", out[j] & 0x100 ? "Out" :
-					"In", out[j] & 0xff);
+			seq_printf(seq, " %sType%u",
+				   type[j] & 0x100 ? "Out" : "In",
+				   type[j] & 0xff);
 		seq_printf(seq, "\nIcmpMsg:");
 		for (j = 0; j < count; ++j)
-			seq_printf(seq, " %lu", fold_field((void **)
-				icmpmsg_statistics, out[j]));
+			seq_printf(seq, " %lu", vals[j]);
+	}
+}
+
+static void icmpmsg_put(struct seq_file *seq)
+{
+#define PERLINE	16
+
+	int i, count;
+	unsigned short type[PERLINE];
+	unsigned long vals[PERLINE], val;
+
+	count = 0;
+	for (i = 0; i < ICMPMSG_MIB_MAX; i++) {
+		val = fold_field((void **) icmpmsg_statistics, i);
+		if (val) {
+			type[count] = i;
+			vals[count++] = val;
+		}
+		if (count == PERLINE) {
+			icmpmsg_put_line(seq, vals, type, count);
+			count = 0;
+		}
+	}
+	icmpmsg_put_line(seq, vals, type, count);
 #undef PERLINE
 }