From: Larry Woodman <lwoodman@redhat.com>
Date: Tue, 15 Jul 2008 14:45:22 -0400
Subject: [x86] mm: fix endless page faults in mount_block_root
Message-id: 1216147523.21188.3.camel@localhost.localdomain
O-Subject: [RHEL5-U3 patch] backport of fix endless page faults in mount_block_root for Linux 2.6 from 2.6.26 to RHEL5-U3
Bugzilla: 455491

The attached patch is a backport of the upstream commit below. It adds a simple check in vmalloc_fault() that fails immediately when the faulting address is not inside the vmalloc address bounds, instead of trying to treat the fault as a vmalloc fault.

Fixes BZ 455491

Gitweb:     http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=b29c701deacd5d24453127c37ed77ef851c53b8b
Commit:     b29c701deacd5d24453127c37ed77ef851c53b8b
Parent:     3703f39965a197ebd91743fc38d0f640606b8da3
Author:     Henry Nestler <henry.nestler@gmail.com>
AuthorDate: Mon May 12 15:44:39 2008 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Thu Jun 12 21:26:07 2008 +0200

    x86: fix endless page faults in mount_block_root for Linux 2.6

    Page faults in the kernel address space between PAGE_OFFSET and
    VMALLOC_START should not be handled as vmalloc faults.

    This fixes rare endless page faults inside mount_block_root when
    mounting the root filesystem at boot time. All 32-bit kernels up to
    2.6.25 can fall into this hole.

    I could not reproduce this under a native Linux kernel. The 64-bit
    code already fixed the problem, so I copied the same lines into the
    32-bit part.

diff --git a/arch/i386/mm/fault.c b/arch/i386/mm/fault.c
index 45914b5..dd6cef4 100644
--- a/arch/i386/mm/fault.c
+++ b/arch/i386/mm/fault.c
@@ -296,6 +296,11 @@ static inline int vmalloc_fault(unsigned long address)
 	unsigned long pgd_paddr;
 	pmd_t *pmd_k;
 	pte_t *pte_k;
+
+	/* Make sure we are in vmalloc area */
+	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+		return -1;
+
 	/*
 	 * Synchronize this task's top level page-table
 	 * with the 'reference' page table.
diff --git a/arch/x86_64/mm/fault.c b/arch/x86_64/mm/fault.c
index 390160b..5a0c483 100644
--- a/arch/x86_64/mm/fault.c
+++ b/arch/x86_64/mm/fault.c
@@ -288,6 +288,10 @@ static int vmalloc_fault(unsigned long address)
 	pmd_t *pmd, *pmd_ref;
 	pte_t *pte, *pte_ref;
 
+	/* Make sure we are in vmalloc area */
+	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+		return -1;
+
 	/* Copy kernel mappings over when needed. This can also
 	   happen within a race in page table update. In the later
 	   case just flush. */