The patch titled
     mm/shmem.c: fix division by zero
has been added to the -mm tree.  Its filename is
     mm-shmemc-fix-division-by-zero.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find out
what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: mm/shmem.c: fix division by zero
From: Yuri Tikhonov <yur@xxxxxxxxxxx>

Fix a division by zero which occurs in shmem_truncate_range() and
shmem_unuse_inode() when using big PAGE_SIZE values (e.g. 256KB on
ppc44x).  With a 256KB PAGE_SIZE, the ENTRIES_PER_PAGEPAGE constant no
longer fits in an unsigned long: its value, 0x1.0000.0000, truncates to
zero on 32-bit, so later divisions by it fault.  This patch changes the
affected types from 'unsigned long' to 'unsigned long long' where
necessary.
Signed-off-by: Yuri Tikhonov <yur@xxxxxxxxxxx>
Cc: Hugh Dickins <hugh@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/shmem.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff -puN mm/shmem.c~mm-shmemc-fix-division-by-zero mm/shmem.c
--- a/mm/shmem.c~mm-shmemc-fix-division-by-zero
+++ a/mm/shmem.c
@@ -65,7 +65,7 @@ static struct vfsmount *shm_mnt;
 #include <asm/pgtable.h>
 
 #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
-#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
+#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
 #define BLOCKS_PER_PAGE  (PAGE_CACHE_SIZE/512)
 #define SHMEM_MAX_INDEX  (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
@@ -103,7 +103,7 @@ static unsigned long shmem_default_max_i
 }
 #endif
 
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 			struct page **pagep, enum sgp_type sgp, int *type);
 
 static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
@@ -541,7 +541,7 @@ static void shmem_truncate_range(struct 
 	int punch_hole;
 	spinlock_t *needs_lock;
 	spinlock_t *punch_lock;
-	unsigned long upper_limit;
+	unsigned long long upper_limit;
 
 	inode->i_ctime = inode->i_mtime = CURRENT_TIME;
 	idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
@@ -1183,7 +1183,7 @@ static inline struct mempolicy *shmem_ge
 * vm.  If we swap it in we mark it dirty since we also free the swap
 * entry since a page cannot live in both the swap and page cache
 */
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 	struct page **pagep, enum sgp_type sgp, int *type)
 {
 	struct address_space *mapping = inode->i_mapping;
_

Patches currently in -mm which might be from yur@xxxxxxxxxxx are

mm-shmemc-fix-division-by-zero.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html