On Mon, Aug 06, 2012 at 01:05:53PM -0700, Jim Keniston wrote:
> When passed a negative count (indicating a byte count rather than
> a block count) e2fsck_handle_read_error() treats the data as a full
> block, causing unix_write_blk64() (which can handle negative counts
> just fine) to try to write too much. Given a faulty block device,
> this resulted in a SEGV when unix_write_blk64() read past the bottom
> of the stack copying the data to cache. (check_backup_super_block ->
> unix_read_blk64 -> raw_read_blk -> e2fsck_handle_read_error)
>
> Signed-off-by: Jim Keniston <jkenisto@xxxxxxxxxx>
> Signed-off-by: Dan Streetman <ddstreet@xxxxxxxxxx>
> Reviewed-by: Mingming Cao <mcao@xxxxxxxxxx>
> Reported-by: Alex Friedman <alexfr@xxxxxxxxxx>

Thanks, applied!  I changed the one-line summary to read:

    e2fsck: fix potential segv when handling a read error in a superblock

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html