When passed a negative count (indicating a byte count rather than a
block count), e2fsck_handle_read_error() treats the data as a full
block, causing unix_write_blk64() (which can handle negative counts
just fine) to try to write too much.  Given a faulty block device,
this resulted in a SEGV when unix_write_blk64() read past the bottom
of the stack while copying the data to the cache.

(check_backup_super_block -> unix_read_blk64 -> raw_read_blk ->
 e2fsck_handle_read_error)

Signed-off-by: Jim Keniston <jkenisto@xxxxxxxxxx>
Signed-off-by: Dan Streetman <ddstreet@xxxxxxxxxx>
Reviewed-by: Mingming Cao <mcao@xxxxxxxxxx>
Reported-by: Alex Friedman <alexfr@xxxxxxxxxx>
---
 e2fsck/ehandler.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/e2fsck/ehandler.c b/e2fsck/ehandler.c
index 6eecf33..6dddf9c 100644
--- a/e2fsck/ehandler.c
+++ b/e2fsck/ehandler.c
@@ -60,7 +60,7 @@ static errcode_t e2fsck_handle_read_error(io_channel channel,
 	preenhalt(ctx);
 	if (ask(ctx, _("Ignore error"), 1)) {
 		if (ask(ctx, _("Force rewrite"), 1))
-			io_channel_write_blk64(channel, block, 1, data);
+			io_channel_write_blk64(channel, block, count, data);
 		return 0;
 	}
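
For context, here is a minimal sketch of the count convention the
commit message relies on, assuming only what is stated above (a
positive count is a number of blocks, a negative count means -count
bytes).  count_to_bytes() is a hypothetical helper used purely for
illustration; it is not an e2fsprogs function.

#include <stddef.h>

/* Translate an io_channel-style count into a byte length. */
static size_t count_to_bytes(int count, unsigned int blocksize)
{
	if (count < 0)
		return (size_t) -count;		/* negative: byte count */
	return (size_t) count * blocksize;	/* positive: block count */
}

With the hard-coded 1, a failed partial-block read (negative count)
was rewritten as a full block's worth of bytes, so the write path
copied blocksize bytes out of a smaller buffer.  Forwarding the
original count keeps the rewrite limited to the bytes that were
actually requested.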