Re: mempool.c: Replace io_schedule_timeout with io_schedule

On Thu, Dec 18 2014 at 10:37am -0500,
Mike Snitzer <snitzer@xxxxxxxxxx> wrote:

> On Wed, Dec 17 2014 at  7:40pm -0500,
> Timofey Titovets <nefelim4ag@xxxxxxxxx> wrote:
> 
> > io_schedule_timeout(5*HZ);
> > This was introduced to work around a dm bug:
> > http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-08/msg04869.html
> > According to that description, it should be replaced with io_schedule().
> > 
> > Can you test it and say whether it produces any regression?
> > 
> > I replaced it, recompiled the kernel, and tested with the following script:
> > ---
> > dev=""
> > block_dev=zram #loop
> > if [ "$block_dev" == "loop" ]; then
> >         f1=$RANDOM
> >         f2=${f1}_2
> >         truncate -s 256G ./$f1
> >         truncate -s 256G ./$f2
> >         dev="$(losetup -f --show ./$f1) $(losetup -f --show ./$f2)"
> >         rm ./$f1 ./$f2
> > else
> >         modprobe zram num_devices=8
> >         # needed ~1g free ram for test
> >         echo 128G > /sys/block/zram7/disksize
> >         echo 128G > /sys/block/zram6/disksize
> >         dev="/dev/zram7 /dev/zram6"
> > fi
> > 
> > md=/dev/md$[$RANDOM%8]
> > echo "y\n" | mdadm --create $md --chunk=4 --level=1 --raid-devices=2 $(echo $dev)
> 
> You didn't test using DM, you used MD.
> 
> And in the context of 2.6.18 the old dm-raid1 target was all DM had
> (whereas now we also have a DM wrapper around MD raid with the dm-raid
> module).  Should we just kill dm-raid1 now that we have dm-raid?  But
> that is tangential to the question being posed here.
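
As an aside, to exercise the DM path with the same zram devices the
quoted script sets up, the two devices can be wrapped in a dm-raid1
mirror via dmsetup; a sketch, with the device name and region size
chosen arbitrarily:

	# Build a dm-raid1 mirror over the two zram devices with a core log;
	# table: <start> <len> mirror core 2 <region_size> nosync <#devs> <dev> <off>...
	size=$(blockdev --getsz /dev/zram6)
	echo "0 $size mirror core 2 1024 nosync 2 /dev/zram6 0 /dev/zram7 0" | \
	        dmsetup create test-mirror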

Heinz pointed out that dm-raid1 handles clustered raid1 capabilities.
So we cannot easily replace it with dm-raid.
 
> So I'll have to read the thread you linked to to understand if DM raid1
> (or DM core) still suffers from the problem that this hack papered over.

Heinz also pointed out that the primary issue that forced the use of
io_schedule_timeout() was that dm-log-userspace (used by dm-raid1) makes
use of a single mempool shared across multiple devices, so an allocation
on behalf of one device can end up waiting on pool elements held by
another.  Unfortunately, dm-log-userspace still has this shared mempool
(flush_entry_pool).  So we'll need to fix that up to be per-device
before the mm/mempool.c code can be switched to use io_schedule().
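
A minimal sketch of that per-device conversion in
drivers/md/dm-log-userspace-base.c might look like the following
(struct log_c and flush_entry_pool are real names from the module; the
helper functions and pool size here are hypothetical, and the actual
ctr/dtr wiring is abbreviated):

	#include <linux/mempool.h>
	#include <linux/slab.h>

	#define FLUSH_ENTRY_POOL_SIZE 16	/* illustrative size */

	/* Before: one pool shared by every userspace-log device. */
	static struct kmem_cache *_flush_entry_cache;	/* backs the pool */
	static mempool_t *flush_entry_pool;		/* module-wide, shared */

	/* After (sketch): each log context owns a private pool. */
	struct log_c {
		/* ... existing per-device fields ... */
		mempool_t *flush_entry_pool;
	};

	/* Called from the constructor (real ctr takes dm_dirty_log etc.): */
	static int userspace_ctr_pool(struct log_c *lc)
	{
		lc->flush_entry_pool =
			mempool_create_slab_pool(FLUSH_ENTRY_POOL_SIZE,
						 _flush_entry_cache);
		return lc->flush_entry_pool ? 0 : -ENOMEM;
	}

	/* ...and the matching teardown from the destructor: */
	static void userspace_dtr_pool(struct log_c *lc)
	{
		mempool_destroy(lc->flush_entry_pool);
	}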

I'll add this to my TODO.  But it'll have to wait until after the new
year.

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



