Fellows,

Just as we were running out of options, I have finally managed to fix the
read/write speed of LVM snapshots. It turns out the default I/O scheduler,
CFQ, was at fault (or at least a poor fit). Switching to noop gave a two-
to three-fold boost: 30MB/s write speed instead of 10-15MB/s when dumping
or copying raw data from a snapshot. And with the deadline scheduler it
even jumps to 50MB/s at times, and the load goes down and stays constant
(around 3.0 while dd'ing a fresh snapshot)!

Considering that these standard off-the-shelf HDDs manage about 100MB/s on
the raw device, what I am getting out of LVM snapshots now is not bad at
all. Leaving the details here (exact commands at the end of this mail) so
they can help others who come looking for answers!

On Wed, Jul 10, 2013 at 4:15 AM, Micky <mickylmartin@gmail.com> wrote:
> Read/write speed is just fine (~100-200MB/s) without LVM snapshots, as I
> described in my first few emails.
>
>
> On Tue, Jul 9, 2013 at 11:47 PM, Zdenek Kabelac <zkabelac@redhat.com> wrote:
>> On 9.7.2013 17:39, Micky wrote:
>>
>>> I meant that the ALIGNMENT value for all dm entries, 0 through 31, is
>>> zero!
>>>
>>>
>>>>> What does `lsblk -t` say? Could be an alignment issue.
>>>>
>>>>
>>>> 0 through 31
>>>>
>>>>> What does `free` say about the free memory and cache? (dmeventd on 6.4
>>>>> tries to lock a large chunk of address space in RAM, ~100M.)
>>>>
>>>>
>>>> Cached mem looks good.
>>>> Dmeventd. Right, it is. Isn't it spawned every time an LV is created?
>>>> root 6813 0.0 1.4 197056 11044 ? S<s May26 2:44 /sbin/dmeventd
>>>
>>
>> There is only one dmeventd running; lvm spawns it only when one is not
>> already available (and in fact "spawning" is not the right term on a
>> systemd-enabled system, like Fedora).
>>
>> Also, so far you still have not actually shown any 'real' numbers, even
>> from a plain good old 'dd' run.
>>
>> So what is the performance of 'dd' reading 10G to /dev/null from the raw
>> device, the dm origin, and the dm snapshot (with iflag=direct)?
>>
>> What is the write performance?
>>
>> What is the performance when two of them run in parallel?
>>
>> Also, it's probably easier to resolve this over IRC.
>>
>> Zdenek
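
P.S. For anyone who comes here looking for the exact steps, here is a
minimal sketch of the scheduler switch. The device name (sdb) is just an
example, so adjust it for your setup; and as far as I understand, the
scheduler has to be set on the underlying physical disk, not on the dm-X
devices, since bio-based device-mapper devices do not run an I/O scheduler
of their own.

  # See which schedulers are available; the active one is in brackets:
  cat /sys/block/sdb/queue/scheduler
  noop deadline [cfq]

  # Switch to noop (or deadline) at runtime, as root; it takes effect
  # immediately:
  echo deadline > /sys/block/sdb/queue/scheduler

  # To keep the change across reboots, add "elevator=deadline" to the
  # kernel command line in the boot loader configuration.

My guess is that CFQ's fairness and idling heuristics interact badly with
the seek-heavy copy-on-write I/O a snapshot generates, while noop and
deadline simply keep the queue moving.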
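
And since Zdenek asked for 'real' dd numbers, this is the kind of run I
used, again only a sketch: the volume group and LV names (vg0, origin,
snap) are made up, so substitute your own, and be careful with the write
test because it destroys the contents of that LV.

  # Sequential 10G reads from the raw disk, the origin LV and the
  # snapshot LV, bypassing the page cache:
  dd if=/dev/sdb        of=/dev/null bs=1M count=10240 iflag=direct
  dd if=/dev/vg0/origin of=/dev/null bs=1M count=10240 iflag=direct
  dd if=/dev/vg0/snap   of=/dev/null bs=1M count=10240 iflag=direct

  # Write test against the snapshot (this DESTROYS the LV's contents):
  dd if=/dev/zero of=/dev/vg0/snap bs=1M count=10240 oflag=direct

  # Origin and snapshot readers in parallel:
  dd if=/dev/vg0/origin of=/dev/null bs=1M count=10240 iflag=direct &
  dd if=/dev/vg0/snap   of=/dev/null bs=1M count=10240 iflag=direct &
  wait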
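
(For completeness, on the alignment question: `lsblk -t` prints one line
per device with an ALIGNMENT column, where 0 means no misalignment offset
was detected, which is why the zeros on dm-0 through dm-31 ruled alignment
out here. It also shows the active scheduler per device in the SCHED
column, handy for double-checking the switch above.)

  lsblk -t
  # NAME ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED ...
  # ALIGNMENT 0 = properly aligned; non-zero = offset in bytes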