On Thu, 2007-05-10 at 10:33 -0400, Greg Freemyer wrote:
> On 5/10/07, Alex Owen <r.alex.owen@gmail.com> wrote:
> > Hello,
> > I have just been making some snapshot performance benchmarks on a
> > Debian Etch system.
> > Kernel: 2.6.18-4-686 (2.6.18.dfsg.1-12etch1)
> > dmsetup: 1.02.08-1
> > lvm2: 2.02.06-4
> >
> > I have been using commands of the form:
> > time dd if=/dev/zero of=/dev/volgroup/test bs=1M count=100
> > to get speeds for copying to an LVM device both WITH and WITHOUT a
> > single snapshot.
> >
> > It seems that writes take >=10 times longer the first time a newly
> > snapshotted origin device is written to.
> >
> > I was expecting something like a 2x or 3x performance loss, as 1
> > physical read and 2 physical writes must occur for a single logical
> > write. I was NOT expecting a 10x overhead. As I move to larger
> > devices (bs=1M count=1000) the 10x figure rises to nearer 20x.
> > This is also true on mounted origin LVs.
> >
> > Has anyone else benchmarked this? Is this normal?
> >
> > Thanks for any feedback
> > Alex Owen
>
> I always ensure my snapshots are on physically separate drives from my
> origin. If they are on the same drive I'm not surprised you're having
> speed issues. You are significantly increasing the amount of disk seek
> activity. Having them on separate drives should be much better.

Putting the snapshot on a separate device might get you down to roughly
a 5x slowdown instead of 10x; the remaining cost is still disk seek
activity. Don't use the current snapshot implementation on a
write-intensive LV. Adjusting the snapshot chunk size to suit your
application's workload can also help a bit.

> (FYI: It has been a while since I benchmarked, so you may still have
> problems.)
>
> Greg
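
To make those two suggestions concrete, something like the following
should work (untested on your setup; the snapshot name, size, chunk
size, and /dev/sdb1 are just placeholders, and the exact --chunksize
syntax can vary between lvm2 versions, so check lvcreate(8)):

  # allocate the snapshot's copy-on-write space on a different PV
  # (here /dev/sdb1) than the one the origin LV lives on
  lvcreate -s -n testsnap -L 2G /dev/volgroup/test /dev/sdb1

  # and/or use a larger chunk size for big sequential writes
  lvcreate -s -n testsnap -L 2G --chunksize 256k /dev/volgroup/test

A larger chunk means fewer copy-on-write exceptions (and fewer seeks)
for streaming writes like your dd test, at the cost of copying more
data the first time each chunk is touched; lots of small random writes
usually do better with a smaller chunk size.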