On 01/15/2017 05:57 PM, Sebastian Bachmann wrote:
> Hi again!
>
> Actually setting the stripe cache using dmsetup reload works. You just
> have to resume the device after it was reloaded. I did not read the
> documentation well enough to recognize that: yes, reload populates the
> inactive table (see "dmsetup info" after the reload), and resume
> switches the active and inactive tables and removes the latter.
>
> $ dmsetup table
> [...]
> myvg-lv: 0 11525947392 raid raid5_ls 3 128 region_size 8192 3 254:5 254:6 254:7 254:8 254:9 254:10
> [...]
> $ dmsetup reload myvg-lv --table '0 11525947392 raid raid5_ls 5 128 stripe_cache 16384 region_size 8192 3 254:5 254:6 254:7 254:8 254:9 254:10'
> $ dmsetup resume myvg-lv
> $ dmsetup table
> [...]
> myvg-lv: 0 11525947392 raid raid5_ls 5 128 region_size 8192 stripe_cache 16384 3 254:5 254:6 254:7 254:8 254:9 254:10
> [...]
>
> If you do this, there is actually a speedup, as suggested in various
> threads about mdraid. Is there now any way to make this persistent? I
> saw there are udev hooks for lvm, but I think you cannot simply put
> the new dmsetup table argument there...

Never set dm table lines statically, because they would not reflect
changes made to your raid5 set later on via lvm commands (e.g. a size
change). Ultimately, persistent changes to the stripe cache have to be
handled in lvm2. As a workaround until that is added, you might create
a script that retrieves the current raid5 set table, edits the
parameter count and inserts "stripe_cache #" (a sketch of such a
script follows below, after the quoted messages).

Heinz

> regards
> sebastian
>
> On 01/01/2017 05:45 PM, Sebastian Bachmann wrote:
>> Hi!
>>
>> Just a sidenote: I tried to set the stripe_cache option via dmsetup
>> reload in a Debian jessie VM. With the 3.16 kernel, it crashed with
>> a null pointer dereference in the raid5_set_cache_size function. I
>> upgraded to a newer kernel (4.8.11) and tried again; the cache size
>> can be set there using reload:
>>
>> Jan 1 15:44:56 jessie kernel: [ 33.913336] device-mapper: raid: 16384 stripe cache entries
>>
>> I then upgraded the kernel on my machine with the RAID5 and found
>> out that, even without any cache settings, the RAID now writes two
>> to four times faster than before. I now get around 80-100 MB/s. It
>> seems there were a lot of changes between 3.16 and 4.8 regarding
>> speed, not only the fix for this crash. I then tried setting the
>> stripe cache size on the RAID5, but the speed does not get any
>> higher... According to the numerous reports on the internet, the
>> impact on write performance should be great - so it would still be
>> interesting to know whether the stripe_cache is actually set
>> correctly when using dmsetup reload (if I run dmsetup table again, I
>> still see the old line) and whether it can somehow be set
>> persistently with LVM.
>>
>> regards
>> sebastian
>>
>> On 01/01/2017 03:15 PM, Sebastian Bachmann wrote:
>>> Hi!
>>>
>>> I'm using an LVM RAID5 on a machine, but the write performance is
>>> pretty poor (about 20-30 MB/s), while the read performance is quite
>>> good (about 280 MB/s). I read about stripe_cache_size for md raid,
>>> and as far as I understand LVM RAID, it uses md as well. In the
>>> design document for lvm2-raid
>>> (https://git.fedorahosted.org/cgit/lvm2.git/tree/doc/lvm2-raid.txt)
>>> I can find that an option --stripecache is specified there, but it
>>> seems those options were never implemented. Is it possible to set
>>> the stripe cache size somewhere else? It seems to me that lvm2 uses
>>> dmsetup to create the raid, where a stripe cache can be set at
>>> creation time, but there seems to be no interface to change such
>>> values later on?
>>>
>>> Thanks in advance!
>>> regards
>>> Sebastian
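For reference, below is a minimal sketch of the workaround Heinz
describes above: it reads the live dm table of a raid5 LV, raises the
raid parameter count by two, splices in "stripe_cache <entries>", and
activates the result with reload + resume. The script name is
hypothetical, and it assumes a single-line raid5 table without
stripe_cache already set; treat it as an illustration, not lvm2
functionality.

#!/bin/bash
# set-stripe-cache.sh (hypothetical name) - splice "stripe_cache <entries>"
# into the live dm table of an LVM raid5 LV and activate it via
# reload + resume, as discussed in the thread above.
# Assumptions: single-line raid5 table, stripe_cache not yet present.
set -eu

dev=$1        # dm device name, e.g. myvg-lv
entries=$2    # stripe cache size in entries, e.g. 16384

table=$(dmsetup table "$dev")

case "$table" in
    *stripe_cache*) echo "$dev already sets stripe_cache" >&2; exit 1 ;;
    *" raid raid5"*) ;;                  # ok: a raid5 target
    *) echo "$dev is not a raid5 target" >&2; exit 1 ;;
esac

# dm-raid table layout: <start> <len> raid <type> <#params> <chunk> [opts...]
# Adding "stripe_cache <entries>" means two more raid parameters after
# the chunk size, so bump field 5 by 2 and extend field 6.
new_table=$(printf '%s\n' "$table" | awk -v n="$entries" '{
    $5 += 2
    $6 = $6 " stripe_cache " n
    print
}')

dmsetup reload "$dev" --table "$new_table"
dmsetup resume "$dev"    # swap the new (inactive) table in

Run it as root, e.g. "./set-stripe-cache.sh myvg-lv 16384". Per Heinz's
caveat, the change lives only in the currently active table: any lvm
operation that rewrites the table (a resize, for instance) will drop it
again.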
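For comparison with the mdraid threads Sebastian mentions: on a native
md array the stripe cache is runtime-tunable through sysfs, which (to
my knowledge) dm-raid does not expose - hence the table reload above.
md0 is just an example array name.

# Native md raid5 only; not available for LVM/dm-raid devices.
cat /sys/block/md0/md/stripe_cache_size       # current value (default 256)
echo 16384 > /sys/block/md0/md/stripe_cache_size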
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/