Re: GFS2 as virtual machine disk store

On 29/08/17 11:54, Gionatan Danti wrote:
Hi Steven,

On 29-08-2017 11:45 Steven Whitehouse wrote:
Yes, there is some additional overhead due to the clustering. You can,
however, usually organise things so that the overheads are minimised,
as you mentioned above, by being careful about the workload.

No. You want to use the default data=ordered for the most part. It is
less a question of data loss and more a question of whether, in the
case of a power outage, a file being written to can end up with
incorrect content. That can happen in the data=writeback case (where
block allocation has succeeded, but the new data has not yet been
written to disk) but not in the data=ordered case.
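
To illustrate, the journaling mode is selected at mount time; the
device path and mount point below are made-up examples:

    # data=ordered is the default, shown explicitly here for clarity
    mount -t gfs2 -o data=ordered /dev/clustervg/vmstore /srv/vmstore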

I think there is a misunderstanding: I am not talking about filesystem mount options (data=ordered vs data=writeback), but rather about the QEMU virtual disk caching mode. The Red Hat documentation suggests setting QEMU vdisks to cache=none; however, cache=writeback has significant performance advantages in a number of situations. Since QEMU with cache=writeback has supported barrier passing for at least five years, and so is safe to use in general, I wondered why Red Hat officially suggests avoiding it on GFS2. I suspect it is related to the performance degradation caused by keeping the caches coherent between the two hosts, but I would like to be certain it is not related to inherently unsafe operation on GFS2.

Yes, it definitely needs to be set to cache=none mode. Barrier passing is only one issue; as you say, it comes down to cache coherency, since the block layer is not aware of the caching requirements of the upper layers in this case.
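
For example, on the QEMU command line (the same setting is
cache='none' in the libvirt disk XML; the image path below is
illustrative):

    # aio=native is commonly paired with cache=none, since it
    # requires direct I/O to the backing file
    qemu-system-x86_64 -m 2048 -enable-kvm \
        -drive file=/srv/vmstore/guest1.img,format=raw,cache=none,aio=native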

Yes, it works well. The size limit was based on fsck time, rather than
any reliability issues. It will work reliably at much larger sizes,
but it will take longer and use more memory.

Great. Any advice on how much time is needed for a full fsck on an 8+ TB volume?

It will depend a great deal on a number of factors: the performance of the storage and also the number of inodes in the filesystem. It will also take longer if there is any work to do (i.e. if changes need to be made, compared with just checking an otherwise clean filesystem), so it is difficult to give any guidance without knowing those variables. The best way to know is to try it and see.
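
If you do try it, something along these lines gives a baseline on a
clean filesystem (the device name is an example, and the filesystem
must be unmounted on all nodes first):

    # -n opens the filesystem read-only and answers no to all questions
    time fsck.gfs2 -n /dev/clustervg/vmstore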

Steve.



