Re: On making ctime generator enabled by default in stack

On 11/05/2018 10:56 PM, Raghavendra Gowdappa wrote:
> All,
> 
> There is a patch [1] from Kotresh which makes the ctime generator the
> default in the stack. Currently the ctime generator is recommended only
> for usecases where ctime is important (like for Elasticsearch). However,
> a reliable (c)(m)time can fix many consistency issues within the
> glusterfs stack too. These are issues with caching layers holding stale
> (meta)data [2][3][4]. Basically, just like applications, components
> within the glusterfs stack need a timestamp to find out which among
> racing ops (like write, stat, etc.) has the latest (meta)data.
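
To illustrate the comparison being described, here is a minimal sketch
(plain C, not actual xlator code) of how a caching layer could use a
nanosecond-granularity (c)time to decide which of two racing replies
carries the fresher metadata; the helper name is made up for this
example:

#include <stdbool.h>
#include <time.h>

/* Illustrative only, not GlusterFS code: report whether an incoming
 * reply carries metadata at least as fresh as what the cache holds,
 * by comparing ctime at nanosecond granularity. */
static bool
mdata_is_fresher(const struct timespec *cached,
                 const struct timespec *incoming)
{
    if (incoming->tv_sec != cached->tv_sec)
        return incoming->tv_sec > cached->tv_sec;
    return incoming->tv_nsec >= cached->tv_nsec;
}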
> 
> Also note that a consistent (c)(m)time is not an optional feature, but
> instead forms the core of the infrastructure. So, I am proposing to
> merge this patch. If you have any objections, please voice them before
> Nov 13, 2018 (a week from today).

The primary issue discussed in the patch is upgrade, as the option name
changes. So, I would like clear instructions on how to perform rolling
upgrades in scenarios where existing installations are using the older
options. If there are no special instructions, I am good with the patch.

Also, during rolling upgrades, if the older option is set, will both
older and newer clients send the time information in the frame for use
by the server? IOW, in a mixed-version cluster, is the integrity of the
time sent by the client and stored on disk preserved?
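
To make the question concrete: a purely hypothetical sketch of what a
brick might have to do during a mixed-version window (an assumption for
illustration only, not the actual behaviour of the patch) would look
like this:

#include <stddef.h>
#include <time.h>

/* Hypothetical illustration of the rolling-upgrade question above, not
 * actual GlusterFS logic: if an older client sends no time in the
 * frame, the brick falls back to its own clock, so timestamps stored
 * during the upgrade window would come from two different sources. */
static void
pick_store_time(const struct timespec *client_time, struct timespec *out)
{
    if (client_time != NULL)
        *out = *client_time;                /* newer client: frame time */
    else
        clock_gettime(CLOCK_REALTIME, out); /* older client: brick clock */
}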

> 
> As to the existing known issues/limitations with the ctime generator,
> my conversations with Kotresh revealed the following:
> * Potential performance degradation (we don't yet have data to prove it
> conclusively; preliminary basic tests from Kotresh didn't indicate a
> significant perf drop).
> * atime consistency. The ctime generator offers atime consistency
> equivalent to noatime mounts. But, in my limited experience, I've not
> seen too many usecases that require atime consistency. If you have a
> usecase, please point it out and we'll think about how we can meet that
> requirement.
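
For anyone who wants to check what "equivalent to noatime mounts" means
for their workload, a small stand-alone test like the one below (plain
POSIX, nothing GlusterFS-specific; the result also depends on mount
options such as relatime) reports whether atime advances after a read:

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Read a file and report whether its atime changed. On a noatime
 * mount (and, per the limitation described above, with the ctime
 * generator) the timestamp is expected to stay the same. */
int
main(int argc, char **argv)
{
    struct stat before, after;
    char buf[4096];
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-mount>\n", argv[0]);
        return 1;
    }

    if (stat(argv[1], &before) != 0 || (fd = open(argv[1], O_RDONLY)) < 0)
        return 1;
    (void)read(fd, buf, sizeof(buf));
    close(fd);
    if (stat(argv[1], &after) != 0)
        return 1;

    printf("atime %s\n",
           (before.st_atim.tv_sec != after.st_atim.tv_sec ||
            before.st_atim.tv_nsec != after.st_atim.tv_nsec)
               ? "updated" : "unchanged");
    return 0;
}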
> 
> [1] https://review.gluster.org/#/c/glusterfs/+/21060/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1600923
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1617972
> [4] https://bugzilla.redhat.com/show_bug.cgi?id=1393743
> 
> regards,
> Raghavendra
> 
> 
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel


