Re: Spurious empty files in CephFS root pool when multiple pools associated

> On 3 Jul 2018, at 15.36, Jesus Cea <jcea@xxxxxxx> wrote:
> 
> On 03/07/18 15:09, Steffen Winther Sørensen wrote:
>> 
>> 
>>> On 3 Jul 2018, at 12.53, Jesus Cea <jcea@xxxxxxx> wrote:
>>> 
>>> Hi there.
>>> 
>>> I have an issue with CephFS and multiple data pools inside it. I have
>>> six data pools inside the CephFS, and I control where files are stored
>>> using xattrs on the directories.
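>>> 
>>> The mechanism is roughly this (a sketch; the pool name and the mount
>>> path are just placeholders):
>>> 
>>>   # pin files created under this directory to a specific data pool
>>>   setfattr -n ceph.dir.layout.pool -v cephfs_data_bulk /mnt/cephfs/bulk
>>>   # check the resulting layout
>>>   getfattr -n ceph.dir.layout /mnt/cephfs/bulk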
>> Couldn’t you just use 6xCephFS each w/metadata + data pools instead?
> 
> This is actually a good suggestion.
> 
> That was my first try. Nevertheless, multiple CephFS filesystems are an explicitly unsupported configuration;
Where is that actually stated, anyone?

/Steffen
 
> I had to constantly fight the tools (because
> having a single CephFS is the "supported" config), and deploying
> multiple MDSs (plus replicas) was complicated because I don't have
> access to the servers.
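> 
> (For the record, a second filesystem needs the multi-fs flag to be
> enabled first; roughly like this, where the pool names and PG counts
> are just placeholders:)
> 
>   # allow more than one CephFS in the cluster (explicitly "at your own risk")
>   ceph fs flag set enable_multiple true --yes-i-really-mean-it
>   ceph osd pool create fs2_metadata 32     # placeholder names / PG counts
>   ceph osd pool create fs2_data 128
>   ceph fs new fs2 fs2_metadata fs2_data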
> 
> I tried this first, but given those issues, and after discovering "ceph fs
> add_data_pool XXX", I thought I had a winner...
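> 
> That is, something along these lines (again a sketch, with placeholder
> names):
> 
>   ceph osd pool create cephfs_data_extra 128   # placeholder name / PG count
>   ceph fs add_data_pool cephfs cephfs_data_extra
>   ceph fs ls   # the extra pool should now be listed under "data pools"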
> 
> Knowing what I know today, I could try the multiple CephFS option again,
> if I could convince "the powers" to allow an *explicitly* unsupported
> configuration that warns constantly about the risk of losing data, hell
> breaking loose, etc. :).
> 
> If you ask why I use multiple data pools in the CephFS, it is because it
> lets me use different erasure code/replica settings, etc., for different
> data. Each data pool in the CephFS can have a different configuration,
> different cache tiering if needed, etc.
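> 
> For example, for one of those pools (a sketch; the profile name, k/m
> values, pool names and directory are placeholders):
> 
>   ceph osd erasure-code-profile set ec42 k=4 m=2
>   ceph osd pool create cephfs_data_ec42 128 128 erasure ec42
>   ceph osd pool set cephfs_data_ec42 allow_ec_overwrites true   # required for CephFS on EC pools
>   ceph fs add_data_pool cephfs cephfs_data_ec42
>   setfattr -n ceph.dir.layout.pool -v cephfs_data_ec42 /mnt/cephfs/archive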
> 
> I am quite happy with the result, except for the massive ghost object
> count in the root data pool, which I would prefer to be empty. That is
> annoying: my cluster is in WARNING all the time, and overriding
> "mon_pg_warn_max_object_skew" is not working for me for some unknown reason.
> 
> PS: I can't change "mon_pg_warn_max_object_skew" directly. I have to
> request it; the sysops told me that they already did and restarted the
> monitors. Several times. I am using Ceph 12.2.5. They are annoyed and so
> am I.
> 
> -- 
> Jesús Cea Avión                         _/_/      _/_/_/        _/_/_/
> jcea@xxxxxxx - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
> Twitter: @jcea                        _/_/    _/_/          _/_/_/_/_/
> jabber / xmpp:jcea@xxxxxxxxxx  _/_/  _/_/    _/_/          _/_/  _/_/
> "Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
> "My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
> "El amor es poner tu felicidad en la felicidad de otro" - Leibniz
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



