Re: Not all pools are equal, but why

On Thu, Sep 13, 2018 at 9:03 AM Stefan Kooman <stefan@xxxxxx> wrote:
>
> Hi List,
>
> TL;DR: what application types are compatible with each other concerning
> Ceph Pools?
>
> I.e. is it safe to mix "RBD" pool with (some) native librados objects?
>
> RBD / RGW / CephFS all have their own pools. Since the Luminous release
> there is an "application tag" intended to (at some point in the future) prevent
> certain applications from using incompatible pools. I want to understand what
> it is that makes them incompatible. In the end it's all objects that get
> written into RADOS. Is it overlapping "namespaces" of objects?
>
> I want to avoid "pool sprawl". Pools need PGs, and although it might be
> possible to have that "auto-tuned" in the future (pgsplit / pgmerge) it
> is not necessarily a good thing to have many pools.
>
> One more question: would "namespace" support (like librados / libcephfs
> already have) remove the need for separate pools entirely if it were
> implemented everywhere (librbd, librmb, etc.)?

Some reasons we have separate pools and not just namespaces:
 - statistics and IO aren't tracked per-namespace, so it would be
administratively painful to work out who was using how much space.
 - objects aren't indexed per namespace, so any process that iterates
through all objects (like CephFS disaster recovery) becomes
problematic.
 - we don't have a "delete all objects in a namespace" operation, so
deleting a filesystem that shares a pool is slow/complicated compared
with simply deleting the filesystem's pool.
 - using separate pools (even if they're on the same crush rule) makes
it possible to migrate data to distinct devices later.  For example,
someone might start with their cephfs metadata on spinning disks, and
then later re-assign it to a crush rule that targets SSDs.
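
As a rough illustration of the namespace and application-tag points above,
namespaces already work at the librados level, e.g. via the rados CLI
(pool and namespace names here are only examples):

    # write an object into a namespace and list that namespace
    rados -p testpool --namespace=clientA put greeting ./greeting.txt
    rados -p testpool --namespace=clientA ls
    # list objects across all namespaces in the pool
    rados -p testpool ls --all
    # ...but space usage is still only accounted per pool, not per namespace
    ceph df detail
    # the Luminous "application" tag mentioned above is per-pool metadata
    ceph osd pool application enable testpool rbd
    ceph osd pool application get testpool

And a sketch of that last point, re-targeting an existing pool at SSDs
(rule and pool names are illustrative, assuming the OSDs report an "ssd"
device class):

    # create a replicated CRUSH rule restricted to the ssd device class
    ceph osd crush rule create-replicated metadata-ssd default host ssd
    # point the metadata pool at the new rule; data migrates in the background
    ceph osd pool set cephfs_metadata crush_rule metadata-ssd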

Cheers,
John

>
> Thanks,
>
> Stefan
>
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


