Re: CephFS, file layouts pools and rados df

Hi Sage, 

Thank you for your answer.

So there is no anticipated problem with my setup?

Does the performance of the 'data' pool directly affect my filesystem
performance, even though there are no files in it?
Do I need the same performance policy on the 'data' pool as on the
other pools?
Can I take advantage of the fact that my base data pool 'data' is
separate from my /real/ data pools to improve filesystem performance
(for example by putting the 'data' pool on SSDs)?
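For readers of the archive: pinning a pool to SSD-backed OSDs is done with a CRUSH rule. A minimal sketch, assuming a Ceph release with CRUSH device classes (Luminous or later; older releases need a hand-edited CRUSH map) and that the SSD OSDs already carry the 'ssd' class; the rule name 'ssd-only' is illustrative:

```shell
# Create a replicated CRUSH rule that only selects SSD-classed OSDs,
# with 'host' as the failure domain under the 'default' root.
ceph osd crush rule create-replicated ssd-only default host ssd

# Point the base data pool at the new rule; Ceph migrates the
# existing objects automatically.
ceph osd pool set data crush_rule ssd-only
```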

Regards
-- 
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'information



On Thu, 2014-11-13 at 08:33 -0800, Sage Weil wrote:
> On Thu, 13 Nov 2014, Thomas Lemarchand wrote:
> > Hi Ceph users,
> > 
> > I need to have different filesystem trees in different pools, mainly for
> > security reasons.
> > 
> > So I have ceph users (cephx) with specific access on specific pools.
> > 
> > I have one metadata pool ('metadata') and three data pools ('data',
> > 'wimi-files', 'wimi-recette-files').
> > 
> > I used file layouts ( http://ceph.com/docs/master/cephfs/file-layouts/ )
> > to associate directories with pools.
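The directory-to-pool association described above is done through the ceph.dir.layout virtual xattrs. A sketch, assuming the filesystem is mounted at /mnt/cephfs (the mount point is illustrative) and using the pool names from this thread:

```shell
# Make the pool usable by CephFS (newer releases use
# `ceph fs add_data_pool <fs-name> wimi-files` instead).
ceph mds add_data_pool wimi-files

# New files created under /prod will now be written to 'wimi-files';
# existing files keep their old layout.
setfattr -n ceph.dir.layout.pool -v wimi-files /mnt/cephfs/prod

# Verify the directory layout.
getfattr -n ceph.dir.layout /mnt/cephfs/prod
```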
> > 
> > My filesystem looks like this:
> > Path -> associated pool
> > 
> > / -> data
> > /prod -> wimi-files
> > /prod/... -> wimi-files
> > /recette -> wimi-recette-files
> > /recette/... -> wimi-recette-files
> > 
> > Is this the best way to achieve what I need, given that it's not
> > possible to have multiple CephFS filesystems on a Ceph cluster?
> > 
> > I ask because my 'rados df' output seems strange to me:
> > 
> > pool name           category          KB    objects  clones  degraded  unfound       rd       rd KB        wr       wr KB
> > data                -                  0    9045499       0         0        0   434686      434686   9294004           0
> > metadata            -              58591      52681       0         0        0  2168219  2403048804  16461385   180433628
> > wimi-files          -         9006435331   10169214       0         0        0   296284     2747513  19225407  9064999231
> > wimi-recette-files  -            1036224     309167       0         0        0   345223     1401472    658388     1170762
> >   total used     27404544372     19576561
> >   total avail    78033398196
> >   total space   105437942568
> > 
> > As you can see, there are 9045499 objects in the 'data' pool, even
> > though it contains only two directories ('prod', 'recette') and not a
> > single file.
> > 
> > Anyone know how this works ?
> 
> The MDS puts backtrace objects in the base data pool in order to 
> facilitate fsck and lookup by ino even when the data is stored elsewhere.
> 
> Other strategies that don't do this are possible, but they're more 
> complicated, and we opted to keep it as simple as possible for now.
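A sketch of how to see those backtrace objects from the command line; the object name shown (inode number in hex plus chunk suffix) is illustrative, and the 'parent' xattr key is a CephFS internal that may vary by release:

```shell
# List objects in the base pool; the backtrace objects are zero-length
# placeholders named <inode-hex>.<chunk>.
rados -p data ls | head

# The first object of each file (<inode-hex>.00000000) carries a
# 'parent' xattr holding the encoded backtrace, i.e. the file's
# ancestry used for fsck and lookup-by-ino.
rados -p data listxattr 10000000001.00000000
rados -p data getxattr 10000000001.00000000 parent | hexdump -C | head
```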
> 
> sage
> 
> > 
> > Thanks in advance!
> > 
> > Regards
> > -- 
> > Thomas Lemarchand
> > Cloud Solutions SAS - Responsable des systèmes d'information
> > 
> > 
> > 
> > 
> > 
> > -- 
> > This message has been scanned for viruses and
> > dangerous content by MailScanner, and is
> > believed to be clean.
> > 
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > 
> 


