Cephfs with large numbers of files per directory


 



Logan,


Thank you for the feedback.


Rhian Resnick

Assistant Director Middleware and HPC

Office of Information Technology


Florida Atlantic University

777 Glades Road, CM22, Rm 173B

Boca Raton, FL 33431

Phone 561.297.2647

Fax 561.297.0222



________________________________
From: Logan Kuhn <logank@xxxxxxxxxxx>
Sent: Tuesday, February 21, 2017 8:42 AM
To: Rhian Resnick
Cc: ceph-users@ceph.com
Subject: Re: Cephfs with large numbers of files per directory

We had a very similar configuration at one point.

I was fairly new when we started to move away from it, but what happened to us is that any time a directory needed to be stat'd, backed up, listed, rsync'd, etc., it would take minutes to return, and while it was waiting the CPU load would spike due to iowait. The difference between what you've described and what we did is that we used a gateway machine; the actual cluster never had any issues with it. This was also on Infernalis, so things have probably changed in Jewel and Kraken.
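
As a rough illustration of the pattern (a sketch only; the target path is an assumption and the absolute numbers will depend entirely on the cluster), a plain readdir is a handful of bulk requests per directory fragment, while anything that also stats every entry, like ls -l, rsync, or a backup walk, ends up doing per-file work against the MDS:

#!/usr/bin/env python
# Sketch: compare a bare readdir of a large CephFS directory with a pass that
# also stats every entry, which is roughly what ls -l, rsync, or a backup does.
import os
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "."

t0 = time.time()
names = os.listdir(path)            # readdir only: bulk reads per directory fragment
t1 = time.time()
for name in names:
    try:
        os.lstat(os.path.join(path, name))  # per-entry stat: MDS work for every file
    except OSError:
        pass                        # entry may have vanished between readdir and stat
t2 = time.time()

print("entries:        %d" % len(names))
print("readdir only:   %.2f s" % (t1 - t0))
print("readdir + stat: %.2f s" % (t2 - t1))

On a directory with a few hundred thousand entries it is the second pass that drags on, which matches the minutes-long waits and the iowait spikes described above.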

Regards,
Logan

----- On Feb 21, 2017, at 7:37 AM, Rhian Resnick <rresnick@fau.edu> wrote:

Good morning,


We are currently investigating using Ceph for a KVM farm, block storage, and possibly file systems (CephFS with ceph-fuse, and Ceph Hadoop). Our cluster will be composed of 4 nodes, ~240 OSDs, and 4 monitors providing mon and mds services as required.


What experience has the community had with large numbers of files in a single directory (500,000 - 5 million)? We know that directory fragmentation will be required, but we are concerned about the stability of the implementation.
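
For what it's worth, fragmentation is governed by a handful of MDS settings; the values below are my understanding of the Jewel/Kraken-era defaults, offered as a sketch rather than a recipe (double-check the documentation for the release you deploy, and note that on Jewel the feature was still marked experimental and had to be switched on explicitly via the filesystem's allow_dirfrags flag):

# Sketch only: Jewel/Kraken-era MDS fragmentation settings with what I believe
# were their defaults; verify against the docs for your release.
[mds]
    # Split a directory fragment once it grows past this many entries.
    mds bal split size = 10000
    # Merge fragments back together when they shrink below this many entries.
    mds bal merge size = 50
    # Each split divides a fragment into 2^N pieces (3 -> 8 fragments).
    mds bal split bits = 3
    # Hard cap on entries per fragment; creates beyond this are refused.
    mds bal fragment size max = 100000

The last one is the hard limit to watch at the 5 million mark: if a directory cannot fragment, or fragmentation cannot keep up, creates in it start failing once a fragment reaches that size.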


Your opinions and suggestions are welcome.


Thank you


Rhian Resnick

Assistant Director Middleware and HPC

Office of Information Technology


Florida Atlantic University

777 Glades Road, CM22, Rm 173B

Boca Raton, FL 33431

Phone 561.297.2647

Fax 561.297.0222


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

