Re: Large amount of files - cephfs?


 



Josef, my comments are based on experience with the community (free) version of CephFS (Jewel, 1 MDS).

* CephFS (Jewel, with the single stable MDS) performs horribly with millions of small (KB-sized) files, even after MDS cache and directory fragmentation tuning.
* CephFS (Jewel, with the single stable MDS) performs great for large (GB/TB) files, i.e. large I/O, since there are still few inodes.
* Your best bet is an object storage interface (S3/Swift API via RGW, or the librados API directly).
* Multiple active MDS daemons and directory fragmentation in Jewel are considered unstable (experimental features).
* For testing, you can try Luminous (multiple active/active MDS, directory fragmentation enabled by default), but it only became stable a couple of weeks ago. Be cautious about putting PROD data on versions that are not yet battle tested, unless you have a backup strategy.
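If you go the object-storage route, one practical concern with ~100 million thumbnails is spreading them across key prefixes so that no single bucket index shard or listing prefix gets hot. A minimal sketch, assuming the application controls its own object key layout (the `thumbnail_key` helper below is hypothetical, not part of Ceph or RGW):

```python
import hashlib

def thumbnail_key(filename: str, shards: int = 4096) -> str:
    """Derive a sharded object key so millions of thumbnails spread
    evenly across hex prefixes (hypothetical helper, not a Ceph API)."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    prefix = int(digest[:8], 16) % shards
    # e.g. "0a3f/cat_001.jpg" -- the 4-hex-digit prefix distributes keys
    return f"{prefix:04x}/{filename}"

print(thumbnail_key("cat_001.jpg"))
```

The application would then PUT each thumbnail under its sharded key through any S3 client pointed at RGW; the original filename is still recoverable from the key, and lookups stay O(1) because the prefix is derived deterministically from the name.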

--
Deepak

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Josef Zelenka
Sent: Wednesday, September 27, 2017 4:57 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject:  Large amount of files - cephfs?

Hi,

we are currently working on a Ceph solution for one of our customers. They run a file hosting service and need to store approximately 100 million pictures (thumbnails). Their current code uses FTP as the storage backend. We thought we could use CephFS for this, but I am not sure how it would behave with that many files, how performance would be affected, etc. Is CephFS usable in this scenario, or would radosgw+Swift be better (they'd likely have to rewrite some of the code, so we'd prefer to avoid that)? We already have some experience with CephFS for storing bigger files, streaming etc., so I'm not completely new to this, but I thought it'd be better to ask more experienced users. Any advice would be greatly appreciated. Thanks,

Josef

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


