Re: cephfs and erasure coding

Hello,

On Wed, 29 Mar 2017 21:09:23 +0700 Konstantin Shalygin wrote:

> Thanks for notice. On dovecot mail list reported 
> https://dovecot.org/pipermail/dovecot/2016-August/105210.html about 
> success usage CephFS for 30-40k of users, with replica, not EC.
>
If you read that whole thread, you will have noticed my reply and
questions, Sami's question, and the total lack of responses from Daniel.
 
Without further data I'd be reluctant to call that a success, at least in
a general sense.

A native dovecot-ceph object interface will certainly help performance,
but it will of course still be somewhat limited by the network nature of
things, and it will be a total black box compared to maildir on a
filesystem.

Lastly, do you feel comfortable putting all your mail eggs into one
(software) storage basket?
At a million+ users I most certainly don't.

Christian

> On 03/29/2017 08:19 PM, Wido den Hollander wrote:
> > I wouldn't use CephFS for so many small files. Dovecot will do a lot of locking, opening and closing those small files, which is not very efficient.
> >
> > http://tracker.ceph.com/issues/12430
> >
> > That is in development right now. No code is out there yet, but it should be there later this year.  
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


