Re: Ceph performance falls as data accumulates

On Mon, Apr 2, 2018 at 11:18 AM Robert Stanford <rstanford8896@xxxxxxxxx> wrote:

 This is a known issue as far as I can tell; I've read about it several times.  Ceph performs great (using radosgw), but as the OSDs fill up, performance falls sharply.  I am down to about half of empty-cluster performance at roughly 50% disk usage.

 My questions are: does adding more OSDs / disks to the cluster restore performance?  (Is it the absolute number of objects that degrades performance, or the % occupancy?)  And will performance level off at some point, or will it keep dropping as our disks fill?

There are two dimensions of interest for you here:
1) the speed of your bucket index,
2) the speed of the OSD store for reading/writing actual data.

(2) should be better with bluestore than filestore AFAIK, and will definitely improve as you add more OSDs to the cluster.
(1) will depend more on the amount of bucket index sharding and the number of buckets you're working with; see the sketch below for inspecting and adjusting sharding.
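
A minimal sketch of how one might inspect and adjust bucket index sharding with radosgw-admin (the bucket name "mybucket" and shard count 64 are placeholder values, not taken from this thread; the reshard commands need Luminous or later):

    # Report objects per index shard and flag buckets whose
    # indexes are over the recommended fill threshold:
    radosgw-admin bucket limit check

    # Queue a reshard to spread the index over more shards
    # (choose the shard count from the expected object count):
    radosgw-admin reshard add --bucket=mybucket --num-shards=64

    # Run the pending reshard operations:
    radosgw-admin reshard process

On Luminous, dynamic resharding (rgw_dynamic_resharding = true) can also take care of this automatically.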
-Greg
 

 Thank you
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
