Re: rados complexity

OK, it seems my problem could be CephFS-related. I have 16 CephFS clients that do heavy, sub-optimal writes simultaneously. The cluster has no problems handling the load up to circa 20000 kobjects. Above this threshold the OSDs start to go down randomly and eventually get killed by Ceph's watchdog mechanism. The funny thing is that the CPUs and HDDs are not really overloaded during these events, so I am really puzzled at this point.
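
In case it helps anyone watch for the same threshold: a minimal sketch, assuming the python-rados bindings and a readable /etc/ceph/ceph.conf with a client keyring, that prints per-pool object counts in kobjects ("ceph df" reports the same numbers from the CLI):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        for pool in cluster.list_pools():
            ioctx = cluster.open_ioctx(pool)
            try:
                # get_stats() returns a dict with 'num_objects', 'num_bytes', ...
                stats = ioctx.get_stats()
                print('%-30s %12.1f kobjects' % (pool, stats['num_objects'] / 1000.0))
            finally:
                ioctx.close()
    finally:
        cluster.shutdown()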

-Mykola

-----Original Message-----
From: Sven Höper <list@xxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: rados complexity
Date: Sun, 05 Jun 2016 19:18:27 +0200

We've got a simple cluster with 45 OSDs, holding more than 50000 kobjects, and have not had any issues so far. Our cluster mainly serves some rados pools for an application which usually writes data once and reads it multiple times.
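
For context, that access pattern is essentially the following (a hypothetical sketch with python-rados; the pool name 'data' and object name are made up):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')       # hypothetical pool name
    try:
        # write the object once...
        ioctx.write_full('report-2016-06-05', b'payload written once')
        # ...then read it back many times
        for _ in range(3):
            print(ioctx.read('report-2016-06-05'))
    finally:
        ioctx.close()
        cluster.shutdown()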

- Sven

On Sunday, 2016-06-05 at 18:47 +0200, Mykola Dvornik wrote:
Are there any ceph users with pools containing >20000 kobjects? If so, have you noticed any instabilities of the clusters once this threshold is reached?

-Mykola
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
