Mimic stability finally achieved

I think I finally have a stable containerized Mimic cluster... jeeeee zeeeeez, was that ever hard!

I'm currently repopulating CephFS and cruising along at:
client:   147 MiB/s wr, 0 op/s rd, 38 op/s wr
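
That line is just the client I/O summary from ceph -s. If you want to keep an eye on it while a big copy runs, something like this does the trick, assuming you have an admin keyring on the node:

# refresh the status output, including the client I/O line, every 2 seconds
watch -n 2 ceph -s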

First, last month I had four Seagate Barracuda drive failures at the same time, each with around 18,000 power_on_hours.  I used to adore Seagate; now I am shocked at how unreliable they are.  And they have been advertising NVMe drives to me on Facebook very heavily.  There is no way...  Then three Newegg and Amazon suppliers failed to get me the correct HGST disks.  I've been dealing with this garbage for about five weeks now!
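
If anyone else wants to check how worn their drives are before they start dropping, smartctl from smartmontools shows the relevant counters. A quick sketch, where /dev/sdX is a placeholder for your actual device:

# dump the SMART attribute table and pull out the age/health counters
sudo smartctl -A /dev/sdX | grep -Ei 'power_on_hours|reallocated'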

Once the hardware issues were settled, my OSDs were flapping like gophers on the Caddyshack golf course.  Ultimately I had to copy the data to standalone drives, destroy everything Ceph-related, and start from scratch.  That didn't solve the problem!  The OSDs continued to flap!

For new clusters, and maybe old clusters too, this setting is key:
ceph osd crush tunables optimal
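
There's a read-only counterpart if you want to see which profile the cluster is on before and after the change:

# print the current CRUSH tunables profile and the individual knobs
ceph osd crush show-tunables

Fair warning: switching tunables profiles can kick off a fair amount of data movement while the cluster rebalances, so do it at a quiet time if you can.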


Without that setting, I surmise that client writes would consume all available cluster bandwidth.  The MDS would report slow I/Os, OSDs would not be able to replicate objects or answer heartbeats, and slowly they would get knocked out of the cluster.
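
If anyone else hits the same flapping, one way to stop the down/out churn long enough to actually read the logs is to set the standard cluster flags. This is just a debugging sketch, not a fix, and the flags mask real failures, so unset them when you're done:

# keep the monitors from marking OSDs down/out while you investigate
ceph osd set nodown
ceph osd set noout
# stream the cluster log and watch for heartbeat / slow request complaints
ceph -w
# restore normal failure handling afterwards
ceph osd unset nodown
ceph osd unset noout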

That's all from me.  Like my poor NC sinuses recovering from the crazy pollen, my Ceph life is also slowly recovering.

/Chris C
