The cluster we're having trouble with is primarily for object storage: around 650M objects and 600 TB. The majority of objects are small JPGs; the large objects are big movie .ts and .mp4 files.
It was upgraded from Jewel on Xenial last month. The majority of the bugs for us are in ceph-osd on SSDs. At this point we've hit bugs on both FileStore and BlueStore: with SSD journals, colocated WAL/DB, NVMe WAL/DB, you name it. We're removing the SSDs from the cluster right now; they just keep failing and causing downtime. Better a slow cluster than an unusable one. A crashy OSD is a gone OSD nowadays.
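For what it's worth, the drain procedure we're following is the standard out-then-purge sequence (a sketch only; osd.12 is an example ID, and `ceph osd purge` assumes Luminous or later):

```shell
# Mark the SSD-backed OSD out so PGs rebalance off it (data stays safe)
ceph osd out 12

# Watch recovery; don't destroy anything until the cluster is back to HEALTH_OK
ceph -s

# Once recovery is done, stop the daemon on its host
systemctl stop ceph-osd@12

# Purge removes the OSD from the CRUSH map, auth keys, and the OSD map in one step
ceph osd purge 12 --yes-i-really-mean-it
```

Going OSD by OSD like this keeps redundancy intact during the removal, at the cost of repeated rebalances.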
E
On Thu, Nov 16, 2017 at 7:22 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
> My cluster (55 OSDs) runs 12.2.x since the release, and bluestore too
> All good so far
Is this a cleanly deployed cluster, or an upgrade from some version?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com