Re: How to reduce HDD OSD flapping due to rocksdb compacting event?


 



It's ceph-bluestore-tool.
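For anyone searching the archives: a migration with ceph-bluestore-tool can be sketched roughly as below. The OSD id, device names, and paths are illustrative, flag behavior varies between releases, and the OSD must be stopped first; check the man page for your version before running anything.

```shell
# Stop the OSD before touching its store (illustrative OSD id).
systemctl stop ceph-osd@0

# Attach a new, empty DB device to the OSD...
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/nvme0n1p1

# ...or migrate existing BlueFS data (WAL+DB) off the slow
# device onto the target device.
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block \
    --dev-target /dev/nvme0n1p1

systemctl start ceph-osd@0
```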

On 4/10/2019 10:27 AM, Wido den Hollander wrote:

On 4/10/19 9:25 AM, jesper@xxxxxxxx wrote:
On 4/10/19 9:07 AM, Charles Alva wrote:
Hi Ceph Users,

Is there a way to minimize RocksDB compaction events so that they
don't saturate the spinning disk's IO and cause the OSD to be marked
down for failing to send heartbeats to its peers?

Right now we see high disk IO utilization every 20-25 minutes, when
RocksDB reaches level 4 with 67 GB of data to compact.

How big is the disk? RocksDB will need to compact at some point and it
seems that the HDD can't keep up.
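To confirm that compaction is what saturates the disk, the OSD's admin socket exposes RocksDB and BlueFS counters (the OSD id below is illustrative):

```shell
# Dump RocksDB perf counters (compaction bytes, time, etc.)
ceph daemon osd.0 perf dump rocksdb

# BlueFS counters show how much space the DB and WAL occupy
ceph daemon osd.0 perf dump bluefs
```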

I've seen this with many customers and in those cases we offloaded the
WAL+DB to an SSD.
I guess the SSD needs to be pretty durable to handle that?

Always use DC-grade SSDs, but you don't need to buy the most expensive
ones you can find. ~1.5DWPD is sufficient.
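For new OSDs, placing the DB on an SSD at creation time is straightforward with ceph-volume (device names below are illustrative):

```shell
# Create a BlueStore OSD with data on the HDD and the RocksDB
# DB (the WAL follows the DB by default) on an SSD/NVMe partition.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
```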

Is there a "migration path" to offload this, or do we need to destroy
and re-create the OSD?

In the Nautilus release (and maybe Mimic) there is a tool to migrate
the DB to a different device without re-creating the OSD. I think
it's called bluestore-dev-tool.

Wido

Thanks.

Jesper


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




