Re: minimum osd size?

I previously created a set of 30GB OSDs out of the spare space on my SSDs for the CephFS metadata pool, and my entire cluster locked up about 3 weeks later. Some metadata operation filled several of the 30GB OSDs to 100%, and all IO in the cluster was blocked. I had to resort to deleting one copy of a few PGs on each full OSD (keeping at least 2 copies of every PG) so that I could backfill the pool back onto my HDDs and restore cluster functionality. I would say that trying to use that leftover space is definitely not worth it.
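For anyone who ends up in a similar spot, moving the affected pool onto HDD-backed OSDs comes down to a CRUSH rule change. This is only a rough sketch; the rule name (replicated_hdd) and pool name (cephfs_metadata) are placeholders and will differ per cluster:

    # Create a replicated rule restricted to the hdd device class
    # (root "default", failure domain "host"); the rule name is hypothetical.
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # Point the (hypothetical) metadata pool at the new rule; Ceph will
    # backfill its PGs onto the HDD OSDs.
    ceph osd pool set cephfs_metadata crush_rule replicated_hdd

    # Watch the backfill progress.
    ceph -s
    ceph osd pool stats cephfs_metadata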

In one of my production clusters I occasionally get a warning state that an omap object is too large in my buckets.index pool. I could very easily imagine that stalling the entire cluster if my index pool were on such small OSDs.
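For reference, the usual remedy for that warning is resharding the bucket index so each omap object holds fewer keys; a minimal sketch, assuming a bucket called mybucket and a shard count picked for illustration only:

    # List buckets whose index shards are over the configured limits.
    radosgw-admin bucket limit check

    # Reshard the (hypothetical) bucket into more index shards.
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=101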

On Tue, Oct 22, 2019, 6:55 PM Frank R <frankaritchie@xxxxxxxxx> wrote:
Hi all,

I have 40 nvme drives with about 20G free space each. 

Would creating a 10GB partition/LV on each of the NVMe drives for an RGW index pool be a bad idea? 

RGW has about 5 million objects.

I don't think space will be an issue, but I am worried about the 10G size. Is it just too small for a BlueStore OSD?

thx
Frank
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
