Re: Inherited CEPH nightmare

>             osd_memory_target = 2147483648
>
> Based on some reading, I'm starting to understand a little about what can be tweaked. For example, I think the osd_memory_target looks low. I also think the DB/WAL should be on dedicated disks or partitions, but I have no idea what procedure to follow to do this. I'm actually thinking that the best bet would be to copy the VMs to temporary storage (as there is only about 7 TB worth) and then set up CEPH from scratch following some kind of best-practice guide.

Yes, the memory target is very low. If you have RAM to spare, bumping
it to 4, 6, 8, or even 10 GB per OSD should give some speedups.
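
Assuming the cluster is recent enough to have the centralized config
database (Mimic or later), something like this should do it; 4294967296
is 4 GB in bytes, so adjust to whatever your RAM budget allows:

    # set a 4 GB memory target for all OSDs (value is in bytes)
    ceph config set osd osd_memory_target 4294967296

    # or override it for a single OSD, e.g. osd.3
    ceph config set osd.3 osd_memory_target 4294967296

On older releases you would instead set osd_memory_target under [osd]
in ceph.conf and restart the OSDs.
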
If you can, check one drive of each type to see whether it gains or
loses from having the write cache turned off, as per

https://medium.com/coccoc-engineering-blog/performance-impact-of-write-cache-for-hard-solid-state-disk-drives-755d01fcce61

and other guides. Ceph's usage pattern, combined with some
less-than-optimal SSD caches, sometimes forces much more data to be
flushed when Ceph wants to make sure a small piece actually hits the
disk, which leaves you with poor IOPS rates. Unfortunately this is very
dependent on the controllers and the drives, so there is no simple rule
for whether on or off is "best" across all possible combinations, but
the fio test shown on that and similar pages should quickly tell you
whether you can get 50-100% more write IOPS out of your drives by
having the cache in the right mode for each type of disk. The bumped
RAM should hopefully help with read performance, so you should be able
to get better performance from two relatively simple changes.
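
A minimal sketch of that test, assuming a SATA drive at /dev/sdX (a
placeholder; substitute your device) that is not currently in use by an
OSD, since the fio run below writes to the raw device and is
destructive:

    # show and toggle the volatile write cache (SATA; use sdparm for SAS)
    hdparm -W /dev/sdX       # show current setting
    hdparm -W0 /dev/sdX      # disable write cache
    hdparm -W1 /dev/sdX      # enable write cache

    # 4k synchronous write test, roughly the pattern Ceph's WAL/journal
    # traffic produces when it forces small writes to stable storage
    fio --name=writecache-test --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting

Run it once with the cache on and once with it off, and compare the
write IOPS each drive type reports.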

Check whether any OSDs are still filestore, and if so, convert each
filestore OSD to bluestore; that would probably give you 50% more
write iops on that OSD.

https://www.virtualtothecore.com/how-to-migrate-ceph-storage-volumes-from-filestore-to-bluestore/

They probably are bluestore already, but if the cluster is old it can't
hurt to check.
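
Checking is quick, since each OSD reports its backend in its metadata:

    # summary count across the whole cluster
    ceph osd count-metadata osd_objectstore

    # or per OSD, e.g. osd.0
    ceph osd metadata 0 | grep osd_objectstore

If everything comes back as "bluestore" you are done; any "filestore"
entries are candidates for the migration described in the link above.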

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


