Re: avg apply latency went up after update from octopus to pacific


> >> What I also see is that I have three OSDs that have quite a lot of
> >> OMAP data compared to the other OSDs (~20 times higher). I don't
> >> know if this is an issue:
> >
> > On 2TB SSDs I have 2GB - 4GB of omap data, while on 8TB HDDs the
> > omap data is only 53MB - 100MB.
> > Should I manually clean this? (how? :))
> 
> The amount of omap data depends on multiple things, especially the use-
> case.  If a given OSD is only used for RBD, it will have a different
> omap experience than if it were used for an RGW index pool.
> 

This cluster (mine) is mostly an RBD cluster.

Is it correct that compacting leveldb is what is meant by 'cleaning omap data'? And can this only be done by setting leveldb_compact_on_mount = true in ceph.conf and restarting the OSD?
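For reference, a sketch of the commands involved, assuming a BlueStore cluster on Pacific (where omap data lives in RocksDB, not LevelDB, so the leveldb_* options only apply to older FileStore OSDs); the OSD id 0 below is a placeholder:

```shell
# Inspect per-OSD omap usage (see the OMAP column):
ceph osd df

# Ask one OSD to compact its omap store online, without a restart:
ceph tell osd.0 compact

# Alternatively, compact on every OSD start via ceph.conf
# (option availability may depend on your exact release):
#   [osd]
#   osd_compact_on_start = true
# then restart the OSD.
```

This is only an illustrative ops fragment, not a recommendation; whether a manual compaction is needed at all depends on the workload, as noted above.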
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


