Re: Cleanup old osdmaps after #13990 fix applied

> On 14 Sep 2016, at 23:07, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> 
> On Wed, Sep 14, 2016 at 7:19 AM, Dan Van Der Ster
> <daniel.vanderster@xxxxxxx> wrote:
>> Indeed, seems to be trimmed by osd_target_transaction_size (default 30) per new osdmap.
>> Thanks a lot for your help!
> 
> IIRC we had an entire separate issue before adding that field, where
> cleaning up from bad situations like that would result in the OSD
> killing itself as removing 2k maps exceeded the heartbeat timeouts. ;)
> Thus the limit.

Thanks Greg. FTR, I did some experimenting and found that setting osd_target_transaction_size = 1000 is a very bad idea: I tried it on one OSD, and the resulting FileStore merging of the meta subdirs led to a slow/down OSD. Setting it to ~60 was OK.
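
FWIW, if you want to change that on a live OSD without a restart, something like this should work (just a sketch; osd.0 is an example id, and you'd also want to persist the value under [osd] in ceph.conf):

   ceph tell osd.0 injectargs '--osd_target_transaction_size 60'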

I cleaned up 90TB of old osdmaps today, generating new maps in a loop (each pool set bumps the osdmap epoch, and each new epoch lets the OSDs trim another batch of old maps) by doing:

   watch -n10 ceph osd pool set data min_size 2

Anything more aggressive than that was disruptive on our cluster.
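
If you want to watch the trimming progress, the mons report the range of committed osdmaps, so something like this should show the gap shrinking as old maps are removed (a sketch, based on the field names in ceph report output):

   ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'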

Cheers, Dan




