Re: problem with removing osd

Hi,

Thank you very much for the analysis and the file! I had a similar one :) but
wasn't sure whether it would destroy something in the cluster.

> The encoded tree bucket -11 had bad values.  I don't really trust the tree
> bucket code in crush... it's not well tested (and is a poor balance 
> computation and efficiency anyway).  We should probably try to remove tree
> entirely.

> I've attached a fixed map that you can inject with

>  ceph osd setcrushmap -i <filename>

Now it works, and ceph osd crush dump -f json-pretty also runs OK.

> Bucket -11 is now empty; not sure what was supposed to be in it.

This server will be reinstalled; there were three OSDs on it.

> I suggest switching all of your tree buckets over to straw2 as soon as
> possible.  Note that this will result in some rebalancing.  You could do
> it one bucket a time if that's concerning.
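For anyone following along, the one-bucket-at-a-time switch could look roughly like the sketch below. The bucket name "host1" and the stanza contents are made-up examples for illustration; the ceph/crushtool steps are shown as comments because they need a live cluster, and only the local text edit is executed here.

```shell
# On the cluster, first fetch and decompile the current CRUSH map:
#   ceph osd getcrushmap -o map.bin
#   crushtool -d map.bin -o map.txt

# For illustration, a minimal decompiled bucket stanza (hypothetical):
cat > map.txt <<'EOF'
host host1 {
	id -5
	alg tree
	hash 0
	item osd.0 weight 1.000
}
EOF

# Change the algorithm for just this one bucket; limiting the edit to a
# single bucket per injection keeps each rebalancing step small:
sed -i '/^host host1 {/,/^}/ s/alg tree/alg straw2/' map.txt

# The stanza now reads "alg straw2":
grep 'alg' map.txt

# Then recompile and inject the modified map:
#   crushtool -c map.txt -o map.new
#   ceph osd setcrushmap -i map.new
```

Repeating the edit-recompile-inject cycle for each tree bucket spreads the data movement out instead of triggering it all at once.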

OK, so changing the alg to straw2 will rebalance all PGs on all nodes?
-- 
Regards,
Luk

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


