Re: Unexpected pg placement in degraded mode with custom crush rule

Hi Sage,

I don't believe so; I'm loading the objects directly from another host (which is running 0.64 built from source) with:

$ rados -m 192.168.122.21 -p obj put smallnode$n.dat smallnode.dat # $n=0->99

and the OSDs are all running 0.56.6, so I don't think there is any kernel rbd or librbd involved.
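
Roughly, the driving loop looks like this (a sketch; the monitor address, pool name, and source file are just what I'm using in my test setup):

$ for n in $(seq 0 99); do rados -m 192.168.122.21 -p obj put smallnode$n.dat smallnode.dat; done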


I did try:

$ ceph osd crush tunables optimal

in one run, but it made no difference.
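
For what it's worth, one way to double-check that the new tunables actually landed in the map is to decompile it (a sketch; the output paths are arbitrary):

$ ceph osd getcrushmap -o /tmp/crushmap
$ crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
$ head /tmp/crushmap.txt   # any non-legacy tunables should be listed near the top of the decompiled map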

I have updated to 0.61.4 and am running the test again; I will follow up with the results!

Cheers

Mark

On 05/07/13 16:01, Sage Weil wrote:
Hi Mark,

If you're not using a kernel cephfs or rbd client older than ~3.9, or
ceph-fuse/librbd/librados older than bobtail, then you should

  ceph osd crush tunables optimal

and I suspect that this will suddenly work perfectly.  The defaults are
still using semi-broken legacy values because client support is pretty
new.  Trees like yours, with sparsely populated leaves, tend to be most
affected.

(I bet you're seeing the rack separation rule violated because the
previous copy of the PG was already there and ceph won't throw out old
copies before creating new ones.)
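
One way to check this is to compare a PG's up and acting sets and map the OSDs back to racks (the pg id below is only an example):

$ ceph pg map 3.1f    # prints the PG's "up" and "acting" OSD sets
$ ceph osd tree       # shows which host/rack each of those OSDs sits under

If the up set respects the rack rule but the acting set doesn't, it's just the old copy hanging around until recovery finishes.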


