Re: CRUSH depends on host + OSD?

Hi Chad,
That sounds bizarre to me, and I can't reproduce it. I added an osd (which was previously not in the crush map) to a fake host=test:

   ceph osd crush create-or-move osd.52 1.0 rack=RJ45 host=test
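
You can sanity-check the result with:

   ceph osd tree

which should now show osd.52 under the new test host with a crush weight of 1.0.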

That resulted in some data movement, of course. Then I removed that osd from the crush map:

   ceph osd crush rm osd.52

which left the test host in the crush map, but now its weight is zero. I waited until all the PGs were active+clean, then removed that host:

   ceph osd crush remove test

And there was no data movement.
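
For what it's worth, I just watch the cluster status while waiting:

   ceph -s

(or ceph -w for a continuous view) until everything reports active+clean.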

As far as I've experienced, an entry in the crush map with a _crush_ weight of zero is equivalent to that entry not being in the map. (In fact, I use this to drain OSDs: I just ceph osd crush reweight osd.X 0, then sometime later I crush rm the osd, without incurring any secondary data movement.)
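
Spelled out, that drain procedure looks roughly like this (osd.X is a placeholder for whichever OSD you're draining):

   # trigger the one and only data movement: empty the OSD
   ceph osd crush reweight osd.X 0

   # ... wait until all PGs are active+clean ...

   # then drop it from the crush map; no further movement
   ceph osd crush rm osd.X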

Cheers, Dan


October 15 2014 6:07 PM, "Chad Seys" <cwseys@xxxxxxxxxxxxxxxx> wrote: 
> Hi all,
> When I remove all OSDs on a given host, then wait for all objects (PGs?) to
> be active+clean, then remove the host (ceph osd crush remove hostname),
> that causes the objects to shuffle around the cluster again.
> Why does the CRUSH map depend on hosts that no longer have OSDs on them?
> 
> A wonderment question,
> C.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



