Re: About the data movement in Ceph

On Tue, 10 Sep 2013, atrmat wrote:
> Hi all,
> Recently I read the source code and paper, and I have some questions about
> the data movement:
> 1. When OSDs are added or removed, how does Ceph migrate the data and
> rebalance according to the CRUSH map? Does RADOS modify the crush map or
> cluster map, and does the primary OSD move the data according to the
> cluster map? Where can I find the data migration in the source code?

The OSDMap changes when an osd is added or removed (or some other event 
or administrator action happens).  In response, the OSDs recalculate where 
the PGs should be stored and migrate the data accordingly.

> 2. When an OSD goes down or fails, how does Ceph recover the data onto
> other OSDs? Does the primary OSD copy the PG to the newly located OSD?

The (new) primary figures out where the data is/was (peering) and then 
coordinates any data migration (recovery) to where the data should now be 
(according to the latest OSDMap and its embedded CRUSH map).
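Heavily simplified, the peering/recovery decision looks like this: the primary gathers each replica's view of how far it got, picks the authoritative (most complete) version, and schedules recovery for anyone behind. The struct and function names here are illustrative only; the real logic lives in the PG peering code and works over full PG logs and past intervals, not a single version number.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative replica summary: which OSD holds the copy and the last
// version it has fully applied. (Real Ceph exchanges pg_info_t and PG
// logs during peering; this is a sketch, not the actual API.)
struct ReplicaInfo {
    int osd;
    uint64_t last_complete;
};

// Return the OSDs that are behind the authoritative version and thus
// need recovery pushes from the primary.
std::vector<int> plan_recovery(const std::vector<ReplicaInfo>& infos) {
    uint64_t auth = 0;
    for (const auto& r : infos)
        auth = std::max(auth, r.last_complete);
    std::vector<int> behind;
    for (const auto& r : infos)
        if (r.last_complete < auth)
            behind.push_back(r.osd);
    return behind;
}
```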

> 3. The OSD has 4 status bits: up, down, in, out. But I can't find the
> defined status CEPH_OSD_DOWN -- does the OSD call the function
> mark_osd_down() to modify the OSD status in the OSDMap?

See OSDMap.h: is_up() and is_down().  For in/out, it is either binary 
(is_in() and is_out()) or can be somewhere in between; see get_weight().
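To make the distinction concrete: up/down is a boolean (is the daemon running), while in/out is really a fixed-point weight, where 0x10000 (the CEPH_OSD_IN constant in the source) means fully "in", 0 means fully "out", and intermediate values (e.g. set via `ceph osd reweight`) shift a proportional share of PGs off the OSD. The struct below is a minimal mirror of that idea, not the actual OSDMap class beyond the is_up/is_down/get_weight names mentioned above.

```cpp
#include <cassert>
#include <cstdint>

// Fixed-point "fully in" weight, as defined in the Ceph source.
const uint32_t CEPH_OSD_IN = 0x10000;

// Sketch of an OSD's status: up/down is boolean, in/out is a weight.
struct OsdState {
    bool up = true;
    uint32_t weight = CEPH_OSD_IN;  // 16.16 fixed point

    bool is_up() const { return up; }
    bool is_down() const { return !up; }
    bool is_out() const { return weight == 0; }
    bool is_in() const { return !is_out(); }
    // Weight as a fraction in [0, 1]: 1.0 = fully in, 0.0 = out.
    double get_weight() const { return double(weight) / CEPH_OSD_IN; }
};
```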

Hope that helps!

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
