As a piece of the puzzle, the client always has an up-to-date OSD map (which includes the CRUSH map). If its map is out of date, it has to fetch a new one before it can read from or write to the cluster. That way the client never acts on stale information: if you add or remove storage, the client always has the current map and knows where the current copies of the objects are.
This can slow your cluster down if the osdmap is updated frequently, which can happen, for example, when deleting a lot of snapshots.
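To make the two-step mapping concrete, here's a rough Python sketch of the idea (not Ceph's actual code - the real thing uses the rjenkins hash, straw2 buckets, and the placement rules from the CRUSH map): placement is a pure function of the object name and the cluster map, so any client holding the same map computes the same OSDs on its own, with no lookup table to keep in sync.

# Conceptual sketch only (not Ceph's real code): placement is a pure function
# of the object name and the cluster map, so every client with the same map
# computes the same result independently -- there is no lookup table to sync.
import hashlib

def obj_to_pg(obj_name, pg_num):
    # Step 1: hash the object name into a placement group. Ceph uses the
    # rjenkins hash plus a stable PG mask; plain hash-mod-pg_num shows the idea.
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg_id, osdmap_epoch, osds, size=3):
    # Step 2: pseudo-CRUSH. Deterministically rank every OSD by a hash seeded
    # with the PG id and the map, then take the top `size` as the acting set.
    # Real CRUSH walks the bucket hierarchy with straw2 and the placement
    # rules, but the key property is the same: same inputs, same OSDs.
    def draw(osd):
        seed = "{}:{}:{}".format(pg_id, osdmap_epoch, osd).encode()
        return int(hashlib.md5(seed).hexdigest(), 16)
    return sorted(osds, key=draw, reverse=True)[:size]

pg = obj_to_pg("my-object", pg_num=128)
print("PG:", pg, "-> OSDs:", pg_to_osds(pg, osdmap_epoch=42, osds=list(range(12))))

That is also why the client has to fetch the new osdmap first: if it computed placement from an old epoch, it would go looking for the data on the old acting set.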
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of girish kenkere [kngenius@xxxxxxxxx]
Sent: Thursday, February 16, 2017 12:43 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] Question regarding CRUSH algorithm

Hi,

I have a question regarding the CRUSH algorithm - please let me know how this works. The CRUSH paper describes how, given an object, we select an OSD via two mappings - the first from object to PG, and the second from PG to OSD.
This PG-to-OSD mapping is what I don't understand. It uses the PG number, the cluster map, and the placement rules. How is it guaranteed
to return the correct OSDs for future reads after the cluster map or placement rules have changed due to nodes coming and going?
Thanks
Girish
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com