Hi,
On 01/23/2018 09:53 AM, Mayank Kumar wrote:
Hi Ceph Experts,
I am a new user of Ceph, currently using Kubernetes to deploy Ceph
RBD volumes. We are doing some initial work rolling it out to internal
customers, and in doing that we are using the IP of the host as the IP
of the OSDs and mons. This means that if a host goes down, we lose that
IP. While we are still experimenting with these behaviors, I wanted to
see what the community thinks about the following scenario:
1: an RBD volume is already attached and mounted on host A
2: the OSD on which this RBD volume resides dies and never comes back up
3: another OSD is installed in its place. I don't know the intricacies
here, but I am assuming the data for this RBD volume either moves to
different OSDs or goes back to the newly installed OSD
4: the new OSD has a completely new IP
5: will the RBD volume attached to host A learn the new OSD IP on
which its data resides, and will everything just continue to work?
What if all the mons have also changed IP?
A volume does not reside "on an OSD". The volume is striped, and each
stripe is stored in a placement group; the placement group in turn is
distributed across several OSDs depending on the CRUSH rules and the
number of replicas.
If an OSD dies, Ceph will backfill the now-missing replicas to another
OSD, provided another OSD satisfying the CRUSH rules is available. The
same process is also triggered when an OSD is added.
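To make the mapping concrete, here is a minimal sketch using the
python-rados bindings; it assumes the bindings are installed, a readable
/etc/ceph/ceph.conf, and placeholder pool and object names. It asks the
mons which PG and which OSDs currently serve a given object (the
equivalent of "ceph osd map <pool> <object>"):

    import json
    import rados

    # Minimal sketch: connect using the local config, then ask the mons
    # which PG and OSD set serve a given object.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({
        'prefix': 'osd map',
        'pool': 'rbd',                   # placeholder pool name
        'object': 'rbd_data.1234.0000',  # placeholder RBD data object
        'format': 'json',
    })
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    info = json.loads(outbuf.decode('utf-8'))
    # 'up' and 'acting' list the OSD ids currently serving this PG; after
    # a failure and backfill these lists simply change.
    print(info['pgid'], info['up'], info['acting'])
    cluster.shutdown()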
This process is largely transparent to the Ceph client, as long as
enough replicas are present. The Ceph client (librbd accessing a volume
in this case) gets asynchronous notifications from the Ceph mons in case
of relevant changes, e.g. updates to the OSD map reflecting the failure
of an OSD. Traffic to the OSDs is automatically rerouted according to
the CRUSH rules as explained above. The OSD map also contains the IP
addresses of all OSDs, so a changed IP address is just another update
to the map.
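A small sketch under the same assumptions shows those addresses coming
straight out of the OSD map:

    import json
    import rados

    # Sketch: dump the current OSD map and print each OSD's status and
    # public address; a changed OSD IP is just a newer epoch of this map.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'osd dump', 'format': 'json'}), b'')
    for osd in json.loads(outbuf.decode('utf-8'))['osds']:
        print('osd.%d up=%d addr=%s'
              % (osd['osd'], osd['up'], osd['public_addr']))
    cluster.shutdown()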
The only problem you might run into is changing the IP addresses of the
mons. There's also a mon map listing all active mons; if the mon a Ceph
client is using dies or is removed, the client will switch to another
active mon from the map. This works fine in a running system; you can
change the IP addresses of the mons one by one without any interruption
to the client (theoretically....).
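If you want to inspect that mon map from a client, a sketch under the
same assumptions:

    import json
    import rados

    # Sketch: fetch the mon map that a running client keeps following
    # when its current mon goes away.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    for mon in json.loads(outbuf.decode('utf-8'))['mons']:
        print(mon['name'], mon['addr'])
    cluster.shutdown()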
The problem is starting the Ceph client. In this case the client uses
the list of mons from the Ceph configuration file to contact one mon and
receive the initial mon map. If you change the hostnames/IP addresses of
the mons, you also need to update the Ceph configuration file.
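If updating the file is inconvenient, the mon addresses can also be
passed directly when the client handle is created; a sketch, where the
addresses and keyring path are placeholders:

    import rados

    # Sketch: pass the new mon addresses directly instead of relying on
    # a stale ceph.conf; IPs and keyring path below are placeholders.
    cluster = rados.Rados(
        rados_id='admin',
        conf={'mon_host': '10.0.0.1,10.0.0.2,10.0.0.3',
              'keyring': '/etc/ceph/ceph.client.admin.keyring'})
    cluster.connect()
    print(cluster.get_fsid())
    cluster.shutdown()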
The above is an outline of how it should work, given a valid Ceph and
network setup. YMMV.
Regards,
Burkhard