Indeed, it is not necessary to have any OSD entries in the ceph.conf
file, but what happens in the event of a disk failure that results in
the mount device changing?
From what I can see, OSDs are mounted from entries in /etc/mtab
(I am on CentOS 6.6), like this:
/dev/sdj1 /var/lib/ceph/osd/ceph-8 xfs rw,noatime,inode64 0 0
/dev/sdh1 /var/lib/ceph/osd/ceph-6 xfs rw,noatime,inode64 0 0
/dev/sdg1 /var/lib/ceph/osd/ceph-5 xfs rw,noatime,inode64 0 0
/dev/sde1 /var/lib/ceph/osd/ceph-3 xfs rw,noatime,inode64 0 0
/dev/sdi1 /var/lib/ceph/osd/ceph-7 xfs rw,noatime,inode64 0 0
/dev/sdf1 /var/lib/ceph/osd/ceph-4 xfs rw,noatime,inode64 0 0
/dev/sdd1 /var/lib/ceph/osd/ceph-2 xfs rw,noatime,inode64 0 0
/dev/sdk1 /var/lib/ceph/osd/ceph-9 xfs rw,noatime,inode64 0 0
/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime,inode64 0 0
/dev/sdc1 /var/lib/ceph/osd/ceph-1 xfs rw,noatime,inode64 0 0
So in the event of a disk failure (e.g. disk sdh fails), the next disk
in order will take its place, meaning that sdi will be seen as sdh upon
the next reboot and will therefore be mounted as ceph-6 instead of
ceph-7, and so on, resulting in a problematic configuration (I guess
lots of data will start moving around, PGs will be misplaced, etc.).
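As far as I know, the kernel does not guarantee a stable /dev/sdX
ordering across reboots, whereas the symlinks under /dev/disk/by-uuid/
always point at the right partition. For example (the UUID here is
invented for illustration):

ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 May  7 10:00 0f6a3b1c-2d4e-4f60-8a9b-123456789abc -> ../../sdh1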
Correct me if I am wrong, but the proper way to mount them would be by
using the UUID of the partition.
Is it OK if I change the entries in /etc/mtab to use UUID=xxxxxx
instead of /dev/sdX1?
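For example, an entry would then look something like this (UUID
invented for illustration; the real one can be read with
"blkid /dev/sdh1"):

UUID=0f6a3b1c-2d4e-4f60-8a9b-123456789abc /var/lib/ceph/osd/ceph-6 xfs rw,noatime,inode64 0 0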
Or does Ceph mount them using a different config file and perhaps
export the entries to /etc/mtab at boot (in which case no modification
to /etc/mtab would be taken into account)?
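(I suppose I could cross-check which partition belongs to which OSD
with "ceph-disk list", which as far as I know prints something like:

/dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.6

but I don't know whether that is also what mounts them at boot.)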
I have deployed the Ceph cluster using only the "ceph-deploy" command.
Is there a parameter that I've missed that must be used during
deployment in order to specify the mount points using UUIDs instead
of the device names?
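(I mean invocations of the usual form, with a made-up hostname here:

ceph-deploy osd prepare node1:sdh
ceph-deploy osd activate node1:/dev/sdh1)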
Regards,
George
On Wed, 6 May 2015 22:36:14 -0600, Robert LeBlanc wrote:
We don't have OSD entries in our Ceph config. They are not needed if
you don't have specific configs for different OSDs.
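For example, a per-OSD section would only be needed for an override,
something like (option picked just as an illustration):

[osd.6]
    osd max backfills = 1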
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On May 6, 2015 7:18 PM, "Florent MONTHEL" wrote:
Hi team,
Is it necessary to list in ceph.conf all the OSDs that we have in the
cluster?
Today we rebooted a cluster (5 nodes, RHEL 6.5) and some OSDs seem to
have changed IDs, so the CRUSH map no longer matches reality.
Thanks
FLORENT MONTHEL