Re: After kernel upgrade OSD's on different disk.


 



This is normal: you should expect that your disks may get reordered after a reboot. I am not sure about the details of your setup, but in 10.2.3 udev should be able to activate your OSDs regardless of the device naming (there were some bugs in this area in earlier 10.2.x releases).

On 16-10-31 18:32, jan hugo prins wrote:
Hello,

After patching my OSD servers with the latest CentOS kernel and
rebooting the nodes, all OSD drives moved to different positions.

Before the reboot:

Systemdisk: /dev/sda
Journaldisk: /dev/sdb
OSD disk 1: /dev/sdc
OSD disk 2: /dev/sdd
OSD disk 3: /dev/sde

After the reboot:

Systemdisk: /dev/sde
Journaldisk: /dev/sdb
OSD disk 1: /dev/sda
OSD disk 2: /dev/sdc
OSD disk 3: /dev/sdd

As a result, the OSDs didn't start at boot and I had to
activate them manually.
After rebooting OSD node 1 I checked the state of the Ceph cluster
before rebooting node 2, and found that the OSDs on node 1 were not
online and needed fixing first. In the end I was able to complete all
the upgrades, but this came as a big surprise to me.
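Checking the cluster between node reboots can be scripted. A minimal sketch, assuming the "N osds: N up, N in" line that `ceph osd stat` prints in Jewel (the sample osdmap epochs below are made up):

```shell
#!/bin/sh
# osds_down takes one "osdmap eXX: N osds: N up, N in" status line
# and prints how many OSDs are currently not up. Rebooting the next
# node is only safe when this prints 0.
osds_down() {
    echo "$1" | awk '{
        for (i = 1; i <= NF; i++) {
            if ($(i+1) == "osds:") total = $i   # field before "osds:" is the total
            if ($(i+1) == "up,")   up = $i      # field before "up," is the up count
        }
        print total - up
    }'
}
```

For example, `osds_down "osdmap e123: 3 osds: 2 up, 3 in"` prints 1, meaning one OSD is still down and the next reboot should wait.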

My idea for fixing this is to use the disk UUID instead of the device
name (/dev/disk/by-uuid/<uuid> instead of /dev/sda) when activating the
disk, but I don't know whether that is possible.
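The stable names are already there: udev maintains symlinks under /dev/disk/ that survive reboots even when the kernel reorders /dev/sdX. A minimal sketch for inspecting them (the UUID in the usage comment is hypothetical):

```shell
#!/bin/sh
# resolve_stable maps a persistent /dev/disk symlink back to the kernel
# device it points at today, so you can log which /dev/sdX a given UUID
# currently resolves to.
resolve_stable() {
    readlink -f "$1"
}

# Typical usage on an OSD node:
#   ls -l /dev/disk/by-uuid/      # filesystem UUIDs
#   ls -l /dev/disk/by-partuuid/  # GPT partition UUIDs (what ceph-disk keys on)
#   ls -l /dev/disk/by-id/        # names derived from drive serial numbers
#   resolve_stable /dev/disk/by-uuid/0a1b2c3d-ffff-4444-aaaa-123456789abc
```

Any of these persistent paths can be used wherever a /dev/sdX name would otherwise be hard-coded.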

Could anyone tell me if I can prevent this issue in the future?


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




