Re: After kernel upgrade OSD's on different disk.

Below are the block IDs for the OSD drives.
I have one journal disk in this system. Because I'm testing the setup
at the moment, one OSD has its journal on a local partition and the
other two OSDs have their journals on the dedicated journal disk
(/dev/sdb). There is also one journal partition too many, because I
took /dev/sda out and put it back into the cluster with a local journal
instead of a journal on the journal drive.

/dev/sdc1: UUID="f16b94cc-f691-40b8-92a2-06c5263683d6" TYPE="xfs"
PARTLABEL="ceph data" PARTUUID="ea5fd156-f82b-4686-8d62-c86fc430098c"
/dev/sdd1: UUID="f9114559-af27-4a10-96b5-5c1b8bce8fbd" TYPE="xfs"
PARTLABEL="ceph data" PARTUUID="28716fa4-c7ba-4db0-9117-da5ad781b3e5"
/dev/sda1: UUID="05048fb7-c79c-46da-80a1-95aa7be0dd41" TYPE="xfs"
PARTLABEL="ceph data" PARTUUID="10fa40ab-1cfe-4bf6-8f06-967158ab6aa3"
/dev/sdb1: PARTLABEL="ceph journal"
PARTUUID="e270318c-1921-44d6-9bf5-e5832c0c57e4"
/dev/sdb2: PARTLABEL="ceph journal"
PARTUUID="6fff2c84-d28d-4be6-bc53-b80da87701d4"
/dev/sdb3: PARTLABEL="ceph journal"
PARTUUID="ce6e335b-fba3-413f-a657-64c7727f6289"
/dev/sda2: PARTLABEL="ceph journal"
PARTUUID="e80b53aa-324a-4689-a06d-ea3aae79702e"
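
As far as I understand, the udev rules match on the GPT partition type
GUID rather than the partition label, and plain blkid output doesn't
show that GUID. Something like the following should show whether the
type GUIDs are the ones the rules expect (checking /dev/sdc as an
example):

# show the GPT type GUID of the first partition on /dev/sdc
sgdisk --info=1 /dev/sdc | grep 'Partition GUID code'
# or let blkid probe the partition and print the udev-style variables
blkid -p -o udev /dev/sdc1 | grep ID_PART_ENTRY_TYPE

For a ceph-disk created OSD I would expect the data partitions to
report 4fbd7e29-9d25-41b8-afd0-062c0ceff05d and the journal partitions
45b0969e-9b03-4f30-b4c6-b4b80ceff106, but that is my assumption; the
authoritative list is in 95-ceph-osd.rules itself.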

The 95-ceph-osd.rules file is identical on all systems, so I would
think it comes from the Jewel Ceph RPM.

[root@blsceph01-1 ~]# md5sum /lib/udev/rules.d/95-ceph-osd.rules
b4132c970fd72e718fda1865f458210e  /lib/udev/rules.d/95-ceph-osd.rules
[root@blsceph01-2 ~]# md5sum /lib/udev/rules.d/95-ceph-osd.rules
b4132c970fd72e718fda1865f458210e  /lib/udev/rules.d/95-ceph-osd.rules
[root@blsceph01-3 ~]# md5sum /lib/udev/rules.d/95-ceph-osd.rules
b4132c970fd72e718fda1865f458210e  /lib/udev/rules.d/95-ceph-osd.rules
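
To be sure the rules file really comes from the Jewel RPM and hasn't
been modified locally, I assume the following would confirm it:

# show which package owns the rules file
rpm -qf /lib/udev/rules.d/95-ceph-osd.rules
# verify the owning package; no output for this file means it is unmodified
rpm -V $(rpm -qf /lib/udev/rules.d/95-ceph-osd.rules)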

I just rebooted blsceph01-1 and all the disks came back up normally.

I'm still very curious what will happen the next time I get a kernel
update, or any other time my systems decide to rearrange the disks at
boot.
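
If it does happen again, my assumption is that I can either re-trigger
the udev add events or activate the OSDs through their stable
by-partuuid paths instead of the sdX names, for example (the PARTUUID
below is the one blkid reports for /dev/sdc1 above):

# re-run the udev add events for all block devices
udevadm trigger --action=add --subsystem-match=block
# or activate one OSD data partition via its stable by-partuuid path
ceph-disk activate /dev/disk/by-partuuid/ea5fd156-f82b-4686-8d62-c86fc430098c
# or activate everything ceph-disk recognizes
ceph-disk activate-all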

Jan Hugo



On 11/01/2016 12:15 AM, Henrik Korkuc wrote:
> How are your OSDs set up? It is possible that the udev rules didn't
> activate your OSDs because they didn't match the rules. Refer to
> /lib/udev/rules.d/95-ceph-osd.rules. Basically, your partitions must
> be of the correct type for this to work.
>
> On 16-10-31 19:10, jan hugo prins wrote:
>> After the kernel upgrade, I also upgraded the cluster from 10.2.2 to
>> 10.2.3.
>> Let's hope I only hit a bug and that this bug is now fixed. On the other
>> hand, I think I also saw the issue on a 10.2.3 node, but I'm not sure.
>>
>> Jan Hugo
>>
>>
>> On 10/31/2016 11:41 PM, Henrik Korkuc wrote:
>>> This is normal. You should expect that your disks may get reordered
>>> after a reboot. I am not sure about your setup details, but in 10.2.3
>>> udev should be able to activate your OSDs no matter the naming (there
>>> were some bugs in previous 10.2.x releases).
>>>
>>> On 16-10-31 18:32, jan hugo prins wrote:
>>>> Hello,
>>>>
>>>> After patching my OSD servers with the latest CentOS kernel and
>>>> rebooting the nodes, all OSD drives moved to different positions.
>>>>
>>>> Before the reboot:
>>>>
>>>> Systemdisk: /dev/sda
>>>> Journaldisk: /dev/sdb
>>>> OSD disk 1: /dev/sdc
>>>> OSD disk 2: /dev/sdd
>>>> OSD disk 3: /dev/sde
>>>>
>>>> After the reboot:
>>>>
>>>> Systemdisk: /dev/sde
>>>> journaldisk: /dev/sdb
>>>> OSD disk 1: /dev/sda
>>>> OSD disk 2: /dev/sdc
>>>> OSD disk 3: /dev/sdd
>>>>
>>>> The result was that the OSDs didn't start at boot and I had to
>>>> manually activate them again.
>>>> After rebooting OSD node 1, I checked the state of the Ceph cluster
>>>> before rebooting node 2 and found that the disks were not online and
>>>> needed fixing. In the end I was able to do all the upgrades, but this
>>>> was a big surprise to me.
>>>>
>>>> My idea to fix this is to use the disk UUID instead of the device name
>>>> (/dev/disk/by-uuid/<uuid> instead of /dev/sda) when activating the
>>>> disk, but I really don't know if this is possible.
>>>>
>>>> Could anyone tell me if I can prevent this issue in the future?
>>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Met vriendelijke groet / Best regards,

Jan Hugo Prins
Infra and Isilon storage consultant

Better.be B.V.
Auke Vleerstraat 140 E | 7547 AN Enschede | KvK 08097527
T +31 (0) 53 48 00 694 | M +31 (0)6 26 358 951
jprins@xxxxxxxxxxxx | www.betterbe.com

This e-mail is intended exclusively for the addressee(s), and may not
be passed on to, or made available for use by any person other than 
the addressee(s). Better.be B.V. rules out any and every liability 
resulting from any electronic transmission.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


