Hi
I think I hit the same issue with the journal partitions when upgrading our cluster to Jewel and Ubuntu 16.04.
Our solution was to change the GUID partition type of each Ceph journal partition to the official one by hand. Afterwards, udev gives the raw journal partitions the correct owner/group at boot.
The GUID for a Ceph journal partition should be "45B0969E-9B03-4F30-B4C6-B4B80CEFF106".
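As far as I can tell, this works because the udev rules shipped with Ceph match on that partition type GUID and then set the ownership. On our nodes the journal rule in /lib/udev/rules.d/95-ceph-osd.rules looks roughly like the snippet below; I'm quoting it from memory, so please check the copy on your own systems:

    # ceph journal partitions, matched by GPT partition type GUID
    ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
      ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
      OWNER:="ceph", GROUP:="ceph", MODE:="660"

If a journal partition was created with a generic Linux type GUID instead, this rule never matches and the device node stays root:disk after every reboot.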
I haven't been able to find this info in the documentation on the Ceph site, but there is some info on Wikipedia: https://en.wikipedia.org/wiki/GUID_Partition_Table
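To actually change the type code in place, sgdisk can do it without touching the partition contents. A minimal sketch, assuming the journal partitions are /dev/sdb1 to /dev/sdb3 as in your mail (adjust the disk and partition numbers to your layout):

    # set the Ceph journal type GUID on each journal partition
    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    sgdisk --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb

    # verify the new type GUID, then re-run the udev rules
    sgdisk --info=1 /dev/sdb
    udevadm trigger

After that (or after a reboot) the sdb partitions should show up as ceph:ceph.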
I hope this helps you enough to resolve your issue :)
Kind regards
Brian Lagoni
DevOps System administrator, Engineering Tools
Unity Technologies
On 10 June 2016 at 04:20, 한승진 <yongiman@xxxxxxxxx> wrote:
Hi Cephers,

I have a Ceph cluster on Jewel and Ubuntu 16.04. Whenever I reboot the OSD nodes, the OSD init service fails. The reason is that the owner of the journal partitions is not changed.

I have btrfs file systems on the data devices, which are sdc1, sdd1, sde1 and so on. They are linked with journal devices, which are sdb1, sdb2, sdb3 and so on.

The btrfs devices all belong to ceph:ceph when the OSDs are activating:

root@cephnode01:~# ll /dev/sdc1 /dev/sdd1 /dev/sde1
brw-rw---- 1 ceph ceph 8, 33 Jun 10 10:48 /dev/sdc1
brw-rw---- 1 ceph ceph 8, 49 Jun 10 10:44 /dev/sdd1
brw-rw---- 1 ceph ceph 8, 65 Jun 10 10:44 /dev/sde1

However, the journal devices do not belong to the ceph user and group:

brw-rw---- 1 root disk 8, 17 Jun 10 11:02 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Jun 10 11:00 /dev/sdb2
brw-rw---- 1 root disk 8, 19 Jun 10 10:57 /dev/sdb3

Is this a bug in Ceph?

Thank you
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com