Ok, the Partition GUID code was the same as the Partition unique GUID.
To recreate my journal I had used:

sudo sgdisk --new=1:0:+20480M --change-name=1:'ceph journal' \
    --partition-guid=1:$journal_uuid --typecode=1:$journal_uuid \
    --mbrtogpt -- /dev/sdk

However, the typecode should be 45B0969E-9B03-4F30-B4C6-B4B80CEFF106
(the Ceph journal partition type), not $journal_uuid.
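So, assuming the same size and device as above, the corrected command
would presumably be:

sudo sgdisk --new=1:0:+20480M --change-name=1:'ceph journal' \
    --partition-guid=1:$journal_uuid \
    --typecode=1:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 \
    --mbrtogpt -- /dev/sdk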
I guess this tutorial was written for old Ceph, which ran as root
rather than as the ceph user. Thanks for your help.
Kind regards,
Piotr Dzionek
Hi Piotr,
is your partition GUID code right? You can check it with sgdisk:
# sgdisk --info=2 /dev/sdd
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: 396A0C50-738C-449E-9FC6-B2D3A4469E51
First sector: 2048 (at 1024.0 KiB)
Last sector: 10485760 (at 5.0 GiB)
Partition size: 10483713 sectors (5.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'ceph journal'
# sgdisk --info=2 /dev/sdc
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: 31E9A040-A2C2-4F8F-906E-19D8A24DBDAB
First sector: 2048 (at 1024.0 KiB)
Last sector: 10485760 (at 5.0 GiB)
Partition size: 10483713 sectors (5.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'ceph journal'
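If the code is wrong, it can be changed in place without recreating
the partition, e.g. for partition 2 on /dev/sdd:

# sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdd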
Udo
On 2017-02-13 16:13, Piotr Dzionek wrote:
I'm running it on CentOS Linux release 7.3.1611. After running
"udevadm test /sys/block/sda/sda1" I don't see this rule being
applied to this disk.
Hmm, I remember that it used to work properly, but some time ago I
retested journal disk recreation. I followed the same tutorial as
the one pasted here by Wido den Hollander:
"The udev rules of Ceph should chown the journal to ceph:ceph if
it's set to the right partition UUID.
This blog shows it partially:
http://ceph.com/planet/ceph-recover-osds-after-ssd-journal-failure/"
I think that my journals were not recreated in a proper way, but I
don't know what is missing.
My SSD journal disk looks like this:

Disk /dev/sda: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name          Flags
 1      1049kB  27.3GB  27.3GB               ceph journal
 2      27.3GB  54.6GB  27.3GB               ceph journal
 3      54.6GB  81.9GB  27.3GB               ceph journal
 4      81.9GB  109GB   27.3GB               ceph journal
and blkid:
blkid | grep sda
/dev/sda1: PARTLABEL="ceph journal"
PARTUUID="a5ea6883-b2b2-4d53-b8ba-9ff8bcddead5"
/dev/sda2: PARTLABEL="ceph journal"
PARTUUID="adae4442-380c-418c-bdc0-05890fcf633e"
/dev/sda3: PARTLABEL="ceph journal"
PARTUUID="a8637452-fd9c-4d68-924f-69a43c75442c"
/dev/sda4: PARTLABEL="ceph journal"
PARTUUID="615a208a-19e0-4e02-8ef3-19d618a71103"
Do you have any idea what may be wrong?
On 13.02.2017 at 12:45, Craig Chi wrote:
Hi,
What is your OS? The ownership of the journal partition should be
changed by the udev rules in /lib/udev/rules.d/95-ceph-osd.rules.
In that file, the journal rule is defined as:
# JOURNAL_UUID
ACTION="" SUBSYSTEM=="block", \
ENV{DEVTYPE}=="partition", \
ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
OWNER:="ceph", GROUP:="ceph", MODE:="660", \
RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger
/dev/$name"
You can also use the udevadm command to test whether the partition
is processed by the correct udev rule, like the following:
#> udevadm test /sys/block/sdb/sdb2
...
starting 'probe-bcache -o udev /dev/sdb2'
Process 'probe-bcache -o udev /dev/sdb2' succeeded.
OWNER 64045 /lib/udev/rules.d/95-ceph-osd.rules:16
GROUP 64045 /lib/udev/rules.d/95-ceph-osd.rules:16
MODE 0660 /lib/udev/rules.d/95-ceph-osd.rules:16
RUN '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name' /lib/udev/rules.d/95-ceph-osd.rules:16
...
Then /dev/sdb2 will get ceph:ceph ownership automatically.
#> ls -l /dev/sdb2
brw-rw---- 1 ceph ceph 8, 18 Feb 13 19:43 /dev/sdb2
Sincerely,
Craig Chi
On 2017-02-13 19:06, Piotr Dzionek <piotr.dzionek@xxxxxxxx> wrote:
Hi,
I am running Ceph Jewel 10.2.5 with separate journals on SSD disks.
It runs pretty smoothly; however, I stumbled upon an issue after a
system reboot: the journal disks became owned by root and the OSDs
failed to start.
starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
2017-02-10 16:24:29.924126 7fd07ab40800 -1 filestore(/var/lib/ceph/osd/ceph-4) mount failed to open journal /var/lib/ceph/osd/ceph-4/journal: (13) Permission denied
2017-02-10 16:24:29.924210 7fd07ab40800 -1 osd.4 0 OSD:init: unable to mount object store
2017-02-10 16:24:29.924217 7fd07ab40800 -1 ** ERROR: osd init failed: (13) Permission denied
I fixed this issue by finding the journal disks in /dev and chowning
them to ceph:ceph (see the one-liner below). I remember that I had a
similar issue after I installed it for the first time. Is it a bug,
or do I have to set some kind of udev rule for these disks?
FYI, I have this issue after every restart now.
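For the record, the manual workaround is something along these lines
(osd.4 as an example; the journal symlink resolves to the raw device
node):

# chown ceph:ceph $(readlink -f /var/lib/ceph/osd/ceph-4/journal)

but it doesn't survive a reboot, since the device nodes are recreated
by udev.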
Kind regards,
Piotr Dzionek
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com