After a reboot, none of the LVM partitions show up in /dev/mapper, /dev/dm-<dm-num>, or /proc/partitions, though the whole disks do; I have to run 'partprobe' on every host at each boot to make all the partitions show up.
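In case it helps frame the workaround: the boot-time partprobe could be run from a oneshot systemd unit instead of by hand. This is only a sketch, not a verified fix; the unit name, the After= ordering, and the /sbin/partprobe path are my assumptions, and the file is written to a temp dir here rather than /etc/systemd/system.

```shell
# Stopgap sketch, not a fix: run partprobe once per boot via a oneshot
# systemd unit. Unit name, After= ordering, and the partprobe path are
# assumptions -- verify them on the host. On a real host this file goes
# to /etc/systemd/system, followed by `systemctl daemon-reload` and
# `systemctl enable partprobe-lvm.service`; a temp dir stands in here.
unitdir=$(mktemp -d)
cat > "$unitdir/partprobe-lvm.service" <<'EOF'
[Unit]
Description=Re-read partition tables on LVM logical volumes after boot
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/partprobe

[Install]
WantedBy=multi-user.target
EOF
cat "$unitdir/partprobe-lvm.service"
```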
I've also tried chowning all the /dev/dm-<num> devices to ceph:disk, in vain. Do I have to use the udev rules even though the /dev/dm-<num> devices are already owned by ceph:ceph?
Thank you very much for reading.
Best Regards,
Nicholas.
On Wed, Mar 15, 2017 at 1:06 AM Gunwoo Gim <wind8702@xxxxxxxxx> wrote:
Thank you very much, Peter. I'm sorry for not clarifying the version number; it's kraken, 11.2.0-1xenial.

I guess the udev rules in this file are supposed to change them: /lib/udev/rules.d/95-ceph-osd.rules ...but the rules' filters don't seem to match the DEVTYPE of the prepared partitions on the LVs I've got on the host. Could that have been the cause of the trouble? I'd love to be informed of a good way to make it work with logical volumes; should I fix the udev rule?

~ # cat /lib/udev/rules.d/95-ceph-osd.rules | head -n 19
# OSD_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER="ceph", GROUP="ceph", MODE="660"

# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER="ceph", GROUP="ceph", MODE="660"

~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep ID_PART_ENTRY_TYPE
E: ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106
~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
E: DEVTYPE=disk

Best Regards,
Nicholas.

On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <peter.maloney@xxxxxxxxxxxxxxxxxxxx> wrote:

Is this Jewel? Do you have some udev rules or anything that changes the owner on the journal device (eg. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
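Since udevadm reports DEVTYPE=disk for the LV partitions, the stock DEVTYPE=="partition" filters in 95-ceph-osd.rules can never match them. An untested sketch of a dm-specific rule follows; the file name, the ACTION match, and the idea itself are my own guesses, not a verified fix, and the rule is written to a temp dir here rather than /etc/udev/rules.d.

```shell
# Untested sketch: a dm-specific ownership rule, since device-mapper
# nodes report DEVTYPE=disk and so miss the stock DEVTYPE=="partition"
# filter. File name and ACTION values are assumptions. On a real host:
# /etc/udev/rules.d, then `udevadm control --reload` and
# `udevadm trigger`; a temp dir stands in here.
ruledir=$(mktemp -d)
cat > "$ruledir/96-ceph-osd-dm.rules" <<'EOF'
# JOURNAL_UUID on device-mapper nodes (they report DEVTYPE=disk)
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER="ceph", GROUP="ceph", MODE="660"
EOF
cat "$ruledir/96-ceph-osd-dm.rules"
```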
On 03/14/17 08:53, Gunwoo Gim wrote:
I'd love to get helped out; it'd be much appreciated.
Best Wishes,
Nicholas.
On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8702@xxxxxxxxx> wrote:
Hello, I'm trying to deploy a ceph filestore cluster on LVM using the ceph-ansible playbook. I've been fixing a couple of code blocks in ceph-ansible and ceph-disk/main.py and have made some progress, but now I'm stuck again: 'ceph-disk activate' fails.
Please let me just show you the error message and the output of 'ls':
~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
[...]
ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'1', '--monmap', '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal', '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid', u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring', '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed :
2017-03-14 16:01:10.051537 7fdc9a025a40 -1 filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
2017-03-14 16:01:10.051624 7fdc9a025a40 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
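One way to narrow down the (13) Permission denied might be to resolve what the journal symlink in the temp mount points at and check the ownership ceph-osd will see. The snippet below only demonstrates the readlink/stat pattern on a throwaway directory; the real path would be the /var/lib/ceph/tmp/mnt.* directory from the error, and the fake-journal-device name is purely illustrative.

```shell
# Debugging sketch for the (13) Permission denied: resolve the journal
# symlink and inspect the target's owner/group/mode. A throwaway dir
# stands in for the real mount so these commands are safe to run anywhere.
demo=$(mktemp -d)
touch "$demo/fake-journal-device"          # stand-in for e.g. /dev/dm-15
ln -s "$demo/fake-journal-device" "$demo/journal"

target=$(readlink -f "$demo/journal")      # resolve the symlink chain
stat -c '%U:%G %a %n' "$target"            # owner:group, mode, path
```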
~ # ls -al /var/lib/ceph/tmp
total 8
drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
-rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
-rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock
~ # ls -l /dev/mapper/vg-*-lv-*p*
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd1-lv--hdd1p1 -> ../dm-12
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd2-lv--hdd2p1 -> ../dm-14
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd3-lv--hdd3p1 -> ../dm-16
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd4-lv--hdd4p1 -> ../dm-18
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd5-lv--hdd5p1 -> ../dm-20
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd6-lv--hdd6p1 -> ../dm-22
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd7-lv--hdd7p1 -> ../dm-24
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd8-lv--hdd8p1 -> ../dm-26
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--hdd9-lv--hdd9p1 -> ../dm-28
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p1 -> ../dm-11
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p2 -> ../dm-15
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p3 -> ../dm-19
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p4 -> ../dm-23
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd1-lv--ssd1p5 -> ../dm-27
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p1 -> ../dm-13
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p2 -> ../dm-17
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p3 -> ../dm-21
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--ssd2-lv--ssd2p4 -> ../dm-25
~ # ls -l /dev/dm-*
brw-rw---- 1 root disk 252,  0 Mar 14 13:46 /dev/dm-0
brw-rw---- 1 root disk 252,  1 Mar 14 13:46 /dev/dm-1
brw-rw---- 1 root disk 252, 10 Mar 14 13:47 /dev/dm-10
brw-rw---- 1 ceph ceph 252, 11 Mar 14 13:47 /dev/dm-11
brw-rw---- 1 ceph ceph 252, 12 Mar 14 13:46 /dev/dm-12
brw-rw---- 1 ceph ceph 252, 13 Mar 14 13:47 /dev/dm-13
brw-rw---- 1 ceph ceph 252, 14 Mar 14 13:46 /dev/dm-14
brw-rw---- 1 ceph ceph 252, 15 Mar 14 13:47 /dev/dm-15
brw-rw---- 1 ceph ceph 252, 16 Mar 14 13:46 /dev/dm-16
brw-rw---- 1 ceph ceph 252, 17 Mar 14 13:47 /dev/dm-17
brw-rw---- 1 ceph ceph 252, 18 Mar 14 13:46 /dev/dm-18
brw-rw---- 1 ceph ceph 252, 19 Mar 14 13:47 /dev/dm-19
brw-rw---- 1 root disk 252,  2 Mar 14 13:46 /dev/dm-2
brw-rw---- 1 ceph ceph 252, 20 Mar 14 13:46 /dev/dm-20
brw-rw---- 1 ceph ceph 252, 21 Mar 14 13:47 /dev/dm-21
brw-rw---- 1 ceph ceph 252, 22 Mar 14 13:46 /dev/dm-22
brw-rw---- 1 ceph ceph 252, 23 Mar 14 13:47 /dev/dm-23
brw-rw---- 1 ceph ceph 252, 24 Mar 14 13:46 /dev/dm-24
brw-rw---- 1 ceph ceph 252, 25 Mar 14 13:47 /dev/dm-25
brw-rw---- 1 ceph ceph 252, 26 Mar 14 13:46 /dev/dm-26
brw-rw---- 1 ceph ceph 252, 27 Mar 14 13:47 /dev/dm-27
brw-rw---- 1 ceph ceph 252, 28 Mar 14 13:47 /dev/dm-28
brw-rw---- 1 root disk 252,  3 Mar 14 13:46 /dev/dm-3
brw-rw---- 1 root disk 252,  4 Mar 14 13:46 /dev/dm-4
brw-rw---- 1 root disk 252,  5 Mar 14 13:46 /dev/dm-5
brw-rw---- 1 root disk 252,  6 Mar 14 13:47 /dev/dm-6
brw-rw---- 1 root disk 252,  7 Mar 14 13:46 /dev/dm-7
brw-rw---- 1 root disk 252,  8 Mar 14 13:46 /dev/dm-8
brw-rw---- 1 root disk 252,  9 Mar 14 13:47 /dev/dm-9
Best Regards,
Nicholas.

--
You can find my PGP public key here: https://google.com/+DewrKim/about
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@xxxxxxxxxxxxxxxxxxxx
Internet: http://www.brockmann-consult.de
--------------------------------------------