Re: Nautilus 14.2.6 ceph-volume bluestore _read_fsid unparsable uuid

Jan,

I have something new on this topic.  I had gone back to Debian 9 backports and Luminous (distro packages), had all of my OSDs working, and was about to deploy an MDS.  But then I noticed that the same Luminous packages were in Debian 10 proper (not backports), so I upgraded my OS to Debian 10.  The OSDs, MONs, and MGRs all survived the trip, although a couple of the OSDs needed a 'systemctl start ceph-volume@lvm....' before they came online.
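
For anyone else hitting the same thing: the unit names can be read off of 'ceph-volume lvm list', and I believe 'ceph-volume lvm activate --all' does the same job in one shot.  A sketch, with <id> and <osd-fsid> as placeholders:

    ceph-volume lvm list                             # shows the OSD id and fsid behind each LV
    systemctl start ceph-volume@lvm-<id>-<osd-fsid>
    # or, to activate every OSD ceph-volume knows about on this host:
    ceph-volume lvm activate --all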

Then I couldn't resist, so I did one further upgrade to Debian 10 Backports, which moved my Ceph to Nautilus.  What could go wrong? I did refer to https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus even though it's not exactly equivalent.
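
The rough order I understand the upgrade is supposed to follow (my paraphrase of that wiki and the upstream release notes, so treat it as a sketch rather than a transcript of what I ran):

    ceph osd set noout
    # upgrade the packages, then restart the daemons one class at a time:
    systemctl restart ceph-mon.target      # all mons first
    systemctl restart ceph-mgr.target      # then mgrs
    systemctl restart ceph-osd.target      # then OSDs, host by host
    # once everything reports nautilus:
    ceph osd require-osd-release nautilus
    ceph mon enable-msgr2
    ceph osd unset noout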

After the dist-upgrade the MONs and MGRs were all good, but 17 of my 24 OSDs are down and don't seem to want to come up:

   root@ceph00:~# ceph versions
   {
        "mon": {
            "ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)": 3
        },
        "mgr": {
            "ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)": 3
        },
        "osd": {
            "ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)": 7
        },
        "mds": {},
        "overall": {
            "ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)": 7,
            "ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)": 6
        }
   }
   root@ceph00:~# ceph osd tree
   ID CLASS WEIGHT    TYPE NAME       STATUS REWEIGHT PRI-AFF
   -1       261.93823 root default
   -7        87.31274     host ceph00
   15   hdd  10.91409         osd.15    down  1.00000 1.00000
   16   hdd  10.91409         osd.16    down  1.00000 1.00000
   17   hdd  10.91409         osd.17    down  1.00000 1.00000
   18   hdd  10.91409         osd.18    down  1.00000 1.00000
   19   hdd  10.91409         osd.19    down  1.00000 1.00000
   20   hdd  10.91409         osd.20    down  1.00000 1.00000
   21   hdd  10.91409         osd.21    down  1.00000 1.00000
   22   hdd  10.91409         osd.22    down  1.00000 1.00000
   -5        87.31274     host ceph01
     7   hdd  10.91409         osd.7     down  1.00000 1.00000
     8   hdd  10.91409         osd.8     down  1.00000 1.00000
     9   hdd  10.91409         osd.9     down  1.00000 1.00000
   10   hdd  10.91409         osd.10    down  1.00000 1.00000
   11   hdd  10.91409         osd.11    down  1.00000 1.00000
   12   hdd  10.91409         osd.12    down  1.00000 1.00000
   13   hdd  10.91409         osd.13    down  1.00000 1.00000
   14   hdd  10.91409         osd.14    down  1.00000 1.00000
   -3        87.31274     host ceph02
     0   hdd  10.91409         osd.0     down  1.00000 1.00000
     1   hdd  10.91409         osd.1       up  1.00000 1.00000
     2   hdd  10.91409         osd.2       up  1.00000 1.00000
     3   hdd  10.91409         osd.3       up  1.00000 1.00000
     4   hdd  10.91409         osd.4       up  1.00000 1.00000
     5   hdd  10.91409         osd.5       up  1.00000 1.00000
     6   hdd  10.91409         osd.6       up  1.00000 1.00000
   23   hdd  10.91409         osd.23      up  1.00000 1.00000

   root@ceph00:~# ceph-volume inventory

   Device Path               Size         rotates available Model name
   /dev/md0                  186.14 GB    False   False
   /dev/md1                  37.27 GB     False   False
   /dev/nvme0n1              1.46 TB      False   False SAMSUNG MZPLL1T6HEHP-00003
   /dev/sda                  223.57 GB    False   False Samsung SSD 883
   /dev/sdb                  223.57 GB    False   False Samsung SSD 883
   /dev/sdc                  10.91 TB     True    False ST12000NM0027
   /dev/sdd                  10.91 TB     True    False ST12000NM0027
   /dev/sde                  10.91 TB     True    False ST12000NM0027
   /dev/sdf                  10.91 TB     True    False ST12000NM0027
   /dev/sdg                  10.91 TB     True    False ST12000NM0027
   /dev/sdh                  10.91 TB     True    False ST12000NM0027
   /dev/sdi                  10.91 TB     True    False ST12000NM0027
   /dev/sdj                  10.91 TB     True    False ST12000NM0027

I'm going to try a couple of things on one of the two affected nodes, but I will leave the other untouched until I hear from you about any further information I could collect.  Note that all 3 nodes are identical in hardware and software.

Since I don't have any data on these OSDs yet, I have no problem with destroying and rebuilding them.  What would be really interesting is a sequence of low-level commands that could be issued to create these OSDs manually.  There's some evidence of the sequence in /var/log/ceph/ceph-volume.log, but some detail is missing and it's really hard to follow.
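
To be concrete, here is what I can piece together from ceph-volume.log so far.  This is a sketch only, with placeholders for the VG/LV names, the OSD id, the fsid, and the key (and assuming a cluster named 'ceph'); I'm almost certainly missing steps, which is exactly the detail I'm hoping you can fill in:

    vgcreate -s 1G --force --yes ceph-<vg-uuid> /dev/sdX
    lvcreate --yes -l <extents> -n osd-data-<lv-uuid> ceph-<vg-uuid>
    ceph osd new <osd-fsid>               # the log runs this as client.bootstrap-osd
    mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-<id>
    ln -s /dev/ceph-<vg-uuid>/osd-data-<lv-uuid> /var/lib/ceph/osd/ceph-<id>/block
    ceph mon getmap -o /var/lib/ceph/osd/ceph-<id>/activate.monmap
    ceph-authtool /var/lib/ceph/osd/ceph-<id>/keyring --create-keyring \
        --name osd.<id> --add-key <key>
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id> /dev/ceph-<vg-uuid>/osd-data-<lv-uuid>
    ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i <id> \
        --monmap /var/lib/ceph/osd/ceph-<id>/activate.monmap --keyfile - \
        --osd-data /var/lib/ceph/osd/ceph-<id>/ --osd-uuid <osd-fsid> \
        --setuser ceph --setgroup ceph
    ceph-volume lvm activate <id> <osd-fsid>    # or the matching ceph-volume@lvm unit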

If you can provide this list I'd gladly give it a try and let you know how it goes.

Thanks.

-Dave

Dave Hall
Binghamton University


On 1/29/2020 3:15 AM, Jan Fajerski wrote:
On Tue, Jan 28, 2020 at 08:03:35PM +0100, bauen1 wrote:
Hi,

I've run into the same issue while testing:

ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9)
nautilus (stable)

debian bullseye

Ceph was installed using ceph-ansible on a vm from the repo
http://download.ceph.com/debian-nautilus

The output of `sudo sh -c 'CEPH_VOLUME_DEBUG=true ceph-volume --cluster test lvm batch --bluestore /dev/vdb'` has been attached.
Thx, I opened https://tracker.ceph.com/issues/43868.
This looks like a bluestore/osd issue to me, though it might end up being ceph-volume's fault.
Also worth noting might be that '/var/lib/ceph/osd/test-0/fsid' is
empty (but I don't know too much about the internals)

- bauen1

On 1/28/20 4:54 PM, Dave Hall wrote:
Jan,

Unfortunately I'm under immense pressure right now to get some form
of Ceph into production, so it's going to be Luminous for now, or
maybe a live upgrade to Nautilus without recreating the OSDs (if
that's possible).

The good news is that in the next couple of months I expect to add more hardware that should be nearly identical.  I will gladly give it a go at that time and see if I can recreate the issue.  (Or, if I manage to thoroughly crash my current fledgling cluster, I'll give it another go on one node while I'm up all night recovering.)

If you could tell me where to look I'd gladly read some code and see
if I can find anything that way.  Or if there's any sort of design
document describing the deep internals I'd be glad to scan it to see
if I've hit a corner case of some sort.  Actually, I'd be interested
in reading those documents anyway if I could.

Thanks.

-Dave

Dave Hall

On 1/28/2020 3:05 AM, Jan Fajerski wrote:
On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
All,

I've just spent a significant amount of time unsuccessfully chasing the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6.  Since this is a brand new cluster, last night I gave up and moved back to Debian 9 / Luminous 12.2.11.  In both cases I'm using the packages from Debian Backports with ceph-ansible as my deployment tool.

Note that above I said 'the _read_fsid unparsable uuid error'.  I've searched around a bit and found some previously reported issues, but I did not see any conclusive resolutions.

I would like to get to Nautilus as quickly as possible, so I'd gladly
provide additional information to help track down the cause of this
symptom.  I can confirm that, looking at ceph-volume.log on the OSD host, I see no difference between the ceph-volume lvm batch commands generated by the ceph-ansible versions associated with these two Ceph releases:

    ceph-volume --cluster ceph lvm batch --bluestore --yes
    --block-db-size 133358734540 /dev/sdc /dev/sdd /dev/sde /dev/sdf
    /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/nvme0n1

Note that I'm using --block-db-size to divide my NVMe into 12 segments
as I have 4 empty drive bays on my OSD servers that I may eventually
be able to fill.
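
For what it's worth, that figure is just the NVMe's capacity (listed below) split twelve ways, give or take some rounding; a quick sanity check:

    echo $(( 1600321314816 / 12 ))    # 133360109568 bytes, roughly 124 GiB per DB slice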

My OSD hardware is:

    Disk /dev/nvme0n1: 1.5 TiB, 1600321314816 bytes, 3125627568 sectors
    Disk /dev/sdc: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdd: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sde: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdf: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdg: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdh: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdi: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdj: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors

I'd send the output of ceph-volume inventory on Luminous, but I'm
getting  -->: KeyError: 'human_readable_size'.

Please let me know if I can provide any further information.
Mind re-running your ceph-volume command with debug output enabled?

    CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore ...

Ideally you could also open a bug report here:
https://tracker.ceph.com/projects/ceph-volume/issues/new

Thanks!
Thanks.

-Dave

--
Dave Hall
Binghamton University

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
sysadmin@ceph-test:~$ sudo setenforce 0
sysadmin@ceph-test:~$ sudo sh -c 'CEPH_VOLUME_DEBUG=true ceph-volume --cluster test lvm batch --bluestore /dev/vdb'

Total OSDs: 1

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/vdb                                                63.00 GB        100.0%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-1cc81d7c-a153-462a-8080-ec3d217c7180 /dev/vdb
stdout: Physical volume "/dev/vdb" successfully created.
stdout: Volume group "ceph-1cc81d7c-a153-462a-8080-ec3d217c7180" successfully created
Running command: /usr/sbin/lvcreate --yes -l 63 -n osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e ceph-1cc81d7c-a153-462a-8080-ec3d217c7180
stdout: Wiping ceph_bluestore signature on /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e.
stdout: Logical volume "osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring -i - osd new e3ebb6e0-82c8-4088-a6bd-abd729a575bb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/test-0
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/test-0
Running command: /bin/chown -h ceph:ceph /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e
Running command: /bin/chown -R ceph:ceph /dev/dm-1
Running command: /bin/ln -s /dev/ceph-1cc81d7c-a153-462a-8080-ec3d217c7180/osd-data-bbd7752f-fad9-41d5-bbbe-e6fd512bcf8e /var/lib/ceph/osd/test-0/block
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring mon getmap -o /var/lib/ceph/osd/test-0/activate.monmap
stderr: got monmap epoch 1
Running command: /bin/ceph-authtool /var/lib/ceph/osd/test-0/keyring --create-keyring --name osd.0 --add-key AQAcgzBeTlc5BxAApXJgwyoRAHtrL9kk1tbs9w==
stdout: creating /var/lib/ceph/osd/test-0/keyring
stdout: added entity osd.0 auth(key=AQAcgzBeTlc5BxAApXJgwyoRAHtrL9kk1tbs9w==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/test-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/test-0/
Running command: /bin/ceph-osd --cluster test --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/test-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/test-0/ --osd-uuid e3ebb6e0-82c8-4088-a6bd-abd729a575bb --setuser ceph --setgroup ceph
stderr: 2020-01-28 18:53:20.438 7f17de7b3c00 -1 bluestore(/var/lib/ceph/osd/test-0/) _read_fsid unparsable uuid
stderr: terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_get> >'
stderr: what():  boost::bad_get: failed value get using boost::get
stderr: *** Caught signal (Aborted) **
stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
stderr: 1: (()+0x13520) [0x7f17dee75520]
stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
stderr: 3: (abort()+0x121) [0x7f17de926535]
stderr: 4: (()+0x9a643) [0x7f17decba643]
stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
stderr: 6: (()+0xa6041) [0x7f17decc6041]
stderr: 7: (()+0xa6295) [0x7f17decc6295]
stderr: 8: (()+0x49a92c) [0x56027edc792c]
stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
stderr: 14: (main()+0x1821) [0x56027eea68d1]
stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
stderr: 16: (_start()+0x2a) [0x56027eed903a]
stderr: 2020-01-28 18:53:20.486 7f17de7b3c00 -1 *** Caught signal (Aborted) **
stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
stderr: 1: (()+0x13520) [0x7f17dee75520]
stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
stderr: 3: (abort()+0x121) [0x7f17de926535]
stderr: 4: (()+0x9a643) [0x7f17decba643]
stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
stderr: 6: (()+0xa6041) [0x7f17decc6041]
stderr: 7: (()+0xa6295) [0x7f17decc6295]
stderr: 8: (()+0x49a92c) [0x56027edc792c]
stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
stderr: 14: (main()+0x1821) [0x56027eea68d1]
stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
stderr: 16: (_start()+0x2a) [0x56027eed903a]
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: -5> 2020-01-28 18:53:20.438 7f17de7b3c00 -1 bluestore(/var/lib/ceph/osd/test-0/) _read_fsid unparsable uuid
stderr: 0> 2020-01-28 18:53:20.486 7f17de7b3c00 -1 *** Caught signal (Aborted) **
stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
stderr: 1: (()+0x13520) [0x7f17dee75520]
stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
stderr: 3: (abort()+0x121) [0x7f17de926535]
stderr: 4: (()+0x9a643) [0x7f17decba643]
stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
stderr: 6: (()+0xa6041) [0x7f17decc6041]
stderr: 7: (()+0xa6295) [0x7f17decc6295]
stderr: 8: (()+0x49a92c) [0x56027edc792c]
stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
stderr: 14: (main()+0x1821) [0x56027eea68d1]
stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
stderr: 16: (_start()+0x2a) [0x56027eed903a]
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: -5> 2020-01-28 18:53:20.438 7f17de7b3c00 -1 bluestore(/var/lib/ceph/osd/test-0/) _read_fsid unparsable uuid
stderr: 0> 2020-01-28 18:53:20.486 7f17de7b3c00 -1 *** Caught signal (Aborted) **
stderr: in thread 7f17de7b3c00 thread_name:ceph-osd
stderr: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
stderr: 1: (()+0x13520) [0x7f17dee75520]
stderr: 2: (gsignal()+0x141) [0x7f17de93b081]
stderr: 3: (abort()+0x121) [0x7f17de926535]
stderr: 4: (()+0x9a643) [0x7f17decba643]
stderr: 5: (()+0xa5fd6) [0x7f17decc5fd6]
stderr: 6: (()+0xa6041) [0x7f17decc6041]
stderr: 7: (()+0xa6295) [0x7f17decc6295]
stderr: 8: (()+0x49a92c) [0x56027edc792c]
stderr: 9: (Option::size_t const md_config_t::get_val<Option::size_t>(ConfigValues const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const+0x51) [0x56027eedeea1]
stderr: 10: (BlueStore::_set_cache_sizes()+0x174) [0x56027f3fba44]
stderr: 11: (BlueStore::_open_bdev(bool)+0x1c5) [0x56027f3fe845]
stderr: 12: (BlueStore::mkfs()+0x6e0) [0x56027f484620]
stderr: 13: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1b3) [0x56027eef9b23]
stderr: 14: (main()+0x1821) [0x56027eea68d1]
stderr: 15: (__libc_start_main()+0xeb) [0x7f17de927bbb]
stderr: 16: (_start()+0x2a) [0x56027eed903a]
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
Running command: /bin/ceph --cluster test --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/test.keyring osd purge-new osd.0 --yes-i-really-mean-it
stderr: purged osd.0
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 38, in __init__
    self.main(self.argv)
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 149, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 40, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
    self.execute()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
    self.strategy.execute()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
    Create(command).main()
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 69, in main
    self.create(args)
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
    prepare_step.safe_prepare(args)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
    self.prepare()
  File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
    osd_fsid,
  File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
    db=db
  File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster test --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/test-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/test-0/ --osd-uuid e3ebb6e0-82c8-4088-a6bd-abd729a575bb --setuser ceph --setgroup ceph
sysadmin@ceph-test:~$ sudo setenforce 1
sysadmin@ceph-test:~$

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
