Re: install ceph-osd failed in docker

Disclaimer... This is slightly off topic and a genuine question. I am a container newbie who has only used them for test environments: nginx configs and Ceph client multi-tenancy benchmarking.

I understand the benefits of containerizing the RGW, MDS, and MGR daemons. I can even come up with a decent argument for containerizing the MON daemons. However, I cannot fathom a reason to containerize OSD daemons.

The entire state of an OSD is a physical disk, possibly with a physical component shared with other OSDs. The rest of the daemons have very little state, except the MONs. If an OSD needs one or more physical devices to run AND direct access to the physical hardware, then why add the complexity of a container to the configuration? It just sounds needlessly complex, with no benefit other than saying you're doing it.
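
For a sense of that complexity: from what I have seen of the ceph/daemon image docs, a containerized OSD needs host networking, privileged mode, and the host's /dev tree passed through, roughly like the sketch below (the image and OSD_DEVICE value are illustrative, not a recommendation):

# rough sketch of a containerized OSD launch, per the ceph/daemon docs
docker run -d --net=host --pid=host --privileged=true \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -v /dev:/dev \
    -e OSD_DEVICE=/dev/sdb \
    ceph/daemon osd

At that point the container boundary is mostly ceremony: the process still touches the host's devices directly.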


On Sun, Nov 26, 2017, 9:18 PM Dai Xiang <xiang.dai@xxxxxxxxxxx> wrote:
Hi!

I am trying to install Ceph in a container, but the OSD always fails:
[root@d32f3a7b6eb8 ~]$ ceph -s
  cluster:
    id:     a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive
            Degraded data redundancy: 128 pgs unclean

  services:
    mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
    mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
    mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   2 pools, 128 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     100.000% pgs unknown
             128 unknown

Since Docker cannot create or access new partitions, I created the partitions on the host and then used --device to make them available in the container.
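
For reference, the passthrough looks roughly like this (the partition names and image name are placeholders, not my exact command):

# pass host partitions into the container at start time
docker run -d \
    --device=/dev/sdb1 --device=/dev/sdb2 \
    my-ceph-image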

From the install log, I only found the error below:

Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5704, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5657, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4617, in main_list
    main_list_protected(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4621, in main_list_protected
    devices = list_devices()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4525, in list_devices
    partmap = list_all_partitions()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 778, in list_all_partitions
    dev_part_list[name] = list_partitions(get_dev_path(name))
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 791, in list_partitions
    if is_mpath(dev):
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 626, in is_mpath
    uuid = get_dm_uuid(dev)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 611, in get_dm_uuid
    uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 605, in block_path
    rdev = os.stat(path).st_rdev
OSError: [Errno 2] No such file or directory: '/dev/dm-6'

But it did not make the whole install exit. When I enter the container and run
`ceph-disk list`, the same error occurs.
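
My understanding (not verified) is that `ceph-disk list` enumerates devices from /sys/block, which inside the container still shows the host's device-mapper devices, while the matching nodes under /dev were never created. Something like this should show the mismatch:

# /sys/block still lists the host's dm devices...
ls /sys/block | grep '^dm-'
# ...but the matching nodes are absent from /dev, so
# os.stat('/dev/dm-6') fails with ENOENT
ls -l /dev/dm-*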

I know that creating an OSD needs to call `ceph-disk`, but these dm devices
are used for container storage; it would be better if `ceph-disk` could skip
dm devices.
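
As a possible workaround until then (untested), recreating the missing node inside the container might let `ceph-disk` get past the failing stat(); the major:minor pair below is a placeholder that should be read from sysfs first:

# read the device's major:minor from sysfs, e.g. "253:6"
cat /sys/block/dm-6/dev
# recreate the block device node so os.stat() can succeed
mknod /dev/dm-6 b 253 6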

I guess this error is what caused the OSD creation to fail, since I did not
get any other error info.

--
Best Regards
Dai Xiang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
