Re: "VolumeDriver.Create: Unable to create Ceph RBD Image"

On Sat, Jan 13, 2018 at 5:58 PM, Traiano Welcome <traiano@xxxxxxxxx> wrote:
Hi

On Mon, Jan 8, 2018 at 9:27 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
If you are using a pre-created RBD image for this, you will need to
disable all the image features that krbd doesn't support:

# rbd feature disable dummy01 exclusive-lock,object-map,fast-diff,deep-flatten



I've tried this with a pre-created image, but unfortunately no luck.
When I try to create a volume directly with rbd-docker-plugin, the attempt hangs indefinitely, with no feedback I can use to debug further:

---
root@lol-server2:~#  /usr/local/bin/rbd-docker-plugin --create -debug -name dummy01 -size 1024 -cluster ceph -user ceph -fs xfs -config /etc/ceph/ceph.conf
2018/01/13 09:49:57 main.go:92: INFO: starting rbd-docker-plugin version 1.5.0
2018/01/13 09:49:57 main.go:93: INFO: canCreateVolumes=%!q(bool=true), removeAction="ignore"
2018/01/13 09:49:57 main.go:103: INFO: Setting up Ceph Driver for PluginID=dummy01, cluster=ceph, user=ceph, pool=rbd, mount=/var/lib/docker-volumes, config=/etc/ceph/ceph.conf, go-ceph=%!s(bool=false)
2018/01/13 09:49:57 driver.go:115: INFO: newCephRBDVolumeDriver: setting base mount dir=/var/lib/docker-volumes/dummy01
2018/01/13 09:49:57 main.go:127: INFO: Creating Docker VolumeDriver Handler
2018/01/13 09:49:57 main.go:131: INFO: Opening Socket for Docker to connect: /run/docker/plugins/dummy01.sock
---

Is there any way I can diagnose at a lower level to understand what's going on here?
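For the record, one lower-level check worth trying (a sketch only; the exact flags, user and pool below are assumptions, not necessarily what the plugin really invokes, although the earlier "exit status 2" errors suggest it shells out to the rbd CLI) is to bypass the plugin and run the equivalent rbd commands by hand, as the same user and with the same Ceph credentials the plugin is started with:

```
# Run these as the plugin's service user (e.g. the "docker" user from the
# earlier logs) to reproduce the failure outside the plugin.
sudo -u docker rbd --conf /etc/ceph/ceph.conf --id docker ls rbd
sudo -u docker rbd --conf /etc/ceph/ceph.conf --id docker create dummy02 --size 1024

# Also confirm that this user can read the cluster config and its keyring:
ls -l /etc/ceph/
```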




I managed to get this to work by making a number of configuration changes to Ceph and rbd-docker-plugin:

1. Limit the RBD image features for Ceph to "3" (it was 63), i.e. layering (1) + striping (2); 63 additionally enables exclusive-lock (4), object-map (8), fast-diff (16) and deep-flatten (32). This is sketched below.
2. Configure rbd-docker-plugin to run as a more privileged user (rather than the docker user), and start it with the --create flag.
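
For anyone hitting the same thing, here is a minimal sketch of step 1, assuming the feature mask is applied via the rbd default features option in ceph.conf (where exactly it is set is an assumption about this particular setup):

```
# /etc/ceph/ceph.conf -- sketch only; this affects newly created images
[client]
rbd default features = 3

# Pre-existing images still need their extra features stripped, e.g.:
# rbd feature disable dummy01 exclusive-lock,object-map,fast-diff,deep-flatten
```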

I'm not sure this is the correct way to fix the issue, but it's working for now.


On Sun, Jan 7, 2018 at 11:36 AM, Traiano Welcome <traiano@xxxxxxxxx> wrote:
> Hi List
>
> I'm getting the following error when trying to run docker with a rbd volume
> (either pre-existing, or not):
>
> "VolumeDriver.Create: Unable to create Ceph RBD Image"
>
> Please could someone give me a clue as to how to debug this further and
> resolve it?
>
> Details of my platform:
>
> 1. ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
> 2. Docker version 17.05.0-ce, build 89658be
> 3. rbd-docker-plugin --version 2.0.1
> 4. Kernel: Linux lol-server-049 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18
> 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>
> Here are the details from the rbd-docker logs and syslogs:
>
> - Running docker with an as-yet-uncreated rbd volume, and rbd-docker-plugin
> with --create=true:
>
> ```
> root@lol-server-045:~# docker run  --volume-driver=rbd --volume dummy02:/mnt
> centos:latest bash
> docker: Error response from daemon: create dummy02: VolumeDriver.Create:
> Unable to create Ceph RBD Image(dummy02): exit status 2.
> See 'docker run --help'.
> ```
>
> - With an already created rbd volume, and rbd-docker-plugin with
> --create=false:
>
> ```
> root@lol-server-045:~# docker run  --volume-driver=rbd --volume dummy01:/mnt
> centos:latest bash
> docker: Error response from daemon: create dummy01: VolumeDriver.Create:
> Ceph RBD Image not found: dummy01.
>
> ```
>
>
> - state of a pre-created rbd device:
>
> ```
> root@lol-server-045:/var/log# rbd ls| egrep dummy
> dummy01
>
> root@lol-server-045:/var/log# rbd info dummy01
> rbd image 'dummy01':
>         size 1096 MB in 274 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.85d6238e1f29
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>         flags:
> ```
>
> BUG:  https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1578484
>
> ```
> root@lol-server-045:/var/log#
> root@lol-server-045:/var/log# rbd feature disable foo exclusive-lock
> object-map fast-diff deep-flatten
> rbd: error opening image foo: (2) No such file or directory
> root@lol-server-045:/var/log# rbd feature disable dummy01 exclusive-lock
> object-map fast-diff deep-flatten
> root@lol-server-045:/var/log# rbd map dummy01 --pool rbd
> /dev/rbd3
> ```
>
> - rbd-docker-plugin.log entries following a restart of the rbd-docker driver
> service:
>
> ```
> 2018/01/07 23:45:20 main.go:121: INFO: Creating Docker VolumeDriver Handler
> 2018/01/07 23:45:20 main.go:125: INFO: Opening Socket for Docker to connect:
> /run/docker/plugins/rbd.sock
> 2018/01/07 23:45:29 main.go:141: INFO: received TERM or KILL signal:
> terminated
> 2018/01/07 23:45:29 main.go:190: INFO: closing log file
> 2018/01/07 23:45:29 main.go:91: INFO: starting rbd-docker-plugin version
> 2.0.1
> 2018/01/07 23:45:29 main.go:92: INFO: canCreateVolumes=true,
> removeAction="ignore"
> 2018/01/07 23:45:29 main.go:101: INFO: Setting up Ceph Driver for
> PluginID=rbd, cluster=, ceph-user=docker, pool=rbd,
> mount=/var/lib/docker-volumes, config=/etc/ceph/ceph.conf
> 2018/01/07 23:45:29 driver.go:85: INFO: newCephRBDVolumeDriver: setting base
> mount dir=/var/lib/docker-volumes/rbd
> 2018/01/07 23:45:29 main.go:121: INFO: Creating Docker VolumeDriver Handler
> 2018/01/07 23:45:29 main.go:125: INFO: Opening Socket for Docker to connect:
> /run/docker/plugins/rbd.sock
> ```
>
> - when attempting to run a docker image, specifying a volume that does not
> yet exist:
>
> ```
> root@lol-server-045:/var/log# docker run  -u 0 --privileged -it
> --volume-driver rbd -v dummy02:/mnt:rw centos:latest bash
>
> docker: Error response from daemon: create dummy02: VolumeDriver.Create:
> Unable to create Ceph RBD Image(dummy02): exit status 2.
> ```
>
> - Log entry:
>
> ```
> 2018/01/07 23:45:29 driver.go:85: INFO: newCephRBDVolumeDriver: setting base
> mount dir=/var/lib/docker-volumes/rbd
> 2018/01/07 23:45:29 main.go:121: INFO: Creating Docker VolumeDriver Handler
> 2018/01/07 23:45:29 main.go:125: INFO: Opening Socket for Docker to connect:
> /run/docker/plugins/rbd.sock
> 2018/01/07 23:46:56 api.go:188: Entering go-plugins-helpers getPath
> 2018/01/07 23:46:56 driver.go:467: WARN: Image dummy02 does not exist
> 2018/01/07 23:46:56 api.go:132: Entering go-plugins-helpers createPath
> 2018/01/07 23:46:56 driver.go:145: INFO: API Create(&{"dummy02" map[]})
> 2018/01/07 23:46:56 driver.go:153: INFO: createImage(&{"dummy02" map[]})
> 2018/01/07 23:46:56 driver.go:687: INFO: Attempting to create new RBD Image:
> (rbd/dummy02, %!s(int=20480), xfs)
> 2018/01/07 23:46:56 driver.go:203: ERROR: Unable to create Ceph RBD
> Image(dummy02): exit status 2
> ```
>
> - docker log entries:
>
>
> ```
> Jan  7 23:42:03 lol-server-045 kernel: [4063726.059726] aufs
> au_opts_verify:1597:dockerd[107149]: dirperm1 breaks the protection by the
> permission bits on the lower branch
> Jan  7 23:42:30 lol-server-045 kernel: [4063752.624828] aufs
> au_opts_verify:1597:dockerd[107147]: dirperm1 breaks the protection by the
> permission bits on the lower branch
> Jan  7 23:45:20 lol-server-045 rbd-docker-plugin[77813]: 2018/01/07 23:45:20
> main.go:179: INFO: setting log file: /var/log/rbd-docker-plugin.log
> Jan  7 23:45:29 lol-server-045 rbd-docker-plugin[77856]: 2018/01/07 23:45:29
> main.go:179: INFO: setting log file: /var/log/rbd-docker-plugin.log
> Jan  7 23:46:56 lol-server-045 kernel: [4064019.169722] aufs
> au_opts_verify:1597:dockerd[107449]: dirperm1 breaks the protection by the
> permission bits on the lower branch
> Jan  7 23:46:56 lol-server-045 dockerd[107120]:
> time="2018-01-07T23:46:56.857163090+08:00" level=error msg="Handler for POST
> /v1.29/containers/create returned error: create dummy02:
> VolumeDriver.Create: Unable to create Ceph RBD Image(dummy02): exit status
> 2"
> ```
>
> - state of the ceph cluster:
>
> ```
> root@lol-server-045:/var/log# ceph -s
>     cluster 0bb54801-846d-47ac-b14a-3828d830ff3a
>      health HEALTH_OK
>      monmap e1: 1 mons at {lol-server-045=10.0.0.20:6789/0}
>             election epoch 6, quorum 0 lol-server-045
>       fsmap e11: 1/1/1 up {0=lol-server-050=up:active}
>      osdmap e64: 5 osds: 5 up, 5 in
>             flags sortbitwise,require_jewel_osds
>       pgmap v1232770: 192 pgs, 3 pools, 14067 MB data, 82167 objects
>             28396 MB used, 7623 GB / 7651 GB avail
>                  192 active+clean
> ```
>
> Many thanks in advance for any help!
>
> Traiano
>
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



--
Jason


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
