Re: Ceph - Xen accessing RBDs through libvirt

Hi,

So "somthing" goes wrong:

# cat /var/log/libvirt/libxl/libxl-driver.log
-> ...
2018-05-20 15:28:15.270+0000: libxl:
libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
logfile /var/log/xen/bootloader.7.log
2018-05-20 15:28:15.270+0000: libxl:
libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [26640]
exited with error status 1
2018-05-20 15:28:15.271+0000: libxl:
libxl_create.c:1259:domcreate_rebuild_done: cannot (re-)build domain: -3

# cat /var/log/xen/bootloader.7.log
->
Traceback (most recent call last):
  File "/usr/lib64/xen/bin/pygrub", line 896, in <module>
    part_offs = get_partition_offsets(file)
  File "/usr/lib64/xen/bin/pygrub", line 113, in get_partition_offsets
    image_type = identify_disk_image(file)
  File "/usr/lib64/xen/bin/pygrub", line 56, in identify_disk_image
    fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory:
'rbd:devel-pool/testvm3.rbd:id=libvirt:key=AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==:auth_supported=cephx\\;none:mon_host=10.20.30.1\\:6789\\;10.20.30.2\\:6789\\;10.20.30.3\\:6789'

Your bootloader log shows what goes wrong: pygrub calls os.open() on the rbd: URL as if it were a local file, which cannot work for a network disk, so the guest has to boot its kernel directly instead of via the bootloader. We used to work with Xen hypervisors before we switched to KVM; all the VMs are within OpenStack. There was one thing we had to configure for Xen instances: the base image needed two image properties, "hypervisor_type = xen" and "kernel_id = <IMAGE_ID>", where the image for the kernel_id was uploaded from /usr/lib/grub2/x86_64-xen/grub.xen.
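
For reference, a sketch of how those properties can be set with the
OpenStack CLI (both image IDs are placeholders):

# openstack image set --property hypervisor_type=xen \
    --property kernel_id=<KERNEL_IMAGE_ID> <BASE_IMAGE_ID>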
For VMs independent of OpenStack we had to provide the kernel in the guest config like this:

# kernel="/usr/lib/grub2/x86_64-xen/grub.xen"
kernel="/usr/lib/grub2/i386-xen/grub.xen"

I'm not sure if this is all that's required in your environment, but we managed to run Xen VMs with a Ceph backend.

Regards,
Eugen


Quoting thg <nospam@xxxxxxxxx>:

Hi all@list,

my background: I've been doing Xen for 10+ years, many of them with DRBD
for high availability; for some time now I've preferred GlusterFS with
FUSE as replicated storage, where I place the image files for the VMs.

In my current project we started (successfully) with Xen/GlusterFS too,
but the provider hosting our servers makes extensive use of Ceph, so we
decided to switch in order to get better support for it.

Unfortunately I'm new to Ceph, but with the help of a technician we now
have a 3-node Ceph cluster running that seems to work fine.

Hardware:
- Xeons, 24 cores, 256 GB RAM,
  2x 240 GB system SSDs (RAID 1), 4x 1.92 TB data SSDs (no RAID)

Software we are using:
- CentOS 7.5.1804
- Kernel: 4.9.86-30.el7             @centos-virt-xen-48
- Xen: 4.8.3-5.el7                  @centos-virt-xen-48
- libvirt-xen: 4.1.0-2.xen48.el7    @centos-virt-xen-48
- Ceph: 2:12.2.5-0.el7              @Ceph


What is working:
I've converted a VM image to an RBD device, mapped it, mounted it, and
can start it as a PV guest on the Xen hypervisor via xl create:

# qemu-img convert -O rbd img/testvm.img rbd:devel-pool/testvm3.rbd
# rbd ls -l devel-pool
-> NAME                          SIZE PARENT FMT PROT LOCK
   ...
   testvm3.rbd                 16384M          2
# rbd info devel-pool/testvm3.rbd
-> rbd image 'testvm3.rbd':
       size 16384 MB in 4096 objects
       order 22 (4096 kB objects)
       block_name_prefix: rbd_data.fac72ae8944a
       format: 2
       features: layering, exclusive-lock, object-map, fast-diff,
deep-flatten
       flags:
       create_timestamp: Sun May 20 14:13:42 2018
# qemu-img info rbd:devel-pool/testvm3.rbd
-> image: rbd:devel-pool/testvm3.rbd
   file format: raw
   virtual size: 16G (17179869184 bytes)
   disk size: unavailable

# rbd feature disable devel-pool/testvm3.rbd deep-flatten fast-diff object-map
  (otherwise mapping does not work)
# rbd info devel-pool/testvm3.rbd
-> rbd image 'testvm3.rbd':
       size 16384 MB in 4096 objects
       order 22 (4096 kB objects)
       block_name_prefix: rbd_data.acda2ae8944a
       format: 2
       features: layering, exclusive-lock
       ...
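
Side note: the disable step can be avoided for future images by enabling
only krbd-compatible features at creation time; a sketch with a
placeholder image name:

# rbd create --size 16384 --image-feature layering devel-pool/newvm.rbd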
# rbd map devel-pool/testvm3.rbd
-> /dev/rbd0
# rbd showmapped
-> id pool       image       snap device
   0  devel-pool testvm3.rbd -    /dev/rbd0
# fdisk -l /dev/rbd0
-> Disk /dev/rbd0: 17.2 GB, 17179869184 bytes, 33554432 sectors
   Units = sectors of 1 * 512 = 512 bytes
   Sector size (logical/physical): 512 bytes / 512 bytes
   ...
        Device Boot      Start         End      Blocks   Id  System
   /dev/rbd0p1   *        2048     2099199     1048576   83  Linux
   /dev/rbd0p2         2099200    29362175    13631488   83  Linux
   /dev/rbd0p3        29362176    33554431     2096128   82  Linux swap
   ...
# mount /dev/rbd0p2 /mnt
# ll /mnt/
-> ...
   lrwxrwxrwx.  1 root root    7 Jan  2 23:42 bin -> usr/bin
   drwxr-xr-x.  2 root root    6 Jan  2 23:42 boot
   drwxr-xr-x.  2 root root    6 Jan  2 23:42 dev
   drwxr-xr-x. 81 root root 8192 May  7 02:08 etc
   drwxr-xr-x.  8 root root   98 Jan 29 02:19 home
   ...
   drwxr-xr-x. 19 root root  267 Jan  3 13:22 var
# umount /dev/rbd0p2

# cat testvm3.rbd0
-> name = "testvm3"
   ...
   disk = [ "phy:/dev/rbd0,xvda,w" ]
   ...
# xl create -c testvm3.rbd0
-> Parsing config from testvm3.rbd0
   Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /grub2/grub.cfg
   ...
   Welcome to CentOS Linux 7 (Core)!
   ...
   CentOS Linux 7 (Core)
Kernel 3.10.0-693.11.1.el7.centos.plus.x86_64 on an x86_64

   testvm3 login:
   ...


But this is not really how it should work, because there is no static
assignment of RBDs to the VMs. As far as I understand, there is still no
native Ceph support in Xen, even though it was announced in 2013, so the
way to go is via libvirt?
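
As an aside, xl can apparently also attach an RBD directly through the
qdisk/QEMU backend instead of a krbd mapping; an untested sketch,
assuming the QEMU bundled with the Xen packages was built with rbd
support:

disk = [ "format=raw, vdev=xvda, access=rw, backendtype=qdisk, target=rbd:devel-pool/testvm3.rbd:id=libvirt:conf=/etc/ceph/ceph.conf" ]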


I followed this guide to set up Ceph with libvirt:
<http://docs.ceph.com/docs/master/rbd/libvirt/>:

# ceph auth get-or-create client.libvirt mon 'profile rbd' \
    osd 'profile rbd pool=devel-pool'
-> [client.libvirt]
       key = AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
# ceph auth ls
-> ...
   client.libvirt
       key: AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
       caps: [mon] profile rbd
       caps: [osd] profile rbd pool=devel-pool
       ...
# vi secret.xml
->
<secret ephemeral='no' private='no'>
        <usage type='ceph'>
                <name>client.libvirt secret</name>
        </usage>
</secret>
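
A fixed UUID can also be embedded in the secret XML, so it does not have
to be copied from the virsh output afterwards; a sketch reusing the UUID
below:

<secret ephemeral='no' private='no'>
        <uuid>07f3a0fe-0000-1111-2222-333333333333</uuid>
        <usage type='ceph'>
                <name>client.libvirt secret</name>
        </usage>
</secret>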

# virsh secret-define --file secret.xml
-> Secret 07f3a0fe-0000-1111-2222-333333333333 created
# ceph auth get-key client.libvirt > client.libvirt.key
# cat client.libvirt.key
-> AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
# virsh secret-set-value --secret 07f3a0fe-0000-1111-2222-333333333333 \
    --base64 $(cat client.libvirt.key)
-> Secret value set
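
The stored value can be cross-checked with virsh secret-get-value; it
should print the same base64 key:

# virsh secret-get-value 07f3a0fe-0000-1111-2222-333333333333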

# vi xml/testvm3.xml
->
<domain type='xen'>
  <name>testvm3</name>
  ...
  <devices>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='devel-pool/testvm3.rbd'>
        <host name="10.20.30.1" port="6789"/>
        <host name="10.20.30.2" port="6789"/>
        <host name="10.20.30.3" port="6789"/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='07f3a0fe-0000-1111-2222-333333333333'/>
      </auth>
      <target dev='xvda' bus='xen'/>
    </disk>
    ...

# virsh define xml/testvm3.xml
-> Domain testvm3 defined from xml/testvm3.xml
# virsh start --console testvm3
error: Failed to start domain testvm3
error: internal error: libxenlight failed to create new domain 'testvm3'


So "somthing" goes wrong:

# cat /var/log/libvirt/libxl/libxl-driver.log
-> ...
2018-05-20 15:28:15.270+0000: libxl:
libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
logfile /var/log/xen/bootloader.7.log
2018-05-20 15:28:15.270+0000: libxl:
libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [26640]
exited with error status 1
2018-05-20 15:28:15.271+0000: libxl:
libxl_create.c:1259:domcreate_rebuild_done: cannot (re-)build domain: -3

# cat /var/log/xen/bootloader.7.log
->
Traceback (most recent call last):
  File "/usr/lib64/xen/bin/pygrub", line 896, in <module>
    part_offs = get_partition_offsets(file)
  File "/usr/lib64/xen/bin/pygrub", line 113, in get_partition_offsets
    image_type = identify_disk_image(file)
  File "/usr/lib64/xen/bin/pygrub", line 56, in identify_disk_image
    fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory:
'rbd:devel-pool/testvm3.rbd:id=libvirt:key=AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==:auth_supported=cephx\\;none:mon_host=10.20.30.1\\:6789\\;10.20.30.2\\:6789\\;10.20.30.3\\:6789'


So, as far as I read the logs, Xen does not find the RBD device, but I
have no clue how to solve this :-(


Thanks a lot for your hints,
--

kind regards,

thg



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


