Re: Mounting image from erasure-coded pool without tiering in KVM

Check if you have a recent enough librbd installed on your VM hosts.
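A quick way to check this on the KVM host (a sketch — package names differ by distro, shown here for Debian/Ubuntu and RHEL/CentOS):

```shell
# Check which librbd version is installed on the hypervisor.
dpkg -l | grep librbd      # Debian/Ubuntu
rpm -q librbd1             # RHEL/CentOS

# Images with a separate data pool use the data-pool feature, which was
# introduced in Luminous (12.x); an older librbd linked into QEMU cannot
# open such an image and fails when reading the header.
```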

Hello, all!
I have a problem attaching image volumes to my KVM VM.
I prepared an erasure-coded pool (named data01) on full-BlueStore OSDs
and enabled ec_overwrites on it. I also created a replicated pool named
ssd-repl for the image volume metadata.

Pools were prepared by:
ceph osd pool create data01 1024 1024 erasure 2-1-isa-v
ceph osd pool set data01 allow_ec_overwrites true
rbd pool init data01
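The pool settings above can be double-checked with (same pool names as above):

```shell
# Confirm EC overwrites are actually enabled on the data pool.
ceph osd pool get data01 allow_ec_overwrites

# Review both pools' flags, size, and application tags.
ceph osd pool ls detail | grep -E 'data01|ssd-repl'
```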

Image was created using:
rbd create --size 25G --data-pool data01 ssd-repl/vm-5

Image info:
[ceph@alfa-csn-01 ~]$ rbd info ssd-repl/vm-5
rbd image 'vm-5':
       size 25 GiB in 6400 objects
       order 22 (4 MiB objects)
       id: a20c46b8b4567
       data_pool: data01
       block_name_prefix: rbd_data.21.a20c46b8b4567
       format: 2
       features: layering, exclusive-lock, object-map, fast-diff,
deep-flatten, data-pool
       op_features:
       flags:
       create_timestamp: Tue Mar  5 16:51:59 2019

So it seems everything should work.
But when I try to start the VM with this disk attached, I get the
following error:
root@alfa-cpu-02:~# virsh start vm-5
error: Failed to start domain vm-5
error: internal error: process exited while connecting to monitor:
2019-03-05T13:53:30.020525Z qemu-system-x86_64: -drive
file=rbd:ssd-repl/vm-5:id=libvirt:key=AQBD5GJc40bjN
hAA7qV6hZYumI7FUDkhElxMYw==:auth_supported=cephx\;none:mon_host=10.212.3.161\:6789,format=raw,if=none,id=drive-virtio-disk1:
error reading header from vm-5
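One way to narrow this down (assuming the libvirt secret maps to the Ceph user client.libvirt) is to open the image from the same host with the rbd CLI:

```shell
# Open the image with the same identity QEMU uses. If this works but
# QEMU still fails, the librbd that QEMU links against is likely too
# old to understand the data-pool feature.
rbd --id libvirt info ssd-repl/vm-5

# Read some actual data, which touches the EC data pool rather than
# just the metadata pool.
rbd --id libvirt export ssd-repl/vm-5 - | head -c 4096 >/dev/null

# Check that the user's caps cover the data pool as well as ssd-repl.
ceph auth get client.libvirt
```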

XML config for this volume from my VM:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='4acff7d5-9c31-42c3-83ea-d32f20c7417a'/>
  </auth>
  <source protocol='rbd' name='ssd-repl/vm-5'>
    <host name='10.212.3.161' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>

If I create the whole image in the replicated pool, everything works as
expected: I can connect to the disk and use it inside the VM.
What could be the reason for this behavior?
What did I miss in the configuration?

Thanks in advance!

--
With best regards,
  Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


