Re: Automatic OSD creation / Floating IP for ceph dashboard

Hi,

> On 31 Jan 2022, at 00:53, Nir Soffer <nsoffer@xxxxxxxxxx> wrote:
> 
> Live migration and snapshots are not available? This is news to me.
> 


Welcome to the krbd world. But there is no fun when you need to update the rbd kernel driver across the entire aggregate, with a migration and a reboot for each host.
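
For context, the attach path is quite different: with krbd the image is mapped by the host kernel and the guest sees a plain block device, so updating the rbd driver touches every mapped device on the host; with librbd, qemu itself talks to the cluster. A rough sketch of what a krbd-backed disk would look like in the domain XML (illustrative device path and volume name, not from our setup):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <!-- image mapped on the host by the rbd kernel module,
           e.g. "rbd map replicated_rbd/volume-XXXX" -->
      <source dev='/dev/rbd/replicated_rbd/volume-XXXX'/>
      <target dev='sda' bus='scsi'/>
    </disk>

Compare with the librbd <disk type='network'> definitions further down.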

> Why do you need to connect 300-400 rbd devices at the same time to a host?

Because the projects are already running, and this is their scale/maintenance path.

> Few years ago we decided to replace the incomplete, unmaintained, and untested
> code with a better way - using cinderlib - called Managed Block Storage (MBS).


But at the same time, support was dropped in the 4.4.6 minor release, not in 4.5 as originally planned. Is that fine? I doubt it.

> Do you have 400 active disks actually used by VMs on every host?

Not 400, but 300 - yes. That is only 8 VMs with 41 disks each, 4 CPU / 8 GB RAM instances - the Gold CPUs can handle even more...

Screenshot: https://ibb.co/HHJzkZX
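
If anyone wants to reproduce the count, something like this works per host (a sketch using the libvirt python bindings; it just counts the <disk type='network'> rbd sources in the running domains, matching the XML below):

    # Count librbd-attached disks per running VM on this host (a sketch,
    # assuming the libvirt python bindings are installed).
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    total = 0
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = ET.fromstring(dom.XMLDesc(0))
        # disks of type 'network' with an rbd source go through librbd,
        # not through kernel-mapped /dev/rbd devices
        disks = [d for d in root.findall("./devices/disk[@type='network']")
                 if d.find("./source[@protocol='rbd']") is not None]
        print(dom.name(), len(disks))
        total += len(disks)
    print('total rbd disks on this host:', total)
    conn.close()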



> If you do, do they perform worse compared with 400 LUNs, or 400 active
> logical volumes?

I can't compare; we use the "old" Cinder oVirt integration (from 2016) and it works flawlessly:

    <disk type='network' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='writeback' error_policy='stop' io='threads' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='e0828f39-2832-4d82-90ee-23b26fc7b20a'/>
      </auth>
      <source protocol='rbd' name='replicated_rbd/volume-86769788-7824-4809-bc56-20ef5cee18fa' index='37'>
        <host name='172.16.16.2' port='3300'/>
        <host name='172.16.16.3' port='3300'/>
        <host name='172.16.16.4' port='3300'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <serial>86769788-7824-4809-bc56-20ef5cee18fa</serial>
      <boot order='1'/>
      <alias name='ua-86769788-7824-4809-bc56-20ef5cee18fa'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='network' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='writeback' error_policy='stop' io='threads' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='e0828f39-2832-4d82-90ee-23b26fc7b20a'/>
      </auth>
      <source protocol='rbd' name='replicated_rbd/volume-9edd06d5-b0cc-437f-9aaa-80ccc391f302' index='36'>
        <host name='172.16.16.2' port='3300'/>
        <host name='172.16.16.3' port='3300'/>
        <host name='172.16.16.4' port='3300'/>
      </source>
      <target dev='sdb' bus='scsi'/>
      <serial>9edd06d5-b0cc-437f-9aaa-80ccc391f302</serial>
      <alias name='ua-9edd06d5-b0cc-437f-9aaa-80ccc391f302'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
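
Each <source> name above is the Cinder volume UUID in the replicated_rbd pool, and the same cephx user from the <auth> element can be used to look at the pool directly, e.g. (a sketch with the python-rados/python-rbd bindings; the conffile path is an assumption):

    # List images in the pool with the same cephx user as the domain XML
    # uses (a sketch; conffile/keyring locations are assumptions).
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
    cluster.connect()
    with cluster.open_ioctx('replicated_rbd') as ioctx:
        images = rbd.RBD().list(ioctx)
        print(len(images), 'images in replicated_rbd')
    cluster.shutdown()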


Thanks,
k


