Re: cephx capabilities to forbid rbd creation

So, one key per RBD.
Or, dynamically enable/disable access to each RBD in each hypervisor's key.
Uhm, something doesn't scale here. :P
(I wonder if there's any limit on the length of a key's capabilities string...)
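
For the sake of argument, here's roughly what the per-image variant could
look like, reusing the object_prefix cap syntax from my original key below.
Untested; the client name, image name and prefix are made up, the real
block_name_prefix comes from "rbd info":

    # the image's data prefix, e.g. "block_name_prefix: rbd_data.1234567890ab"
    rbd info rbd/vm01

    # pin the client's caps to that single image's objects
    # (ceph auth caps replaces all caps, so the mon cap must be restated)
    ceph auth caps client.vm01 \
        mon 'allow r' \
        osd 'allow r object_prefix rbd_id.vm01, allow rwx object_prefix rbd_header.1234567890ab, allow rwx object_prefix rbd_data.1234567890ab'

That's one "ceph auth caps" run per image per hypervisor, on every create,
delete or live migration; hence the scaling worry above.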

But, as it stands, I share your view that this is the only workable
approach right now.

Would anyone like to prove us wrong? :)
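
(By the way, a quick sanity check for whether a given key can still create
images; hypothetical image name again, and "rbd-user" as in my original
mail below:

    # expected to fail with a permission error if the caps are tight enough
    rbd --id rbd-user create rbd/should-not-work --size 1024

Creating a format 2 image writes fresh rbd_id.* and rbd_header.* objects,
and our current rbd-user caps quoted below do allow rwx on those prefixes.)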

On 15/03/2016 22:33, David Casier wrote:
> Hi,
> Maybe (not tested) :
> [osd] allow * object_prefix <block_name_prefix> ?
>
>
>
> 2016-03-15 22:18 GMT+01:00 Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx>:
>> Hi David,
>>
>> One pool per virtualization host would make it impossible to live
>> migrate a VM. :)
>>
>> Thanks,
>>
>> Loris
>>
>>
>> On 15/03/2016 22:11, David Casier wrote:
>>> Hi Loris,
>>> If I'm not mistaken, there are no RBD ACLs in cephx.
>>> Why not one pool per client, plus a pool quota?
>>>
>>> David.
>>>
>>> 2016-02-12 3:34 GMT+01:00 Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx>:
>>>> Hi!
>>>>
>>>> We are on version 9.2.0, 5 mons and 80 OSDS distributed on 10 hosts.
>>>>
>>>> How could we twist cephx capabilities so as to forbid our KVM+QEMU+libvirt
>>>> hosts any RBD creation capability?
>>>>
>>>> We currently have an rbd-user key like so:
>>>>
>>>>         caps: [mon] allow r
>>>>         caps: [osd] allow x object_prefix rbd_children, allow rwx
>>>> object_prefix rbd_header., allow rwx object_prefix rbd_id., allow rw
>>>> object_prefix rbd_data.
>>>>
>>>>
>>>> And another rbd-manager key, like the one suggested in the documentation,
>>>> used on a central machine which is the only one allowed to create
>>>> RBD images:
>>>>
>>>>         caps: [mon] allow r
>>>>         caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>>>> pool=rbd
>>>>
>>>> Now, the libvirt hosts all share the same "rbd-user" secret.
>>>> Our intention is to let the QEMU processes take full advantage of every
>>>> RBD feature, but to forbid any new RBD creation with this same key,
>>>> in case of a stolen key or other hellish scenarios.
>>>>
>>>> What cephx capabilities did you guys configure for your virtualization
>>>> hosts?
>>>>
>>>> Thanks,
>>>>
>>>> Loris
>>>
>
>