Re: Ceph luminous - Erasure code and iSCSI gateway

Thanks - that worked 


[root@osd01 ~]# rbd --image image_ec1 -p rbd info
rbd image 'image_ec1':
        size 51200 MB in 12800 objects
        order 22 (4096 kB objects)
        data_pool: ec_k4_m2
        block_name_prefix: rbd_data.1.fe0f643c9869
        format: 2
        features: layering, data-pool
        flags:
        create_timestamp: Tue Feb 27 09:49:35 2018

[root@osd01 ~]# rbd feature enable image_ec1 exclusive-lock

[root@osd01 ~]# rbd --image image_ec1 -p rbd info
rbd image 'image_ec1':
        size 51200 MB in 12800 objects
        order 22 (4096 kB objects)
        data_pool: ec_k4_m2
        block_name_prefix: rbd_data.1.fe0f643c9869
        format: 2
        features: layering, exclusive-lock, data-pool
        flags:
        create_timestamp: Tue Feb 27 09:49:35 2018

[root@osd01 ~]# gwcli
/disks> create pool=rbd image=image_ec1 size=120G
ok
/disks> ls
o- disks .......................................................................................................... [320G, Disks: 3]
  o- rbd.image_ec1 .............................................................................................. [image_ec1 (120G)]
  o- rbd.vmware02 ................................................................................................ [vmware02 (100G)]
  o- rbd.vmwware01 .............................................................................................. [vmwware01 (100G)]
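
For the record, it looks like the extra "rbd feature enable" step can be skipped by enabling exclusive-lock at create time. A rough sketch, untested, using the same pools as above and a hypothetical second image name (image_ec2):

# metadata lives in the replicated 'rbd' pool, data objects go to the EC pool
rbd create rbd/image_ec2 --size 51200 --data-pool ec_k4_m2 --image-feature layering,exclusive-lock

# confirm layering, exclusive-lock and data-pool are listed before pointing gwcli at it
rbd --image image_ec2 -p rbd info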


On 27 February 2018 at 11:17, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
Do your pre-created images have the exclusive-lock feature enabled?
That is required to utilize them for iSCSI.

On Tue, Feb 27, 2018 at 11:09 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
> Hi Jason,
>
> Thanks for your prompt response
>
> I have not been able to find a way to add an existing image ... it looks
> like I can only create new ones.
>
>
> I'd appreciate it if you could provide details, please.
>
> For example, how would I add the pre-existing image named image_ec1?
>
>  rados -p rbd ls | grep rbd_id
> rbd_id.image01
> rbd_id.image_ec1
> rbd_id.vmware02
> rbd_id.vmwware01
>
> [root@osd01 ~]# gwcli
> /disks> ls
> o- disks .......................................................................................................... [200G, Disks: 2]
>   o- rbd.vmware02 ................................................................................................ [vmware02 (100G)]
>   o- rbd.vmwware01 .............................................................................................. [vmwware01 (100G)]
>
> /disks> create pool=rbd image=image_ec1 size=120G
> Failed : disk create/update failed on osd01. LUN allocation failure
> /disks> exit
>
>
> (LUN.allocate) rbd 'image_ec1' is not compatible with LIO
> Only image features RBD_FEATURE_LAYERING,RBD_FEATURE_EXCLUSIVE_LOCK,RBD_FEATURE_OBJECT_MAP,RBD_FEATURE_FAST_DIFF,RBD_FEATURE_DEEP_FLATTEN are supported
> 2018-02-27 11:06:23,424    ERROR [rbd-target-api:731:_disk()] - LUN alloc problem - (LUN.allocate) rbd 'image_ec1' is not compatible with LIO
>
>
>
> On 27 February 2018 at 10:52, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> Your image does not live in the EC pool -- instead, only the data
>> portion lives within the EC pool. Therefore, you would need to specify
>> the replicated pool where the image lives when attaching it as a
>> backing store for iSCSI (i.e. pre-create it via the rbd CLI):
>>
>> # gwcli
>> /iscsi-target...sx01-657d71e0> cd /disks
>> /disks> create pool=rbd image=image_ec1 size=XYZ
>>
>>
>> On Tue, Feb 27, 2018 at 10:42 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I noticed it is possible to use erasure-coded pools for RBD and CephFS:
>> >
>> > https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
>> >
>> > This got me thinking that I could deploy iSCSI LUNs on EC pools.
>> > However, it appears it is not working.
>> >
>> > Is anyone able to do that, or have I misunderstood?
>> >
>> > Thanks
>> > Steven
>> >
>> > Here is the pool:
>> >
>> > ceph osd pool get ec_k4_m2 all
>> > size: 6
>> > min_size: 5
>> > crash_replay_interval: 0
>> > pg_num: 128
>> > pgp_num: 128
>> > crush_rule: ec_k4_m2
>> > hashpspool: true
>> > nodelete: false
>> > nopgchange: false
>> > nosizechange: false
>> > write_fadvise_dontneed: false
>> > noscrub: false
>> > nodeep-scrub: false
>> > use_gmt_hitset: 1
>> > auid: 0
>> > erasure_code_profile: EC_OSD
>> > fast_read: 0
>> >
>> >
>> > Here is how I created an image, just to make sure RBD is supported:
>> > rbd create rbd/image_ec1 --size 51200 --data-pool ec_k4_m2 --image-feature layering
>> >
>> > Here is what gwcli complains about:
>> > gwcli
>> > /iscsi-target...sx01-657d71e0> cd /disks
>> > /disks> create pool=ec_k4_m2 image=testec size=120G
>> > Invalid pool (ec_k4_m2). Must already exist and be replicated
>> >
>> >
>> >
>> >
>> >
>>
>>
>>
>> --
>> Jason
>
>



--
Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
