Re: New Issue - Mapping Block Devices

Hello,

Can you show the output of the 'lsblk' command?
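
For example (assuming the image is still mapped as /dev/rbd0), something
like this should show whether the partition device was actually created:

    lsblk /dev/rbd0
    ls -l /dev/rbd0*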

Regards,

On 3/23/21 9:38 AM, duluxoz wrote:
> Hi Ilya,
>
> OK, so I've updated the my-id permissions to include 'profile rbd
> pool=my-pool-data'.
>
> Yes, "rbd device map" does succeed (both before and after the my-id
> update).
>
> The full dmesg from the "rbd device map" command is:
>
> [18538.539416] libceph: mon0 (1)<REDACTED>:6789 session established
> [18538.554143] libceph: client25428 fsid <REDACTED>
> [18538.615761] rbd: rbd0: capacity 1099511627776 features 0xbd
>
> The full dmesg from the fdisk command (which seems to have worked
> now that I've updated the my-id auth) is:
>
> [18770.784126]  rbd0: p1
>
> There is no dmesg from the mount command. The mount command itself gives:
>
> mount: /my-rbd-bloc-device: special device /dev/rbd0p1 does not exist
> (same as before I updated my-id)
>
> Cheers
>
> Matthew J
>
>
> On 23/03/2021 17:34, Ilya Dryomov wrote:
>> On Tue, Mar 23, 2021 at 6:13 AM duluxoz <duluxoz@xxxxxxxxx> wrote:
>>> Hi All,
>>>
>>> I've got a new issue (hopefully this one will be the last).
>>>
>>> I have a working Ceph (Octopus) cluster with a replicated pool
>>> (my-pool), an erasure-coded pool (my-pool-data), and an image
>>> (my-image)
>>> created - all *seems* to be working correctly. I also have the correct
>>> Keyring specified (ceph.client.my-id.keyring).
>>>
>>> ceph -s is reporting all healthy.
>>>
>>> The ec profile (my-ec-profile) was created with: ceph osd
>>> erasure-code-profile set my-ec-profile k=4 m=2
>>> crush-failure-domain=host
>>>
>>> The replicated pool was created with: ceph osd pool create my-pool 100
>>> 100 replicated
>>>
>>> Followed by: rbd pool init my-pool
>>>
>>> The ec pool was created with: ceph osd pool create my-pool-data 100 100
>>> erasure my-ec-profile --autoscale-mode=on
>>>
>>> Followed by: rbd pool init my-pool-data
>>>
>>> The image was created with: rbd create -s 1T --data-pool my-pool-data
>>> my-pool/my-image
>>>
>>> The Keyring was created with: ceph auth get-or-create client.my-id mon
>>> 'profile rbd' osd 'profile rbd pool=my-pool' mgr 'profile rbd
>>> pool=my-pool' -o /etc/ceph/ceph.client.my-id.keyring
>> Hi Matthew,
>>
>> If you are using a separate data pool, you need to give "my-id" access
>> to it:
>>
>>    osd 'profile rbd pool=my-pool, profile rbd pool=my-pool-data'
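>>
>> If the keyring already exists, something along these lines should update
>> the caps in place (a sketch; the mon and mgr caps are just carried over
>> from your original get-or-create command):
>>
>>    ceph auth caps client.my-id mon 'profile rbd' \
>>        osd 'profile rbd pool=my-pool, profile rbd pool=my-pool-data' \
>>        mgr 'profile rbd pool=my-pool'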
>>
>>> On a centos8 client machine I have installed ceph-common, placed the
>>> Keyring file into /etc/ceph/, and run the command: rbd device map
>>> my-pool/my-image --id my-id
>> Does "rbd device map" actually succeed?  Can you attach dmesg from that
>> client machine from when you (attempted to) map, ran fdisk, etc?
>>
>> Thanks,
>>
>>                  Ilya
-- 
PS
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



