Re: Mounting An RBD Via Kernel Modules

Hi All,

OK, an update for everyone, a note about some (what I believe to be) missing information in the Ceph Doco, a success story, and an admission on my part that I may have left out some important information.

So to start with, I finally got everything working - I now have my 4T RBD Image mapped, mounted, and tested on my host.  YAAAAA!

The missing Ceph Doco Info:

What I found in the latest Red Hat documentation (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/7/html/block_device_guide/the-rbd-kernel-module) that is not in the Ceph documentation (perhaps because it is EL-specific? - but a note should be placed anyway, even if it is) is that the RBD Image needs to have a partition table and partition created on it. That might be "obvious" to some, but my ongoing belief is that most "obvious" things aren't, so it's better to be explicit about such things. Just my $0.02 worth.  :-)

The relevant commands, which are performed after an `rbd map my_pool.meta/my_image --id my_image_user`, are:

[codeblock]
parted /dev/rbd0 mklabel gpt
parted /dev/rbd0 mkpart primary xfs 0% 100%
[/codeblock]
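As a quick sanity check at this point (standard commands, nothing setup-specific), you can confirm the mapping and the new partition with:

[codeblock]
rbd showmapped            # should list my_pool.meta/my_image against /dev/rbd0
lsblk /dev/rbd0           # should show the new rbd0p1 partition
parted /dev/rbd0 print    # confirms the GPT label and the partition
[/codeblock]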

From there the RBD Image needs a file system: `mkfs.xfs /dev/rbd0p1`

And a mount: `mount /dev/rbd0p1 /mnt/my_image`
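For anyone wanting this to survive a reboot: my understanding is that the usual approach is the `rbdmap` service plus an fstab entry. The snippet below is a sketch based on my setup's names (the keyring path is an assumption - adjust for your environment):

[codeblock]
# /etc/ceph/rbdmap - images the rbdmap service maps at boot
my_pool.meta/my_image id=my_image_user,keyring=/etc/ceph/ceph.client.my_image_user.keyring

# /etc/fstab - use the udev symlink rather than /dev/rbd0 (the number can change);
# noauto, so the boot process doesn't try to mount it before the image is mapped
/dev/rbd/my_pool.meta/my_image-part1  /mnt/my_image  xfs  noauto  0 0

# then enable the service
systemctl enable --now rbdmap
[/codeblock]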

Now, the omission on my part:

The host I was attempting all this on was an oVirt-managed VM. Apparently, an oVirt-managed VM doesn't like/allow (speculation on my part) running the `parted` or `mkfs.xfs` commands on an RBD Image. What I had to do to test this and get it working was to run the `rbd map`, `parted`, and `mkfs.xfs` commands on a physical host (which I did), THEN unmount/unmap the image from the physical host and map/mount it on the VM.
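For the record, the hand-over between the two hosts was just the following (assuming the default /dev/rbd0 naming on both):

[codeblock]
# On the physical host, once parted and mkfs.xfs are done:
umount /mnt/my_image        # only if it was test-mounted there
rbd unmap /dev/rbd0

# Then on the VM:
rbd map my_pool.meta/my_image --id my_image_user
mount /dev/rbd0p1 /mnt/my_image
[/codeblock]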

So my apologies for not providing all the info - I didn't consider it to be relevant - my bad!

So all good in the end. I hope the above helps others if they have similar issues.

Thank you all who helped / pitched in with ideas - I really, *really* appreciate it.

Thanks too to Wesley Dillingham - although his suggestion wasn't relevant to this issue, it did cause me to look at the firewall settings on the Ceph Cluster, where I found (and corrected) an unrelated issue that hadn't reared its ugly head yet. Thanks Wes.

Cheers (until next time)  :-P

Dulux-Oz
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



