Hi All,
I'm trying to mount a Ceph Reef (v18.2.2 - latest version) RBD Image as
a 2nd HDD on a Rocky Linux v9.3 (latest version) host.
The EC pool has been created and initialised and the image has been
created.
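For context, the pool and image were set up along these lines - the
size and the (default) EC profile shown here are just placeholders,
not necessarily the exact values used:

  # EC data pool (overwrites must be enabled for RBD) plus a replicated
  # metadata pool, then the image with its data objects on the EC pool
  ceph osd pool create my_pool.data erasure
  ceph osd pool set my_pool.data allow_ec_overwrites true
  ceph osd pool create my_pool.meta
  rbd pool init my_pool.meta
  rbd create --size 100G --data-pool my_pool.data my_pool.meta/my_image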
The ceph-common package has been installed on the host.
The correct keyring has been added to the host (with a chmod of 600) and
the host has been configured with an rbdmap file as follows:
`my_pool.meta/my_image id=ceph_user,keyring=/etc/ceph/ceph.client.ceph_user.keyring`.
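From memory, the keyring and service side of things looked roughly
like this (the auth caps on the client user aren't shown and may well
differ):

  # keyring exported on an admin node, copied across, then locked down
  ceph auth get client.ceph_user > /etc/ceph/ceph.client.ceph_user.keyring
  chmod 600 /etc/ceph/ceph.client.ceph_user.keyring
  # map the rbdmap entries at boot
  systemctl enable --now rbdmap.service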
When rbdmap.service runs, the image appears as both `/dev/rbd0`
and `/dev/rbd/my_pool.meta/my_image`, exactly as the Ceph doco says it
should.
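For what it's worth, I believe the equivalent manual check - using the
same names as above - would be something like:

  rbd map my_pool.meta/my_image --id ceph_user --keyring /etc/ceph/ceph.client.ceph_user.keyring
  rbd showmapped
  lsblk /dev/rbd0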
So everything *appears* AOK up to this point.
My question now is: should I run `mkfs.xfs` on `/dev/rbd0` *before* or
*after* I try to mount the image (via fstab:
`/dev/rbd/my_pool.meta/my_image /mnt/my_image xfs noauto 0 0` - as
per the Ceph doco)?
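In other words, is the intended order something like the following,
with `mkfs.xfs` run only once, while the image is still blank (the
mount point is just an example)?

  # one-off, before the first mount:
  mkfs.xfs /dev/rbd/my_pool.meta/my_image
  # then, with the fstab entry above in place:
  mkdir -p /mnt/my_image
  mount /mnt/my_image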
The reason I ask is that I've tried this *both* ways and all I get is an
error message (sorry, I can't remember the exact message and I'm not
currently in front of the host to confirm it :-) - but from memory it
was something about not being able to recognise the 1st block, or
something like that).
So, I'm obviously doing something wrong, but I can't work out what
exactly (and the logs don't show any useful info).
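If it helps, I can grab the output of something like the following
next time I'm in front of the host:

  blkid /dev/rbd0
  rbd info my_pool.meta/my_image
  dmesg | tail -n 30
  journalctl -u rbdmap.service --no-pager | tail -n 30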
Do I, for instance, have the process wrong (or misunderstand the exact
process), or is there something else going on?
All comments/suggestions/etc greatly appreciated - thanks in advance
Cheers
Dulux-Oz
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx