Hi, Alwin,
Command (as requested):

~~~
rbd create --size 4T my_pool.meta/my_image \
    --data-pool my_pool.data \
    --image-feature exclusive-lock \
    --image-feature deep-flatten \
    --image-feature fast-diff \
    --image-feature layering \
    --image-feature object-map \
    --image-feature data-pool
~~~
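(Assuming my_pool.data is erasure-coded, given the separate --data-pool, here's a rough sketch of the checks I'd run to confirm the image is wired up as intended; the pool and image names are just the ones from the command above:)

~~~
# Confirm the image's features and data pool (names taken from the create command):
rbd info my_pool.meta/my_image

# If my_pool.data is erasure-coded, RBD needs overwrites enabled on it:
ceph osd pool get my_pool.data allow_ec_overwrites
# ...and if it isn't already enabled:
# ceph osd pool set my_pool.data allow_ec_overwrites true
~~~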
On 24/03/2024 22:53, Alwin Antreich wrote:
Hi,
March 24, 2024 at 8:19 AM, "duluxoz" <duluxoz@xxxxxxxxx> wrote:
Hi,
Yeah, I've been testing various configurations since I sent my last
email - all to no avail.
So I'm back to the start with a brand-new 4T image which is mapped via
rbdmap to /dev/rbd0.
It's not formatted (yet) and so not mounted.
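(For reference, the mapping is driven by /etc/ceph/rbdmap; the entry below is only a sketch, with a placeholder client name:)

~~~
# /etc/ceph/rbdmap -- one image per line ("myuser" is a placeholder client name)
my_pool.meta/my_image id=myuser,keyring=/etc/ceph/ceph.client.myuser.keyring
~~~

After restarting the rbdmap service, rbd showmapped confirms which image sits behind /dev/rbd0.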
Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
/dev/rbd/my_pool/my_image) I get the errors I previously mentioned, and the
resulting image then becomes unusable (in every sense of the word).
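(One crude isolation test, destructive to whatever is on the image, would be a direct write straight to the block device; if that also throws I/O errors, the problem sits below the filesystem layer entirely. This assumes the test image is still the one behind /dev/rbd0:)

~~~
# Destructive: writes zeros directly to the mapped device (assumed to be /dev/rbd0)
dd if=/dev/zero of=/dev/rbd0 bs=4M count=4 oflag=direct

# Check the kernel log straight afterwards for rbd write errors
dmesg | tail -n 20
~~~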
If I run fdisk -l (before trying the mkfs.xfs), the RBD image shows up
in the list; no, I don't actually run a full fdisk on the image.
An rbd info my_pool/my_image shows the same expected values on both the
host and the Ceph cluster.
I've tried this with a whole bunch of different-sized images, from 100G
to 4T, and they all fail in exactly the same way. (I haven't been able to
reproduce my previous successful 100G test.)
I've also tried all of the above using an "admin" CephX account - I
can always connect via rbdmap, but as soon as I try an mkfs.xfs it
fails. The same failure also occurs with mkfs.ext4 (for all image sizes).
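(One thing I'm not certain of and am only sketching here: with a separate data pool, the CephX key needs OSD caps on both pools, not just the metadata pool. The client name below is a placeholder, and the pool names are the ones from earlier in the thread:)

~~~
# Sketch only -- "myuser" and the pool names are assumptions
ceph auth get-or-create client.myuser \
    mon 'profile rbd' \
    osd 'profile rbd pool=my_pool.meta, profile rbd pool=my_pool.data'

# To see what an existing key is actually allowed to do:
ceph auth get client.myuser
~~~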
The Ceph cluster is good (it self-reports as healthy, and there are other
hosts happily connected via CephFS), and this host also has a working
CephFS mount.
Between running experiments I've gone over the Ceph documentation (again)
and I can't work out what's going wrong.
There's also nothing obvious/helpful jumping out at me from the
logs/journal (sample below):
~~~
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524773 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524772 65536~4128768 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119
callbacks suppressed
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524774 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524773 65536~4128768 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
~~~
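(In case it helps with reading those lines: my assumption is that the kernel rbd client reports negative errno codes, in which case "result -1" would be EPERM. A quick way to decode an errno number:)

~~~
# Decode errno 1 (prints "Operation not permitted" if the assumption holds)
python3 -c "import os; print(os.strerror(1))"
~~~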
Any ideas what I should be looking at?
Could you please share the command you've used to create the RBD?
Cheers,
Alwin
--
*Matthew J BLACK*
M.Inf.Tech.(Data Comms)
MBA
B.Sc.
MACS (Snr), CP, IP3P
When you want it done /right/ ‒ the first time!
Phone: +61 4 0411 0089
Email: matthew@xxxxxxxxxxxxxxx
Web: www.peregrineit.net
LinkedIn: http://au.linkedin.com/in/mjblack