Re: Mounting an RBD Via Kernel Modules

I may be barking up the wrong tree, but if you run ip -s link show
yourNicID on this server or your OSDs, do you see any errors/dropped/missed?
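A quick way to eyeball those counters is to pull just the errors/dropped columns out of the ip -s link output. This is only a sketch: it parses a captured sample so it is self-contained, and the NIC name and numbers are made-up placeholders, not data from your hosts.

```shell
# Hypothetical sample of "ip -s link show eth0" output; on a real host you
# would pipe the live command instead of this captured text.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast
    123456789  987654      0      12       0       0
    TX:  bytes packets errors dropped carrier collsns
    987654321  876543      3       0       0       0'

# The counter values sit on the line after each RX:/TX: header, so read
# one line ahead and print the errors (3rd) and dropped (4th) columns.
printf '%s\n' "$sample" | awk '
    /RX: *bytes/ { getline; print "rx_errors=" $3, "rx_dropped=" $4 }
    /TX: *bytes/ { getline; print "tx_errors=" $3, "tx_dropped=" $4 }'
```

Anything steadily climbing in errors/dropped/missed on either end is worth chasing before blaming RBD itself.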

On Sun, 24 Mar 2024, 09:20 duluxoz, <duluxoz@xxxxxxxxx> wrote:

> Hi,
>
> Yeah, I've been testing various configurations since I sent my last
> email - all to no avail.
>
> So I'm back to the start with a brand-new 4 TB image, which rbdmap has
> mapped to /dev/rbd0.
>
> It's not formatted (yet), and so not mounted.
>
> Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
> /dev/rbd/my_pool/my_image) I get the errors I previously mentioned, and
> the resulting image then becomes unusable (in every sense of the word).
>
> If I run fdisk -l (before trying the mkfs.xfs), the rbd image shows up
> in the list - no, I don't actually run a full fdisk on the image.
>
> An rbd info my_pool/my_image shows the same expected values on both the
> host and the Ceph cluster.
>
> I've tried this with a whole bunch of different-sized images, from 100 GB
> to 4 TB, and all fail in exactly the same way. (I haven't been able to
> reproduce my previous successful 100 GB test.)
>
> I've also tried all of the above using an "admin" CephX account - I can
> always connect via rbdmap, but as soon as I try a mkfs.xfs it fails. The
> same failure also occurs with mkfs.ext4 (at all image sizes).
>
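For what it's worth, map-succeeds-but-mkfs-fails is the classic shape of OSD caps that allow reading the image but not writing to the pool. A keyring created the documented way, e.g. with ceph auth get-or-create client.my_user mon 'profile rbd' osd 'profile rbd pool=my_pool', ends up with caps like the fragment below (the client name and pool are placeholders here, not your real ones); comparing this against the output of ceph auth get client.my_user on the cluster only takes a minute:

```
[client.my_user]
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=my_pool"
```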
> The Ceph cluster is healthy (self-reported, and there are other hosts
> happily connected via CephFS), and this host also has a working CephFS
> mapping.
>
> Between running experiments I've gone over the Ceph docs (again) and I
> can't work out what's going wrong.
>
> There's also nothing obvious/helpful jumping out at me from the
> logs/journal (sample below):
>
> ~~~
>
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524772 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119
> callbacks suppressed
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
> 4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524774 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
> 4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> ~~~
>
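One thing in that log may narrow it down: the kernel reports the write status as a negative errno, and result -1 is -EPERM ("Operation not permitted"), not -EIO, which smells more like permissions/caps than a network or disk problem. A quick lookup sketch (assuming python3 is available on the host):

```shell
# Translate the "result -N" codes from the krbd log lines into errno text.
# result -1 is EPERM; a transport problem would more typically show up as
# -EIO (-5) or -ETIMEDOUT (-110).
for code in 1 5 110; do
    python3 -c 'import os, sys; c = int(sys.argv[1]); print(f"result -{c}: {os.strerror(c)}")' "$code"
done
```

On Linux this prints "result -1: Operation not permitted", "result -5: Input/output error" and "result -110: Connection timed out".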
> Any ideas what I should be looking at?
>
> And thank you for the help  :-)
>
> On 24/03/2024 17:50, Alexander E. Patrakov wrote:
> > Hi,
> >
> > Please test again; it must have been some network issue. A 10 TB RBD
> > image is used here without any problems.
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>