Re: Mounting An RBD Via Kernel Modules

I suspect this may be a network or firewall issue between the client and one
OSD server. Perhaps the 100 MB RBD didn't have any object mapped to a PG whose
primary is on the problematic OSD host, but the 2 TB RBD does. Just a
theory.
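
One quick way to poke at that theory, as a rough sketch (the pool and image
names are taken from the thread below; anything in angle brackets is a
placeholder):

[code]

rbd info my_pool.meta/my_image        # note the block_name_prefix

ceph osd map my_pool.data <block_name_prefix>.<object_number_as_16_hex_digits>

ceph osd find <primary_osd_id>        # reports that OSD's host and address

[/code]

"ceph osd map" prints the PG and its up/acting OSD sets (the primary is listed
first); if the client can't reach the address "ceph osd find" reports for that
primary, it would line up with the firewall idea.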

Respectfully,

*Wes Dillingham*
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
wes@xxxxxxxxxxxxxxxxx




On Mon, Mar 25, 2024 at 12:34 AM duluxoz <duluxoz@xxxxxxxxx> wrote:

> Hi Alexander,
>
> Already set (and confirmed by running the command again) - no good, I'm
> afraid.
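>
> For reference, the current value reads back like this (same data pool name
> as below):
>
> [code]
>
> ceph osd pool get my_pool.data allow_ec_overwrites
>
> [/code]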
>
> So I just restarted with a brand new image and ran the following commands
> on the Ceph cluster and on the host, respectively. Results are below:
>
> On the Ceph cluster:
>
> [code]
>
> rbd create --size 4T my_pool.meta/my_image --data-pool my_pool.data \
>     --image-feature exclusive-lock --image-feature deep-flatten \
>     --image-feature fast-diff --image-feature layering \
>     --image-feature object-map --image-feature data-pool
>
> [/code]
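>
> For completeness, the resulting features and data pool can be double-checked
> with:
>
> [code]
>
> rbd info my_pool.meta/my_image
>
> [/code]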
>
> On the host:
>
> [code]
>
> rbd device map my_pool.meta/my_image --id ceph_rbd_user \
>     --keyring /etc/ceph/ceph.client.ceph_rbd_user.keyring
>
> mkfs.xfs /dev/rbd0
>
> [/code]
>
> Results:
>
> [code]
>
> meta-data=/dev/rbd0              isize=512    agcount=32, agsize=33554432 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
> data     =                       bsize=4096   blocks=1073741824, imaxpct=5
>          =                       sunit=16     swidth=16 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=16 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> Discarding blocks...Done.
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x1ffffff00/0x100, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x0/0x100, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on xfs_sb bno 0x0/0x1, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x100000080/0x80, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: read failed: Input/output error
> mkfs.xfs: data size check failed
> mkfs.xfs: filesystem failed to initialize
> [/code]
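>
> For what it's worth, those bno values are in 512-byte sectors, so each
> failed write is at byte offset bno * 512 (with the default 4 MiB RBD object
> size that puts bno 0x0 in object 0 and bno 0x1ffffff00 near the very end of
> the 4T image). The kernel RBD client should also log the underlying errors
> on the host, e.g.:
>
> [code]
>
> dmesg | grep -Ei 'rbd|libceph'
>
> [/code]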
>
> On 25/03/2024 15:17, Alexander E. Patrakov wrote:
> > Hello Matthew,
> >
> > Are overwrites enabled on the erasure-coded pool? If not, here is
> > how to fix it:
> >
> > ceph osd pool set my_pool.data allow_ec_overwrites true
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



