Re: RBD Image can't be formatted - blk_error

Hi Ilya,

Sorry for the late reply, I've been sick all week long :-/ and then really
busy at work once I got back.

I've tried wiping the image by zeroing it (I even tried wiping it fully),
and I still see the same error message.
The thing is, isn't a newly created image supposed to be empty anyway?
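To be concrete, the zeroing was a plain dd against the mapped device, something
along these lines (the device name and count here are only illustrative):

dd if=/dev/zero of=/dev/rbd17 bs=1M count=1024 oflag=direct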

Regarding the pool creation: both, actually. I created a new metadata pool
(archives) and a new data pool (archives-data), since this pool is used for
EC-backed RBD images.
I've also tried deleting and re-creating the pools, both with a different
name and with the same name, and we hit the issue every time.
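The delete/re-create cycles used the standard command, i.e. something like:

ceph osd pool delete archives archives --yes-i-really-really-mean-it
ceph osd pool delete archives-data archives-data --yes-i-really-really-mean-it

before creating them again as shown below.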

Here are the commands I used to create those pools and volumes:

POOLS CREATION:

ceph osd pool create archives 1024 1024 replicated
ceph osd pool create archives-data 1024 1024 erasure standard-ec
ceph osd pool set archives-data allow_ec_overwrites true


VOLUME CREATION:

rbd create --size 80T --data-pool archives-data archives/mirror
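The data-pool assignment can be double-checked with:

rbd info archives/mirror

which should list archives-data as the data_pool of the image.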
Just for additional information, we use the following EC profile:

k=3
m=2
plugin=jerasure
crush-failure-domain=host
crush-device-class=ssd
technique=reed_sol_van
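In case it matters, that profile was defined with the usual command, roughly:

ceph osd erasure-code-profile set standard-ec k=3 m=2 plugin=jerasure technique=reed_sol_van crush-failure-domain=host crush-device-class=ssd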

This cluster is composed of 10 OSD nodes, each filled with 24 x 8 TB SSDs.
With k=3 and m=2 the CRUSH rule only needs 5 distinct hosts and we have twice
that, so unless my maths are wrong the profile is fine and this shouldn't be
a profile/crushmap issue.

I haven't tried mapping the volume with the admin user though. You're right
that I should, to rule out any auth issue, but I doubt it's related since a
smaller image works just fine with this client key on the same pools.
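The next step on my side is to retry the sector-0 write while mapped as admin,
something like:

rbd map --id admin archives/mirror
dd if=/dev/zero of=/dev/rbd<devMapID> bs=512 count=1 oflag=direct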

Thanks a lot for following up, by the way, and sorry again for the really
late answer!


On Mon, Jan 11, 2021 at 1:38 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:

> On Mon, Jan 11, 2021 at 10:09 AM Gaël THEROND <gael.therond@xxxxxxxxxxxx>
> wrote:
> >
> > Hi Ilya,
> >
> > Here is additional information:
> > My cluster is a three OSD Nodes cluster with each node having 24 4TB SSD
> disks.
> >
> > The mkfs.xfs command fail with the following error:
> https://pastebin.com/yTmMUtQs
> >
> > I'm using the following command to format the image: mkfs.xfs
> /dev/rbd/<pool_name>/<image_name>
> > I'm facing the same problem (and same sectors) if I'm directly targeting
> the device with mkfs.xfs /dev/rbd<devMapID>
> >
> > The client authentication caps are as follows:
> https://pastebin.com/UuAHRycF
> >
> > Regarding your questions, yes, it is a persistent issue as soon as I try
> to create a large image from a newly created pool.
> > Yes, after the first attempt, all new attempts fail too.
> > Yes, it is always the same set of sectors that fails.
>
> Have you tried writing to sector 0, just to take mkfs.xfs out of the
> picture?  E.g. "dd if=/dev/zero of=/dev/rbd17 bs=512 count=1 oflag=direct"?
>
> >
> > Strange thing is, if I use an already existing pool, and create this
> 80Tb image within this pool, it formats it correctly.
>
> What do you mean by a newly created pool?  A metadata pool, a data pool
> or both?
>
> Are you deleting and re-creating pools (whether metadata or data) with
> the same name?  It would help if you paste all commands, starting with
> how you create pools all the way to a failing write.
>
> Have you tried mapping using the admin user ("rbd map --id admin ...")?
>
> Thanks,
>
>                 Ilya
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



