Unfortunately I can no longer execute those commands for that rbd5, as I had to delete it; I couldn't 'resurrect' it, at least not in a reasonable amount of time.
Here is the output for another image, which is 2 TB in size:
ceph-admin@ceph-client-01:~$ sudo blockdev --getsz --getss --getbsz /dev/rbd1
4194304000
512
512
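
Those three values are the size in 512-byte sectors, the logical sector size and the kernel block size, so the device itself reports the full ~2 TB - a quick shell-arithmetic check:

echo $((4194304000 * 512))    # 4194304000 sectors * 512 bytes/sector = 2147483648000 bytes, i.e. ~2 TB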
ceph-admin@ceph-client-01:~$ xfs_info /dev/rbd1
meta-data=/dev/rbd1              isize=256    agcount=8127, agsize=64512 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=524288000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
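
The filesystem size reported by xfs_info matches the device size exactly, which is what I would expect for a healthy image - the same kind of check, using the xfs_info numbers:

echo $((524288000 * 4096))    # data blocks * bsize = 2147483648000 bytes, the same figure blockdev gives

so on this image XFS and the block device agree about the size.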
On Thu, Nov 12, 2015 at 11:00 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:

Can you post the output of:

blockdev --getsz --getss --getbsz /dev/rbd5

and

xfs_info /dev/rbd5

rbd resize can actually (?) shrink the image as well - is it possible that the device was actually larger and you shrunk it?

Jan

On 12 Nov 2015, at 21:46, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:

Is there a better way to resize an RBD image and the filesystem?

On Thu, Nov 12, 2015 at 10:35 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:

On 12 Nov 2015, at 20:49, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:

As you mentioned the filesystem thinking the block device should be larger than it is - I have initially created that image as a 2GB image, and then resized it to be much bigger. Could this be the issue?

The filesystem was created using mkfs.xfs, after creating the RBD block device and mapping it on the Ceph client. I haven't specified any parameters when I created the filesystem, I just ran mkfs.xfs on the image name.

Hello Jan!

Thank you for your advices, first of all!

Sounds more than likely :-) How exactly did you grow it?

Jan

Thank you, once again!

Fortunately it's not important data, it's just testing data. If I won't succeed repairing it I will trash and re-create it, of course.

There are several RBD images mounted on one Ceph client, but only one of them had issues. I have made a clone, and I will try running fsck on it.

On Thu, Nov 12, 2015 at 9:28 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:

How did you create filesystems and/or partitions on this RBD block device?

The obvious causes would be

1) you partitioned it and the partition on which you ran mkfs points or pointed during mkfs outside the block device size (happens if you for example automate this and confuse sectors x cylinders, or if you copied the partition table with dd or from some image)

or

2) mkfs created the filesystem with pointers outside of the block device for some other reason (bug?)

or

3) this RBD device is a snapshot that got corrupted (or wasn't snapshotted in crash-consistent state and you got "lucky") and some reference points to a non-sensical block number (fsck could fix this, but I wouldn't trust the data integrity anymore)

Basically the filesystem thinks the block device should be larger than it is and tries to reach beyond.

Is this just one machine or RBD image or is there more?

I'd first create a snapshot and then try running fsck on it, it should hopefully tell you if there's a problem in setup or a corruption.

If it's not important data and it's just one instance of this problem then I'd just trash and recreate it.

Jan

On 12 Nov 2015, at 20:14, Bogdan SOLGA <bogdan.solga@xxxxxxxxx> wrote:

Hello everyone!

We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and today I noticed a lot of 'attempt to access beyond end of device' messages in the /var/log/syslog file. They are related to a mounted RBD image, and have the following format:

Nov 12 21:06:44 ceph-client-01 kernel: [438507.952532] attempt to access beyond end of device
Nov 12 21:06:44 ceph-client-01 kernel: [438507.952534] rbd5: rw=33, want=6193176, limit=4194304

After restarting that Ceph client, I see a lot of 'metadata I/O error' messages in the boot log:

XFS (rbd5): metadata I/O error: block 0x46e001 ("xfs_buf_iodone_callbacks") error 5 numblks 1

Any idea on why these messages are shown? The health of the cluster shows as OK, and I can access that block device without (apparent) issues...

Thank you!

Regards,
Bogdan
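
P.S. If I read the kernel message from the original report correctly, 'want' and 'limit' are counted in 512-byte sectors, and the 'limit' corresponds exactly to the 2 GB the image was originally created with:

echo $((4194304 * 512))    # limit: 4194304 sectors * 512 bytes = 2147483648 bytes = 2 GB, the original image size
echo $((6193176 * 512))    # want:  6193176 sectors * 512 bytes = 3170906112 bytes, ~1 GB beyond the device end

so the filesystem was laid out for a device larger than the 2 GB the kernel was actually seeing, which fits Jan's explanation.

For reference, the usual sequence for growing an RBD image together with the XFS on it would be roughly the following (pool name, image name and mount point below are only placeholders):

rbd resize --size 2048000 rbd/some-image    # --size is given in MB on hammer (0.94), so this would be ~2 TB
sudo xfs_growfs -d /mnt/some-image          # grow the mounted XFS data section to fill the resized device

Depending on the kernel, an already-mapped device may not pick up the new size until it is unmounted, unmapped and mapped again, so it is worth re-checking 'blockdev --getsz' after the resize and before running xfs_growfs.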
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com