xfs_growfs "autodetects" the block device size. You can force re-read of the block device to refresh this info but might not do anything at all.
There are situations when the block device size will not reflect reality: for example, you can't (or at least couldn't) resize a partition that is in use (mounted, mapped, used in LVM...) without serious hacks, and ioctls on that partition will return the old size until you reboot.
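If you want to check what the kernel currently believes, something like this works (device names are placeholders, adjust to your setup):

sudo blockdev --getsize64 /dev/sda3    # size in bytes as the kernel currently sees it
sudo blockdev --rereadpt /dev/sda      # ask for a partition table re-read; fails with EBUSY while in use
echo 1 | sudo tee /sys/class/block/sda/device/rescan   # SCSI-level rescan of the underlying device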
The block device can also simply lie (e.g. if you hit a bug that makes the rbd device appear larger than it really is).
Device-mapper devices have their own issues.
The only advice I can give is to never, ever shrink LUNs or block devices, and to avoid partitions if you can. I usually set up a fairly large OS drive (with oversized partitions to be safe; assuming you have thin provisioning, it wastes no real space) and a separate data volume without any partitioning, as in the sketch below. This also works around possible alignment issues...
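A minimal sketch of that layout, assuming the data volume shows up as /dev/rbd1 (placeholder):

sudo mkfs.xfs /dev/rbd1         # filesystem directly on the whole device, no partition table
sudo mount /dev/rbd1 /mnt/data  # nothing to realign or repartition when the device grows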
Growing is always safe, shrinking destroys data. I am very surprised that "rbd resize" doesn't require something like a "--i-really-really-know-what-i-am-doing --please-eatmydata" parameter to shrink the image (or does it at least ask for confirmation when shrinking? I can't try it now). Making a typo == instawipe?
My bet would still be that the original image was larger and you shrunk it by mistake. The kernel client most probably never gets the capacity change notification, so you end up creating a filesystem that points outside the device (I'm not sure whether mkfs.xfs actually tries seeking over the full sector range). This is the most plausible explanation I can think of, but anything is possible. I have other ideas if you want to investigate, but I'd take it off-list...
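If you can still reproduce it, comparing the cluster's view with the kernel's view should show the mismatch (image and device names are placeholders):

rbd info rbd/myimage                   # the 'size' line is what the cluster thinks
sudo blockdev --getsize64 /dev/rbd1    # what the kernel client cached when the image was mapped

If the second number is larger than the first, the filesystem was created against a stale size.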
Jan
P.S. Your image is not 2TB but rather 2000 GiB ;-)
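(To spell out the arithmetic: 2 TiB would be 2048 GiB = 2,199,023,255,552 bytes, while 2000 GiB = 2000 × 2^30 = 2,147,483,648,000 bytes, i.e. exactly the 4194304000 512-byte sectors blockdev reports below.)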
Unfortunately I can no longer run those commands for that rbd5, as I had to delete it; I couldn't 'resurrect' it, at least not in a reasonable amount of time.
Here is the output for another image, which is 2TB big:
ceph-admin@ceph-client-01:~$ sudo blockdev --getsz --getss --getbsz /dev/rbd1
4194304000
512
512
ceph-admin@ceph-client-01:~$ xfs_info /dev/rbd1
meta-data="" isize=256 agcount=8127, agsize=64512 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=524288000, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
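As a cross-check, the filesystem and the device agree here: blocks=524288000 × bsize=4096 = 2,147,483,648,000 bytes, exactly the 4194304000 × 512-byte sectors blockdev reports above.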
I know rbd can also shrink the image, but I'm sure I haven't shrunk it. What I did try, accidentally, was to resize the image to the same size it already had, and that operation failed after running for some time. Hmm... I think the failed resize was the culprit for its malfunctioning, then.
Any (additional) advice on how to prevent this type of issue in the future? Should the resize and the xfs_growfs be executed with some particular parameters, for a better configuration of the image and/or the filesystem?
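For what it's worth, a conservative grow workflow might look like this (pool/image name, device, and mount point are placeholders; note that older rbd versions take --size in megabytes, so check what your version expects):

rbd info rbd/myimage                   # note the current size first
rbd resize --size 3072000 rbd/myimage  # grow only; never pass a smaller size
sudo blockdev --getsize64 /dev/rbd1    # confirm the kernel sees the new size before growing
sudo xfs_growfs /mnt/data              # then grow the mounted filesystem to fill the device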
Thank you very much for your help!
Regards,
Bogdan