Fail. Epic fail.
Absolutely reproducible.
I have a ceph cluster with this configuration:
8 physical servers
14 osd servers
Each osd server has its own fs.
48T total ceph cluster size.
17T used.
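For reference, cluster state can be checked at any point with the usual
status command (assuming the standard ceph CLI; not something the steps
below depend on):
ceph -s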
Now, step by step:
1. Stop the ceph osd0 server
/etc/init.d/ceph stop
2. Make a fresh fs for the osd
umount /osd.0
mkfs.ext4 /dev/sdc1
tune2fs -o journal_data_writeback /dev/sdc1
mount -a
# line from /etc/fstab:
# /dev/sdc1 /osd.0 ext4 user_xattr,rw,noexec,nodev,noatime,nodiratime,data=writeback,barrier=0 0 2
ceph mon getmap -o /tmp/monmap   # fetch the current monitor map
cosd --mkfs -i 0 --monmap /tmp/monmap   # initialize osd.0's data dir
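Before restarting, it is worth checking that user_xattr really works on
the fresh fs, since the osd keeps metadata in xattrs. A quick sanity
check (setfattr/getfattr from the attr package; the file name is just an
example):
touch /osd.0/xattr-test
setfattr -n user.test -v 1 /osd.0/xattr-test
getfattr -n user.test /osd.0/xattr-test   # should print user.test="1"
rm /osd.0/xattr-test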
3. Start the ceph osd0 server
/etc/init.d/ceph start
Now, make a big cup of coffee and wait.
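To follow the rebalance instead of guessing, something like this should
work (again assuming the standard ceph CLI):
ceph -w        # stream cluster state changes
ceph health    # or poll until it reports HEALTH_OK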
After rebalancing completes, do:
/etc/init.d/ceph stop
umount /osd.0
fsck.ext4 -fy /dev/sdc1
and see many, many messages like:
Inode 238551053, i_blocks is 24, should be 32. Fix? yes
Inode 238551054, i_blocks is 40, should be 32. Fix? yes
Inode 238551066, i_blocks is 24, should be 32. Fix? yes
Inode 238944257, i_blocks is 8, should be 16. Fix? yes
Inode 239206414, i_blocks is 8, should be 16. Fix? yes
Inode 239206416, i_blocks is 40, should be 32. Fix? yes
Inode 239206431, i_blocks is 8, should be 16. Fix? yes
Inode 239206441, i_blocks is 24, should be 32. Fix? yes
Voila.
P.S. No messages at all in syslog, and none on the console.
WBR,
Fyodor.