Hi list,
we have a Luminous BlueStore cluster with separate block.db/block.wal
on SSDs. We were running version 12.2.2 and upgraded yesterday to
12.2.5. The upgrade went smoothly, but since the restart of the OSDs I
have noticed that 'ceph osd df' reports a smaller SIZE for each OSD:
---cut here---
ceph1:~ # ceph osd df
ID CLASS  WEIGHT REWEIGHT  SIZE    USE AVAIL  %USE  VAR PGS
 1   hdd 0.92429  1.00000  931G   557G  373G 59.85 1.03 681
 4   hdd 0.92429  1.00000  931G   535G  395G 57.52 0.99 645
 6   hdd 0.92429  1.00000  931G   532G  398G 57.19 0.99 640
13   hdd 0.92429  1.00000  931G   587G  343G 63.08 1.09 671
16   hdd 0.92429  1.00000  931G   562G  368G 60.40 1.04 665
18   hdd 0.92429  1.00000  931G   531G  399G 57.07 0.98 623
10   ssd 0.72769  1.00000  745G 18423M  727G  2.41 0.04  37
---cut here---
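Just for context, the 15G block.db size mentioned below can be checked
via the bluefs perf counters on the OSD admin socket. The values here
are only an example for a 15G DB partition, not a paste from our
cluster:
---cut here---
ceph1:~ # ceph daemon osd.1 perf dump | grep total_bytes
        "db_total_bytes": 16106127360,
        "wal_total_bytes": 1073741824,
        "slow_total_bytes": 0,
---cut here---
16106127360 bytes is ~15G, which is where the 15G in the calculation
below comes from.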
Before the upgrade the displayed size for each 1 TB disk was 946G;
each OSD has a 15G block.db, so the old SIZE apparently included the
DB partition (931 + 15 = 946). It seems that one of the recent changes
within 12.2.x altered how the size is reported, which also results in
a slightly smaller total cluster size. Is this just a change in the
size calculation, or is there something else I should look out for?
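In case someone wants to compare on their own cluster: the OSD
metadata should show whether the new SIZE matches the data device
alone. The field names below are what I recall from a 12.2 cluster
(they may differ), and the values are illustrative for a 1 TB disk
(1000204886016 bytes is ~931G):
---cut here---
ceph1:~ # ceph osd metadata 1 | grep -E 'bluefs_db_size|bluestore_bdev_size'
    "bluefs_db_size": "16106127360",
    "bluestore_bdev_size": "1000204886016",
---cut here---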
Regards,
Eugen