Hi Igor,
Thank you very much for this quick response and the confirmation of our
assumption.
I totally agree that it makes more sense *not* to count the DB size towards
the total disk size; we were just wondering if something had gone wrong
during the upgrade.
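For anyone following along, here is a minimal sketch of the arithmetic
behind the change (illustrative only, not the actual BlueStore statfs code;
the 931G block and 15G block.db figures are taken from our setup quoted
below):
---cut here---
// Illustrative sketch only -- not the actual BlueStore statfs() code.
#include <cstdint>
#include <iostream>

int main() {
  const uint64_t GiB = 1ULL << 30;
  const uint64_t block_dev_total = 931 * GiB;  // main (block) device capacity
  const uint64_t db_dev_total    = 15 * GiB;   // separate block.db capacity

  // What we saw on 12.2.2: the DB volume was added to the reported total.
  const uint64_t total_before = block_dev_total + db_dev_total;  // 946 GiB

  // What we see on 12.2.5: only the block device capacity is reported.
  const uint64_t total_after = block_dev_total;                  // 931 GiB

  std::cout << "total before: " << total_before / GiB << " GiB\n"
            << "total after:  " << total_after  / GiB << " GiB\n";
  return 0;
}
---cut here---
That matches exactly the 946G we used to see per OSD versus the 931G we see
now.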
Regards,
Eugen
Quoting Igor Fedotov <ifedotov@xxxxxxx>:
Hi Eugen,
This difference was introduced by the following PR:
https://github.com/ceph/ceph/pull/20487 (commit os/bluestore: do not
account DB volume space in total one reported by statfs method).
The rationale is to report only the block device capacity as the total and
not to add the DB space on top of it; summing the two makes no sense, since
the data stored at these locations isn't cumulative.
So this is just the effect of a slightly different calculation.
Thanks,
Igor
On 5/25/2018 2:22 PM, Eugen Block wrote:
Hi list,
we have a Luminous bluestore cluster with separate
block.db/block.wal on SSDs. We were running version 12.2.2 and
upgraded yesterday to 12.2.5. The upgrade went smoothly, but since
the restart of the OSDs I noticed that 'ceph osd df' shows a
different total disk size:
---cut here---
ceph1:~ # ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
1 hdd 0.92429 1.00000 931G 557G 373G 59.85 1.03 681
4 hdd 0.92429 1.00000 931G 535G 395G 57.52 0.99 645
6 hdd 0.92429 1.00000 931G 532G 398G 57.19 0.99 640
13 hdd 0.92429 1.00000 931G 587G 343G 63.08 1.09 671
16 hdd 0.92429 1.00000 931G 562G 368G 60.40 1.04 665
18 hdd 0.92429 1.00000 931G 531G 399G 57.07 0.98 623
10 ssd 0.72769 1.00000 745G 18423M 727G 2.41 0.04 37
---cut here---
Before the upgrade the displayed size for each 1 TB disk was 946G, since
each OSD has a 15G block.db (931 + 15 = 946). So it seems that one of the
recent changes within 12.2.x altered this output, which also results in a
slightly smaller reported total cluster size. Is this just a change in the
size calculation, or is there something else I should look out for?
Regards,
Eugen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com