Re: Size and capacity calculations questions

Hi Georg,

I suspect your DB device size is around 100 GiB, and that the actual total hdd-class size is roughly 700 GiB (100 GiB * 7 OSDs) less than the reported 19 TiB.

Is the above correct? If so, the high raw sizes are caused by the OSD stats reporting design: it unconditionally counts the full DB volume size towards the reported total/used space.

Hence both "ceph df"'s RAW STORAGE SIZE and "ceph osd df"'s SIZE/RAW USE numbers come out inflated.
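
As a rough sanity check (assuming my ~100 GiB DB volume guess above is right), the numbers line up like this:

    per-OSD RAW USE  ~= DB volume + data          ~= 100 GiB + ~10 GiB  ~= ~110 GiB
    hdd RAW USED     ~= 7 * 100 GiB DB + ~75 GiB data                   ~= ~775 GiB

which matches the 775 GiB USED you see in "ceph df" below.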


You might want to inspect the per-pool usage sizes via the "ceph df detail" command; most likely they will show the expected numbers.
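
To double-check the DB volume size that goes into those numbers, something along these lines should work (just a sketch; the exact metadata field names can differ between releases):

# ceph df detail
# ceph osd metadata 8 | grep -E 'bluefs_db|bluestore_bdev'

The bluefs_db_size value (in bytes) should roughly match the per-OSD gap between RAW USE and DATA.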


Thanks,

Igor


On 12/10/2019 11:39 AM, Georg F wrote:
Glad to see I am not the only one with unexpectedly increased disk usage. I have had a case for a few months now where the reported size on disk is 10 times higher than it should be. Unfortunately no solution so far. Therefore I am very curious whether the min alloc size will solve your problem; I do not expect it to be the solution in my case.

This is how it looks in my cluster:

I've moved a 1 TiB pool (3 TiB raw use) from hdd osds (7) to newly added nvme osds (14). The hdd osds should be almost empty by now as only small pools reside on them. The pools on the hdd osds store about 25 GiB in total, which should use about 75 GiB with a pool size of 3. WAL and DB are on separate devices.

However the outputs of ceph df and ceph osd df tell a different story:

# ceph df
RAW STORAGE:
     CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
     hdd       19 TiB     18 TiB     775 GiB      782 GiB          3.98

# ceph osd df | egrep "(ID|hdd)"
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL   %USE VAR  PGS STATUS
  8   hdd 2.72392  1.00000 2.8 TiB 111 GiB  10 GiB 111 KiB 1024 MiB 2.7 TiB 3.85 0.60  65     up
  6   hdd 2.17914  1.00000 2.3 TiB 112 GiB  11 GiB  83 KiB 1024 MiB 2.2 TiB 4.82 0.75  58     up
  3   hdd 2.72392  1.00000 2.8 TiB 114 GiB  13 GiB  71 KiB 1024 MiB 2.7 TiB 3.94 0.62  76     up
  5   hdd 2.72392  1.00000 2.8 TiB 109 GiB 7.6 GiB  83 KiB 1024 MiB 2.7 TiB 3.76 0.59  63     up
  4   hdd 2.72392  1.00000 2.8 TiB 112 GiB  11 GiB  55 KiB 1024 MiB 2.7 TiB 3.87 0.60  59     up
  7   hdd 2.72392  1.00000 2.8 TiB 114 GiB  13 GiB   8 KiB 1024 MiB 2.7 TiB 3.93 0.61  66     up
  2   hdd 2.72392  1.00000 2.8 TiB 111 GiB 9.9 GiB  78 KiB 1024 MiB 2.7 TiB 3.84 0.60  69     up

The sum of "DATA" is 75.5 GiB, which is what I expect to be used by the pools. How come the sum of "RAW USE" is 783 GiB, more than 10x the size of the stored data? On my nvme osds the "RAW USE" to "DATA" overhead is <1%:

# ceph osd df | egrep "(ID|nvme)"
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL   %USE VAR  PGS STATUS
  0  nvme 2.61989  1.00000 2.6 TiB 181 GiB 180 GiB  31 KiB  1.0 GiB 2.4 TiB 6.74 1.05  12     up
  1  nvme 2.61989  1.00000 2.6 TiB 151 GiB 150 GiB  39 KiB 1024 MiB 2.5 TiB 5.62 0.88  10     up
13  nvme 2.61989  1.00000 2.6 TiB 239 GiB 238 GiB  55 KiB  1.0 GiB 2.4 TiB 8.89 1.39  16     up
-- truncated --

I am running ceph version 14.2.3 (0f776cf838a1ae3130b2b73dc26be9c95c6ccc39) nautilus (stable) which was upgraded recently from 13.2.1.

Best
Georg
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx