Re: Luminous ceph pool %USED calculation

On Fri, Nov 03, 2017 at 12:09:03PM +0100, Alwin Antreich wrote:
> Hi,
>
> I am confused by the %USED calculation in the output of 'ceph df' on Luminous. In the example below, the pools show a %USED of 2.92%, but my own calculation, taken from the source code, gives me 8.28%. On a Hammer cluster, my calculation gives the same result as the 'ceph df' output.
>
> Am I using the right values? Or am I missing something in the calculation?
>
> This tracker issue introduced the calculation: http://tracker.ceph.com/issues/16933
> # https://github.com/ceph/ceph/blob/master/src/mon/PGMap.cc
> curr_object_copies_rate = (float)(sum.num_object_copies - sum.num_objects_degraded) / sum.num_object_copies;
> used = sum.num_bytes * curr_object_copies_rate;
> used /= used + avail;
>
> curr_object_copies_rate  = (num_object_copies: 2118 - num_objects_degraded: 0) / num_object_copies: 2118;
> used = num_bytes: 4437573656 * curr_object_copies_rate
> used /= used + max_avail: 73689653248
>
> # my own calculation
> Name                       size   min_size     pg_num      %USED         used (bytes)
> default                       3          2         64       8.28           4437573656
> test1                         3          2         64       8.28           4437573656
> test2                         2          1         64       5.68           4437573656
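
To make this easier to reproduce, here is a minimal Python sketch of the
calculation above (the percent_used helper is my own naming; the numbers are
hard-coded from the 'ceph pg dump pools' and 'ceph df' outputs below):

def percent_used(num_bytes, num_object_copies, num_objects_degraded, max_avail):
    # Fraction of object copies that are not degraded; it is 1.0 here,
    # since nothing is degraded, so it drops out of the result.
    rate = (num_object_copies - num_objects_degraded) / num_object_copies
    used = num_bytes * rate
    return 100.0 * used / (used + max_avail)

# 'default' and 'test1' (size 3): max_avail 49126436864 bytes (46850M)
print(percent_used(4437573656, 3177, 0, 49126436864))   # ~8.28
# 'test2' (size 2): max_avail 73689653248 bytes (70275M)
print(percent_used(4437573656, 2118, 0, 73689653248))   # ~5.68

Both results differ from the 2.92 %USED that 'ceph df' reports for all
three pools.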
>
> # ceph df detail
> GLOBAL:
>     SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS
>     191G      151G       40551M         20.69        3177
> POOLS:
>     NAME        ID     QUOTA OBJECTS     QUOTA BYTES     USED      %USED     MAX AVAIL     OBJECTS     DIRTY     READ     WRITE     RAW USED
>     default     1      N/A               N/A             4232M      2.92        46850M        1059      1059        0      1059       12696M
>     test1       4      N/A               N/A             4232M      2.92        46850M        1059      1059        0      1059       12696M
>     test2       5      N/A               N/A             4232M      2.92        70275M        1059      1059        0      1059        8464M
>
> # ceph pg dump pools
> dumped pools
> POOLID OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES      LOG  DISK_LOG
> 5         1059                  0        0         0       0 4437573656 1059     1059
> 4         1059                  0        0         0       0 4437573656 1059     1059
> 1         1059                  0        0         0       0 4437573656 1059     1059
>
> # ceph versions
> {
>     "mon": {
>         "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)": 3
>     },
>     "mgr": {
>         "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)": 3
>     },
>     "osd": {
>         "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)": 6
>     },
>     "mds": {},
>     "overall": {
>         "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)": 12
>     }
> }
>
> Some more data is in the attachment.
>
> Thanks in advance.
> --
> Cheers,
> Alwin

> # ceph osd dump
> epoch 97
> fsid 1c6a05cf-f93c-49a3-939d-877bb61107c3
> created 2017-10-27 13:15:55.049914
> modified 2017-11-03 10:14:58.231071
> flags sortbitwise,recovery_deletes,purged_snapdirs
> crush_version 13
> full_ratio 0.95
> backfillfull_ratio 0.9
> nearfull_ratio 0.85
> require_min_compat_client jewel
> min_compat_client jewel
> require_osd_release luminous
> pool 1 'default' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 6 flags hashpspool stripe_width 0 application rbd
> pool 4 'test1' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 49 flags hashpspool stripe_width 0 application rbd
> pool 5 'test2' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 59 flags hashpspool stripe_width 0 application rbd
> max_osd 6
> osd.0 up   in  weight 1 up_from 77 up_thru 96 down_at 74 last_clean_interval [71,73) 192.168.19.151:6800/300 192.168.19.151:6801/300 192.168.19.151:6802/300 192.168.19.151:6803/300 exists,up f462287f-4af9-4251-b1a7-ea5a82ac72e3
> osd.1 up   in  weight 1 up_from 64 up_thru 94 down_at 62 last_clean_interval [32,61) 192.168.19.152:6800/1183 192.168.19.152:6801/1183 192.168.19.152:6802/1183 192.168.19.152:6803/1183 exists,up c7be8e91-7437-42f8-bbbd-80b6fda8cdd0
> osd.2 up   in  weight 1 up_from 64 up_thru 96 down_at 63 last_clean_interval [32,61) 192.168.19.153:6800/1169 192.168.19.153:6801/1169 192.168.19.153:6802/1169 192.168.19.153:6803/1169 exists,up 0ac45781-4fbb-412c-b2f5-fce7d7cfe637
> osd.3 up   in  weight 1 up_from 82 up_thru 96 down_at 0 last_clean_interval [0,0) 192.168.19.151:6804/1725 192.168.19.151:6805/1725 192.168.19.151:6806/1725 192.168.19.151:6807/1725 exists,up 2bdb8f49-feb0-447f-83bf-973aed4bcf3d
> osd.4 up   in  weight 1 up_from 90 up_thru 96 down_at 0 last_clean_interval [0,0) 192.168.19.153:6805/2867 192.168.19.153:6806/2867 192.168.19.153:6807/2867 192.168.19.153:6808/2867 exists,up 02eaf897-6d0a-4d77-94ab-3ac6a91e818c
> osd.5 up   in  weight 1 up_from 96 up_thru 96 down_at 0 last_clean_interval [0,0) 192.168.19.152:6804/1974 192.168.19.152:6805/1974 192.168.19.152:6806/1974 192.168.19.152:6807/1974 exists,up 087436c2-6837-40c1-97f6-edb9e0d4bcd5
>
> # ceph -s
>   cluster:
>     id:     1c6a05cf-f93c-49a3-939d-877bb61107c3
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum pve5-c01,pve5-c02,pve5-c03
>     mgr: pve5-c03(active), standbys: pve5-c02, pve5-c01
>     osd: 6 osds: 6 up, 6 in
>
>   data:
>     pools:   3 pools, 192 pgs
>     objects: 3177 objects, 12696 MB
>     usage:   40551 MB used, 151 GB / 191 GB avail
>     pgs:     192 active+clean
>
> # ceph pg dump pools -f json-pretty
> dumped pools
>
> [
>     {
>         "poolid": 5,
>         "num_pg": 64,
>         "stat_sum": {
>             "num_bytes": 4437573656,
>             "num_objects": 1059,
>             "num_object_clones": 0,
>             "num_object_copies": 2118,
>             "num_objects_missing_on_primary": 0,
>             "num_objects_missing": 0,
>             "num_objects_degraded": 0,
>             "num_objects_misplaced": 0,
>             "num_objects_unfound": 0,
>             "num_objects_dirty": 1059,
>             "num_whiteouts": 0,
>             "num_read": 0,
>             "num_read_kb": 0,
>             "num_write": 1059,
>             "num_write_kb": 4333569,
>             "num_scrub_errors": 0,
>             "num_shallow_scrub_errors": 0,
>             "num_deep_scrub_errors": 0,
>             "num_objects_recovered": 1959,
>             "num_bytes_recovered": 8199864416,
>             "num_keys_recovered": 0,
>             "num_objects_omap": 0,
>             "num_objects_hit_set_archive": 0,
>             "num_bytes_hit_set_archive": 0,
>             "num_flush": 0,
>             "num_flush_kb": 0,
>             "num_evict": 0,
>             "num_evict_kb": 0,
>             "num_promote": 0,
>             "num_flush_mode_high": 0,
>             "num_flush_mode_low": 0,
>             "num_evict_mode_some": 0,
>             "num_evict_mode_full": 0,
>             "num_objects_pinned": 0,
>             "num_legacy_snapsets": 0
>         },
>         "log_size": 1059,
>         "ondisk_log_size": 1059,
>         "up": 128,
>         "acting": 128
>     },
>     {
>         "poolid": 4,
>         "num_pg": 64,
>         "stat_sum": {
>             "num_bytes": 4437573656,
>             "num_objects": 1059,
>             "num_object_clones": 0,
>             "num_object_copies": 3177,
>             "num_objects_missing_on_primary": 0,
>             "num_objects_missing": 0,
>             "num_objects_degraded": 0,
>             "num_objects_misplaced": 0,
>             "num_objects_unfound": 0,
>             "num_objects_dirty": 1059,
>             "num_whiteouts": 0,
>             "num_read": 0,
>             "num_read_kb": 0,
>             "num_write": 1059,
>             "num_write_kb": 4333569,
>             "num_scrub_errors": 0,
>             "num_shallow_scrub_errors": 0,
>             "num_deep_scrub_errors": 0,
>             "num_objects_recovered": 1425,
>             "num_bytes_recovered": 5976883200,
>             "num_keys_recovered": 0,
>             "num_objects_omap": 0,
>             "num_objects_hit_set_archive": 0,
>             "num_bytes_hit_set_archive": 0,
>             "num_flush": 0,
>             "num_flush_kb": 0,
>             "num_evict": 0,
>             "num_evict_kb": 0,
>             "num_promote": 0,
>             "num_flush_mode_high": 0,
>             "num_flush_mode_low": 0,
>             "num_evict_mode_some": 0,
>             "num_evict_mode_full": 0,
>             "num_objects_pinned": 0,
>             "num_legacy_snapsets": 0
>         },
>         "log_size": 1059,
>         "ondisk_log_size": 1059,
>         "up": 192,
>         "acting": 192
>     },
>     {
>         "poolid": 1,
>         "num_pg": 64,
>         "stat_sum": {
>             "num_bytes": 4437573656,
>             "num_objects": 1059,
>             "num_object_clones": 0,
>             "num_object_copies": 3177,
>             "num_objects_missing_on_primary": 0,
>             "num_objects_missing": 0,
>             "num_objects_degraded": 0,
>             "num_objects_misplaced": 0,
>             "num_objects_unfound": 0,
>             "num_objects_dirty": 1059,
>             "num_whiteouts": 0,
>             "num_read": 0,
>             "num_read_kb": 0,
>             "num_write": 1059,
>             "num_write_kb": 4333569,
>             "num_scrub_errors": 0,
>             "num_shallow_scrub_errors": 0,
>             "num_deep_scrub_errors": 0,
>             "num_objects_recovered": 1531,
>             "num_bytes_recovered": 6413090864,
>             "num_keys_recovered": 0,
>             "num_objects_omap": 0,
>             "num_objects_hit_set_archive": 0,
>             "num_bytes_hit_set_archive": 0,
>             "num_flush": 0,
>             "num_flush_kb": 0,
>             "num_evict": 0,
>             "num_evict_kb": 0,
>             "num_promote": 0,
>             "num_flush_mode_high": 0,
>             "num_flush_mode_low": 0,
>             "num_evict_mode_some": 0,
>             "num_evict_mode_full": 0,
>             "num_objects_pinned": 0,
>             "num_legacy_snapsets": 0
>         },
>         "log_size": 1059,
>         "ondisk_log_size": 1059,
>         "up": 192,
>         "acting": 192
>     }
> ]
>
> # ceph df -f json-pretty
>
> {
>     "stats": {
>         "total_bytes": 205522870272,
>         "total_used_bytes": 42521640960,
>         "total_avail_bytes": 163001229312
>     },
>     "pools": [
>         {
>             "name": "default",
>             "id": 1,
>             "stats": {
>                 "kb_used": 4333569,
>                 "bytes_used": 4437573656,
>                 "percent_used": 2.92,
>                 "max_avail": 49126436864,
>                 "objects": 1059
>             }
>         },
>         {
>             "name": "test1",
>             "id": 4,
>             "stats": {
>                 "kb_used": 4333569,
>                 "bytes_used": 4437573656,
>                 "percent_used": 2.92,
>                 "max_avail": 49126436864,
>                 "objects": 1059
>             }
>         },
>         {
>             "name": "test2",
>             "id": 5,
>             "stats": {
>                 "kb_used": 4333569,
>                 "bytes_used": 4437573656,
>                 "percent_used": 2.92,
>                 "max_avail": 73689653248,
>                 "objects": 1059
>             }
>         }
>     ]
> }


Does anyone else see an incorrectly calculated %USED value on their
pools? And can anyone help me demystify this? ;-)
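
For reference, recomputing percent_used directly from the 'ceph df -f
json-pretty' output quoted above shows the same mismatch. A minimal Python
sketch (my own, assuming that output is saved as df.json; it ignores
curr_object_copies_rate, which is 1.0 here since nothing is degraded):

import json

with open('df.json') as f:
    df = json.load(f)

for pool in df['pools']:
    s = pool['stats']
    # Recompute used / (used + avail) and compare with what Ceph reports.
    recomputed = 100.0 * s['bytes_used'] / (s['bytes_used'] + s['max_avail'])
    print('%s: reported %.2f%%, recomputed %.2f%%'
          % (pool['name'], s['percent_used'], recomputed))

# Output on the data above:
# default: reported 2.92%, recomputed 8.28%
# test1: reported 2.92%, recomputed 8.28%
# test2: reported 2.92%, recomputed 5.68%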

Thanks.

--
Cheers,
Alwin
