Re: All pools have size=3 but "MB data" and "MB used" ratio is 1 to 5

2015-03-27 18:27 GMT+01:00 Gregory Farnum <greg@xxxxxxxxxxx>:
> Ceph has per-pg and per-OSD metadata overhead. You currently have 26000 PGs,
> suitable for use on a cluster of the order of 260 OSDs. You have placed
> almost 7GB of data into it (21GB replicated) and have about 7GB of
> additional overhead.
>
> You might try putting a suitable amount of data into the cluster before
> worrying about the ratio of space used to data stored. :)
> -Greg

Hello Greg,

I have now put a suitable amount of data into the cluster, and the
ratio is still about 1 to 5. The folder
/var/lib/ceph/osd/ceph-N/current/meta/ did not grow, so it looks like
that is not the problem.
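
A rough way to check that on each OSD node (just a sketch, assuming the
default FileStore layout under /var/lib/ceph/osd/) is something like:

  # size of the per-OSD metadata directory, for every OSD on this host
  sudo du -sh /var/lib/ceph/osd/ceph-*/current/meta/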

Do you have any hints on how to troubleshoot this issue?
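
For reference, redoing your back-of-the-envelope arithmetic against the
numbers in the output below (a rough sketch, assuming size=3 on every
pool and that everything beyond 3x the stored data is overhead):

  echo "109 * 3" | bc              # 327 GB expected from 3x replication
  echo "518 - 327" | bc            # ~191 GB of raw usage left unexplained
  echo "scale=2; 518 / 109" | bc   # ~4.75, i.e. roughly the 1:5 ratio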


ansible@zrh-srv-m-cph02:~$ ceph osd pool get .rgw.buckets size
size: 3
ansible@zrh-srv-m-cph02:~$ ceph osd pool get .rgw.buckets min_size
min_size: 2


ansible@zrh-srv-m-cph02:~$ ceph -w
    cluster 4179fcec-b336-41a1-a7fd-4a19a75420ea
     health HEALTH_WARN pool .rgw.buckets has too few pgs
     monmap e4: 4 mons at
{rml-srv-m-cph01=10.120.50.20:6789/0,rml-srv-m-cph02=10.120.50.21:6789/0,rml-srv-m-stk03=10.120.50.32:6789/0,zrh-srv-m-cph02=10.120.50.2:6789/0},
election epoch 668, quorum 0,1,2,3
zrh-srv-m-cph02,rml-srv-m-cph01,rml-srv-m-cph02,rml-srv-m-stk03
     osdmap e2170: 54 osds: 54 up, 54 in
      pgmap v619041: 28684 pgs, 15 pools, 109 GB data, 7358 kobjects
            518 GB used, 49756 GB / 50275 GB avail
               28684 active+clean

ansible@zrh-srv-m-cph02:~$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    50275G     49756G         518G          1.03
POOLS:
    NAME                   ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                    0        155         0        16461G           2
    gianfranco             7        156         0        16461G           2
    images                 8       257M         0        16461G          38
    .rgw.root              9        840         0        16461G           3
    .rgw.control           10         0         0        16461G           8
    .rgw                   11     21334         0        16461G         108
    .rgw.gc                12         0         0        16461G          32
    .users.uid             13      1575         0        16461G           6
    .users                 14        72         0        16461G           6
    .rgw.buckets.index     15         0         0        16461G          30
    .users.swift           17        36         0        16461G           3
    .rgw.buckets           18      108G      0.22        16461G     7534745
    .intent-log            19         0         0        16461G           0
    .rgw.buckets.extra     20         0         0        16461G           0
    volumes                21      512M         0        16461G         161
ansible@zrh-srv-m-cph02:~$
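
In case it helps, I can also provide the output of, for example (just a
sketch of what I could gather; output omitted here):

  ceph df detail                   # more detailed per-pool statistics
  ceph pg dump osds                # per-OSD usage as reported by the cluster
  df -h /var/lib/ceph/osd/ceph-*   # filesystem-level usage on an OSD host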