Re: CEPH Cluster Usage Discrepancy

I get that, but isn't 4.65TiB to track 2.45M objects excessive? These numbers seem very high to me.
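My rough math (assuming each S3 object maps to a single 1,000-byte rados object):

    2,450,000 objects x 1,000 B x 3 replicas = ~7.35 GB of logical data

That is a tiny fraction of 4.65TiB, so the overhead would be several hundred times the payload.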




On Sat, Oct 20, 2018 at 10:27 AM -0700, "Serkan Çoban" <cobanserkan@xxxxxxxxx> wrote:

The 4.65TiB figure includes the size of the WAL and DB partitions too.
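You can verify this per OSD if you want (a quick sketch; osd.0 below is just an example id, use any OSD on the host):

    # per-OSD raw USE; freshly deployed BlueStore OSDs already show space used before any data is written
    ceph osd df
    # the bluefs counters show how much of the DB/WAL space is accounted for
    ceph daemon osd.0 perf dump | grep -A 12 '"bluefs"'

Multiply the per-OSD overhead by your OSD count and it should explain most of the 4.65TiB.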
On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan  wrote:
>
> Hello,
>
>
>
> I have inserted 2.45M 1,000-byte objects into my cluster (radosgw, 3x replication).
>
>
>
> I am confused by the usage ceph df reports and am hoping someone can shed some light on this. Here is what I see when I run ceph df:
>
>
>
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     1.02PiB     1.02PiB      4.65TiB          0.44
>
> POOLS:
>     NAME                    ID     USED        %USED     MAX AVAIL      OBJECTS
>     .rgw.root                1     3.30KiB         0        330TiB           17
>     .rgw.buckets.data        2     22.9GiB         0        330TiB     24550943
>     default.rgw.control      3          0B         0        330TiB            8
>     default.rgw.meta         4        373B         0        330TiB            3
>     default.rgw.log          5          0B         0        330TiB            0
>     .rgw.control             6          0B         0        330TiB            8
>     .rgw.meta                7     2.18KiB         0        330TiB           12
>     .rgw.log                 8          0B         0        330TiB          194
>     .rgw.buckets.index       9          0B         0        330TiB         2560
>
>
>
> Why does my buckets pool report 22.9GiB used while the cluster as a whole reports 4.65TiB? There is nothing else on this cluster; it was just installed and configured.
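>
> (For what it's worth, the pool-level number does line up with the object count: 24,550,943 objects x 1,000 B ~= 22.9GiB. It is the 4.65TiB of RAW USED that I can't account for.)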
>
>
>
> Thank you for your help with this.
>
>
>
> -Dan
>
>
>
> Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | dan.waterbly@xxxxxxxxxx
>
> WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
