Re: CEPH Cluster Usage Discrepancy

Hi Jakub,

No, my setup seems to be the same as yours. Our system is mainly for archiving large amounts of data. This data has to be stored forever and allow reads, albeit seldom, given the number of objects we will store versus the number that will ever be requested.

It just seems odd that the metadata overhead for the ~25M objects is so high.

We have 144 OSDs across 9 storage nodes. Perhaps it makes perfect sense, but I’d like to know why we are seeing what we are and how it all adds up.

Thanks!
Dan




On Sat, Oct 20, 2018 at 12:36 PM -0700, "Jakub Jaszewski" <jaszewski.jakub@xxxxxxxxx> wrote:

Hi Dan,

Did you configure block.wal/block.db as separate devices/partitions (osd_scenario: non-collocated, or lvm for clusters installed using ceph-ansible playbooks)?

I run Ceph version 13.2.1 with non-collocated block.db and see the same situation: the sum of the block.db partitions' sizes is displayed as RAW USED in ceph df.
Perhaps it is not the case for collocated block.db/wal.
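For scale, dividing the reported RAW USED across the OSDs gives the implied per-OSD BlueFS reservation. A rough back-of-the-envelope sketch using only the figures from this thread (the per-OSD size is inferred, not measured):

```python
# Infer the per-OSD block.db/WAL overhead implied by `ceph df`.
# Inputs are the numbers reported in this thread, nothing measured directly.

raw_used_tib = 4.65   # RAW USED reported by `ceph df`
num_osds = 144        # 144 OSDs on 9 storage nodes

# Convert TiB -> GiB and spread evenly across all OSDs.
per_osd_gib = raw_used_tib * 1024 / num_osds
print(f"implied block.db/WAL reservation per OSD: {per_osd_gib:.1f} GiB")
# prints: implied block.db/WAL reservation per OSD: 33.1 GiB
```

That is right around the size of a typical dedicated block.db partition, which is consistent with the db partitions accounting for essentially all of the RAW USED figure.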

Jakub   

On Sat, Oct 20, 2018 at 8:34 PM Waterbly, Dan <dan.waterbly@xxxxxxxxxx> wrote:
I get that, but isn’t 4.65TiB to track 24.5M objects excessive? These numbers seem very high to me.




On Sat, Oct 20, 2018 at 10:27 AM -0700, "Serkan Çoban" <cobanserkan@xxxxxxxxx> wrote:

The 4.65TiB RAW USED includes the size of the WAL and DB partitions too.
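The arithmetic bears this out. A quick sanity check using only the figures reported in the ceph df output below shows the actual object data is a tiny fraction of RAW USED:

```python
# Reconcile the pool's USED figure with the object count, and see how much
# of RAW USED is real data. All inputs come from the `ceph df` output below.

objects = 24_550_943   # OBJECTS for .rgw.buckets.data
obj_size = 1_000       # bytes per object, as stated in the original post
replicas = 3           # 3x replication

# Logical data in the pool (pool USED is pre-replication).
data_gib = objects * obj_size / 2**30
print(f"logical data: {data_gib:.1f} GiB")       # matches the 22.9GiB in `ceph df`

# Raw footprint of the data after 3x replication, in TiB.
raw_data_tib = data_gib * replicas / 1024
print(f"raw data (3x): {raw_data_tib:.3f} TiB")  # ~0.067 TiB of the 4.65TiB RAW USED
```

So the object data itself accounts for well under 100GiB raw; the remaining ~4.6TiB is the pre-allocated WAL/DB space.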
On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan  wrote:
>
> Hello,
>
>
>
> I have inserted 24.5M 1,000-byte objects into my cluster (radosgw, 3x replication).
>
>
>
> I am confused by the usage ceph df is reporting and am hoping someone can shed some light on this. Here is what I see when I run ceph df
>
>
>
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     1.02PiB     1.02PiB      4.65TiB          0.44
>
> POOLS:
>     NAME                      ID     USED        %USED     MAX AVAIL      OBJECTS
>     .rgw.root                  1     3.30KiB         0        330TiB           17
>     .rgw.buckets.data          2     22.9GiB         0        330TiB     24550943
>     default.rgw.control        3          0B         0        330TiB            8
>     default.rgw.meta           4        373B         0        330TiB            3
>     default.rgw.log            5          0B         0        330TiB            0
>     .rgw.control               6          0B         0        330TiB            8
>     .rgw.meta                  7     2.18KiB         0        330TiB           12
>     .rgw.log                   8          0B         0        330TiB          194
>     .rgw.buckets.index         9          0B         0        330TiB         2560
>
>
>
> Why does my bucket pool report usage of 22.9GiB but my cluster as a whole is reporting 4.65TiB? There is nothing else on this cluster as it was just installed and configured.
>
>
>
> Thank you for your help with this.
>
>
>
> -Dan
>
>
>
> Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | dan.waterbly@xxxxxxxxxx
>
> WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
