Hi Dan,
Did you configure block.wal/block.db on separate devices/partitions (osd_scenario: non-collocated, or lvm for clusters installed with the ceph-ansible playbooks)?
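For reference, a non-collocated bluestore layout in ceph-ansible group_vars looks roughly like the sketch below (the device paths are only placeholders, not from any real inventory):

    osd_scenario: non-collocated
    osd_objectstore: bluestore
    devices:
      - /dev/sdb
      - /dev/sdc
    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme0n1

If I remember correctly, the size of each db partition then comes from bluestore_block_db_size in ceph.conf.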
I run Ceph version 13.2.1 with non-collocated block.db and see the same thing: the sum of the block.db partition sizes is reported as RAW USED in ceph df.
Perhaps that is not the case when block.db/wal are collocated on the data device.
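You can check this per OSD; the paths and the osd.0 below are just examples for illustration:

    # size recorded in the BlueStore label of the db device
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block.db

    # bluefs counters (db_total_bytes / db_used_bytes) via the admin socket
    ceph daemon osd.0 perf dump bluefs

As a back-of-the-envelope check (these numbers are assumptions, not taken from your cluster): ~160 OSDs with a 30 GiB block.db each would already account for roughly 4.7 TiB of RAW USED before any object data is written.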
Jakub
On Sat, Oct 20, 2018 at 8:34 PM Waterbly, Dan <dan.waterbly@xxxxxxxxxx> wrote:
I get that, but isn’t 4 TiB to track 2.45M objects excessive? These numbers seem very high to me.
On Sat, Oct 20, 2018 at 10:27 AM -0700, "Serkan Çoban" <cobanserkan@xxxxxxxxx> wrote:
4.65TiB includes size of wal and db partitions too.

On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan wrote:
>
> Hello,
>
> I have inserted 2.45M 1,000 byte objects into my cluster (radosgw, 3x replication).
>
> I am confused by the usage ceph df is reporting and am hoping someone can shed some light on this. Here is what I see when I run ceph df:
>
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     1.02PiB     1.02PiB     4.65TiB      0.44
>
> POOLS:
>     NAME                   ID     USED        %USED     MAX AVAIL     OBJECTS
>     .rgw.root              1      3.30KiB     0         330TiB        17
>     .rgw.buckets.data      2      22.9GiB     0         330TiB        24550943
>     default.rgw.control    3      0B          0         330TiB        8
>     default.rgw.meta       4      373B        0         330TiB        3
>     default.rgw.log        5      0B          0         330TiB        0
>     .rgw.control           6      0B          0         330TiB        8
>     .rgw.meta              7      2.18KiB     0         330TiB        12
>     .rgw.log               8      0B          0         330TiB        194
>     .rgw.buckets.index     9      0B          0         330TiB        2560
>
> Why does my bucket pool report usage of 22.9GiB but my cluster as a whole is reporting 4.65TiB? There is nothing else on this cluster as it was just installed and configured.
>
> Thank you for your help with this.
>
> -Dan
>
> Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | dan.waterbly@xxxxxxxxxx
> WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES