issues when deep-scrubbing the bucket index


 



Hi,
I plan to shard my largest bucket because of issues with deep-scrubbing:
when the PG that stores the index for this bucket is deep-scrubbed, many
slow requests appear and the OSD's memory usage grows (after the latest
scrub it grew to 9 GB).

I am trying to find out why a large bucket index causes issues when it is scrubbed.
On test cluster:
radosgw-admin bucket stats --bucket=test1-XX
{ "bucket": "test1-XX",
  "pool": ".rgw.buckets",
  "index_pool": ".rgw.buckets",
  "id": "default.4211.2",
...

I guess the index is in the object .dir.default.4211.2 (pool: .rgw.buckets).

rados -p .rgw.buckets get .dir.default.4211.2 -
<empty>

But:
rados -p .rgw.buckets listomapkeys .dir.default.4211.2
test_file_2.txt
test_file_2_11.txt
test_file_3.txt
test_file_4.txt
test_file_5.txt
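
If I understand correctly, the index entries are kept in the object's omap
(key/value store) rather than in its data payload, which would explain why
the get above returned nothing. Something like this should confirm it
(assuming I'm reading the rados commands right):

rados -p .rgw.buckets stat .dir.default.4211.2
# should report size 0 if the index lives only in omap
rados -p .rgw.buckets listomapvals .dir.default.4211.2
# dumps the per-object index entries (key + binary value)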

I guess the list of files is stored in leveldb, not in one large object.
The 'omap' files are stored in {osd_dir}/current/omap/; the largest file
I found in this directory (on production) is 8.8 MB.
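
To get a feel for how much index data one PG carries, I was thinking of
counting the omap keys per index object and checking the size of the
leveldb store on the OSD (the paths are just my guess for a default install):

# one omap key per object in the bucket
rados -p .rgw.buckets listomapkeys .dir.default.4211.2 | wc -l

# total size of the leveldb omap store on one OSD
du -sh {osd_dir}/current/omap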

I'm a little confused.

How is the list of files for a bucket stored?
If the list of objects in a bucket is split into many small files in
leveldb, then a large bucket (with many files) should not cause higher
latency when PUTting a new object.
Scrubbing should not be a problem either, I think ...

What do you think about using sharding to split big buckets into smaller
ones to avoid the problems with big indexes?
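
What I have in mind is something like this (just a sketch - the shard
bucket names and the use of s3cmd are only an example):

# spread objects over N buckets by hashing the object name
N=8
key="test_file_2.txt"
shard=$(( 0x$(printf '%s' "$key" | md5sum | cut -c1-8) % N ))
s3cmd put "$key" "s3://test1-XX-shard-${shard}/${key}"

Listing the whole logical bucket would then mean listing all N shard
buckets and merging the results.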

--
Regards
Dominik
--



