Re: [ceph-users] Dramatic performance drop at certain number of objects in pool

On 23 June 2016 at 12:37, Christian Balzer <chibi@xxxxxxx> wrote:
> Case in point, my main cluster (RBD images only) with 18 5+TB OSDs on 3
> servers (64GB RAM each) has 1.8 million 4MB RBD objects using about 7% of
> the available space.
> Don't think I could hit this problem before running out of space.

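(Quick sanity check on those numbers: 1.8M x 4 MB objects is roughly
7.2 TB, and 18 x 5+ TB is somewhere north of 90 TB raw, so the 7-8%
figure fits.)
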
Perhaps. However, ~30 TB per server is pretty low given present HDD
sizes. In the pool on our large cluster where we've seen this issue we
have 24x 4 TB OSDs per server, and we first hit the problem in
pre-production testing at about 20% usage (with the default 4 MB
objects). We bumped the filestore merge/split thresholds to 40 / 8.
Then, as I reported the other day, we hit the issue again at somewhere
around 50% usage, so now we're at 50 / 12 (see the ceph.conf sketch
below).
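
For anyone else chasing this, here is a minimal ceph.conf sketch of the
settings I mean, assuming the first number above is the merge threshold
and the second the split multiple (swap them if I have that backwards;
the split point works out the same either way):

    [osd]
    # Filestore starts splitting a PG collection once a subdirectory
    # holds more than
    #     filestore_split_multiple * abs(filestore_merge_threshold) * 16
    # objects, so 40 / 8 works out to ~5120 files per directory and
    # 50 / 12 to ~9600.
    filestore merge threshold = 50
    filestore split multiple = 12

As far as I understand, the new values only come into play the next
time a directory would split or merge, so already-split trees keep
their current layout in the meantime.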

The boxes mentioned above are a couple of years old. Today we're
buying 2RU servers with 128TB in them (16x 8TB)!

Replacing our current NAS on RBD setup with CephFS is now starting to
scare me...

-- 
Cheers,
~Blairo