Re: Is there a negative relationship between storage utilization and ceph performance?

I'd say it's storage in general, though Ceph can be especially harsh on file systems (RBD, for example, can cause particularly bad fragmentation in btrfs due to how COW works).

So generally there are a lot of things that can cause slowdowns as your disks get full:

1) More objects spread across deeper PG directory trees (see the sketch below)
2) More disk fragmentation in general
3) Fragmentation that generates even more fragmentation during writes once you no longer have contiguous space to store objects
4) A higher data/pagecache ratio (and more dentries/inodes to cache)
5) Disk heads moving farther across the disk during random IO
6) Differences between outer and inner track performance on some disks

There are probably other things I'm missing.
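For point 1, here's a rough sketch (the OSD path is just a placeholder for wherever your FileStore OSDs are mounted) that walks an OSD's current/ directory and reports how many objects it holds and how deep the PG directory splitting has gone:

#!/usr/bin/env python
# Rough sketch: walk a FileStore OSD's current/ directory and report how
# many objects it holds and how deep the PG directory splitting has gone.
# The path below is a placeholder -- adjust it to your OSD mount point.
import os

OSD_CURRENT = "/var/lib/ceph/osd/ceph-0/current"  # hypothetical example

def survey(path):
    base_depth = path.rstrip("/").count("/")
    total_objects = 0
    max_depth = 0
    for root, dirs, files in os.walk(path):
        depth = root.count("/") - base_depth
        max_depth = max(max_depth, depth)
        total_objects += len(files)
    return total_objects, max_depth

if __name__ == "__main__":
    objects, depth = survey(OSD_CURRENT)
    print("objects: %d, deepest subdirectory level: %d" % (objects, depth))

More objects and deeper splits also mean more dentries and inodes competing for cache, which feeds back into point 4.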

Mark

On 11/04/2014 01:56 PM, Andrey Korolyov wrote:
On Tue, Nov 4, 2014 at 10:49 PM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
Hi,
for a long time I've been looking for performance improvements for our
Ceph cluster.
The last expansion brought better performance because we added another node
(with 12 OSDs). After that, storage utilization was at 60%.

Now we have reached 69% again (the next nodes are waiting for installation)
and performance has dropped! OK, we also changed the Ceph version from
0.72.x to Firefly.
But I wonder whether there is a relationship between utilization and performance?!
The OSDs are XFS disks, but I'm now starting to use ext4 because of the bad
fragmentation on an XFS filesystem (yes, I already use the mount option
allocsize=4M).

Has anybody seen the same effect?

Udo
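A quick way to put a number on the XFS fragmentation mentioned above is xfs_db's frag report. Here is a minimal wrapper sketch (the device path is a placeholder, and the output wording may differ between xfsprogs versions):

#!/usr/bin/env python
# Minimal sketch: report the XFS fragmentation factor of an OSD device
# using "xfs_db -r -c frag <device>". The device path is a placeholder.
import re
import subprocess

DEVICE = "/dev/sdb1"  # hypothetical OSD partition

def frag_factor(device):
    # -r opens the filesystem read-only; -c frag prints a line like
    # "actual N, ideal M, fragmentation factor X.YZ%"
    out = subprocess.check_output(["xfs_db", "-r", "-c", "frag", device])
    match = re.search(r"fragmentation factor\s+([\d.]+)%", out.decode())
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    print("fragmentation factor: %s%%" % frag_factor(DEVICE))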


AFAIR there is a specific note somewhere in the Ceph user guide about not
letting the commit ratio go above 70% because of the heavy performance
impact. In practice, hot storage feels even a fifty-percent commit on XFS
with default mount parameters, so as a rule of thumb you may want to keep
the commit ratio no higher than 60 percent. For mixed or cold storage the
numbers will vary, as average clat and write throughput matter less.
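A rule of thumb like that is easy to keep an eye on. Here is a minimal sketch that checks OSD data directories against those marks (the mount-point glob and the exact thresholds are assumptions to adapt to your own setup):

#!/usr/bin/env python
# Sketch: warn when OSD data directories cross the utilization marks
# discussed above (60% rule of thumb, 70% "avoid" line). The glob and
# the thresholds are assumptions -- point them at your own OSD mounts.
import glob
import os

SOFT, HARD = 60.0, 70.0
OSD_GLOB = "/var/lib/ceph/osd/ceph-*"  # hypothetical mount points

for path in sorted(glob.glob(OSD_GLOB)):
    st = os.statvfs(path)
    used_pct = 100.0 * (1.0 - float(st.f_bavail) / st.f_blocks)
    if used_pct >= HARD:
        print("%s: %.1f%% used -- expect a heavy performance hit" % (path, used_pct))
    elif used_pct >= SOFT:
        print("%s: %.1f%% used -- above the 60%% rule of thumb" % (path, used_pct))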

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com