Re: Is there a negative relationship between storage utilization and ceph performance?

Hi again,
... after a long time!

Now I have converted the whole ceph cluster from XFS to ext4 (60 OSDs),
changed the tunables and filled the cluster again.

So I can compare the bench values.

For my setup the cluster runs better with ext4 than with XFS - the latency
dropped from ~14ms to ~8ms (rados -p test bench 60 seq --no-cleanup),
still with the old tunables.
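
(For anyone who wants to reproduce the numbers: the seq test needs data
written with --no-cleanup first, so a run looks roughly like this - pool
name and runtime are just my test values:

  # write benchmark objects and keep them for the read test
  rados -p test bench 60 write --no-cleanup
  # sequential read of the objects written above
  rados -p test bench 60 seq
)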

Now with the new tunables (and the cluster filled again to 65%) the read
performance is also much better - it rose from 440 MB/s to ~760 MB/s.
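
(For reference, switching the tunables profile is done with the crush
tunables command - just a sketch; note that this triggers data movement:

  ceph osd crush tunables optimal
)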

The write performance is a bit lower than before, but my problem was the
read performance (write was OK for me).

I lost a little bit of space - the weight of each disk was 3.64 before
and is 3.58 now.

To me it looks like storage utilization has less impact with ext4, and
ext4 performs better than XFS!

Udo

On 05.11.2014 01:22, Christian Balzer wrote:
> 
> Hello,
> 
> On Tue, 04 Nov 2014 20:49:02 +0100 Udo Lembke wrote:
> 
>> Hi,
>> for a long time I have been looking for performance improvements for our
>> ceph cluster.
>> The last expansion brought better performance, because we added another node
>> (with 12 OSDs). After that, the storage utilization was 60%.
>>
> Another node of course does more than lower per-OSD disk utilization; it
> also adds more RAM (cached objects), more distribution of requests, etc.
> 
> So the question here is: did the usage (number of client IOPS) stay the
> same and just the total amount of stored data grow?
> 
>> Now we have reached 69% again (the next nodes are waiting for installation)
>> and the performance dropped! OK, we also changed the ceph version from
>> 0.72.x to firefly.
>> But I wonder if there is a relationship between utilization and
>> performance?! The OSDs are XFS disks, but now I have started to use ext4
>> because of the bad fragmentation on an XFS filesystem (yes, I already use
>> the mount option allocsize=4M).
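>> (In ceph.conf that corresponds to the OSD mount option line - shown only
>> as a sketch, the extra options are typical defaults:
>>
>>   [osd]
>>   osd mount options xfs = rw,noatime,inode64,allocsize=4M
>> )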
>>
> Does defragmenting (all of) the XFS backed OSDs help?
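> (Checking and defragmenting would look roughly like this - just a sketch,
> the device and mount point are examples:
>
>   # report the fragmentation factor of an OSD's XFS filesystem
>   xfs_db -r -c frag /dev/sdb1
>   # defragment the mounted OSD filesystem
>   xfs_fsr -v /var/lib/ceph/osd/ceph-0
> )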
> 
>> Has anybody seen the same effect?
>>
> I have nothing anywhere near that full, but I can confirm that XFS
> fragments worse than ext4 and the less said about BTRFS, the better. ^.^
> Also defragmenting (not that they needed it) ext4 volumes felt more
> lightweight than XFS.
> 
> Since you now have ext4 OSDs, how about doing an osd bench and fio on those
> compared to XFS-backed ones?
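> (Roughly along these lines, as a sketch - osd id, path and sizes are just
> examples:
>
>   # built-in OSD write benchmark (1GB in 4MB blocks by default)
>   ceph tell osd.0 bench
>   # fio directly on the OSD's data filesystem (remove the file afterwards)
>   fio --name=osdtest --filename=/var/lib/ceph/osd/ceph-0/fio.tmp \
>       --size=1G --bs=4M --rw=write --direct=1 --ioengine=libaio
> )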
> 
> Other than the above, Mark listed a number of good reasons besides
> fragmentation why OSDs (HDDs) become slower as they get fuller.
> 
> Christian
>> Udo
>>
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


