Improving Performance with more OSDs?

I'm looking to improve the raw performance of my small setup (2 compute nodes, 
2 OSDs), which is only used for hosting KVM images.

Raw read/write is roughly 200/35 MB/s. Starting 4+ VMs simultaneously pushes 
iowait over 30%, though the system keeps chugging along.
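
For reference, numbers like these can be reproduced with rados bench against a 
scratch pool (the pool name here is just an example):

  # write test, keeping the objects so the read test has data to read back
  rados bench -p scratch 60 write --no-cleanup
  # sequential read test against the objects written above
  rados bench -p scratch 60 seq
  # remove the leftover benchmark objects
  rados -p scratch cleanup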

Budget is limited ... :(

I plan to upgrade my SSD journals to something better than the Samsung 840 
EVOs (Intel 520/530?).
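
From what I've read, what matters most for a journal SSD is its O_DSYNC write 
speed, which is where the 840 EVO apparently falls down. A rough way to test a 
candidate drive is a direct+dsync dd run (destructive, so only against an 
unused device; /dev/sdX is a placeholder):

  # sustained 4k sync-write test, roughly what the journal workload looks like
  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync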

One of the things I see mentioned a lot in blogs etc. is how Ceph's performance 
improves as you add more OSDs, and that the quality of the disks does not 
matter so much as the quantity.

How does this work? Does Ceph stripe reads and writes across the OSDs to 
improve performance?
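
My rough understanding so far, in case I have it wrong: an RBD image is split 
into 4 MB objects by default, each object is hashed into a placement group, and 
CRUSH maps each PG to a set of OSDs, so I/O to different parts of an image is 
spread over all the spindles. Something like this should show it (the image 
name and object name are just examples):

  # "order 22" means 2^22 = 4 MB backing objects
  rbd info rbd/vm-disk-1
  # show which PG and which OSDs a given backing object maps to
  ceph osd map rbd rbd_data.1234abcd.0000000000000005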

If I add 3 cheap OSDs (500 GB - 1 TB) to each node, each with a 10 GB SSD 
journal partition, could I expect a big improvement in performance?
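
To make that concrete, roughly what I had in mind (ceph-deploy syntax; the 
hostname, data device and journal partition are placeholders):

  # one new OSD on /dev/sdc, journal on a 10 GB partition of the existing SSD
  ceph-deploy osd create node1:sdc:/dev/sda5
  # with more OSDs the pool probably needs more placement groups as well
  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256

(256 is just an example; the usual guidance is roughly OSDs * 100 / replicas, 
rounded up to a power of two.)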

What sort of redundancy should I set up? Currently it's min_size=1, size=2. 
Capacity is not an issue (we already have 150% more space than we need); 
redundancy and performance are more important.
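
For reference, these are the knobs I mean (the pool name "rbd" is an 
assumption on my part):

  # check the current replication settings
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # e.g. go to 3 copies and require 2 available before accepting I/O
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2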

Now that I think about it, we can live with the slow write performance, but 
reducing iowait would be *really* good.

thanks,
-- 
Lindsay


