Re: misc performance tuning queries (related to OpenStack in particular)

> So, a quick correction based on Michael's response. In question 4, I should
> not have made any reference to Ceph objects, since objects are not striped
> (per Michael's response). Instead, I should simply have said "Ceph VM image"
> instead of "Ceph objects". A Ceph VM image consists of thousands of objects,
> and those objects are striped/spread across multiple OSDs on multiple
> servers. In that situation, what's the answer to #4?

It depends on which Linux bonding mode is in use: some modes load-share
on transmit only, some on receive only, some do both, and some only
provide active/passive fault tolerance. I have Ceph OSD hosts using LACP
(bond-mode 802.3ad), and they load-share on both receive and transmit.
We're using a pair of bonded 1GbE links for the Ceph public network and
another pair of bonded 1GbE links for the cluster network. The issues
we've seen with 1GbE are the added complexity, the shallow buffers on our
1GbE top-of-rack switch gear (Cisco 4948-10G), and the fact that not all
flows are equal (4x 1GbE != 4GbE).
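For anyone wanting to try this, the bonds look roughly like the sketch
below (Debian/Ubuntu-style /etc/network/interfaces with ifenslave). The
interface names, addresses, and the layer3+4 hash policy are illustrative
placeholders rather than an exact copy of our config:

    # Ceph public network bond: 2x 1GbE, LACP (802.3ad)
    auto bond0
    iface bond0 inet static
        address 192.168.10.11              # placeholder public-network address
        netmask 255.255.255.0
        bond-slaves eth0 eth1              # the two 1GbE links for the public net
        bond-mode 802.3ad                  # LACP; load shares on tx and rx
        bond-miimon 100                    # link monitoring interval (ms)
        bond-xmit-hash-policy layer3+4     # hash on IP+port so flows spread across links
        bond-lacp-rate fast

    # Ceph cluster (replication) network bond: another 2x 1GbE pair
    auto bond1
    iface bond1 inet static
        address 192.168.20.11              # placeholder cluster-network address
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
        bond-lacp-rate fast

The switch side needs a matching LACP port-channel on each pair of ports.
Also keep in mind that 802.3ad hashes each flow onto a single member
link, which is exactly why a single stream never sees more than 1GbE and
why 4x 1GbE != 4GbE in practice.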

-- 

Kyle
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



