Re: Possible to bind one osd with a specific network adapter?


 



James,
Thank you!
No, I have not separated the public and cluster networks yet. They are on the same switch. As I don't have many nodes now, the switch won't be a bottleneck for the moment.

------------------ Original ------------------
From:  "James Harper"<james.harper@xxxxxxxxxxxxxxxx>;
Date:  Sat, Jun 22, 2013 12:41 PM
To:  "Da Chun"<ngugc@xxxxxx>; "ceph-users"<ceph-users@xxxxxxxxxxxxxx>;
Subject:  RE: [ceph-users] Possible to bind one osd with a specific networkadapter?

>
> Hi List,
> Each of my OSD nodes has 5 Gb network adapters and many OSDs, one OSD
> per disk. They are all connected with a Gb switch.
> Currently I get an average of 100MB/s read/write speed. To improve the
> throughput further, the network bandwidth will be the bottleneck, right?

Do you already have separate networks for public and cluster?

>
> I can't afford to replace all the adapters and the switch with 10Gb ones. How
> can I improve the throughput with the current gear?
>
> My first thought is to use bonding, as we have multiple adapters. But bonding
> has a performance cost and surely cannot multiply the throughput. It also
> depends on switch support.

LACP bonding should be okay. Each connection will only be 1Gbit/s, but if you have multiple clients and multiple connections you could see improved aggregate performance.
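For reference, a minimal sketch of an LACP bond on a Debian-style system (the interface names eth0/eth1, the bond name and the address are just placeholders, and the switch needs a matching LACP port group):

    # /etc/network/interfaces -- requires the ifenslave package
    auto bond0
    iface bond0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad               # LACP
        bond-miimon 100                 # link monitoring interval (ms)
        bond-xmit-hash-policy layer3+4  # spread connections across links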

If you want to use plain round-robin ordering at layer 2 instead, play with the net/ipv4/tcp_reordering sysctl to improve things. I do this and iperf gives me 2Gbit/s throughput, but with an increase in CPU use, of course.
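If you try the round-robin route, the relevant knobs are roughly the following (the value 10 is only an example; the kernel default is 3 and the right number depends on your setup):

    # bond in round-robin mode instead of LACP
    bond-mode balance-rr

    # allow more out-of-order segments before TCP treats them as loss
    sysctl -w net.ipv4.tcp_reordering=10
    # make it persistent across reboots
    echo "net.ipv4.tcp_reordering = 10" >> /etc/sysctl.d/60-bonding.conf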

> My second thought is to group the adapters and OSDs. For example, we have
> three adapters called A1, A2, A3, and 6 OSDs called O1, O2, ..., O6. Let O1 & O2
> use A1 exclusively, O3 & O4 use A2 exclusively, and O5 & O6 use A3 exclusively.
> So they are separate groups, and each group has its own disks and adapters,
> which are not shared. Only CPU & memory resources are shared between groups.
>
>

I tested something similar. Each server has two disks and two adapters assigned to the cluster network, with each adapter on a different subnet. As long as each OSD can reach each IP address (because it has adapters on both networks) it should be fine, and is probably better than bonding.
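As a rough ceph.conf sketch of that layout (the subnets, addresses and OSD numbering are made up for illustration; the public/cluster network and per-daemon cluster addr options do exist, but check the documentation for your release):

    [global]
        public network  = 192.168.0.0/24
        ; two cluster subnets, one per adapter
        cluster network = 10.0.1.0/24, 10.0.2.0/24

    [osd.0]
        cluster addr = 10.0.1.11    ; pinned to the first adapter
    [osd.1]
        cluster addr = 10.0.2.11    ; pinned to the second adapter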

Actual multipath would be nice for the public network, but LACP should give you an aggregate increase even if individual connections are still limited to the adapter link speed.

James

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
