Re: [Lsf] Preliminary Agenda and Activities for LSF

On 03/30/2011 04:02 PM, James Bottomley wrote:
On Wed, 2011-03-30 at 07:58 +0200, Hannes Reinecke wrote:
On 03/30/2011 01:09 AM, Shyam_Iyer@xxxxxxxx wrote:

Let me back up here.. this has to be thought of not only in the traditional Ethernet sense but also in a Data Centre Bridged environment. I shouldn't have wandered into the multipath constructs..

I think the statement on not going to the same LUN was a little erroneous. I meant different /dev/sdXs.. and hence different block I/O queues.

Each I/O queue could be thought of as a bandwidth queue class being serviced through a corresponding network adapter's queue (assuming a multiqueue-capable adapter).

Let us say /dev/sda (through eth0) and /dev/sdb (through eth1) form a cgroup bandwidth group with a weight of 20% of the I/O bandwidth. The user has configured this weight thinking that it will correspond to, say, 200Mb of bandwidth.
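
(Purely to illustrate the weight configuration above: a minimal userspace sketch, assuming the v1 blkio controller is mounted at /sys/fs/cgroup/blkio; the group name and the weight value are made up, and the actual share depends on the weights of the sibling groups.)

/* Sketch: put a task into a blkio cgroup with a proportional weight.
 * Paths, group name and the weight value (200 on the 100-1000 scale)
 * are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	char pid[32];

	/* hypothetical group for the /dev/sda + /dev/sdb workload */
	mkdir("/sys/fs/cgroup/blkio/storage_grp", 0755);

	/* proportional weight; ~20% if the sibling weights sum to 800 */
	write_str("/sys/fs/cgroup/blkio/storage_grp/blkio.weight", "200");

	/* move the current task into the group */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_str("/sys/fs/cgroup/blkio/storage_grp/tasks", pid);

	return 0;
}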

Let us say the network bandwidth on the corresponding network queues was reduced by the DCB-capable switch... We still need an SLA of 200Mb of I/O bandwidth, but the underlying dynamics have changed.

In such a scenario the option is to move I/O to a different bandwidth priority queue in the network adapter. This could be moving I/O to a new network queue in eth0 or another queue in eth1..

This requires mapping the block queue to the new network queue.

One way of solving this is what is getting into the open-iscsi world, i.e. creating a session tagged with the relevant DCB priority; the session thus gets mapped to the relevant tc queue, which ultimately maps to one of the network adapter's multiple queues..
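
(To make that mapping concrete: a minimal sketch, not the actual open-iscsi implementation, of how a session's TCP socket could be tagged with a priority so that the tc/mqprio layer steers its traffic to the matching adapter queue; the priority value 4 is an arbitrary assumption.)

/* Sketch: tag a socket with a priority.  skb->priority inherited from
 * SO_PRIORITY is what tc (e.g. mqprio) uses to pick the tx class and
 * hence the hardware queue. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

static int tag_session_socket(int sockfd, int dcb_prio)
{
	if (setsockopt(sockfd, SOL_SOCKET, SO_PRIORITY,
		       &dcb_prio, sizeof(dcb_prio)) < 0) {
		perror("SO_PRIORITY");
		return -1;
	}
	return 0;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	return tag_session_socket(fd, 4) ? 1 : 0;
}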

But when multipath fails over to a different session path, the DCB bandwidth priority will not move with it..

Ok, one could argue that it is a user mistake to have configured bandwidth priorities differently, but it may so happen that the bandwidth priority was just dynamically changed by the switch for that particular queue.

Although I gave an example of a DCB environment, we could definitely look at doing a 1:n mapping of block queues to network adapter queues for non-DCB environments too..

That sounds convoluted enough to warrant its own slot :-)

No, seriously. I think it would be good to have a separate slot
discussing DCB (be it FCoE or iSCSI) and cgroups.
And how to best align these things.

OK, I'll go for that ... Data Centre Bridging; experiences, technologies
and needs ... something like that.  What about virtualisation and open
vSwitch?

Hmm. Not qualified enough to talk about the latter; I was more envisioning the storage-related aspects here (multiqueue mapping, QoS classes, etc.).
With virtualisation and open vSwitch we're more on the network side of things; I doubt open vSwitch can do DCB.
And even if it could, virtio certainly can't :-)

Cheers,

Hannes
--
Dr. Hannes Reinecke		      zSeries & Storage
hare@xxxxxxx			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


