Re: [Lsf] Preliminary Agenda and Activities for LSF

On Wed, 2011-03-30 at 07:58 +0200, Hannes Reinecke wrote:
> On 03/30/2011 01:09 AM, Shyam_Iyer@xxxxxxxx wrote:
> >
> > Let me back up here.. this has to be thought of not only in the
> > traditional Ethernet sense but also in a Data Centre Bridged
> > environment. I shouldn't have wandered into the multipath constructs..
> >
> > I think the statement about not going to the same LUN was a little
> > erroneous. I meant different /dev/sdXs.. and hence different block I/O
> > queues.
> >
> > Each I/O queue could be thought of as a bandwidth queue class being
> > serviced through a corresponding network adapter queue (assuming a
> > multiqueue-capable adapter).
> >
> > Let us say /dev/sda (through eth0) and /dev/sdb (through eth1) form a
> > cgroup bandwidth group with a weight of 20% of the I/O bandwidth. The
> > user has configured this weight thinking it will correspond to, say,
> > 200Mb of bandwidth.
> >
> > Now let us say the network bandwidth on the corresponding network
> > queues was reduced by the DCB-capable switch...
> > We still need an SLA of 200Mb of I/O bandwidth, but the underlying
> > dynamics have changed.
> >
> > In such a scenario the option is to move I/O to a different bandwidth
> > priority queue in the network adapter. This could mean moving I/O to a
> > new network queue in eth0 or to another queue in eth1..
> >
> > This requires mapping the block queue to the new network queue.
> >
> > One way of solving this is what is going into the open-iscsi world,
> > i.e. creating a session tagged with the relevant DCB priority; the
> > session then gets mapped to the relevant tc queue, which ultimately
> > maps to one of the network adapter's multiple queues..
> >
> > But when multipath fails over to a different session path, the DCB
> > bandwidth priority will not move with it..
> >
> > OK, one could argue that it is a user mistake to have configured the
> > bandwidth priorities differently, but it may also happen that the
> > bandwidth priority was just dynamically changed by the switch for that
> > particular queue.
> >
> > Although I gave an example of a DCB environment, we could definitely
> > look at doing a 1:n mapping of block queues to network adapter queues
> > for non-DCB environments too..
> >
> That sounds convoluted enough to warrant its own slot :-)
> 
> No, seriously. I think it would be good to have a separate slot 
> discussing DCB (be it FCoE or iSCSI) and cgroups.
> And how to best align these things.

OK, I'll go for that ... Data Centre Bridging: experiences, technologies
and needs ... something like that.  What about virtualisation and Open
vSwitch?
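
One thing worth teasing out in that slot: the cgroup weight the user
configures is only a proportional share, not an absolute bandwidth figure,
which is part of why the 200Mb expectation above falls apart.  A minimal
sketch of setting such a weight (assuming the v1 blkio controller is
mounted at /sys/fs/cgroup/blkio and a hypothetical group called iscsi-grp
already exists) would be roughly:

/* Sketch: set a proportional I/O weight on an existing blkio cgroup.
 * The mount point and the group name "iscsi-grp" are assumptions for
 * illustration.  blkio.weight is a relative share against sibling
 * groups (100..1000 with CFQ), not an absolute bandwidth.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/sys/fs/cgroup/blkio/iscsi-grp/blkio.weight";
        const char *weight = "200";     /* ~20% only if all weights sum to 1000 */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, weight, strlen(weight)) < 0) {
                perror("write");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}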
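
On the DCB side, the user-space end of tagging a (software iSCSI) session
is essentially just setting the priority on its TCP socket; a rough sketch
(the helper is illustrative, and the priority only lands on a particular
hardware queue if the adapter's DCB/mqprio mapping says so):

/* Sketch: tag a socket with a priority so that a DCB/mqprio-aware stack
 * can steer its traffic onto the matching tc/hardware queue.  The helper
 * name and the priority value are illustrative; what a given priority
 * maps to depends entirely on the adapter's DCB/mqprio configuration.
 */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_PRIORITY
# define SO_PRIORITY 12         /* asm-generic value on Linux */
#endif

int tag_with_priority(int sockfd, int prio)
{
        /* Sets skb->priority for traffic sent on this socket. */
        if (setsockopt(sockfd, SOL_SOCKET, SO_PRIORITY,
                       &prio, sizeof(prio)) < 0) {
                perror("setsockopt(SO_PRIORITY)");
                return -1;
        }
        return 0;
}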

James



