Re: How to verify multichannel

You can also "cat /proc/fs/cifs/Stats" and see the requests sent per
channel, which can be helpful.  For example, it let us see that prior
to Aurelien's recent patch series (which is targeted for the 5.8
kernel) we only received a slight benefit in large I/O workloads from
multichannel, since reads and writes would be sent on the same channel
(other request types would be spread out across the channels).  With
Aurelien's patch series, even with only one network adapter, I am
seeing significant performance gains for large I/O workloads,
especially for encrypted or signed workloads, where the extra
parallelism across cifsd threads helps.  With multiple network
adapters on the client and the server, we can probably improve this
scenario quite a bit more - by choosing which adapters to use (and
eventually adding integration with a userspace Witness Protocol
helper to notify about network interface changes).
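As a concrete sketch of that check, the snippet below tallies per-channel SMB request counts from Stats-style text. The sample format is an illustrative approximation only; the actual layout of /proc/fs/cifs/Stats varies across kernel versions:

```shell
# Tally SMB request counts per channel from Stats-style output.
# NOTE: this sample is an illustrative approximation; the real format
# of /proc/fs/cifs/Stats varies across kernel versions.
stats_sample='1) \\server\share
Channel: 1
SMBs: 120
Channel: 2
SMBs: 115'

# Sum the per-channel request counts.
total=$(printf '%s\n' "$stats_sample" | awk '/^SMBs:/ {sum += $2} END {print sum}')
echo "total requests across channels: $total"
```

On a real system you would pipe `cat /proc/fs/cifs/Stats` into the same kind of filter and compare the counters before and after a workload to see how requests were spread across channels.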

On Tue, May 19, 2020 at 11:15 AM Xiaoli Feng <xifeng@xxxxxxxxxx> wrote:
>
> Thanks, Aurélien, for the detailed information. It's very thorough and
> useful.
>
> Now I can see the multichannel info in the network capture when mounting with
> the option "max_channels=2". If I don't specify it, the client only opens one
> channel. My smb.conf setup below works for multichannel. It seems the server
> and client don't require multiple network interfaces, and network teaming
> isn't needed either.
>
> But when I test the speedup, nothing changes. I use two VMs on the same host,
> each with 4 CPUs and 1 GB of memory. Maybe that's the problem.
>
> Thanks so much.
>
> ----- Original Message -----
> > From: "Aurélien Aptel" <aaptel@xxxxxxxx>
> > To: "Xiaoli Feng" <xifeng@xxxxxxxxxx>, linux-cifs@xxxxxxxxxxxxxxx
> > Sent: Tuesday, May 19, 2020 1:17:11 AM
> > Subject: Re: How to verify multichannel
> >
> > Hi Xiaoli,
> >
> > Xiaoli Feng <xifeng@xxxxxxxxxx> writes:
> > > Hello,
> > >
> > > When I test multichannel, I don't know how to verify that it works. I just
> > > capture network packets to see whether the server and client communicate
> > > over multiple IPs, but from my tests they don't. Does anyone know how to
> > > verify multichannel?
> >
> > Testing whether it works and checking for a speed improvement are,
> > unfortunately, two different things.
> >
> > Source code
> > ===========
> >
> > First you need the latest patchset which is available on my github remote:
> >
> >     https://github.com/aaptel/linux.git
> >
> > You can use the "multichan-page" branch, or you can extract the last
> > patches from it (those with "multichannel:" in the title). These patches add
> > multichannel support to the main read/write codepaths from memory pages.
> >
> > Client setup
> > ============
> >
> > Then you need to mount with "vers=3.11,multichannel,max_channels=N" where N
> > will be the total number of channels the client will try to open.
> >
> > Do a network capture of the mount and run a couple of commands in the
> > mount point (ls, cat, stat, ...).
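For reference, a mount invocation along those lines might be assembled as below. The server, share, mount point, and credentials path are all placeholders, and the sketch only prints the command, since actually mounting requires root and a reachable server:

```shell
# Assemble a cifs multichannel mount command.
# //srv/share, /mnt/cifs and the credentials file are placeholders.
opts="vers=3.11,multichannel,max_channels=2,credentials=/root/smb.cred"
cmd="mount -t cifs //srv/share /mnt/cifs -o $opts"

# Print rather than execute: mounting needs root and a live server.
echo "$cmd"
```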
> >
> > If the client also has multiple interfaces, make sure you capture the
> > traffic on all of them (otherwise you might miss the other channels).
> >
> > Interface list check
> > ====================
> >
> > Look at the trace in wireshark and use "smb2" filter.
> >
> > Look for "Ioctl Response FSCTL_QUERY_NETWORK_INTERFACE_INFO", if the
> > server has multiple interfaces you should see a list.
> >
> > You can also look at /proc/fs/cifs/DebugData to see the list of
> > interfaces the client saw and has connected to.
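As a rough sketch of that check, the snippet below pulls the channel count out of DebugData-style text. The sample is an illustrative approximation only, as the real /proc/fs/cifs/DebugData layout differs between kernel versions:

```shell
# Extract the channel count from DebugData-style output.
# NOTE: this sample is an illustrative approximation; the real layout
# of /proc/fs/cifs/DebugData differs between kernel versions.
debug_sample='Sessions:
1) Address: 192.168.1.10 Uses: 1 Session Status: 1
Number of channels: 2'

channels=$(printf '%s\n' "$debug_sample" | awk -F': ' '/Number of channels/ {print $2}')
echo "channels in use: $channels"
```

On a real system, `grep -i channel /proc/fs/cifs/DebugData` after mounting is a quick way to spot the equivalent lines.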
> >
> > Multichannel check
> > ==================
> >
> > Still in wireshark:
> > - right-click on the first "Negotiate protocol" packet
> > - "Colorize conversation" > "F5 TCP"
> >
> > You can repeat this for each channel to clearly see a different color for
> > each one.
> >
> > I have made some screenshots, maybe this can help:
> >
> > https://imgur.com/a/j3IBewF
> >
> > You can see the alternating colors in my traces, showing the different
> > channels being used.
> >
> > Speedup check
> > =============
> >
> > Now this is the hard part.
> >
> > If you test with VMs and the interfaces are "virtual", then when you open 2
> > channels you are actually just dividing the bandwidth of the virtual bus
> > by 2, so you won't see much speedup.
> >
> > To see a speedup you need to limit the bandwidth of each server
> > interface, for example to 1MB/s each, so that together the 2 channels
> > might reach 2MB/s.
> >
> > You can look into iperf to check raw bandwidth and tc to limit it:
> > - https://iperf.fr/
> > - https://netbeez.net/blog/how-to-use-the-linux-traffic-control/
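As a sketch of that bandwidth-limiting step, the snippet below generates tc tbf commands that would cap two interfaces at roughly 1MB/s (8mbit/s) each. The interface names eth1/eth2 are placeholders, and the sketch only prints the commands, since applying them requires root:

```shell
# Generate tc commands that would cap each NIC at ~1MB/s (8mbit/s).
# eth1 and eth2 are placeholder interface names; applying these rules
# requires root, so this sketch only prints the commands.
tc_cmds=$(for dev in eth1 eth2; do
    echo "tc qdisc add dev $dev root tbf rate 8mbit burst 32kbit latency 400ms"
done)
echo "$tc_cmds"
```

With both interfaces capped this way, an iperf run per interface should show ~8mbit/s each, and a multichannel transfer using both should approach twice that.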
> >
> > Similarly, if the VM has only 1 CPU you might not see much
> > improvement, as it will switch from one NIC to the other sequentially
> > instead of working in parallel.
> >
> > This gets tricky very quick...
> >
> >
> > > Setup:
> > > server is samba server in linux upstream.
> > > client is linux upstream.
> > > smb.conf is:
> > > [global]
> > > interfaces = eth1, eth2, team0
> > > server multi channel support = yes
> > > vfs objects = recycle aio_pthread
> > > aio read size = 1
> > > aio write size = 1
> > > [cifs]
> > > path=/mnt/cifs
> > > writeable=yes
> >
> > This looks OK, but I've mostly used Windows Server 2019 in my tests.
> >
> > Let me know how it goes.
> >
> > Cheers,
> > --
> > Aurélien Aptel / SUSE Labs Samba Team
> > GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
> > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
> > GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)
> >
> >
>


-- 
Thanks,

Steve



