Re: [Patch v5 0/3] Introduce a driver to support host accelerated access to Microsoft Azure Blob for Azure VM

On Wed, Oct 13, 2021 at 12:58:55AM +0000, Long Li wrote:
> > Subject: RE: [Patch v5 0/3] Introduce a driver to support host accelerated access
> > to Microsoft Azure Blob for Azure VM
> > 
> > > Subject: Re: [Patch v5 0/3] Introduce a driver to support host
> > > accelerated access to Microsoft Azure Blob for Azure VM
> > >
> > > Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> writes:
> > >
> > > > On Fri, Oct 08, 2021 at 01:11:02PM +0200, Vitaly Kuznetsov wrote:
> > > >> Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> writes:
> > > >>
> > > >> ...
> > > >> >
> > > >> > Not to mention the whole crazy idea of "let's implement our REST
> > > >> > api that used to go over a network connection over an ioctl instead!"
> > > >> > That's the main problem that you need to push back on here.
> > > >> >
> > > >> > What is forcing you to put all of this into the kernel in the
> > > >> > first place?  What's wrong with the userspace network
> > > >> > connection/protocol that you have today?
> > > >> >
> > > >> > Does this mean that we now have to implement all REST apis that
> > > >> > people dream up as ioctl interfaces over a hyperv transport?
> > > >> > That would be insane.
> > > >>
> > > >> As far as I understand, the purpose of the driver is to replace a "slow"
> > > >> network connection to API endpoint with a "fast" transport over
> > > >> Vmbus.
> > > >
> > > > Given that the network connection is already over vmbus, how is this
> > > > "slow" today?  I have yet to see any benchmark numbers anywhere :(
> > > >
> > > >> So what if instead of implementing this new driver we just use
> > > >> Hyper-V Vsock and move the API endpoint to the host?
> > > >
> > > > What is running on the host in the hypervisor that is supposed to be
> > > > handling these requests?  Isn't that really on some other guest?
> > > >
> > >
> > > Long,
> > >
> > > would it be possible to draw a simple picture for us describing the
> > > backend flow of the feature, both with the network connection and with
> > > this new driver? We're struggling to understand which particular
> > > bottleneck the driver is trying to eliminate.
> > 
> > Thank you for this great suggestion. I'm preparing some diagrams describing
> > the problem and will send them soon.
> > 
> 
> Please find the pictures describing the problem and data flow before and after this driver.
> 
> existing_blob_access.jpg shows the current method of accessing Blob over HTTP.
> fastpath_blob_access.jpg shows Blob access through this driver.
> 
> This driver enables the Blob application to use the host's native network to access the Data Block server directly. These host networks are the backbone of Azure. They are RDMA capable, but they are not available for use by VMs due to security requirements.

Please wrap your lines when responding...

Anyway, this shows that you are trying to work around a crazy network
design by adding lots of kernel code and a custom user/kernel api.

Please just fix your network design instead: put the traffic for this
"blob api" on the RDMA network so that you get the same throughput that
this odd one-off ioctl would provide.

That way you also get the proper flow control, error handling,
encryption, and all the other goodness that a real network connection
provides, instead of this custom, one-off, fragile ioctl command that
requires custom userspace code to handle and would need to be maintained
by yourself for the next 40+ years.

Please do it right; do not force the kernel and userspace to do foolish
things because your network designers do not want to do the real work
here.
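
To be concrete, here is a minimal sketch of the guest side of the vsock
alternative Vitaly mentioned above; the host-side port number and the
request below are made up for illustration and are not from this
patchset.  Once the connection is up it is an ordinary stream socket, so
the existing REST request/response path needs no custom ioctl:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	/* Guest-to-host vsock connection (hv_sock exposes AF_VSOCK in
	 * the guest); VMADDR_CID_HOST always addresses the host. */
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = VMADDR_CID_HOST,
		.svm_port   = 5000,	/* hypothetical host service port */
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("vsock");
		return 1;
	}

	/* From here on this is a normal socket: the blob REST request
	 * can go over it unchanged, with the usual kernel flow control
	 * and error handling. */
	const char req[] = "GET /container/blob HTTP/1.1\r\nHost: host\r\n\r\n";
	write(fd, req, sizeof(req) - 1);
	close(fd);
	return 0;
}

The same point holds if the endpoint sits on the RDMA-backed network
instead: from the application's point of view it is still just a socket,
and none of this needs new kernel code in the guest.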

thanks,

greg k-h


