Re: [phodav PATCH 4/4] spice-webdavd: Automount shared folder on Windows

Hi

On Thu, Sep 17, 2015 at 3:35 PM, Marc-André Lureau <mlureau@xxxxxxxxxx> wrote:


----- Original Message -----
> Hi
> On Thu, Sep 17, 2015 at 2:47 PM, Marc-André Lureau <mlureau@xxxxxxxxxx>
> wrote:
>
>
> Hi
>
> ----- Original Message -----
>
> > How do you prevent having multiple drives pointing to the spice folder?
> >
> >
> > Currently, I don't.
> >
> > The only way to connect multiple drives is if you restart the service,
> > because the drives will disconnect after restarting the guest.
> >
> > I will add a disconnect function to the quit function in the service,
> > so that if you disable or restart the service, the drive will be unmapped.
> >
>
> Can't you enumerate existing mappings and only add one if it's necessary?
>
>
> Yes, there seems to be an API for that, so I can try to implement it.
>
> Still, I'll make a disconnect function anyway, because it doesn't make sense
> to have a drive mapped when the webdavd service is not even running.
>
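A rough sketch of the enumeration mentioned above, using the standard WNet API (illustrative only, not part of the patch; the remote-name comparison is an assumption, since Windows may store a different name for a WebDAV mapping than the URL passed in):

#include <windows.h>
#include <winnetwk.h>
#include <string.h>
#include <glib.h>

/* Returns TRUE if a connected drive already points at remote_name. */
static BOOL
share_already_mapped (const char *remote_name)
{
  HANDLE handle;
  BOOL found = FALSE;
  const gsize buffer_size = 16 * 1024;
  char *buffer;

  if (WNetOpenEnumA (RESOURCE_CONNECTED, RESOURCETYPE_DISK, 0, NULL, &handle) != NO_ERROR)
    return FALSE;

  buffer = g_malloc (buffer_size);

  for (;;)
    {
      NETRESOURCEA *res = (NETRESOURCEA *) buffer;
      DWORD count = (DWORD) -1;   /* "as many entries as fit" */
      DWORD size = buffer_size;
      DWORD i;

      if (WNetEnumResourceA (handle, &count, buffer, &size) != NO_ERROR)
        break;

      for (i = 0; i < count && !found; i++)
        found = res[i].lpRemoteName != NULL &&
                strcmp (res[i].lpRemoteName, remote_name) == 0;

      if (found)
        break;
    }

  g_free (buffer);
  WNetCloseEnum (handle);
  return found;
}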

That could be annoying; I would rather not destroy user-defined mappings if possible.

It won't touch user-defined mappings, just the Spice client drive. I am trying to disconnect
using the remote name, but that doesn't seem to work; I can, however, remember which letter
I mapped and disconnect that.
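A minimal sketch of that fallback (unmap_drive and mapped_drive_letter are illustrative names, not code from the patch):

#include <windows.h>
#include <winnetwk.h>
#include <glib.h>

static gchar mapped_drive_letter;  /* set when map_drive () succeeds */

static void
unmap_drive (void)
{
  gchar local_name[3];
  DWORD ret;

  if (mapped_drive_letter == 0)
    return;

  local_name[0] = mapped_drive_letter;
  local_name[1] = ':';
  local_name[2] = 0;

  /* force the disconnect so a stale drive does not linger after the service stops */
  ret = WNetCancelConnection2A (local_name, 0, TRUE);
  if (ret != NO_ERROR)
    g_warning ("could not unmap %s (error %lu)", local_name, ret);

  mapped_drive_letter = 0;
}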

The big problem is that on Windows, if the drive is not connected and you accidentally try to open it,
bad things WILL happen (Explorer hangs for around 10 seconds or more). I think this is fixed in W10,
but I'm not entirely sure.
 

>
>
> > > ---
> > > Changes since RFC:
> > > - Calling WNetAddConnection2() blindly was slow and caused many
> > >   threads to spawn.
> > > - Now only call it once, when it is sure that it will connect.
> > > - Now connects to a * drive, instead of always Z:
> > > - Thread is now spawned and joined every time run_service() is called.
> > > ---
> > > Makefile.am | 4 +++
> > > spice/spice-webdavd.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++++++
> > > 2 files changed, 80 insertions(+)
> > >
> > > diff --git a/Makefile.am b/Makefile.am
> > > index 485417b..df9a73e 100644
> > > --- a/Makefile.am
> > > +++ b/Makefile.am
> > > @@ -87,6 +87,10 @@ spice_webdavd_LDADD = \
> > > $(PIE_LDFLAGS) \
> > > $(NULL)
> > >
> > > +if OS_WIN32
> > > +spice_webdavd_LDADD += -lnetapi32 -lmpr
> > > +endif
> > > +
> > > deps.txt:
> > > $(AM_V_GEN)rpm -qa | grep $(host_os) | sort | unix2dos > $@
> > >
> > > diff --git a/spice/spice-webdavd.c b/spice/spice-webdavd.c
> > > index f9c5cf1..3940f1c 100644
> > > --- a/spice/spice-webdavd.c
> > > +++ b/spice/spice-webdavd.c
> > > @@ -737,11 +737,81 @@ open_mux_path (const char *path)
> > > mux_queue = output_queue_new (G_OUTPUT_STREAM (mux_ostream));
> > > }
> > >
> > > +#ifdef G_OS_WIN32
> > > +static gchar
> > > +get_free_drive_letter(void)
> > > +{
> > > + const guint32 max_mask = 1 << 25;
> > > + guint32 drives;
> > > + guint32 mask;
> > > + gint i;
> > > +
> > > + drives = GetLogicalDrives ();
> > > +
> > > + for (i = 0; i < 26; i++)
> > > + {
> > > + mask = max_mask >> i;
> > > + if ((drives & mask) == 0)
> > > + return 'Z' - i;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static gpointer
> > > +map_drive(gpointer user_data)
> > > +{
> > > + GCancellable * cancel_map = user_data;
> > > + gchar drive_letter;
> > > + gchar local_name[3];
> > > + gchar remote_name[] = "http://localhost:9843/";
> > > + NETRESOURCE net_resource;
> > > + guint32 retval;
> > > + gint i;
> > > +
> > > + for (i = 0; i < 5; ++i)
> > > + {
> > > + if (g_cancellable_is_cancelled (cancel_map))
> > > + return NULL;
> > > + else
> > > + g_usleep (G_USEC_PER_SEC);
> > > + }
> >
> > It looks like this would fail if the guest is started without a client
> > connected (or not sharing a folder).
> >
> > I'm not sure I follow.
> > This is supposed to return NULL precisely if the client is not connected
> > or not sharing a folder. Windows webdavd will loop and call run_service()
> > until sharing is enabled, which will then NOT cancel this function,
> > and we map the drive.
>
> ah, so the service keeps running, and the next time the client connects, the
> thread will be started again?
>
> No, the thread starts only after the service is started for the first time.
> When the user disconnects, the service
> currently doesn't notice on Windows (read_thread is still blocked), so when a
> different user connects,

Very often VMs are started without a client connected.


Of course, but that is not an issue. The client was looping and trying to connect even before my patches.

When the VM is started, it will spawn the thread, try to connect, and join it back after 5 seconds. Then it will loop and try again.
When the client connects and enables sharing, the next time the thread is spawned, it will connect and stop looping.
 
> the drive is still mapped, but when the new user enables sharing, the
> original drive will work, as it connects
> to the same remote name (http://localhost:9843/).
>
> This is still an issue, because if the new user doesn't enable sharing, the
> drive will still be there, but won't
> be accessible. But that is an issue for another patch (and bug).

At least it would be nice to put this limitation in the commit message.

OK, I will put it there.
 

> > Sorry I didn't mean what the thread was doing, but rather why we need a
> > thread for this task.
>
> Well I could use a g_task or g_idle, or something like that, but g_idle isn't
> cancellable, and g_task

g_idle is cancellable.

Hmm, I couldn't find it in the documentation. Usually such functions have some kind of GCancellable parameter.
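For what it's worth, there is indeed no GCancellable parameter on g_idle_add(); the usual way to "cancel" an idle source is to remove it by id. A tiny sketch with illustrative names:

#include <glib.h>

static guint map_idle_id;

static gboolean
map_drive_idle_cb (gpointer user_data)
{
  /* would map the drive here */
  map_idle_id = 0;
  return G_SOURCE_REMOVE;   /* one-shot */
}

static void
schedule_map_idle (void)
{
  map_idle_id = g_idle_add (map_drive_idle_cb, NULL);
}

static void
cancel_map_idle (void)
{
  if (map_idle_id != 0)
    {
      g_source_remove (map_idle_id);   /* cancels the idle before it runs */
      map_idle_id = 0;
    }
}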
 

> seemed too complicated for this. When I tried to do this in the main_thread
> (as it just has to be a

GTask is supposed to be simpler (although I don't use it myself ;)

> different thread then read_thread), I ran into race conditions between the
> read_thread and map_drive,
> especially, when using sleep between polling for cancellable.
> Spawning a thread for a few seconds just seemed to be less complicated.

That hardly convinces me; threads are more racy by nature.

That's true, but since I just need a few lines of code in the thread, and it's not
accessing anything externally (other than the cancellable), the thread API is simpler for what I need.

With GLib functions (g_task, g_idle, GSimpleAsyncResult), I need to supply stuff
I either don't need or don't know what it's for. I just need something that will
call a function after 5 seconds and can be cancelled.
That's 2-3 parameters (callback, cancellable, maybe a time).

Since I don't have that much experience with GLib's async functions, I chose a thread.
For me it's simpler to implement and to test, it's fewer lines of code, and since the function
it runs is simple, I don't think there is any advantage in using something else.
Even more so if the other functions spawn a thread internally anyway.

I tried using a timer and removing it before it fired, but sometimes it fired even when it wasn't supposed to.
I could pass a GCancellable to the timeout instead and let it fire, but then I would have multiple timeouts
firing at the same time for no reason (because I would have cancelled the cancellable).
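For reference, the timeout-plus-GCancellable variant would look roughly like this (a sketch with illustrative names, not the patch code):

#include <glib.h>
#include <gio/gio.h>

static gboolean
map_drive_timeout_cb (gpointer user_data)
{
  GCancellable *cancel_map = user_data;

  if (!g_cancellable_is_cancelled (cancel_map))
    {
      /* client still connected after 5 seconds: map the drive here */
    }

  g_object_unref (cancel_map);
  return G_SOURCE_REMOVE;   /* one-shot */
}

static void
schedule_map_drive (GCancellable *cancel_map)
{
  /* the timeout always fires, but does nothing if cancel_map was cancelled */
  g_timeout_add_seconds (5, map_drive_timeout_cb, g_object_ref (cancel_map));
}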
 

Trying to read from the driver right when the service is started is wrong; at the least, it should happen after the virtio port and the listening socket have been successfully opened.

Well, I'm not actually reading anything. And I think that read_thread starts only after
everything is prepared, because if I start the service with sharing enabled, everything works fine.
If the user is not connected or sharing is not enabled, the only thing my patches do is make the new thread
wait for a few seconds and get cancelled when the main loop ends (which will happen almost immediately).
 

I can imagine it may block, because the WNetAddConnection2() call is synchronous and will probe the service. But if it's blocking, then it will not be able to handle its communication work.

Why not wrap WNetAddConnection2 in a GSimpleAsyncResult thread and run it once the client connection is established?
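Roughly, that suggestion would look like the following GTask sketch. It is illustrative only, and it assumes there is a point where we know the client connection is established, which is exactly the sticking point in the reply below; all names here are hypothetical:

#include <gio/gio.h>
#include <windows.h>
#include <winnetwk.h>

static void
map_drive_task (GTask *task, gpointer source_object,
                gpointer task_data, GCancellable *cancellable)
{
  NETRESOURCEA nr = { 0, };
  DWORD ret;

  nr.dwType = RESOURCETYPE_DISK;
  nr.lpLocalName = "Z:";                        /* or a free letter */
  nr.lpRemoteName = "http://localhost:9843/";

  /* synchronous call, but it only blocks this worker thread */
  ret = WNetAddConnection2A (&nr, NULL, NULL, 0);
  g_task_return_boolean (task, ret == NO_ERROR);
}

static void
map_drive_done_cb (GObject *source, GAsyncResult *res, gpointer user_data)
{
  g_debug ("drive mapping %s",
           g_task_propagate_boolean (G_TASK (res), NULL) ? "succeeded" : "failed");
}

/* to be called once the client connection is known to be established */
static void
map_drive_async (GCancellable *cancel_map)
{
  GTask *task = g_task_new (NULL, cancel_map, map_drive_done_cb, NULL);

  g_task_run_in_thread (task, map_drive_task);
  g_object_unref (task);
}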

Well, that's the problem. I don't know when the client connects, because there is no signal, callback,
or anything else that would notify me that the client has connected. The only way I know that the client is connected
is that the service stops looping, because it is blocked in read_thread (this was happening before my patches too).

When it blocks, I have no way to call anything that would map the drive, or to notify something to map the drive.
That's why the only way was to start something before the read blocked, wait for a few seconds, and if nothing
told it otherwise, map the drive. If the read didn't block, it would somehow notify that something
that the client is not actually connected, and not to map the drive.

I spent a few weeks trying to implement something using already defined signals, g_poll, even Windows-specific
polling/async reading, but nothing worked.


The bottom line is that even though it is usually better to use something more high-level than a thread,
I think it's safe to use a thread here, and using something more high-level wouldn't bring any advantage.
When I tried implementing something other than a thread, the only thing it brought was a headache,
because it refused to work at all.


> As for why this has to be in a different thread than read_thread: it's because
> if I call map_drive before
> read actually blocks, it returns an error, as if it's not connected. I think
> this is because when we try to map
> the drive, the client sends a message to the guest, and the read has to be ready
> to receive it, which in turn
> lets map_drive actually map the drive.
>
> I tried using every possible way I could think of, even Windows-specific
> polling, but I couldn't make
> anything work, except just plain spawning another thread.
>
> Lukas Venhoda
>

Lukas Venhoda
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/spice-devel
