----- Original Message -----
> Hi
>
> On Thu, Sep 17, 2015 at 2:47 PM, Marc-André Lureau <mlureau@xxxxxxxxxx>
> wrote:
>
> > Hi
> >
> > ----- Original Message -----
> > > How do you prevent from having multiple drives pointing to the spice folder?
> > >
> >
> > Currently, I don't.
> >
> > The only way to connect multiple drives is if you restart the service,
> > because the drives will disconnect after restarting the guest.
> >
> > I will add a disconnect function to the quit function in the service,
> > so that if you disable or restart the service, the drive will be unmapped.
> >
> > Can't you enumerate existing mappings and only add one if it's necessary?
>
> Yes, there seems to be API for that, so I can try to implement it.
>
> Still, I'll make a disconnect function anyway, because it doesn't make sense
> to have a drive mapped when the webdavd service is not even running.
>

That could be annoying, I would rather not destroy user-defined mappings if
possible.

> > > ---
> > > Changes since RFC:
> > > - Calling WNetAddConnection2() blindly was slow and caused many threads
> > >   to spawn.
> > > - Now only call it once, when it is sure that it will connect.
> > > - Now connects to a * drive, instead of always Z:
> > > - Thread is now spawned and joined every time run_service() is called.
> > > ---
> > >  Makefile.am           |  4 +++
> > >  spice/spice-webdavd.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 80 insertions(+)
> > >
> > > diff --git a/Makefile.am b/Makefile.am
> > > index 485417b..df9a73e 100644
> > > --- a/Makefile.am
> > > +++ b/Makefile.am
> > > @@ -87,6 +87,10 @@ spice_webdavd_LDADD = \
> > >  	$(PIE_LDFLAGS) \
> > >  	$(NULL)
> > >
> > > +if OS_WIN32
> > > +spice_webdavd_LDADD += -lnetapi32 -lmpr
> > > +endif
> > > +
> > >  deps.txt:
> > >  	$(AM_V_GEN)rpm -qa | grep $(host_os) | sort | unix2dos > $@
> > >
> > > diff --git a/spice/spice-webdavd.c b/spice/spice-webdavd.c
> > > index f9c5cf1..3940f1c 100644
> > > --- a/spice/spice-webdavd.c
> > > +++ b/spice/spice-webdavd.c
> > > @@ -737,11 +737,81 @@ open_mux_path (const char *path)
> > >    mux_queue = output_queue_new (G_OUTPUT_STREAM (mux_ostream));
> > >  }
> > >
> > > +#ifdef G_OS_WIN32
> > > +static gchar
> > > +get_free_drive_letter (void)
> > > +{
> > > +  const guint32 max_mask = 1 << 25;
> > > +  guint32 drives;
> > > +  guint32 mask;
> > > +  gint i;
> > > +
> > > +  drives = GetLogicalDrives ();
> > > +
> > > +  for (i = 0; i < 26; i++)
> > > +    {
> > > +      mask = max_mask >> i;
> > > +      if ((drives & mask) == 0)
> > > +        return 'Z' - i;
> > > +    }
> > > +
> > > +  return 0;
> > > +}
> > > +
> > > +static gpointer
> > > +map_drive (gpointer user_data)
> > > +{
> > > +  GCancellable *cancel_map = user_data;
> > > +  gchar drive_letter;
> > > +  gchar local_name[3];
> > > +  gchar remote_name[] = "http://localhost:9843/";
> > > +  NETRESOURCE net_resource;
> > > +  guint32 retval;
> > > +  gint i;
> > > +
> > > +  for (i = 0; i < 5; ++i)
> > > +    {
> > > +      if (g_cancellable_is_cancelled (cancel_map))
> > > +        return NULL;
> > > +      else
> > > +        g_usleep (G_USEC_PER_SEC);
> > > +    }
> >
> > It looks like this would fail if the guest is started without a client
> > connected (or not sharing a folder).
>
> I'm not sure I follow.
> This is supposed to return NULL precisely if the client is not connected
> or not sharing a folder. Windows webdavd will loop and call run_service()
> until sharing is enabled, which will then NOT cancel this function,
> and we map the drive.
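
The quoted hunk stops right after the wait loop, before the mapping itself.
The remaining step presumably fills in the NETRESOURCE and issues the
synchronous WNetAddConnection2() call; the following is only a rough sketch of
that step, not the actual remainder of the patch (do_map_drive is a
placeholder name, the error handling is an assumption, and it needs
<windows.h>/<winnetwk.h> plus the -lmpr from the Makefile.am hunk above):

#include <windows.h>
#include <winnetwk.h>
#include <glib.h>

/* Rough sketch only: map a free drive letter to the local WebDAV share. */
static gboolean
do_map_drive (void)
{
  NETRESOURCE net_resource = { 0, };
  gchar local_name[3];
  gchar drive_letter = get_free_drive_letter ();
  guint32 retval;

  if (drive_letter == 0)
    return FALSE;               /* no free letter left between A: and Z: */

  g_snprintf (local_name, sizeof (local_name), "%c:", drive_letter);

  net_resource.dwType = RESOURCETYPE_DISK;
  net_resource.lpLocalName = local_name;
  net_resource.lpRemoteName = "http://localhost:9843/";
  net_resource.lpProvider = NULL;

  /* synchronous: returns only once the WebDAV service answered (or failed) */
  retval = WNetAddConnection2 (&net_resource, NULL, NULL, CONNECT_TEMPORARY);

  return retval == NO_ERROR;
}

In that reading, map_drive() would call something like do_map_drive() once its
wait loop completes without being cancelled.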
> > ah so the service keeps running and next time the client connects, then the
> > thread will be started again?
>
> No, the thread starts only after the service is started for the first time.
> When the user disconnects, the service currently doesn't notice on Windows
> (read_thread is still blocked), so when a different user connects,

Very often VMs are started without a client connected.

> the drive is still mapped, but when the new user enables sharing, the
> original drive will work, as it connects to the same remote name
> (http://localhost:9843/).
>
> This is still an issue, because if the new user doesn't enable sharing,
> the drive will still be there, but won't be accessible. But that is an
> issue for another patch (and bug).

At least it would be nice to put this limitation in the commit message.

> > ...
> >
> > Sorry, I didn't mean what the thread was doing, but rather why we need a
> > thread for this task.
>
> Well, I could use a g_task or g_idle, or something like that, but g_idle
> isn't cancellable, and g_task

g_idle is cancellable.

> seemed too complicated for this. When I tried to do this in the main_thread
> (as it just has to be a

a gtask is supposed to be simpler (although I don't use it myself ;)

> different thread than read_thread), I ran into race conditions between the
> read_thread and map_drive, especially when using sleep between polling for
> the cancellable.
> Spawning a thread for a few seconds just seemed to be less complicated.

It's hardly convincing me, threads are more racy by nature.

Trying to read from the drive right when the service is started is wrong; at
the very least it should happen after the virtio port and the listening socket
have been successfully opened.

I can imagine it may block, because the WNetAddConnection2 call is synchronous
and will probe the service. But if it's blocking, then it will not be able to
handle its communication work.

Why not wrap WNetAddConnection2 in a GSimpleAsync thread and run it once the
client connection is established?

> As for why this has to be in a different thread than read_thread: if I call
> map_drive before read actually blocks, it returns an error, as if it's not
> connected. I think this is because when we try to map the drive, the client
> sends a message to the guest, and the read has to be ready to receive it,
> which in turn lets map_drive actually map the drive.
>
> I tried every possible way I could think of, even Windows-specific polling,
> but I couldn't make anything work, except just plain spawning another
> thread.
>
> Lukas Venhoda

_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/spice-devel
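
For what it's worth, a minimal sketch of the wrapping suggested above, using
GTask (GLib >= 2.36) rather than a hand-rolled thread; start_map_drive and
map_drive_in_thread are hypothetical names, and do_map_drive refers to the
mapping sketch shown earlier, not to existing spice-webdavd code:

/* Sketch only: run the blocking WNetAddConnection2 call in a pool thread,
 * started once the client connection is established. */
static void
map_drive_in_thread (GTask        *task,
                     gpointer      source_object,
                     gpointer      task_data,
                     GCancellable *cancellable)
{
  if (g_cancellable_is_cancelled (cancellable))
    {
      g_task_return_boolean (task, FALSE);
      return;
    }

  /* the synchronous Win32 call blocks this worker thread, not the main loop */
  g_task_return_boolean (task, do_map_drive ());
}

static void
start_map_drive (GCancellable *cancellable)
{
  /* to be called after the virtio port and the listening socket are open */
  GTask *task = g_task_new (NULL, cancellable, NULL, NULL);

  g_task_run_in_thread (task, map_drive_in_thread);
  g_object_unref (task);
}

The cancellable would then replace the polling loop: cancelling it on service
shutdown lets the pending mapping attempt return without touching the drive.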
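
Similarly, sketches of the two other pieces discussed in the thread, both
assumptions rather than code from the patch: unmapping the drive when the
service quits (WNetCancelConnection2) and checking for an existing mapping
with the enumeration API mentioned above (WNetOpenEnum / WNetEnumResource),
so that user-defined mappings are left alone:

/* Sketch only: drop our own mapping on service shutdown. */
static void
unmap_drive (const gchar *local_name)
{
  /* fForce = TRUE drops the mapping even if files are still open on it */
  WNetCancelConnection2 (local_name, 0, TRUE);
}

/* Sketch only: report whether some drive already points at remote_name;
 * a single enumeration pass, for brevity. */
static gboolean
drive_already_mapped (const gchar *remote_name)
{
  HANDLE handle;
  DWORD count = (DWORD) -1;       /* "as many entries as fit in the buffer" */
  DWORD size = 16 * 1024;
  NETRESOURCE *buffer;
  gboolean found = FALSE;
  DWORD i;

  if (WNetOpenEnum (RESOURCE_CONNECTED, RESOURCETYPE_DISK, 0, NULL, &handle) != NO_ERROR)
    return FALSE;

  buffer = g_malloc (size);
  if (WNetEnumResource (handle, &count, buffer, &size) == NO_ERROR)
    {
      for (i = 0; i < count; i++)
        if (buffer[i].lpRemoteName != NULL &&
            g_ascii_strcasecmp (buffer[i].lpRemoteName, remote_name) == 0)
          found = TRUE;
    }

  g_free (buffer);
  WNetCloseEnum (handle);

  return found;
}

map_drive() could then skip WNetAddConnection2 when drive_already_mapped
("http://localhost:9843/") returns TRUE, and the quit path would call
unmap_drive() only for the letter it mapped itself.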