Re: [PATCH rfc v2 0/8] fspool: backend directory

On 24/09/16 00:12, John Ferlan wrote:

On 09/23/2016 11:56 AM, Olga Krishtal wrote:
On 21/09/16 19:17, Maxim Nestratov wrote:
On 20 Sep 2016, at 23:52, John Ferlan <jferlan@xxxxxxxxxx> wrote:



On 09/15/2016 03:32 AM, Olga Krishtal wrote:
Hi everyone, we would like to propose the first implementation of fspool
with a directory backend.

Filesystem pools are a facility to manage filesystem resources similar
to how storage pools manage volume resources. Furthermore, the new API follows
the storage API closely where it makes sense. Uploading/downloading operations
are not defined yet, as it is not obvious how to implement them properly. I guess
we can use some kind of tar to make a stream from a filesystem. Please share
your thoughts on this particular issue.
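
For context, a rough sketch of what a directory-backed fspool definition
might look like, assuming the XML mirrors the storage pool schema as
described above (element names and the path are illustrative only, not
taken from the patches):

    <fspool type='dir'>
      <name>fs_pool</name>
      <target>
        <path>/var/lib/libvirt/filesystems</path>
      </target>
    </fspool>
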
So how do you differentiate this from the existing <pool type="fs">?
Pool type=fs still provides volumes, i.e. block devices rather than filesystems, though this storage pool can mount file systems residing on a source block device.

http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?

Same here: the storage "dir" pool provides files, which are in fact block devices for guests, while the filesystem pool "dir" provides guests with file systems.


I think it'll be confusing to have and differentiate fspool and pool
commands.
Some time ago, we wrote the proposal description and asked for
everyone's advice and opinion.
The aim of fspool is to provide filesystems, not volumes. The simplest
type of fspool is the directory pool, and
it does have a lot in common with storage_backend_fs. However, in the
proposal description we said that
the plan is to add other backends, e.g. storage volumes from a storage
pool as the source of the filesystem, ZFS, etc.
The final API for fspool will be significantly different because of
the needs of those other backends.
Can you please try to leave a blank line between the paragraph you're
responding to and the start of your paragraph, and then another one after it?

Thanks for noticing. It looks better.

Anyway, as I pointed out - that description wasn't in my (short term)
memory. Keeping a trail of pointers to previous stuff helps those who
want to refresh their memory on the history.

I will keep these links in the next versions:
https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html


If you're going to "reuse" things, then using 'src/util/*' is the
way to go rather than trying to drag in the storage_{driver|backend*} APIs.
Crossing driver boundaries is something IIRC we try to avoid.

As I have written before, at the moment we have only one backend for fspool:
directory. It is the simplest backend and only the starting point.
I think it is too early to decide which parts should be moved to src/util/*.
Moreover, as fspool items and storage pool volumes are pretty different, it
is possible that they have very little in common. That said, I would leave
things as they are, but if you insist I can try.


      
I didn't dig through all the patches, but from the few I did look at it
seems as though all that's done is to rip out the guts of stuff not
desired from the storage pool driver and replace it with this new code
attributing all the work to the new author/copyright. IOW: Lots of
places where StoragePool appears to be exactly the same as the FSPool.

I wrote those lines as part of the GPLv2+ boilerplate:
https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html, which I took from
other libvirt parts. And I guess it was natural to change the name and company, wasn't it?
Again, if you insist I can leave out the author/copyright, as it wasn't the aim of this series.


I think you need to find a different means to do what you want. It's not
100% clear what the end goal is. I did download/git am the patches and scan a few of them...
 * In patch 2 you've totally missed how to modify libvirt_public.syms.
 * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
}" blocks aren't done properly in virFSPoolDefFormatBuf.
 * In patch 5 remote_protocol_structs fails check/syntax-check... I
stopped my build-each-patch testing there.
According to the guide I have to do:
'make check' and 'make syntax-check' for every patch.
Always a good plan!

And it was done. 
And yet, as we find out *all the time*, some compilers complain more than
others. Watch the list - we have a CI environment in which we find all
sorts of oddities. In any case, the code in question is:

+    if (def->target.perms.mode != (mode_t) -1 ||
+        def->target.perms.uid != (uid_t) -1 ||
+        def->target.perms.gid != (gid_t) -1 ||
+        def->target.perms.label) {
+        virBufferAddLit(buf, "<permissions>\n");
+        virBufferAdjustIndent(buf, 2);
+        if (def->target.perms.mode != (mode_t) -1)
+        virBufferAsprintf(buf, "<mode>0%o</mode>\n",
+                          def->target.perms.mode);
+        if (def->target.perms.uid != (uid_t) -1)
+            virBufferAsprintf(buf, "<owner>%d</owner>\n",
+                              (int) def->target.perms.uid);
+        if (def->target.perms.gid != (gid_t) -1)
+            virBufferAsprintf(buf, "<group>%d</group>\n",
+                              (int) def->target.perms.gid);
+            virBufferEscapeString(buf, "<label>%s</label>\n",
+                                  def->target.perms.label);
+
+            virBufferAdjustIndent(buf, -2);
+            virBufferAddLit(buf, "</permissions>\n");
+        }
+
+        virBufferAdjustIndent(buf, -2);
+        virBufferAddLit(buf, "</target>\n");

So do you "see" the problem?  The first if has an open {, but the second
one doesn't, although the code is indented and would seemingly want to
have one. The second one has a close }, which gives the impression
something is missing.
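
For reference, a minimal sketch of how that block presumably should read
once the braces and indentation are straightened out (it mirrors the
equivalent <permissions> formatting elsewhere in libvirt; field names are
taken from the quoted hunk):

    if (def->target.perms.mode != (mode_t) -1 ||
        def->target.perms.uid != (uid_t) -1 ||
        def->target.perms.gid != (gid_t) -1 ||
        def->target.perms.label) {
        virBufferAddLit(buf, "<permissions>\n");
        virBufferAdjustIndent(buf, 2);
        /* each single-statement conditional stays brace-less but indented */
        if (def->target.perms.mode != (mode_t) -1)
            virBufferAsprintf(buf, "<mode>0%o</mode>\n",
                              def->target.perms.mode);
        if (def->target.perms.uid != (uid_t) -1)
            virBufferAsprintf(buf, "<owner>%d</owner>\n",
                              (int) def->target.perms.uid);
        if (def->target.perms.gid != (gid_t) -1)
            virBufferAsprintf(buf, "<group>%d</group>\n",
                              (int) def->target.perms.gid);
        /* virBufferEscapeString() is a no-op for a NULL label,
         * so no conditional is needed here */
        virBufferEscapeString(buf, "<label>%s</label>\n",
                              def->target.perms.label);

        virBufferAdjustIndent(buf, -2);
        virBufferAddLit(buf, "</permissions>\n");
    }

    virBufferAdjustIndent(buf, -2);
    virBufferAddLit(buf, "</target>\n");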

Thanks for pointing this out. I will be more attentive.


I will look at libvirt_public.syms one more time.
You added all those APIs into the LIBVIRT_2.0.0 { stanza above
"LIBVIRT_1.3.3", but you'll notice there's a LIBVIRT_2.2.0 { afterwards,
which is where all those APIs should have gone, at least for now.

IOW: when adding new APIs you have to add them to the version-specific
stanza. As of now you'd be adding APIs to 2.3.0, but I doubt we'll make
that cut-off.
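
For illustration, the new entry points would go into a stanza shaped like
this (the symbol name below is a hypothetical placeholder, not one taken
from the series):

    LIBVIRT_2.3.0 {
        global:
            # new fspool entry points get listed here
            virFSPoolCreateXML;
    } LIBVIRT_2.2.0;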

Thanks. Will do.


Just because you added them in 2.0.0 internally, once you go to upstream
them they need to be in the latest stanza.

So this brings me back to my other recent response. I think if we can
figure out what the driver will need, we can then work through the external
API portion. There's just so much going on - trying to get a sense of it all
at once is, well, overwhelming.

Actually, I thought we had already figured it out: it was decided to have a completely separate API
to manage filesystems, similar to the storage pool API, and that only this approach lets containers get the greatest benefit from it.

John

[...]


-- 
Best regards,
Olga
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
