Re: [PATCH v3 01/11] conf: Add some defined NFS Storage Pool mount options

On Wed, Jan 16, 2019 at 12:47:32PM -0500, John Ferlan wrote:
> On 1/16/19 7:00 AM, Daniel P. Berrangé wrote:
> > Hmm, this has gone back to the "Mount options" concept from v1
> > which I don't think is appropriate. In v2 feedback I had suggested
> > that we should have some more explicit XML description for the subset
> > of options we wish to support, with each option potentially using a
> > different syntax, no generic mount options syntax.
> 
> In v1, it seems to me at least, the objection was to arbitrarily named
> mount_opts, so specifically named mount_opts felt like a fair trade-off.

I guess I should have been clearer that I didn't like either part.

> In v2 you noted that the features element was similar to the domain
> features, but then pointed out that you felt we could just default to
> using them. A followup pointed out one concern over OS syntax
> differences, which is essentially why I took this approach - force
> someone to provide the option. If that option used different syntax on
> some other OS, then someone else who cares about that OS could send a
> patch to change the name generated (and we avoid making any assumptions).

If we force apps to provide the option in the XML then it is
more likely to cause incompatibilities, as most apps won't
test whether an OS supports a given option.

If we instead enable them by default in libvirt, then libvirt can
just wrap this in an #ifdef __linux__ so we don't risk breaking
non-Linux OSes.

> > For 'ro' we should use our normal  <readonly/> empty element syntax.
> 
> True we could, but believe it or not readonly is not even defined in
> storagepool.rng. It's in domaincommon.rng, to be used for hypervisor
> options. Adding ro to the mount_opts list was essentially a shortcut to
> include ro alongside the others. I saw nfsvers as a bit different
> since it provides more than a boolean option.

Even though it's a different schema, it is worth using a consistent
element naming approach IMHO.

> Still, beyond NFS I'm trying to envision how we'd supply readonly such
> that it would matter for the pool. Wouldn't networked pools require the
> server side to have set up readonly? For local pools, how much of the
> exposure of the volume would be controllable such that the pool could
> manage the readonly-ness? I suppose a virDomainDiskTranslateSourcePool
> "option" could be associated with the domain readonly attribute.
> 
> Perhaps ro or readonly should just be separate then, because it involves
> a lot more interaction. Does the overhead of the code addition outweigh
> the usefulness of a generic pool readonly option? No one's asked for such
> a feature, so sneaking in ro as an NFS mount option felt reasonable.

We can't predict the future reliably. If we use <mountopts><ro/></mountopts>
and then in future find a need for readonly on other storage pools where
<mountopts> is not relevant, we'll have to invent a second way to specify
readonly. By using <readonly/> we've not tied our hands, so we have more
flexibility to cope with future needs. It may never matter, but I'd
always aim to avoid restricting ourselves.
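
i.e. roughly speaking (purely illustrative - neither form is in the
schema today, and the placement within the pool XML is just a guess):

  <pool type='netfs'>
    ...
    <source>
      <mountopts>
        <ro/>
      </mountopts>
    </source>
  </pool>

vs the generic form that any pool type could later reuse:

  <pool type='netfs'>
    ...
    <readonly/>
  </pool>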

> > For nosetuid, nodev, noexec, I'm still inclined to say we should
> > enable them by default. Despite it being a semantic change, I think
> > it is the right thing for a volume that is being used for storing
> > VM disk images.
> 
> Again, the fear of doing that is that some OS may use different syntax
> (from your v2 response it's possible that FreeBSD mount command options
> may differ from Linux mount command options).
> 
> So, using named options seemed to be a better tradeoff than just
> altering the command generation to add those options for NFS mounts.

If we did it via defaults, then we'd have to be conservative
about what we enable: if we don't know the options are supported
by an OS, we shouldn't enable them in the code when building
for that OS.

BTW, I did just check FreeBSD and it supports nosuid and
noexec but not nodev.

So we'd want

  #ifdef __linux__
  #define opts ",nodev,nosuid,noexec"
  #elif defined(__FreeBSD__)
  #define opts ",nosuid,noexec"
  #else
  #define opts ""
  #endif
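
(and then, purely as a sketch rather than the real backend code, since
the macro expands to a string literal it could just be concatenated
onto whatever base options the netfs backend already passes to mount):

  /* illustrative only - 'nfs_mount_opts' and the "soft" base option
   * are made-up placeholders, not the actual storage backend code */
  static const char *nfs_mount_opts = "soft" opts;
  /* Linux:   "soft,nodev,nosuid,noexec"
   * FreeBSD: "soft,nosuid,noexec"
   * other:   "soft"                      */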


I'm thinking perhaps we should never have been considering the
XML at all for nodev, nosuid, noexec.

Instead it might be better as a host-level configurable rather than
an XML-level one, e.g. a /etc/libvirt/storage.conf file with an
'nfs_extra_mount_opts' config parameter whose default value is
"nodev,nosuid,noexec" on Linux hosts.
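
Something along these lines (hypothetical - neither the file nor the
parameter exists today):

  # /etc/libvirt/storage.conf (hypothetical)
  #
  # Extra options appended to every mount performed by the netfs
  # storage pool backend.
  nfs_extra_mount_opts = "nodev,nosuid,noexec"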

That would let sysadmins enforce a host-wide policy without needing
to worry about whether an application has been updated to set extra
mount options in the XML.

That would avoid most debate about mount options / features
in the XML.

Probably the protocol version is the only thing that is
important for the XML, as it can vary per server.

Read-only is also probably relevant in XML.
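
For example, the pool source might eventually grow something along
these lines (illustrative only - the element names and placement are
not a settled syntax):

  <source>
    <host name='nfs.example.com'/>
    <dir path='/exports/images'/>
    <protocol ver='4'/>
  </source>
  <readonly/>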


> I would think that would be far more confusing, but it seems to me to
> follow how features are assigned/used for specific hypervisors. It still
> requires some sort of mapping of names, and we're not allowing arbitrary
> ones. Does nosuid, noexec, or nodev have meaning beyond that of the
> meaning used for NFS? If the answer is no, then it's a very specific
> feature to a very specific pool.

BTW nosuid, noexec, and nodev have nothing to do with NFS - they're general
options that can be set on any mount type.

> > This provides a generic framework that will work for other storage
> > pool backend features where there's no mount concept involved eg
> > LVM, iSCSI, etc
> > 
> 
> And what "features" would we consider allowing without having to craft a
> different backend?
> 
> Allowing some provided option for LVM commands could result in needing
> to write code that would be able to handle multiple different formats.
> In particular I'm thinking about how thin provisioning is supported.
> It's a very different way to configure, and a completely different means
> of scanning the generated output looking for usable volumes. Previous
> attempts to co-mingle it in the current logical backend were NACK'd, and
> no one has taken up creating a new backend that supports thin pools vs
> thin snapshots (I think the latter is the terminology used; it's been a
> few years).
> 
> For iSCSI we now have the iscsi-direct backend, which perhaps could make
> use of something, but it'd be something fairly specific to some iSCSI
> feature, which is no different (to me at least) from providing some
> specific option for netfs.

There's perhaps a need for <features> under the pool <source> to
make it clear that the valid features are specific to the pool
type. Then, bearing in mind what I said earlier, perhaps we should
just forget the entire features / mount opts idea for XML and use
the host config file.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



