Re: [PATCH v2] storage: vz storage pool support

Hi all, I see a lot of discussion and explanation here about Virtuozzo storage, so I will just add some information. Initially I assumed that the admin or user has properly configured vstorage somewhere on the network (this can be done via DNS records or zeroconf), and that on the host where the pool will be stored the user/admin performs cluster discovery and authorization.

As Maxim has mentioned before, vstorage does not use the concept of users and groups to grant specific users and groups access to specific parts of a cluster. So, anyone authorized to access a cluster can access all of its data. However, you can use additional parameters during mounting to define the mount owner user name, group name and access mode (so you can, for example, mount the cluster read-only). This means that performing chown/chmod etc. has the same effect as in the NFS case. Of course, to perform these operations you need vstorage-client to be installed on the host.


On 08/12/16 16:47, Maxim Nestratov wrote:
08-Dec-16 15:17, John Ferlan wrote:


On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
08-Dec-16 02:22, John Ferlan wrote:

[...]

I see what you mean; however, IMO vstorage should be separate. Maybe
there's another opinion out there, but since you're requiring
"something" else to be installed in order to get WITH_VSTORAGE set
to 1, a separate file is in order.

Not sure they're comparable, but zfs has its own. Having vstorage
separated reduces the chance that some day some incompatible logic is
added/altered in the *_fs.c (or vice versa).
Ok. I will try.

I think you should consider the *_fs.c code to be the "default" of
sorts. That is default file/dir structure with netfs added in. The
vstorage may just be some file system, but it's not something (yet) on
"every" distribution.
Actually, I did not understand what you mean by "be the "default" of
sorts."
As I understand it, what I need to do is create backend_vstorage.c
with all the create/delete/* functionality.
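
(For reference, a separate backend_vstorage.c would plug into the storage driver through its own virStorageBackend table, roughly along the lines sketched below. Apart from virStorageBackendVzStart and virStorageBackendVzfindPoolSources, which come up later in this thread, the pool type constant and the remaining callback names are placeholders such a patch would have to introduce.)

    /* Rough sketch only: apart from virStorageBackendVzStart and
     * virStorageBackendVzfindPoolSources (mentioned later in this thread),
     * the pool type constant and the remaining callbacks are placeholders
     * a real backend_vstorage.c would have to add. */
    virStorageBackend virStorageBackendVstorage = {
        .type = VIR_STORAGE_POOL_VSTORAGE,          /* new pool type: assumption */

        .findPoolSources = virStorageBackendVzfindPoolSources,
        .startPool = virStorageBackendVzStart,
        .stopPool = virStorageBackendVzStop,        /* placeholder */
        .refreshPool = virStorageBackendVzRefresh,  /* placeholder */
        .deleteVol = virStorageBackendVzDeleteVol,  /* placeholder */
    };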

Sorry - I was trying to think of a better way to explain... The 'fs' and
'nfs' pools are a default of sorts because one can "ls" (on UNIX/Linux) or
"dir" (on Windows) and get a list of files.

"ls" and "dir" are inherent to the OS, while in this case vstorage
commands are installed separately.
Once you have mounted your vstorage cluster to a local filesystem, you
can also "ls" it. Thus, I can't see much difference from NFS here.

So if it's more like NFS, then how does one ensure that the local userid
X is the same as the remote userid X? NFS has a root-squashing concept
that results in numerous shall we say "interesting" issues.

Vstorage doesn't have a concept of users. Authentication is done with a per-node password, just once. If authentication passes, a key is stored in /etc/vstorage/clusters/CLUSTER_NAME/auth_digest.key. Then, permissions are set on the mount point during mounting with the -u USER, -g GROUP and -m MODE options
provided to the vstorage-mount command.

Check out the virFileOpen*, virDirCreate, and virFileRemove...

Also what about virFileIsSharedFSType? And the security_selinux.c code for
NFS? If you use cscope, just search on NFS.
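
(For context: the security_selinux.c code uses virFileIsSharedFSType() to detect paths living on NFS and relax labelling there. A vstorage mount would presumably need the same treatment; the sketch below assumes a new VIR_FILE_SHFS_VSTORAGE flag added next to the existing VIR_FILE_SHFS_NFS, which does not exist today.)

    /* Sketch under the assumption that a VIR_FILE_SHFS_VSTORAGE flag is
     * added to util/virfile.h next to VIR_FILE_SHFS_NFS.
     * virFileIsSharedFSType() returns 1 if the path is on one of the
     * requested shared filesystem types, 0 if not, -1 on error. */
    int rc = virFileIsSharedFSType(path,
                                   VIR_FILE_SHFS_NFS |
                                   VIR_FILE_SHFS_VSTORAGE);
    if (rc < 0)
        return -1;
    if (rc == 1)
        VIR_DEBUG("%s is on a shared filesystem, skipping relabelling", path);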

In the virStorageBackendVzStart, I see:

    VSTORAGE_MOUNT -c $pool.source.name $pool.target.path

This call certainly lacks user/group/mode parameters and should be fixed in the next series.
Do you mean fix the default behavior for non-root users?




where VSTORAGE_MOUNT is a build (configure.ac) definition that is the
"Location or name of vstorage-mount program" which would only be set if
the proper package was installed.
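
(A minimal sketch of how the missing options could be appended to that invocation, taking the values from the pool's target <permissions>. This assumes vstorage-mount accepts user/group names as Maxim described plus an octal mode, and it skips the checks a real patch would need when no <permissions> are configured.)

    virCommandPtr cmd = NULL;
    char *user = NULL;
    char *group = NULL;
    int ret = -1;

    /* Map the pool's target <permissions> to names for vstorage-mount. */
    if (!(user = virGetUserName(pool->def->target.perms.uid)) ||
        !(group = virGetGroupName(pool->def->target.perms.gid)))
        goto cleanup;

    cmd = virCommandNewArgList(VSTORAGE_MOUNT,
                               "-c", pool->def->source.name,
                               "-u", user,
                               "-g", group,
                               "-m", NULL);
    virCommandAddArgFormat(cmd, "%o", pool->def->target.perms.mode);
    virCommandAddArg(cmd, pool->def->target.path);

    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;
    ret = 0;

 cleanup:
    virCommandFree(cmd);
    VIR_FREE(user);
    VIR_FREE(group);
    return ret;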

In the virStorageBackendVzfindPoolSources, I see:

    VSTORAGE discover

which I assume generates some list of remote "services" (for lack of a
better term) which can be used as/for pool.source.name in order to be
mounted by the VSTORAGE_MOUNT program.
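
(Roughly, findPoolSources could collect those names along these lines; VSTORAGE here stands for the configure-detected client binary, and the one-cluster-name-per-line output format is an assumption rather than documented behaviour.)

    char *outbuf = NULL;
    char **lines = NULL;
    size_t i;
    virCommandPtr cmd = virCommandNewArgList(VSTORAGE, "discover", NULL);

    virCommandSetOutputBuffer(cmd, &outbuf);
    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    if (!(lines = virStringSplit(outbuf, "\n", 0)))
        goto cleanup;

    for (i = 0; lines[i]; i++) {
        if (*lines[i] == '\0')
            continue;
        /* each non-empty line is treated as a candidate <source><name> */
        VIR_DEBUG("discovered vstorage cluster '%s'", lines[i]);
    }

 cleanup:
    for (i = 0; lines && lines[i]; i++)
        VIR_FREE(lines[i]);
    VIR_FREE(lines);
    VIR_FREE(outbuf);
    virCommandFree(cmd);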

Compare that to NFS, which uses mount, which is included in, well, every
distro I can think of. That's a big difference. Also, let's face it, NFS
has been the essential de facto go-to tool for accessing remote storage for a
long time. Personally, I'd rather see the NFS code split out of the
*_fs.c backend, but I don't have the desire/time to do it - so it stays
as is.

To sum this up, you still think that copy and paste isn't a problem here and will create more value than harm, right?

Maxim

[snip]



--
Best regards,
Olga

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list



