Re: [libvirt-users] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path

On 4/12/2016 5:08 PM, John Ferlan wrote:

On 04/12/2016 03:55 PM, TomK wrote:
On 4/12/2016 3:40 PM, Martin Kletzander wrote:
[ It would be way easier to reply if you didn't top-post ]

On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
Hey John,

Hehe, I got the right guy then.  Very nice!  And very good ideas, but I
may need more time to reread and try them out later tonight.  I'm fully
in agreement about providing more details; you can't be accurate in a
diagnosis if there isn't much data to go on.  This pool option is new to
me.  Please tell me more about it.  I can't find it in the file below, but
maybe it's elsewhere?

( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool
type="netfs"> )


Alright, here are the details:

[root@mdskvm-p01 ~]# rpm -aq|grep -i libvir
libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
libvirt-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
[root@mdskvm-p01 ~]# cat /etc/release
cat: /etc/release: No such file or directory
[root@mdskvm-p01 ~]# cat /etc/*release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@xxxxxxxxxxxxxxxxx"

REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
cpe:/o:scientificlinux:scientificlinux:7.2:ga
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# su - oneadmin
Last login: Sat Apr  9 10:39:25 EDT 2016 on pts/0
Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on ssh:notty
There were 9584 failed login attempts since the last successful login.
[oneadmin@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin)
groups=9869(oneadmin),992(libvirt),36(kvm)
[oneadmin@mdskvm-p01 ~]$ pwd
/var/lib/one
[oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
134320262 drwxr-xr-x. 45 root     root        4096 Apr 12 07:58 ..
[oneadmin@mdskvm-p01 ~]$
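As an aside, whether root actually gets squashed on that mount can be
spot-checked directly on the hypervisor.  A minimal sketch (/var/lib/one is
the mount shown above; which anonymous uid root is mapped to depends on the
server's export options):

# as root: with root_squash the file is created as the anonymous user
# (often nobody), or the write fails outright
touch /var/lib/one/root_squash_test && ls -l /var/lib/one/root_squash_test
rm -f /var/lib/one/root_squash_test

# as oneadmin the same write should succeed and be owned by oneadmin
su - oneadmin -c 'touch /var/lib/one/oneadmin_test && ls -l /var/lib/one/oneadmin_test'
su - oneadmin -c 'rm -f /var/lib/one/oneadmin_test'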


It'd take more time than I have at the present moment to root out what
changed and when for NFS root-squash, but suffice it to say there were some
corner cases, some involving how qemu-img files are generated.  I don't
have the details present in my short-term memory.

[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
         <name>one-38</name>
         <vcpu>1</vcpu>
         <cputune>
                 <shares>1024</shares>
         </cputune>
         <memory>524288</memory>
         <os>
                 <type arch='x86_64'>hvm</type>
                 <boot dev='hd'/>
         </os>
         <devices>
                 <emulator>/usr/libexec/qemu-kvm</emulator>
                 <disk type='file' device='disk'>
                         <source file='/var/lib/one//datastores/0/38/disk.0'/>
                         <target dev='hda'/>
                         <driver name='qemu' type='qcow2' cache='none'/>
                 </disk>
                 <disk type='file' device='cdrom'>
                         <source file='/var/lib/one//datastores/0/38/disk.1'/>
                         <target dev='hdb'/>
                         <readonly/>
                         <driver name='qemu' type='raw'/>
                 </disk>
                 <interface type='bridge'>
                         <source bridge='br0'/>
                         <mac address='02:00:c0:a8:00:64'/>
                 </interface>
                 <graphics type='vnc' listen='0.0.0.0' port='5938'/>
         </devices>
         <features>
                 <acpi/>
         </features>
</domain>

[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
[oneadmin@mdskvm-p01 ~]$

Having/using a root squash via an NFS pool is "easy" (famous last words)

Create some pool XML (taking the example I have)

% cat nfs.xml
<pool type='netfs'>
     <name>rootsquash</name>
     <source>
         <host name='localhost'/>
         <dir path='/home/bzs/rootsquash/nfs'/>
         <format type='nfs'/>
     </source>
     <target>
         <path>/tmp/netfs-rootsquash-pool</path>
         <permissions>
             <mode>0755</mode>
             <owner>107</owner>
             <group>107</group>
         </permissions>
     </target>
</pool>

In this case 107:107 is qemu:qemu and I used 'localhost' as the
hostname, but that can be an FQDN or IP address of the NFS server.
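If in doubt about those ids on a given host, a trivial check:

getent passwd qemu    # e.g. qemu:x:107:107:qemu user:/:/sbin/nologin
id qemu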

You've already seen my /etc/exports
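For the list's benefit, it's something along these lines (a sketch only;
the client range and anon ids here are placeholders, not the actual line):

# /etc/exports
/home/bzs/rootsquash/nfs  *(rw,sync,root_squash,anonuid=107,anongid=107)

followed by an "exportfs -ra" to re-read it after edits.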

virsh pool-define nfs.xml
virsh pool-build rootsquash
virsh pool-start rootsquash
virsh vol-list rootsquash
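A quick sanity pass after that; the volume names depend entirely on what's
sitting in the export, so the listing below is just the expected shape:

virsh pool-info rootsquash      # State should be "running"
virsh pool-refresh rootsquash   # rescan the directory if files were added later
virsh vol-list rootsquash
#  Name     Path
#  ------------------------------------------------
#  disk.0   /tmp/netfs-rootsquash-pool/disk.0
#  disk.1   /tmp/netfs-rootsquash-pool/disk.1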

Now instead of

    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/0/38/disk.0'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2' cache='none'/>
    </disk>

Something like:

   <disk type='volume' device='disk'>
     <driver name='qemu' type='qcow2' cache='none'/>
     <source pool='rootsquash' volume='disk.0'/>
     <target dev='hda'/>
   </disk>

The volume name may be off, but it's perhaps close.  I forget how to do
the readonly bit for a pool (again, my focus is elsewhere).
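For the CDROM it would presumably be the same construct plus <readonly/>,
something like this (volume name 'disk.1' assumed, matching your deployment.0):

   <disk type='volume' device='cdrom'>
     <driver name='qemu' type='raw'/>
     <source pool='rootsquash' volume='disk.1'/>
     <target dev='hdb'/>
     <readonly/>
   </disk>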

Of course you'd have to adjust the nfs.xml above to suit your
environment and see what you get.  The privileges for the pool and the
volumes in the pool become the key to how libvirt decides to "request
access" to each volume.  "disk.1" having only read access is probably not
an issue, since you seem to be using it as a CDROM; however, "disk.0" is
going to be used for read/write and thus would have to be configured
appropriately...
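One way to see what ownership/mode libvirt has recorded for each volume
(and hence what it will try to request), sketched with the assumed volume
names:

virsh vol-dumpxml disk.0 --pool rootsquash | grep -A4 '<permissions>'
virsh vol-dumpxml disk.1 --pool rootsquash | grep -A4 '<permissions>'

# and on the NFS server the files themselves need matching ownership, e.g.
# chown 107:107 /home/bzs/rootsquash/nfs/disk.0   (uid/gid as in the pool XML)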


Cheers,
Tom K.
-------------------------------------------------------------------------------------


Living on earth is expensive, but it includes a free trip around the
sun.

On 4/12/2016 11:45 AM, John Ferlan wrote:
On 04/12/2016 10:58 AM, TomK wrote:
Hey Martin,

Thanks very much.  Appreciate you jumping in on this thread.
Can you provide some more details about which libvirt version you have
installed?  I know I've made changes in this space in more recent versions
(not the most recent).  I'm no root_squash expert, but I was the last to
change things in this space, so that makes me partially fluent ;-) in
NFS/root_squash speak.

I'm always lost in how we handle *all* the corner cases, even ones that are
not used anywhere at all, while still caring about the conditions we have
in the code.  Especially when it's constantly changing.  So thanks for
jumping in.  I only replied because nobody else did and I had only the
tiniest clue as to what could happen.

I saw the post, but was heads down somewhere else.  Suffice it to say,
trying to swap in root_squash is a painful exercise...


John

[...]

_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users

Thanks John!  Appreciated again.

No worries; handle what's on the plate now and earmark this for checking once you have some free cycles.  I can temporarily hop on one leg by using Martin Kletzander's workaround (it's a POC at the moment).

I'll have a look at your instructions further, but wanted to find out: is that nfs.xml config a one-time thing?  I'm spinning these up at will via the OpenNebula GUI, and if I have to update it for each VM, that breaks the cloud provisioning.  I'll go over your notes again.  I'm optimistic. :)

Cheers,
Tom Kacperski.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


