[PATCHv2 2/3] storage: document gluster pool

Add support for a new <pool type='gluster'>, similar to
RBD and Sheepdog.  Terminology-wise, a gluster volume
forms a libvirt storage pool; within the gluster volume,
individual files are treated as libvirt storage volumes.
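
A rough usage sketch (pool name and input file are illustrative, and
assume a build where the new gluster backend is compiled in); vol-list
then reports each file in the gluster volume as a libvirt storage
volume:

  $ virsh pool-define gluster-pool.xml
  $ virsh pool-start myglusterpool
  $ virsh vol-list myglusterpool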

* docs/schemas/storagepool.rng (poolgluster): New pool type.
* docs/formatstorage.html.in: Document gluster.
* docs/storage.html.in: Likewise, and contrast it with netfs.
* tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
* tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
* tests/storagepoolxml2xmltest.c (mymain): Likewise.

Signed-off-by: Eric Blake <eblake@xxxxxxxxxx>
---
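For context, a file in such a pool can already be attached to a guest
with the existing <disk type='network'> protocol='gluster' syntax that
the new "Example disk attachment" section points to; a sketch follows
(domain name, volume path, and target device are purely illustrative):

  $ cat gluster-disk.xml
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volname/myfile'>
      <host name='localhost'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>
  $ virsh attach-device $dom gluster-disk.xml
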
 docs/formatstorage.html.in                   | 11 ++--
 docs/schemas/storagepool.rng                 | 21 +++++++
 docs/storage.html.in                         | 90 +++++++++++++++++++++++++++-
 tests/storagepoolxml2xmlin/pool-gluster.xml  |  8 +++
 tests/storagepoolxml2xmlout/pool-gluster.xml | 11 ++++
 tests/storagepoolxml2xmltest.c               |  1 +
 6 files changed, 136 insertions(+), 6 deletions(-)
 create mode 100644 tests/storagepoolxml2xmlin/pool-gluster.xml
 create mode 100644 tests/storagepoolxml2xmlout/pool-gluster.xml

diff --git a/docs/formatstorage.html.in b/docs/formatstorage.html.in
index 90eeaa3..e74ad27 100644
--- a/docs/formatstorage.html.in
+++ b/docs/formatstorage.html.in
@@ -21,8 +21,10 @@
       <code>iscsi</code>, <code>logical</code>, <code>scsi</code>
       (all <span class="since">since 0.4.1</span>), <code>mpath</code>
       (<span class="since">since 0.7.1</span>), <code>rbd</code>
-      (<span class="since">since 0.9.13</span>), or <code>sheepdog</code>
-      (<span class="since">since 0.10.0</span>). This corresponds to the
+      (<span class="since">since 0.9.13</span>), <code>sheepdog</code>
+      (<span class="since">since 0.10.0</span>),
+      or <code>gluster</code> (<span class="since">since
+      1.1.4</span>). This corresponds to the
       storage backend drivers listed further along in this document.
     </p>
     <h3><a name="StoragePoolFirst">General metadata</a></h3>
@@ -129,7 +131,7 @@
       <dt><code>host</code></dt>
       <dd>Provides the source for pools backed by storage from a
         remote server (pool types <code>netfs</code>, <code>iscsi</code>,
-        <code>rbd</code>, <code>sheepdog</code>). Will be
+        <code>rbd</code>, <code>sheepdog</code>, <code>gluster</code>). Will be
         used in combination with a <code>directory</code>
         or <code>device</code> element. Contains an attribute <code>name</code>
         which is the hostname or IP address of the server. May optionally
@@ -160,7 +162,8 @@
       <dt><code>name</code></dt>
       <dd>Provides the source for pools backed by storage from a
         named element (pool types <code>logical</code>, <code>rbd</code>,
-        <code>sheepdog</code>).  Contains a string identifier.
+        <code>sheepdog</code>, <code>gluster</code>).  Contains a
+        string identifier.
         <span class="since">Since 0.4.5</span></dd>
       <dt><code>format</code></dt>
       <dd>Provides information about the format of the pool (pool
diff --git a/docs/schemas/storagepool.rng b/docs/schemas/storagepool.rng
index 66d3c22..17a3ae8 100644
--- a/docs/schemas/storagepool.rng
+++ b/docs/schemas/storagepool.rng
@@ -21,6 +21,7 @@
         <ref name='poolmpath'/>
         <ref name='poolrbd'/>
         <ref name='poolsheepdog'/>
+        <ref name='poolgluster'/>
       </choice>
     </element>
   </define>
@@ -145,6 +146,17 @@
     </interleave>
   </define>

+  <define name='poolgluster'>
+    <attribute name='type'>
+      <value>gluster</value>
+    </attribute>
+    <interleave>
+      <ref name='commonmetadata'/>
+      <ref name='sizing'/>
+      <ref name='sourcegluster'/>
+    </interleave>
+  </define>
+
   <define name='sourceinfovendor'>
     <interleave>
       <optional>
@@ -555,6 +567,15 @@
     </element>
   </define>

+  <define name='sourcegluster'>
+    <element name='source'>
+      <interleave>
+        <ref name='sourceinfohost'/>
+        <ref name='sourceinfoname'/>
+      </interleave>
+    </element>
+  </define>
+
   <define name='IscsiQualifiedName'>
     <data type='string'>
       <param name="pattern">iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-zA-Z0-9\.\-]+(:.+)?</param>
diff --git a/docs/storage.html.in b/docs/storage.html.in
index 1181444..339759d 100644
--- a/docs/storage.html.in
+++ b/docs/storage.html.in
@@ -114,6 +114,9 @@
       <li>
         <a href="#StorageBackendSheepdog">Sheepdog backend</a>
       </li>
+      <li>
+        <a href="#StorageBackendGluster">Gluster backend</a>
+      </li>
     </ul>

     <h2><a name="StorageBackendDir">Directory pool</a></h2>
@@ -275,10 +278,12 @@
         <code>nfs</code>
       </li>
       <li>
-        <code>glusterfs</code>
+        <code>glusterfs</code> - use the glusterfs FUSE file system
+        (to bypass the file system completely, see
+        the <a href="#StorageBackendGluster">gluster</a> pool).
       </li>
       <li>
-        <code>cifs</code>
+        <code>cifs</code> - use the SMB (Samba) or CIFS file system
       </li>
     </ul>

@@ -647,5 +652,86 @@
       The Sheepdog pool does not use the volume format type element.
     </p>

+    <h2><a name="StorageBackendGluster">Gluster pools</a></h2>
+    <p>
+      This provides a pool based on native Gluster access.  Gluster is
+      a distributed file system that can be exposed to the user via
+      FUSE, NFS or SMB (see the <a href="#StorageBackendNetfs">netfs</a>
+      pool for that usage); but for minimal overhead, the ideal access
+      is via native access (only possible for QEMU/KVM compiled with
+      libgfapi support).
+
+      The cluster and storage volume must already be running, and it
+      is recommended that the volume be configured with <code>gluster
+      volume set $volname storage.owner-uid=$uid</code>
+      and <code>gluster volume set $volname
+      storage.owner-gid=$gid</code> for the uid and gid that qemu will
+      be run as.  It may also be necessary to
+      set <code>rpc-auth-allow-insecure on</code> for the glusterd
+      service, as well as <code>gluster volume set $volname
+      server.allow-insecure on</code>, to allow access to the gluster
+      volume.
+
+      <span class="since">Since 1.1.4</span>
+    </p>
+
+    <h3>Example pool input</h3>
+    <p>A gluster volume corresponds to a libvirt storage pool.  If a
+      gluster volume could be mounted as <code>mount -t glusterfs
+      localhost:/volname /some/path</code>, then the following example
+      will describe the same pool without having to create a local
+      mount point.  Remember that with gluster, the mount point can be
+      through any machine in the cluster, and gluster will
+      automatically pick the ideal transport to the actual bricks
+      backing the gluster volume, even if on a different host than the
+      one named in the <code>host</code> designation.  It is also
+      permitted to
+      use <code>&lt;name&gt;volume/dir&lt;/name&gt;</code> to limit
+      the pool to a subdirectory within the gluster volume.</p>
+    <pre>
+      &lt;pool type="gluster"&gt;
+        &lt;name&gt;myglusterpool&lt;/name&gt;
+        &lt;source&gt;
+          &lt;name&gt;volname&lt;/name&gt;
+          &lt;host name='localhost'/&gt;
+        &lt;/source&gt;
+      &lt;/pool&gt;</pre>
+
+    <h3>Example volume output</h3>
+    <p>Libvirt storage volumes associated with a gluster pool
+      correspond to the files that can be found when mounting the
+      gluster volume.  The <code>name</code> is the path relative to
+      the effective mount specified for the pool; and
+      the <code>key</code> is the path including the gluster volume
+      name and any subdirectories specified by the pool.</p>
+    <pre>
+       &lt;volume&gt;
+         &lt;name&gt;myfile&lt;/name&gt;
+         &lt;key&gt;volname/myfile&lt;/key&gt;
+         &lt;source&gt;
+         &lt;/source&gt;
+         &lt;capacity unit='bytes'&gt;53687091200&lt;/capacity&gt;
+         &lt;allocation unit='bytes'&gt;53687091200&lt;/allocation&gt;
+       &lt;/volume&gt;</pre>
+
+    <h3>Example disk attachment</h3>
+    <p>Files within a gluster volume can be attached to QEMU guests.
+    Information about attaching a Gluster image to a
+    guest can be found
+    at the <a href="formatdomain.html#elementsDisks">format domain</a>
+    page.</p>
+
+    <h3>Valid pool format types</h3>
+    <p>
+      The Gluster pool does not use the pool format type element.
+    </p>
+
+    <h3>Valid volume format types</h3>
+    <p>
+      The Gluster pool does not use the volume format type element;
+      for now, all files within a gluster pool are assumed to have raw
+      format.
+    </p>
+
   </body>
 </html>
diff --git a/tests/storagepoolxml2xmlin/pool-gluster.xml b/tests/storagepoolxml2xmlin/pool-gluster.xml
new file mode 100644
index 0000000..ae9401f
--- /dev/null
+++ b/tests/storagepoolxml2xmlin/pool-gluster.xml
@@ -0,0 +1,8 @@
+<pool type='gluster'>
+  <source>
+    <name>volume</name>
+    <host name='localhost'/>
+  </source>
+  <name>mygluster</name>
+  <uuid>65fcba04-5b13-bd93-cff3-52ce48e11ad8</uuid>
+</pool>
diff --git a/tests/storagepoolxml2xmlout/pool-gluster.xml b/tests/storagepoolxml2xmlout/pool-gluster.xml
new file mode 100644
index 0000000..5844c1a
--- /dev/null
+++ b/tests/storagepoolxml2xmlout/pool-gluster.xml
@@ -0,0 +1,11 @@
+<pool type='gluster'>
+  <name>mygluster</name>
+  <uuid>65fcba04-5b13-bd93-cff3-52ce48e11ad8</uuid>
+  <capacity unit='bytes'>0</capacity>
+  <allocation unit='bytes'>0</allocation>
+  <available unit='bytes'>0</available>
+  <source>
+    <host name='localhost'/>
+    <name>volume</name>
+  </source>
+</pool>
diff --git a/tests/storagepoolxml2xmltest.c b/tests/storagepoolxml2xmltest.c
index 0ae9b29..0ab72e4 100644
--- a/tests/storagepoolxml2xmltest.c
+++ b/tests/storagepoolxml2xmltest.c
@@ -100,6 +100,7 @@ mymain(void)
     DO_TEST("pool-iscsi-multiiqn");
     DO_TEST("pool-iscsi-vendor-product");
     DO_TEST("pool-sheepdog");
+    DO_TEST("pool-gluster");

     return ret==0 ? EXIT_SUCCESS : EXIT_FAILURE;
 }
-- 
1.8.3.1
