Re: Ovirt and gluster 7/8


 



Thanks Strahil,

I'm attaching a text file with the output of 'gluster volume info' and one with the output of 'gluster volume set help'.

I might not have made myself clear. My gluster cluster is running glusterfs 5.13 (I know it's old, but I want to wait until the legacy systems are retired before upgrading). My other servers use different versions of glusterfs, from 3.7.9 to 6.0. The Ovirt server was using version 7.9 without a problem; when it was upgraded to 8.4 it stopped working properly (mounting, creating directories, and copying files all work, but creating files does not). I only want to downgrade the Ovirt server from 8.4 to 7.9, without touching the gluster cluster or the other servers.
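To be concrete, the downgrade I have in mind would look roughly like this (a sketch; the exact version strings and the versionlock plugin package name are assumptions to be checked against what the repos actually carry):

```shell
# See which glusterfs versions the enabled repos still offer.
dnf --showduplicates list glusterfs glusterfs-fuse

# Downgrade the client stack to 7.9 (version strings below are examples;
# use whatever the listing above actually shows for 7.9).
dnf downgrade glusterfs-7.9-1.el8 glusterfs-fuse-7.9-1.el8 \
    glusterfs-libs-7.9-1.el8 glusterfs-client-xlators-7.9-1.el8

# Pin the packages so the next 'dnf update' does not pull 8.x back in.
dnf install -y python3-dnf-plugin-versionlock
dnf versionlock add 'glusterfs*'
```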

On 3/28/21 10:30 AM, Strahil Nikolov wrote:

Can you provide the volume info?

I have previously managed to downgrade v7.0 to 6.6, but it's risky, so nobody will recommend it. Previously, the shard size was 64MB by default. It will be interesting to see the current and default value (gluster volume set help).
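A quick way to see the current and default value without reading through the whole help output (assuming the volume name MRIData from your attachment):

```shell
# Shows the effective value; if the option was never set, the default
# for the installed version is reported.
gluster volume get MRIData features.shard-block-size

# Full listing of all options with defaults and descriptions:
gluster volume set help
```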


Best Regards,
Strahil Nikolov

On Sun, Mar 28, 2021 at 9:20, Valerio Luccio

Hello all,

I have a gluster storage that is still running gluster 5. Unfortunately, some legacy systems that cannot be updated use this storage (I'm hoping to replace them in the next 12 months; I do all of this by my lonesome, so it takes some time).

On a CentOS 8 server I run Ovirt to manage some VMs. It was using gluster 7 to connect to the storage, and that worked. A couple of days ago I did an update and Ovirt updated my gluster packages to 8; since then I get 'Failed to get trusted.glusterfs.shard.file-size' when trying to create files from the Ovirt server (the other servers have no issues). Two related questions:

  1. Is there a way to fix this, maybe by changing a parameter in a conf file?
  2. If 1. is not possible, do you see any pitfalls in downgrading the packages to gluster 7 after downloading them from http://mirror.centos.org ?
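Regarding 1., in case it helps diagnosis: the failing xattr can be inspected directly on a brick (a sketch; the file path is a placeholder, and this runs on one of the storage nodes, not on the Ovirt client):

```shell
# Dump the shard-related xattrs of a file the Ovirt client fails on.
# The brick path comes from 'gluster volume info'; the file path below
# is only an example.
getfattr -d -m 'trusted.glusterfs.shard' -e hex \
    /gluster1/data/path/to/affected/file
```

If trusted.glusterfs.shard.file-size is present on the brick but the 8.4 client still reports the error, the problem is on the client side rather than with the data itself.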
Thanks in advance.
--
As a result of Coronavirus-related precautions, NYU and the Center for Brain Imaging operations will be managed remotely until further notice.
All telephone calls and e-mail correspondence are being monitored remotely during our normal business hours of 9am-5pm, Monday through Friday.
 
For MRI scanner-related emergency, please contact: Keith Sanzenbach at keith.sanzenbach@xxxxxxx and/or Pablo Velasco at pablo.velasco@xxxxxxx
For computer/hardware/software emergency, please contact: Valerio Luccio at valerio.luccio@xxxxxxx
For TMS/EEG-related emergency, please contact: Chrysa Papadaniil at chrysa@xxxxxxx
For CBI-related administrative emergency, please contact: Jennifer Mangan at jennifer.mangan@xxxxxxx

Valerio Luccio     (212) 998-8736
Center for Brain Imaging     4 Washington Place, Room 158
New York University     New York, NY 10003

"In an open world, who needs windows or gates ?"
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


[Attachment 1: output of 'gluster volume info']
Volume Name: MRIData
Type: Distributed-Replicate
Volume ID: e051ac20-ead1-4648-9ac6-a29b531515ca
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (2 + 1) = 18
Transport-type: tcp
Bricks:
Brick1: hydra1:/gluster1/data
Brick2: hydra1:/gluster2/data
Brick3: hydra1:/arbiter/1 (arbiter)
Brick4: hydra1:/gluster3/data
Brick5: hydra2:/gluster1/data
Brick6: hydra1:/arbiter/2 (arbiter)
Brick7: hydra2:/gluster2/data
Brick8: hydra2:/gluster3/data
Brick9: hydra2:/arbiter/1 (arbiter)
Brick10: hydra3:/gluster1/data
Brick11: hydra3:/gluster2/data
Brick12: hydra3:/arbiter/1 (arbiter)
Brick13: hydra3:/gluster3/data
Brick14: hydra4:/gluster1/data
Brick15: hydra3:/arbiter/2 (arbiter)
Brick16: hydra4:/gluster2/data
Brick17: hydra4:/gluster3/data
Brick18: hydra4:/arbiter/1 (arbiter)
Options Reconfigured:
features.cache-invalidation: off
nfs.exports-auth-enable: on
nfs.disable: on
transport.address-family: inet
cluster.data-self-heal: on
cluster.metadata-self-heal: on
cluster.entry-self-heal: on
cluster.self-heal-daemon: on
cluster.quorum-type: auto
server.allow-insecure: on
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
storage.owner-uid: 36
storage.owner-gid: 36

[Attachment 2: output of 'gluster volume set help']
Option: cluster.lookup-unhashed
Default Value: on
Description: This option if set to ON, does a lookup through all the sub-volumes, in case a lookup didn't return any result from the hash subvolume. If set to OFF, it does not do a lookup on the remaining subvolumes.

Option: cluster.lookup-optimize
Default Value: on
Description: This option if set to ON enables the optimization of -ve lookups, by not doing a lookup on non-hashed subvolumes for files, in case the hashed subvolume does not return any result. This option disregards the lookup-unhashed setting, when enabled.

Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts balancing out the cluster, and logs will appear in log files

Option: cluster.min-free-inodes
Default Value: 5%
Description: After the system has only N% of inodes left, warnings start to appear in log files

Option: cluster.rebalance-stats
Default Value: off
Description: This option if set to ON displays and logs the  time taken for migration of each file, during the rebalance process. If set to OFF, the rebalance logs will only display the time spent in each directory.

Option: cluster.subvols-per-directory
Default Value: (null)
Description: Specifies the directory layout spread. Takes number of subvolumes as default value.

Option: cluster.readdir-optimize
Default Value: off
Description: This option if set to ON enables the optimization that allows DHT to requests non-first subvolumes to filter out directory entries.

Option: cluster.rebal-throttle
Default Value: normal
Description:  Sets the maximum number of parallel file migrations allowed on a node during the rebalance operation. The default value is normal and allows a max of [($(processing units) - 4) / 2), 2]  files to be migrated at a time. Lazy will allow only one file to be migrated at a time and aggressive will allow max of [($(processing units) - 4) / 2), 4]

Option: cluster.lock-migration
Default Value: off
Description:  If enabled this feature will migrate the posix locks associated with a file during rebalance

Option: cluster.force-migration
Default Value: off
Description: If disabled, rebalance will not migrate files that are being written to by an application

Option: cluster.weighted-rebalance
Default Value: on
Description: When enabled, files will be allocated to bricks with a probability proportional to their size.  Otherwise, all bricks will have the same probability (legacy behavior).

Option: cluster.entry-change-log
Default Value: on
Description: This option exists only for backward compatibility and configuring it doesn't have any effect

Option: cluster.read-subvolume
Default Value: (null)
Description: inode-read fops happen only on one of the bricks in replicate. Afr will prefer the one specified using this option if it is not stale. Option value must be one of the xlator names of the children. Ex: <volname>-client-0 till <volname>-client-<number-of-bricks - 1>

Option: cluster.read-subvolume-index
Default Value: -1
Description: inode-read fops happen only on one of the bricks in replicate. AFR will prefer the one specified using this option if it is not stale. allowed options include -1 till replica-count - 1

Option: cluster.read-hash-mode
Default Value: 1
Description: inode-read fops happen only on one of the bricks in replicate. AFR will prefer the one computed using the method specified using this option.
0 = first readable child of AFR, starting from 1st child.
1 = hash by GFID of file (all clients use same subvolume).
2 = hash by GFID of file and client PID.
3 = brick having the least outstanding read requests.

Option: cluster.background-self-heal-count
Default Value: 8
Description: This specifies the number of per client self-heal jobs that can perform parallel heals in the background.

Option: cluster.metadata-self-heal
Default Value: on
Description: Using this option we can enable/disable metadata i.e. Permissions, ownerships, xattrs self-heal on the file/directory.

Option: cluster.data-self-heal
Default Value: on
Description: Using this option we can enable/disable data self-heal on the file. "open" means data self-heal action will only be triggered by file open operations.

Option: cluster.entry-self-heal
Default Value: on
Description: Using this option we can enable/disable entry self-heal on the directory.

Option: cluster.self-heal-daemon
Default Value: on
Description: This option applies to only self-heal-daemon. Index directory crawl and automatic healing of files will not be performed if this option is turned off.

Option: cluster.heal-timeout
Default Value: 600
Description: time interval for checking the need to self-heal in self-heal-daemon

Option: cluster.self-heal-window-size
Default Value: 1
Description: Maximum number blocks per file for which self-heal process would be applied simultaneously.

Option: cluster.data-change-log
Default Value: on
Description: This option exists only for backward compatibility and configuring it doesn't have any effect

Option: cluster.metadata-change-log
Default Value: on
Description: This option exists only for backward compatibility and configuring it doesn't have any effect

Option: cluster.data-self-heal-algorithm
Default Value: (null)
Description: Select between "full", "diff". The "full" algorithm copies the entire file from source to sink. The "diff" algorithm copies to sink only those blocks whose checksums don't match with those of source. If no option is configured the option is chosen dynamically as follows: If the file does not exist on one of the sinks or empty file exists or if the source file size is about the same as page size the entire file will be read and written i.e "full" algo, otherwise "diff" algo is chosen.

Option: cluster.eager-lock
Default Value: on
Description: Enable/Disable eager lock for replica volume. Lock phase of a transaction has two sub-phases. First is an attempt to acquire locks in parallel by broadcasting non-blocking lock requests. If lock acquisition fails on any server, then the held locks are unlocked and we revert to a blocking locks mode sequentially on one server after another.  If this option is enabled the initial broadcasting lock request attempts to acquire a full lock on the entire file. If this fails, we revert back to the sequential "regional" blocking locks as before. In the case where such an "eager" lock is granted in the non-blocking phase, it gives rise to an opportunity for optimization. i.e, if the next write transaction on the same FD arrives before the unlock phase of the first transaction, it "takes over" the full file lock. Similarly if yet another data transaction arrives before the unlock phase of the "optimized" transaction, that in turn "takes over" the lock as well. The actual unlock now happens at the end of the last "optimized" transaction.

Option: disperse.eager-lock
Default Value: on
Description: Enable/Disable eager lock for regular files on a disperse volume. If a fop takes a lock and completes its operation, it waits for next 1 second before releasing the lock, to see if the lock can be reused for next fop from the same client. If ec finds any lock contention within 1 second it releases the lock immediately before time expires. This improves the performance of file operations. However, as it takes lock on first brick, for few operations like read, discovery of lock contention might take long time and can actually degrade the performance. If eager lock is disabled, lock will be released as soon as fop completes.

Option: disperse.other-eager-lock
Default Value: on
Description: It's equivalent to the eager-lock option but for non regular files.

Option: disperse.eager-lock-timeout
Default Value: 1
Description: Maximum time (in seconds) that a lock on an inode is kept held if no new operations on the inode are received.

Option: disperse.other-eager-lock-timeout
Default Value: 1
Description: It's equivalent to eager-lock-timeout option but for non regular files.

Option: cluster.quorum-type
Default Value: none
Description: If value is "fixed" only allow writes if quorum-count bricks are present.  If value is "auto" only allow writes if more than half of bricks, or exactly half including the first, are present.

Option: cluster.quorum-count
Default Value: (null)
Description: If quorum-type is "fixed" only allow writes if this many bricks are present.  Other quorum types will OVERWRITE this value.

Option: cluster.choose-local
Default Value: true
Description: Choose a local subvolume (i.e. Brick) to read from if read-subvolume is not explicitly set.

Option: cluster.self-heal-readdir-size
Default Value: 1KB
Description: readdirp size for performing entry self-heal

Option: cluster.ensure-durability
Default Value: on
Description: Afr performs fsyncs for transactions if this option is on to make sure the changelogs/data is written to the disk

Option: cluster.consistent-metadata
Default Value: no
Description: If this option is enabled, readdirp will force lookups on those entries read whose read child is not the same as that of the parent. This will guarantee that all read operations on a file serve attributes from the same subvol as long as it holds  a good copy of the file/dir.

Option: cluster.heal-wait-queue-length
Default Value: 128
Description: This specifies the number of heals that can be queued for the parallel background self heal jobs.

Option: cluster.favorite-child-policy
Default Value: none
Description: This option can be used to automatically resolve split-brains using various policies without user intervention. "size" picks the file with the biggest size as the source. "ctime" and "mtime" pick the file with the latest ctime and mtime respectively as the source. "majority" picks a file with identical mtime and size in more than half the number of bricks in the replica.

Option: cluster.stripe-block-size
Default Value: 128KB
Description: Size of the stripe unit that would be read from or written to the striped servers.

Option: cluster.stripe-coalesce
Default Value: true
Description: Enable/Disable coalesce mode to flatten striped files as stored on the server (i.e., eliminate holes caused by the traditional format).

Option: diagnostics.latency-measurement
Default Value: off
Description: If on stats related to the latency of each operation would be tracked inside GlusterFS data-structures. 

Option: diagnostics.dump-fd-stats
Default Value: off
Description: If on stats related to file-operations would be tracked inside GlusterFS data-structures.

Option: diagnostics.brick-log-level
Default Value: INFO
Description: Changes the log-level of the bricks

Option: diagnostics.client-log-level
Default Value: INFO
Description: Changes the log-level of the clients

Option: diagnostics.brick-sys-log-level
Default Value: CRITICAL
Description: Gluster's syslog log-level

Option: diagnostics.client-sys-log-level
Default Value: CRITICAL
Description: Gluster's syslog log-level

Option: diagnostics.brick-logger
Default Value: (null)
Description: (null)

Option: diagnostics.client-logger
Default Value: (null)
Description: (null)

Option: diagnostics.brick-log-format
Default Value: (null)
Description: (null)

Option: diagnostics.client-log-format
Default Value: (null)
Description: (null)

Option: diagnostics.brick-log-buf-size
Default Value: 5
Description: (null)

Option: diagnostics.client-log-buf-size
Default Value: 5
Description: (null)

Option: diagnostics.brick-log-flush-timeout
Default Value: 120
Description: (null)

Option: diagnostics.client-log-flush-timeout
Default Value: 120
Description: (null)

Option: diagnostics.stats-dump-interval
Default Value: 0
Description: Interval (in seconds) at which to auto-dump statistics. Zero disables automatic dumping.

Option: diagnostics.fop-sample-interval
Default Value: 0
Description: Interval in which we want to collect FOP latency samples.  2 means collect a sample every 2nd FOP.

Option: diagnostics.stats-dump-format
Default Value: json
Description:  The dump-format option specifies the format in which to dump the statistics. Select between "text", "json", "dict" and "samples". Default is "json".

Option: diagnostics.fop-sample-buf-size
Default Value: 65535
Description: The maximum size of our FOP sampling ring buffer.

Option: diagnostics.stats-dnscache-ttl-sec
Default Value: 86400
Description: The interval after which a cached DNS entry will be re-validated.  Default: 24 hrs

Option: performance.cache-max-file-size
Default Value: 0
Description: Maximum file size which would be cached by the io-cache translator.

Option: performance.cache-min-file-size
Default Value: 0
Description: Minimum file size which would be cached by the io-cache translator.

Option: performance.cache-refresh-timeout
Default Value: 1
Description: The cached data for a file will be retained for 'cache-refresh-timeout' seconds, after which data re-validation is performed.

Option: performance.cache-priority
Default Value: 
Description: Assigns priority to filenames with specific patterns so that when a page needs to be ejected out of the cache, the page of a file whose priority is the lowest will be ejected earlier

Option: performance.cache-size
Default Value: 32MB
Description: Size of the read cache.

Option: performance.io-thread-count
Default Value: 16
Description: Number of threads in IO threads translator which perform concurrent IO operations

Option: performance.high-prio-threads
Default Value: 16
Description: Max number of threads in IO threads translator which perform high priority IO operations at a given time

Option: performance.normal-prio-threads
Default Value: 16
Description: Max number of threads in IO threads translator which perform normal priority IO operations at a given time

Option: performance.low-prio-threads
Default Value: 16
Description: Max number of threads in IO threads translator which perform low priority IO operations at a given time

Option: performance.least-prio-threads
Default Value: 1
Description: Max number of threads in IO threads translator which perform least priority IO operations at a given time

Option: performance.enable-least-priority
Default Value: on
Description: Enable/Disable least priority

Option: performance.iot-watchdog-secs
Default Value: (null)
Description: Number of seconds a queue must be stalled before starting an 'emergency' thread.

Option: performance.iot-cleanup-disconnected-reqs
Default Value: off
Description: 'Poison' queued requests when a client disconnects

Option: performance.iot-pass-through
Default Value: false
Description: Enable/Disable io threads translator

Option: performance.io-cache-pass-through
Default Value: false
Description: Enable/Disable io cache translator

Option: performance.qr-cache-timeout
Default Value: 1
Description: (null)

Option: performance.cache-invalidation
Default Value: false
Description: When "on", invalidates/updates the metadata cache, on receiving the cache-invalidation notifications

Option: performance.ctime-invalidation
Default Value: false
Description: Quick-read by default uses mtime to identify changes to file data. However there are applications like rsync which explicitly set mtime making it unreliable for the purpose of identifying change in file content . Since ctime also changes when content of a file  changes and it cannot be set explicitly, it becomes  suitable for identifying staleness of cached data. This option makes quick-read to prefer ctime over mtime to validate its cache. However, using ctime can result in false positives as ctime changes with just attribute changes like permission without changes to file data. So, use this only when mtime is not reliable

Option: performance.flush-behind
Default Value: on
Description: If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous  writes were failed) to application even before flush FOP is sent to backend filesystem. 

Option: performance.nfs.flush-behind
Default Value: on
Description: If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous  writes were failed) to application even before flush FOP is sent to backend filesystem. 

Option: performance.write-behind-window-size
Default Value: 1MB
Description: Size of the write-behind buffer for a single file (inode).

Option: performance.resync-failed-syncs-after-fsync
Default Value: (null)
Description: If sync of "cached-writes issued before fsync" (to backend) fails, this option configures whether to retry syncing them after fsync or forget them. If set to on, cached-writes are retried till a "flush" fop (or a successful sync) on sync failures. fsync itself is failed irrespective of the value of this option. 

Option: performance.nfs.write-behind-window-size
Default Value: 1MB
Description: Size of the write-behind buffer for a single file (inode).

Option: performance.strict-o-direct
Default Value: off
Description: This option when set to off, ignores the O_DIRECT flag.

Option: performance.nfs.strict-o-direct
Default Value: off
Description: This option when set to off, ignores the O_DIRECT flag.

Option: performance.strict-write-ordering
Default Value: off
Description: Do not let later writes overtake earlier writes even if they do not overlap

Option: performance.nfs.strict-write-ordering
Default Value: off
Description: Do not let later writes overtake earlier writes even if they do not overlap

Option: performance.write-behind-trickling-writes
Default Value: on
Description: (null)

Option: performance.aggregate-size
Default Value: 128KB
Description: Will aggregate writes until data of specified size is fully filled for a single file provided there are no dependent fops on cached writes. This option just sets the aggregate size. Note that aggregation won't happen if performance.write-behind-trickling-writes is turned on. Hence turn off performance.write-behind.trickling-writes so that writes are aggregated till a max of "aggregate-size" bytes

Option: performance.nfs.write-behind-trickling-writes
Default Value: on
Description: (null)

Option: performance.lazy-open
Default Value: yes
Description: Perform open in the backend only when a necessary FOP arrives (e.g writev on the FD, unlink of the file). When option is disabled, perform backend open right after unwinding open().

Option: performance.read-after-open
Default Value: yes
Description: read is sent only after actual open happens and real fd is obtained, instead of doing on anonymous fd (similar to write)

Option: performance.open-behind-pass-through
Default Value: false
Description: Enable/Disable open behind translator

Option: performance.read-ahead-page-count
Default Value: 4
Description: Number of pages that will be pre-fetched

Option: performance.read-ahead-pass-through
Default Value: false
Description: Enable/Disable read ahead translator

Option: performance.readdir-ahead-pass-through
Default Value: false
Description: Enable/Disable readdir ahead translator

Option: performance.md-cache-pass-through
Default Value: false
Description: Enable/Disable md cache translator

Option: performance.md-cache-timeout
Default Value: 1
Description: Time period after which cache has to be refreshed

Option: performance.cache-swift-metadata
Default Value: (null)
Description: Cache swift metadata (user.swift.metadata xattr)

Option: performance.cache-samba-metadata
Default Value: (null)
Description: Cache samba metadata (user.DOSATTRIB, security.NTACL xattr)

Option: performance.cache-capability-xattrs
Default Value: (null)
Description: Cache xattrs required for capability based security

Option: performance.cache-ima-xattrs
Default Value: (null)
Description: Cache xattrs required for IMA (Integrity Measurement Architecture)

Option: performance.md-cache-statfs
Default Value: off
Description: Cache statfs information of filesystem on the client

Option: performance.xattr-cache-list
Default Value: (null)
Description: A comma separated list of xattrs that shall be cached by md-cache. The only wildcard allowed is '*'

Option: performance.nl-cache-pass-through
Default Value: false
Description: Enable/Disable nl cache translator

Option: features.encryption
Default Value: off
Description: enable/disable client-side encryption for the volume.

Option: encryption.master-key
Default Value: (null)
Description: Pathname of regular file which contains master volume key

Option: encryption.data-key-size
Default Value: 256
Description: Data key size (bits)

Option: encryption.block-size
Default Value: 4096
Description: Atom size (bits)

Option: network.frame-timeout
Default Value: 1800
Description: Time frame after which the (file) operation would be declared as dead, if the server does not respond for a particular (file) operation.

Option: network.ping-timeout
Default Value: 42
Description: Time duration for which the client waits to check if the server is responsive.

Option: network.tcp-window-size
Default Value: (null)
Description: Specifies the window size for tcp socket.

Option: client.ssl
Default Value: (null)
Description: enable/disable client.ssl flag in the volume.

Option: network.remote-dio
Default Value: disable
Description: If enabled, in open/creat/readv/writev fops, O_DIRECT flag will be filtered at the client protocol level so server will still continue to cache the file. This works similar to NFS's behavior of O_DIRECT. Anon-fds can choose to readv/writev using O_DIRECT

Option: client.event-threads
Default Value: 2
Description: Specifies the number of event threads to execute in parallel. Larger values would help process responses faster, depending on available processing power. Range 1-32 threads.

Option: network.inode-lru-limit
Default Value: 16384
Description: Specifies the limit on the number of inodes in the lru list of the inode cache.

Option: auth.allow
Default Value: *
Description: Allow a comma separated list of addresses and/or hostnames to connect to the server. Option auth.reject overrides this option. By default, all connections are allowed.

Option: auth.reject
Default Value: (null)
Description: Reject a comma separated list of addresses and/or hostnames to connect to the server. This option overrides the auth.allow option. By default, all connections are allowed.

Option: server.allow-insecure
Default Value: on
Description: (null)

Option: server.root-squash
Default Value: off
Description: Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.

Option: server.anonuid
Default Value: 65534
Description: value of the uid used for the anonymous user/nfsnobody when root-squash is enabled.

Option: server.anongid
Default Value: 65534
Description: value of the gid used for the anonymous user/nfsnobody when root-squash is enabled.

Option: server.statedump-path
Default Value: /var/run/gluster
Description: Specifies directory in which gluster should save its statedumps.

Option: server.outstanding-rpc-limit
Default Value: 64
Description: Parameter to throttle the number of incoming RPC requests from a client. 0 means no limit (can potentially run out of memory)

Option: server.ssl
Default Value: (null)
Description: enable/disable server.ssl flag in the volume.

Option: server.manage-gids
Default Value: off
Description: Resolve groups on the server-side.

Option: server.dynamic-auth
Default Value: on
Description: When 'on' perform dynamic authentication of volume options in order to allow/terminate client transport connection immediately in response to *.allow | *.reject volume set options.

Option: server.gid-timeout
Default Value: 300
Description: Timeout in seconds for the cached groups to expire.

Option: server.event-threads
Default Value: 1
Description: Specifies the number of event threads to execute in parallel. Larger values would help process responses faster, depending on available processing power.

Option: server.tcp-user-timeout
Default Value: 42
Description: (null)

Option: server.keepalive-time
Default Value: (null)
Description: (null)

Option: server.keepalive-interval
Default Value: (null)
Description: (null)

Option: server.keepalive-count
Default Value: (null)
Description: (null)

Option: transport.listen-backlog
Default Value: 1024
Description: This option uses the value of backlog argument that defines the maximum length to which the queue of pending connections for socket fd may grow.

Option: ssl.own-cert
Default Value: (null)
Description: SSL certificate. Ignored if SSL is not enabled.

Option: ssl.private-key
Default Value: (null)
Description: SSL private key. Ignored if SSL is not enabled.

Option: ssl.ca-list
Default Value: (null)
Description: SSL CA list. Ignored if SSL is not enabled.

Option: ssl.crl-path
Default Value: (null)
Description: Path to directory containing CRL. Ignored if SSL is not enabled.

Option: ssl.certificate-depth
Default Value: (null)
Description: Maximum certificate-chain depth.  If zero, the peer's certificate itself must be in the local certificate list.  Otherwise, there may be up to N signing certificates between the peer's and the local list.  Ignored if SSL is not enabled.

Option: ssl.cipher-list
Default Value: (null)
Description: Allowed SSL ciphers. Ignored if SSL is not enabled.

Option: ssl.dh-param
Default Value: (null)
Description: DH parameters file. Ignored if SSL is not enabled.

Option: ssl.ec-curve
Default Value: (null)
Description: ECDH curve name. Ignored if SSL is not enabled.

Option: performance.write-behind
Default Value: on
Description: enable/disable write-behind translator in the volume.

Option: performance.read-ahead
Default Value: on
Description: enable/disable read-ahead translator in the volume.

Option: performance.readdir-ahead
Default Value: on
Description: enable/disable readdir-ahead translator in the volume.

Option: performance.io-cache
Default Value: on
Description: enable/disable io-cache translator in the volume.

Option: performance.quick-read
Default Value: on
Description: enable/disable quick-read translator in the volume.

Option: performance.open-behind
Default Value: on
Description: enable/disable open-behind translator in the volume.

Option: performance.nl-cache
Default Value: off
Description: enable/disable negative entry caching translator in the volume. Enabling this option improves performance of 'create file/directory' workload

Option: performance.stat-prefetch
Default Value: on
Description: enable/disable meta-data caching translator in the volume.

Option: performance.client-io-threads
Default Value: on
Description: enable/disable io-threads translator in the client graph of volume.

Option: performance.nfs.write-behind
Default Value: on
Description: enable/disable write-behind translator in the volume

Option: performance.force-readdirp
Default Value: true
Description: Convert all readdir requests to readdirplus to collect stat info on each entry.

Option: performance.cache-invalidation
Default Value: false
Description: When "on", invalidates/updates the metadata cache, on receiving the cache-invalidation notifications

Option: features.uss
Default Value: off
Description: enable/disable User Serviceable Snapshots on the volume.

Option: features.snapshot-directory
Default Value: .snaps
Description: Entry point directory for entering the snapshot world. The value may contain only [0-9a-z-_], must start with a dot (.), and cannot exceed 255 characters.

Option: features.show-snapshot-directory
Default Value: off
Description: show entry point in readdir output of snapdir-entry-path which is set by samba

Option: features.tag-namespaces
Default Value: off
Description: This option enables this translator's functionality that tags every fop with a namespace hash for later throttling, stats collection, logging, etc.

Option: network.compression
Default Value: off
Description: enable/disable network compression translator

Option: network.compression.window-size
Default Value: -15
Description: Size of the zlib history buffer.

Option: network.compression.mem-level
Default Value: 8
Description: Memory allocated for internal compression state. memLevel=1 uses minimum memory but is slow and reduces the compression ratio; memLevel=9 uses maximum memory for optimal speed. The default value is 8.

Option: network.compression.min-size
Default Value: 0
Description: Data is compressed only when its size exceeds this.

Option: network.compression.compression-level
Default Value: -1
Description: Compression levels 
0 : no compression, 1 : best speed, 
9 : best compression, -1 : default compression 

Option: features.quota-deem-statfs
Default Value: on
Description: If set to on, it takes quota limits into consideration while estimating fs size. (df command) (Default is on).

Option: nfs.transport-type
Default Value: (null)
Description: Specifies the nfs transport type. Valid transport types are 'tcp' and 'rdma'.

Option: nfs.rdirplus
Default Value: (null)
Description: When this option is set to off NFS falls back to standard readdir instead of readdirp

Option: features.read-only
Default Value: off
Description: When "on", makes a volume read-only. It is turned "off" by default.

Option: features.worm
Default Value: off
Description: When "on", gives a volume the write-once-read-many (WORM) feature. It is turned "off" by default.

Option: features.worm-file-level
Default Value: off
Description: When "on", activates the file level worm. It is turned "off" by default.

Option: features.worm-files-deletable
Default Value: on
Description: When "off", does not allow the WORM files to be deleted. It is turned "on" by default.

Option: features.default-retention-period
Default Value: 120
Description: The default retention period for the files.

Option: features.retention-mode
Default Value: relax
Description: The mode of retention (relax/enterprise). It is relax by default.

Option: features.auto-commit-period
Default Value: 180
Description: Auto commit period for the files.

Option: storage.linux-aio
Default Value: off
Description: Support for native Linux AIO

Option: storage.batch-fsync-mode
Default Value: reverse-fsync
Description: Possible values:
	- syncfs: Perform one syncfs() on behalf of a batch of fsyncs.
	- syncfs-single-fsync: Perform one syncfs() on behalf of a batch of fsyncs and one fsync() per batch.
	- syncfs-reverse-fsync: Perform one syncfs() on behalf of a batch of fsyncs and fsync() each file in the batch in reverse order.
	- reverse-fsync: Perform fsync() of each file in the batch in reverse order.

Option: storage.batch-fsync-delay-usec
Default Value: 0
Description: Num of usecs to wait for aggregating fsync requests

Option: storage.owner-uid
Default Value: -1
Description: Support for setting uid of brick's owner

Option: storage.owner-gid
Default Value: -1
Description: Support for setting gid of brick's owner

Option: storage.node-uuid-pathinfo
Default Value: off
Description: return glusterd's node-uuid in pathinfo xattr string instead of hostname

Option: storage.health-check-interval
Default Value: 30
Description: Interval in seconds for a filesystem health check, set to 0 to disable

Option: storage.build-pgfid
Default Value: off
Description: Enable placeholders for gfid to path conversion

Option: storage.gfid2path-separator
Default Value: :
Description: Path separator for glusterfs.gfidtopath virt xattr

Option: storage.reserve
Default Value: 1
Description: Percentage of disk space to be reserved. Set to 0 to disable

Option: storage.force-create-mode
Default Value: 0000
Description: Mode bit permission that will always be set on a file.

Option: storage.force-directory-mode
Default Value: 0000
Description: Mode bit permission that will be always set on directory

Option: storage.create-mask
Default Value: 0777
Description: Any bit not set here will be removed from the modes set on a file when it is created

Option: storage.create-directory-mask
Default Value: 0777
Description: Any bit not set here will be removed from the modes set on a directory when it is created

Option: storage.max-hardlinks
Default Value: 100
Description: max number of hardlinks allowed on any one inode.
0 is unlimited, 1 prevents any hardlinking at all.

Option: storage.ctime
Default Value: off
Description: When this option is enabled, time attributes (ctime, mtime, atime) are stored in xattrs to keep them consistent across the replica and distribute sets. The time attributes stored at the backend are not considered 

Option: storage.bd-aio
Default Value: off
Description: Support for native Linux AIO

Option: config.gfproxyd
Default Value: off
Description: If this option is enabled, the proxy client daemon called gfproxyd will be started on all the trusted storage pool nodes

Option: cluster.server-quorum-type
Default Value: (null)
Description: This feature is on the server-side i.e. in glusterd. Whenever the glusterd on a machine observes that the quorum is not met, it brings down the bricks to prevent data split-brains. When the network connections are brought back up and the quorum is restored the bricks in the volume are brought back up.

Option: cluster.server-quorum-ratio
Default Value: (null)
Description: Sets the quorum percentage for the trusted storage pool.

Option: changelog.changelog-barrier-timeout
Default Value: 120
Description: After 'timeout' seconds since the time 'barrier' option was set to "on", unlink/rmdir/rename  operations are no longer blocked and previously blocked fops are allowed to go through

Option: features.barrier-timeout
Default Value: 120
Description: After 'timeout' seconds since the time 'barrier' option was set to "on", acknowledgements to file operations are no longer blocked and previously blocked acknowledgements are sent to the application

Option: features.trash
Default Value: off
Description: Enable/disable trash translator

Option: features.trash-dir
Default Value: .trashcan
Description: Directory for trash files

Option: features.trash-eliminate-path
Default Value: (null)
Description: Eliminate paths to be excluded from trashing

Option: features.trash-max-filesize
Default Value: 5MB
Description: Maximum size of file that can be moved to trash

Option: features.trash-internal-op
Default Value: off
Description: Enable/disable trash translator for internal operations

Option: cluster.enable-shared-storage
Default Value: disable
Description: Create and mount the shared storage volume(gluster_shared_storage) at /var/run/gluster/shared_storage on enabling this option. Unmount and delete the shared storage volume  on disabling this option.

Option: cluster.write-freq-threshold
Default Value: 0
Description: Defines the number of writes, in a promotion/demotion cycle, that would mark a file HOT for promotion. Any file that has write hits less than this value will be considered as COLD and will be demoted.

Option: cluster.read-freq-threshold
Default Value: 0
Description: Defines the number of reads, in a promotion/demotion cycle, that would mark a file HOT for promotion. Any file that has read hits less than this value will be considered as COLD and will be demoted.

Option: cluster.tier-pause
Default Value: off
Description: (null)

Option: cluster.tier-promote-frequency
Default Value: 120
Description: (null)

Option: cluster.tier-demote-frequency
Default Value: 3600
Description: (null)

Option: cluster.watermark-hi
Default Value: 90
Description: Upper % watermark for promotion. If hot tier fills above this percentage, no promotion will happen and demotion will happen with high probability.

Option: cluster.watermark-low
Default Value: 75
Description: Lower % watermark. If hot tier is less full than this, promotion will happen and demotion will not happen. If greater than this, promotion/demotion will happen at a probability relative to how full the hot tier is.

Option: cluster.tier-mode
Default Value: cache
Description: Either 'test' or 'cache'. Test mode periodically demotes or promotes files automatically based on access. Cache mode does so based on whether the cache is full or not, as specified with watermarks.

Option: cluster.tier-max-promote-file-size
Default Value: 0
Description: The maximum file size in bytes that is promoted. If 0, there is no maximum size (default).

Option: cluster.tier-max-mb
Default Value: 4000
Description: The maximum number of MB that may be migrated in any direction in a given cycle by a single node.

Option: cluster.tier-max-files
Default Value: 10000
Description: The maximum number of files that may be migrated in any direction in a given cycle by a single node.

Option: cluster.tier-compact
Default Value: on
Description: Activate or deactivate the compaction of the DB for the volume's metadata.

Option: cluster.tier-hot-compact-frequency
Default Value: 604800
Description: Frequency to compact DBs on hot tier in system

Option: cluster.tier-cold-compact-frequency
Default Value: 604800
Description: Frequency to compact DBs on cold tier in system

Option: features.ctr-enabled
Default Value: off
Description: Enable CTR xlator

Option: features.record-counters
Default Value: off
Description: A Change Time Recorder xlator option to enable recording write and read heat counters. The default is disabled. If enabled, "cluster.write-freq-threshold" and "cluster.read-freq-threshold" define the number of writes (or reads) to a given file needed before triggering migration.

Option: features.ctr-sql-db-cachesize
Default Value: 12500
Description: Defines the cache size of the sqlite database of the changetimerecorder xlator. The input to this option is in pages. Each page is 4096 bytes. The default value is 12500 pages. The max value is 262144 pages, i.e. 1 GB, and the min value is 1000 pages, i.e. ~4 MB.

Option: features.ctr-sql-db-wal-autocheckpoint
Default Value: 25000
Description: Defines the autocheckpoint of the sqlite database of the changetimerecorder xlator. The input to this option is in pages. Each page is 4096 bytes. The default value is 25000 pages. The max value is 262144 pages, i.e. 1 GB, and the min value is 1000 pages, i.e. ~4 MB.

Option: locks.trace
Default Value: off
Description: Trace the different lock requests to logs.

Option: locks.mandatory-locking
Default Value: off
Description: Specifies the mandatory-locking mode. Valid options are 'file' to use linux style mandatory locks, 'forced' to use volume strictly under mandatory lock semantics only and 'optimal' to treat advisory and mandatory locks separately on their own.

Option: cluster.quorum-reads
Default Value: no
Description: This option has been removed. Reads are not allowed if quorum is not met.

Option: features.timeout
Default Value: (null)
Description: Specifies the number of seconds the quiesce translator will wait for a CHILD_UP event before force-unwinding the frames it has currently stored for retry.

Option: features.failover-hosts
Default Value: (null)
Description: A comma-separated list of hostnames/IP addresses. It specifies the list of hosts where the gfproxy daemons are running, to which thin clients can fail over.

Option: features.shard
Default Value: off
Description: enable/disable sharding translator on the volume.

Option: features.shard-block-size
Default Value: 64MB
Description: The size unit used to break a file into multiple chunks

Option: features.shard-deletion-rate
Default Value: 100
Description: The number of shards to send deletes on at a time
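For the downgrade question in the thread, the shard block size is the value worth checking before touching any packages. A minimal sketch of how to compare the in-effect value against the compiled-in default, assuming a volume named `myvol` (the volume name is illustrative):

```shell
# Value currently in effect for the volume (falls back to the
# compiled-in default if the option was never set explicitly).
gluster volume get myvol features.shard-block-size

# Options that were explicitly set on the volume (a shard-block-size
# line here means the value survives package up/downgrades).
gluster volume info myvol

# Compiled-in default for this gluster build (64MB in this listing).
gluster volume set help | grep -A 2 'features.shard-block-size'
```

If the explicitly set value and the new build's default differ, that mismatch is the first thing to rule out before downgrading.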

Option: features.cache-invalidation
Default Value: off
Description: When "on", sends cache-invalidation notifications.

Option: features.cache-invalidation-timeout
Default Value: 60
Description: After 'timeout' seconds since the time client accessed any file, cache-invalidation notifications are no longer sent to that client.

Option: features.leases
Default Value: off
Description: When "on", enables leases support

Option: features.lease-lock-recall-timeout
Default Value: 60
Description: After 'timeout' seconds since the recall_lease request has been sent to the client, the lease lock will be forcefully purged by the server.

Option: disperse.background-heals
Default Value: 8
Description: This option can be used to control number of parallel heals

Option: disperse.heal-wait-qlength
Default Value: 128
Description: This option can be used to control number of heals that can wait

Option: dht.force-readdirp
Default Value: on
Description: If set to ON, this option forces the use of readdirp and hence also displays the stats of the files.

Option: disperse.read-policy
Default Value: gfid-hash
Description: inode-read fops happen only on 'k' number of bricks in n=k+m disperse subvolume. 'round-robin' selects the read subvolume using round-robin algo. 'gfid-hash' selects read subvolume based on hash of the gfid of that file/directory.

Option: cluster.shd-max-threads
Default Value: 1
Description: Maximum number of parallel heals SHD can do per local brick. This can substantially lower heal times, but can also crush your bricks if you don't have the storage hardware to support this.

Option: cluster.shd-wait-qlength
Default Value: 1024
Description: This option can be used to control number of heals that can wait in SHD per subvolume

Option: cluster.locking-scheme
Default Value: full
Description: If this option is set to granular, self-heal will stop being compatible with afr-v1, which helps afr be more granular while self-healing

Option: cluster.granular-entry-heal
Default Value: no
Description: If this option is enabled, self-heal will resort to granular way of recording changelogs and doing entry self-heal.

Option: features.locks-revocation-secs
Default Value: 0
Description: Maximum time a lock can be taken out before being revoked.

Option: features.locks-revocation-clear-all
Default Value: false
Description: If set to true, will revoke BOTH granted and blocked (pending) lock requests if a revocation threshold is hit.

Option: features.locks-revocation-max-blocked
Default Value: 0
Description: A number of blocked lock requests after which a lock will be revoked to allow the others to proceed.  Can be used in conjunction w/ revocation-clear-all.

Option: features.locks-notify-contention
Default Value: no
Description: When this option is enabled and a lock request conflicts with a currently granted lock, an upcall notification will be sent to the current owner of the lock to request it to be released as soon as possible.

Option: features.locks-notify-contention-delay
Default Value: 5
Description: This value determines the minimum amount of time (in seconds) between upcall contention notifications on the same inode. If multiple lock requests are received during this period, only one upcall will be sent.

Option: disperse.shd-max-threads
Default Value: 1
Description: Maximum number of parallel heals SHD can do per local brick.  This can substantially lower heal times, but can also crush your bricks if you don't have the storage hardware to support this.

Option: disperse.shd-wait-qlength
Default Value: 1024
Description: This option can be used to control number of heals that can wait in SHD per subvolume

Option: disperse.cpu-extensions
Default Value: auto
Description: Force the CPU extensions to be used to accelerate the Galois field computations.

Option: disperse.self-heal-window-size
Default Value: 1
Description: Maximum number of blocks (128KB each) per file to which the self-heal process is applied simultaneously.

Option: cluster.use-compound-fops
Default Value: no
Description: This option exists only for backward compatibility and configuring it doesn't have any effect

Option: performance.parallel-readdir
Default Value: off
Description: If this option is enabled, the readdir operation is performed in parallel on all the bricks, thus improving the performance of readdir. Note that the performance improvement is higher in large clusters

Option: performance.rda-request-size
Default Value: 131072
Description: size of buffer in readdirp calls initiated by readdir-ahead 

Option: performance.rda-cache-limit
Default Value: 10MB
Description: maximum size of cache consumed by readdir-ahead xlator. This value is global and total memory consumption by readdir-ahead is capped by this value, irrespective of the number/size of directories cached

Option: performance.nl-cache-positive-entry
Default Value: (null)
Description: enable/disable storing of entries that were looked up and found present in the volume, so that lookups on non-existent files are served from the cache

Option: performance.nl-cache-limit
Default Value: 131072
Description: the value over which caching will be disabled for a while and the cache cleared based on LRU

Option: performance.nl-cache-timeout
Default Value: 60
Description: Time period after which cache has to be refreshed

Option: cluster.brick-multiplex
Default Value: off
Description: This global option can be used to enable/disable brick multiplexing. Brick multiplexing ensures that compatible brick instances can share one single brick process.

Option: cluster.max-bricks-per-process
Default Value: 0
Description: This option can be used to limit the number of brick instances per brick process when brick-multiplexing is enabled. If not explicitly set, this tunable is set to 0 which denotes that brick-multiplexing can happen without any limit on the number of bricks per process. Also this option can't be set when the brick-multiplexing feature is disabled.

Option: cluster.halo-enabled
Default Value: False
Description: Enable Halo (geo) replication mode.

Option: cluster.halo-shd-max-latency
Default Value: 99999
Description: Maximum latency for shd halo replication in msec.

Option: cluster.halo-nfsd-max-latency
Default Value: 5
Description: Maximum latency for nfsd halo replication in msec.

Option: cluster.halo-max-latency
Default Value: 5
Description: Maximum latency for halo replication in msec.

Option: cluster.halo-max-replicas
Default Value: 99999
Description: The maximum number of halo replicas; replicas beyond this value will be written asynchronously via the SHD.

Option: cluster.halo-min-replicas
Default Value: 2
Description: The minimum number of halo replicas before adding out-of-region replicas.

Option: features.utime
Default Value: off
Description: enable/disable utime translator on the volume.

Option: ctime.noatime
Default Value: on
Description: enable/disable noatime option with ctime enabled.

Option: feature.cloudsync-storetype
Default Value: (null)
Description: Defines which remote store is enabled
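The error from the thread, 'Failed to get trusted.glusterfs.shard.file-size', refers to an extended attribute the shard translator keeps on a file's base inode on the bricks. A sketch of inspecting it directly, assuming a brick path of `/bricks/brick1` (path is illustrative; run this on a brick host, not on the FUSE mount):

```shell
# Read the shard file-size xattr of the base file as stored on the brick.
# A missing attribute here, on a file the client fails to create/read,
# points at the shard xlator rather than the network or mount.
getfattr -n trusted.glusterfs.shard.file-size -e hex \
    /bricks/brick1/path/to/file

# List all trusted.* xattrs on the same file for comparison.
getfattr -d -m 'trusted.*' -e hex /bricks/brick1/path/to/file
```

This only diagnoses; whether the 8.x client mishandles the xattr written by the 5.x bricks is the compatibility question the downgrade would sidestep.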


________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
