Re: gluster and multipath

On 24/01/2017 10:23 PM, Alessandro Briosi wrote:
Ok, I am also going to use Proxmox. Any advice on how to configure the bricks? I plan to have a 2-node replica. Would appreciate you sharing your full setup :-)

Three-node replica - preferred over two, as quorum works best with an odd number of nodes. If storage on a third node is an issue, then use an arbiter node.

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
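
A minimal sketch of creating such a volume with the gluster CLI (hostnames and brick paths here are placeholders, not my actual setup):

gluster volume create datastore replica 3 arbiter 1 \
    host1:/tank/vmdata/datastore \
    host2:/tank/vmdata/datastore \
    host3:/tank/vmdata/datastore

The arbiter brick only stores file names and metadata, so it needs very little space compared to the data bricks.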


I use sharding with 64MB shards (see features.shard-block-size below), which makes for very fast, efficient heals. Just one brick per node, but each brick is 4 disks in ZFS RAID 10 (striped mirrors) with a fast SSD log device.
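
For reference, a pool laid out like that can be built along these lines (the device names are placeholders for your own /dev/disk/by-id paths):

zpool create tank \
    mirror ata-DISK1 ata-DISK2 \
    mirror ata-DISK3 ata-DISK4 \
    log ata-SSD-part1 \
    cache ata-SSD-part2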


zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 100K in 16h15m with 0 errors on Tue Jan 3 15:21:10 2017
config:

    NAME                                                 STATE     READ WRITE CKSUM
    tank                                                 ONLINE       0     0     0
      mirror-0                                           ONLINE       0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2874892         ONLINE       0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR8C2         ONLINE       0     0     0
      mirror-1                                           ONLINE       0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR3Y0         ONLINE       0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR84T         ONLINE       0     0     0
    logs
      ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part1    ONLINE       0     0     0
    cache
      ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part2    ONLINE       0     0     0


ZFS properties:

compression=lz4
atime=off
xattr=sa
sync=standard
acltype=posixacl


gluster v info

Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
performance.readdir-ahead: on
cluster.data-self-heal: on
features.shard: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off
performance.strict-write-ordering: off
performance.stat-prefetch: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
network.remote-dio: enable
features.shard-block-size: 64MB
cluster.granular-entry-heal: yes
cluster.locking-scheme: granular
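
Options like these are applied with gluster volume set, e.g.:

gluster volume set datastore4 features.shard on
gluster volume set datastore4 features.shard-block-size 64MB
gluster volume set datastore4 cluster.granular-entry-heal yes

Note that sharding only applies to files created after it is enabled, so turn it on before putting any VM images on the volume.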



--
Lindsay Mathieson

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


