advice on optimal configuration

Hello,

I have 128 physically identical blades, with a 1GbE uplink per blade
and 10GbE between chassis (32 blades per chassis). Each node has dual
quad-core Intel Xeons and 24GB RAM, and will carry an 80GB gluster partition.

The goal is to use gluster as a cache for files used by render
applications. All files in gluster could be re-generated or retrieved
from the upstream file server.

My first volume config attempt is 64 replicate pairs, with the two
bricks of each pair on different chassis.

Is replication a performance hit? Do reads get balanced between the replica nodes?
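From what I can tell, cluster/replicate also has a read-subvolume option
that could at least pin reads to one side of a pair; the snippet below is
just my reading of the docs (untested), with the chassis-local brick picked
by hand:

volume replicate001-17
    type cluster/replicate
    # assumption: prefer the chassis-local brick for reads
    option read-subvolume c001b17-1
    subvolumes c001b17-1 c002b17-1
end-volume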

Would NUFA make more sense for this set-up?
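For comparison, my understanding is that NUFA would replace the distribute
volume with something like the sketch below, generated per node so that
local-volume-name points at that node's own pair; the option name and the
per-node generation are assumptions on my part:

volume nufa
    type cluster/nufa
    # assumption: name the pair that holds this node's local brick
    option local-volume-name replicate001-17
    subvolumes replicate001-17 replicate001-18 ... replicate003-48
end-volume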

Here is my config, any advice appreciated.

Thank you,
-Barry


>>>>
volume c001b17-1
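    # protocol/client: one TCP connection per remote brick (128 in total)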
    type protocol/client
    option transport-type tcp
    option remote-host c001b17
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
    option ping-timeout 5
end-volume
.
<snip>
.
volume c004b48-1
    type protocol/client
    option transport-type tcp
    option remote-host c004b48
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
    option ping-timeout 5
end-volume

volume replicate001-17
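    # replicate pair: mirrors one brick on chassis c001 and one on chassis c002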
    type cluster/replicate
    subvolumes c001b17-1 c002b17-1
end-volume
.
<snip>
.
volume replicate001-48
    type cluster/replicate
    subvolumes c001b48-1 c002b48-1
end-volume

volume replicate003-17
    type cluster/replicate
    subvolumes c003b17-1 c004b17-1
end-volume
.
<snip>
.
volume replicate003-48
    type cluster/replicate
    subvolumes c003b48-1 c004b48-1
end-volume

volume distribute
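    # distribute: hashes files across the 64 replicate pairs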
    type cluster/distribute
    subvolumes replicate001-17 replicate001-18 replicate001-19
replicate001-20 replicate001-21 replicate001-22 replicate001-23
replicate001-24 replicate001-25 replicate001-26 replicate001-27
replicate001-28 replicate001-29 replicate001-30 replicate001-31
replicate001-32 replicate001-33 replicate001-34 replicate001-35
replicate001-36 replicate001-37 replicate001-38 replicate001-39
replicate001-40 replicate001-41 replicate001-42 replicate001-43
replicate001-44 replicate001-45 replicate001-46 replicate001-47
replicate001-48 replicate003-17 replicate003-18 replicate003-19
replicate003-20 replicate003-21 replicate003-22 replicate003-23
replicate003-24 replicate003-25 replicate003-26 replicate003-27
replicate003-28 replicate003-29 replicate003-30 replicate003-31
replicate003-32 replicate003-33 replicate003-34 replicate003-35
replicate003-36 replicate003-37 replicate003-38 replicate003-39
replicate003-40 replicate003-41 replicate003-42 replicate003-43
replicate003-44 replicate003-45 replicate003-46 replicate003-47
replicate003-48
end-volume

volume writebehind
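    # client-side performance translators, stacked bottom-up from here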
    type performance/write-behind
    option cache-size 64MB
    option flush-behind on
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size 128MB
    option cache-timeout 10
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
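
For completeness, each node's matching server-side volfile is roughly along
these lines; the export path, the io-threads layer, and the open auth rule
below are placeholders rather than a copy of what I actually run:

volume posix
    type storage/posix
    # assumed mount point of the 80GB gluster partition
    option directory /export/gluster
end-volume

volume locks
    type features/locks
    subvolumes posix
end-volume

volume brick1
    type performance/io-threads
    option thread-count 8
    subvolumes locks
end-volume

volume server
    type protocol/server
    option transport-type tcp
    option transport.socket.listen-port 6996
    option auth.addr.brick1.allow *
    subvolumes brick1
end-volume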

