Ok, hi.
I think I'm committing a major blunder here, which may be why I'm
not seeing better throughput.
These xlators should be stacked, is that right? I defined the
following:
volume brick1
  type storage/posix
  option directory /home/sdm1
end-volume

volume server
  type protocol/server
  subvolumes brick1
  option transport-type tcp/server   # For TCP/IP transport
  # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  option auth.ip.brick1.allow *
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 131072   # in bytes
  subvolumes brick1
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536   # in bytes
  option page-count 16     # memory cache size is page-count x page-size per file
  subvolumes brick1
end-volume
Should I have used the 'server' volume as the subvolume for read-ahead
and write-behind in the above? Or should read-ahead and write-behind
sit between the basic brick and the server volume? Is there a
difference in performance?
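For concreteness, here is the stacked layout I think the docs imply (just
my guess; the option values are copied from above, and the server now
exports the top performance xlator instead of brick1):

volume brick1
  type storage/posix
  option directory /home/sdm1
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 131072   # in bytes
  subvolumes brick1
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536   # in bytes
  option page-count 16     # cache is page-count x page-size per file
  subvolumes writebehind
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.readahead.allow *   # auth now names the exported volume
  subvolumes readahead
end-volume

Is that the intended shape?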
I grabbed five volumes from the SATA Beast. I think the best way to
test this is with the real files and jobs, so it's go-for-broke,
full-bore time.
If I have two front ends I'll need the posix-locks translator,
and the io-threads translator is a must, or why bother. If I unify, both
front ends need access to the same namespace brick, so it has to have
locks on it too, yes?
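In which case I imagine each exported brick, the namespace brick included,
would carry something like this (features/posix-locks sitting directly
above storage/posix; the locks1/brick-ns names and the /home/ns path are
just mine):

volume brick1
  type storage/posix
  option directory /home/sdm1
end-volume

volume locks1
  type features/posix-locks
  subvolumes brick1
end-volume

volume brick-ns
  type storage/posix
  option directory /home/ns   # hypothetical namespace directory
end-volume

volume locks-ns
  type features/posix-locks
  subvolumes brick-ns
end-volume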
Looking at the GlusterFS Translators v1.3 server examples: why
is the io-threads xlator so high up in the stack? Would it be better
farther down the stack, closer to the basic bricks? If not, why not?
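In other words, something like this, with io-threads just above the
locked brick rather than near the top, and write-behind/read-ahead/server
stacked above it (the thread-count value is a guess):

volume iothreads1
  type performance/io-threads
  option thread-count 8   # guessed; tune to the workload
  subvolumes locks1
end-volume

Or is there a reason the examples put it up top?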
-------------------------------------------------------------------------------
Chris Johnson |Internet: johnson@xxxxxxxxxxxxxxxxxxx
Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson
NMR Center |Voice: 617.726.0949
Mass. General Hospital |FAX: 617.726.7422
149 (2301) 13th Street |Knowing what thou knowest not
Charlestown, MA., 02129 USA |is in a sense omniscience. Piet Hein
-------------------------------------------------------------------------------