Dear Amar,
Thank you for the advice.
(Actually, I had specified the sizes in bytes, e.g. 16384 for 16KB.)
The thing is, a 128KB page-size, while giving full throughput, seems to
kill Rewrite completely, as you can see from the table quoted below.
There I used a page-size of 128KB and a page-count of 2.
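For reference, the read-ahead section of my client spec file was along
these lines (a sketch only; the volume and subvolume names here are just
placeholders):

  volume readahead
    type performance/read-ahead
    option page-size 128KB    # size of each read-ahead page
    option page-count 2       # number of read-ahead pages
    subvolumes client
  end-volume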
--
Best regards,
Grigory Shamov
Kazan Science Centre of RAS,
Kazan, Russian Federation
Amar wrote:
Hi,
With read-ahead, 'option page-size 16KB' may not achieve maximum
throughput, so it is better to try 'option page-size 128KB'. (Note that
'16K' is treated as just 16 bytes, since the parser expects 'KB', 'MB',
or 'GB' as the unit, not K, M, or G.)
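For example, these two lines in the spec file end up very different:

  option page-size 16K      # parsed as 16 (bytes), 'K' is not a recognized unit
  option page-size 16KB     # parsed as 16 kilobytes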
Regards,
Amar
On Dec 19, 2007 10:56 PM, Grigory Shamov <gas@xxxxxx> wrote:
Dear GlusterFS developers,
Some of the Bonnie++ results are
like this:
=========================================================
                    Sequential Output, K/sec
FileSystem            Per-char      Block    Rewrite
=========================================================
NFS                      14442      30419       7710
Lustre                   16012      35228      19018
GlusterFS                16582      15833       8358
GlusterFS, wb            17988      43774       8409
GlusterFS, ra            18414      15863       1804
GlusterFS, ra, wb        22403      41821        355
=========================================================

=========================================================
                    Sequential Input, K/sec    Random seeks
FileSystem            Per-char      Block         #/s
=========================================================
NFS                      20229      49510        178.8
Lustre                   17284      47753         53.0
GlusterFS                16791      16815        161.4
GlusterFS, wb            15304      17438        174.1
GlusterFS, ra            19420      54803        143.3
GlusterFS, ra, wb        19900      54427        144.4
=========================================================