Well, if dbench gives me numbers that are close to real, then these numbers look quite... well, disappointing.

A quick test on a single-brick volume, FUSE-mounted locally, shows a massive disproportion:

  raw fs / gluster fuse: 336.224 / 17.7982

I believe the volume is a standard one, meaning it wasn't tweaked.
Could this be worked around somehow, tuned? If this kind of performance is to be expected, then I'll have to abandon the idea of deploying Gluster.

Thanks.

On 25/04/12 17:44, Jeff Darcy wrote:
> On Wed, 25 Apr 2012 17:20:13 +0100
> lejeczek <peljasz at yahoo.co.uk> wrote:
>
>> would a tool such as dbench be a valid benchmark for gluster?
> Dbench is a pretty good tool. As long as you use the fileio back end,
> your loadfiles should work on GlusterFS just fine.
>
>> and, most importantly, is there any formula to estimate raw
>> fs to gluster performance ratio for different setups?
>> for instance:
>> having a replicated volume, two bricks, fuse mountpoint to
>> volume via non-congested 1Gbps
>> or even
>> a volume on a single brick with a fuse client mountpoint locally
>>
>> what percentage/fraction of raw filesystem performance
>> should we expect from gluster? roughly?
> As I'm sure you know, deriving system or application performance from
> component performance is an exercise in dealing with chaos. There's
> certainly no formula for it, and even sophisticated models usually
> can't overcome the fact that very small differences in how the parts
> interact - e.g. latency distributions or queuing behavior - can result
> in large changes to the result. *In general* your network is going to
> be the main factor affecting bandwidth, and for small numbers of disks
> they're going to govern latency. To know how that affects your
> application, you'd have to know whether it's bandwidth- or
> latency-bound, and how well it can take advantage of parallelism.
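
For anyone who wants to reproduce the raw-fs vs. FUSE comparison above, here is a minimal sketch of how the side-by-side run could be scripted. It assumes dbench 4.x (with its -B fileio, -D and -t options and the usual "Throughput NNN MB/sec ..." summary line); the mountpoints /mnt/raw and /mnt/glusterfuse are placeholders, not the paths from the original test.

    #!/usr/bin/env python
    # Hypothetical helper: run the same dbench load against two directories
    # (raw filesystem vs. GlusterFS FUSE mount) and report the ratio.
    # Assumes dbench 4.x and its "Throughput NNN MB/sec ..." summary line;
    # adjust paths, client count and runtime to match your own setup.
    import re
    import subprocess

    CLIENTS = 10          # number of dbench client processes
    RUNTIME = 60          # seconds per run
    TARGETS = {
        "raw fs":       "/mnt/raw",          # placeholder path
        "gluster fuse": "/mnt/glusterfuse",  # placeholder path
    }

    def run_dbench(directory):
        """Run dbench against one directory and return MB/sec throughput."""
        out = subprocess.check_output(
            ["dbench", "-B", "fileio", "-D", directory,
             "-t", str(RUNTIME), str(CLIENTS)],
            universal_newlines=True)
        match = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", out)
        if not match:
            raise RuntimeError("could not find throughput in dbench output")
        return float(match.group(1))

    if __name__ == "__main__":
        results = {name: run_dbench(path) for name, path in TARGETS.items()}
        for name, mbps in results.items():
            print("%-13s %8.3f MB/sec" % (name, mbps))
        print("fuse/raw ratio: %.1f%%" %
              (100.0 * results["gluster fuse"] / results["raw fs"]))

Running both loads from the same script with identical client counts and runtimes at least keeps the comparison consistent, even if (as noted above) the absolute numbers say little about how a real application would behave.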