Problem during reproducing smallfile experiment on Gluster 10


Dear Gluster developers,

This is Hyunseung Park at Gluesys, South Korea.

We are trying to replicate the test described in https://github.com/gluster/glusterfs/issues/2771, but so far without success.

In our experiments, Gluster version 10 unfortunately did not perform noticeably better than version 9.


v9 2x2        Run 1      Run 2      Run 3      Average
create        3399.99    3484.79    2702.57    3195.783333
ls -l         65605.2    64930.6    72018.7    67518.16667
chmod         4858.95    4965.29    5597.73    5140.656667
stat          7334.88    7755.89    8335.11    7808.626667
read          7015.64    8255.48    7007.01    7426.043333
append        2554.93    2777.65    2572.57    2635.05
mkdir         1800.29    1865.07    1805.48    1823.613333
rmdir         1854.09    1722.89    1876.81    1817.93
cleanup       2402.02    2447.36    2438.71    2429.363333

v10 2x2       Run 1      Run 2      Run 3      Average
create        3741.39    3174.82    3234.42    3383.543333
ls -l         71543.7    67275.9    72975.1    70598.23333
chmod         5441.11    5109.22    5004.08    5184.803333
stat          7746.37    7677.99    7885.72    7770.026667
read          7061.12    7165.21    7121.07    7115.8
append        3458.93    2641.84    2887.46    2996.076667
mkdir         2685.22    1879.35    1970.91    2178.493333
rmdir         2240.11    1648.37    1602.16    1830.213333
cleanup       3739.68    2407.57    2403.48    2850.243333

The results above are from a test that ran 32 threads on each of the 4 clients.
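For context, each operation was driven with smallfile roughly as follows; the file count, file size, mount path, and hostnames here are placeholders, not our exact parameters:

    # 32 threads per client across 4 clients
    # (file count/size, path, and hostnames are illustrative placeholders)
    python smallfile_cli.py \
        --operation create \
        --threads 32 \
        --files 10000 \
        --file-size 64 \
        --top /mnt/gv0/smf \
        --host-set client1,client2,client3,client4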

A few operations improved slightly, but nowhere near the gains reported in the linked issue.

We are wondering what we can do to realize the full potential of the new version.

We have been running tests with varying file sizes, thread counts, and volume topologies, but the results were not conclusive. (A sketch of the baseline 2x2 layout follows.)
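For reference, "2x2" above means a 2 x 2 distributed-replicate volume; a minimal sketch of how such a volume is created (hostnames and brick paths are illustrative):

    # 2x2 = 2-way distribute over 2-way replicate
    # (hostnames and brick paths are illustrative)
    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start gv0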

Other benchmark tools, such as bonnie++ and fio, likewise showed no meaningful difference between the two versions; an illustrative fio job is shown below.
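As an illustration, the fio jobs were along these lines (block size, file counts, and paths here are placeholders, not our exact parameters):

    # ~1000 files of ~64 KiB per job, 32 jobs; all values illustrative
    fio --name=smallfile-like --directory=/mnt/gv0 \
        --rw=write --bs=64k --size=64m --nrfiles=1000 \
        --numjobs=32 --group_reporting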

To find the cause, we inspected the running processes by calling malloc_stats() and profiling with perf, but neither turned up anything noteworthy.
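A sketch of this kind of data collection, assuming a single glusterfsd brick process per server; PIDs and durations are illustrative:

    # Sample on-CPU call stacks of a brick process for 60 s
    perf record -F 99 -g -p "$(pidof glusterfsd)" -- sleep 60
    perf report --stdio

    # Print allocator statistics by calling malloc_stats()
    # inside the running process (output goes to its stderr)
    gdb --batch -p "$(pidof glusterfsd)" -ex 'call (void)malloc_stats()'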

Here is the data recorded during one full smallfile run (create through cleanup): https://drive.google.com/drive/folders/1NMXNjgOZ7svDd4-YvKCU4UAp43tm15dC?usp=sharing

Below is our test environment:

Basic HW info: vSphere VMs, each with 2 CPU cores and 4 GB RAM; 4 servers and 4 clients.

OS: CentOS 7

kernel version: 3.10.0-1160

Gluster versions: 9.4 and 10.0, with RPMs built from source (a build sketch follows).
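The build went roughly as follows, assuming the in-tree RPM target; the tag name is illustrative:

    # Build glusterfs RPMs from a source checkout (tag illustrative)
    git clone https://github.com/gluster/glusterfs.git
    cd glusterfs && git checkout v10.0
    ./autogen.sh && ./configure
    cd extras/LinuxRPM && make glusterrpms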

configure summary for the version 10 build:

GlusterFS configure summary
===========================
FUSE client          : yes
epoll IO multiplex   : yes
fusermount           : yes
readline             : no
georeplication       : yes
Linux-AIO            : yes
Linux io_uring       : no
Use liburing         : no
Enable Debug         : no
Run with Valgrind    : no
Sanitizer enabled    : none
XML output           : yes
Unit Tests           : no
Track priv ports     : yes
POSIX ACLs           : yes
SELinux features     : yes
firewalld-config     : yes
Events               : yes
EC dynamic support   : x64 sse avx
Use memory pools     : no
Nanosecond m/atimes  : yes
Server components    : yes
Legacy gNFS server   : no
IPV6 default         : no
Use TIRPC            : no
With Python          : 2.7
Cloudsync            : yes
Metadata dispersal   : no
Link with TCMALLOC   : yes
Enable Brick Mux     : no



[Index of Archives]     [Gluster Users]     [Ceph Users]     [Linux ARM Kernel]     [Linux ARM]     [Linux Omap]     [Fedora ARM]     [IETF Annouce]     [Security]     [Bugtraq]     [Linux]     [Linux OMAP]     [Linux MIPS]     [eCos]     [Asterisk Internet PBX]     [Linux API]

  Powered by Linux