Performance issue

Hello,

I ran a test that creates 100,000 files of 4 KB each on GlusterFS, and it took 150m27s. The same test took only 1m14s on a local disk and 4m35s on a Samba share — roughly 90 ms per file on GlusterFS versus under 1 ms locally and about 3 ms over Samba. Can you tell me how to improve the performance? I want to use GlusterFS to handle a large number of files.
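For reference, the small-file test described above can be reproduced with a simple POSIX shell helper. The directory names below are placeholders — substitute your GlusterFS mount, local disk, or Samba mount to compare elapsed times:

```shell
#!/bin/sh
# create_files DIR COUNT -- create COUNT files of 4 KB each under DIR.
# The paths in the usage examples below are placeholders, not real mounts.
create_files() {
    dir=$1
    count=$2
    mkdir -p "$dir"
    i=0
    while [ "$i" -lt "$count" ]; do
        # One 4 KB file per iteration, written from /dev/zero.
        dd if=/dev/zero of="$dir/file$i" bs=4k count=1 2>/dev/null
        i=$((i + 1))
    done
}

# Usage (compare wall-clock times between backends):
#   time create_files /mnt/glusterfs/bench 100000
#   time create_files /tmp/local-bench 100000
```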

Thank you very much!

 

Ben

 

------------------
Server conf
------------------
volume storage-ds
  type storage/posix
  option directory /storage
end-volume

volume storage-ns
  type storage/posix
  option directory /storage-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes storage-ds storage-ns
  option auth.ip.storage-ds.allow 172.16.*
  option auth.ip.storage-ns.allow 172.16.*
end-volume


----------------
Client conf
----------------
volume 01
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.11
  option transport-timeout 15
  option remote-subvolume storage-ds
end-volume

volume 01-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.11
  option transport-timeout 15
  option remote-subvolume storage-ns
end-volume

volume 02
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.12
  option transport-timeout 15
  option remote-subvolume storage-ds
end-volume

volume 02-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.12
  option transport-timeout 15
  option remote-subvolume storage-ns
end-volume

volume 03
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.13
  option transport-timeout 15
  option remote-subvolume storage-ds
end-volume

volume 04
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.40.14
  option transport-timeout 15
  option remote-subvolume storage-ds
end-volume

volume afr-ns
  type cluster/afr
  subvolumes 01-ns 02-ns
end-volume

volume afr01
  type cluster/afr
  subvolumes 01 02
end-volume

volume afr02
  type cluster/afr
  subvolumes 03 04
end-volume

volume storage-unify
  type cluster/unify
  subvolumes afr01 afr02
  option namespace afr-ns
  option scheduler rr
  option rr.limits.min-free-disk 5%
end-volume


volume readahead
  type performance/read-ahead
  option page-size 128kb        # read-ahead page size
  option page-count 64          # memory cache per file is page-count x page-size
  subvolumes storage-unify
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 8
  option cache-size 128MB
  subvolumes readahead
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 512MB               # default is 32MB
  option page-size 256KB                # default is 128KB
  option force-revalidate-timeout 7200  # default is 1
  subvolumes iothreads
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 131072  # in bytes
  option flush-behind on
  subvolumes io-cache
end-volume

 

 


