Writing is slow when there are 10 million files.

Dear All, 

I have a problem with slow writes when there are 10 million files.
(There are 2,500 top-level directories.)

I have configured a GlusterFS distributed cluster (3 nodes).
Each node's specifications are below.

 CPU: Xeon E5-2620 (2.00 GHz, 6 cores)
 HDD: SATA 7200 rpm, 4 TB × 12 (RAID 6)
 NW: 10 GbE
 GlusterFS: glusterfs 3.4.2 built on Jan 3 2014 12:38:06
 
This cluster (volume) is mounted on CentOS via the FUSE client.
The volume serves as storage for our application, and I want to store 300 million to 5 billion files on it.

I performed a write test, writing 10 million 32 KB files to this volume, and encountered two problems.

(1) Writing is very slow, and it slows down further as the number of files increases.
  On a single node (no clustering), random write speed is 40 MB/sec,
  but write speed on the cluster is only 3.6 MB/sec.
(2) The ls command is very slow: about 20 seconds.
  Directory creation takes about 10 seconds at best.
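For reference, a small-scale sketch of the write test described above (assuming a POSIX shell and `dd`; `N` and the temporary directory are stand-ins for the real 10-million-file count and the FUSE mount point):

```shell
#!/bin/sh
# Sketch of the write test: create N files of 32 KB each
# and report elapsed time. On the real cluster, TARGET would
# be the GlusterFS FUSE mount point and N would be 10 million.
N=100
TARGET=$(mktemp -d)

start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    # Each file is a single 32 KB block of zeros.
    dd if=/dev/zero of="$TARGET/file_$i" bs=32k count=1 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s)

echo "wrote $N files in $((end - start)) s"
```

Dividing N × 32 KB by the elapsed time gives the effective small-file write throughput (3.6 MB/sec in the cluster case above).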

Questions:

 1) Is it possible to store 5 billion files in GlusterFS?
  Has anyone succeeded in storing a billion files in GlusterFS?

 2) Could you give me a link to a tuning guide or some tuning information?
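As an illustration of the kind of tuning I am asking about, here is a sketch of volume options commonly adjusted for small-file workloads (the volume name "myvol" is a placeholder; option names should be verified against `gluster volume set help` on the installed version before applying):

```shell
# Illustrative only -- values are examples, not recommendations.
# Enlarge the write-behind window so many small writes can be batched:
gluster volume set myvol performance.write-behind-window-size 4MB
# Increase the read cache on the bricks:
gluster volume set myvol performance.cache-size 1GB
# More I/O threads per brick for concurrent small-file operations:
gluster volume set myvol performance.io-thread-count 32

# On the client side, relaxing FUSE metadata timeouts can speed up
# ls/stat-heavy workloads (at the cost of staler attribute caching):
mount -t glusterfs -o attribute-timeout=600,entry-timeout=600 \
    node1:/myvol /mnt/gluster
```

I would appreciate pointers to documentation explaining which of these (or other) options matter most at this file count.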
 
Thanks.

-- Michitaka Terada
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
