Hi Ben,

Thanks for the info.

Cheers,

Ron

On 29/04/15 21:03, Ben Turner wrote:
> ----- Original Message -----
>> From: "Ron Trompert" <ron.trompert@xxxxxxxxxxx>
>> To: gluster-users@xxxxxxxxxxx
>> Sent: Wednesday, April 29, 2015 1:25:59 PM
>> Subject: Poor performance with small files
>>
>> Hi,
>>
>> We run gluster as the storage solution for our Owncloud-based sync and
>> share service. At the moment we have about 30 million files in the
>> system, which add up to a little more than 30TB. Most of these files
>> are, as you may expect, very small, i.e. in the 100KB ballpark. For
>> about a year everything ran perfectly fine. We run 3.6.2, by the way.
>
> Upgrade to 3.6.3 and set client.event-threads and server.event-threads to at least 4:
>
> "Previously, the epoll thread did socket event-handling and the same thread was used for serving the client or processing the response received from the server. Due to this, other requests were queued until the current epoll thread completed its operation. With multi-threaded epoll, events are distributed, which improves performance due to the parallel processing of requests/responses received."
>
> Here are the guidelines for tuning them:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Small_File_Performance_Enhancements.html
>
> In my testing with epoll threads at 4, I saw between a 15% and 50% increase depending on the workload.
>
> There are several smallfile perf enhancements in the works:
>
> * http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
>
> * Lookup unhashed is the next feature and should be ready with 3.7 (correct me if I am wrong).
>
> * If you are using RAID 6 you may want to do some testing with RAID 10 or JBOD, but the benefits here only come into play with a lot of concurrent access (30+ processes/threads working with different files).
>
> * Tiering may help here if you want to add some SSDs; this is also a 3.7 feature.
>
> HTH!
>
> -b
>
>>
>> Now we are trying to commission new hardware. We have done this by
>> adding the new nodes to our cluster and using the add-brick and
>> remove-brick procedure to get the data onto the new nodes. In a week we
>> have migrated only 8.5TB this way. What are we doing wrong here? Is
>> there a way to improve gluster performance on small files?
>>
>> I have another question. If you want to set up a gluster cluster that
>> will contain lots of very small files, what would be good practice to
>> set things up in terms of configuration, brick sizes in relation to
>> memory and number of cores, number of bricks per node, etc.?
>>
>>
>> Best regards and thanks in advance,
>>
>> Ron
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://www.gluster.org/mailman/listinfo/gluster-users

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
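
For reference, the event-threads options Ben mentions are applied with the gluster CLI. A minimal sketch, assuming a volume named "owncloud-vol" (the volume name is hypothetical; substitute your own, and check the Red Hat guideline linked above before choosing a value higher than 4):

  # raise the number of client-side epoll threads to 4, as suggested above
  gluster volume set owncloud-vol client.event-threads 4

  # raise the number of server-side (brick) epoll threads to 4
  gluster volume set owncloud-vol server.event-threads 4

The current settings can be confirmed afterwards with "gluster volume info owncloud-vol", which lists any reconfigured options for the volume.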