Hi Amukher,
Even after upgrading to 3.7, the small-file transfer rate is still slow.
Below is the volume info:
Volume Name: integvol1
Type: Replicate
Volume ID: 31793ba4-eeca-462a-a0cd-9adfb281225b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: integ-gluster1:/srv/sdb2/brick4
Brick2: integ-gluster2:/srv/sdb2/brick4
Options Reconfigured:
server.event-threads: 30
client.event-threads: 30
----
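For reference, those options were set from one of the servers with commands along these lines (a sketch; "gluster volume set help" lists what a given version actually supports, and "volume get" may not exist on releases older than 3.7):

    # raise the epoll thread counts on the volume (values shown are ours)
    gluster volume set integvol1 client.event-threads 30
    gluster volume set integvol1 server.event-threads 30

    # confirm what is in effect
    gluster volume get integvol1 client.event-threads
    gluster volume get integvol1 server.event-threads
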
I understand that replication adds some overhead, but the slowdown here goes far beyond that.
Time taken for git clone in a non-gluster directory = 25 sec
Time taken for git clone in the gluster directory = 14 minutes
That is a huge difference. Please let me know whether any other tuning parameters need to be set.
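
For completeness, the numbers above came from runs along these lines (the repository path is a stand-in for our actual repo):

    # clone onto local disk
    cd /tmp && time git clone /path/to/repo.git

    # same clone onto the fuse-mounted gluster volume
    cd /mnt/gluster && time git clone /path/to/repo.git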
Regards,
Kamal
============ Forwarded Message ============
From: bturner@xxxxxxxxxx
To: gjprabu@xxxxxxxxxxxx
Date: Thu, 30 Apr 2015 17:14:00 +0530
Subject: Re: [Gluster-users] client is terrible with large amount of small files
===========================================

----- Original Message -----
> From: "Atin Mukherjee" <amukherj@xxxxxxxxxx>
> To: "gjprabu" <gjprabu@xxxxxxxxxxxx>
> Sent: Thursday, April 30, 2015 7:37:19 AM
> Subject: Re: client is terrible with large amount of small files
>
> On 04/30/2015 03:09 PM, gjprabu wrote:
> > Hi Amukher,
> >
> > How do we resolve this issue? Do we need to wait for the 3.7
> > release, or is there a workaround?
> You will have to wait, as this feature is coming in 3.7.

My apologies, I didn't realize that MT epoll didn't land in 3.6. If you want to test it out there is an alpha build available:

I wouldn't run this in production until 3.7 is released though. Again, sorry for the confusion.

-b

> > Regards
> > Prabu
> >
> > ---- On Thu, 30 Apr 2015 14:49:46 +0530 Atin Mukherjee
> > <amukherj@xxxxxxxxxx> wrote ----
> >
> > On 04/30/2015 02:32 PM, gjprabu wrote:
> > > Hi bturner,
> > >
> > > I am getting the below error while setting server.event-threads:
> > >
> > > gluster v set integvol server.event-threads 3
> > > volume set: failed: option : server.event-threads does not exist
> > > Did you mean server.gid-timeout or ...manage-gids?
> > This option is not available in 3.6; it is going to come in 3.7.
> > >
> > > Glusterfs has been upgraded to 3.6.3, and the OS has been upgraded
> > > to the 6.6 kernel. Two bricks are running in KVM and one on a
> > > physical machine, and we are not using thinp.
> > >
> > > Regards
> > > G.J
> > >
> > > ---- On Thu, 30 Apr 2015 00:37:44 +0530 Ben Turner
> > > <bturner@xxxxxxxxxx> wrote ----
> > >
> > > ----- Original Message -----
> > > > From: "gjprabu" <gjprabu@xxxxxxxxxxxx>
> > > > To: "A Ghoshal" <a.ghoshal@xxxxxxx>
> > > > Cc: gluster-users@xxxxxxxxxxx
> > > > Sent: Wednesday, April 29, 2015 9:07:07 AM
> > > > Subject: Re: client is terrible with large amount of small files
> > > >
> > > > Hi Ghoshal,
> > > >
> > > > Please find the details below.
> > > >
> > > > A) Glusterfs version
> > > > glusterfs 3.6.2
> > >
> > > Upgrade to 3.6.3 and set client.event-threads and
> > > server.event-threads to at least 4. Here is a guide on tuning MT
> > > epoll:
> > >
> > > > B) volume configuration (gluster v <volname> info)
> > > > gluster volume info
> > > >
> > > > Volume Name: integvol
> > > > Type: Replicate
> > > > Volume ID: b8f3a19e-59bc-41dc-a55a-6423ec834492
> > > > Status: Started
> > > > Number of Bricks: 1 x 3 = 3
> > > > Transport-type: tcp
> > > > Bricks:
> > > > Brick1: integ-gluster2:/srv/sdb1/brick
> > > > Brick2: integ-gluster1:/srv/sdb1/brick
> > > > Brick3: integ-gluster3:/srv/sdb1/brick
> > > >
> > > > C) host linux version
> > > > CentOS release 6.5 (Final)
> > >
> > > Are your bricks on LVM? Are you using thinp? If so, update to the
> > > latest kernel, as thinp perf was really bad in the 6.5 and early
> > > 6.6 kernels.
> > >
> > > > D) details about the kind of network you use to connect the
> > > > servers making up your storage pool
> > > > The servers are connected over the LAN; there is no special
> > > > network configuration.
> > > >
> > > > From the client we mount like below:
> > > > mount -t glusterfs gluster1:/integvol /mnt/gluster/
> > > >
> > > > Regards
> > > > Prabu
> > > >
> > > > ---- On Wed, 29 Apr 2015 17:58:16 +0530 A Ghoshal
> > > > <a.ghoshal@xxxxxxx> wrote ----
> > > >
> > > > Performance would largely depend upon the setup. While I cannot
> > > > think of any setup that would cause writes to be this slow, it
> > > > would help if you share the following details:
> > > >
> > > > A) Glusterfs version
> > > > B) volume configuration (gluster v <volname> info)
> > > > C) host linux version
> > > > D) details about the kind of network you use to connect the
> > > > servers making up your storage pool
> > > >
> > > > Thanks,
> > > > Anirban
> > > >
> > > > From: gjprabu <gjprabu@xxxxxxxxxxxx>
> > > > To: <gluster-users@xxxxxxxxxxx>
> > > > Date: 04/29/2015 05:52 PM
> > > > Subject: Re: client is terrible with large amount of small files
> > > > Sent by: gluster-users-bounces@xxxxxxxxxxx
> > > >
> > > > Hi Team,
> > > >
> > > > If anybody knows the solution, please share it with us.
> > > >
> > > > Regards
> > > > Prabu
> > > >
> > > > ---- On Tue, 28 Apr 2015 19:32:40 +0530 gjprabu
> > > > <gjprabu@xxxxxxxxxxxx> wrote ----
> > > >
> > > > Hi Team,
> > > >
> > > > We are new to glusterfs and are testing data transfer on a
> > > > client using the fuse.glusterfs file system. It is terrible with
> > > > a large number of small files: writing about 150MB of small
> > > > files takes around 18 minutes. Copying a few small files works,
> > > > and syncing between the server bricks works fine, but
> > > > performance collapses with a large number of small files.
> > > >
> > > > If anybody can share a solution for the above issue, please do.
> > > >
> > > > Regards
> > > > Prabu
>
> --
> ~Atin
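
A note on the "option does not exist" failure quoted above: the event-threads options only exist from glusterfs 3.7 onward, so on a mixed cluster a version guard along these lines avoids the error (our sketch, not from the thread; it assumes GNU sort for the version comparison):

    # try event-threads tuning only when the CLI reports glusterfs >= 3.7
    ver=$(gluster --version | awk 'NR==1 {print $2}')
    if printf '3.7\n%s\n' "$ver" | sort -V -C; then
        gluster volume set integvol server.event-threads 4
        gluster volume set integvol client.event-threads 4
    else
        echo "glusterfs $ver: event-threads needs 3.7+, skipping" >&2
    fi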
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users