Re: Very slow ls - WARNING


 



FYI, another data point that echoes Franco's experience.

 

I turned this option (cluster.readdir-optimize) on after reading this thread, and 'ls' performance did seem to improve quite a bit. But after a few days, this morning all 85 of our compute nodes reported no files on the mount point, which was disconcerting to a number of users.

 

The filesystem was still mounted and the data was intact, but 'ls' reported nothing, which makes it somewhat less than useful.

 

After turning off that option and remounting, all the clients see their files again, albeit more slowly again.
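For reference, the off-and-remount procedure was roughly the following. This is a sketch: the mount point (/mnt/gl) and the server used in the mount source (bs1) are illustrative examples, not necessarily your exact paths.

```shell
# On a gluster server node: turn the option back off for the 'gl' volume.
gluster volume set gl cluster.readdir-optimize off

# Confirm the change shows up under "Options Reconfigured".
gluster volume info gl

# On each client: remount so the client graph picks up the change.
# (Mount point and volfile server here are examples.)
umount /mnt/gl
mount -t glusterfs bs1:/gl /mnt/gl
```

Note that a volume set change takes effect on the servers immediately, but remounting the clients ensures every client is using the updated translator graph.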

 

The config is Gluster 3.4.2 on amd64/SL6.4; the volume info is now:

 

 

$ gluster volume info gl

Volume Name: gl
Type: Distribute
Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
Status: Started
Number of Bricks: 8
Transport-type: tcp,rdma
Bricks:
Brick1: bs2:/raid1
Brick2: bs2:/raid2
Brick3: bs3:/raid1
Brick4: bs3:/raid2
Brick5: bs4:/raid1
Brick6: bs4:/raid2
Brick7: bs1:/raid1
Brick8: bs1:/raid2
Options Reconfigured:
cluster.readdir-optimize: off
performance.write-behind-window-size: 1MB
performance.flush-behind: on
performance.cache-size: 268435456
nfs.disable: on
performance.io-cache: on
performance.quick-read: on
performance.io-thread-count: 64
auth.allow: 10.2.*.*,10.1.*.*

 

 

hjm

 

 

 

On Sunday, February 23, 2014 04:11:28 AM Franco Broi wrote:

> All the client filesystems core-dumped. Lost a lot of production time.

>

> I've disabled the cluster.readdir-optimize option and remounted all the

> filesystems.
> ________________________________________

> From: gluster-users-bounces@xxxxxxxxxxx [gluster-users-bounces@xxxxxxxxxxx]
> on behalf of Franco Broi [Franco.Broi@xxxxxxxxxx]
> Sent: Friday, February 21, 2014 10:57 PM

> To: Vijay Bellur

> Cc: gluster-users@xxxxxxxxxxx

> Subject: Re: Very slow ls

>

> Amazingly, setting cluster.readdir-optimize has fixed the problem; ls is
> still slow, but there's no long pause on the last readdir call.

>

> What does this option do and why isn't it enabled by default?

> _______________________________________

> From: gluster-users-bounces@xxxxxxxxxxx [gluster-users-bounces@xxxxxxxxxxx]
> on behalf of Franco Broi [Franco.Broi@xxxxxxxxxx]
> Sent: Friday, February 21, 2014 7:25 PM

> To: Vijay Bellur

> Cc: gluster-users@xxxxxxxxxxx

> Subject: Re: Very slow ls

>

> On 21 Feb 2014 22:03, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:

> > On 02/18/2014 12:42 AM, Franco Broi wrote:

> > > On 18 Feb 2014 00:13, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:

> > > > On 02/17/2014 07:00 AM, Franco Broi wrote:

> > > > > I mounted the filesystem with trace logging turned on and can see
> > > > > that after the last successful READDIRP there are a lot of other
> > > > > connections being made by the clients repeatedly, which takes
> > > > > minutes to complete.

> > > >

> > > > I did not observe anything specific which points to clients

> > > > repeatedly

> > > > reconnecting. Can you point to the appropriate line numbers for this?

> > > >

> > > > Can you also please describe the directory structure being referred

> > > > here?

> > >

> > > I was tailing the log file while the readdir script was running and
> > > could see the respective READDIRP calls for each readdir. After the last
> > > call, all the rest of the activity in the log file returned nothing but
> > > took minutes to complete. This particular example was a directory
> > > containing a number of directories, one for each of the READDIRP calls
> > > in the log file.

> >

> > One tuning that may help:

> >

> > volume set <volname> cluster.readdir-optimize on

> >

> > Let us know if there is any improvement after enabling this option.

>

> I'll give it a go but I think this is a bug and not a performance issue.

> I've filed a bug report on bugzilla.

> > Thanks,

> > Vijay

>

> ________________________________

>

>

> This email and any files transmitted with it are confidential and are

> intended solely for the use of the individual or entity to whom they are

> addressed. If you are not the original recipient or the person responsible

> for delivering the email to the intended recipient, be advised that you

> have received this email in error, and that any use, dissemination,

> forwarding, printing, or copying of this email is strictly prohibited. If

> you received this email in error, please immediately notify the sender and

> delete the original.

>


>

> _______________________________________________

> Gluster-users mailing list

> Gluster-users@xxxxxxxxxxx

> http://supercolony.gluster.org/mailman/listinfo/gluster-users

>


>


 

---

Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine

[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487

415 South Circle View Dr, Irvine, CA, 92697 [shipping]

MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)

---

 

