Re: Finding i/o bottleneck




> >> Not sure how gfs2 deals with client caching, but in other scenarios
> >> it's probably easier to just throw a lot of ram in the system and let
> >> the filesystem cache do its job. You still have to deal with
> >> applications that need to fsync(), though.
> >
> > Our nodes all have 12 gigs of ddr3 ram, that should be plenty. The node
> > where the application I'm dealing with is has about half used.
>
> Yes, but how does gfs2 deal with filesystem caching?  There must be
> some restriction and overhead to keep it consistent across nodes.

Yes indeed. As far as I know, when reading (which is mostly our case) a node 
reads the data from the shared disk and is allowed to cache it locally. When 
another node wants to write to that file, it must first tell the other nodes 
to flush their cached copy of it. But that is only my understanding of the 
mechanics of glocks, I might be wrong.
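
To make that concrete, here is a tiny Python sketch of the idea as I 
understand it. This is purely illustrative, not the actual GFS2/DLM code: 
reads are served from a node-local cache once the data has been fetched, and 
a write by any node invalidates the cached copies held by the other nodes 
before it proceeds (the part an exclusive glock would enforce in GFS2).

#!/usr/bin/env python3
# Toy model of the read-cache / write-invalidate behaviour described above.
# NOT GFS2 code -- just a sketch of the protocol as I understand it.

class Cluster:
    def __init__(self):
        self.storage = {}        # stands in for the shared block device
        self.nodes = []

    def add_node(self, name):
        node = Node(name, self)
        self.nodes.append(node)
        return node

class Node:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.cache = {}          # path -> cached file contents

    def read(self, path):
        # Reads can be served from the local cache once the data is on the node.
        if path not in self.cache:
            self.cache[path] = self.cluster.storage.get(path, "")
            print(f"{self.name}: read {path} from shared disk")
        else:
            print(f"{self.name}: read {path} from local cache")
        return self.cache[path]

    def write(self, path, data):
        # Before writing, the other nodes must drop their cached copy
        # (in GFS2 this is what taking the glock exclusively forces).
        for other in self.cluster.nodes:
            if other is not self:
                other.cache.pop(path, None)
        self.cluster.storage[path] = data
        self.cache[path] = data
        print(f"{self.name}: wrote {path}, other nodes' caches invalidated")

cluster = Cluster()
a, b = cluster.add_node("node-a"), cluster.add_node("node-b")
cluster.storage["/gfs2/data.txt"] = "v1"
a.read("/gfs2/data.txt")          # first read comes from the shared disk
a.read("/gfs2/data.txt")          # second read is served from node-a's cache
b.write("/gfs2/data.txt", "v2")   # forces node-a to drop its cached copy
a.read("/gfs2/data.txt")          # node-a must go back to disk and sees v2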

I have opened a ticket with RH to help track down the source of the i/o 
contention.
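
In the meantime, a quick way to see which device is actually busy is to 
sample /proc/diskstats over an interval, the way iostat does. Below is a 
small Python sketch based on the standard /proc/diskstats field layout 
(field 13 of each line is total time spent doing I/O, in ms); iostat or sar 
from the sysstat package will give you the same numbers with far more detail.

#!/usr/bin/env python3
# Rough iostat-style sampler: read /proc/diskstats twice and report how
# busy each block device was during the interval.

import time

INTERVAL = 5  # seconds between the two samples

def sample():
    """Return {device: total ms spent doing I/O} from /proc/diskstats."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            # index 12 = time spent doing I/Os (ms) for this device
            stats[dev] = int(fields[12])
    return stats

before = sample()
time.sleep(INTERVAL)
after = sample()

print(f"{'device':<12} {'%util over last ' + str(INTERVAL) + 's':>22}")
for dev, ms_after in sorted(after.items()):
    delta_ms = ms_after - before.get(dev, 0)
    util = 100.0 * delta_ms / (INTERVAL * 1000)
    if util > 0:
        print(f"{dev:<12} {util:>21.1f}%")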

Regards, 


