Re: missing files

Some time ago I had a similar performance problem (with 3.4, if I remember correctly): a just-created volume worked fine at first, but after some time in use its performance degraded. Removing all files from the volume didn't bring the performance back.

The only way I found to recover performance close to the original without recreating the volume was to remove all of the volume's contents and also delete all 256 .glusterfs/xx/ directories from every brick.

The backend filesystem was XFS.
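
For reference, here is a minimal sketch (in Python, with placeholder brick paths) of the cleanup I mean; it assumes the volume is stopped and already emptied, and it is just an illustration rather than an official procedure:

import os
import shutil

BRICKS = ["/data/brick/test2"]  # placeholder brick roots; adjust per server

for brick in BRICKS:
    gfid_root = os.path.join(brick, ".glusterfs")
    if not os.path.isdir(gfid_root):
        continue
    for name in os.listdir(gfid_root):
        # The hashed GFID directories are named 00..ff (two hex digits);
        # leave any other entries under .glusterfs alone.
        if len(name) == 2 and all(c in "0123456789abcdef" for c in name.lower()):
            shutil.rmtree(os.path.join(gfid_root, name))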

Could you check whether this is the same case?

Xavi

On 02/11/2015 12:22 PM, David F. Robinson wrote:
I don't think it is the underlying file system. /data/brickxx is the underlying XFS, and performance to it is fine. When I create a volume, it just puts the data in /data/brick/test2; the underlying filesystem shouldn't know or care that it is in a new directory.

Also, if I create a volume on /data/brick/test2 and put data on it, it gets slow through gluster, but writing to /data/brick directly is still fine. And after test2 gets slow, I can create a new, empty /data/test3 volume and its speed is fine.

My knowledge is admittedly very limited here, but I don't see how it could be the underlying filesystem if the slowdown only occurs on the gluster mount and not on the underlying XFS filesystem.
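
For what it's worth, a rough way to quantify this comparison (in Python; the two paths below are placeholders for a brick directory and the corresponding gluster mount):

import os
import time

def time_creates(base, count=1000):
    # Create `count` small files under `base` and return the elapsed seconds.
    d = os.path.join(base, "create-test")
    os.makedirs(d, exist_ok=True)
    start = time.time()
    for i in range(count):
        with open(os.path.join(d, "f%04d" % i), "w") as f:
            f.write("x")
    return time.time() - start

# Placeholder paths: raw brick directory vs. the gluster (FUSE) mount.
for path in ("/data/brick/test2", "/mnt/test2"):
    print("%-24s %.2fs" % (path, time_creates(path)))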

David  (Sent from mobile)

===============================
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310      [cell]
704.799.7974      [fax]
David.Robinson@xxxxxxxxxxxxx
http://www.corvidtechnologies.com

On Feb 11, 2015, at 12:18 AM, Justin Clift <justin@xxxxxxxxxxx> wrote:

On 11 Feb 2015, at 03:06, Shyam <srangana@xxxxxxxxxx> wrote:
<snip>
2) We ran an strace of tar and also collected io-stats output from these volumes; both show that create and mkdir are slower on the slow volume than on the fast one. This seems to be the overall reason for the slowness.

Any ideas on "why" create and mkdir are slower?

Wondering if it's a case of underlying filesystem parameters (for the bricks), plus maybe the physical storage layout having become badly optimised over time, e.g. if it's on spinning rust rather than SSD and sector placement is now poor.

Any idea if there are tools that can analyse this kind of thing? E.g. metadata placement / fragmentation on a drive for XFS/ext4.
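
One rough option I can think of (hedged, and just a sketch): walk a brick with filefrag from e2fsprogs and rank files by extent count to get a feel for fragmentation; the brick path below is a placeholder. For XFS there may also be filesystem-wide stats via xfs_db, but I'd have to check.

import os
import subprocess

def extent_count(path):
    # filefrag prints a line like "<path>: 3 extents found".
    try:
        out = subprocess.check_output(["filefrag", path]).decode()
        return int(out.rsplit(":", 1)[1].split()[0])
    except (subprocess.CalledProcessError, OSError, IndexError, ValueError):
        return None

counts = []
for root, _, files in os.walk("/data/brick/test2"):  # placeholder brick path
    for name in files:
        full = os.path.join(root, name)
        n = extent_count(full)
        if n is not None:
            counts.append((n, full))

# Show the 20 most fragmented files.
for n, path in sorted(counts, reverse=True)[:20]:
    print("%6d extents  %s" % (n, path))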

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
