Re: Performance degeneration issues


 



[ ... random IO on a collection of 6 million 2K files
degrades from 8000/s to 200/s ... ]

>> What's the layout of your storage? How many disks, size, etc.?
>> How much RAM, number of CPUs in your server?

Perhaps the question above is a diplomatic way of hinting at
something rather different.

The "op" is complaining that they want 8000 (logical) IOPS,
which they have observed with purely sequential patterns and
write caching at load time, but they are only getting 200
(logical) IOPS on a sustained basis at database access time.

The goal seems therefore to be that they want 8000 (logical)
IOPS even in the worst case scenario (1 physical IOPS per
logical IOPS).

The question to the "op" is then: are you sure that your storage
layer can deliver a bit more than 8000 (physical) IOPS, so that
the filesystem can abstract that into the required 8000 (logical)
IOPS?

All this in the case where your application really requires 8000
(physical) IOPS from the storage layer, which is a very high
target, especially as it seems that your current storage layer
peaks around 200 (physical) IOPS.
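
To put rough numbers on "very high target" (my own back-of-the
envelope figures, not anything measured on the "op"'s hardware):

    # Back-of-the-envelope only; the per-spindle figures are my
    # assumptions, not measurements from the "op"'s setup.
    TARGET_IOPS   = 8000   # what the "op" wants to sustain
    OBSERVED_IOPS = 200    # what the storage currently delivers
    IOPS_7200RPM  = 120    # rough random-IOPS guess for a 7200 RPM disk
    IOPS_15KRPM   = 180    # rough guess for a 15K RPM disk

    def spindles(target, per_disk):
        # Disks needed if every logical IOPS costs one physical IOPS.
        return -(-target // per_disk)   # ceiling division

    print("7200 RPM spindles needed:", spindles(TARGET_IOPS, IOPS_7200RPM))
    print("15K RPM spindles needed: ", spindles(TARGET_IOPS, IOPS_15KRPM))
    print("shortfall vs. current:    %dx" % (TARGET_IOPS // OBSERVED_IOPS))

That works out to around 45-67 spindles (or a few decent SSDs),
against a storage layer that currently behaves like 1-2 spindles.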

> With the SANs I checked if they are under heavy load (they are
> not).

What does "load" mean here? Does it mean transfer rate or IOPS?
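
Because those are very different things: a SAN can look lightly
"loaded" in MB/s while being saturated in IOPS. Something like
'iostat -x' shows both; or, as a crude sketch of the distinction
(assumes a Linux host with /proc/diskstats, and "sda" is just a
placeholder device name):

    import time

    DEV = "sda"   # placeholder device name

    def snap(dev):
        # Returns (total IOs completed, total bytes transferred) so far.
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    ios = int(fields[3]) + int(fields[7])          # reads + writes
                    sectors = int(fields[5]) + int(fields[9])      # 512-byte sectors
                    return ios, sectors * 512
        raise SystemExit("device %s not found" % dev)

    ios0, bytes0 = snap(DEV)
    time.sleep(10)
    ios1, bytes1 = snap(DEV)

    print("IOPS: %.1f" % ((ios1 - ios0) / 10.0))
    print("MB/s: %.2f" % ((bytes1 - bytes0) / 10.0 / 1e6))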

> Of those VMs only one is situated in an environment where there
> is load from other VMs.

VMs are usually not a good idea for IO-intensive loads...
However the combination of VMs, SANs and file systems used as
small-record DBMSes seems popular, as each of those choices tends
to spring from the same root cause.

> It isn't really a sorting algorithm. Every file has 3 elements
> that can be used to categorise it. So the actual sorting
> amounts to a linear search (read) for those elements and a
> move/copy&delete (after possible creation of 4 directories in
> which to sort/categorise). [categorisation is done in this
> pattern: /<unique id>/<year>/fixed/(a|b)]

Ahh, the usual ridiculous "file systems are the optimal DBMS for
collections of small random records" delusion, with the bizarre
additional delusion that changing directory entries amounts to
"actual sorting".

> All of which show the same issues. So my reasoning was either
> hardware or FS. But seeing as the degeneration also happens
> with the SANs I thought it might be more of an FS-specific
> issue.

Great insight! Couldn't it be that you have misdesigned and
miscoded your application in a demented way on the wrong storage
layer so that it scales very badly, and that you have been
fooled by tests that "worked well" at a much smaller scale?
Surely not. :-)
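
One very plain way to get fooled at small scale (my arithmetic,
using only the "6 million 2K files" figure quoted at the top; the
inode and dentry sizes are assumptions):

    # Rough working-set arithmetic for 6 million 2KB files.
    FILES      = 6_000_000
    FILE_SIZE  = 2048   # bytes, per the quoted figure
    INODE_SIZE = 256    # typical XFS default inode size (assumption)
    DENTRY     = 64     # very rough per-entry directory cost (assumption)

    print("data:     %.1f GB" % (FILES * FILE_SIZE / 1e9))             # ~12.3 GB
    print("metadata: %.1f GB" % (FILES * (INODE_SIZE + DENTRY) / 1e9)) # ~1.9 GB

A test set that fits in the page cache delivers "IOPS" that are
mostly cache hits; the full ~14GB working set on a RAM-starved VM
does not, and then every logical access becomes a physical seek.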

[ ... ]

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

