Re: Crawling and indexing hardware

On Wed, 7 May 2008 20:06:40 +0200 "Marcus Herou"
<marcus.herou@xxxxxxxxxxxxx> wrote:

> 1.  Big index files ~x Gig each
> 2.  Many small files in a huge amount of directories.

Do you plan to do any AFR (automatic file replication) ?  If so,
consider that even a one-byte change to your "big index files" will
cause the /entire/ file to be AFR'd between all participating nodes.
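To put a rough number on that whole-file self-heal cost, here is a back-of-the-envelope sketch (file size, replica count, and link speed are all hypothetical; plug in your own values):

```python
# Hypothetical numbers -- adjust to your setup.
index_file_gib = 4   # size of one "big index file", in GiB
replicas = 3         # participating AFR nodes
link_gbit = 1        # inter-node link speed, in Gbit/s

# A one-byte change re-syncs the whole file to every other replica.
bytes_moved = index_file_gib * 2**30 * (replicas - 1)
seconds = bytes_moved * 8 / (link_gbit * 10**9)
print(f"{bytes_moved / 2**30:.0f} GiB moved, ~{seconds:.0f} s on a {link_gbit} Gbit/s link")
# -> 8 GiB moved, ~69 s on a 1 Gbit/s link
```

With files that size, even occasional small updates can saturate the replication links, which is why many small files can be the friendlier layout for AFR.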

> Finally what tools would suite to test zillions of small files ?
> Bonnie++ ? Fewer big files ? Still Bonnie++ or perhaps IOZone ?

IOZone is an interesting tool, assuming you can interpret the
results. :P  I have been using Bonnie++ and FFSB extensively over the
past couple of weeks to stress-test / benchmark Gluster.  Both have the
advantage of producing easily interpretable results, and FFSB is highly
configurable, depending on what sort of tests you'd like to run (read /
write / both, small / large files, lots / few files, etc..).
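For the small-files-vs-big-files question specifically, Bonnie++ can cover both in two runs.  A sketch (mount point and user are placeholders; tune the counts and sizes to your workload):

```shell
# Small-file / metadata test: -s 0 skips the large-file throughput phase;
# -n 16:4096:512:64 creates 16*1024 files of 512 B to 4 KiB, spread over 64 dirs.
bonnie++ -d /mnt/gluster/bench -s 0 -n 16:4096:512:64 -u nobody

# Large-file throughput test: -n 0 skips file creation; size the data set
# well past RAM (e.g. 2x) so the page cache doesn't flatter the numbers.
bonnie++ -d /mnt/gluster/bench -s 8g -n 0 -u nobody
```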

The following page contains some sample FFSB configs to work from :
http://tastic.brillig.org/~jwb/zfs-xfs-ext4.html
(see "Step 8".)

Cheers !

-- 
Daniel Maher <dma AT witbe.net>
