Re: distributed IOPS measuring

On 2012-03-26 00:00, Jiri Horky wrote:
> Hi,
> 
> I would like to measure IOPS on a distributed file system from several
> hosts in parallel.
> I am a bit lost as to which options I should use. I would like each
> host in the cluster to access its own file, so as not to stress the
> metadata and/or locking infrastructure. I thought that on all clients
> I should simply run:
> 
> fio --server
> 
> And from an admin node something like
> 
> fio --client server1 job.desc.1 --client server2 job.desc.2 --client
> server3 job.desc.3 ..., where job.desc.X is specific to that client
> (a different filename).
> 
> But it seems like each host executes every job file, which is not what I
> would like...
> I bet there is a way to accomplish this. Could you please point me
> in the right direction?

That is/was indeed the intended idea. It's just a parsing issue that
causes the same set of job files to be applied across the whole bunch
of hosts; it's not a fio limitation, so it should be relatively easy
to fix. Your above incantation ends up being identical to doing:

$ fio --client server1 --client server2 --client server3 job1 job2 job3
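
So every connected client ends up running all three job files. What you
intended would give each client only its own job file, differing basically
just in the target file. As a rough illustration (paths and parameters
below are made-up examples, not taken from your setup), job.desc.1 might
look like:

[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
runtime=60
time_based

[iops-test]
filename=/mnt/dfs/server1.testfile
size=1g

with job.desc.2 and job.desc.3 identical apart from the filename= line.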

I can fix this when I get the time, or you can dive into it yourself if
you want. It's in init.c:parse_cmd_line().
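
Until then, one possible workaround (just a sketch, using the hostnames
and job file names from your example) is to start a separate client
process per host from the admin node and let them run in parallel:

# one fio client per remote host, each with its own job file
fio --client=server1 job.desc.1 &
fio --client=server2 job.desc.2 &
fio --client=server3 job.desc.3 &
wait

The drawback is that each invocation reports its results separately, so
you'd have to aggregate the numbers yourself.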

-- 
Jens Axboe


