Re: segfault running fio against 2048 jobs

On 04/20/2012 04:21 PM, Roger Sibert wrote:
> I was thinking along the lines of adding an option, job_size_allowed =
> default, (l)arge, (xl)arge, (j)umbo, and using those where you do the
> FIO_MAX_JOBS or REAL_MAX_JOBS check.
> 
> Default=1
> Large=1.5
> XLarge=2
> Jumbo=3
> 
> char output[(REAL_MAX_JOBS*job_size_allowed) + 512], *p = output;
> 
> Sorry if the code is off/wrong; I have spent the last 5 days doing
> nothing but perl and bash scripting, along with a twist of SQL, so my
> brain is mush :P
> 
> I used to test RAID code for a living, so I wasn't about to start digging
> since I don't know the code well enough, which means that if I push in on
> one side, something else will more than likely pop out on the other.
> 
> Looking at what you're describing vs what I was thinking, your approach
> of setting it up to allow for a more dynamic range would be more elegant
> and would serve better in the long run.
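
For reference, a rough sketch of how that multiplier-based sizing might
look (illustrative only, not fio code; the option and macro names are from
your mail, the values are placeholders, and storing the multiplier as a
percentage keeps the array size an integer expression rather than a
floating-point one):

#include <stdio.h>

#define REAL_MAX_JOBS	4096	/* placeholder value, not fio's real limit */

enum job_size_allowed {
	JOB_SIZE_DEFAULT = 100,	/* 1.0x */
	JOB_SIZE_LARGE	 = 150,	/* 1.5x */
	JOB_SIZE_XLARGE	 = 200,	/* 2.0x */
	JOB_SIZE_JUMBO	 = 300,	/* 3.0x */
};

static size_t output_size(enum job_size_allowed mult)
{
	return ((size_t)REAL_MAX_JOBS * mult) / 100 + 512;
}

int main(void)
{
	/* variable-length array sized from the selected multiplier */
	char output[output_size(JOB_SIZE_JUMBO)], *p = output;

	p += sprintf(p, "output buffer holds %zu bytes\n", sizeof(output));
	fputs(output, stdout);
	return 0;
}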

Yes, the point of doing segmented thread_data arrays would be to get rid
of any fio-imposed constraint on the number of jobs that could be
supported, and to do so without requiring tweaking of the shm segment size
on the OS in question.
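
To make the segmented idea concrete, something along these lines (a rough
sketch only, not fio's actual implementation; the JOBS_PER_SEG constant and
get_td() helper are made up for illustration, and it uses plain heap
allocation where fio would carve each segment out of its own shm region):

#include <stdlib.h>
#include <string.h>

#define JOBS_PER_SEG	1024

struct thread_data {
	int thread_number;
	/* ... rest of the per-job state ... */
};

static struct thread_data **segments;	/* one pointer per segment */
static unsigned int nr_segments;

/* return a pointer to job slot 'i', allocating segments as needed */
static struct thread_data *get_td(unsigned int i)
{
	unsigned int seg = i / JOBS_PER_SEG;

	if (seg >= nr_segments) {
		struct thread_data **tmp;

		tmp = realloc(segments, (seg + 1) * sizeof(*segments));
		if (!tmp)
			return NULL;
		memset(&tmp[nr_segments], 0,
		       (seg + 1 - nr_segments) * sizeof(*tmp));
		segments = tmp;
		nr_segments = seg + 1;
	}
	if (!segments[seg]) {
		segments[seg] = calloc(JOBS_PER_SEG, sizeof(struct thread_data));
		if (!segments[seg])
			return NULL;
	}
	return &segments[seg][i % JOBS_PER_SEG];
}

Since no single allocation has to cover the worst case up front, there's no
FIO_MAX_JOBS-style cap to hit and no need to raise the OS shm limits just to
run a large job count.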

-- 
Jens Axboe

