Hi Jens,

I am facing issues while applying the patch. Please find the details below.

The fio.h hunk of the patch is:

--- a/fio.h
+++ b/fio.h
@@ -67,6 +67,7 @@ struct thread_data {
 	char verror[FIO_VERROR_SIZE];
 	pthread_t thread;
 	unsigned int thread_number;
+	unsigned int group_thread_number;
 	unsigned int groupid;
 	struct thread_stat ts;

There is no such line in that file:

$> grep 'unsigned int thread_number' fio.h
extern unsigned int thread_number;

Line 67 does not have any related lines either:

65 enum {
66 	RW_SEQ_SEQ	= 0,
67 	RW_SEQ_IDENT,
68 };

Let me know how to proceed further.

Thanks,
Suresh.D

On Sat, Apr 14, 2012 at 8:29 AM, Jens Axboe <axboe@xxxxxxxxx> wrote:
> On 2012-04-14 17:23, Jens Axboe wrote:
>> On 2012-04-14 01:48, Suresh Dhanarajan wrote:
>>> Hi,
>>>
>>> I wanted to use the "offset_increment" option together with "numjobs" so
>>> that I can split the workload between ten jobs and reduce the run time.
>>> But what I am seeing is that the offset counter is not reset when the
>>> first job in the job file is completed. (Is the offset global for the
>>> job file and not per job inside the job file?) So my reads start from
>>> the last offset set by the writes.
>>>
>>> Here is my job file:
>>>
>>> [global]
>>> bs=64k
>>> direct=1
>>> numjobs=10
>>> size=1m
>>> group_reporting
>>>
>>> [write-phase]
>>> offset_increment=1m
>>> filename=/dev/sdb
>>> rw=write
>>> write_iolog=verfywrite2
>>>
>>> [read-phase]
>>> stonewall
>>> offset_increment=1m
>>> filename=/dev/sdb
>>> rw=read
>>> write_iolog=verfyread2
>>>
>>> I tried using the offset=0 option in the read-phase job, but then every
>>> read happens from offset zero and the offset_increment option is no
>>> longer honored:
>>>
>>> [write-phase]
>>> offset_increment=1m
>>> filename=/dev/sdb
>>> rw=write
>>> write_iolog=verfywrite2
>>>
>>> [read-phase]
>>> stonewall
>>> offset=0
>>> offset_increment=1m
>>> filename=/dev/sdb
>>> rw=read
>>> write_iolog=verfyread2
>>>
>>> I tried the same case with the verify option; the behavior is the same.
>>>
>>> Is there any way that I can reset the offset counter once a job is
>>> completed?
>>
>> Right now there isn't, but it does make sense to reset the counter
>> across stonewalls. At the moment, it will just increment for each job.
>> Internally, fio sets up all the jobs, even across stonewalls, when it
>> starts up; it just starts some of them later on, depending on those
>> parameters. It does seem most useful to have the counter reset across a
>> hard barrier, though, so I can definitely change it to do that.
>>
>> Let me brew up a patch later today that you can test.
>
> Had a few minutes now, didn't expect that. Can you try this? Do a make
> clean first.
>
> diff --git a/filesetup.c b/filesetup.c
> index 166ace8..bad5bed 100644
> --- a/filesetup.c
> +++ b/filesetup.c
> @@ -714,7 +714,7 @@ int setup_files(struct thread_data *td)
>  	need_extend = 0;
>  	for_each_file(td, f, i) {
>  		f->file_offset = td->o.start_offset +
> -			(td->thread_number - 1) * td->o.offset_increment;
> +			(td->group_thread_number - 1) * td->o.offset_increment;
>
>  		if (!td->o.file_size_low) {
>  			/*
> diff --git a/fio.h b/fio.h
> index 95d9d77..05a5572 100644
> --- a/fio.h
> +++ b/fio.h
> @@ -67,6 +67,7 @@ struct thread_data {
>  	char verror[FIO_VERROR_SIZE];
>  	pthread_t thread;
>  	unsigned int thread_number;
> +	unsigned int group_thread_number;
>  	unsigned int groupid;
>  	struct thread_stat ts;
>
> diff --git a/init.c b/init.c
> index 7422628..46571ee 100644
> --- a/init.c
> +++ b/init.c
> @@ -35,6 +35,7 @@ static int dump_cmdline;
>  static int def_timeout;
>
>  static struct thread_data def_thread;
> +static int group_thread_number;
>  struct thread_data *threads = NULL;
>
>  int exitall_on_terminate = 0;
> @@ -829,8 +830,10 @@ static int add_job(struct thread_data *td, const char *jobname, int job_add_num,
>  	if ((td->o.stonewall || td->o.new_group) && prev_group_jobs) {
>  		prev_group_jobs = 0;
>  		groupid++;
> +		group_thread_number = 0;
>  	}
>
> +	td->group_thread_number = ++group_thread_number;
>  	td->groupid = groupid;
>  	prev_group_jobs++;
>
> --
> Jens Axboe
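
For readers trying to follow what the patch changes, here is a minimal standalone
sketch of the offset arithmetic. This is not fio code; it just mirrors the formula
in setup_files() and assumes the numbers from the job file above (numjobs=10,
offset_increment=1m, two stonewalled groups):

#include <stdio.h>

/*
 * Illustration of the patch: with the global thread_number, the jobs
 * started after the stonewall keep incrementing the offset, so the
 * read-phase jobs land at 10m..19m.  With group_thread_number, which
 * is reset at each stonewall, both groups get offsets 0m..9m.
 */
int main(void)
{
	const unsigned long long offset_increment = 1ULL << 20;	/* 1m */
	const int numjobs = 10, groups = 2;	/* write-phase + read-phase */
	unsigned int thread_number = 0;
	int g, j;

	for (g = 0; g < groups; g++) {
		unsigned int group_thread_number = 0;	/* reset per group */

		for (j = 0; j < numjobs; j++) {
			unsigned long long old_off, new_off;

			thread_number++;
			group_thread_number++;

			old_off = (thread_number - 1) * offset_increment;
			new_off = (group_thread_number - 1) * offset_increment;

			printf("group %d, job %2d: old offset %llum, new offset %llum\n",
			       g, j, old_off >> 20, new_off >> 20);
		}
	}
	return 0;
}

With the per-group counter, each read-phase job should therefore read back exactly
the region that the corresponding write-phase job wrote, which is what the original
job file intended.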