Re: CPUs, threads, and speed

On Thu, Jan 16, 2020 at 2:00 AM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
>
> On Wed, Jan 15, 2020 at 11:36 PM Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
> >
> > On Wed, Jan 15, 2020 at 2:00 PM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
> > >
> > > On Wed, Jan 15, 2020 at 9:29 PM Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
> > > >
> > > > On Wed, Jan 15, 2020 at 1:04 PM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
> > > > >
> > > > > On Wed, Jan 15, 2020 at 8:29 PM Gruher, Joseph R
> > > > > <joseph.r.gruher@xxxxxxxxx> wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: fio-owner@xxxxxxxxxxxxxxx <fio-owner@xxxxxxxxxxxxxxx> On Behalf Of
> > > > > > > Mauricio Tavares
> > > > > > > Sent: Wednesday, January 15, 2020 7:51 AM
> > > > > > > To: fio@xxxxxxxxxxxxxxx
> > > > > > > Subject: CPUs, threads, and speed
> > > > > > >
> > > > > > > Let's say I have a config file to preload a drive that looks like
> > > > > > > this (stolen from
> > > > > > > https://github.com/intel/fiovisualizer/blob/master/Workloads/Precondition/fill_4KRandom_NVMe.ini)
> > > > > > >
> > > > > > > [global]
> > > > > > > name=4k random write 4 ios in the queue in 32 queues
> > > > > > > filename=/dev/nvme0n1
> > > > > > > ioengine=libaio
> > > > > > > direct=1
> > > > > > > bs=4k
> > > > > > > rw=randwrite
> > > > > > > iodepth=4
> > > > > > > numjobs=32
> > > > > > > buffered=0
> > > > > > > size=100%
> > > > > > > loops=2
> > > > > > > randrepeat=0
> > > > > > > norandommap
> > > > > > > refill_buffers
> > > > > > >
> > > > > > > [job1]
> > > > > > >
> > > > > > > That is taking a ton of time, as in days to finish. Is there
> > > > > > > anything I can do to speed it up?
> > > > > >
> > > > > > When you say preload, do you just want to write in the full capacity of the drive?
> > > > >
> > > > > I believe that preload here means what in the SSD world is called
> > > > > drive preconditioning: bringing a fresh drive into steady state,
> > > > > where it gives you the true performance you would see in production
> > > > > after months of use, rather than the unrealistically high
> > > > > fresh-out-of-the-box random write IOPS.
> > > > >
> > > > > > A sequential workload with larger blocks will be faster,
> > > > >
> > > > > No, you cannot get the job done with sequential writes, since they
> > > > > don't populate the FTL translation tables the way random writes do.
> > > > >
> > > > > As for it taking a ton of time, the rule of thumb is to give the
> > > > > SSD 2x its capacity worth of random writes. At today's speeds, that
> > > > > should take just a couple of hours.
> > > > >
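
(For scale, with purely assumed numbers: a 2 TB drive sustaining about
500 MB/s of steady 4K random writes would need roughly 4 TB / 500 MB/s
= 8000 s, a bit over two hours, for two full-capacity passes, which is
consistent with the "couple of hours" estimate above.)
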
> > > >       When you say 2x capacity worth of random writes, do you mean
> > > > just setting size=200%?
> > >
> > > Right.
> > >
> >       Then I wonder what I am doing wrong now. I changed the config file to
> >
> > [root@testbox tests]# cat preload.conf
> > [global]
> > name=4k random write 4 ios in the queue in 32 queues
> > ioengine=libaio
> > direct=1
> > bs=4k
> > rw=randwrite
> > iodepth=4
> > numjobs=32
> > buffered=0
> > size=200%
> > loops=2
> > random_generator=tausworthe64
> > thread=1
> >
> > [job1]
> > filename=/dev/nvme0n1
> > [root@testbox tests]#
> >
> > but when I run it, it now spits out a much larger ETA:
> >
> > Jobs: 32 (f=32): [w(32)][0.0%][w=382MiB/s][w=97.7k IOPS][eta
> > 16580099d:14h:55m:27s]]
>
> Size is set on a per-thread basis, so you're doing 32 x 200% x 2
> loops = 128 drive capacities here.
>
> Also, using 32 threads doesn't improve anything. Two (or even one)
> thread at qd=128 will push the drive to its limits.
>
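
     For reference, here is a minimal single-job sketch of the approach
Andrey describes above (one deep-queue job, two full random-write passes
over the device, so roughly 2x capacity in total). This is an untested
sketch; the device name is just the one from my earlier runs.

[global]
name=4k randwrite preconditioning 2x capacity
ioengine=libaio
thread=1
direct=1
bs=4k
rw=randwrite
# one deep-queue job instead of 32 shallow ones
iodepth=128
numjobs=1
# size is per job, so 100% x 2 loops = 2x drive capacity written in total
size=100%
loops=2
randrepeat=0
norandommap
refill_buffers

[job1]
filename=/dev/nvme0n1
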
     Update: I reworked the config file a bit to pass some of the
arguments in from the command line, and cut down the number of jobs and
loops. Then I ran it again, this time as a sequential write to a drive I
have not touched yet, to see how fast it would go. The ETA is still
astronomical:

[root@testbox tests]# cat preload_fio.conf
[global]
name=4k random
ioengine=${ioengine}
direct=1
bs=${bs_size}
rw=${iotype}
iodepth=4
numjobs=1
buffered=0
size=200%
loops=1

[job1]
filename=${devicename}
[root@testbox tests]# devicename=/dev/nvme1n1 ioengine=libaio
iotype=write bs_size=128k ~/dev/fio/fio ./preload_fio.conf
job1: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T)
128KiB-128KiB, ioengine=libaio, iodepth=4
fio-3.17-68-g3f1e
Starting 1 process
Jobs: 1 (f=1): [W(1)][0.0%][w=1906MiB/s][w=15.2k IOPS][eta 108616d:00h:00m:24s]
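
As a rough sanity check on whether an ETA like that is plausible, here
is a back-of-the-envelope sketch (assuming the same /dev/nvme1n1 and
that the 1906 MiB/s shown above is sustained):

# device capacity in bytes = amount written per full pass
SZ=$(blockdev --getsize64 /dev/nvme1n1)
# expected minutes per pass at the observed 1906 MiB/s
echo "$SZ" | awk '{printf "about %.1f minutes per capacity pass\n", $1 / (1906 * 1024 * 1024) / 60}'

For a drive in the low-TB range that works out to tens of minutes per
full pass, nowhere near the ETA fio is reporting.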

> Regards,
> Andrey
> >
> > Compare with what I was getting with size=100%
> >
> >  Jobs: 32 (f=32): [w(32)][10.8%][w=301MiB/s][w=77.0k IOPS][eta 06d:13h:56m:51s]]
> >
> > > Regards,
> > > Andrey
> > > >
> > > > > Regards,
> > > > > Andrey
> > > > >
> > > > > > like:
> > > > > >
> > > > > > [global]
> > > > > > ioengine=libaio
> > > > > > thread=1
> > > > > > direct=1
> > > > > > bs=128k
> > > > > > rw=write
> > > > > > numjobs=1
> > > > > > iodepth=128
> > > > > > size=100%
> > > > > > loops=2
> > > > > > [job00]
> > > > > > filename=/dev/nvme0n1
> > > > > >
> > > > > > Or, if you have a use case where you specifically want to write it
> > > > > > in with 4K blocks, you could probably increase your queue depth well
> > > > > > beyond 4 and see an improvement in performance. You also probably
> > > > > > don't want to specify norandommap if you're trying to hit every
> > > > > > block on the device.
> > > > > >
> > > > > > -Joe
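
For completeness, here is a sketch of the 4K deep-queue variant Joe
mentions above (assumptions: same device as Joe's example, and qd=128
chosen arbitrarily as "way beyond 4"; untested):

[global]
ioengine=libaio
thread=1
direct=1
# 4K random writes at a deep queue depth instead of 128k sequential
bs=4k
rw=randwrite
numjobs=1
iodepth=128
# no norandommap, so fio tracks what it has written and covers every block
size=100%
loops=2
[job00]
filename=/dev/nvme0n1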



