Re: CPUs, threads, and speed

On Wed, Jan 15, 2020 at 2:00 PM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
>
> On Wed, Jan 15, 2020 at 9:29 PM Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
> >
> > On Wed, Jan 15, 2020 at 1:04 PM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
> > >
> > > On Wed, Jan 15, 2020 at 8:29 PM Gruher, Joseph R
> > > <joseph.r.gruher@xxxxxxxxx> wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: fio-owner@xxxxxxxxxxxxxxx <fio-owner@xxxxxxxxxxxxxxx> On Behalf Of
> > > > > Mauricio Tavares
> > > > > Sent: Wednesday, January 15, 2020 7:51 AM
> > > > > To: fio@xxxxxxxxxxxxxxx
> > > > > Subject: CPUs, threads, and speed
> > > > >
> > > > > Let's say I have a config file to preload a drive that looks like this (stolen from
> > > > > https://github.com/intel/fiovisualizer/blob/master/Workloads/Precondition/fill_4KRandom_NVMe.ini)
> > > > >
> > > > > [global]
> > > > > name=4k random write 4 ios in the queue in 32 queues
> > > > > filename=/dev/nvme0n1
> > > > > ioengine=libaio
> > > > > direct=1
> > > > > bs=4k
> > > > > rw=randwrite
> > > > > iodepth=4
> > > > > numjobs=32
> > > > > buffered=0
> > > > > size=100%
> > > > > loops=2
> > > > > randrepeat=0
> > > > > norandommap
> > > > > refill_buffers
> > > > >
> > > > > [job1]
> > > > >
> > > > > That is taking a ton of time, like days to finish. Is there anything I can do
> > > > > to speed it up?
> > > >
> > > > When you say preload, do you just want to write in the full capacity of the drive?
> > >
> > > I believe that preload here means what the SSD world calls drive
> > > preconditioning: bringing a fresh drive into steady state, where it
> > > gives you the true performance you will see in production over months
> > > of use, rather than the unrealistically high fresh-drive random write IOPS.
> > >
> > > > A sequential workload with larger blocks will be faster,
> > >
> > > No, you cannot get the job done with sequential writes, since they
> > > don't populate the FTL translation tables the way random writes do.
> > >
> > > As for it taking a ton of time: the rule of thumb is to give the SSD
> > > two times its capacity worth of random writes. At today's speeds, that
> > > should take just a couple of hours.
> > >
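(Sanity-checking "a couple of hours" with illustrative numbers that are not
from this thread: a hypothetical 1 TB drive sustaining ~400 MB/s of 4K
random writes, i.e. ~100k IOPS, would need

    2 * 1 TB / 400 MB/s = 2,000,000 MB / 400 MB/s = 5,000 s,

or roughly 83 minutes, and proportionally longer at lower steady-state
write speeds.)
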
> >       When you say 2x capacity worth of random writes, do you mean just
> > setting size=200%?
>
> Right.
>
      Then I wonder what I am doing wrong now. I changed the config file to:

[root@testbox tests]# cat preload.conf
[global]
name=4k random write 4 ios in the queue in 32 queues
ioengine=libaio
direct=1
bs=4k
rw=randwrite
iodepth=4
numjobs=32
buffered=0
size=200%
loops=2
random_generator=tausworthe64
thread=1

[job1]
filename=/dev/nvme0n1
[root@testbox tests]#

but when I run it, it now reports much larger ETA values:

Jobs: 32 (f=32): [w(32)][0.0%][w=382MiB/s][w=97.7k IOPS][eta 16580099d:14h:55m:27s]

Compare with what I was getting with size=100%:

 Jobs: 32 (f=32): [w(32)][10.8%][w=301MiB/s][w=77.0k IOPS][eta 06d:13h:56m:51s]
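
If I'm reading fio's numjobs semantics right, each of the 32 clones writes
its own size worth of data, so size=200% plus loops=2 multiplies across all
32 jobs rather than spreading 2x capacity among them, and the ETA grows
accordingly. A minimal single-job sketch that keeps the total at roughly 2x
capacity (untested; iodepth=64 is an illustrative value, not from this
thread):

[global]
name=4k random write precondition, 2x capacity total
ioengine=libaio
direct=1
bs=4k
rw=randwrite
; one job with a deep queue instead of 32 shallow clones
iodepth=64
numjobs=1
randrepeat=0
random_generator=tausworthe64
; size=100% with loops=2 writes the device twice over in total
size=100%
loops=2
thread=1

[job1]
filename=/dev/nvme0n1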

> Regards,
> Andrey
> >
> > > Regards,
> > > Andrey
> > >
> > > > like:
> > > >
> > > > [global]
> > > > ioengine=libaio
> > > > thread=1
> > > > direct=1
> > > > bs=128k
> > > > rw=write
> > > > numjobs=1
> > > > iodepth=128
> > > > size=100%
> > > > loops=2
> > > > [job00]
> > > > filename=/dev/nvme0n1
> > > >
> > > > Or if you have a use case where you specifically want to write it in with 4K blocks, you could probably increase your queue depth well beyond 4 and see a performance improvement. Also, you probably don't want to specify norandommap if you're trying to hit every block on the device.
> > > >
> > > > -Joe
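
For what it's worth, a sketch combining both suggestions from this thread
into one file: a 128k sequential fill followed by a 4K random pass, with
stonewall serializing the two phases (untested; the iodepth values are
illustrative, not from the thread):

[global]
ioengine=libaio
thread=1
direct=1
filename=/dev/nvme0n1
size=100%

[seq-fill]
bs=128k
rw=write
iodepth=128

[rand-fill]
; wait for the sequential fill to finish before starting the random pass
stonewall
bs=4k
rw=randwrite
iodepth=64
loops=2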



