On Wed, Jan 15, 2020 at 9:29 PM Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
>
> On Wed, Jan 15, 2020 at 1:04 PM Andrey Kuzmin <andrey.v.kuzmin@xxxxxxxxx> wrote:
> >
> > On Wed, Jan 15, 2020 at 8:29 PM Gruher, Joseph R
> > <joseph.r.gruher@xxxxxxxxx> wrote:
> > >
> > > > -----Original Message-----
> > > > From: fio-owner@xxxxxxxxxxxxxxx <fio-owner@xxxxxxxxxxxxxxx> On Behalf Of
> > > > Mauricio Tavares
> > > > Sent: Wednesday, January 15, 2020 7:51 AM
> > > > To: fio@xxxxxxxxxxxxxxx
> > > > Subject: CPUs, threads, and speed
> > > >
> > > > Let's say I have a config file to preload a drive that looks like this (stolen from
> > > > https://github.com/intel/fiovisualizer/blob/master/Workloads/Precondition/fill
> > > > _4KRandom_NVMe.ini):
> > > >
> > > > [global]
> > > > name=4k random write 4 ios in the queue in 32 queues
> > > > filename=/dev/nvme0n1
> > > > ioengine=libaio
> > > > direct=1
> > > > bs=4k
> > > > rw=randwrite
> > > > iodepth=4
> > > > numjobs=32
> > > > buffered=0
> > > > size=100%
> > > > loops=2
> > > > randrepeat=0
> > > > norandommap
> > > > refill_buffers
> > > >
> > > > [job1]
> > > >
> > > > That is taking a ton of time, like days. Is there anything I can do to speed it
> > > > up?
> > >
> > > When you say preload, do you just want to write in the full capacity of the drive?
> >
> > I believe that preload here means what in the SSD world is called drive
> > preconditioning: bringing a fresh drive into steady state, where it gives
> > you the true performance you'd see in production over months of use,
> > rather than the unrealistically high fresh-drive random write IOPS.
> >
> > > A sequential workload with larger blocks will be faster,
> >
> > No, you cannot get the job done with sequential writes, since they don't
> > populate the FTL translation tables the way random writes do.
> >
> > As to it taking a ton of time, the rule of thumb is to give the SSD 2x its
> > capacity's worth of random writes. At today's speeds, that should take
> > just a couple of hours.
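For concreteness, Andrey's rule of thumb can be expressed directly in a job file: with size=100% and loops=2, fio makes two full random-write passes over the device, i.e. 2x capacity. This is only a sketch, not a config from the thread; the device name and iodepth=32 are assumptions to tune for your hardware. Note that size= and loops= apply per job, so numjobs multiplies the total bytes written; a single deep-queue job keeps the total at 2x capacity.

```ini
; Sketch: 2x-capacity 4K random-write preconditioning.
; Assumptions: /dev/nvme0n1 as the target, iodepth=32 as the queue depth.
[global]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
bs=4k
rw=randwrite
; one job with a deep queue instead of 32 shallow jobs,
; so the total written stays at 2x capacity
iodepth=32
numjobs=1
size=100%
; two full passes = 2x capacity
loops=2
randrepeat=0
norandommap

[precondition]
```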
> When you say 2xcapacity worth of random writes, do you mean just
> setting size=200%?

Right.

Regards,
Andrey

> > Regards,
> > Andrey
> >
> > > like:
> > >
> > > [global]
> > > ioengine=libaio
> > > thread=1
> > > direct=1
> > > bs=128k
> > > rw=write
> > > numjobs=1
> > > iodepth=128
> > > size=100%
> > > loops=2
> > >
> > > [job00]
> > > filename=/dev/nvme0n1
> > >
> > > Or, if you have a use case where you specifically want to write it in with 4K blocks,
> > > you could probably increase your queue depth well beyond 4 and see an improvement
> > > in performance. Also, you probably don't want to specify norandommap if you're
> > > trying to hit every block on the device.
> > >
> > > -Joe
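If the goal is specifically to touch every block with 4K writes, Joe's two suggestions (a much deeper queue, and dropping norandommap so fio tracks which blocks have been hit) could be sketched as below. The device name and iodepth=64 are assumptions, not values from the thread.

```ini
; Sketch of Joe's 4K variant: deep queue, norandommap omitted so fio
; keeps a map of written blocks and covers the whole device.
; Assumptions: /dev/nvme0n1 as the target, iodepth=64 as the queue depth.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randwrite
iodepth=64
numjobs=1
size=100%
randrepeat=0

[job00]
filename=/dev/nvme0n1
```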