On 19.04.20 20:01, Douglas Gilbert wrote:
> On 2020-04-13 6:57 p.m., Martin K. Petersen wrote:
>>
>> Doug,
>>
>>> Many disks implement the SCSI PRE-FETCH commands. One use case might
>>> be a disk-to-disk compare, say between disks A and B. Then this
>>> sequence of commands might be used: PRE-FETCH (from B, IMMED),
>>> READ (from A), VERIFY (BYTCHK=1 on B with the data returned by READ).
>>> The PRE-FETCH (which returns quickly due to IMMED) fetches the data
>>> from the media into B's cache, which should speed up the trailing
>>> VERIFY command. The next chunk of the compare might be done in
>>> parallel, with A and B reversed.
>>
>> Minor nit: I agree with the code and the use case. But the commit
>> description should reflect what the code actually does (not much in
>> the absence of cache, etc.)
>
> On reflection, there is no reason why the implementation of PRE-FETCH
> for a scsi_debug ramdisk can't do what it implies. IOWs, get those
> blocks into (say) the machine's L3 cache. This is to speed up a
> following VERIFY(BYTCHK=1) [or NVMe Compare ***] that will use those
> blocks. The question is, how?
>
> I have added this to resp_pre_fetch():
>     memcpy(ramdisk_ptr, ramdisk_ptr, num_blks * blk_sz);
>
> Will that be optimized out? If so, is there a better/faster way to
> encourage a machine to populate its cache?

Have a look at prefetch_range() ?

> Doug Gilbert
>
>
> *** I have a recent WD SN550 SSD whose sequential read speed (after
> data (zeros) was written) is around 1200 MB/sec. Its read speed
> _before_ data was written was around 25 KB/sec!! And its compare speed
> (with random data written) is a very disappointing 25 MB/sec.