John R Pierce wrote:
On 9/9/2017 9:47 AM, hw wrote:
Isn't it easier for SSDs to write small chunks of data at a time?
The small chunk might fit into some free space more easily than
a large one which needs to be spread out all over the place.
The SSD collects the data blocks being written, and once a full flash block's worth has accumulated, often 256 KB to several MB, it writes them all at once to a single contiguous block on the flash array, no matter what the 'addresses' of the blocks being written are. Think of it as a 'scatter-gather' operation.
Different drive brands and models use different strategies for this, and it is all completely opaque to the host OS, so you really can't outguess or manage this process at the OS or disk-controller level.
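As a rough illustration of that collect-and-flush behaviour, here is a toy Python sketch. Everything in it is invented for illustration: the 256 KB erase-block size, the 4 KB page size, and all the names; real drive firmware is far more elaborate and, as noted above, invisible to the host.

FLASH_BLOCK_SIZE = 256 * 1024    # assumed erase-block size (256 KB)
PAGE_SIZE = 4 * 1024             # assumed logical write granularity (4 KB)
PAGES_PER_BLOCK = FLASH_BLOCK_SIZE // PAGE_SIZE   # 64 pages per flash block

class ToyFTL:
    """Collects incoming page writes and flushes them as one contiguous
    flash block, regardless of their logical addresses."""

    def __init__(self):
        self.pending = []       # (logical page number, payload) waiting to be flushed
        self.mapping = {}       # logical page number -> (flash block, slot within block)
        self.next_flash_block = 0

    def write(self, lpn, payload):
        self.pending.append((lpn, payload))
        if len(self.pending) == PAGES_PER_BLOCK:
            self._flush()

    def _flush(self):
        # Scattered logical pages all land in one physically contiguous block.
        block = self.next_flash_block
        self.next_flash_block += 1
        for slot, (lpn, _payload) in enumerate(self.pending):
            self.mapping[lpn] = (block, slot)
        self.pending = []

if __name__ == "__main__":
    import random
    ftl = ToyFTL()
    # 128 writes to scattered logical addresses...
    for _ in range(2 * PAGES_PER_BLOCK):
        ftl.write(random.randrange(1_000_000), b"x" * PAGE_SIZE)
    # ...end up packed into just two contiguous flash blocks: prints [0, 1]
    print(sorted({block for block, _ in ftl.mapping.values()}))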
What happens when that collection buffer is full?
I understand that a small chunk size can reduce performance because
many more chunks have to be handled per request. A large chunk size
means reading and writing larger amounts of data every time, and that
could also reduce performance.

With a chunk size of 1 MB, disk access might involve huge amounts of
data being read and written unnecessarily. So what would be a good
chunk size for SSDs?
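For what it's worth, here is a back-of-the-envelope way to look at that tradeoff. It is only a toy model under the deliberately pessimistic assumption that a whole chunk is transferred for every chunk a request touches; in practice most RAID implementations transfer only the sectors actually requested, so read the numbers as an upper bound on wasted traffic, not a measurement. The workload mix is made up too.

import math

KiB = 1024

def cost(request_sizes, chunk_size):
    """Return (chunk operations, bytes transferred) for a list of request
    sizes, assuming each request starts on a chunk boundary and every
    touched chunk is transferred in full (pessimistic)."""
    ops = sum(math.ceil(size / chunk_size) for size in request_sizes)
    transferred = ops * chunk_size
    return ops, transferred

if __name__ == "__main__":
    # Hypothetical workload: mostly 4 KB and 16 KB requests, a few 1 MB ones.
    workload = [4 * KiB] * 1000 + [16 * KiB] * 200 + [1024 * KiB] * 10
    useful = sum(workload)
    for chunk in (64 * KiB, 256 * KiB, 1024 * KiB):
        ops, moved = cost(workload, chunk)
        print(f"chunk {chunk // KiB:5d} KiB: {ops:6d} chunk ops, "
              f"{moved / useful:.1f}x the useful data moved")

Running it shows both effects you describe: shrinking the chunk raises the number of chunk operations, while growing it inflates the amount of data moved relative to what was actually asked for, at least under this worst-case model.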