On 21. 01. 24 at 3:21, matthew patton wrote:
As you already said, it was never ACKed, so the software that tried to write it never expected it to be written.
We don't care about the user program and what it thinks got written or not.
That's way higher up the stack.
Any write-thru cache has NO business writing new data to cache first, it must
hit the source media first. Once that is done it can be ACK'd. The ONLY other
part of the "transaction" is an update to the cache management block-mapping
to invalidate the block so as to prevent stale reads.
THEN IF there is a case to be made for re-caching the new data (we know it
was a block under active management), that is a SECOND OP that can also be
made asynchronous. Write-thru should ALWAYS perform and behave as if the
cache device doesn't exist at all.
Hi
Anyone can surely write a caching policy following the rules above; however,
the current DM cache works differently with cached 'blocks'.
The method above would require first dropping/demoting the whole cached block
out of the cache, then updating the content on the origin device, and then
promoting the whole updated block back into the cache. I.e. when a user
writes a 512 B sector, the whole cached 512 KiB block would need to be
recached...
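To make the cost of that sequence concrete, here is a toy in-memory sketch of the strict demote/write/re-promote policy described above. The `Origin` and `Cache` classes, the helper names, and the block sizes are all illustrative assumptions for this example, not actual dm-cache code:

```python
# Toy model of the strict write-through sequence (illustrative only).
# Small sizes stand in for the real ones: CACHE_BLOCK plays the role
# of the 512 KiB cache block, a 1-byte write plays the 512 B sector.

CACHE_BLOCK = 8  # bytes per cache block in this toy model

class Origin:
    """Trivial in-memory origin device (hypothetical helper)."""
    def __init__(self, size):
        self.data = bytearray(size)
    def write(self, off, buf):
        self.data[off:off + len(buf)] = buf
    def read(self, off, n):
        return bytes(self.data[off:off + n])

class Cache:
    """Trivial block cache: block index -> block contents."""
    def __init__(self):
        self.blocks = {}
    def invalidate(self, blk):
        self.blocks.pop(blk, None)
    def promote(self, blk, buf):
        self.blocks[blk] = buf

def strict_writethrough_write(cache, origin, off, buf):
    blk = off // CACHE_BLOCK
    # 1. Demote/invalidate the whole cached block (no stale reads).
    cache.invalidate(blk)
    # 2. Hit the origin first; only after this may the write be ACKed.
    origin.write(off, buf)
    # 3. Second (possibly asynchronous) op: re-promote the WHOLE block,
    #    so a tiny sector write costs a full block re-read from origin.
    cache.promote(blk, origin.read(blk * CACHE_BLOCK, CACHE_BLOCK))
```

Step 3 is where the performance objection lives: every small write pays for a full cache-block re-read.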
So here I can only wish good luck with the performance of such an engine. The
current DM cache engine uses parallel writes - thus there can be a moment
where the cache simply has the more recent and valid data.
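The parallel-write behaviour can be sketched like this (again a toy model, not the actual dm-cache code; the `Dev` class and function name are assumptions for illustration). Both copies are written concurrently and the ACK is only sent after both complete, so mid-flight the cache copy can briefly be newer than the origin:

```python
import threading

class Dev:
    """Trivial in-memory block device (illustrative only)."""
    def __init__(self, size):
        self.data = bytearray(size)
    def write(self, off, buf):
        self.data[off:off + len(buf)] = buf

def parallel_writethrough_write(cache_dev, origin_dev, off, buf):
    # Issue both writes in parallel; ACK the caller only after BOTH
    # have completed.  Between the two completions the cache may hold
    # more recent data than the origin -- the window discussed above.
    t_cache = threading.Thread(target=cache_dev.write, args=(off, buf))
    t_origin = threading.Thread(target=origin_dev.write, args=(off, buf))
    t_cache.start(); t_origin.start()
    t_cache.join(); t_origin.join()  # only here is the write ACKed
```

Since the write was not yet ACKed during that window, correctly written transactional software cannot observe the inconsistency.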
The problem here happens when the origin has faulty sectors - so the DM
target takes this risk - but it should not have any impact on properly
written software that uses transactional mechanisms correctly.
So if there is room for a much slower cache that will never ever have any
dirty pages - someone can bravely step in and write a new caching policy for
such an engine.
Regards
Zdenek