Zdenek, from the email below you might believe that I am advocating for
a max snapshot size.
I was not.
The only kernel feature I was suggesting was a judgement about when or
how to refuse allocation of new chunks. Nothing else. It would not be
based on consumed space, or on the unique space consumed by volumes or
snapshots. It would be based only on a FREE SPACE metric, not a USED
SPACE metric (which can be more complex).
When you say that freezing allocation has the same effect as an error
target, you could be correct.
I will not respond to individual remarks, but will instead restate the
idea below as a summary:
- Call the collection of all critical volumes C.
- Call C1, C2, ... (generally Ci) the members of C.
- Each Ci ∈ C has a live number FE(Ci): the number of free
(unallocated) extents of that volume.
- Each Ci ∈ C has a fixed number RE(Ci): the number of extents
reserved for it.
Observe that FE(Ci) may be smaller than RE(Ci). E.g. a volume may have
1000 reserved extents (RE(Ci)) but only 500 free extents (FE(Ci)), at
which point it has more extents reserved than it can ever use.
Therefore, in our calculations we use the smaller of those two numbers
as the effective reserved extents ERE(Ci):
ERE(Ci) = min( FE(Ci), RE(Ci) )
The total number of effective reserved extents of the pool is then the
sum of the effective reserved extents over the collection C:
ERE(POOL) = ERE(C) = ∑ ERE(Ci)
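
To make the bookkeeping concrete, here is a minimal C sketch of the two
definitions above. The names (crit_vol, ere_vol, ere_pool) are mine for
illustration only, not anything that exists in dm-thin:

#include <stdint.h>

struct crit_vol {
        uint64_t fe;  /* FE(Ci): live count of free (unallocated) extents */
        uint64_t re;  /* RE(Ci): fixed count of reserved extents */
};

/* ERE(Ci) = min(FE(Ci), RE(Ci)): a reservation is only effective up
 * to the number of extents the volume can still allocate. */
static uint64_t ere_vol(const struct crit_vol *v)
{
        return v->fe < v->re ? v->fe : v->re;
}

/* ERE(POOL) = sum of ERE(Ci) over all members of C. */
static uint64_t ere_pool(const struct crit_vol *c, unsigned int n)
{
        uint64_t sum = 0;
        unsigned int i;

        for (i = 0; i < n; i++)
                sum += ere_vol(&c[i]);
        return sum;
}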
Note that this number depends on the live number of free extents of
each critical volume Ci.
Now the critical inequality, evaluated each time a chunk is requested
for allocation, is:
ERE(POOL) < FE(POOL)
As long as the number of effective reserved extents of the entire pool
is smaller than the number of free extents of the entire pool, nothing
is the matter.
However, when

ERE(POOL) >= FE(POOL)

we enter a critical 'fullness' situation.
This may be likened to a 95% threshold.
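
Concretely, I imagine the allocator evaluating something like the
following on every chunk request (again only a sketch; the function
name and the requester_in_c flag are hypothetical):

#include <stdbool.h>
#include <stdint.h>

/* Re-evaluated on every chunk-allocation request, against the live
 * pool counters.  Below the threshold everyone is serviced; in the
 * fullness state only members of C may still consume free extents. */
static bool may_allocate_chunk(bool requester_in_c,
                               uint64_t ere, uint64_t fe)
{
        if (ere < fe)           /* ERE(POOL) < FE(POOL) */
                return true;    /* nothing is the matter */
        return requester_in_c;  /* deny regular volumes and
                                   snapshot CoW allocations */
}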
At this point you start 'randomly' denying allocation requests: not
only write requests for regular volumes and writable snapshots, but
also the CoW allocation requests of regular read-only snapshots. This
would of course immediately invalidate those snapshots if the denied
CoW was triggered by a write to a critical volume (Ci) that is itself
still being serviced.
If you say this is not much different from replacing those volumes
with error targets, I would agree.
As long as the pool remains in this fullness state, the 'denial of
service' is not really random but consistent. However, if something
were done to free space, e.g. dropping a snapshot, then writes would
resume afterwards.