thin disk -- like overcommitted/virtual memory? (was Re: about the lying nature of thin)


 



Xen wrote:
You know, Mr. Patton made the interesting allusion that thin provisioning is designed to lie and is meant to lie, and I beg to differ.
----
   Isn't using a thin pool for disk space similar to using
virtual memory with a swap space that is smaller than the combined sizes of
all processes?
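
The kernel already tracks exactly this kind of over-commit for memory -- no
assumptions needed here, these fields are in any recent /proc:

    grep -E 'CommitLimit|Committed_AS' /proc/meminfo   # commit limit vs. total committed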

   I.e., administrators can choose whether to over-allocate swap or
paging-file space or to have it be a hard limit -- and forgive me
if I'm wrong, but isn't this configurable in /proc/sys/vm with the
over-commit parameters (among others)?
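
It is -- a quick look at those knobs (I'm fairly sure these are the right
names; the comments describe their documented behaviour):

    # 0 = heuristic overcommit (default), 1 = always overcommit,
    # 2 = strict accounting against CommitLimit
    cat /proc/sys/vm/overcommit_memory
    # percentage of RAM counted toward CommitLimit when in mode 2
    cat /proc/sys/vm/overcommit_ratio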

   Doesn't over-commit in the LVM space have checks and balances similar
to over-commit in the VM space?  Whether it does or doesn't, shouldn't
the reasoning be similar in how they can be controlled?
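
It does have some already -- dmeventd can monitor a thin pool and grow it
automatically once it crosses a threshold, via lvm.conf (the numbers below
are just example values, not a recommendation):

    activation {
        thin_pool_autoextend_threshold = 70   # act once the pool is 70% full
        thin_pool_autoextend_percent = 20     # grow it by 20% of its current size
    }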

   Regarding LVM overcommit -- does it matter (at least in the short
term) if that over-committed space is filled with "sparse" data files?
I mean, suppose I allocate space for astronomical bodies -- in some
areas/directions I might have very sparse usage, whereas towards the core of
a galaxy I might expect less sparse usage.  If a file system can be
successfully closed with no errors, doesn't that still mean it is
"integrous" -- even if its sparse files don't all have enough room to be
expanded?
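
For illustration, a sparse file can claim far more space than it occupies
(the file name and size here are made up):

    truncate -s 1T survey.img
    ls -lh survey.img    # apparent size: 1.0T
    du -h survey.img     # blocks actually allocated: next to nothing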

   Does it make sense to think about an OOTS (Out-Of-Thin-Space) daemon that
can be set up with priorities to reclaim space?
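
A very rough sketch of such a daemon, using only commands that exist today
(the pool name vg0/thinpool, the 90% threshold and the poll interval are made
up; "reclaim" here just means issuing discards, not evicting anything):

    while sleep 60; do
        used=$(lvs --noheadings -o data_percent vg0/thinpool | tr -d ' ')
        if [ "${used%.*}" -ge 90 ]; then
            logger -p daemon.warning "thin pool vg0/thinpool at ${used}% full"
            fstrim -a    # hand freed filesystem blocks back to the pool
        fi
    done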

I see 2 types of "quota" here, and I can see the metaphor being extended into
disk space: direct space, which is physically present, and "indirect or
*temporary* space", which you might try to reserve at the beginning of a job.
Your job could be configured to wait until the indirect space is available, or
die immediately.  Conceivably the indirect space is space on a robot-cartridge
retrieval system that has a huge amount of virtual space, but at the cost of
needing to be loaded before your job can run.

Extending that idea -- the indirect space could be configured as "high
priority space", meaning that once it is allocated it stays allocated *until*
the job completes (in other words the job would have a low chance of being
"evicted" by an OOTS daemon), whereas most "extended" space would have the
priority of "temporary space" -- with processes using large amounts of such
indirect space, and having a low expectation of quick completion, being high
on the OOTS daemon's list.
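
The "reserve it up front or die immediately" part can be approximated today --
a hypothetical sketch (path and size are made up):

    # Preallocate the job's working space on the thin-backed filesystem and
    # fail fast if the filesystem can't provide it.  Note: on a thin LV this
    # reserves filesystem blocks; the pool itself still allocates chunks only
    # as they are written.
    fallocate -l 500G /thinfs/job42.reserve || { echo "no space, aborting" >&2; exit 1; }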

Processes could also be willing to "give up memory and suspend" -- where,
when called, a handler could give back giga- or terabytes of memory
and save its state as needing to restart the last pass.
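
Translated into the thin-space analogy, a hypothetical sketch of a job that
hands back its bulk scratch space on request (the signal, paths and exit code
are all made up for illustration):

    cleanup_and_suspend() {
        echo "pass $PASS" > /scratch/job42/checkpoint   # remember where to resume
        rm -f /scratch/job42/intermediate.*             # give the bulk space back
        fstrim /scratch                                 # let the thin pool reclaim it
        exit 75                                         # restart the last pass later
    }
    trap cleanup_and_suspend USR1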

Lots of possibilities -- if LVM thin space is managed like virtual memory
space.  That means some outfits might choose to never over-allocate, while
others might allow some fraction of over-allocation.

   From how it sounds -- when you run out of thin space, what happens
now is that the OS keeps allocating more virtual space that has no backing
store (in memory or on disk)... with a notification buried in a system log
somewhere.
   On my own machine, I've seen >50% of memory returned after
sending a '3' to /proc/sys/vm/drop_caches -- maybe similar emergency measures
could help in the short term, with long-term handling being as flexible as
the VM policies.
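
For reference, that emergency knob is just:

    sync                                 # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes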

Does any of this sound sensible or desirable? How much effort is needed for how much 'bang'?


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


