Re: staged dm_internal_{suspend, resume} related changes for wider review

On Fri, Jan 02 2015 at  5:56pm -0500,
Mikulas Patocka <mpatocka@xxxxxxxxxx> wrote:

> Hi, I'm back.
> 
> I looked at current internal suspend code in 3.19.
> 
> One thing that seems strange is this:
> 
> static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
> {
>         struct dm_table *map = NULL;
> 
>         if (dm_suspended_internally_md(md))
>                 return; /* nested internal suspend */
> ....
> static void __dm_internal_resume(struct mapped_device *md)
> {
>         if (!dm_suspended_internally_md(md))
>                 return; /* resume from nested internal suspend */
> 
> - the comments imply that the code supports nested internal suspend, while 
> in fact it doesn't. If the caller calls:
> * dm_internal_suspend_noflush
> * dm_internal_suspend_noflush
> * dm_internal_resume
> the device is not suspended while it should be.

Yes, very true, it should be.  But IIRC, given the current caller, not
being that precise doesn't actually hurt us in practice.
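To spell out the sequence against the 3.19 code quoted above (just an
illustrative trace of which branch each call takes, not a patch):

        dm_internal_suspend_noflush(md); /* sets DM_INTERNAL_SUSPEND_FLAG, suspends */
        dm_internal_suspend_noflush(md); /* flag already set -> early return */
        dm_internal_resume(md);          /* clears the flag, resumes the device */

So after the single resume the device is live again, even though one of
the two suspend holders still expects it to be suspended.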

> The "if" branch in __dm_internal_resume has nothing to do with nested 
> suspend, it's called if the caller calls dm_internal_resume without 
> dm_internal_suspend_noflush - that shouldn't happen, so there should be 
> WARN or BUG.

The first internal resume would've cleared the DM_INTERNAL_SUSPEND_FLAG,
etc.  So AFAICT the code in question does also cover multiple internal
suspends/resumes (albeit without the counter you've suggested).

But I did narrowly define "nested suspend" as an internal suspend that
arrives _after_ a normal suspend.

From the header of commit ffcc39364 ("dm: enhance internal suspend and
resume interface"):

    Both DM_SUSPEND_FLAG and DM_INTERNAL_SUSPEND_FLAG may be set if a device
    was already suspended when dm_internal_suspend_noflush() was called --
    this can be thought of as a "nested suspend".  A "nested suspend" can
    occur with legacy userspace dm-thin code that might suspend all active
    thin volumes before suspending the pool for resize.

It could be that, in reality, "nested suspend" should also cover the
case of multiple internal suspends.

> If you want to support multiple internal suspends (which is needed if one 
> table can have multiple dm-thin targets), you need a counter that counts 
> how many times the device was suspended - the counter doesn't have to be 
> atomic because the code already holds md->suspend_lock. If you don't want 
> to support multiple internal suspends, there should be at least WARN when 
> the caller attempts to do it.

I'll have a look.  Will need some time to revisit this line of work.
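
For reference, a rough sketch of what the counter-based variant might
look like (untested, just to capture the idea; internal_suspend_count is
a field name I'm making up here):

static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
{
        struct dm_table *map = NULL;

        /*
         * md->suspend_lock is held by the dm_internal_* callers, so a
         * plain counter is enough -- no atomics needed.
         */
        if (md->internal_suspend_count++)
                return; /* nested internal suspend */
....
static void __dm_internal_resume(struct mapped_device *md)
{
        /* Catch unbalanced resume calls instead of silently returning. */
        if (WARN_ON(!md->internal_suspend_count))
                return; /* resume without a matching internal suspend */

        if (--md->internal_suspend_count)
                return; /* resume from nested internal suspend */
....

The existing DM_INTERNAL_SUSPEND_FLAG would presumably still get set on
the first internal suspend and cleared on the last internal resume, so
dm_suspended_internally_md() keeps working for the other checks.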

FYI, the device-mapper-test-suite has fairly comprehensive test coverage
for all supported scenarios, e.g.:

# dmtest run --suite thin-provisioning -t /SuspendTests/
Loaded suite thin-provisioning
Started
SuspendTests
  nested_internal_suspend_using_inactive_table...PASS
  suspend_pool_active_thins_no_io...PASS
  suspend_pool_after_suspend_thin...PASS
  suspend_pool_concurrent_suspend...PASS
  suspend_pool_no_active_thins...PASS
  suspend_pool_no_thins...PASS
  suspend_pool_resume_thin...PASS
  suspend_pool_suspended_thin...PASS
  wait_on_bit_during_resume...PASS
  wait_on_bit_during_suspend...PASS

# dmtest run --suite thin-provisioning -n activate_thin_while_pool_suspended_fails
Loaded suite thin-provisioning
Started
CreationTests
  activate_thin_while_pool_suspended_fails...PASS

So any changes in this area need to be verified using dmts.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


