Re: [RFC 09/15] PM / Hibernate: user, implement user_ops writer

On 03/30/2010 12:09 AM, Rafael J. Wysocki wrote:
> On Monday 29 March 2010, Jiri Slaby wrote:
>> Probably the easiest solution is to revert it as noted above: a page is
>> taken from snapshot (with patches I have here the snapshot layer is only
>> told to "store next page" without returning a page to the caller), fed
>> through crypto layers as needed and finally given to chunk writer which
>> assembles PAGE_SIZE blocks from the chunks. Then whole pages of
>> compressed/encrypted data are given to the user or to in-kernel block io
>> by hibernate_io_ops->write_page. The in-kernel .write_page simply calls
>> swap_write_page (which in turn calls hib_bio_write_page while storing
>> swap sector entries).
> 
> That's fine if I understand correctly.

(It is in fact the same as what s2disk does, except that it works on a
single page at a time and the block io chaining is left to the block layer.)
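
To make the interface I keep referring to explicit, it is roughly the
following (a sketch only; the exact member types and any additional
callbacks in the patchset may well differ):

/* Rough sketch of the per-module ops; details may differ from the RFC. */
struct hibernate_io_ops {
        /*
         * Image writing: consume the next chunk of data and pass it
         * further down the chain.  The chunk writer at the bottom
         * assembles full PAGE_SIZE blocks before the user or the
         * in-kernel block io sees them.
         */
        int (*write_page)(void *buf, size_t len);
        /* Image reading (resume): the counterpart of the above. */
        int (*read_page)(void *buf, size_t len);
};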

>> Something very similar happens for fops->write, i.e. hibernate_io_ops->read_page.
> 
> You can't open /dev/snapshot for reading and writing at the same time, because
> that wouldn't make any sense.

Err, that is not what I meant to say. fops->read and
hibernate_io_ops->write_page are both involved solely in image writing,
as consumer and producer respectively. Vice versa for image reading.

> Now, during resume the image is not present in memory at all.  In fact, we

<snipped> -- we both understand the code the same way.

> Now, compression can happen in two places: while the image is created
> or after it has been created (current behavior).  In the latter case, the image
> pages need not be compressed in place, they may be compressed after being
> returned by snapshot_read_next(), in a temporary buffer (that's how s2disk
> does it).  So you can arrange things like this:
> 
> create image
> repeat:
> - snapshot_read_next() -> buffer
> - if buffer is full, compress it (possibly encrypt it) and write the result to
>   the storage
> 
> This way you'd just avoid all of the complications and I fail to see any
> drawbacks.

Yes, that was the intention, except that I wanted snapshot_read_next to
become something like snapshot_write_next_page, which would call
hibernate_io_ops->write_page(buf, len) somewhere deep inside.
hibernate_io_ops stands for the first module in the chain which accepts
the page and feeds it further. E.g. with hibernate_io_ops being
compress_ops, the chain may look like compress_ops->write_page =>
encrypt_ops->write_page => swap_ops->write_page.
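
To make the chaining concrete, one module in the chain could look
roughly like this (again only a sketch; next_ops and compress_chunk()
are made up for illustration, the real code would go through the crypto
API and have proper setup/teardown and error paths):

/* Sketch of one chain member, using the ops layout sketched above. */
static struct hibernate_io_ops *next_ops;       /* e.g. encrypt_ops or swap_ops */

static int compress_write_page(void *buf, size_t len)
{
        void *out;
        size_t out_len;
        int ret;

        /* Placeholder for the real compression step. */
        ret = compress_chunk(buf, len, &out, &out_len);
        if (ret)
                return ret;

        /*
         * Hand the result to the next module.  It need not be PAGE_SIZE;
         * the chunk writer at the bottom reassembles full blocks before
         * the real block io (or the user) sees them.
         */
        return next_ops->write_page(out, out_len);
}

static struct hibernate_io_ops compress_ops = {
        .write_page = compress_write_page,
};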

But if you want to preserve snapshot_read_next, then it would look like:

repeat:
  snapshot_read_next() -> buffer, len = PAGE_SIZE
  compress_ops->write_page(buffer, len) =>
    encrypt_ops->write_page(buffer, len) =>
    swap_ops->write_page(buffer, len)

instead of:

repeat:
  snapshot_write_next_page()

In this case its work is to fetch the next page and call the appropriate
.write_page itself.
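
So the core of the saving loop in the snapshot_write_next_page() variant
would boil down to something like this (just a sketch; snapshot_next_page()
is a made-up stand-in for the internal "give me the next image page" step,
and the end-of-image/error details are omitted):

static int save_image_chain(struct hibernate_io_ops *ops)
{
        void *page;
        int ret;

        /* ops is the head of the chain, e.g. compress_ops. */
        while ((page = snapshot_next_page()) != NULL) {
                ret = ops->write_page(page, PAGE_SIZE);
                if (ret)
                        return ret;
        }

        return 0;
}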

> Now, an attractive thing would be to compress data while creating the image
> and that may be done in the following way:

I wouldn't go for this. We should balance I/O and CPU, and that can only
be done while writing the image, so to speak. OTOH I must admit I have
no numbers to back this up.

thanks,
-- 
js
