On Wed, Aug 29, 2018 at 1:29 PM Robert Moskowitz <rgm@xxxxxxxxxxxxxxx> wrote:
>
> On 08/29/2018 08:12 AM, Peter Robinson wrote:
> >>> On 08/21/2018 11:40 AM, Nicolas Chauvet wrote:
> >>>> Hi there,
> >>>>
> >>>> While testing upstream kernels on some devices, I recently
> >>>> discovered that suspend is broken on "distro kernels" (not only
> >>>> the Fedora kernel, but Ubuntu's as well).
> >>>
> >>> On the Fedora users list we have had a terrible time with suspend on
> >>> F28. 4.17.3 was OK, but suspend broke with 4.17.4 and was not, for
> >>> the most part, fixed until 4.17.11. Even with 4.17.14 a number of us
> >>> still report that suspend fails the first time, with the system
> >>> immediately restarting; only after you unlock and suspend again does
> >>> it 'take'.
> >>
> >> And we have been getting a weekly kernel update on F28 64 for some
> >> time now. I cannot remember another Fedora release where the kernel
> >> updates have come so frequently for so long.
> >
> > Well, there have been one or two major vulnerabilities this year
> > (Spectre/Meltdown) that are a major reason for some of these, so it's
> > not unexpected, at least for those who even vaguely follow the
> > security industry.
>
> Not my area of the security industry; protocol design takes up all my
> cycles. But I did hear of these.
>
> How would suspend work with no swap partition? I have to spend a little
> time working out how to use gparted to add a swap partition to a SATA HD.

Suspend should be just fine (S3, s2idle, etc.); I think you mean
hibernate, which writes the contents of RAM out to storage so the
device can be powered off completely.
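
A quick way to check what a given board's kernel actually supports is
the standard sysfs interface; the exact output varies per device, and
the tee invocation below is just one way of writing to it:

    # Sleep states the kernel can enter ("freeze" is s2idle,
    # "mem" is typically S3 suspend-to-RAM):
    cat /sys/power/state

    # Which variant "mem" maps to (s2idle, shallow, deep):
    cat /sys/power/mem_sleep

    # Trigger a suspend directly (systemctl suspend also works):
    echo mem | sudo tee /sys/power/state

None of that touches swap, which is why suspend works fine on a
swapless install.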
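
If you do decide you want hibernate later, a swap file is often easier
than repartitioning the SATA drive with gparted. A rough sketch only:
the 2G size and the /swapfile path are examples, and an ext4 root is
assumed (swap files on btrfs need extra care):

    # Create and enable a swap file at least as large as RAM:
    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Make it persistent across reboots:
    echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab

Note that actually hibernating to a swap file also requires resume=
(and, for a file rather than a partition, resume_offset=) on the
kernel command line; plain suspend needs none of this.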