On Mon, Apr 4, 2011 at 10:26 AM, Frank Hofmann <frank.hofmann@xxxxxxxxxx> wrote:
>
>
> On Mon, 4 Apr 2011, Andrei Warkentin wrote:
>
>> On Mon, Apr 4, 2011 at 8:27 AM, Andrei Warkentin <andreiw@xxxxxxxxxxxx> wrote:
>>>
>>> On Mon, Apr 4, 2011 at 9:00 AM, Andrei Warkentin <andreiw@xxxxxxxxxxxx> wrote:
>>>>
>>>> This resolves the deadlock issue with suspend. There is no
>>>> need to claim the host before the remove op.
>>>>
>>>> Signed-off-by: Andrei Warkentin <andreiw@xxxxxxxxxxxx>
>>>
>>> Frank,
>>>
>>> Can you try this out? I think this fixes it.
>>>
>>> Ohad,
>>>
>>> I think this means we can take
>>> 1c8cf9c997a4a6b36e907c7ede5f048aeaab1644 out (mmc: sdio: fix SDIO
>>> suspend/resume regression). What do you think?
>>>
>>> Thanks,
>>> A
>>>
>>
>> Ohad, never mind.
>>
>> I think there is a bigger issue at stake here for removable cards. If
>> you have a mounted file system, the usage count for mmc_blk will never
>> drop far enough to remove the device!
>>
>> I think the removal path in block.c needs to change somewhat. Removed
>> MDs need to clear devidx and be put on an "orphan list", from which
>> they will remove themselves once their usage count drops to zero. On
>> re-probe, the orphan list needs to be scanned for an already-allocated
>> devidx, and if one is found, that MD should be reused.
>> I'll see if I can put something together.
>>
>> A
>>
>
> Just to clarify: do you want a test result without
> 1c8cf9c997a4a6b36e907c7ede5f048aeaab1644, or is it better to wait for now?
>
> FrankH.
>

I don't think there is a point. For example, with mounted media, if
userspace unmounts the filesystem on suspend, then the device will be
removed successfully. For a root fs on a removable card, the mmcblk MD
will never be torn down, because mmc_blk_release will never be called
enough times (the fs stays mounted).

Is the card actually an external card for you, or an eMMC? You could
just get away with unsafe resume, I suppose...

A
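
For illustration, here is a minimal standalone C sketch of the orphan-list
scheme Andrei proposes above. It is a userspace model under stated
assumptions, not actual drivers/mmc/card/block.c code: the names
(md_remove, md_release, md_probe, orphan_unlink) and the struct fields are
hypothetical, and the locking a real implementation would need around the
usage count and the list is elided.

/* Userspace model of the proposed orphan-list removal path.
 * All names are hypothetical, locking is elided. */
#include <stdlib.h>

struct mmc_md {
    int devidx;           /* slot in the shared devidx space */
    int usage;            /* open count; stays >0 while an fs is mounted */
    int removed;          /* card is gone; waiting for the last release */
    struct mmc_md *next;  /* link on the orphan list */
};

static struct mmc_md *orphans;  /* MDs whose card was pulled while busy */

static void orphan_unlink(struct mmc_md *md)
{
    struct mmc_md **pp;

    for (pp = &orphans; *pp; pp = &(*pp)->next) {
        if (*pp == md) {
            *pp = md->next;
            return;
        }
    }
}

/* Card removal: a busy MD is orphaned instead of freed. */
static void md_remove(struct mmc_md *md)
{
    if (md->usage == 0) {
        free(md);
        return;
    }
    md->removed = 1;
    md->next = orphans;
    orphans = md;
}

/* Last close (fs unmounted): an orphaned MD frees itself. */
static void md_release(struct mmc_md *md)
{
    if (--md->usage == 0 && md->removed) {
        orphan_unlink(md);
        free(md);
    }
}

/* Re-probe: reuse an orphan that still holds the wanted devidx. */
static struct mmc_md *md_probe(int devidx)
{
    struct mmc_md *md;

    for (md = orphans; md; md = md->next) {
        if (md->devidx == devidx) {
            orphan_unlink(md);
            md->removed = 0;
            return md;           /* reuse instead of reallocating */
        }
    }
    md = calloc(1, sizeof(*md));
    if (md)
        md->devidx = devidx;
    return md;
}

int main(void)
{
    struct mmc_md *md = md_probe(0);

    md->usage = 1;      /* e.g. a filesystem is mounted */
    md_remove(md);      /* card pulled: MD is orphaned, not freed */
    md_release(md);     /* fs unmounted: last put frees the orphan */
    return 0;
}

With this shape, pulling a card that still has a mounted filesystem leaves
the MD parked on the orphan list instead of blocking removal; the final
mmc_blk_release-style put frees it, and a re-insert can pick the same
devidx back up and reuse the MD.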