I've been busy, so I haven't been able to follow up.
The hanging issue is fixed! It was probably addressed by
commit a0725910f3e23569cdb88e1c726decc669ecc81a, which I believe
first appeared in 4.18.7.
I'm still having trouble with bcache devices (both the caching device
and the backing store) being recognized between reboots, but I'll start
a separate thread for that.
-Cameron
On 07/29/2018 11:45 AM, Cameron Berkenpas wrote:
Hello,
I'm back in town if you're still interested in sending me debug patch.
Thanks!
On 07/13/2018 08:15 AM, Cameron Berkenpas wrote:
I have 4.18-rc4 installed.
However, I'm going out of town for 2 weeks and won't be able to
touch this at all in the meantime.
Thanks for your help!
On 07/12/2018 08:47 AM, Cameron Berkenpas wrote:
Hello,
I can definitely compile 4.18-rc3, but perhaps you mean 4.18-rc4,
which is now the latest?
Anyway, the disk setups between these 2 machines are fairly similar
(and I even swapped RAID controllers between the systems to verify
it isn't a controller issue). However, I can elaborate on the
differences if you're interested.
Thanks!
-Cameron
On 07/12/2018 06:03 AM, Coly Li wrote:
On 2018/7/10 2:51 AM, Cameron Berkenpas wrote:
Some other details I just remembered.
Hi Cameron,
And I can trigger this hang whether the bcache device (i.e.,
/dev/bcache0) is formatted or not.
While re-attaching the cache device hangs, I can still mount the
filesystem... but any attempt to access the filesystem (e.g., ls)
hangs indefinitely.
While PPC64LE is PPC64 in little-endian mode, is it possible there are
some code paths that assume PPC is always big endian? I've heard
this is the case with some userspace software.
So far bcache does not support big endian, and I am working on that in
the meantime. In the kernel and the user-space tool, I don't see any
explicit big-endian code. So if PPC64LE behaves like x86-64, I will
treat this as a deadlock issue. But if you are sure this won't happen
on an x86-64 machine, then there must be some corner case that people
neglected before.
Thanks.
Coly Li
[snipped]