Re: qemu-arm: zram: mkfs.ext4 : Unable to handle kernel NULL pointer dereference at virtual address 00000140

On Wed, Jun 08, 2022 at 11:42:33AM +0900, Sergey Senozhatsky wrote:
> On (22/06/08 11:39), Sergey Senozhatsky wrote:
> > On (22/06/07 16:52), Minchan Kim wrote:
> > > > rootfs: https://oebuilds.tuxbuild.com/29zhlbEc3EWq2wod9Uy964Bp27q/images/am57xx-evm/rpb-console-image-lkft-am57xx-evm-20220601222434.rootfs.ext4.gz
> > > > kernel: https://builds.tuxbuild.com/29zhqJJizU2Y7Ka7ArhryUOrNDC/zImage
> > > > 
> > > > Boot command,
> > > >  /usr/bin/qemu-system-aarch64 -cpu host,aarch64=off -machine
> > > > virt-2.10,accel=kvm -nographic -net
> > > > nic,model=virtio,macaddr=BA:DD:AD:CC:09:04 -net tap -m 2048 -monitor
> > > > none -kernel kernel/zImage --append "console=ttyAMA0 root=/dev/vda rw"
> > > > -hda rootfs/rpb-console-image-lkft-am57xx-evm-20220601222434.rootfs.ext4
> > > > -m 4096 -smp 2
> > > > 
> > > > # cd /opt/kselftests/default-in-kernel/zram
> > > > # ./zram.sh
> > > > 
> > > > Allow me some time; I will try to bisect this problem.
> > > 
> > > Thanks for sharing the info. 
> > > 
> > > I managed to get your rootfs working with my local arm build
> > > based on the problematic git tip.
> > > However, I couldn't reproduce it.
> > > 
> > > I needed to build zsmalloc/zram as built-in instead of as modules.
> > > Could that be related? Hmm,
> > > 
> > > Yeah, it would be very helpful if you could help bisect it.
> > 
> > This looks like a NULL lock->name dereference in lockdep. I suspect
> > that somehow local_lock doesn't get .dep_map initialized. Maybe running
> > the kernel with CONFIG_DEBUG_LOCK_ALLOC would help us? Naresh, can you
> > help us with this?
> 
> Hmm, actually, hold on. mapping_area is per-CPU, so what happens if a CPU
> gets offlined and onlined again? I don't see us re-initializing mapping_area's
> local_lock via local_lock_init(&zs_map_area.lock) and so on.
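
The initialization pattern being discussed looks roughly like this (a
kernel-style sketch assuming a zsmalloc-like per-CPU mapping_area; the
non-lock fields and the map/unmap helpers are illustrative, not the exact
upstream code):

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct mapping_area {
	local_lock_t lock;	/* lockdep class/name live in here */
	char *vm_buf;		/* copy buffer for objects spanning pages */
	char *vm_addr;		/* address of the currently mapped object */
};

/*
 * Static initialization gives every CPU's copy of the lock a valid
 * lockdep map and name. Without this (or an explicit local_lock_init()
 * call), lockdep has nothing sane to dereference when the lock is taken
 * with CONFIG_DEBUG_LOCK_ALLOC enabled.
 */
static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
	.lock	= INIT_LOCAL_LOCK(lock),
};

static void *map_object(void)
{
	/* disables preemption (per-CPU spinlock on PREEMPT_RT) */
	local_lock(&zs_map_area.lock);
	/* ... set up the per-CPU mapping ... */
	return this_cpu_ptr(&zs_map_area)->vm_addr;
}

static void unmap_object(void)
{
	/* ... tear down the per-CPU mapping ... */
	local_unlock(&zs_map_area.lock);
}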

I am trying to understand the problem. AFAIK, the mapping_area is a
static per-CPU allocation, so in zs_cpu_down we never free the
mapping_area itself. Then why would we need to reinitialize the local
lock?
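
If re-initialization on CPU online were actually needed, the natural place
would be the zsmalloc CPU hotplug callbacks. Below is a simplified sketch of
that path, reusing the zs_map_area definition from the sketch above; the
buffer size and the cpuhp state/name string mirror the general pattern rather
than copying the upstream functions exactly. The point it illustrates: only
the copy buffer is allocated and freed across offline/online, while the
static per-CPU structure, including its statically initialized local_lock,
is never touched.

#include <linux/cpuhotplug.h>
#include <linux/init.h>
#include <linux/slab.h>

static int zs_cpu_prepare(unsigned int cpu)
{
	struct mapping_area *area = &per_cpu(zs_map_area, cpu);

	/* per-CPU copy buffer; the real code sizes this to the largest
	 * zsmalloc object rather than PAGE_SIZE */
	area->vm_buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	return area->vm_buf ? 0 : -ENOMEM;
}

static int zs_cpu_dead(unsigned int cpu)
{
	struct mapping_area *area = &per_cpu(zs_map_area, cpu);

	/* only the buffer goes away; the static struct (and its lock)
	 * survives the offline/online cycle */
	kfree(area->vm_buf);
	area->vm_buf = NULL;
	return 0;
}

static int __init zs_hotplug_init(void)
{
	/* if the lock did need re-initializing on each online, a
	 * local_lock_init() call in zs_cpu_prepare() would be the
	 * place for it */
	return cpuhp_setup_state(CPUHP_MM_ZS_PREPARE, "mm/zsmalloc:prepare",
				 zs_cpu_prepare, zs_cpu_dead);
}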


