[no subject]

Hi,

We are seeing the UBI errors below during boot.
Although they do not break any functionality, I am wondering
if there is a way to fix them.
We are using kernel 4.14 with UBI, with squashfs volumes mounted via
ubiblock, and with systemd.
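
In case it is useful, here is a minimal probe we can run after boot has
settled, to check whether one of the volume character devices is still
held busy. The /dev/ubi0_6 path is only an example picked from the
volumes that show up in the log below; it assumes the standard
/dev/ubiX_Y naming and should be adjusted for the actual board:

/*
 * Minimal debugging aid (our own, not from the kernel tree): open a UBI
 * volume character device read-only and report errno if that fails.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/ubi0_6";
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		printf("%s: open failed: %d (%s)\n",
		       path, -errno, strerror(errno));
		return 1;
	}
	printf("%s: opened read-only without error\n", path);
	close(fd);
	return 0;
}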

Has anybody experienced similar logs with UBI/squashfs and
figured out a way to avoid them?
It looks like these volumes are being opened twice, hence error -16
(-EBUSY, "device or resource busy").
Or are these logs expected because of squashfs or ubiblock?
Or do we need to add anything to the udev rules?
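
To illustrate the "opened twice" theory, here is a toy userspace model
of the kind of reference counting that, as far as I can tell, makes a
second conflicting open of a UBI volume fail with -EBUSY. It is only a
sketch of that idea, not the real ubi_open_volume() code:

#include <errno.h>
#include <stdio.h>

enum open_mode { MODE_SHARED, MODE_EXCLUSIVE };

struct vol_state {
	int shared_users;	/* e.g. ubiblock holding the volume read-only */
	int exclusive;		/* set while someone holds the volume exclusively */
};

/* Returns 0 on success, -EBUSY (-16) when the requested mode conflicts. */
static int try_open(struct vol_state *v, enum open_mode mode)
{
	if (mode == MODE_SHARED) {
		if (v->exclusive)
			return -EBUSY;
		v->shared_users++;
		return 0;
	}
	/* MODE_EXCLUSIVE: nobody else may have the volume open at all. */
	if (v->exclusive || v->shared_users)
		return -EBUSY;
	v->exclusive = 1;
	return 0;
}

int main(void)
{
	struct vol_state vol = { 0 };

	printf("first open (shared):     %d\n", try_open(&vol, MODE_SHARED));
	printf("second open (exclusive): %d\n", try_open(&vol, MODE_EXCLUSIVE));
	return 0;
}

If that reading is right, the question becomes which two boot-time
users (ubiblock, udev, or our own mount units) are racing to open the
same volumes.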

{
....
[  129.394789] ubi0 error: ubi_open_volume: cannot open device 0, volume 6, error -16
[  129.486498] ubi0 error: ubi_open_volume: cannot open device 0, volume 7, error -16
[  129.546582] ubi0 error: ubi_open_volume: cannot open device 0, volume 8, error -16
[  129.645014] ubi0 error: ubi_open_volume: cannot open device 0, volume 9, error -16
[  129.676456] ubi0 error: ubi_open_volume: cannot open device 0, volume 6, error -16
[  129.706655] ubi0 error: ubi_open_volume: cannot open device 0, volume 10, error -16
[  129.732740] ubi0 error: ubi_open_volume: cannot open device 0, volume 7, error -16
[  129.811111] ubi0 error: ubi_open_volume: cannot open device 0, volume 8, error -16
[  129.852308] ubi0 error: ubi_open_volume: cannot open device 0, volume 9, error -16
[  129.923429] ubi0 error: ubi_open_volume: cannot open device 0, volume 10, error -16

}

Thanks,
Pintu


