On Tue, Aug 28, 2018 at 07:40:25PM -0700, Andi Kleen wrote:
> On Tue, Aug 28, 2018 at 09:54:20AM +0200, Carlos Maiolino wrote:
> > > 
> > > Ok, so the blame is assigned, but the question is still how to avoid the
> > > warning:
> > 
> > I'd say, unless you are using a volume manager that is doing something weird,
> > which essentially LVM did by keep writing to RO volumes for snapshots, you
> 
> I don't use any snapshots.
> 
> > should avoid the warning by checking why your device went into RO mode and
> > fix it.
> 
> You seem to be very confused about the purpose of kernel warnings.
> 
> Users should never be able to cause warnings by any action.
> It's always a software bug of some sort.

I don't know why you're trying to pick a fight with Carlos, Andi.
Neither the warning nor whatever is triggering the device to go RO has
anything to do with XFS, so taking potshots at the messenger and those
trying to help you is out of line. We're just trying to get enough
information to be able to point you at the right people to get your
issue fixed.

To do that, there's /one/ question we need answered, a question both
Eric and Carlos have already asked you: how is the block device being
changed to RO while a RW filesystem is mounted on it?

FYI, the warning is /trivial/ to provoke manually by turning a block
device read-only under an active filesystem with the blockdev command:

root@test4~# blockdev --setro /dev/vdc
[515320.769159] ------------[ cut here ]------------
[515320.770078] generic_make_request: Trying to write to read-only block-device vdc (partno 0)
[515320.771611] WARNING: CPU: 11 PID: 8173 at block/blk-core.c:2171 generic_make_request_checks+0x308/0x4a0
[515320.773349] CPU: 11 PID: 8173 Comm: xfs_io Not tainted 4.18.0-dgc+ #648
[515320.774577] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.1-1 04/01/2014
[515320.776122] RIP: 0010:generic_make_request_checks+0x308/0x4a0
[515320.777187] Code: 6c 03 00 00 48 8d 74 24 08 48 89 df c6 05 6c de fa 00 01 e8 ea 91 01 00 48 c7 c7 10 90 30 82 48 89 c6 44 89 ea e8 28 3d 9c ff <0f> 0b 48 8b 43 08 e9 b5 fd ff ff 48 8b 53 28 49 03 3
[515320.780642] RSP: 0018:ffffc90008e0faf8 EFLAGS: 00010282
[515320.781617] RAX: 0000000000000000 RBX: ffff880829db8180 RCX: 0000000000000006
[515320.782934] RDX: 0000000000000007 RSI: 0000000000000082 RDI: ffff88083fd15550
[515320.784272] RBP: ffff88023fab2000 R08: ffffffff8183f550 R09: 000000000000004e
[515320.785595] R10: ffffc90008e0fbc8 R11: ffffffff82e5c9ee R12: 0000000000001000
[515320.786910] R13: 0000000000000000 R14: ffffc90008e0fc90 R15: ffff880828cba180
[515320.788241] FS: 00007f6beac60840(0000) GS:ffff88083fd00000(0000) knlGS:0000000000000000
[515320.789738] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[515320.790802] CR2: 0000559f72e98000 CR3: 0000000829f2f005 CR4: 0000000000060ee0
[515320.792124] Call Trace:
[515320.792649]  ? blk_queue_enter+0x212/0x240
[515320.793423]  generic_make_request+0x78/0x450
[515320.794231]  ? iov_iter_get_pages+0xbe/0x2a0
[515320.795035]  ? submit_bio+0x6d/0x120
[515320.795714]  submit_bio+0x6d/0x120
[515320.796403]  iomap_dio_bio_actor+0x1b5/0x3a0
[515320.797218]  ? iomap_page_release.part.29+0x40/0x40
[515320.798136]  iomap_apply+0xb0/0x130
[515320.798803]  iomap_dio_rw+0x2a6/0x3c0
[515320.799504]  ? iomap_page_release.part.29+0x40/0x40
[515320.800458]  ? xfs_file_dio_aio_write+0x117/0x2e0
[515320.801345]  xfs_file_dio_aio_write+0x117/0x2e0
[515320.802205]  xfs_file_write_iter+0x83/0xb0
[515320.802988]  __vfs_write+0x109/0x190
[515320.803695]  vfs_write+0xb6/0x180
[515320.804535]  ksys_pwrite64+0x71/0x90
[515320.805212]  do_syscall_64+0x5a/0x180
[515320.805910]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[515320.806852] RIP: 0033:0x7f6bebd5e024

That doesn't happen on a normal system under normal operation, though,
so we really need to know where it is coming from. It's likely
something in userspace on your system is doing it, but until we know
the source it is not clear where the incorrect behaviour lies and what
may need fixing.

So, can you please find out what is changing the block device mode to
RO for us?
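If nothing obvious jumps out, one way to catch whatever is doing it in
the act is to trace the kernel side of the BLKROSET ioctl, which is
what blockdev --setro issues. A rough sketch, assuming your kernel has
kprobe events enabled and debugfs mounted (the "blk_setro" event name
is arbitrary, and it's worth checking that set_device_ro, the helper
BLKROSET ends up calling, is in /proc/kallsyms on your kernel first):

# cd /sys/kernel/debug/tracing
# echo 'p:blk_setro set_device_ro' >> kprobe_events
# echo 1 > events/kprobes/blk_setro/enable
# cat trace_pipe

Every hit in trace_pipe is prefixed with the command name and PID of
the task that flipped the device, which should point straight at the
culprit. If it turns out to be an in-kernel caller rather than an
ioctl from userspace, adding a second probe on set_disk_ro() would
catch users like dm and md as well.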
Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx