On 5/10/22 3:58 PM, Ilya Dryomov wrote:
On Fri, May 6, 2022 at 5:22 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
Use the VM_BUG_ON_FOLIO macro to get more information when we hit
the BUG_ON, and continue the loop when seeing an incorrect non-write
opcode in writepages_finish().
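
For reference, with CONFIG_DEBUG_VM enabled, VM_BUG_ON_FOLIO() dumps
the folio state via dump_page() before hitting BUG(). A rough sketch of
the include/linux/mmdebug.h definition (not copied verbatim):

  #define VM_BUG_ON_FOLIO(cond, folio)                                 \
          do {                                                         \
                  if (unlikely(cond)) {                                \
                          dump_page(&folio->page,                      \
                                    "VM_BUG_ON_FOLIO(" __stringify(cond) ")"); \
                          BUG();                                       \
                  }                                                    \
          } while (0)

Note that with CONFIG_DEBUG_VM disabled it compiles down to a no-op,
unlike a plain BUG_ON().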
URL: https://tracker.ceph.com/issues/55421
Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
---
fs/ceph/addr.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 04a6c3f02f0c..63b7430e1ce6 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -85,7 +85,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 	if (folio_test_dirty(folio)) {
 		dout("%p dirty_folio %p idx %lu -- already dirty\n",
 		     mapping->host, folio, folio->index);
-		BUG_ON(!folio_get_private(folio));
+		VM_BUG_ON_FOLIO(!folio_get_private(folio), folio);
 		return false;
 	}
 
@@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 	 * Reference snap context in folio->private. Also set
 	 * PagePrivate so that we get invalidate_folio callback.
 	 */
-	BUG_ON(folio_get_private(folio));
+	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
 	folio_attach_private(folio, snapc);
 
 	return ceph_fscache_dirty_folio(mapping, folio);
@@ -733,8 +733,11 @@ static void writepages_finish(struct ceph_osd_request *req)
 
 	/* clean all pages */
 	for (i = 0; i < req->r_num_ops; i++) {
-		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE)
+		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE) {
+			pr_warn("%s incorrect op %d req %p index %d tid %llu\n",
+				__func__, req->r_ops[i].op, req, i, req->r_tid);
 			break;
+		}
 
 		osd_data = osd_req_op_extent_osd_data(req, i);
 		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
--
2.36.0.rc1
Hi Xiubo,

Since in this revision the loop isn't actually continued, the only
substantive change left seems to be the addition of the pr_warn call
before the break. I went ahead and folded this patch into "ceph: check
folio PG_private bit instead of folio->private" (which is now queued
up for 5.18-rc).
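
For the archives, the resulting loop in writepages_finish() should
look roughly like this (modulo any context from the folded commit):

	/* clean all pages */
	for (i = 0; i < req->r_num_ops; i++) {
		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE) {
			/* warn, but still stop at the first non-write op */
			pr_warn("%s incorrect op %d req %p index %d tid %llu\n",
				__func__, req->r_ops[i].op, req, i,
				req->r_tid);
			break;
		}

		osd_data = osd_req_op_extent_osd_data(req, i);
		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
		/* ... unlock and clean the pages ... */
	}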
Sure, Ilya.
Thanks!
-- Xiubo
Thanks,
Ilya