That's something I asked for and that Slava and I discussed. In a complex
system it just makes things tidier, and things explode earlier when they're
not working as planned.

On Thu, Feb 27, 2025 at 1:50 PM David Howells <dhowells@xxxxxxxxxx> wrote:
>
> Viacheslav Dubeyko <slava@xxxxxxxxxxx> wrote:
>
> > This patch fixes the issue by locking
> > client->osdc.lock and client->monc.mutex before
> > checking client->osdc.osdmap and
> > client->monc.monmap.
>
> You've also added clearing of a bunch of pointers in the destruction and
> error handling paths (can I recommend you mention it in the commit
> message?). Is that a "just in case" thing? It doesn't look like the client
> can get resurrected afterwards, but I may have missed something. If it's
> not just in case, does the access and clearing of the pointers need
> wrapping in the appropriate lock?
>
> > --- a/net/ceph/debugfs.c
> > +++ b/net/ceph/debugfs.c
> > @@ -36,18 +36,20 @@ static int monmap_show(struct seq_file *s, void *p)
> >  	int i;
> >  	struct ceph_client *client = s->private;
> >
> > -	if (client->monc.monmap == NULL)
> > -		return 0;
> > -
> > -	seq_printf(s, "epoch %d\n", client->monc.monmap->epoch);
> > -	for (i = 0; i < client->monc.monmap->num_mon; i++) {
> > -		struct ceph_entity_inst *inst =
> > -			&client->monc.monmap->mon_inst[i];
> > -
> > -		seq_printf(s, "\t%s%lld\t%s\n",
> > -			   ENTITY_NAME(inst->name),
> > -			   ceph_pr_addr(&inst->addr));
> > +	mutex_lock(&client->monc.mutex);
> > +	if (client->monc.monmap) {
> > +		seq_printf(s, "epoch %d\n", client->monc.monmap->epoch);
> > +		for (i = 0; i < client->monc.monmap->num_mon; i++) {
> > +			struct ceph_entity_inst *inst =
> > +				&client->monc.monmap->mon_inst[i];
> > +
> > +			seq_printf(s, "\t%s%lld\t%s\n",
> > +				   ENTITY_NAME(inst->name),
> > +				   ceph_pr_addr(&inst->addr));
> > +		}
> >  	}
> > +	mutex_unlock(&client->monc.mutex);
> > +
> >  	return 0;
> >  }
>
> You might want to look at using RCU for this (though not necessarily as
> part of this fix).
>
> Apart from that:
>
> Reviewed-by: David Howells <dhowells@xxxxxxxxxx>
>
> David
>
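
[Editorial note, for illustration only: a minimal sketch of what the RCU
suggestion above could look like for the monmap reader. It assumes
client->monc.monmap were converted to an RCU-managed (__rcu) pointer,
published with rcu_assign_pointer() and freed with kfree_rcu() whenever it
is replaced or torn down; none of that is in the posted patch.
ENTITY_NAME() and ceph_pr_addr() are the existing helpers used in the diff.]

/*
 * Sketch only, not a tested patch: assumes ceph_mon_client::monmap is an
 * __rcu pointer whose writers use rcu_assign_pointer()/kfree_rcu().
 */
static int monmap_show(struct seq_file *s, void *p)
{
	struct ceph_client *client = s->private;
	struct ceph_monmap *monmap;
	int i;

	rcu_read_lock();
	monmap = rcu_dereference(client->monc.monmap);
	if (monmap) {
		seq_printf(s, "epoch %d\n", monmap->epoch);
		for (i = 0; i < monmap->num_mon; i++) {
			struct ceph_entity_inst *inst = &monmap->mon_inst[i];

			seq_printf(s, "\t%s%lld\t%s\n",
				   ENTITY_NAME(inst->name),
				   ceph_pr_addr(&inst->addr));
		}
	}
	rcu_read_unlock();

	return 0;
}

[With this shape the debugfs reader never blocks on monc.mutex, at the cost
of requiring every writer of the monmap pointer to go through the RCU
publish/free protocol.]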