On Tue, 4 Aug 2020 11:14:10 +0200 SeongJae Park <sjpark@xxxxxxxxxx> wrote:

> From: SeongJae Park <sjpark@xxxxxxxxx>
>
> This commit implements a debugfs interface for DAMON.  It works for the
> virtual address spaces monitoring.

[...]

> +
> +#define targetid_is_pid(ctx)	\
> +	(ctx->target_valid == kdamond_vm_target_valid)
> +

[...]

> +
> +static ssize_t debugfs_target_ids_write(struct file *file,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char *kbuf;
> +	unsigned long *targets;
> +	ssize_t nr_targets;
> +	ssize_t ret = count;
> +	struct damon_target *target;
> +	int i;
> +	int err;
> +
> +	kbuf = user_input_str(buf, count, ppos);
> +	if (IS_ERR(kbuf))
> +		return PTR_ERR(kbuf);
> +
> +	targets = str_to_target_ids(kbuf, ret, &nr_targets);
> +	if (!targets) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (targetid_is_pid(ctx)) {
> +		for (i = 0; i < nr_targets; i++)
> +			targets[i] = (unsigned long)find_get_pid(
> +					(int)targets[i]);
> +	}
> +
> +	mutex_lock(&ctx->kdamond_lock);
> +	if (ctx->kdamond) {
> +		ret = -EINVAL;
> +		goto unlock_out;
> +	}
> +
> +	if (targetid_is_pid(ctx)) {
> +		damon_for_each_target(target, ctx)
> +			put_pid((struct pid *)target->id);

If non-pid target ids were set before via the kernel API, this will cause a
problem.  Therefore, DAMON users should clean up their target ids properly.
However, I found that this can easily be missed.  Indeed, my new test code
missed the cleanup.  Moreover, it would be hard to do when concurrent DAMON
users exist.

One straightforward fix would be making 'damon_set_targets()' remember the
last target id type and do 'put_pid()' there if the last target id type was
pid, instead of doing it here.  This would work, but it couples the address
space independent part with the dependent part.

Alternatively, we could add another callback for cleanup and let the debugfs
code register a function doing 'put_pid()' and removal of the targets as the
callback.  This approach allows the address space independent part to remain
independent.

I will fix this problem with the second approach in the next spin.


Thanks,
SeongJae Park
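For illustration, here is a minimal userspace sketch of the cleanup-callback idea.  All names below ('damon_ctx', 'damon_target', 'damon_set_targets', 'debugfs_cleanup', 'cleaned') are stand-ins modeling the design, not the actual kernel code; the real callback would call put_pid() on each target id instead of bumping a counter:

```c
/* Sketch: the core stores an address-space-dependent cleanup hook and
 * invokes it before replacing targets, so the independent part never
 * needs to know about pids.  Illustrative names throughout. */
#include <assert.h>
#include <stdlib.h>

struct damon_target {
	unsigned long id;
	struct damon_target *next;
};

struct damon_ctx {
	struct damon_target *targets;
	/* registered by the user of the context; the debugfs code would
	 * register a routine doing put_pid() on each target id */
	void (*cleanup)(struct damon_ctx *ctx);
};

/* Counts how many targets the registered cleanup released. */
static int cleaned;

static void debugfs_cleanup(struct damon_ctx *ctx)
{
	struct damon_target *t = ctx->targets;

	while (t) {
		struct damon_target *next = t->next;

		/* real code: put_pid((struct pid *)t->id); */
		cleaned++;
		free(t);
		t = next;
	}
	ctx->targets = NULL;
}

/* Address-space-independent core: releases old targets via the callback,
 * then installs the new ids, without any pid-specific knowledge. */
static void damon_set_targets(struct damon_ctx *ctx,
			      const unsigned long *ids, int nr)
{
	int i;

	if (ctx->cleanup)
		ctx->cleanup(ctx);
	for (i = 0; i < nr; i++) {
		struct damon_target *t = malloc(sizeof(*t));

		t->id = ids[i];
		t->next = ctx->targets;
		ctx->targets = t;
	}
}
```

With this split, only the registered callback is coupled to the target id type; the core just calls whatever hook was set.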