Re: [PATCH v4 08/18] tools/testing/nvdimm: add 'bio_delay' mechanism

On Sat, Dec 23, 2017 at 04:56:43PM -0800, Dan Williams wrote:
> In support of testing truncate colliding with dma add a mechanism that
> delays the completion of block I/O requests by a programmable number of
> seconds. This allows a truncate operation to be issued while page
> references are held for direct-I/O.
> 
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>

> @@ -387,4 +389,64 @@ union acpi_object * __wrap_acpi_evaluate_dsm(acpi_handle handle, const guid_t *g
>  }
>  EXPORT_SYMBOL(__wrap_acpi_evaluate_dsm);
>  
> +static DEFINE_SPINLOCK(bio_lock);
> +static struct bio *biolist;
> +int bio_do_queue;
> +
> +static void run_bio(struct work_struct *work)
> +{
> +	struct delayed_work *dw = container_of(work, typeof(*dw), work);
> +	struct bio *bio, *next;
> +
> +	pr_info("%s\n", __func__);

Did you mean to leave this print in, or was it part of your debugging while
developing?  I don't see any other prints in the rest of the nvdimm testing
code.

> +	spin_lock(&bio_lock);
> +	bio_do_queue = 0;
> +	bio = biolist;
> +	biolist = NULL;
> +	spin_unlock(&bio_lock);
> +
> +	while (bio) {
> +		next = bio->bi_next;
> +		bio->bi_next = NULL;
> +		bio_endio(bio);
> +		bio = next;
> +	}
> +	kfree(dw);
> +}
> +
> +void nfit_test_inject_bio_delay(int sec)
> +{
> +	struct delayed_work *dw = kzalloc(sizeof(*dw), GFP_KERNEL);
> +
> +	spin_lock(&bio_lock);
> +	if (!bio_do_queue) {
> +		pr_info("%s: %d seconds\n", __func__, sec);

Ditto with this print - did you mean to leave it in?

> +		INIT_DELAYED_WORK(dw, run_bio);
> +		bio_do_queue = 1;
> +		schedule_delayed_work(dw, sec * HZ);
> +		dw = NULL;

Why set dw = NULL here?  When bio_do_queue is already set we leak dw - was
this dw = NULL meant to allow a kfree(dw) after we get out of the if() (and
probably after we drop the spinlock)?
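
Something along these lines (untested sketch, and the kzalloc() could also
use a NULL check while we're here) would avoid the leak and keep the
kfree() out of the locked section:

```c
void nfit_test_inject_bio_delay(int sec)
{
	struct delayed_work *dw = kzalloc(sizeof(*dw), GFP_KERNEL);

	if (!dw)
		return;

	spin_lock(&bio_lock);
	if (!bio_do_queue) {
		INIT_DELAYED_WORK(dw, run_bio);
		bio_do_queue = 1;
		schedule_delayed_work(dw, sec * HZ);
		dw = NULL;	/* ownership passed to the workqueue */
	}
	spin_unlock(&bio_lock);
	kfree(dw);	/* no-op when the work was scheduled */
}
```

That keeps your dw = NULL ownership-transfer idiom but makes the
already-queued path actually free the allocation.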

> +	}
> +	spin_unlock(&bio_lock);
> +}
> +EXPORT_SYMBOL_GPL(nfit_test_inject_bio_delay);
> +

> diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
> index 7217b2b953b5..9362b01e9a8f 100644
> --- a/tools/testing/nvdimm/test/nfit.c
> +++ b/tools/testing/nvdimm/test/nfit.c
> @@ -872,6 +872,39 @@ static const struct attribute_group *nfit_test_dimm_attribute_groups[] = {
>  	NULL,
>  };
>  
> +static ssize_t bio_delay_show(struct device_driver *drv, char *buf)
> +{
> +	return sprintf(buf, "0\n");
> +}

It doesn't seem like this _show() routine adds much.  We could have it print
out the value of 'bio_do_queue' so we can see whether we are currently
queueing bios in a workqueue element, but that suffers pretty badly from a
TOCTOU race.

Otherwise we could just omit the _show() altogether and use
DRIVER_ATTR_WO(bio_delay).
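
If you do want to keep the _show(), something like this (untested; it would
need an extern declaration for bio_do_queue, which lives over in the acpi
wrapper file) would at least report the current state rather than a
hardcoded 0, with the TOCTOU caveat above:

```c
static ssize_t bio_delay_show(struct device_driver *drv, char *buf)
{
	/* racy snapshot of whether bios are currently being queued */
	return sprintf(buf, "%d\n", READ_ONCE(bio_do_queue));
}
```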

> +
> +static ssize_t bio_delay_store(struct device_driver *drv, const char *buf,
> +		size_t count)
> +{
> +	unsigned long delay;
> +	int rc = kstrtoul(buf, 0, &delay);
> +
> +	if (rc < 0)
> +		return rc;
> +
> +	nfit_test_inject_bio_delay(delay);
> +	return count;
> +}
> +DRIVER_ATTR_RW(bio_delay);

   DRIVER_ATTR_WO(bio_delay);  ?


