Re: [PATCH RFC v1 01/01] dm-lightnvm: An open FTL for open firmware SSDs

On Fri, Mar 21 2014 at  2:32am -0400,
Matias Bjørling <m@xxxxxxxxxxx> wrote:

> LightNVM implements the internal logic of an SSD within the host system.
> This includes logic such as translation tables for logical to physical
> address translation, garbage collection and wear-leveling.
> 
> It is designed to be used either standalone or with a LightNVM
> compatible firmware. If used standalone, NVM memory can be simulated
> by passing timings to the dm target table. If used with a LightNVM
> compatible device, the device will be queried upon initialization for
> the relevant values.
> 
> The last part is still in progress and a fully working prototype will be
> presented in upcoming patches.
> 
> The following people contributed to making this possible:
> 
> Aviad Zuck <aviadzuc@xxxxxxxxx>
> Jesper Madsen <jmad@xxxxxx>
> 
> Signed-off-by: Matias Bjorling <m@xxxxxxxxxxx>
...
> diff --git a/drivers/md/lightnvm/core.c b/drivers/md/lightnvm/core.c
> new file mode 100644
> index 0000000..113fde9
> --- /dev/null
> +++ b/drivers/md/lightnvm/core.c
> @@ -0,0 +1,705 @@
> +#include "lightnvm.h"
> +
> +/* alloc pbd, but also decorate it with bio */
> +static struct per_bio_data *alloc_init_pbd(struct nvmd *nvmd, struct bio *bio)
> +{
> +	struct per_bio_data *pb = mempool_alloc(nvmd->per_bio_pool, GFP_NOIO);
> +
> +	if (!pb) {
> +		DMERR("Couldn't allocate per_bio_data");
> +		return NULL;
> +	}
> +
> +	pb->bi_end_io = bio->bi_end_io;
> +	pb->bi_private = bio->bi_private;
> +
> +	bio->bi_private = pb;
> +
> +	return pb;
> +}
> +
> +static void free_pbd(struct nvmd *nvmd, struct per_bio_data *pb)
> +{
> +	mempool_free(pb, nvmd->per_bio_pool);
> +}
> +
> +/* bio to be stripped from the pbd structure */
> +static void exit_pbd(struct per_bio_data *pb, struct bio *bio)
> +{
> +	bio->bi_private = pb->bi_private;
> +	bio->bi_end_io = pb->bi_end_io;
> +}
> +

Hi Matias,

This looks like it'll be very interesting!  But I won't have time to do
a proper review of this code for ~1.5 weeks (traveling early next week
and then need to finish some high priority work on dm-thin once I'm
back).

But a couple quick things I noticed:

1) you don't need to roll your own per-bio-data allocation code any
more.  The core block layer provides per_bio_data now.

And the DM targets have been converted to make use of it.  See callers
of dm_per_bio_data() and how the associated targets set
ti->per_bio_data_size.
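
Roughly, the conversion would look something like this (a sketch only;
everything except ti->per_bio_data_size and dm_per_bio_data() is named
after your existing code and may not match what you end up with):

```c
/* In the target's ctr: tell DM how much per-bio data to reserve. */
ti->per_bio_data_size = sizeof(struct per_bio_data);

/*
 * In the map function: no mempool allocation needed; the block layer
 * has already reserved the space alongside the bio, so this cannot
 * fail and alloc_init_pbd()/free_pbd() go away entirely.
 */
static int lightnvm_map(struct dm_target *ti, struct bio *bio)
{
	struct per_bio_data *pb =
		dm_per_bio_data(bio, sizeof(struct per_bio_data));

	pb->bi_end_io = bio->bi_end_io;
	pb->bi_private = bio->bi_private;
	bio->bi_private = pb;
	...
}
```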

2) Also, if you're chaining bi_end_io (like it appears you're doing)
you'll definitely need to call atomic_inc(&bio->bi_remaining); after you
restore bio->bi_end_io.  This is a new requirement of the 3.14 kernel
(due to the block core's immutable biovec changes).
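
Concretely, in your completion path it would be along these lines
(again a sketch; the endio function name is hypothetical):

```c
static void lightnvm_endio(struct bio *bio, int error)
{
	struct per_bio_data *pb =
		dm_per_bio_data(bio, sizeof(struct per_bio_data));

	/* Restore the chained endio/private saved in the map function. */
	bio->bi_end_io = pb->bi_end_io;
	bio->bi_private = pb->bi_private;

	/*
	 * Immutable biovec change (3.14): bio_endio() decrements
	 * bio->bi_remaining, so account for the extra endio call
	 * before handing the bio back up the chain.
	 */
	atomic_inc(&bio->bi_remaining);

	bio_endio(bio, error);
}
```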

Please sort these issues out, re-test on 3.14, and post v2, thanks!
Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
