Hi,

Recently I've been benchmarking some different hardware crypto accelerators, and many of them appear to be tuned toward largish requests (up to 16k) with a given key and a base IV. I've created a very simple patch for dm-crypt that uses PAGE_SIZE blocks to aid the driver performance testing, but I lack the cryptographic understanding to determine whether there is significant exposure in allowing a dm-crypt device to use a block size that exceeds the sector size.

For instance, I was thinking about allowing a block size that is a multiple of the sector size, rather than just one sector. (I was picturing adding an argument to the end of the table giving the number of sectors to use as the block size, with a default of "1"; there's a rough sketch of what that parsing might look like after the patch below.) But I'm not confident that I understand the full impact (though it sure is nice to more fully utilize the available hardware :).

So:

1. Does anyone know if there would be significant exposure of the plaintext if dm-crypt used larger block sizes?

2. Would an optional, configurable block size (up to PAGE_SIZE) be of interest? If so, would it make more sense as a per-target parameter or a compile-time constant?

I've attached the basic patch, without any ability to configure the value, to make what I'm proposing more concrete. It simply changes the dm-crypt block size from the sector size to PAGE_SIZE. Any and all thoughts on the viability of the change or its cryptographic impact would be appreciated! I'm more than happy to rework this into something acceptable, if there's interest.

thanks!
will

--- drivers.orig/md/dm-crypt.c	2011-02-17 17:26:08.246685915 -0600
+++ drivers/md/dm-crypt.c	2011-02-01 16:53:03.764387497 -0600
@@ -416,20 +416,20 @@ static int crypt_convert_block(struct cr
 	dmreq->ctx = ctx;
 	sg_init_table(&dmreq->sg_in, 1);
-	sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT,
+	sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << PAGE_SHIFT,
 		    bv_in->bv_offset + ctx->offset_in);
 
 	sg_init_table(&dmreq->sg_out, 1);
-	sg_set_page(&dmreq->sg_out, bv_out->bv_page, 1 << SECTOR_SHIFT,
+	sg_set_page(&dmreq->sg_out, bv_out->bv_page, 1 << PAGE_SHIFT,
 		    bv_out->bv_offset + ctx->offset_out);
 
-	ctx->offset_in += 1 << SECTOR_SHIFT;
+	ctx->offset_in += 1 << PAGE_SHIFT;
 	if (ctx->offset_in >= bv_in->bv_len) {
 		ctx->offset_in = 0;
 		ctx->idx_in++;
 	}
 
-	ctx->offset_out += 1 << SECTOR_SHIFT;
+	ctx->offset_out += 1 << PAGE_SHIFT;
 	if (ctx->offset_out >= bv_out->bv_len) {
 		ctx->offset_out = 0;
 		ctx->idx_out++;
@@ -442,7 +442,7 @@ static int crypt_convert_block(struct cr
 	}
 
 	ablkcipher_request_set_crypt(req, &dmreq->sg_in, &dmreq->sg_out,
-				     1 << SECTOR_SHIFT, iv);
+				     1 << PAGE_SHIFT, iv);
 
 	if (bio_data_dir(ctx->bio_in) == WRITE)
 		r = crypto_ablkcipher_encrypt(req);
@@ -1294,6 +1294,17 @@ static int crypt_map(struct dm_target *t
 	return DM_MAPIO_SUBMITTED;
 }
 
+#define CRYPT_BLOCK_SIZE PAGE_SIZE
+static void crypt_io_hints(struct dm_target *ti,
+			   struct queue_limits *limits)
+{
+	limits->logical_block_size = CRYPT_BLOCK_SIZE;
+	limits->physical_block_size = CRYPT_BLOCK_SIZE;
+	blk_limits_io_min(limits, CRYPT_BLOCK_SIZE);
+}
+
+
+
 static int crypt_status(struct dm_target *ti, status_type_t type,
 			char *result, unsigned int maxlen)
 {
@@ -1433,6 +1444,7 @@ static struct target_type crypt_target =
 	.message = crypt_message,
 	.merge   = crypt_merge,
 	.iterate_devices = crypt_iterate_devices,
+	.io_hints = crypt_io_hints,
 };
 
 static int __init dm_crypt_init(void)
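P.S. To make the per-target idea a bit more concrete, here is a rough, untested sketch of how an optional trailing "block size in sectors" table argument could be parsed from crypt_ctr(). None of this is in the patch above; the helper name, the argv index, and the place the value would be stored are just placeholders I made up for illustration:

/*
 * Sketch only, not part of the attached patch: validate an optional
 * trailing "block size in sectors" argument, defaulting to 1 sector
 * (today's behaviour).  Needs <linux/log2.h> for is_power_of_2().
 */
static int crypt_parse_block_size(struct dm_target *ti, unsigned int argc,
				  char **argv, unsigned int *sectors_per_block)
{
	*sectors_per_block = 1;

	/* base table is: <cipher> <key> <iv_offset> <device> <offset> */
	if (argc <= 5)
		return 0;

	if (sscanf(argv[5], "%u", sectors_per_block) != 1 ||
	    !*sectors_per_block ||
	    !is_power_of_2(*sectors_per_block) ||
	    *sectors_per_block > (PAGE_SIZE >> SECTOR_SHIFT)) {
		ti->error = "Invalid block size (in sectors)";
		return -EINVAL;
	}

	return 0;
}

The value would be stashed in the crypt_config, and the spots where the patch hard-codes 1 << PAGE_SHIFT (plus CRYPT_BLOCK_SIZE in crypt_io_hints()) would become sectors_per_block << SECTOR_SHIFT instead. A table line would then just grow one optional field, e.g. "0 <size> crypt aes-cbc-essiv:sha256 <key> 0 /dev/sdb 0 8" for a 4k block, with the existing 5-argument form still meaning one sector.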