On 9/18/23 18:36, Matthew Wilcox wrote:
> On Mon, Sep 18, 2023 at 01:04:55PM +0200, Hannes Reinecke wrote:
>> @@ -449,6 +450,22 @@ __bread(struct block_device *bdev, sector_t block, unsigned size)
>>  bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
>> +static inline sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
>> +{
>> +	if (PAGE_SHIFT < blkbits)
>> +		return (sector_t)index >> (blkbits - PAGE_SHIFT);
>> +	else
>> +		return (sector_t)index << (PAGE_SHIFT - blkbits);
>> +}
>
> Is this actually more efficient than ...
>
> 	loff_t pos = (loff_t)index * PAGE_SIZE;
> 	return pos >> blkbits;
>
> It feels like we're going to be doing this a lot, so we should find out
> what's actually faster.
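
For reference, here are the two variants pulled out into standalone
form so the generated code can be compared with something like
"gcc -O2 -S". This is only a sketch: the typedefs stand in for the
kernel types, and it assumes 4K pages (PAGE_SHIFT = 12) and a 64-bit
sector_t.

#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Stand-ins for the kernel types, for a userspace build. */
typedef uint64_t sector_t;
typedef uint64_t pgoff_t;
typedef int64_t loff_t;

/* Variant from the patch: branch on which way to shift. */
sector_t index_to_sector_shift(pgoff_t index, unsigned int blkbits)
{
	if (PAGE_SHIFT < blkbits)
		return (sector_t)index >> (blkbits - PAGE_SHIFT);
	else
		return (sector_t)index << (PAGE_SHIFT - blkbits);
}

/* Multiply-then-shift variant suggested above. */
sector_t index_to_sector_mul(pgoff_t index, unsigned int blkbits)
{
	loff_t pos = (loff_t)index * PAGE_SIZE;

	return pos >> blkbits;
}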
I fear that's my numerical computation background chiming in again.
One always worries about numerical stability, and making a number
larger always risks running into an overflow.
But yeah, I guess your version is simpler, and we can always lean on
the compiler folks to have the compiler arrive at the same assembler
code as my version.
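
FWIW, a quick userspace check (again just a sketch with stand-in
types, not kernel code) shows the two forms agree for block sizes on
either side of PAGE_SHIFT, all the way up to the largest index whose
byte offset still fits in 64 bits, so in practice the overflow would
only happen for byte offsets the page cache cannot reach anyway:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)

static uint64_t by_shift(uint64_t index, unsigned int blkbits)
{
	if (PAGE_SHIFT < blkbits)
		return index >> (blkbits - PAGE_SHIFT);
	return index << (PAGE_SHIFT - blkbits);
}

static uint64_t by_mul(uint64_t index, unsigned int blkbits)
{
	return (index * PAGE_SIZE) >> blkbits;
}

int main(void)
{
	/* Largest index whose byte offset still fits in 64 bits. */
	uint64_t max_index = UINT64_MAX >> PAGE_SHIFT;
	uint64_t samples[] = { 0, 1, 4095, 4096, max_index / 2, max_index };
	unsigned int blkbits;
	size_t i;

	/* 512-byte through 64K blocks, i.e. both sides of PAGE_SHIFT. */
	for (blkbits = 9; blkbits <= 16; blkbits++)
		for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
			assert(by_shift(samples[i], blkbits) ==
			       by_mul(samples[i], blkbits));

	printf("shift and multiply variants agree\n");
	return 0;
}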
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@xxxxxxx +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman