Re: dm-writeboost testing

On Thu, Oct 03, 2013 at 10:27:54PM +0900, Akira Hayakawa wrote:

> > dm-cache doesn't have this problem, if you overwrite the same piece of 
> > data again and again, it goes to the cache device.
> 
> It is not a bug, but it can and should be optimized.
> 
> Below is the cache hit path for writes.
> writeboost performs very poorly on a partial write hit,
> which turns `needs_cleanup_prev_cache` to true.

Are you using fixed-size blocks for caching then?  The whole point of
using a journal/log based disk layout for caching is that you can slurp up
all writes irrespective of their size.
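
To illustrate the point, a minimal sketch (hypothetical names, not
writeboost's actual code): the log just appends whatever arrives into
the current segment buffer and pays one sequential write per segment,
regardless of how big or small the individual writes are.

    #include <string.h>

    #define SEGMENT_SIZE (512 * 1024)       /* hypothetical segment size */

    struct log_segment {
            char   buf[SEGMENT_SIZE];
            size_t used;
    };

    /* Flush the whole segment to the SSD with one sequential write. */
    void flush_segment(struct log_segment *seg);

    /* Append a write of arbitrary length; flush when the segment fills. */
    static void log_append(struct log_segment *seg, const void *data,
                           size_t len)
    {
            if (seg->used + len > sizeof(seg->buf)) {
                    flush_segment(seg);
                    seg->used = 0;
            }
            memcpy(seg->buf + seg->used, data, len);
            seg->used += len;
    }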

What are the scenarios where you outperform dm-cache?

- Joe

> Partial write hits are believed to be unlikely, so
> I decided to give up on this path and optimize the more likely paths instead.
> I think this is just a tradeoff over what to optimize the most.
> 
>         if (found) {
> 
>                 if (unlikely(on_buffer)) {
>                         mutex_unlock(&cache->io_lock);
> 
>                         update_mb_idx = mb->idx;
>                         goto write_on_buffer;
>                 } else {
>                         u8 dirty_bits = atomic_read_mb_dirtiness(seg, mb);
> 
>                         /*
>                          * First clean up the previous cache
>                          * and migrate the cache if needed.
>                          */
>                         bool needs_cleanup_prev_cache =
>                                 !bio_fullsize || !(dirty_bits == 255);
> 
>                         if (unlikely(needs_cleanup_prev_cache)) {
>                                 wait_for_completion(&seg->flush_done);
>                                 migrate_mb(cache, seg, mb, dirty_bits, true);
>                         }
> 
> I checked that mkfs.ext4 issues writes only in 4KB units,
> so it should never turn the boolean true and take the slowpath.
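> 
> (For reference, a sketch of the condition above. It assumes, matching
> the u8 type and the 255 == 0xff test, that dirty_bits keeps one bit per
> 512B sector of the 4KB cache block, so 255 means the block is fully
> dirty:)
> 
>         /* Sketch, not the driver source: the slowpath is needed only
>          * when the write is partial or the block is partially dirty. */
>         static bool needs_cleanup(bool bio_fullsize, u8 dirty_bits)
>         {
>                 return !bio_fullsize || dirty_bits != 255;
>         }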
> 
> Problem:
> In the test, the code takes the slowpath even though
> the bio is a full-sized overwrite.
> 
> The reason is that dirty_bits is sometimes seen as 0,
> and the suspect is the migration daemon.
> 
> I guess you created the writeboost device with the default configuration.
> In that case the migration daemon is always running and
> some metablocks are cleaned up in the background.
> 
> If you set both enable_migration_modulator and allow_migrate to 0
> before beginning the test, to stop migration entirely,
> it never takes the slowpath in the test.
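> 
> For example, assuming the dmsetup message interface (the device name
> "wbdev" is hypothetical; see the doc for the exact syntax):
> 
>         dmsetup message wbdev 0 enable_migration_modulator 0
>         dmsetup message wbdev 0 allow_migrate 0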
> 
> Solution:
> Changing the code to
> skip the slowpath when the dirty bits are zero
> solves this problem.
> 
> Done; please pull the latest code from the repo.
> --- a/Driver/dm-writeboost-target.c
> +++ b/Driver/dm-writeboost-target.c
> @@ -688,6 +688,14 @@ static int writeboost_map(struct dm_target *ti, struct bio *bio
>                         bool needs_cleanup_prev_cache =
>                                 !bio_fullsize || !(dirty_bits == 255);
> 
> +                       /*
> +                        * Migration works in the background
> +                        * and may have cleaned up the metablock.
> +                        * If the metablock is clean, we need not migrate it.
> +                        */
> +                       if (!dirty_bits)
> +                               needs_cleanup_prev_cache = false;
> +
>                         if (unlikely(needs_cleanup_prev_cache)) {
>                                 wait_for_completion(&seg->flush_done);
>                                 migrate_mb(cache, seg, mb, dirty_bits, true);
> 
> Thanks,
> Akira

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel