Re: [PATCH 0/5] a caching layer for raid 5/6

Hi Shaohua,

here are a couple of notes from reading through the code in a bit more
detail:


Error retries:
  - What is the reason for retry_bio_list?  If a driver returns an
    I/O error to the higher levels it has already retried and come
    to the conclusion that this is a permanent error, so the error
    might as well be passed straight up (sketch below).
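
    Roughly, untested and with hypothetical names (an orig_bio field
    in the task, bi_private carrying the task), assuming a
    bi_error-style completion:

      static void r5l_log_endio(struct bio *bio)
      {
              struct r5l_task *task = bio->bi_private;

              /* the driver below has already exhausted its retries,
               * so propagate the permanent error instead of queueing
               * the bio on retry_bio_list */
              task->orig_bio->bi_error = bio->bi_error;
              bio_endio(task->orig_bio);
              bio_put(bio);
      }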

Flushes:
  - no need to allocate a task here
  - no real need to clone the bio either; see the sketch below
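
    One way to avoid both, assuming the flush just needs to reach the
    log device (r5l_handle_flush and log->rdev are illustrative
    names) and using the current submit_bio(rw, bio) signature:

      static void r5l_handle_flush(struct r5l_log *log, struct bio *bio)
      {
              /* no task, no clone: remap the caller's flush to the
               * log device and let completion go straight to the
               * caller's endio */
              bio->bi_bdev = log->rdev->bdev;
              submit_bio(WRITE_FLUSH, bio);
      }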

Tasks:
  - the completion argument passed to r5l_queue_bio is always the
    same; the code would be a lot simpler by removing this
    abstraction.
  - that would also allow allocating the task embedded in the range
    and cut down on memory allocations (see the sketch after this
    list)
  - we're not really manipulating the bio payload, so shouldn't a
    _fast clone be fine here?  In fact, why do we clone the bio at
    all?
  - r5l_queue_task should probably be split into two helpers
    for data vs parity
  - r5l_queue_bio and r5c_copy_bio should probably use bvec iterators
  - r5l_run_task does very different things for data vs metadata,
    it's probably better to split it up.
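
    For the embedded task, roughly (field names illustrative):

      struct r5l_io_range {
              struct list_head        list;
              sector_t                start;
              struct r5l_task         task;   /* embedded instead of
                                               * separately allocated */
      };

    and if a clone is needed at all, a _fast clone that shares the
    bvec array should do, assuming the log keeps a bio_set around:

      clone = bio_clone_fast(bio, GFP_NOIO, log->bs);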

Allocations:
  - most metadata pages are allocated as highmem, leading to
    constant kmap/kunmap.  Maybe just allocate them with GFP_KERNEL
    to simplify things?  (see below)
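
    That is, instead of:

      /* highmem page: every access needs a kmap/kunmap pair */
      struct page *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
      void *addr = kmap(page);
      memset(addr, 0, PAGE_SIZE);
      kunmap(page);

    simply:

      /* lowmem page: directly addressable */
      struct page *page = alloc_page(GFP_KERNEL);
      memset(page_address(page), 0, PAGE_SIZE);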

Misc:
  - where does the ioctl in r5l_ioctl and associated functions come
    from?  There are no ioctl handlers here, so the naming seems
    rather confusing.