Re: SSD - TRIM command

> ext4 sends TRIM commands to the device (disk/md raid/nbd), and the
> kernel swap code sends these commands (when possible) to the device too.
> For the internal RAID5 parity disk this could be done by md; for the data
> disks it should be done by ext4.

That's an interesting point.

On what basis should a parity "block" get a TRIM?
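
To make that concrete, here is a toy user-space sketch (plain XOR
arithmetic, not md code; the chunk size and values are made up) of what
a determinate all-zero TRIM on a single data chunk means for the stripe:
either the parity chunk gets an extra write, or it goes stale.

/*
 * Toy sketch, not md code: why trimming one data chunk of a RAID5
 * stripe drags the parity chunk along with it.
 */
#include <stdio.h>
#include <string.h>

#define CHUNK 8                         /* toy chunk size in bytes */
#define NDATA 3                         /* data chunks per stripe  */

static void xor_into(unsigned char *dst, const unsigned char *src)
{
	int i;

	for (i = 0; i < CHUNK; i++)
		dst[i] ^= src[i];
}

int main(void)
{
	unsigned char d[NDATA][CHUNK] = { "chunk-0", "chunk-1", "chunk-2" };
	unsigned char p[CHUNK] = { 0 };
	unsigned char check[CHUNK] = { 0 };
	unsigned char old[CHUNK];
	int i;

	for (i = 0; i < NDATA; i++)     /* P = D0 ^ D1 ^ D2 */
		xor_into(p, d[i]);

	/* "TRIM" chunk 1 on a drive with determinate all-zero reads: */
	memcpy(old, d[1], CHUNK);
	memset(d[1], 0, CHUNK);

	/*
	 * The parity chunk is now stale unless it is rewritten as well,
	 * i.e. one extra parity write per trimmed data chunk:
	 */
	xor_into(p, old);               /* P' = P ^ D1_old */

	for (i = 0; i < NDATA; i++)
		xor_into(check, d[i]);
	printf("parity is %s\n",
	       memcmp(p, check, CHUNK) == 0 ? "consistent" : "stale");
	return 0;
}

The only way to avoid that extra parity write is to trim nothing smaller
than a whole stripe, which is exactly the trade-off discussed further
down in the quoted thread.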

If you ask me, I think the complete TRIM story is, at
best, a temporary patch.

IMHO wear levelling should be handled by the filesystem and, with
awareness of it, by the underlying device drivers.  The reason is that
the FS knows best what is going on with the blocks and what is going to
happen to them.
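
A batched, FS-driven form of this already exists: ext4's FITRIM ioctl
lets user space ask the filesystem to discard the space it knows to be
free, instead of the device having to guess.  A minimal sketch (the
mount point path is only an example):

/*
 * Minimal sketch: ask the filesystem (not the block device) to discard
 * whatever it knows is free, via the FITRIM ioctl.
 */
#include <stdio.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* FITRIM, struct fstrim_range */

int main(void)
{
	struct fstrim_range range = {
		.start  = 0,
		.len    = ULLONG_MAX,   /* whole filesystem */
		.minlen = 0,            /* no minimum extent length */
	};
	int fd = open("/mnt/ssd", O_RDONLY);    /* any fd on the fs */

	if (fd < 0 || ioctl(fd, FITRIM, &range) < 0) {
		perror("FITRIM");
		return 1;
	}
	printf("trimmed %llu bytes\n", (unsigned long long)range.len);
	return 0;
}

The point is not the ioctl itself, but that the decision about what is
discardable lives in the filesystem, which is where I think it belongs.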

bye,

pg

> 
> The other question... about resync only writing what is different:
> this is very good, since write and read speeds can differ on an SSD
> (an HD doesn't have this 'problem'),
> but I'm sure that writing only what differs is better than writing everything
> (SSD life will be longer; for an HD, maybe... I think it will be longer too)
> 
> 
> 2011/2/9 Eric D. Mudama <edmudama@xxxxxxxxxxxxxxxx>:
> > On Wed, Feb  9 at 11:28, Scott E. Armitage wrote:
> >>
> >> Who sends this command? If md can assume that determinate mode is
> >> always set, then RAID 1 at least would remain consistent. For RAID 5,
> >> consistency of the parity information depends on the determinate
> >> pattern used and the number of disks. If you used determinate
> >> all-zero, then parity information would always be consistent, but this
> >> is probably not preferable since every TRIM command would incur an
> >> extra write for each bit in each page of the block.
> >
> > True, and there are several solutions.  Maybe track space used via
> > some mechanism, such that when you trim you're only trimming the
> > entire stripe width so no parity is required for the trimmed regions.
> > Or, trust the drive's wear leveling and endurance rating, combined
> > with SMART data, to indicate when you need to replace the device
> > preemptively, ahead of its eventual failure.
> >
> > It's not an unsolvable issue.  If the RAID5 used distributed parity,
> > you could expect wear leveling to wear all the devices evenly, since
> > on average, the # of writes to all devices will be the same.  Only a
> > RAID4 setup would see a lopsided amount of writes to a single device.
> >
> > --eric
> >
> > --
> > Eric D. Mudama
> > edmudama@xxxxxxxxxxxxxxxx
> >
> 
> 
> 
> -- 
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
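
Coming back to Eric's point above about only trimming whole stripes:
that is probably the cheapest way to keep parity out of the picture.
A rough sketch of the arithmetic (chunk and stripe sizes are made up),
rounding the requested range inward to full-stripe boundaries:

/*
 * Rough sketch of "only trim whole stripes": shrink a discard request
 * to full-stripe boundaries so the parity for the trimmed region never
 * has to be touched.  All sizes here are made-up examples.
 */
#include <stdio.h>
#include <stdint.h>

#define CHUNK_SECTORS	128		/* 64 KiB chunks (example)  */
#define DATA_DISKS	3		/* 4-disk RAID5 (example)   */
#define STRIPE_SECTORS	(CHUNK_SECTORS * DATA_DISKS)

int main(void)
{
	uint64_t start = 1000, len = 2000;	/* requested discard range */
	uint64_t first, last;

	/* round the start up and the end down to stripe boundaries */
	first = (start + STRIPE_SECTORS - 1) / STRIPE_SECTORS * STRIPE_SECTORS;
	last  = (start + len) / STRIPE_SECTORS * STRIPE_SECTORS;

	if (last > first)
		printf("discard sectors %llu .. %llu (full stripes only)\n",
		       (unsigned long long)first,
		       (unsigned long long)(last - 1));
	else
		printf("range smaller than a stripe, nothing trimmed\n");
	return 0;
}

Anything left over at the edges simply stays un-trimmed, and the drive's
own wear levelling has to cover it.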

-- 

piergiorgio
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

