Re: SSD - TRIM command

yeah, we will make it :)
Maurice, I have been working on a new RAID1 read balance; could you help me
benchmark it?
It's based on kernel 2.6.37, and the code is here:
www.spadim.com.br/raid1/
There you'll find raid1.new.c, raid1.new.h, raid1.old.c and raid1.old.h,
the new and the old kernel source files.

From user space we now have these sysfs entries:
/sys/block/mdXXX/md/read_balance_mode
/sys/block/mdXXX/md/read_balance_stripe_shift
/sys/block/mdXXX/md/read_balance_config

In read_balance_mode there are now 4 modes:
near_head (the default; works without problems; very good for HD-only
arrays, SSDs should use another mode)

round_robin (plain round robin, with a per-mirror counter so it moves
on to the next mirror after some reads; very good for SSD-only arrays)

stripe (like RAID0: read_balance_stripe_shift shifts the sector number
right with ">>" and the disk is then selected with "% raid_disks";
very good for HD or SSD; a shift >= 5 works well, but don't make it
much larger, since too big a shift makes the formula always pick the
first disk; see the small sketch after this list)

time_based (based on head positioning time + read time + I/O queue
time, selecting the best disk to read from; works very well with both
SSD and HD; the current implementation doesn't include I/O queue time
yet, but I will study it and add that too)
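
To make the stripe math concrete, here is a tiny user-space sketch of
the selection described above; the function name and the shift/disk
values are only illustrative, not the actual raid1.new.c code:

#include <stdio.h>

/* Illustrative sketch: shift the sector right by
 * read_balance_stripe_shift, then pick a mirror with modulo over the
 * number of raid disks. */
static int stripe_read_disk(unsigned long long sector, int stripe_shift,
                            int raid_disks)
{
        return (int)((sector >> stripe_shift) % raid_disks);
}

int main(void)
{
        /* with shift=5 on a 2-disk mirror, sectors 0..31 read from disk 0,
         * 32..63 from disk 1, 64..95 from disk 0 again, and so on;
         * a huge shift would send almost every sector to disk 0 */
        printf("%d %d %d\n",
               stripe_read_disk(0, 5, 2),
               stripe_read_disk(32, 5, 2),
               stripe_read_disk(64, 5, 2));
        return 0;
}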

All configuration for round_robin and time_based is sent to the kernel
through read_balance_config.
Type cat /sys/block/mdXXX/md/read_balance_config
and send the parameters per disk.
The first line of the cat output is the parameter list; everything after
the "|" is read-only variables, you can't change them, just read.
Use echo "0 0 0 0 0 0 0 0 0 0" > read_balance_config to change values.
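
As a rough user-space example (assuming an array named md0; the path
comes from the sysfs entries above and the all-zero values are just the
placeholder from the echo example, not meaningful settings):

#include <stdio.h>

int main(void)
{
        /* md0 is only an example; adjust to your mdXXX array */
        const char *path = "/sys/block/md0/md/read_balance_config";
        char line[256];
        FILE *f;

        /* show the current parameters; text after "|" is read-only */
        f = fopen(path, "r");
        if (!f) { perror("read_balance_config"); return 1; }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);

        /* send new per-disk parameters (placeholder values) */
        f = fopen(path, "w");
        if (!f) { perror("read_balance_config"); return 1; }
        fputs("0 0 0 0 0 0 0 0 0 0\n", f);
        fclose(f);
        return 0;
}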

thanks =]

2011/2/8 maurice <mhilarius@xxxxxxxxx>:
> On 2/8/2011 1:50 PM, Roberto Spadim wrote:
>>
>> =] now the right answer :)
>> question: maybe in future... could we make trim compatible with md?
>>
> I hope that future is "real soon now".
> MLC SSD is now starting to appear in the "Enterprise" space.
> Companies like Pliant have released products for that.
> Typical SAN RAID controllers have specific performance limits which can be
> saturated with a not very large number of SSDs.
> To get higher IO rates we need a more powerful RAID engine.
> A typical 48-core, 128GB RAM box using AMD CPUs and 4 SAS HBAs to JBOD disk
> cases can be a ridiculously powerful RAID engine for a
> reasonable cost (at least reasonable compared to NetApp, EMC, Hitachi SANs,
> etc.) with a large number of devices.
>
> BUT: To use SSDs in the design we need mdadm to be more SSD friendly.
>
>
> --
> Cheers,
> Maurice Hilarius
> eMail: mhilarius@xxxxxxxxx
> --
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

