Off topic again... continuing the idea of optimizations: the last optimization we could have is to implement a filesystem on the hard disk itself. It would implement all filesystem functions, not just device functions, and it would have much more information about the data, not only block 'in use'/'not in use'. It could understand: file starting at block x, ending at block y, with information w, access time z, etc. It would be more intelligent than a raw device; in other words, it's a file server... Why implement these algorithms at the device level? Because today's hard disk processors (FPGAs, ARM processors, others) have a lot of unused CPU power, so why not use it? That's why we send TRIM to the device: whether it's a hard disk, an SSD or any other pseudo/real device, no problem, we send the TRIM command and let the device optimize itself.

Getting back on topic: please stop replying with 'I think it's not a performance feature, it doesn't need to be implemented at the device level'. Let's implement all the functions the device level allows (ATA/SCSI specifications or any other) and optimize when possible.

Checking Neil's md roadmap, the bad-block work will be very good for md devices; it's a good optimization for RAID1, since a mirror would only fail when many blocks fail.

Can we implement TRIM at the MD level? Is it a good feature to implement? Would it be a lot of work to implement? My opinion: we can; on some RAID levels it's a good feature, and it will be a lot of work to implement and test.

Any answer from the RAID developers?

--
Roberto Spadim
Spadim Technology / SPAEmpresarial
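PS: here is a minimal userspace sketch of the kind of remapping md would have to do for TRIM, assuming a hypothetical 2-disk RAID0-style layout with a 64 KiB chunk (the member paths /dev/sda1 and /dev/sdb1 and the geometry are made up for illustration, not read from a real array). It only prints how a discard range on the array would split across members; a real implementation would pass discard requests down inside the kernel instead, and for RAID1 it would presumably just send the same range to every mirror.

/*
 * Hypothetical sketch: map a discard (TRIM) range on a RAID0-style md
 * array onto its member devices -- the same kind of remapping the md
 * layer would have to do in-kernel before passing discards down.
 *
 * NR_MEMBERS, CHUNK_BYTES and the member paths are made-up values for
 * illustration, not read from a real array.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_MEMBERS  2                      /* hypothetical 2-disk RAID0 */
#define CHUNK_BYTES (64ULL * 1024)         /* hypothetical 64 KiB chunk */

static const char *member_path[NR_MEMBERS] = { "/dev/sda1", "/dev/sdb1" };

/* Split [offset, offset+len) on the array into per-member ranges. */
static void map_discard(uint64_t offset, uint64_t len)
{
    while (len > 0) {
        uint64_t chunk      = offset / CHUNK_BYTES;   /* chunk index on array */
        uint64_t in_chunk   = offset % CHUNK_BYTES;   /* offset inside chunk  */
        uint64_t this_len   = CHUNK_BYTES - in_chunk;
        if (this_len > len)
            this_len = len;

        unsigned member     = chunk % NR_MEMBERS;     /* round-robin striping */
        uint64_t member_off = (chunk / NR_MEMBERS) * CHUNK_BYTES + in_chunk;

        printf("array [%llu,+%llu) -> %s [%llu,+%llu)\n",
               (unsigned long long)offset, (unsigned long long)this_len,
               member_path[member],
               (unsigned long long)member_off, (unsigned long long)this_len);
        /*
         * A real userspace tool could send each mapped range with the
         * BLKDISCARD ioctl; in the kernel, md would instead submit a
         * discard request to the member device.
         */
        offset += this_len;
        len    -= this_len;
    }
}

int main(void)
{
    map_discard(100 * 1024, 200 * 1024);   /* example: discard 200 KiB at 100 KiB */
    return 0;
}

This is only meant to show where the extra work is for striped levels (splitting and remapping ranges per chunk) compared to mirrored ones, where the same range could simply go to every member.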