Re: RAID5 Performance

[ ... ]
> That largely explains why in the tests I have mentioned small
> sync write IOPS for many "consumerish" flash SSDs top at around
> 100, instead of the usual > 10,000 for small non-sync writes.
[ ... ]

To summarize the preceding long discussion:

* The reported stats show that only a low rate of IOPS is being
  achieved.

* The critical part of the workload seems to be synchronous
  small writes.

* The primary issue is therefore most likely the use of flash SSDs
  that can sustain only a small number of IOPS for small synchronous
  writes (a quick way to check this is sketched after this list).

* A secondary issue is that RAID5 turns small writes into
  read-modify-write (RMW) cycles.
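
As a quick check of the SSD point above, here is a minimal sketch
(in Python) that measures single-threaded small synchronous write
IOPS. The 4 KiB block size, O_DSYNC flag, scratch file path and
5 second run are my assumptions, not anything from the thread; a
tool like fio would do the same job more thoroughly:

#!/usr/bin/env python3
# Minimal sketch: measure single-threaded small synchronous write IOPS.
# Assumed parameters (not from the thread): 4 KiB blocks, O_DSYNC,
# a scratch file on the filesystem under test, a 5 second run.
import os
import time

PATH = "/mnt/test/syncwrite.tmp"   # hypothetical path on the array under test
BLOCK = b"\0" * 4096               # 4 KiB payload
DURATION = 5.0                     # seconds to run

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
count = 0
start = time.monotonic()
while time.monotonic() - start < DURATION:
    os.pwrite(fd, BLOCK, 0)        # each write must reach stable storage
    count += 1
os.close(fd)
os.unlink(PATH)

print("single-threaded O_DSYNC 4 KiB write IOPS: %.0f" % (count / DURATION))

On many "consumerish" flash SSDs a loop like this reports numbers
in the low hundreds, while the same loop without O_DSYNC reports
tens of thousands, which matches the figures quoted above.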

There are two possible options:

* Replace the flash SSDs with models known to deliver high (at
  least 10,000 single-threaded) small synchronous write IOPS.

* Relax the requirement for synchronous writes on *both* the
  primary and secondary DRBD servers, if feeling lucky.

The third option, changing the workload so that it does not emit
small synchronous writes to the storage layer, does not seem
practical in this context.

Ideally the system would also be switched from RAID5 to RAID10, to
avoid the large penalty that RAID5 imposes on small writes at the
RAID level as well (the RMW arithmetic is sketched below).
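
To make that penalty concrete, here is a small sketch of the
read-modify-write arithmetic for a sub-stripe RAID5 write; the
block contents are made-up stand-ins and the helper name is mine,
not anything from the md code:

# A small RAID5 write cannot complete without first reading the old
# data block and the old parity block, because
#     new_parity = old_parity XOR old_data XOR new_data
# so the array ends up doing 2 reads + 2 writes = 4 disk I/Os.

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    # reads: old data block, old parity block
    new_parity = bytes(p ^ od ^ nd
                       for p, od, nd in zip(old_parity, old_data, new_data))
    # writes: new data block, new parity block
    return new_data, new_parity

# RAID10, by contrast, just writes the block and its mirror:
# 2 writes, no reads, hence the much smaller small-write penalty.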

That may be considered expensive, but as I wrote:

> [ ... ] it requires a storage layer that has to cover all
> possible IO workloads optimally, [ ... ]