Hello,

On Thu, Dec 29, 2016 at 05:24:16PM -0800, Doug Dumitru wrote:
>
> My test is of a "managed" array with a "host side Flash Translation
> Layer". This means that software is linearizing the writes before
> RAID-5 sees them. This is how the major "storage appliance" vendors
> get really fast performance. One vendor, running an earlier version
> of the software I am running here, was able to support 5000 ESXi VDI
> clients from a single 2U storage server (with a lot of FC cards). The
> boot storm took about 3 minutes to settle.
>

Does this software happen to be open source / publicly available?

Thanks,

-- Pasi

> Single drives are around 500 MB/sec, which is 125K IOPS through our
> engine. Eight drives are (8-1)x500 = 3500 MB/sec, or about 900K IOPS.
> This is actually faster than FIO can generate a test pattern from a
> single job. It is also faster than stock RAID-5 can linearly write
> without patches.
>
> In terms of wear, lots of users are running very light write
> environments. This is good, as many configurations see >50:1 write
> amplification if you measure "end to end". By end to end, I mean how
> many flash writes happen when you create a small file inside a file
> system. This leads to "file system write amp" x "RAID write amp" x
> "SSD write amp". Some people don't like this approach, as the file
> system is often "off limits" and a black box. Then again, some file
> systems are better than others (for 10K sync creates, EXT4 and XFS
> are both about 4.4:1, whereas ZFS is a lot worse). And with EXT4/XFS,
> you can mitigate some of this with an SSD or mapping layer that
> compresses blocks.
>
> Doug Dumitru

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
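
[Editor's note] The throughput and write-amplification arithmetic in the quoted message can be sketched as a quick back-of-the-envelope check. The per-drive rate, RAID-5 parity overhead, and the 4.4:1 filesystem figure are from the message; the RAID and SSD write-amp values below are illustrative assumptions, not numbers Doug gave:

```python
# Back-of-the-envelope check of the numbers in the quoted message.
# Quoted: ~500 MB/s per drive ~= 125K IOPS, i.e. 250 IOPS per MB/s.
IOPS_PER_MB_S = 125_000 / 500

drives = 8
parity_drives = 1                 # RAID-5 loses one drive's capacity to parity
drive_mb_s = 500                  # per-drive linear write rate (quoted)

array_mb_s = (drives - parity_drives) * drive_mb_s
array_iops = array_mb_s * IOPS_PER_MB_S

print(f"array: {array_mb_s} MB/s, ~{array_iops / 1000:.0f}K IOPS")
# 3500 MB/s, ~875K IOPS -- the message rounds this to 900K.

# End-to-end write amplification multiplies across layers:
fs_wa = 4.4      # EXT4/XFS at 10K sync creates (quoted)
raid_wa = 4.0    # illustrative assumption, not from the message
ssd_wa = 3.0     # illustrative assumption, not from the message

end_to_end_wa = fs_wa * raid_wa * ssd_wa
print(f"end-to-end write amp: {end_to_end_wa:.1f}:1")
# 52.8:1 with these assumed layer values, consistent with the
# ">50:1" end-to-end figure quoted above.
```

The point of the multiplication is that each layer's amplification compounds: a modest 4.4:1 at the filesystem becomes a large end-to-end number once RAID parity updates and SSD garbage collection are layered on top.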