> > Linux Kernel Multi-stream I/O Hint Implementation
> >
> > Enterprise, datacenter, and client systems increasingly deploy NAND
> > flash-based SSDs. However, in use, SSDs cannot avoid garbage
> > collection, which causes write amplification that degrades device
> > performance and also shortens SSD lifetime. With multi-stream, this
> > unavoidable garbage collection overhead (i.e., write amplification)
> > can be significantly reduced. For multi-stream devices, the host tags
> > device I/O write requests with a stream ID (i.e., an I/O hint). The
> > SSD controller places the data in media erase blocks according to the
> > stream ID. For example, an SSD controller stores data with the same
> > stream ID in an associated physical location inside the SSD. In this
> > way, multi-stream depends on host I/O hints, so it is worth working
> > out how to implement multi-stream I/O hints under the existing
> > protocol constraints. The T10 SCSI standards group has already
> > standardized the multi-stream feature, and NVMe standardization is
> > anticipated in March 2016. Many Linux users want to leverage
> > multi-stream as a mainstream Linux feature, since they have seen
> > performance improvements and SSD lifetime extension when evaluating
> > multi-stream enabled devices. Hence, the multi-stream feature is a
> > good Linux community development candidate and should be discussed
> > within the community. I propose this multi-stream topic (i.e., I/O
> > write hint implementation) for a discussion session. I can briefly
> > present the multi-stream system architecture and answer any technical
> > questions.
>
> So a key question for a feature like this is: how many stream IDs are
> devices going to support? Because AFAIR so far the answer was "it
> depends on the device". However, the design of how stream IDs can be
> used differs greatly between "a couple of stream IDs" and e.g. 2^32
> stream IDs. Without this information I don't think the discussion
> would be very useful. So can you provide some rough numbers?
>
> 								Honza
> --
> Jan Kara <jack@xxxxxxxx>
> SUSE Labs, CR

I can think of an RFC which proposed context IDs for eMMC a long time
ago [1].

One of my concerns is that, if the filesystem gives any hints through
bios, the block layer loses the chance to merge consecutive bios due to
their differing IDs, which results in performance jitter; a standalone
sketch of what I mean is below.

IMO, if there's something new here, it'd be worth seeing how to open,
close, and assign IDs, as well as any practical numbers showing the
trade-offs involved.

[1] http://comments.gmane.org/gmane.linux.kernel.mmc/12567
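To make the merging concern concrete, here is a standalone C model of
the decision I worry about. This is not actual block-layer code; struct
bio_model, its stream_id field, and can_merge() are all invented for
illustration. Two physically contiguous writes that would normally
collapse into one request stay separate once their stream IDs differ:

/*
 * Standalone model (not kernel code) of a back-merge check that also
 * compares a hypothetical per-bio stream ID.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct bio_model {
	uint64_t sector;	/* start sector */
	uint32_t sectors;	/* length in sectors */
	uint32_t stream_id;	/* hypothetical I/O hint */
};

/* Merge only if physically contiguous AND carrying the same hint. */
static bool can_merge(const struct bio_model *a, const struct bio_model *b)
{
	if (a->sector + a->sectors != b->sector)
		return false;			/* not contiguous */
	return a->stream_id == b->stream_id;	/* hint mismatch blocks merge */
}

int main(void)
{
	struct bio_model a = { .sector = 0,  .sectors = 8, .stream_id = 1 };
	struct bio_model b = { .sector = 8,  .sectors = 8, .stream_id = 1 };
	struct bio_model c = { .sector = 16, .sectors = 8, .stream_id = 2 };

	printf("a+b merge: %d\n", can_merge(&a, &b)); /* 1: one request */
	printf("b+c merge: %d\n", can_merge(&b, &c)); /* 0: split request */
	return 0;
}

In this example, b and c are contiguous on the media but end up as two
requests purely because of the hint, which is where I would expect the
jitter to come from.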
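On the open/close/assign question, a toy allocator shows the life cycle
I would want to see specified. All names and the 16-stream limit here
are assumptions for illustration, since, as Jan notes, we do not yet
know how many IDs real devices expose. IDs come from a small per-device
pool, writers fall back to a default "no hint" stream when the pool is
exhausted, and closing an ID makes it reusable:

/* Toy userspace model of a stream-ID pool; all names are made up. */
#include <stdint.h>
#include <stdio.h>

#define NSTREAMS 16		/* assumed device limit, illustrative */
#define STREAM_NONE 0		/* ID 0 = legacy, unhinted writes */

static uint32_t stream_bitmap;	/* bit n set => stream n+1 is open */

/* Assign the lowest free stream ID, or STREAM_NONE if exhausted. */
static unsigned int stream_open(void)
{
	for (unsigned int id = 1; id <= NSTREAMS; id++) {
		if (!(stream_bitmap & (1u << (id - 1)))) {
			stream_bitmap |= 1u << (id - 1);
			return id;
		}
	}
	return STREAM_NONE;	/* fall back to the default stream */
}

/* Release an ID so another writer can reuse it. */
static void stream_close(unsigned int id)
{
	if (id != STREAM_NONE)
		stream_bitmap &= ~(1u << (id - 1));
}

int main(void)
{
	unsigned int log = stream_open();	/* e.g. journal writes */
	unsigned int data = stream_open();	/* e.g. user data */

	printf("log=%u data=%u\n", log, data);	/* log=1 data=2 */
	stream_close(log);
	printf("reused=%u\n", stream_open());	/* 1 again */
	return 0;
}

Whether a trivial pool like this is even viable depends entirely on the
rough numbers Jan asked for above.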