Michael Monnerie put forth on 10/28/2010 4:44 AM:

> On Wednesday, 27 October 2010 Robert Brockway wrote:
>> Similarly virtual hosts have little chance of trying to establish
>> the physical nature of the device holding their filesystems.
>
> Yes, performance optimizations will be fun in the near future. VMs, thin
> provisioning, NetApp's WAFL, LVM, funny disk layouts, all can do things
> completely different than our "old school" thinking. I wonder when
> there's gonna be an I/O scheduler that just elevates the I/O from a VM
> to the real host, so that the host itself can optimize and align. After
> all, a VM has no idea of the storage. That's why already now you can
> choose "noop" as the scheduler in a VM. I guess there will be a
> "virtualized" scheduler once, but we will see.

I don't see how any of this is really that different from where we
already are with advanced storage systems and bare metal host OSes.
We're already virtualized WRT basic SAN arrays, and maybe even some
PCIe RAID cards if they allow carving a RAID set into LUNs.

Take, for example, a small FC/iSCSI SAN array controller box with
16 x 1TB SATA drives. We initialize it, via the serial console, web
GUI, or other management tool, into a single RAID 6 array with 14TB of
usable space and a 256KB stripe size. We then carve this 14TB into 10
LUNs of 1.4TB each and unmask each LUN to the FC WWN of a bare metal
host running Linux. Let's assume the array controller starts at the
outside edge of each disk and works its way toward the inner cylinders
when creating each LUN, which seems like a logical way for a vendor to
implement this. We now have 10 LUNs, each with progressively lower
performance than the one before it due to its location on the platters.

Now, on each host we format the 1.4TB LUN with XFS. In this
configuration, given that the LUNs are spread all across the platters,
from outside to inside cylinder, is it really going to matter where
each AG or the log is located, from a performance standpoint? The only
parameters we actually know for sure here are the stripe width (14)
and the stripe size (256KB); we have no knowledge of the real cylinder
layout when we run mkfs.xfs. (A minimal mkfs.xfs sketch for this
geometry is at the end of this mail.)

So as we move to a totally virtualized guest OS, we lose even the
stripe width and stripe size information. How much performance does
this really cost us WRT XFS filesystem layout? And considering these
are VM guests, which are by design meant for consolidation, not
necessarily performance, are we really losing anything at all when
looking at the big picture?

How many folks are running their critical core business databases in
virtual machine guests? How about core email systems? Other
performance/business-critical applications?

--
Stan
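
P.S. For concreteness, here's a minimal sketch of handing mkfs.xfs that
geometry by hand, since it's the only thing we actually know. The device
name /dev/sdb and mount point /mnt/lun0 are assumptions; the su/sw
values come from the hypothetical 16-drive RAID 6 above (256KB stripe
unit, 14 data spindles):

  # tell mkfs.xfs the geometry it cannot discover itself:
  # 256KB stripe unit, 14 data disks in the RAID 6 set
  mkfs.xfs -d su=256k,sw=14 /dev/sdb

  # after mounting, confirm the recorded alignment; xfs_info reports
  # sunit/swidth in filesystem blocks
  xfs_info /mnt/lun0

Inside a fully virtualized guest, none of this geometry is visible, so
unless someone passes it in by hand like this, mkfs.xfs has nothing to
align to.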