On Thu, 12 May 2016, Shehbaz Jaffer wrote:
> Hi Sage,
>
> > Hmm, good question. My assumption was that even a drive-managed drive
> > would expose the zone layout via libzbc, such that we can control IO
> > to avoid any remapping machinery in the drive.
>
> A drive-managed SMR drive will not expose zone information to the host,
> so libzbc would have failed to run ZBC/ZAC commands. However, the drive
> that has been shipped is fortunately not drive-managed, but host-aware!

Ah, perfect!

> > Does libzbc recognize the drive?
>
> Yes!  I am able to detect zones and run libzbc tests on these drives.
> If we do random writes or do not abide by zone rules, the writes will
> be serialized internally, which works to our advantage.
>
> This is what the zbc_info command from libzbc gives me:
>
> $ sudo zbc_info /dev/sg1
> Device /dev/sg1: ATA ST8000AS0022-1WL SN01
> ATA ZAC interface, Host-aware disk model
> 15628053168 logical blocks of 512 B
> 1953506646 physical blocks of 4096 B
> 8001.563 GB capacity

Yay!  Does zbc_info describe the region/size of the drive that is random
access too?

> A detailed version of the libzbc tests can be viewed here:
> http://tracker.ceph.com/projects/ceph/wiki/BlueStore_SMR_Support_GSOC_2016_Progress_Report

What are the failed tests 010 and 011 about?

> I can successfully query zones, the current write pointer, zone size,
> offset, and so on. In terms of next steps, I am running into some
> permission issues with the Ceph setup, and I am working on resolving
> them.

Ping any of us in #ceph-devel with any questions.

> I am also following a commit by Ramesh (Sandisk) for the bitmap
> allocator. Once the setup is done, I think I will first run the
> BlueStore stupid allocator and do some performance benchmarks. Then I
> can explore how we can make things better. Please let me know if this
> approach sounds good.

Yep, that sounds good!

sage
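
p.s. For the zone query step (start LBA, length, write pointer per zone),
my recollection of the libzbc calls is roughly the sketch below. Treat it
as an assumption rather than a reference: the accessor names and exact
signatures have shifted between libzbc versions, so check zbc.h in the
tree you built against before wiring anything into BlueStore.

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <libzbc/zbc.h>

    int main(int argc, char **argv)
    {
        struct zbc_device *dev;
        struct zbc_zone *zones = NULL;
        unsigned int nr_zones, i;

        /* open the device (e.g. /dev/sg1) read-only */
        if (argc < 2 || zbc_open(argv[1], O_RDONLY, &dev) != 0)
            return 1;

        /* ask for all zone descriptors (ZBC_RO_ALL reporting option) */
        if (zbc_list_zones(dev, 0, ZBC_RO_ALL, &zones, &nr_zones) != 0) {
            zbc_close(dev);
            return 1;
        }

        /* dump start LBA, length, and write pointer for each zone;
         * accessor macro names may differ in your libzbc version */
        for (i = 0; i < nr_zones; i++)
            printf("zone %u: start %llu len %llu wp %llu\n", i,
                   (unsigned long long)zbc_zone_start_lba(&zones[i]),
                   (unsigned long long)zbc_zone_length(&zones[i]),
                   (unsigned long long)zbc_zone_wp_lba(&zones[i]));

        free(zones);
        zbc_close(dev);
        return 0;
    }

Something along those lines should give you the per-zone write pointer
view that the allocator work will eventually need.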