RE: SMR Benchmarking Results

Sounds like the zone open and/or close commands are triggering I/O operations.

Just as an experiment, how long does it take to open and close a bunch of zones with NO data being written to them?
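
Something along the lines of the sketch below would measure just that
(untested, and I'm going from memory of the libzbc API, so the exact call
names and signatures may differ from the version you have):

/*
 * Untested sketch: time N zone open/close pairs with no data written.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <time.h>
#include <libzbc/zbc.h>

int main(int argc, char **argv)
{
        struct zbc_device *dev;
        struct zbc_zone *zones = NULL;
        unsigned int nr_zones, i, n;
        struct timespec t0, t1;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <device> <nr_zones>\n", argv[0]);
                return 1;
        }
        n = atoi(argv[2]);

        if (zbc_open(argv[1], O_RDWR, &dev) != 0)
                return 1;
        if (zbc_list_zones(dev, 0, ZBC_RO_ALL, &zones, &nr_zones) != 0)
                return 1;
        if (n > nr_zones)
                n = nr_zones;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < n; i++) {
                /* explicit open immediately followed by close, no write */
                zbc_open_zone(dev, zbc_zone_start(&zones[i]), 0);
                zbc_close_zone(dev, zbc_zone_start(&zones[i]), 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%u open/close pairs took %.3f s\n", n,
               (double)(t1.tv_sec - t0.tv_sec) +
               (t1.tv_nsec - t0.tv_nsec) / 1e9);

        free(zones);
        zbc_close(dev);
        return 0;
}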


Allen Samuels
SanDisk | a Western Digital brand
2880 Junction Avenue, San Jose, CA 95134
T: +1 408 801 7030 | M: +1 408 780 6416
allen.samuels@xxxxxxxxxxx


> -----Original Message-----
> From: Shehbaz Jaffer [mailto:shehbazjaffer007@xxxxxxxxx]
> Sent: Wednesday, May 25, 2016 9:44 PM
> To: Sage Weil <sweil@xxxxxxxxxx>
> Cc: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>; ceph-
> devel@xxxxxxxxxxxxxxx
> Subject: SMR Benchmarking Results
> 
> Hi Sage,
> 
> I have been benchmarking SMR drives using libzbc. It appears that issuing
> explicit ZBC zone commands from the host is noticeably less efficient than
> a plain copy using the 'dd' command.
> 
> I created a 256 MB file and placed it in memory (so that there is no
> data-fetch overhead on the read side). I copy this file repeatedly to a
> Host Aware SMR drive in two scenarios:
> 
> a) dd - plain dd that copies the file to the SMR drive in 1 MB chunks
> until <writeSize> bytes have been written. Note that dd does not take
> zones into consideration.
> 
> b) SMR_aware_copy - this copy also writes the file in 1 MB chunks, but
> issues ZBC commands to open each zone, write 256 MB of data to it, and
> close it, then moves on to the next zone until <writeSize> bytes have
> been written (sketched below).
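> 
> Roughly, the per-zone part of SMR_aware_copy looks like this (simplified
> and untested; the libzbc signatures here are from my reading of the
> headers and may differ between versions, e.g. whether zbc_pwrite takes
> counts and offsets in 512-byte sectors):
> 
> #include <libzbc/zbc.h>
> 
> #define CHUNK_BYTES (1UL << 20)   /* 1 MB per write, as in the test */
> 
> /*
>  * Write zone_bytes from buf into one zone, bracketed by an explicit
>  * zone open/close, in 1 MB chunks.
>  */
> int write_one_zone(struct zbc_device *dev, struct zbc_zone *zone,
>                    const char *buf, unsigned long zone_bytes)
> {
>         unsigned long off;
> 
>         if (zbc_open_zone(dev, zbc_zone_start(zone), 0) != 0)
>                 return -1;
> 
>         for (off = 0; off < zone_bytes; off += CHUNK_BYTES) {
>                 /* count and offset assumed to be in 512-byte sectors */
>                 if (zbc_pwrite(dev, buf + off, CHUNK_BYTES >> 9,
>                                zbc_zone_start(zone) + (off >> 9)) < 0)
>                         return -1;
>         }
> 
>         return zbc_close_zone(dev, zbc_zone_start(zone), 0);
> }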
> 
> Performance results: for 1 GB and 10 GB write sizes, "zone aware" writing
> is roughly 5x slower than plain dd writing:
> 
> writeSize     dd time (min:sec)     SMR_aware_copy (min:sec)
> 1 GB          0:07                  0:34
> 10 GB         1:11                  6:41
> 50 GB         5:51                  NA
> 100 GB        11:42                 NA
> 
> (all writes were followed by a sync command)
> 
> I was trying to see whether the Host Aware SMR drive has some internal
> cache that serializes the dd writes to a certain extent, but dd write
> times scale linearly up to 100 GB. I will check whether dd hits a
> bottleneck for larger file sizes or unaligned writes.
> 
> Follow-up questions:
> --------------------------
> 
> a) I think we should have some workload traces or patterns so that we can
> benchmark SMR drives and make the allocator more SMR friendly. In
> particular:
> i) file sizes,
> ii) file alignment,
> iii) the read/write/delete mix,
> iv) the degree of write parallelism.
> 
> b) The SMR drive has a notion of parallel writes, i.e. multiple zones can
> be kept open and written to simultaneously. I do not think multiple heads
> are involved, but internally there seems to be some "efficient parallel
> write to zone" mechanism. I bring this up because when we query the SMR
> drive information, it reports that the most efficient number of zones to
> keep open in parallel is 128.
> Maybe this is something we can take advantage of? Something like the
> sketch below is what I have in mind.
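> 
> (Untested, and it assumes concurrent libzbc calls on a single device
> handle are safe, which I have not verified; otherwise each thread would
> need its own zbc_open(). write_one_zone() is the per-zone helper
> sketched earlier in this mail.)
> 
> #include <pthread.h>
> #include <libzbc/zbc.h>
> 
> /* per-zone writer sketched earlier in this mail */
> int write_one_zone(struct zbc_device *dev, struct zbc_zone *zone,
>                    const char *buf, unsigned long zone_bytes);
> 
> struct job {
>         struct zbc_device *dev;
>         struct zbc_zone   *zone;
>         const char        *buf;
>         unsigned long      bytes;
> };
> 
> static void *worker(void *arg)
> {
>         struct job *j = arg;
> 
>         write_one_zone(j->dev, j->zone, j->buf, j->bytes);
>         return NULL;
> }
> 
> /* keep up to nr zones open and being written concurrently */
> static void parallel_copy(struct zbc_device *dev, struct zbc_zone *zones,
>                           unsigned int nr, const char *buf,
>                           unsigned long zone_bytes)
> {
>         pthread_t tid[128];     /* 128 = optimal open-zone count the
>                                    drive reports */
>         struct job jobs[128];
>         unsigned int i;
> 
>         if (nr > 128)
>                 nr = 128;
> 
>         for (i = 0; i < nr; i++) {
>                 jobs[i] = (struct job){ dev, &zones[i], buf, zone_bytes };
>                 pthread_create(&tid[i], NULL, worker, &jobs[i]);
>         }
>         for (i = 0; i < nr; i++)
>                 pthread_join(tid[i], NULL);
> }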
> 
> Thanks and Regards,
> Shehbaz



