Re: State of SMR support in Ceph?

Take a look at the available SMR drives:
https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/

I wouldn’t put a single one of those drives into a Ceph system.  You won’t save any money; if anything, they’re a sinkhole for your labor.  You couldn’t pay me to take them.  The Seagate 8TB units listed there are the same price as their enterprise counterparts.  Clearly, manufacturing costs went down, and that gives the manufacturer a (temporary) margin boost over the near term.

Their intended use cases don’t match a Ceph system.  Performance is lower, so the effective cost is higher.  They’re meant for desktops and cheap long-term storage.
I’d let Backblaze test them and wait a year or two for the failure-rate reports.  Even then, I’d take their primary use case into account: long-term backup and desktop use.

When I built out Ceph, I spent some time looking at mixed-use performance, cost, and reliability.
I looked at the user applications from an IOPS and bandwidth perspective.  Then I plotted out the demands of those applications, along with user trends, and added 20%.  Then I threw in 20% growth per month and extended it out 2 years.  Then I doubled it.  What’s my time worth?  What’s the time worth for 160 developers?  Rock-solid performance and reliability are worth it.
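In case it helps anyone doing the same exercise, the projection is simple arithmetic; a rough sketch in Python is below.  The baseline number is a placeholder, and I’m reading my own “20% growth per month” as compounding, which may be more aggressive than you need.

    # Capacity/IOPS projection sketch -- all inputs are placeholders.
    def project_demand(baseline, headroom=0.20, monthly_growth=0.20,
                       months=24, safety_factor=2.0):
        demand = baseline * (1 + headroom)         # measured demand + 20% headroom
        demand *= (1 + monthly_growth) ** months   # 20% growth/month, out 2 years
        return demand * safety_factor              # then double it

    print(round(project_demand(1000)))             # e.g. 1000 baseline IOPS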

Then I figured out the cost, got my funding, ordered, and built it.  I warned the user community up front about IOPS limits: drastic changes in application demands require changes to the cluster.  The users have done an excellent job staying within their requirements.  We didn’t want an inefficient application that is expensive to support.
 
Seagate Exos 7E8s are ~$25/TB; Exos models are available in 8 TB and 16 TB capacities.
I have 200 Seagate Enterprise drives.  In 2.5 years, I’ve had 2 drives fail.  I have about 160 users and dozens of automated platforms that use the system.  I make the users aware of the IOPS demands.  They make their applications and tests more efficient over time.  They schedule and queue.  I like to sleep at night.  I spend about 20 minutes a week on maintenance, mainly checking my Ceph dashboard and some Ansible output.
I talk to the users and spend time with them when they’re planning.
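If you want to reproduce the back-of-envelope math on those drives, it’s just the figures above plugged into a couple of lines:

    # Rough cost and annualized-failure-rate math from the numbers above.
    price_per_tb, capacity_tb = 25, 8
    print(f"~${price_per_tb * capacity_tb} per 8 TB drive")   # ~$200

    failures, drives, years = 2, 200, 2.5
    afr = failures / (drives * years)
    print(f"AFR ~ {afr:.2%} per drive-year")                  # ~0.40%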

It is time for a boost.  I’ve followed costs and DWPD on available enterprise SSDs.  With my same application and user data, updated and projected forward, I need an enterprise SSD with a DWPD around 3.  DWPD 5 and 10 drives are too expensive; 1 is too low.  I won’t use “read-optimized” drives with a DWPD of 1.
I didn’t even look at consumer grade.

Micron 5200 MAX (SATA) at a DWPD of 3 is perfect.  The cost is within reason when I consider performance and my time.  I’m about to add 56 of them once I finish my testing; the rough sizing is below.
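For anyone doing the same SSD sizing, the DWPD math looks roughly like this.  The daily write volume and the 1.92 TB per-drive capacity here are placeholders, not my actual workload, and the replica count depends on your pools:

    # DWPD sizing sketch -- placeholder inputs, adjust for your own cluster.
    daily_host_writes_tb = 100.0   # client writes per day, cluster-wide (assumed)
    replication = 3                # replicated pools multiply device writes
    num_ssds = 56
    ssd_capacity_tb = 1.92         # assumed per-drive capacity

    per_drive_writes_tb = daily_host_writes_tb * replication / num_ssds
    dwpd_needed = per_drive_writes_tb / ssd_capacity_tb
    print(f"DWPD needed ~ {dwpd_needed:.2f}")   # ~2.8, so a 3 DWPD drive fits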
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



