Re: State of SMR support in Ceph?

On 06.05.20 at 16:12, brad.swanson@xxxxxxxxxx wrote:
> Take a look at the available SMR drives:
> https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/

Thanks for this nice overview link!
Indeed, my question was aimed more at the next few years. Right now, SMR is certainly not to be used in enterprise systems unless your use case and software stack really match it. But the various energy-assisted technologies (MAMR, HAMR, something-else-AMR) are likely to stay with us for the coming years, unless we manage to change physics. So it would be useful to have a way to exploit this technology even if it (currently) causes headaches; after all, Ceph has already managed to overcome many of the headaches admins had to endure in the past (e.g. RAID controller intricacies).

> I wouldn't put a single one of those drives into a Ceph system. You won't save any money; in fact, it's likely a sinkhole for labor on your part. You couldn't pay me to take them. The Seagate 8TB units listed above are the same price as their enterprise counterparts. Clearly, manufacturing costs went down, and this allows a (temporary) margin boost for the manufacturer over the near term.

> Their intended use cases are not a Ceph system. Performance is lower; therefore, the effective cost is higher. They're meant for desktops and cheap long-term storage.
> I'd let Backblaze test them and wait a year or two for the failure-rate reports. And then I'd take into account their primary use case: long-term backup and desktop use.

The only use case I'd consider would indeed be tape-like: long-term backups, where the only write activity is the influx of new data, balancing (if needed), and purging of old backups (after applying some chunking, as restic, duplicati and others do).
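(For anyone unfamiliar with what those tools do: below is a minimal, purely illustrative Python sketch of content-defined chunking, the general idea behind restic/duplicati-style deduplication. It is not their actual algorithm, and all parameters are made up.)

    # Toy content-defined chunker: boundaries depend on a rolling hash of the
    # last WINDOW bytes, so data inserted early in a stream does not shift
    # every later chunk boundary. Hash and parameters are illustrative only.
    BASE = 257
    MOD = (1 << 31) - 1
    WINDOW = 48                        # bytes covered by the rolling hash
    MASK = (1 << 13) - 1               # boundary when hash & MASK == 0 -> ~8 KiB average chunks
    MIN_CHUNK, MAX_CHUNK = 2048, 65536
    POW = pow(BASE, WINDOW, MOD)       # precomputed BASE**WINDOW to drop old bytes

    def chunks(data: bytes):
        start, h = 0, 0
        window = bytearray()
        for i, b in enumerate(data):
            window.append(b)
            h = (h * BASE + b) % MOD
            if len(window) > WINDOW:
                h = (h - window.pop(0) * POW) % MOD   # remove the byte leaving the window
            length = i - start + 1
            if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
                yield data[start:i + 1]               # content-defined boundary hit
                start, h = i + 1, 0
                window.clear()
        if start < len(data):
            yield data[start:]                        # trailing partial chunk

Identical chunks are then stored only once, which is what keeps the write pattern of such backups largely append-only.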

> When I built out Ceph, I spent some time looking at performance in mixed use, cost, and reliability.
> I looked at the user application(s) from an IOPS and bandwidth perspective. Then I plotted out the demands of the application(s), as well as user trends, and added 20%. Then I threw in 20% growth per month, extended it out 2 years, and doubled it. What's my time worth? What's the time worth for 160 developers? Rock-solid performance and reliability is worth it.

We did indeed make similar estimates for our CephFS use case. And you are fully right: SMR (and potentially any *AMR) is out of the picture for that use case.
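For comparison, here is that back-of-the-envelope projection written out as a tiny Python sketch. The starting demand is a made-up placeholder, and since "20% growth per month" can be read as compounding or as linear growth, both are shown:

    # Capacity projection as described above: measured demand +20%, then 20%
    # growth per month over 2 years, then doubled. Starting figure is a placeholder.
    current_demand_tb = 100.0                      # placeholder: measured usage today
    with_headroom = current_demand_tb * 1.20       # "added 20%"
    months, growth = 24, 0.20                      # "20% growth per month ... 2 years"

    compounded = with_headroom * (1 + growth) ** months   # if the growth compounds
    linear = with_headroom * (1 + growth * months)        # if it is 20% of today's demand each month

    print(f"compounded: {2 * compounded:,.0f} TB, linear: {2 * linear:,.0f} TB (after doubling)")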

> Then I figured out the cost, got my funding, ordered, and built it. I warned the user community up front about IOPS performance; drastic application demands require changes. The users have done an excellent job staying within their requirements. We didn't want an inefficient application that is expensive to implement.
> Seagate Exos 7E8's are ~$25/TB. 8TB, 16TB.
> I have 200 Seagate Enterprise drives. In 2.5 years, I've had 2 drives fail. I have about 160 users and dozens of automated platforms that use the system. I make the users aware of the IOPS demands. They make their applications and tests more efficient over time. They schedule and queue. I like to sleep at night. I spend about 20 minutes a week doing maintenance, mainly my Ceph dashboard and some Ansible output.
> I talk to the users, and I spend time with them when they're planning.

I also agree on this (we are at a similar scale with our computing cluster, albeit our Exos drives were less happy: we see about one failure per month, but our cooling is likely worse than yours).
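Just for the record, the annualized failure rates behind those two anecdotes, as a quick sketch (the 200-drive count used for our side is only a placeholder, not our real inventory):

    # Annualized failure rate (AFR) from the numbers in this thread.
    def afr_percent(failures: int, drives: int, years: float) -> float:
        """Failures per drive-year, expressed as a percentage."""
        return 100.0 * failures / (drives * years)

    print(f"200 drives, 2 failures in 2.5 years: {afr_percent(2, 200, 2.5):.2f}% AFR")   # ~0.40%
    print(f"200 drives, 12 failures in 1 year:   {afr_percent(12, 200, 1.0):.1f}% AFR")  # ~6.0%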

> It is time for a boost. I've followed costs and DWPD on the available enterprise SSDs. With my same application and user data, updated and projected, I need an enterprise SSD with a DWPD around 3. 5 and 10 are too expensive; 1 is too low. I won't use "read-optimized" drives with a DWPD of 1.
> I didn't even look at consumer grade.
>
> Micron 5200 MAX (SATA) at a DWPD of 3 is perfect. Cost is within reason when I consider performance and my time. I'm about to add 56 of these; I just need to finish my testing.

Thanks for this interesting report! :-)
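As a small footnote for others following the thread: this is roughly how such a DWPD requirement can be sanity-checked for a flash-backed Ceph pool. All inputs below are made-up placeholders, not your actual figures:

    # Rough sanity check of the DWPD class needed for a flash-backed Ceph pool.
    client_writes_tb_per_day = 30.0   # average client write volume hitting the pool
    replication_factor = 3            # size=3 pool: every client byte is written 3x
    write_amplification = 1.5         # rough allowance for metadata/compaction overhead
    num_ssds = 56                     # planned number of drives
    ssd_capacity_tb = 3.84            # usable capacity per drive

    tb_to_flash_per_day = client_writes_tb_per_day * replication_factor * write_amplification
    per_drive_per_day = tb_to_flash_per_day / num_ssds
    required_dwpd = per_drive_per_day / ssd_capacity_tb

    print(f"~{per_drive_per_day:.2f} TB/day per drive -> roughly {required_dwpd:.2f} DWPD on average")

The rating also has to cover peaks and hold over the whole warranty period, which is why a mixed-use class (e.g. 3 DWPD) is often chosen even when the long-term average comes out lower.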

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


