Re: flashcache

2013/1/16 Mark Nelson <mark.nelson@xxxxxxxxxxx>:
> Looks like a fun configuration to test!  Having said that, I have no idea
> how stable flashcache is.  It's certainly not something we've used in
> production before!  Keep that in mind.

As I wrote before, this should not be an issue; Ceph should handle failures.
But is Ceph able to detect badly written blocks?
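
For what it's worth, my understanding is that deep scrub is the relevant
mechanism here: it reads objects back and compares replicas across OSDs,
so silently corrupted blocks should surface as inconsistent PGs. A
minimal sketch to kick that off cluster-wide, assuming the stock ceph
CLI is on PATH:

    import subprocess

    # 'ceph osd ls' prints one numeric OSD id per line
    out = subprocess.check_output(["ceph", "osd", "ls"]).decode()

    for osd in out.split():
        # ask each OSD to deep-scrub its placement groups; any
        # inconsistent replicas then show up in 'ceph health detail'
        subprocess.check_call(["ceph", "osd", "deep-scrub", osd])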

> With only 2 SSDs for 12 spinning disks, you'll need to make sure the SSDs
> are really fast.  I use Intel 520s for testing which are great, but I
> wouldn't use them in production.  The S3700 might be a good bet at larger
> sizes, but it looks like the 100GB version is a lot slower than the 200GB
> version, and that's still a bit slower than the 400GB version.  Assuming you
> have 10GbE, you'll probably be capped by the SSDs for large block sequential
> workloads.  Having said that, I still think this has potential to be a nice
> setup.  Just be aware that we usually don't stick that much stuff on the
> SSDs!

12 spinning disks will be the worst-case scenario.
When in production, I'll start with 5 servers with 6 disks each (3+3
for each SSD).
Then I'll try to add hosts instead of adding disks.
I don't actually have 10GbE, only 2GbE bonded. I can evaluate 4GbE.
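
To make the 3+3 layout concrete, here is a rough sketch (device names
and partitioning are made up for illustration, not a tested recipe):
each SSD gets pre-partitioned into three cache partitions, and one
flashcache writeback device is created per spinning disk:

    import subprocess

    # hypothetical devices: two SSDs, each pre-partitioned into three
    SSD_PARTS = ["/dev/sdg1", "/dev/sdg2", "/dev/sdg3",
                 "/dev/sdh1", "/dev/sdh2", "/dev/sdh3"]
    DISKS = ["/dev/sd%s" % c for c in "abcdef"]  # six spinning disks

    for i, (part, disk) in enumerate(zip(SSD_PARTS, DISKS)):
        # '-p back' selects writeback mode; one cache device per disk
        subprocess.check_call(["flashcache_create", "-p", "back",
                               "cachedev%d" % i, part, disk])

Back of the envelope: 2GbE bonded is roughly 240 MB/s, which a single
decent SSD can saturate on sequential writes, so at that network speed
the bond, not the SSDs, is likely the bottleneck.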

> It'd be amazing if supermicro could cram another 2 SSD slots in the back.
> Maybe by that time we'll all be using PCIE flash storage though. :)

How much does PCIe flash storage cost?

