Re: flashcache

2013/1/16 Sage Weil <sage@xxxxxxxxxxx>:
> This sort of configuration effectively bundles the disk and SSD into a
> single unit, where the failure of either results in the loss of both.
> From Ceph's perspective, it doesn't matter if the thing it is sitting on
> is a single disk, an SSD+disk flashcache thing, or a big RAID array.  All
> that changes is the probability of failure.
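(For intuition, with made-up numbers: if the disk fails with
probability p_d per year and the SSD with p_s, the bundled unit fails
with probability 1 - (1 - p_d)(1 - p_s) ~= p_d + p_s, e.g.
4% + 2% ~= 6% per year instead of 4% for the disk alone.)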

OK, it will fail, but that shouldn't be an issue in a cluster like
Ceph, right?
With or without flashcache or an SSD, Ceph should be able to handle
disk/node/OSD failures on its own by replicating in real time to
multiple servers.

Should I worry about losing data in case of failure? It should
rebalance automatically after a failure with no data loss.
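For example, replication is just a per-pool setting (a minimal
sketch, not from this thread; the pool name 'data' and the replica
counts are assumptions):

  # keep 3 copies of every object in the 'data' pool
  ceph osd pool set data size 3
  # keep serving I/O while at least 2 copies remain available
  ceph osd pool set data min_size 2
  # watch the cluster detect a failure and rebalance
  ceph -w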

> I would worry that there is a lot of stuff piling onto the SSD and it may
> become your bottleneck.  My guess is that another 1-2 SSDs will be a
> better 'balance', but only experimentation will really tell us that.
>
> Otherwise, those seem to all be good things to put on the SSD!

I can't add more than 2 SSDs, I don't have enough space.
I can move the OS to the first 2 spinning disks in software RAID 1,
if that will improve SSD performance.
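Something like this should do it (a sketch only; the device and
partition names are hypothetical, and the config path varies by
distro):

  # mirror the OS across the first two spinning disks
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  # record the array so it is assembled at boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf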

What about swap? I'm thinking of using no swap at all and starting
with 16 or 32 GB of RAM.
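If I go without swap, disabling it would just be (assuming a stock
distro setup):

  # turn off all active swap immediately
  swapoff -a
  # then comment out any swap lines in /etc/fstab so it stays off
  # after a reboot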