Re: Placement groups on a 216 OSD cluster with multiple pools

On 15/11/2013 8:57 AM, Dane Elwell wrote:
> [2] - I realise the dangers/stupidity of a replica size of 0, but some of the data we wish
> to store just isn’t /that/ important.

We've been thinking about this too. Our application stores boot images, ISOs, local repository mirrors and the like, where recovery is easy and losing the data is only a slight inconvenience because it can simply be re-fetched. This suggests a neat additional feature for Ceph: the ability to attach metadata to zero-replica objects, including a URL from which a copy could be recovered/re-fetched. Then recovery could all happen auto-magically.
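
For what it's worth, something close to this can be approximated client-side today. Below is a rough python-rados sketch; the pool names, object names and the separate "index" pool are just assumptions for illustration, and the data pool is assumed to have been set to a single copy beforehand (ceph osd pool set <pool> size 1):

    import rados
    import urllib2  # Python 2 of the era; urllib.request on Python 3

    # Sketch only. 'scratch' is assumed to be a single-copy pool and
    # 'scratch-index' a normal replicated pool holding one tiny object per
    # data object with its source URL (an xattr on the data object itself
    # would be lost along with the object, so the URL has to live elsewhere).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    data_pool = cluster.open_ioctx('scratch')
    index_pool = cluster.open_ioctx('scratch-index')

    def put(name, payload, source_url):
        data_pool.write_full(name, payload)
        # Record where a fresh copy can be fetched from.
        index_pool.write_full(name, source_url)

    def get(name):
        try:
            size, _ = data_pool.stat(name)
            return data_pool.read(name, size)
        except rados.ObjectNotFound:
            # The single copy is gone: pull it back from its recorded origin
            # and re-store it.
            url = index_pool.read(name)
            payload = urllib2.urlopen(url).read()
            data_pool.write_full(name, payload)
            return payload

Having Ceph do this itself on a failed read would obviously be much nicer than every client reimplementing the fallback.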

We also have users trampolining data between systems to buffer fast data streams or absorb data surges. That data can be zero-replica too.







