On 05/16/2013 03:34 AM, 大椿 wrote:
> I'm a newbie to Ceph and to storage in general, drawn in by Ceph's modern
> features. We want to build a backup/restore system with two layers, HDD
> and tape, with Ceph as the virtualization layer exporting NFS/CIFS/iSCSI
> interfaces to the business storage systems.
>
> Some rough questions:
>
> 1. Does RADOS support tape as a backend? I can't find any reference doc
> about it.
RADOS works on top of mounted file systems, so the particular file system
isn't that relevant to RADOS (to the OSDs, really).  However, I don't know
of any way, or of any file system for tape drives, that would provide the
same behaviour as a file system such as, say, ext3 or XFS.  If there is
one, then I guess it's a matter of testing it, unless I'm missing
something less obvious that would make this a bad idea.
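
For reference, each OSD just keeps its objects under a directory on a
mounted file system.  A minimal 2013-era ceph.conf sketch of that layout
(the paths, device name and the choice of xfs below are illustrative
assumptions, not a recommendation for tape):

    [osd]
            # file system used when the OSD data partition is created
            osd mkfs type = xfs
            osd mount options xfs = rw,noatime

    [osd.0]
            host = node1
            # illustrative device and data directory for this OSD
            devs = /dev/sdb1
            osd data = /var/lib/ceph/osd/ceph-0

Whatever ends up mounted at "osd data" has to behave like a normal POSIX
file system, xattrs included, which is the part I doubt a tape-backed
file system would give you.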
> 2. Is it possible to separate two different backends into two layers, and
> let Ceph automatically place objects in the different layers according
> to some rules?
The CRUSH map allows you to configure how and where replicas are stored.
I can envision a scenario in which you have, say, two racks, each with a
different kind of storage device, and you configure your map to replicate
across those two racks.  This might be a naive approach though, and I'm
sure someone else has a better idea of how feasible your plan is.
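
A rough, hypothetical CRUSH map fragment along those lines (host buckets
and devices omitted; all names, ids and weights are made up for
illustration; the rule places one replica in each rack):

    # two racks of hosts, one per kind of backend
    rack rack-fast {
            id -10                  # made-up bucket id
            alg straw
            hash 0                  # rjenkins1
            item host-fast-1 weight 1.000
            item host-fast-2 weight 1.000
    }
    rack rack-archive {
            id -11
            alg straw
            hash 0
            item host-arch-1 weight 1.000
            item host-arch-2 weight 1.000
    }
    root default {
            id -1
            alg straw
            hash 0
            item rack-fast weight 2.000
            item rack-archive weight 2.000
    }

    # replicated rule: pick one leaf (OSD) under each rack
    rule split-racks {
            ruleset 1
            type replicated
            min_size 2
            max_size 2
            step take default
            step chooseleaf firstn 0 type rack
            step emit
    }

You'd compile that with crushtool, inject it, and point a pool at the
rule with something like "ceph osd pool set <pool> crush_ruleset 1".
Note that this only gives you one copy in each layer; it is not a policy
that migrates objects between layers over time.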
> 3. Is there a minimum size, in nodes, for a Ceph cluster?
You can run a Ceph cluster on just one node.
Common setups, however, have multiple OSDs per host, as many hosts as you
want, ideally replicating across failure domains (which usually implies
multiple hosts, each in a different failure domain), and monitors
scattered across the available hosts (or on dedicated hosts if you have
the hardware for it).
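
If you do start with a single node, one setting worth knowing about is
the CRUSH leaf type; just a sketch, assuming default rules, but without
it replicas would try to spread across hosts you don't have:

    [global]
            # let replicas land on different OSDs of the same host
            # (the default, type 1 = host, expects separate hosts)
            osd crush chooseleaf type = 0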
-Joao
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com