Hi,
As Christian has mentioned, a bit more detailed information will do us good. We had explored CephFS, but performance was an issue vis-a-vis ZFS when we tested it (more than a year back), so we did not get into the details.
I will let the CephFS experts chip in here on the present state of CephFS.
How are you using ZFS on your main site: NFS/CIFS/iSCSI? Is there a business requirement to restore data within a defined time?
How much data is in play here?
What are the odds the hardware fails (HW quality is improving by the day, but that does not mean it won't fail)?
How fast can one replace failed HW (we always have spare HW available)?
Do you need always-on backup, especially offsite backup?
Have you explored the tape option?
We are using ZFS on Solaris and FreeBSD as a filer (NFS/CIFS) and we keep three copies of the snapshots (we have 5 TB of data); a rough sketch of the first tier follows the list:
- local on the filers (snapshot every hour, kept for 2 days)
- onsite on another machine (snapshot copied every 12 hours, kept for 1 week)
- offsite (snapshot copied every day, kept for 4 weeks, then from offsite to tape).
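A scheme like this is easy to drive from cron; here is a minimal sketch of the first tier in Python (the dataset name and retention are placeholders for illustration, not our real ones):

    #!/usr/bin/env python
    # Hourly local snapshot with 2-day retention -- a sketch only.
    # The dataset name is hypothetical.
    import datetime
    import subprocess

    DATASET = "tank/filer"   # hypothetical dataset
    KEEP_HOURS = 48          # first tier: every hour for 2 days

    now = datetime.datetime.utcnow()
    subprocess.check_call(
        ["zfs", "snapshot",
         "%s@hourly-%s" % (DATASET, now.strftime("%Y%m%d%H%M"))])

    # Prune snapshots that have aged out of the retention window.
    names = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET])
    prefix = DATASET + "@hourly-"
    for snap in names.decode().splitlines():
        if not snap.startswith(prefix):
            continue
        stamp = datetime.datetime.strptime(snap[len(prefix):], "%Y%m%d%H%M")
        if now - stamp > datetime.timedelta(hours=KEEP_HOURS):
            subprocess.check_call(["zfs", "destroy", snap])

The onsite and offsite tiers are the same idea, plus an incremental "zfs send -i <old> <new> | ssh <host> zfs receive ..." to ship each snapshot to the other machine.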
For DB backups we have a system in place, but it does not rely on ZFS snapshots. I would love to know how you manage DB backups with ZFS snapshots.
ZFS is a mature technology.
P.S. We use Ceph for OpenStack (ephemeral/Cinder/Glance), with no backup. (One year on we are still learning new things, and it has just worked.)
On Fri, Jun 26, 2015 at 9:00 AM, Christian Balzer <chibi@xxxxxxx> wrote:
Hello,
On Fri, 26 Jun 2015 00:28:20 +0200 Cybertinus wrote:
> Hello everybody,
>
>
> I'm looking at Ceph as an alternative for my current storage solution,
> but I'm wondering if it is the right choice for me. I'm hoping you guys
> can help me decide.
>
> The current setup is a FreeBSD 10.1 machine running entirely on ZFS. The
> function of the machine is offsite backup for important data. For some
> (fairly rapidly changing) data this server is the only backup of it. But
> because the data is changing fairly quickly (every day at least) I'm
> looking to get this server more HA than it is now.
> It is just one FreeBSD machine, so this is an enormous SPOF of course.
>
But aside from the SPOF part that machine is sufficient for your usage,
right?
Care to share the specs of it and what data volume (total space used, daily
transactions) we're talking about?
> The most used functionality of ZFS that I use is the snapshot technology.
> I've got multiple users on this server and each user has its own
> filesystem within the pool. And I just snapshot each filesystem regularly
> and that way I enable the users to go back in time.
> I've looked at the snapshot functionality of Ceph, but it's not clear to
> me what I can snapshot with it exactly.
>
> Furthermore: what is the best way to hook Ceph to the application I use
> to transfer the data from the users to the backup server? Today I'm using
> OwnCloud, which is (in essence) a WebDAV server. Now I'm thinking about
> replacing OwnCloud with something custom built. That way I can let PHP
> talk directly with librados, which makes it easy to store the data.
> Or I can keep on using OwnCloud and just hook up Ceph via CephFS. This
> has the added advantage that I don't have to get my head around the
> concept of object storage :p ;).
>
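As an aside, if you do go the librados route, it is very little code. A rough sketch with the python-rados bindings (the PHP bindings follow the same pattern; the pool and object names here are invented):

    #!/usr/bin/env python
    # Minimal librados round trip via python-rados.
    # Pool and object names are hypothetical.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("backup")   # hypothetical pool
        try:
            ioctx.write_full("user1/report.pdf", b"...file contents...")
            print(ioctx.read("user1/report.pdf"))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()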
I'm slightly confused here, namely:
You use owncloud (I got a test installation on a VM here, too), which
uses a DB (mysql by default) to index the files uploaded.
How do you make sure that your snapshots are consistent when it comes to
DB files other than being lucky 99.9% of the time?
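(The usual answer is to quiesce the DB for the instant of the snapshot. A sketch of the idea with PyMySQL; dataset name and credentials are placeholders:

    #!/usr/bin/env python
    # Hold a read lock on an open connection, snapshot, then unlock.
    # Dataset name and credentials are hypothetical.
    import subprocess
    import pymysql

    conn = pymysql.connect(host="localhost", user="root", password="secret")
    try:
        cur = conn.cursor()
        # Block writes while the snapshot is taken; InnoDB's redo log
        # makes the snapshotted files recoverable to a consistent state.
        cur.execute("FLUSH TABLES WITH READ LOCK")
        subprocess.check_call(
            ["zfs", "snapshot", "tank/owncloud@db-consistent"])
        cur.execute("UNLOCK TABLES")
    finally:
        conn.close()

)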
I'll let the CephFS experts pipe up, but the usual disclaimers about
CephFS stability do apply, in particular the latest (beta) version of Ceph
has this line on top of the changelog:
---
Highlights here include lots of RGW Swift fixes, RBD feature work
surrounding the new object map feature, more CephFS snapshot fixes, and a
few important CRUSH fixes.
---
Now you could just mount an RBD image (or run a VM) with BTRFS and have
snapshots again that are known to work.
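Roughly along these lines (pool, image, and mount point are all invented for the example, and the device name depends on what "rbd map" returns):

    #!/usr/bin/env python
    # Sketch of the RBD + BTRFS idea: the image lives in Ceph,
    # the snapshots are ordinary BTRFS snapshots on top of it.
    # Pool, image, and mount point names are hypothetical.
    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    run("rbd", "create", "backup/users", "--size", "1024000")  # size in MB
    run("rbd", "map", "backup/users")      # typically appears as /dev/rbd0
    run("mkfs.btrfs", "/dev/rbd0")
    run("mount", "/dev/rbd0", "/mnt/backup")
    # Per-user "go back in time", BTRFS style:
    run("btrfs", "subvolume", "snapshot", "-r",
        "/mnt/backup", "/mnt/backup/snap-20150626")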
However, going back to my first question up there, I have a feeling that a
functional Ceph cluster with at least 3 storage nodes might be both more
expensive and less performant than what you have now.
A 2 node DRBD cluster might fit your needs better.
Christian
--
Christian Balzer Network/Systems Engineer
chibi@xxxxxxx Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com