Re: S3 and RBD backup

Hi Sanjeev,

This is something we have started on a test cluster for now; if it proves robust, we will bring it to production.
We are using the Ceph functionality described here: https://docs.ceph.com/en/pacific/mgr/nfs/, available starting from Pacific.
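
For reference, a minimal sketch of the commands involved (the cluster name "backupnfs", the bucket "mybucket" and the pseudo path are placeholders, and the exact argument syntax differs slightly between releases, so check the linked documentation for your version):

    # create an NFS-Ganesha service managed by the orchestrator
    ceph nfs cluster create backupnfs "2 host1,host2"

    # expose an existing RGW bucket under an NFSv4 pseudo path
    ceph nfs export create rgw --cluster-id backupnfs --pseudo-path /mybucket --bucket mybucket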

Best,

Giuseppe

From: Sanjeev Jha <sanjeev_mac@xxxxxxxxxxx>
Date: Thursday, 19 May 2022 at 09:41
To: Lo Re Giuseppe <giuseppe.lore@xxxxxxx>, stéphane chalansonnet <schalans@xxxxxxxxx>
Cc: "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>
Subject: Re:  Re: S3 and RBD backup

Hi Giuseppe,

Thanks for your suggestion.

Could you please elaborate on the term "exporting a bucket as an NFS share"? How are you exporting the bucket? Are you using S3FS for this, or some other mechanism?

Best regards,
Sanjeev
________________________________
From: Lo Re Giuseppe <giuseppe.lore@xxxxxxx>
Sent: Thursday, May 19, 2022 11:45 AM
To: stéphane chalansonnet <schalans@xxxxxxxxx>; Sanjeev Jha <sanjeev_mac@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Re: S3 and RBD backup

Hi,

We are doing exactly the same: exporting the bucket as an NFS share and running our backup software on it to get the data to tape.
Given the data volumes, replication to another disk-based S3 endpoint is not viable for us.
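
On the backup host the export is then consumed like any other NFS share; a minimal sketch, assuming a made-up gateway host and pseudo path:

    # mount the Ganesha export read-only where the backup agent can reach it
    mount -t nfs -o vers=4.1,ro nfs-gw.example.com:/mybucket /mnt/s3-backup
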
Regards,

Giuseppe

On 18.05.22, 23:14, "stéphane chalansonnet" <schalans@xxxxxxxxx> wrote:

    Hello,

    In fact, S3 data should be replicated to another region or AZ, and backups
    should be managed with versioning on the bucket.
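
    For illustration, versioning can be enabled on a bucket with any S3 client
    pointed at the RGW endpoint; a sketch with the AWS CLI (endpoint URL and
    bucket name are placeholders):

        aws s3api put-bucket-versioning \
            --endpoint-url https://rgw.example.com \
            --bucket mybucket \
            --versioning-configuration Status=Enabled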

    But, in our case, we needed to secure the backup of databases (running on K8s)
    into our external backup solution (EMC NetWorker).

    We implemented Ganesha and created an NFS export pointing to the buckets of
    some S3 users.
    The NFS export was then mounted on the backup storage node and backed up from there.
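
    As an illustration only, an NFS-Ganesha export backed by the RGW FSAL
    typically looks roughly like the block below (export ID, bucket, user and
    keys are placeholders; check the nfs-ganesha RGW sample config for the
    options valid in your version):

        EXPORT {
            Export_ID = 100;
            Path = "mybucket";           # bucket to expose
            Pseudo = "/mybucket";        # NFSv4 pseudo path seen by clients
            Access_Type = RO;            # read-only is enough for backups
            Squash = No_Root_Squash;
            FSAL {
                Name = RGW;
                User_Id = "backupuser";  # RGW user owning the bucket
                Access_Key_Id = "ACCESSKEY";
                Secret_Access_Key = "SECRETKEY";
            }
        }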

    Not the simplest solution, but it works ;)

    Regards,
    Stephane



    On Wed, 18 May 2022 at 22:34, Sanjeev Jha <sanjeev_mac@xxxxxxxxxxx> wrote:

    > Thanks, Janne, for the detailed information.
    >
    > We have an RHCS 4.2 non-collocated setup in one DC only. There are a few
    > RBD volumes mapped to a MariaDB database.
    > Also, an S3 endpoint with a bucket is being used to upload objects. No
    > multisite zone has been implemented yet.
    > My requirement is to take backups of the RBD images and the database.
    > How can S3 bucket backup and restore be done?
    > We are looking at open-source tools like rclone for S3 and Benji for RBD,
    > but we are not sure whether these tools would be enough to achieve the
    > backup goal.
    > Your suggestion based on the above case would be much appreciated.
    >
    > Best,
    > Sanjeev
    >
    > ________________________________
    > From: Janne Johansson <icepic.dz@xxxxxxxxx>
    > Sent: Tuesday, May 17, 2022 1:01 PM
    > To: Sanjeev Jha <sanjeev_mac@xxxxxxxxxxx>
    > Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
    > Subject: Re:  S3 and RBD backup
    >
    > On Mon, 16 May 2022 at 13:41, Sanjeev Jha <sanjeev_mac@xxxxxxxxxxx> wrote:
    > > Could someone please let me know how to take S3 and RBD backups from the
    > > Ceph side, and whether backups can be taken from the client/user side?
    > > Which tool should I use for the backup?
    >
    > Backing data up, or replicating it, means weighing a lot of variables and
    > options and choosing whatever has the least negative effects for your own
    > environment and your own demands. Some
    > options will cause a lot of network traffic, others will use a lot of
    > CPU somewhere, others will waste disk on the destination for
    > performance reasons and some will have long and complicated restore
    > procedures. Some will be real-time copies, but those might put extra
    > load on the cluster while running; others will be asynchronous, but
    > might need a database at all times to keep track of what not to copy
    > because it is already at the destination. Some synchronous options
    > might even make writes slower in order to guarantee that ALL copies are
    > in place before sending clients an ACK; others will not, and those might
    > lose data that the client thought was delivered 100% OK.
    >
    > Without knowing what your demands are, or knowing what situation and
    > environment you are in, it will be almost impossible to match the
    > above into something that is good for you.
    > Some might have a monetary cost, some may require a complete second
    > cluster of equal size, some might have a cost in terms of setup work
    > from clueful ceph admins that will take a certain amount of time and
    > effort. Some options might require clients to change how they write
    > data into the cluster in order to help the backup/replication system.
    >
    > There is unfortunately no single best choice for all clusters; there
    > might not even be one good option that covers both S3 and RBD, since
    > they are inherently very different.
    > RBD restores will almost certainly be full restores of a large, complete
    > image, while S3 users might want only the object
    > foo/bar/MyImportantWriting.doc from last Wednesday back, without
    > reverting the whole bucket or the whole S3 setup.
    >
    > I'm quite certain that there will not be a single cheap, fast, efficient,
    > scalable, unnoticeable, easy solution that solves all these problems at
    > once; rather, you will have to focus on what the toughest limitations are
    > (money, time, disk, rack space, network capacity, client and I/O demands?)
    > and look for solutions (or products) that work well with those
    > restrictions.
    >
    > --
    > May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



