CEPH Cluster Backup - Options on my solution

Hi All

I've been worried that I did not have a good backup of my cluster, and
having looked around I could not find anything that did not require
local storage.

I found Rhian's script while looking for a backup solution before a
major version upgrade, and found that it worked very well.

I'm looking for opinions on this method below.

This pipes the data over stdout directly from rbd to an S3 host,
encrypting it on the way.

I have a few worries.

1. The snapshots: I think I should be deleting the older ones and only
keeping the last one or two.
2. Having only one full backup and diffs from there seems wrong to me,
but the amount of data would seem to preclude creating a new full
backup on a regular basis.
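For worry 1, something like this could prune the old snapshots (an
untested sketch, not part of the script below; the names are
%y%m%d%H%M timestamps, so a plain sort is chronological):

```shell
#!/bin/bash
# Untested sketch: prune all but the newest $KEEP snapshots of an image.
KEEP=2

# read snapshot names on stdin, print all but the newest $KEEP
select_stale() {
    sort | head -n -"$KEEP"
}

prune_snaps() {
    local pool=$1 vol=$2
    # 'rbd snap ls' prints a header row; column 2 is the snapshot name
    rbd snap ls "$pool/$vol" | awk 'NR>1 {print $2}' \
        | select_stale \
        | while read -r snap; do
              echo "rbd snap rm $pool/$vol@$snap"
              rbd snap rm "$pool/$vol@$snap"
          done
}
```

The newest snapshot must always survive, since it is the --from-snap
base for the next diff; keeping two gives a safety margin.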

I'm sure that others will pipe up with more issues.

Note: the sum.c program is needed because S3 needs to be told the size
of files larger than 5 GiB (via --expected-size), and the standard
tools I tested all had a 32-bit limit when printing the size in bytes.
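An awk function could stand in for sum.c, if you trust awk's doubles:
they are exact up to 2^53 bytes (8 PiB), which is plenty here.
Untested sketch:

```shell
# sum column 2 of `rbd diff` output, skipping the header row;
# awk doubles are exact up to 2^53, so sizes well past 5 GiB print correctly
rbd_diff_bytes() {
    awk 'NR > 1 { total += $2 } END { printf "%.0f\n", total }'
}
# e.g. RBD_SIZE=$(rbd diff "$pool/$vol" | rbd_diff_bytes)
```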

Cheers

Mike


#!/bin/bash
###
# Original Author: Rhian Resnick - Updated a lot by Mike O'Connor
# Purpose: Backup CEPH RBD using snapshots. The files that are created
# should be stored off the ceph cluster, but you can use ceph storage
# during the process of backing them up.
###


export AWS_PROFILE=wasabi
PUBKEY='PublicKey'

pool=$1
if [ "$pool" == "" ]
then
    echo "Usage: $0 pool"
    exit 1
fi
rbd ls "$pool" | while read -r vol
do
    if [ "$vol" == "ISO" ]
    then
        continue
    fi
    # Look up latest backup file

    echo "BACKUP ${vol}"
    LASTSNAP=$(aws s3 ls "s3://dcbackup/$vol/" | sort | tail -n 1 | awk '{print $4}' | cut -d "." -f 1)
    echo "Last Snap: $vol/$LASTSNAP"

    # Create a snap, we need this to do the diff
    NEWSNAP=$(date +%y%m%d%H%M)
    echo "New Snap: $NEWSNAP"
    echo rbd snap create "$pool/$vol@$NEWSNAP"
    rbd snap create "$pool/$vol@$NEWSNAP"

    if [ "$LASTSNAP" == "" ]
    then
        RBD_SIZE=$(rbd diff "$pool/$vol" | ./sum)
        echo "rbd export-diff $pool/$vol@$NEWSNAP - | seccure-encrypt ${PUBKEY} | aws s3 cp --expected-size ${RBD_SIZE} - s3://dcbackup/$vol/$NEWSNAP.diff"
        rbd export-diff "$pool/$vol@$NEWSNAP" - | seccure-encrypt "${PUBKEY}" | aws s3 cp --expected-size "${RBD_SIZE}" - "s3://dcbackup/$vol/$NEWSNAP.diff"
    else
        RBD_SIZE=$(rbd diff --from-snap "$LASTSNAP" "$pool/$vol" | ./sum)
        echo "rbd export-diff --from-snap $LASTSNAP $pool/$vol@$NEWSNAP - | seccure-encrypt ${PUBKEY} | aws s3 cp --expected-size ${RBD_SIZE} - s3://dcbackup/$vol/$NEWSNAP.diff"
        # note: --expected-size was missing from this branch's actual
        # command (only the echo had it), which breaks uploads over 5 GiB
        rbd export-diff --from-snap "$LASTSNAP" "$pool/$vol@$NEWSNAP" - | seccure-encrypt "${PUBKEY}" | aws s3 cp --expected-size "${RBD_SIZE}" - "s3://dcbackup/$vol/$NEWSNAP.diff"
    fi
    echo
    echo
done
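For completeness, restore would be the mirror image. Untested sketch,
assuming the bucket layout above and that seccure-decrypt can read the
private key without prompting:

```shell
#!/bin/bash
# Untested restore sketch: replay the full diff and then each
# incremental, in timestamp order, into an existing image.

# column 4 of 'aws s3 ls' output is the object name; the timestamp
# names sort chronologically
list_diffs() {
    awk '{print $4}' | sort
}

restore_vol() {
    local pool=$1 vol=$2
    aws s3 ls "s3://dcbackup/$vol/" | list_diffs \
        | while read -r diff; do
              echo "applying $diff"
              aws s3 cp "s3://dcbackup/$vol/$diff" - \
                  | seccure-decrypt \
                  | rbd import-diff - "$pool/$vol"
          done
}
```

The target image has to be created (rbd create, with the right size)
before the first import-diff, since the first diff was taken without
--from-snap.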


#include <stdio.h>
#include <inttypes.h>
#include <string.h>
#include <stdlib.h>

/* sum column 2 of input with lines like:  number number word
 * skipping the first (header) row
 *
 * usage: sum [-v] < file
 *
 * if -v is specified, a count of rows processed (including the header)
 * is also output
 */

int main(int argc, char *argv[]) {
        uint64_t sum = 0;
        size_t n = 0;
        unsigned int count = 1;
        char *buf = NULL;
        char word[64];

        /* skip the header row */
        if (getline(&buf, &n, stdin) == -1)
                return 1;
        while (1) {
                uint64_t offset, bytes;
                /* SCNu64 keeps the scan 64-bit clean on 32-bit platforms too */
                if (fscanf(stdin, "%" SCNu64 " %" SCNu64 " %63s",
                           &offset, &bytes, word) != 3)
                        break;
                ++count;
                sum += bytes;
        }
        free(buf);
        if (argc > 1 && strcmp(argv[1], "-v") == 0)
                printf("%u %" PRIu64 "\n", count, sum);
        else
                printf("%" PRIu64 "\n", sum);
        return 0;
}
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



