Re: CephFS as Offline Storage

I have set up multiple one-node Ceph clusters with CephFS for
non-productive workloads over the last few years.
There were no major issues, only a broken HDD once. The question is what
kind of EC or replication you will use. Also, only power off the node in a
clean and healthy state ;-)
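
On a one-node cluster the failure domain has to be the OSD rather than the
host. A rough sketch of the EC variant (profile name, pool name and k/m
values are just an example):

    ceph osd erasure-code-profile set ec-k4m2 k=4 m=2 crush-failure-domain=osd
    ceph osd pool create cephfs_data 64 64 erasure ec-k4m2
    ceph osd pool set cephfs_data allow_ec_overwrites true   # required for CephFS data on EC

For a replicated pool on a single node you likewise need a CRUSH rule with
failure domain "osd"; the default host-based rule cannot place more than one
copy.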

What would also interest me: pausing the cluster with "ceph osd pause" and
then sending the nodes to hibernate.
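
Roughly what I have in mind (untested sketch; "ceph osd pause" sets the
pauserd/pausewr flags, and noout/norebalance are the flags Eugen mentions
below):

    # before hibernating, with the cluster in HEALTH_OK
    ceph osd set noout
    ceph osd set norebalance
    ceph osd pause            # sets pauserd + pausewr, stops all client I/O
    systemctl hibernate

    # after wake-up, once all OSDs are back up
    ceph osd unpause
    ceph osd unset norebalance
    ceph osd unset noout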

Joachim


On Wed, 22 May 2024 at 07:55, Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx> wrote:

> On Tue, May 21, 2024 at 08:54:26PM +0000, Eugen Block wrote:
> > It's usually no problem to shut down a cluster. Set at least the noout
> > flag; the other flags like norebalance, nobackfill etc. won't hurt
> > either. Then shut down the servers. I do that all the time with test
> > clusters (they do have data, just not important at all), and I've never
> > had data loss after powering them back on. When all OSDs are up, unset
> > the flags and let it scrub. Usually, the (deep-)scrubbing will start
> > almost immediately.
>
> One surprise with the Ubuntu test cluster (non-containerized, Ubuntu
> packages) that I regularly shut down: the signals that log rotation (I
> assume) sends to the Ceph daemons interfere with Ceph startup. When it is
> rebooted after some days, it is the same on all nodes: no Ceph daemon is
> running.
> Workaround: another reboot
>
> Matthias
>
> >
> > Quoting "adam.ther" <adam.ther@xxxxxxx>:
> >
> > > Thanks guys,
> > >
> > > I think I'll just risk it since it's just for backup, then write
> > > something up later as a follow-up on what happens, in case others
> > > want to do similar. I agree it's not typical; I'm a bit of an odd-duck
> > > data hoarder.
> > >
> > > Regards,
> > >
> > > Adam
> > >
> > > On 5/21/24 14:21, Matt Vandermeulen wrote:
> > > > I would normally vouch for ZFS for this sort of thing, but the mix
> > > > of drive sizes will be... an inconvenience, at best. You could get
> > > > creative with the hierarchy (making raidz{2,3} of mirrors of
> > > > same-sized drives, or something), but it would be far from ideal. I
> > > > use ZFS for my own home machines; however, all the drives are
> > > > identical.
> > > >
> > > > I'm curious about this application of Ceph though, in home-lab use.
> > > > Performance likely isn't a top concern, just a durable persistent
> > > > storage target, so this is an interesting use case.
> > > >
> > > >
> > > > On 2024-05-21 17:02, adam.ther wrote:
> > > > > Hello,
> > > > >
> > > > > It's all non-corporate data; I'm just trying to cut back on
> > > > > wattage (it removes around 450 W of the 2.4 kW) by powering down
> > > > > the backup servers that house 208 TB while no backup or restore is
> > > > > running.
> > > > >
> > > > > ZFS sounds interesting; however, does it play nice with a mix of
> > > > > drive sizes? That's primarily why I use Ceph: it's okay (if not
> > > > > ideal) with 4x 22 TB, 8x 10 TB, 10x 4 TB.
> > > > >
> > > > > So, that said, would Ceph have any known issues with long
> > > > > power-downs, aside from nagging about the scrubbing schedule? Mark,
> > > > > I see you said it wouldn't matter, but does Ceph not use a
> > > > > date-based scheduler?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Adam
> > > > >
> > > > > On 5/21/24 13:29, Marc wrote:
> > > > > > > > I think it is his lab, so maybe it is a test setup for
> > > > > > > > production.
> > > > > > > Home production?
> > > > > > A home setup to test on, before he applies changes to his
> > > > > > production.
> > > > > >
> > > > > > Saluti 🍷 ;)
> > > > > >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



