Re: how to power off a cephfs cluster cleanly

On Thu, Jul 25, 2019 at 7:48 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> Hi all,
>
> In September we'll need to power down a CephFS cluster (currently
> mimic) for a several-hour electrical intervention.
>
> Having never done this before, I thought I'd check with the list.
> Here's our planned procedure:
>
> 1. umount /cephfs from all HPC clients.
> 2. ceph osd set noout
> 3. wait until there is zero IO on the cluster
> 4. stop all mds's (active + standby)

You can also use `ceph fs set <name> down true`, which will flush all
metadata/journals, evict any lingering clients, and leave the file
system down until it is manually brought back up, even if standby
MDSs are available.
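
To make that concrete, here is a rough sketch of the full sequence. It
assumes the file system is named `cephfs` and is mounted at /cephfs on
the clients; adjust names and paths to your environment:

    # on every client: unmount CephFS
    umount /cephfs

    # prevent OSDs from being marked out while they are powered off
    ceph osd set noout

    # flush MDS journals, evict any remaining clients, and take the
    # file system down; it stays down even with standbys available
    ceph fs set cephfs down true

    # confirm all MDS ranks have stopped before powering off
    ceph fs status cephfs

    # --- after the intervention ---
    ceph fs set cephfs down false   # bring the file system back up
    ceph osd unset noout            # allow normal out-marking again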

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


