Re: Questions on Ceph cluster without OS disks

Hello Thomas,

By default we allocate 1 GB per host on the management node, and nothing on
the PXE-booted server.

This value can be changed in the management container config file
(/config/config.yml):
> ...
> logFilesPerServerGB: 1
> ...
After changing the config, you need to restart the mgmt container.
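
For example, assuming the management container runs under Docker and is
named "croit" (an assumption; adjust to your deployment), the restart could
look like this:
> # restart the mgmt container so it picks up the changed config
> docker restart croit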

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Mon, 23 Mar 2020 at 09:30, Thomas Schneider <74cmonty@xxxxxxxxx> wrote:

> Hello Martin,
>
> how much disk space do you reserve for logs in the PXE setup?
>
> Regards
> Thomas
>
> On 22.03.2020 at 20:50, Martin Verges wrote:
> > Hello Samuel,
> >
> > we at croit.io don't use NFS to boot up servers. We copy the OS directly
> > into RAM (approximately 0.5-1 GB). Think of it like a container: you
> > start it and throw it away when you no longer need it.
> > This way we free up the drive slots otherwise taken by OS hard disks to
> > add more storage per node and reduce overall costs, as 1 GB of RAM is
> > cheaper than an OS disk and consumes less power.
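> >
> > To illustrate the idea, here is a minimal sketch of booting a kernel and
> > initramfs entirely into RAM, with QEMU standing in for a PXE-booted node
> > (image names are placeholders, not our actual setup):
> >
> > # Boot a RAM-only OS: kernel and initramfs come from outside the machine
> > # (here local files; on a real node, the PXE boot server). The initramfs
> > # itself serves as the in-memory root filesystem - no disk is involved.
> > qemu-system-x86_64 \
> >     -m 2048 \
> >     -kernel vmlinuz \
> >     -initrd initramfs.img \
> >     -append "console=ttyS0" \
> >     -nographic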
> >
> > If our management node is down, nothing will happen to the cluster. No
> > impact, no downtime. However, you do need the mgmt node to boot up the
> > cluster. So after a very rare total power outage, the first system to
> > bring up would be the mgmt node and then the cluster itself. But again,
> > if you configure your systems correctly, no manual work is required to
> > recover from that. For everything else, it is possible (but definitely
> > not needed) to deploy our mgmt node in active/passive HA.
> >
> > We have several hundred installations worldwide in production
> > environments. Our strong PXE knowledge comes from more than 20 years of
> > datacenter hosting experience, and it has never failed us in more than
> > 10 years.
> >
> > The main benefits of this approach:
> >  - Immutable OS, freshly booted: Every host has exactly the same
> >    version, the same libraries, kernel, and Ceph release.
> >  - OS is heavily tested by us: Every croit deployment runs exactly the
> >    same image, so we find errors much faster and hit far fewer of them.
> >  - Easy updates: Updating the OS, Ceph, or anything else is just a node
> >    reboot. No cluster downtime, no service impact, fully automatic
> >    handling by our mgmt software (see the sketch after this list).
> >  - No OS to install: No maintenance costs, no labor, no separate OS
> >    management required.
> >  - Centralized logs/stats: As the OS is booted into memory, all logs and
> >    statistics are collected in a central place for easy access.
> >  - Easy to scale: It doesn't matter whether you boot 3 or 300 nodes;
> >    they all boot the exact same image in a few seconds.
> >  ... lots more
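> >
> > As a sketch of what such an update looks like per node (the exact steps
> > are handled by our mgmt software; "node01" is a placeholder, and the
> > noout flag is standard Ceph practice for planned reboots):
> >
> > # from an admin node: keep CRUSH from rebalancing during the reboot
> > ceph osd set noout
> > ssh node01 systemctl reboot   # node01 comes back with the updated image
> > ceph osd unset noout          # re-enable rebalancing once OSDs rejoin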
> >
> > Please do not hesitate to contact us directly. We always try to offer
> > excellent service and are strongly customer-oriented.
> >
> > --
> > Martin Verges
> > Managing director
> >
> > Mobile: +49 174 9335695
> > E-Mail: martin.verges@xxxxxxxx
> > Chat: https://t.me/MartinVerges
> >
> > croit GmbH, Freseniusstr. 31h, 81247 Munich
> > CEO: Martin Verges - VAT-ID: DE310638492
> > Com. register: Amtsgericht Munich HRB 231263
> >
> > Web: https://croit.io
> > YouTube: https://goo.gl/PGE1Bx
> >
> >
> > On Sat, 21 Mar 2020 at 13:53, huxiaoyu@xxxxxxxxxxxx <
> > huxiaoyu@xxxxxxxxxxxx> wrote:
> >
> >> Hello, Martin,
> >>
> >> I notice that croit advocates running a Ceph cluster without OS disks,
> >> booting via PXE instead.
> >>
> >> Do you use an NFS server to serve the root file system for each node,
> >> e.g. for hosting configuration files, users and passwords, log files,
> >> etc.? My question is: will the NFS server be a single point of failure?
> >> If the NFS server goes down or the network experiences an outage, Ceph
> >> nodes may not be able to write to their local file systems, possibly
> >> leading to a service outage.
> >>
> >> How do you deal with the above potential issues in production? I am a
> >> bit worried...
> >>
> >> best regards,
> >>
> >> samuel
> >>
> >>
> >>
> >>
> >> ------------------------------
> >> huxiaoyu@xxxxxxxxxxxx
> >>
> >>
> >>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



