Re: Ceph Leadership Team meeting 2021-05-05


 



Thanks!
–––––––––
Sébastien Han
Senior Principal Software Engineer, Storage Architect

"Always give 100%. Unless you're giving blood."

On Thu, May 6, 2021 at 1:59 PM Sebastian Wagner <sewagner@xxxxxxxxxx> wrote:
>
> Hi Sebastien
>
> Am 06.05.21 um 12:22 schrieb Sebastien Han:
> > Hi Sebastian!
> >
> > Thanks!
> > –––––––––
> > Sébastien Han
> > Senior Principal Software Engineer, Storage Architect
> >
> > "Always give 100%. Unless you're giving blood."
> >
> > On Thu, May 6, 2021 at 11:19 AM Sebastian Wagner <sewagner@xxxxxxxxxx> wrote:
> >> Hi Sebastien!
> >>
> >> Am 06.05.21 um 08:51 schrieb Sebastien Han:
> >>> Inline:
> >>> Thanks!
> >>> –––––––––
> >>> Sébastien Han
> >>> Senior Principal Software Engineer, Storage Architect
> >>>
> >>> "Always give 100%. Unless you're giving blood."
> >>>
> >>> On Thu, May 6, 2021 at 12:29 AM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> >>>> Hi all,
> >>>>
> >>>> Highlights from this week's CLT meeting:
> >>>>
> >>>> - Can we deprecate ceph-nano?
> >>>>     - Yes; it is no longer maintained. Josh will note as much in the
> >>>> README and archive this and other old repos in the GitHub org.
> >>>>     - cephadm replaces most use cases since it supports
> >>>> containerization and deployment onto a single host. The team will
> >>>> explore supporting OSDs on loopback devices, which is the main
> >>>> differentiator.
> >>> Small correction: ceph-nano does not use loopback devices; it runs
> >>> "ceph-osd --mkfs" on a directory and can optionally take a block
> >>> device if one is provided.
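> >>>
> >>> Roughly, the bootstrap boils down to something like this (a hand-wavy
> >>> Python sketch of the idea, not cn's actual code, which is Go; the OSD
> >>> id and path are made up):
> >>>
> >>> import pathlib
> >>> import subprocess
> >>>
> >>> osd_id = "0"                                 # hypothetical OSD id
> >>> data_dir = pathlib.Path("/tmp/cn/osd-0")     # plain directory, no block device
> >>> data_dir.mkdir(parents=True, exist_ok=True)
> >>>
> >>> # Initialise the OSD store directly in that directory (keyring/monmap
> >>> # plumbing omitted); a real block device is only used if the user
> >>> # passes one in.
> >>> subprocess.run(
> >>>     ["ceph-osd", "-i", osd_id, "--mkfs", "--osd-data", str(data_dir)],
> >>>     check=True,
> >>> )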
> >> cephadm is not a 100% replacement for ceph-nano, indeed. My
> >> idea was to consolidate things a bit and provide a fully supported
> >> alternative.
> >>
> >>
> >>> As gaps I can see:
> >>>
> >>> * cephadm does not have multi-cluster support.
> >> It does actually. If you're a bit careful with the bootstrap arguments (like
> >> avoiding port conflicts), you can have more than a single cluster.
> > Agreed. However, cn's goal was to be frictionless, assuming people do
> > not need to know about port conflicts or other internal details.
> > It'd be nice if cephadm could change the port automatically if a mon
> > is already running (which I assume is the most problematic issue).
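> > Something along these lines, purely as an illustration (a hypothetical
> > wrapper sketch in Python, not anything cephadm does today; 3300/6789
> > are just the msgr defaults):
> >
> > import socket
> >
> > def port_free(port: int, host: str = "127.0.0.1") -> bool:
> >     """Return True if nothing is currently listening on this TCP port."""
> >     with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
> >         return s.connect_ex((host, port)) != 0
> >
> > def pick_mon_port(default: int = 3300) -> int:
> >     """Walk upwards from the default msgr2 port until a free one is found."""
> >     port = default
> >     while not port_free(port):
> >         port += 1
> >     return port
> >
> > # A bootstrap wrapper could feed pick_mon_port() into the new cluster's
> > # mon configuration instead of failing on the conflict.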
>
> Yeah, multi-cluster support requires some love indeed. We had different
> priorities.
>
> >
> >>
> >>> * With cn you can run as many clusters as you like, with any Ceph version.
> >> Right, we have a dependency on the major Ceph version.
> >>> * cephadm only runs on Linux, whereas cn can run on any OS that has a
> >>> Docker daemon running.
> >> Incidentally, I had a discussion with someone who was interested in
> >> porting cephadm to FreeBSD. But yes, cephadm makes use of systemd.
> >>
> >>
> >>> Otherwise, I'm fine with archiving the repository. However, there is
> >>> still a lot of interest; I frequently get e-mails about it, so we
> >>> should make it clear that cephadm replaces it.
> >>> Also, are we OK with losing the audience of non-Linux users? I believe
> >>> they make up a good portion of the user base.
> >>
> >> That's Windows, right?
> > Both Windows and macOS: Docker on {Mac|Windows} is really popular.
> > Losing such an audience wouldn't be good, since I believe a lot of
> > non-technical people still use cn for demos, regardless of which
> > platform they are running on.
>
> Do you have any ideas? Porting cephadm to Windows isn't going to be
> trivial. Wait, what? AFAIK we don't have native Ceph binaries for
> Windows. Guess it's WSL then?

Yes, yes, it's the WSL 2 driver: Docker runs inside a VM :)
cn internally uses the Docker API to manipulate the container(s), so
we don't even need a docker binary on Windows!
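
To illustrate the idea, here is a minimal sketch with the Python Docker
SDK (cn itself is written in Go and uses the Go client; the "ceph-nano"
name prefix is just how cn labels its containers, as far as I remember):

import docker

# Talk to the Docker daemon straight over its API; on Windows this goes
# through the endpoint Docker Desktop exposes, so no docker CLI binary
# is needed on the host.
client = docker.from_env()

for c in client.containers.list(all=True):
    if c.name.startswith("ceph-nano"):
        print(c.name, c.status)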

>
> >
> >>> FYI, I just archived https://github.com/ceph/cn-core, whose goal was
> >>> to use a Golang-based bootstrapper instead of the demo.sh script from
> >>> ceph-container.
> >>>
> >>>> - Discussed the Redmine tracker triage process and considered removing
> >>>> the top-level "Ceph" project from the list of available new ticket
> >>>> destinations. I and others pushed back on this since we need a place
> >>>> to put non-subproject-specific issues, so we agreed that going forward
> >>>> project leads will scrub the top-level Ceph project for new relevant
> >>>> issues during their regular bug scrubs. I took an action item to go
> >>>> through the existing backlog and sort it appropriately (though I also
> >>>> think Sage did a bunch this morning).
> >>>>
> >>>> - We added a COSI repository in the Ceph org for working with RGW.
> >>>> - Pacific v16.2.2 needed a release notes review, which it got, so the
> >>>> release is out now.
> >>>> - We got a question about new options for the dashboard and other
> >>>> component communication, following on from the CDS sessions. Ernesto
> >>>> will follow up on this.
> >>>>
> >>>> -Greg
>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



