And to clarify, too, this Aquarium work is the first attempt by folks to
build a file-backed storage setup; it's great to see innovation around
this.

Matt

On Thu, Oct 20, 2022 at 1:50 PM Joao Eduardo Luis <joao@xxxxxxxx> wrote:

> On 2022-10-20 17:46, Matt Benjamin wrote:
> > The ability to run as a stand-alone service without a RADOS service
> > comes from the Zipper API work, which is part of upstream Ceph RGW,
> > obviously. It should relatively soon be possible to load new Zipper
> > store drivers (backends) at runtime, so there won't be a need to
> > maintain a fork of Ceph RGW.
>
> Indeed it does. None of this would be possible without Zipper and the
> SAL abstraction work. :)
>
>   -Joao
>
> > regards,
> >
> > Matt
> >
> > On Thu, Oct 20, 2022 at 1:34 PM Joao Eduardo Luis <joao@xxxxxxxx>
> > wrote:
> >
> >> # s3gw v0.7.0
> >>
> >> The s3gw team is announcing the release of s3gw v0.7.0. This release
> >> contains fixes to known bugs and new features, including an early
> >> version of an object explorer in the web-based UI. See the CHANGELOG
> >> below for more information.
> >>
> >> This project is still in early-stage development: it is not
> >> recommended for production systems, and upgrades are not guaranteed
> >> to succeed from one version to another. Additionally, although we
> >> strive for API parity with RADOSGW, features may still be missing.
> >>
> >> Do not hesitate to provide constructive feedback.
> >>
> >> ## CHANGELOG
> >>
> >> Exciting changes include:
> >>
> >> - Bucket management features for non-admin users
> >>   (create/update/delete buckets) in the UI.
> >> - Various improvements to the UI.
> >> - Several bug fixes.
> >> - Improved charts.
> >>
> >> The full changelog can be found at
> >> https://github.com/aquarist-labs/s3gw/releases/tag/v0.7.0
> >>
> >> ## OBTAINING s3gw
> >>
> >> Container images can be found on GitHub's container registry:
> >>
> >>   ghcr.io/aquarist-labs/s3gw:v0.7.0
> >>   ghcr.io/aquarist-labs/s3gw-ui:v0.7.0
> >>
> >> Additionally, a Helm chart [1] is available on ArtifactHub:
> >>
> >>   https://artifacthub.io/packages/helm/s3gw/s3gw
> >>
> >> For additional information, see the documentation:
> >>
> >>   https://s3gw-docs.readthedocs.io/en/latest/
> >>
> >> ## WHAT IS s3gw
> >>
> >> s3gw is an S3-compatible service that focuses on deployment within a
> >> Kubernetes environment backed by any PVC, including Longhorn [2].
> >> Since its inception, the primary focus has been on Cloud Native
> >> deployments. However, s3gw can be deployed in a myriad of scenarios
> >> (including a standalone container), provided it has some form of
> >> storage attached.
> >>
> >> s3gw is based on Ceph's RADOSGW but runs as a stand-alone service
> >> without the RADOS cluster, relying on a storage backend still under
> >> heavy development by the storage team at SUSE. Additionally, the
> >> s3gw team is developing a web-based UI for management and an object
> >> explorer.
> >>
> >> More information can be found at https://aquarist-labs.io/s3gw/ or
> >> https://github.com/aquarist-labs/s3gw/ .
> >>
> >>   -Joao and the s3gw team
> >>
> >> [1] https://github.com/aquarist-labs/s3gw-charts
> >> [2] https://longhorn.io
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> > --
> > Matt Benjamin
> > Red Hat, Inc.
> > 315 West Huron Street, Suite 140A
> > Ann Arbor, Michigan 48103
> >
> > http://www.redhat.com/en/technologies/storage
> >
> > tel.  734-821-5101
> > fax.  734-769-8938
> > cel.  734-216-5309
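The standalone-container deployment mentioned in the announcement could be sketched roughly as below. This is a hedged illustration, not an invocation confirmed by the thread: the port mapping (7480, RADOSGW's usual default) and the `/data` mount point are assumptions, so check the linked s3gw documentation for the actual options.

```shell
# Pull the announced release image (image name taken from the
# announcement above).
docker pull ghcr.io/aquarist-labs/s3gw:v0.7.0

# Run as a standalone container with some local storage attached.
# NOTE: port 7480 (RADOSGW's default) and the /data mount point are
# assumptions for illustration; consult
# https://s3gw-docs.readthedocs.io/en/latest/ for the real flags.
docker run --name s3gw \
  -p 7480:7480 \
  -v /srv/s3gw-data:/data \
  ghcr.io/aquarist-labs/s3gw:v0.7.0
```

The same image could presumably be deployed on Kubernetes via the Helm chart referenced as [1] in the announcement, with a PVC (e.g. Longhorn) providing the attached storage.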