Re: State of Gluster project

On 2020-06-21 01:26, Strahil Nikolov wrote:
> The efforts are far less than reconstructing the disk of a VM from
> Ceph. In Gluster, just run a find on the brick searching for the
> name of the VM disk and you will find the VM_IMAGE.xyz pieces (where
> xyz is just a number), then concatenate the list into a single file.
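
(For anyone reading this in the archives: if I understand correctly, the procedure boils down to something like the sketch below. The brick path and image name are made up, and on a volume with sharding enabled the pieces are actually named after the file's GFID under the brick's .shard directory, so the pattern would need adapting.)

  # locate the pieces of the image on the brick
  find /srv/brick1 -name 'VM_IMAGE.*'

  # concatenate them in numeric suffix order into a single file
  # (assumes the only dot in each path is the one before the suffix)
  for f in $(find /srv/brick1 -name 'VM_IMAGE.*' | sort -t . -k 2 -n); do
      cat "$f" >> /recovery/VM_IMAGE.restored
  done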

Sure, but it is somewhat impractical with a 6 TB fileserver image and 500 users screaming for their files ;)

And I fully expect the reconstruction to be much easier than with Ceph but, from what I read, Ceph is less likely to break in the first place. That said, I admit I have never seriously run a Ceph cluster, so maybe it is more fragile than I expect.

> That's true, but you could also use NFS Ganesha, which is more
> performant than FUSE and just as reliable.

On this very list I have read about many users having various problems with NFS Ganesha. Is that a wrong impression?
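
(For context: Ganesha talks to the volume through libgfapi instead of going through a FUSE mount, which is where the performance advantage comes from. A minimal export sketch, assuming a hypothetical volume named gv0 - check the nfs-ganesha documentation for the exact syntax of your version:)

  EXPORT {
      Export_Id = 1;               # unique ID for this export
      Path = "/";                  # path within the volume to export
      Pseudo = "/gv0";             # NFSv4 pseudo-filesystem path
      Access_Type = RW;
      Squash = No_root_squash;
      FSAL {
          Name = "GLUSTER";        # Gluster FSAL, built on libgfapi
          Hostname = "localhost";  # any node of the trusted pool
          Volume = "gv0";          # the Gluster volume to export
      }
  }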

> It's not so hard to do it - just use either 'reset-brick' or
> 'replace-brick'.
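
(From memory the invocations look roughly like this, with hypothetical volume and host names - check 'gluster volume help' for the exact syntax of your version:)

  # re-initialize the same brick in place, e.g. after replacing the disk
  gluster volume reset-brick myvol node1:/srv/brick1 start
  gluster volume reset-brick myvol node1:/srv/brick1 node1:/srv/brick1 commit force

  # or migrate to a brand new brick on another node
  gluster volume replace-brick myvol node1:/srv/brick1 node2:/srv/brick1 commit force

  # self-heal then repopulates the new brick from the surviving replicas
  gluster volume heal myvol full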

Sure - the command itself is simple enough. The point is that each such reconstruction is considerably riskier than a simple RAID rebuild. Do you run a full Gluster SDS, skipping RAID? How do you find this setup?

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


