Re: State of Gluster project

On 21 June 2020 at 10:53:10 GMT+03:00, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:
>On 2020-06-21 01:26, Strahil Nikolov wrote:
>> The efforts are far less than reconstructing the disk of a VM from
>> CEPH. In gluster, just run a find on the brick searching for the
>> name of the VM disk and you will find the VM_IMAGE.xyz (where xyz is
>> just a number), then concatenate the list into a single file.
>
>Sure, but it is somewhat impractical with a 6 TB fileserver image and
>500 users screaming for their files ;)
>And I fully expect the reconstruction to be much easier than with Ceph,
>but from what I read, Ceph is less likely to break in the first place.
>But I admit I never seriously ran a Ceph cluster, so maybe it is more
>fragile than I expect.
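For reference, the brick-level stitching mentioned above looks roughly like this (paths and the image name are only examples; on sharded volumes the pieces actually sit under the brick's .shard directory and are named after the file's GFID, so adjust the pattern to your layout):

    # all paths and names are illustrative - adapt to your setup
    BRICK=/data/brick1
    IMG=VM_IMAGE
    # collect the pieces, order them by their numeric suffix, stitch them together
    find "$BRICK" -type f -name "${IMG}*" | sort -t. -k2 -n | xargs cat > /restore/${IMG}.recovered
    # sanity-check the result before handing it back to the VM
    qemu-img info /restore/${IMG}.recovered
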

With every community project, you are in the position of a Beta Tester - no matter whether it's Fedora, Gluster or CEPH. So far, I have had issues with upstream projects only during and immediately after patching - but this is properly mitigated with a reasonable patching strategy (patch the test environment and, several months later, patch prod with the same repos).
Enterprise Linux breaks too (and a lot), despite having 10 times more users and use cases, so you cannot expect to start using Gluster and assume that a free project won't break at all.
Our part in this project is to help the devs create a test case for our workload, so regressions are reduced to a minimum.

In the past 2 years, we had 2 major issues with VMware VSAN and 1 major issue with an enterprise storage cluster (both solutions are quite expensive), so I always recommend proper testing of your software.


>> That's true, but you could also use NFS Ganesha, which is more
>> performant than FUSE and just as reliable.
>
>From this very list I read about many users with various problems when 
>using NFS Ganesha. Is that a wrong impression?

From my observations, almost nobody is complaining about Ganesha on this mailing list -> 50% are having issues with geo-replication, 20% are having issues with small-file performance, and the rest have issues with very old versions of Gluster -> v5 or older.
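Just to make the comparison concrete, the two access paths look roughly like this on the client side (hostname and volume name are placeholders, and the Ganesha export must already be configured on the servers):

    # native FUSE mount of the volume
    mount -t glusterfs server1:/myvol /mnt/gluster-fuse
    # the same volume exported via NFS-Ganesha, mounted as plain NFSv4
    mount -t nfs -o vers=4.1 server1:/myvol /mnt/gluster-nfs
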

>> It's not so hard to do it - just use either 'reset-brick' or
>> 'replace-brick'.
>
>Sure - the command itself is simple enough. The point is that each
>reconstruction is quite a bit riskier than a simple RAID
>reconstruction. Do you run a full Gluster SDS, skipping RAID? How did
>you find this setup?

I can't say that a replace-brick on a 'replica 3' volume is riskier than a rebuild of a RAID, but I have noticed that nobody is following Red Hat's guide to use either:
- a RAID6 of 12 disks (2-3 TB each)
- a RAID10 of 12 disks (2-3 TB each)
- JBOD disks in 'replica 3' mode (I'm not sure about the size RH recommends, most probably 2-3 TB)
So far, I didn't have the opportunity to run on JBODs.
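
For the record, the brick replacement itself is only a couple of commands (volume name, hosts and brick paths below are placeholders; check 'gluster volume help' on your version for the exact syntax):

    # reuse the same brick path after the failed disk has been rebuilt and remounted
    gluster volume reset-brick myvol server1:/data/brick1 start
    # ... replace the disk, recreate the filesystem and mount it ...
    gluster volume reset-brick myvol server1:/data/brick1 server1:/data/brick1 commit force
    # or move the brick to a fresh disk/host instead
    gluster volume replace-brick myvol server1:/data/brick1 server2:/data/brick1_new commit force
    # and let self-heal copy the data back
    gluster volume heal myvol info summary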


>Thanks.



