Re: [Gluster-Maintainers] Release 3.10: Testing feedback requested

This is an update to let the community know which tests I have performed against 3.10 RC0 (and later against the 3.10 branch with some further fixes).

Some of this belongs in the respective GitHub testing issues that I filed, and I will be updating those accordingly.

The tests were primarily meant to benchmark Gluster, but in the course of doing so I also tested a few other things around it that should help.

Environment:
- 4 servers, with 12 SAS 10K disks and 4 Intel SSDs each
- 4 clients
- All machines connected via IB gear, but tests run using IP over IB
  - Basically network bandwidth was about 15Gbps
- All machines running CentOS 7.2

Tests:
- Upgrade from 3.8.8 to 3.10.0rc0
- Induce the required healing during the rolling upgrade, to test whether file health recovers post upgrade (a heal-check sketch follows this list)
  - IOW, there was ongoing IO during the upgrade process
  - Tested on Replicate volumes only
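
For anyone wanting to reproduce the heal check: a minimal sketch (the volume name is a placeholder, and it assumes the gluster CLI is on PATH on the node it runs on) that polls heal-info until no entries are pending, before moving on to upgrade the next node:

    #!/usr/bin/env python
    # Hypothetical sketch: after upgrading a node, poll
    # "gluster volume heal <vol> info" and proceed only once no
    # entries are pending heal on any brick.
    import subprocess
    import time

    VOLUME = "testvol"  # placeholder volume name

    def pending_heals(volume):
        # Sum the "Number of entries:" lines across all bricks.
        out = subprocess.check_output(
            ["gluster", "volume", "heal", volume, "info"]).decode()
        total = 0
        for line in out.splitlines():
            if line.startswith("Number of entries:"):
                total += int(line.split(":")[1])
        return total

    def wait_for_heal(volume, poll_seconds=10):
        while pending_heals(volume) > 0:
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        wait_for_heal(VOLUME)
        print("heal complete, safe to upgrade the next node")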

- Created 6 types of volumes (the creation of a couple of these is sketched after this list)
NOTE: Given the disks available, not all volumes existed on the cluster at the same time
  - S(etup)1: Disperse 6x(4+2) on SAS JBOD disks
  - S2: AFR 2-way 18x2 on SAS JBOD disks
  - S3: Disperse 4x(2+1) on SSD disks
  - S4: AFR 2-way 6x2 on SSD disks
  - S3.1: Disperse 4x(2+1) on SAS disks
  - S4.1: AFR 2-way 6x2 on SAS disks
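
For reference, a sketch of how S1 and S2 could be created; the hostnames, brick paths, and round-robin brick ordering here are my own placeholders, not the actual test environment:

    #!/usr/bin/env python
    # Hypothetical creation of the S1/S2 layouts via the gluster CLI.
    import subprocess

    SERVERS = ["server%d" % i for i in range(1, 5)]  # the 4 servers

    def bricks(per_server, path_fmt):
        # Round-robin across servers so consecutive bricks (which form
        # a replica pair / disperse set) land on different machines.
        out = []
        for disk in range(per_server):
            for server in SERVERS:
                out.append("%s:%s" % (server, path_fmt % disk))
        return out

    # S1: Disperse 6x(4+2) -> 36 bricks, 9 SAS disks per server.
    # "force" may be needed, as a 6-brick set spans only 4 servers.
    subprocess.check_call(
        ["gluster", "volume", "create", "s1", "disperse", "6",
         "redundancy", "2"] + bricks(9, "/bricks/sas%d/s1") + ["force"])

    # S2: AFR 2-way 18x2 -> 36 bricks, 9 SAS disks per server.
    subprocess.check_call(
        ["gluster", "volume", "create", "s2", "replica", "2"]
        + bricks(9, "/bricks/sas%d/s2"))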

- Ran the following tests on the above volumes
  - gbench script (see [1])
    - Basically runs IOZone and smallfile based workloads from multiple clients
    - Also purges volume contents and caches in between tests
    - This was run on all six volume setups, i.e., S1-S4, S3.1 and S4.1
    - This test was repeated on S1 and S2 with brick multiplexing turned on
  - Additional smallfile tests (yet to put up the source for these; here is a gist in the meantime [2] (do not judge the code quality :) ); a minimal sketch also follows this list)
    - Single client, single thread: create/read across small (64KB) / medium (10MB) / large (1GB) files
      - So, six tests in total for the above (2 operations x 3 file sizes)
    - Single client, single thread: listing across small (64KB) files, both dropping and not dropping client caches between runs
    - This was run on all six volume setups, i.e., S1-S4, S3.1 and S4.1
    - This test was repeated on S1 and S2 after enabling the new readdir-ahead and security.ima xattr caching (the relevant volume-set commands are sketched after this list as well)
    - This test was repeated on S1 and S2 with brick multiplexing turned on
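
To make the additional smallfile tests concrete, here is a minimal single-client, single-thread sketch along the lines of the gist in [2]; the mount point and the file counts are illustrative, not the values actually used:

    #!/usr/bin/env python
    # Illustrative create/read timing across three file sizes; the
    # real code lives in the gist at [2].
    import os
    import time

    MOUNT = "/mnt/glusterfs"  # placeholder client mount point
    CHUNK = 1024 * 1024
    SIZES = {"small": 64 * 1024, "medium": 10 * 1024 * 1024,
             "large": 1024 * 1024 * 1024}
    COUNTS = {"small": 1000, "medium": 100, "large": 4}  # illustrative

    def create_files(tag, size, count):
        chunk = os.urandom(min(size, CHUNK))
        start = time.time()
        for i in range(count):
            with open(os.path.join(MOUNT, "%s-%d" % (tag, i)), "wb") as f:
                remaining = size
                while remaining > 0:
                    f.write(chunk[:remaining])
                    remaining -= min(len(chunk), remaining)
                f.flush()
                os.fsync(f.fileno())
        return time.time() - start

    def read_files(tag, count):
        start = time.time()
        for i in range(count):
            with open(os.path.join(MOUNT, "%s-%d" % (tag, i)), "rb") as f:
                while f.read(CHUNK):
                    pass
        return time.time() - start

    if __name__ == "__main__":
        for tag, size in sorted(SIZES.items()):
            print("create %s: %.2fs" % (tag, create_files(tag, size, COUNTS[tag])))
            print("read   %s: %.2fs" % (tag, read_files(tag, COUNTS[tag])))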

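For completeness, a sketch of the tunables mentioned above, as I recall them for 3.10; the option names (cluster.brick-multiplex, performance.readdir-ahead, performance.cache-ima-xattrs) should be verified against "gluster volume set help" before use:

    #!/usr/bin/env python
    # Sketch of the volume tunables referenced in the tests above.
    # Option names are from memory -- verify with "gluster volume set help".
    import subprocess

    def vol_set(volume, key, value):
        subprocess.check_call(
            ["gluster", "volume", "set", volume, key, value])

    # Brick multiplexing is cluster-wide, hence set on the special "all" volume
    vol_set("all", "cluster.brick-multiplex", "on")

    # readdir-ahead and security.ima xattr caching, per volume (s1 here)
    vol_set("s1", "performance.readdir-ahead", "on")
    vol_set("s1", "performance.cache-ima-xattrs", "on")
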
Results:
- Ideally I should be posting performance numbers from these tests; although I have them, they are not in a presentable/repeatable form, hence I am not posting numbers here
  - NOTE: I will possibly still post them subsequently
- All tests passed; the issues/bugs faced during testing (about 4-6 of them) have subsequently been fixed in 3.10

Notes:
- Disperse ran with the SSE/AVX CPU extensions (I need to recheck the machines to be sure which; a quick capability check is sketched below)
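
To confirm what a given machine supports, a quick check of the CPU flags; note this only shows what the hardware advertises, not which implementation EC actually selected at runtime:

    #!/usr/bin/env python
    # Report which SIMD extensions the CPU advertises (Linux/x86 only),
    # by reading the "flags" line of /proc/cpuinfo.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
    for ext in ("sse4_1", "sse4_2", "avx", "avx2"):
        print("%s: %s" % (ext, "yes" if ext in flags else "no"))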

Shyam

[1] gbench test: https://github.com/gluster/gbench/blob/master/bench-tests/bt-0000-0001/GlusterBench.py

[2] gist containing python code for the additional smallfile tests: https://gist.github.com/ShyamsundarR/dfbc2e717ed64b466222aed6d3ae5bf7

On 02/15/2017 09:42 AM, Shyam wrote:
Hi,

We have some feedback on the glusterd and bitrot issues.

I am testing out upgrade in CentOS and also doing some performance
regression testing across AFR and EC volumes, results expected by end of
this week.

Requesting others to post updates on the tests done for the issues.

If people are blocked, then please let us know as well.

Just a friendly reminder: we always slip releases due to a lack of testing
or testing feedback; let's try not to repeat that.

Current release date is still 21st Feb 2017

Shyam

On 02/05/2017 10:00 PM, Shyam wrote:
Hi,

For the 3.10 release, we are tracking testing feedback per component
using github issues. The list of issues to report testing feedback
against is in [1].

We have assigned each component-level task to the noted maintainer to
provide feedback; in case others are taking up a task, please assign it
to yourself.

Currently GitHub allows issues to be assigned only to members of the
gluster organization, and that membership is not as complete or filled
out as expected. So, I request maintainers to mention the folks who will
be doing the testing in the issue using @<github username>, or to have
those users assign the task to themselves.

Feedback is expected at the earliest to meet the current release date of
21st Feb, 2017.

Once we have the packages built, we would request the users list to help
with the testing as well.

Thanks,
Release team

[1] Link to testing issues: http://bit.ly/2kDCR8M
_______________________________________________
maintainers mailing list
maintainers@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/maintainers
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel


