Re: one brick one volume process dies?

I emailed the logs earlier, just to you.

On 13/09/17 11:58, Gaurav Yadav wrote:
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
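
On a stock install both logs usually sit under /var/log/glusterfs (the
packaging default; adjust if logging was reconfigured), so something
along these lines bundles them:

    # default glusterd log locations; paths may differ on your setup
    tar czf gluster-logs.tar.gz \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log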


On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz@xxxxxxxxxxx> wrote:



    On 13/09/17 06:21, Gaurav Yadav wrote:

        Please provide the output of gluster volume info,
        gluster volume status and gluster peer status.

        Apart from the above info, please also provide the
        glusterd log and cmd_history.log.
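
        All three outputs can be captured in one go along these
        lines (the output file name is just a placeholder):

            gluster volume info    > gluster-state.txt
            gluster volume status >> gluster-state.txt
            gluster peer status   >> gluster-state.txt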

        Thanks
        Gaurav

        On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
        <peljasz@xxxxxxxxxxx> wrote:

            hi everyone

            I have a 3-peer cluster with all vols in replica mode,
            9 vols in total.
            What I see, unfortunately, is one brick failing in one
            vol, and when it happens it is always the same vol on
            the same brick.
            Command: gluster vol status $vol - would show the brick
            as not online.
            Restarting glusterd with systemctl does not help; only
            a system reboot seems to help, until it happens the
            next time.

            How to troubleshoot this weird misbehaviour?
            many thanks, L.
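
            (An aside, a hedged suggestion rather than anything
            confirmed in this thread: gluster can restart just the
            offline brick without a reboot, and the brick's own log
            usually records why it exited.)

                # restart any bricks of $vol that are currently down
                gluster volume start $vol force

                # brick logs are named after the brick path, e.g.:
                tail -n 50 /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-C-DATA.log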




    hi, here:

    $ gluster vol info C-DATA

    Volume Name: C-DATA
    Type: Replicate
    Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1:
    10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
    Brick2:
    10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
    Brick3:
    10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
    Options Reconfigured:
    performance.md-cache-timeout: 600
    performance.cache-invalidation: on
    performance.stat-prefetch: on
    features.cache-invalidation-timeout: 600
    features.cache-invalidation: on
    performance.io-thread-count: 64
    performance.cache-size: 128MB
    cluster.self-heal-daemon: enable
    features.quota-deem-statfs: on
    changelog.changelog: on
    geo-replication.ignore-pid-check: on
    geo-replication.indexing: on
    features.inode-quota: on
    features.quota: on
    performance.readdir-ahead: on
    nfs.disable: on
    transport.address-family: inet
    performance.cache-samba-metadata: on


    $ gluster vol status C-DATA
    Status of volume: C-DATA
    Gluster process                                                    TCP Port  RDMA Port  Online  Pid
    ---------------------------------------------------------------------------------------------------
    Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA   N/A       N/A        N       N/A
    Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA  49152     0          Y       9376
    Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA   49152     0          Y       8638
    Self-heal Daemon on localhost                                     N/A       N/A        Y       387879
    Quota Daemon on localhost                                         N/A       N/A        Y       387891
    Self-heal Daemon on rider.private.ccnr.ceb.private.cam.ac.uk      N/A       N/A        Y       16439
    Quota Daemon on rider.private.ccnr.ceb.private.cam.ac.uk          N/A       N/A        Y       16451
    Self-heal Daemon on 10.5.6.32                                     N/A       N/A        Y       7708
    Quota Daemon on 10.5.6.32                                         N/A       N/A        Y       8623
    Self-heal Daemon on 10.5.6.17                                     N/A       N/A        Y       20549
    Quota Daemon on 10.5.6.17                                         N/A       N/A        Y       9337

    Task Status of Volume C-DATA
    ------------------------------------------------------------------------------
    There are no active volume tasks
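
    The status above shows Brick1 on 10.5.6.49 offline (Online N, no
    PID) while its replicas are up. A minimal check on that node, with
    the brick-log name derived from the brick path shown above:

        # is a glusterfsd process serving this brick at all?
        ps aux | grep 0GLUSTER-C-DATA | grep -v grep

        # the tail of the brick log usually shows why it exited
        tail -n 50 /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-C-DATA.log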








_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users



