poor performance with encryption and SSL enabled

Hi gluster folks,

I'm looking for some configuration or debugging advice for a distributed-striped volume that uses SSL and at-rest encryption.

The SSL certs are self-signed, generated on each server, and combined into a glusterfs.ca bundle in /etc/ssl. On its own, SSL is working well.
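In case the details matter, this is roughly what I ran on each server (the key size, cert lifetime, and the /tmp/certs staging directory are just my own choices, not anything from the docs):

# self-signed key/cert per server, using the default gluster SSL paths
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj "/CN=$(hostname -f)" -days 365 -out /etc/ssl/glusterfs.pem

# every server's glusterfs.pem staged in /tmp/certs, concatenated into one CA
# bundle, then copied out to /etc/ssl/glusterfs.ca on all nodes
cat /tmp/certs/*.pem > /etc/ssl/glusterfs.ca

# SSL options (these show up under "Options Reconfigured" below)
gluster volume set data client.ssl on
gluster volume set data server.ssl on
gluster volume set data auth.ssl-allow '*'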

I've also turned on the at-rest disk encryption feature. The master key was generated with 'openssl rand -hex 32' as per the docs and copied to all gluster servers.
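Roughly what I did for the encryption side (the chmod is just my own precaution; the key path matches the encryption.master-key option below):

openssl rand -hex 32 > /root/keystore/master.key
chmod 600 /root/keystore/master.key

gluster volume set data encryption.master-key /root/keystore/master.key
gluster volume set data features.encryption on
# the docs say these caching translators have to be off with encryption
gluster volume set data performance.quick-read off
gluster volume set data performance.write-behind off
gluster volume set data performance.open-behind off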

Status of volume: data
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick ip-10-9-0-62.ec2.internal:/export/brick   49152   Y       13393
Brick ip-10-9-0-101.ec2.internal:/export/brick  49152   Y       8412
Brick ip-10-9-0-103.ec2.internal:/export/brick  49152   Y       10125
Brick ip-10-9-0-102.ec2.internal:/export/brick  49152   Y       8266
Brick ip-10-9-0-100.ec2.internal:/export/brick  49152   Y       8263
Brick ip-10-9-0-105.ec2.internal:/export/brick  49152   Y       8277
Brick ip-10-9-0-104.ec2.internal:/export/brick  49152   Y       8261
Brick ip-10-9-0-106.ec2.internal:/export/brick  49152   Y       8272

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Volume Name: data
Type: Distributed-Stripe
Volume ID: afad6283-5bee-42c1-b9e5-c3ed64e04aae
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: ip-10-9-0-62.ec2.internal:/export/brick
Brick2: ip-10-9-0-101.ec2.internal:/export/brick
Brick3: ip-10-9-0-103.ec2.internal:/export/brick
Brick4: ip-10-9-0-102.ec2.internal:/export/brick
Brick5: ip-10-9-0-100.ec2.internal:/export/brick
Brick6: ip-10-9-0-105.ec2.internal:/export/brick
Brick7: ip-10-9-0-104.ec2.internal:/export/brick
Brick8: ip-10-9-0-106.ec2.internal:/export/brick
Options Reconfigured:
server.allow-insecure: on
nfs.ports-insecure: on
auth.allow: *
client.ssl: on
server.ssl: on
auth.ssl-allow: *
features.encryption: on
encryption.master-key: /root/keystore/master.key
performance.quick-read: off
performance.write-behind: off
performance.open-behind: off
nfs.disable: on

If I run dd or any other I/O operation, I see a flurry of messages like the following in the logs.

[2015-02-24 16:58:51.144099] W [stripe.c:5288:stripe_internal_getxattr_cbk] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x3fd0620550] (--> /usr/lib64/glusterfs/3.6.2/xlator/cluster/stripe.so(stripe_internal_getxattr_cbk+0x36a)[0x7f6a152a12ba] (--> /usr/lib64/glusterfs/3.6.2/xlator/protocol/client.so(client3_3_fgetxattr_cbk+0x174)[0x7f6a154db284] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3fd0e0ea75] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x142)[0x3fd0e0ff02] ))))) 0-data-stripe-3: invalid argument: frame->local
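For reference, a trivial dd from a FUSE mount is enough to trigger them (the mount point and sizes here are only an example):

mount -t glusterfs ip-10-9-0-62.ec2.internal:/data /mnt/data
dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=1024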

Thanks in advance for any tips/suggestions!

-Adam
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
