On Wed, Sep 12, 2012 at 12:05 AM, siga hiro <hirokisiga at gmail.com> wrote:
> Hi,
>
> I created the PostgreSQL data directory on a GlusterFS volume.
> I mounted the volume from the postgres node and PostgreSQL started fine,
> but pgbench then failed with an error. With a FUSE mount it fails;
> with an NFS mount it does not:
>
> mount -t glusterfs 172.22.0.181:/syncdata /syncdata
>     -> pgbench ERROR
> mount -t nfs -o nolock,vers=3,tcp,soft,timeo=3 172.22.0.181:/syncdata /syncdata
>     -> pgbench OK
>
> I found this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=810944
>
> Is this a kernel bug, or is there a way to work around it on the
> GlusterFS side? Any ideas would be appreciated.
>
> Nodes:
> postgres node: 172.22.0.180
> OS: Scientific Linux 6.1
> DB: postgresql91-9.1.4-3PGDG
>
> glusterfs nodes: 172.22.0.181 / 172.22.0.182
> OS: Scientific Linux 6.1
> GlusterFS: 3.3.0-1
>
> Details:
>
> (1) I created the PostgreSQL data directory on GlusterFS.
>
> # gluster peer status        (on 172.22.0.181)
> Number of Peers: 1
> Hostname: 172.22.0.182
> Uuid: 601a3cf7-8614-4b89-83e4-2dab319f3582
> State: Peer in Cluster (Connected)
>
> # gluster volume info
> Volume Name: syncdata
> Type: Replicate
> Volume ID: c7708db1-0eb7-4a0d-a21c-24162ac3ed8f
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 172.22.0.181:/home/syncdata
> Brick2: 172.22.0.182:/home/syncdata
>
> On the postgresql node:
> # mount -t glusterfs 172.22.0.181:/syncdata /syncdata
> # ls /syncdata/pgdata/
> PG_VERSION   global     pg_ident.conf  pg_serial    pg_tblspc    postgresql.conf
> base         pg_clog    pg_multixact   pg_stat_tmp  pg_twophase  postmaster.opts
> pg_hba.conf  pg_notify  pg_subtrans    pg_xlog      postmaster.pid
>
> (2) Then I ran pgbench:
>
> $ /usr/pgsql-9.1/bin/pgbench -i -s 100
> .....
> 2012-09-12 11:53:51 ERROR: unexpected data beyond EOF in block
> 99480 of relation base/12780/16396
>
> $ /usr/pgsql-9.1/bin/pgbench -h 172.22.0.182 -c 100 -t 10
> 2012-09-12 11:53:51 ERROR: unexpected data beyond EOF in block
> 99480 of relation base/12780/16396

Can you check whether any of the following options fixes your problem?

1) --direct-io-mode=enable (to glusterfs)
2) --attribute-timeout=0 (to glusterfs)
3) gluster volume set syncdata performance.write-behind off
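For example (a sketch, untested on your setup; this assumes the same volume
and mount point as above, and that mount.glusterfs on this version accepts
these as -o options, otherwise pass them directly to the glusterfs client
binary):

  # option 1: bypass the kernel page cache with direct I/O
  umount /syncdata
  mount -t glusterfs -o direct-io-mode=enable 172.22.0.181:/syncdata /syncdata

  # option 2: disable FUSE attribute caching on the client
  umount /syncdata
  mount -t glusterfs -o attribute-timeout=0 172.22.0.181:/syncdata /syncdata

  # option 3: disable the write-behind translator on the volume,
  # then remount the client
  gluster volume set syncdata performance.write-behind off

Re-running pgbench after each change should show which one avoids the
"unexpected data beyond EOF" error.

Thanks,
Avati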