Problems with folders not being created on newly added disks

Does anyone know if this problem has been addressed in v. 2.0.6?

Roland

2009/9/9 Liam Slusser <lslusser at gmail.com>:
> You should really upgrade to gluster 2.0.6; there have been many bug fixes.
>
> ls
>
>
>
> On Sep 9, 2009, at 4:36 AM, Roland Rabben <roland at jotta.no> wrote:
>
>> Hi
>> I am using GlusterFS 2.0.2 on Ubuntu 9.04 64-bit. I have 4 data-nodes and 3
>> clients. See my vol files at the end of this email.
>>
>> After adding more disks to my data-nodes for more capacity and
>> reconfiguring GlusterFS to include those drives, I am experiencing problems.
>>
>> I am getting "No such file or directory" if I try to copy a new file into
>> an existing directory.
>> However, if I copy a new file into a new directory, everything works fine.
>>
>> It seems that if I create the folder structure from the old data-nodes on
>> the new disks, everything works fine.
>>
>> So my questions are:
>>
>> 1. Am I doing something wrong in the upgrade process?
>> 2. Do I need to manually create the existing folders on the new hard
>> drives?
>> 3. Self-heal does not fix this. Shouldn't it?
>> 4. Is there a tool that will create the folder structure on the new disks
>> for me? (See the sketch below.)
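
For questions 2 and 4: no dedicated tool is named in this thread. A minimal sketch of mirroring the directory tree from an existing brick onto a newly added, empty brick might look like the following. The paths are placeholders (/mnt/data05 stands in for a new brick), and it only recreates directories and their permission bits; it does not copy file data, ownership, or GlusterFS extended attributes.

import os
import stat

OLD_BRICK = "/mnt/data01"  # an existing brick that already holds the directory tree
NEW_BRICK = "/mnt/data05"  # a newly added, still empty brick (placeholder path)

for dirpath, dirnames, filenames in os.walk(OLD_BRICK):
    rel = os.path.relpath(dirpath, OLD_BRICK)
    target = NEW_BRICK if rel == "." else os.path.join(NEW_BRICK, rel)
    if not os.path.isdir(target):
        # recreate the directory on the new brick and carry over its permission bits
        os.makedirs(target)
        os.chmod(target, stat.S_IMODE(os.stat(dirpath).st_mode))

Something like this would be run once per new brick on each data-node; whether it is needed at all is exactly what questions 2 and 3 ask.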
>>
>>
>> Client vol file example:
>> =================
>> # DN-000
>> volume dn-000-01
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-000
>>   option remote-subvolume brick-01
>> end-volume
>>
>> volume dn-000-02
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-000
>>   option remote-subvolume brick-02
>> end-volume
>>
>> volume dn-000-03
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-000
>>   option remote-subvolume brick-03
>> end-volume
>>
>> volume dn-000-04
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-000
>>   option remote-subvolume brick-04
>> end-volume
>>
>> volume dn-000-ns
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-000
>>   option remote-subvolume brick-ns
>> end-volume
>>
>> # DN-001
>> volume dn-001-01
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-001
>>   option remote-subvolume brick-01
>> end-volume
>>
>> volume dn-001-02
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-001
>>   option remote-subvolume brick-02
>> end-volume
>>
>> volume dn-001-03
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-001
>>   option remote-subvolume brick-03
>> end-volume
>>
>> volume dn-001-04
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-001
>>   option remote-subvolume brick-04
>> end-volume
>>
>> volume dn-001-ns
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-001
>>   option remote-subvolume brick-ns
>> end-volume
>>
>> # DN-002
>> volume dn-002-01
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-002
>>   option remote-subvolume brick-01
>> end-volume
>>
>> volume dn-002-02
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-002
>>   option remote-subvolume brick-02
>> end-volume
>>
>> volume dn-002-03
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-002
>>   option remote-subvolume brick-03
>> end-volume
>>
>> volume dn-002-04
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-002
>>   option remote-subvolume brick-04
>> end-volume
>>
>> # DN-003
>> volume dn-003-01
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-003
>>   option remote-subvolume brick-01
>> end-volume
>>
>> volume dn-003-02
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-003
>>   option remote-subvolume brick-02
>> end-volume
>>
>> volume dn-003-03
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-003
>>   option remote-subvolume brick-03
>> end-volume
>>
>> volume dn-003-04
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host dn-003
>>   option remote-subvolume brick-04
>> end-volume
>>
>> # Replicate data between the servers
>> # Use pairs, but switch the order to distribute read load
>> volume repl-000-001-01
>>   type cluster/replicate
>>   subvolumes dn-000-01 dn-001-01
>> end-volume
>>
>> volume repl-000-001-02
>>   type cluster/replicate
>>   subvolumes dn-001-02 dn-000-02
>> end-volume
>>
>> volume repl-000-001-03
>>   type cluster/replicate
>>   subvolumes dn-000-03 dn-001-03
>> end-volume
>>
>> volume repl-000-001-04
>>   type cluster/replicate
>>   subvolumes dn-001-04 dn-000-04
>> end-volume
>>
>> volume repl-002-003-01
>>   type cluster/replicate
>>   subvolumes dn-002-01 dn-003-01
>> end-volume
>>
>> volume repl-002-003-02
>>   type cluster/replicate
>>   subvolumes dn-003-02 dn-002-02
>> end-volume
>>
>> volume repl-002-003-03
>>   type cluster/replicate
>>   subvolumes dn-002-03 dn-003-03
>> end-volume
>>
>> volume repl-002-003-04
>>   type cluster/replicate
>>   subvolumes dn-003-04 dn-002-04
>> end-volume
>>
>> # Also replicate the namespace
>> volume repl-ns
>>   type cluster/replicate
>>   subvolumes dn-000-ns dn-001-ns
>> end-volume
>>
>> # Distribute the data using the "adaptive least usage" scheduler
>> # We have a 5GB threshold for disk-usage first, then we look at
>> # write-usage, and finally read-usage
>> volume dfs
>>   type cluster/unify
>>   option namespace repl-ns
>>   option scheduler alu
>>   option scheduler.limits.min-free-disk 5%
>>   option scheduler.alu.order disk-usage:write-usage:read-usage
>>   option scheduler.alu.disk-usage.entry-threshold 5GB
>>   option scheduler.alu.disk-usage.exit-threshold 1GB
>>   option scheduler.alu.write-usage.entry-threshold 25
>>   option scheduler.alu.write-usage.exit-threshold 5
>>   option scheduler.alu.read-usage.entry-threshold 25
>>   option scheduler.alu.read-usage.exit-threshold 5
>>   subvolumes repl-000-001-01 repl-000-001-02 repl-000-001-03 repl-000-001-04 repl-002-003-01 repl-002-003-02 repl-002-003-03 repl-002-003-04
>> end-volume
>>
>> # Enable write-behind to decrease write latency
>> volume wb
>>   type performance/write-behind
>>   option flush-behind off
>>   option cache-size 128MB
>>   subvolumes dfs
>> end-volume
>>
>> volume cache
>>   type performance/io-cache
>>   option cache-size 1024MB
>>   subvolumes wb
>> end-volume
>>
>>
>>
>>
>>
>> Server vol file example:
>> ==================
>> # The posix volumes
>> volume posix-01
>>   type storage/posix
>>   option directory /mnt/data01
>> end-volume
>>
>> volume posix-02
>>   type storage/posix
>>   option directory /mnt/data02
>> end-volume
>>
>> volume posix-03
>>   type storage/posix
>>   option directory /mnt/data03
>> end-volume
>>
>> volume posix-04
>>   type storage/posix
>>   option directory /mnt/data04
>> end-volume
>>
>> volume posix-ns
>>   type storage/posix
>>   option directory /var/lib/glusterfs/ns
>> end-volume
>>
>> # Add locking capabilities
>> volume locks-01
>>   type features/locks
>>   subvolumes posix-01
>> end-volume
>>
>> volume locks-02
>>   type features/locks
>>   subvolumes posix-02
>> end-volume
>>
>> volume locks-03
>>   type features/locks
>>   subvolumes posix-03
>> end-volume
>>
>> volume locks-04
>>   type features/locks
>>   subvolumes posix-04
>> end-volume
>>
>> volume locks-ns
>>   type features/locks
>>   subvolumes posix-ns
>> end-volume
>>
>> # Finally add threads to the bricks
>> volume brick-01
>>   type performance/io-threads
>>   option thread-count 8
>>   subvolumes locks-01
>> end-volume
>>
>> volume brick-02
>>   type performance/io-threads
>>   option thread-count 8
>>   subvolumes locks-02
>> end-volume
>>
>> volume brick-03
>>   type performance/io-threads
>>   option thread-count 8
>>   subvolumes locks-03
>> end-volume
>>
>> volume brick-04
>>   type performance/io-threads
>>   option thread-count 8
>>   subvolumes locks-04
>> end-volume
>>
>> volume brick-ns
>>   type performance/io-threads
>>   option thread-count 8
>>   subvolumes locks-ns
>> end-volume
>>
>> # Mount the posix drives as a network drive
>> volume server
>>   type protocol/server
>>   option transport-type tcp
>>   subvolumes brick-01 brick-02 brick-03 brick-04 brick-ns
>>   option auth.addr.brick-01.allow 10.0.*
>>   option auth.addr.brick-02.allow 10.0.*
>>   option auth.addr.brick-03.allow 10.0.*
>>   option auth.addr.brick-04.allow 10.0.*
>>   option auth.addr.brick-ns.allow 10.0.*
>> end-volume
>>
>>
>>
>> Regards
>>
>> Roland Rabben
>> Founder & CEO Jotta AS
>> Cell: +47 90 85 85 39
>> Phone: +47 21 04 29 00
>> Email: roland at jotta.no
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland at jotta.no

