Hi,
I have a problem with "mmap" and "apt" on Debian in my GlusterFS test
environment.
My environment:
Hardware:
2 different servers for storage
1 server as client
On top of the servers I use a virtual server setup (details:
http://linux-vserver.org).
OS:
Debian Sarge with a self-compiled 2.6.19.2 kernel (uname -r: 2.6.19.2-vs2.2.0)
and the latest stable virtual server patch.
GlusterFS: latest mainline 2.4 from the repository
What I'm trying to do:
- Create an AFR mirror over the two servers.
- Mount the Volume on Server 3 (Client).
- Install the whole virtual server (Apache, MySQL and so on) on the
mounted volume.
So I have a fully redundant virtual server mirrored over two bricks.
After some help from Avati Anand last week, the above setup works just
fine. I tried out MySQL and it works normally (I will run some more tests
in the future).
But now I have the problem with apt.
For example, when I try to update the package lists within the virtual
server I get the following error:
mastersql:/# apt-get update
Get:1 http://security.debian.org etch/updates Release.gpg [189B]
...
Hit http://ftp.de.debian.org etch/non-free Packages
Fetched 2B in 7s (0B/s)
Reading package lists... Error!
E: Couldn't make mmap of 12582912 bytes - mmap (19 No such device)
W: Unable to munmap
E: The package lists or status file could not be parsed or opened.
After some googling I found out that apt uses "memory mapped files", and
it seems that apt cannot find some device. But I am not able to find out
which device it cannot find.
Short description of MMAP:
http://www.ecst.csuchico.edu/~beej/guide/ipc/mmap.html
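To rule apt itself out, a small test program along the following lines
could be used to try the same kind of mapping directly on a file on the
GlusterFS mount. The path is just a placeholder, and the guess that apt
wants a writable MAP_SHARED mapping is only my assumption:

/* mmap_test.c - try a writable, shared mmap on a file that lives on the
 * GlusterFS mount, to see whether it fails the same way apt does.
 * Build: gcc -o mmap_test mmap_test.c
 * Run:   ./mmap_test /path/on/glusterfs/mount/testfile   (placeholder path)
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    int fd;
    void *map;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-glusterfs-mount>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Make sure the file is at least one page long before mapping it. */
    if (ftruncate(fd, 4096) < 0) {
        perror("ftruncate");
        close(fd);
        return 1;
    }

    /* A writable MAP_SHARED mapping is (as far as I understand) what apt
     * wants for its cache; "No such device" (ENODEV, errno 19) is what a
     * filesystem returns when it does not support this kind of mapping. */
    map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        fprintf(stderr, "mmap failed: %s (errno %d)\n", strerror(errno), errno);
        close(fd);
        return 1;
    }

    /* Touch the mapping so the write path is exercised as well. */
    memcpy(map, "hello", 5);

    if (munmap(map, 4096) < 0)
        perror("munmap");
    close(fd);

    printf("mmap on %s succeeded\n", argv[1]);
    return 0;
}

If this also fails with errno 19 (ENODEV), the limitation would seem to be
in the underlying filesystem's mmap support rather than in apt itself.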
Do you have any idea what could cause this problem? Without GlusterFS as
the underlying filesystem the problem does not occur.
thanks
Urban Loesch
Krishna Srinivas wrote:
Hi Danson,
Updating the replica only on the next access is not the best
solution, for the reasons you mentioned. We will announce
AFR's auto-sync design to the list soon. It is scheduled
for 1.4.
Regards
Krishna
On 5/12/07, Danson Michael Joseph <danson.joseph@xxxxxxxxxxxxxxxxxx>
wrote:
Hi Again,
I just read that the CODA filesystem can replicate to multiple servers.
When a server goes down and comes back up some time after files on the
remaining server have changed, the repair policy is to repair on next
access. I don't believe this policy is ideal, because if the second
server then fails before a file has been accessed, the file is not up to
date. But it does highlight a possible repair technique whereby the
client does a loopback write for all files with different timestamps or
some other marked difference.
Regards,
Danson
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel