Hi,

We are having some trouble with one of our fileservers using XFS (on Linux). Yesterday, one of the external RAIDs on the server failed. Of course, it is unavoidable that some data would get lost from the fileserver in such an event; however, we lost a lot more files than would seem reasonable. In particular, we lost a number of files that had not been written to (but had been read from, in some cases) in several weeks. The data loss manifested itself through files being truncated to length 0 or to some other size short of what they should be. (We happen to have an external database that keeps track of that.)

The fileserver is based on CentOS 6.3 with kernel version 2.6.32-279.9.1.el6.x86_64. It has several external RAIDs in the 100 TB range, connected via Fibre Channel. In case it matters: the server's primary role is as a Samba server servicing a large number of Windows XP and Windows 7 machines.

We had already been trying to reduce the possible impact of a hardware failure by setting a few tunables in /etc/sysctl.conf, to keep the kernel from holding on to dirty buffers for too long:

vm.dirty_background_bytes = 536870912
vm.dirty_bytes = 134217728
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000

and by issuing a sync from cron every 15 minutes:

0,15,30,45 * * * * /bin/sync

Unfortunately, I have so far been unable to reproduce the issue on a smaller system - and I cannot exactly just walk up to the in-production fileserver and rip out yet another array just to see what happens...

This leaves me with a few questions:

- Why did we lose so much data through the crash?
- Why didn't even a sync every 15 minutes prevent further damage?
- What can we do to prevent this from happening again in the future?

Regards,

Guido
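P.S.: For what it's worth, a rough sketch of how I check that the writeback tunables listed above are actually in effect, and how much dirty data the host is holding at any given moment (the /proc/meminfo fields below are the ones I believe to be relevant; corrections welcome):

# confirm the tunables took effect
sysctl vm.dirty_background_bytes vm.dirty_bytes vm.dirty_writeback_centisecs vm.dirty_expire_centisecs

# how much data is currently dirty / under writeback;
# after a manual sync both figures should drop to (near) zero
grep -E 'Dirty|Writeback' /proc/meminfo
sync
grep -E 'Dirty|Writeback' /proc/meminfo

Of course this only covers the page cache on the host side, not anything the external arrays themselves may still be caching.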