If you have been managing Linux filesystems on large disk arrays, I
would like to hear your experience/advice on what filesystem you have
selected (ext3/xfs/reiserfs/jfs).
My main concern with XFS is that fsck.xfs is basically a shell script
that always returns success.
The utility that does the real check runs only in single-user mode;
otherwise it complains "out of memory" (4 GB RAM with dual Xeon 2.0 GHz
CPUs). I tried it once on the 4 TB fs. It ran over the weekend and
was still running come Monday morning, when I had to kill the process
and bring the file server back online for end users.
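To illustrate the fsck.xfs point: on most distributions it is essentially
a no-op stub like the one below (a paraphrase for demonstration, not the
exact script shipped by any vendor; the demo path and device argument are
made up):

```shell
# Recreate the fsck.xfs stub behaviour (paraphrased) in a demo file:
cat > /tmp/fsck.xfs.demo <<'EOF'
#!/bin/sh
# fsck.xfs -- do nothing, successfully.
# XFS is a journaling filesystem; offline checking/repair is done
# with xfs_repair instead of a traditional fsck.
exit 0
EOF
# Run it against any argument at all; it "passes" unconditionally:
sh /tmp/fsck.xfs.demo /dev/any-device-at-all
echo "fsck.xfs exit status: $?"
```

So a clean fsck.xfs result tells you nothing; the real check is the
separate offline utility, which is what ate all the memory above.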
I would appreciate hearing your personal experiences with any of the
above filesystems at TB/PB scale.
<background>
I have under my administration a file server with two arrays, 6 TB and
4 TB. The consultant before me had formatted both arrays with XFS.
The client was experiencing frequent disconnects between client
stations and the file server - lately it could be resolved only with a
full system reboot, not just a restart of the NFS server. I tried NFS
tuning (increasing server threads, rsize, wsize, etc.) on the server as
well as the clients, but saw little improvement.
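By NFS tuning I mean changes along these lines (device names, paths,
and values here are illustrative only, not recommendations):

```shell
# Server side: raise the nfsd thread count.
# On Red Hat-style systems this is RPCNFSDCOUNT in /etc/sysconfig/nfs,
# e.g.:
#   RPCNFSDCOUNT=64
# followed by restarting the NFS service.

# Client side: larger transfer sizes and a hard mount, e.g.:
mount -t nfs -o rsize=32768,wsize=32768,hard,intr \
    server:/export /mnt/export
```

None of these made a lasting difference, which is part of why I suspect
something below the NFS layer.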
To resolve the finger-pointing between hardware and OS/software, we
asked the hardware vendor to re-certify all the components in the
arrays as well as the header. In the process they wiped both disk
arrays, so we have to start afresh. (I do have backups of the data in
two places.)
</background>
Thanks for your time and advice.
Regards,
--
Arun Khan