Question about memory usage

Are there any resources that can help determine how much memory I need
to run a Samba cluster with several terabytes of GFS storage?

I have three RHCS clusters on Red Hat Enterprise Linux 4 Update 4, with
two nodes in each cluster. Both servers in a cluster have the same SAN
storage mounted, but only one node accesses the storage at a time
(mostly). The storage is shared out over several Samba shares. Several
users access the data at a time through an automated proxy user, so
only a few user connections are made directly to the clusters, but a
few hundred GB of data is written and read daily. Almost all of the
data is write-once, read-many, and the files range from a few KB to
several hundred MB.

I recently noticed when running 'free -m' that the servers all run very
low on free memory. If I remove one node from the cluster by stopping
rgmanager, gfs, clvmd, fenced, cman, and ccsd, the memory gets released
until I join it to the cluster again. I could stop the services one at
a time (roughly as sketched below) to confirm that it is GFS, but I
assume much of the RAM is being used for page cache and for GFS itself.
I do not mind upgrading the RAM, but I would like to know if there is a
good way to size the servers properly for this type of usage.
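
The one-at-a-time test I have in mind is something like the following,
run on a node that has already been taken out of service (the loop and
ordering are just how I would do it, not anything official):

    # Stop the cluster daemons one at a time and check memory after each
    # step to see which one actually releases the RAM.
    for svc in rgmanager gfs clvmd fenced cman ccsd; do
        echo "=== stopping $svc ==="
        service $svc stop
        free -m
    done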

The servers are dual-processor 3.6 GHz machines with 2 GB of RAM each.
They have U320 15K SCSI drives and Emulex Fibre Channel HBAs to the
SAN. Everything else appears to run fine, but one server ran out of
memory, and I see others that range between 16 MB and 250 MB of free
RAM.
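
For what it is worth, the numbers above are from the top line of
'free -m'; I still need to compare them against the "-/+ buffers/cache"
line and against the kernel slab caches to see how much is plain page
cache versus GFS structures. Roughly:

    free -m                     # the "-/+ buffers/cache" line shows what is
                                # left once page cache is discounted
    grep -i gfs /proc/slabinfo  # GFS glock/inode slab caches (exact cache
                                # names may differ on RHEL 4) -- kernel
                                # memory that never shows up under a process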

Thanks,
Danny


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
