I got nothing personally. Well, a really nice laptop, but nothing extreme.
My officemate, however, who is a graduate student in computational/theoretical studies of materials, has access to everything here:
http://hpc.cnsi.ucsb.edu/clusters/clusters.php
Summaries of some of them:
1: The Dell cluster is composed of Dell 1750 dual-CPU 3.06 GHz Xeon servers and a single Dell 1750 monitoring node. The head node has 4 GB of RAM, 2 mirrored system disks, and a 2 TB RAID array that is shared out to the cluster. The 128 compute nodes have 2 GB of RAM each and a Myrinet M3F-PCIXD-2 card. The nodes are interconnected using an M3-E128 Myrinet chassis fully populated with M3-SW16-8F line cards. There are two Ethernet networks, one for general TCP traffic (NFS, etc.) and one for administration.
2: This cluster is based on HP DL145 Opterons (2218s, 2.6 GHz) with a Myrinet2k high-speed, low-latency interconnect. Each node has 4 GB of RAM and 80 GB of local disk space. (This is also a 128-node machine, each node having two dual-core processors.)
And so on. It's funny to hear his group sit around and talk about how one cluster is SOOOOO slow for certain jobs, when even the smallest clusters are still 16 processors. One system even shows up as a single computer with 16 processors and 32 GB of shared RAM, for jobs with extreme parallelization or high memory requirements. All that power, and all they ever see are command-line print-outs and some charts and graphs.
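For the curious, those print-outs mostly come from MPI jobs launched across the nodes. Here's a minimal sketch of the classic hello-world pattern (not his group's actual code, just an illustration of the kind of thing that runs on machines like these):

/* Minimal MPI sketch: each process reports its rank and which node
   it landed on. Illustrative only. Compile with mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
    MPI_Get_processor_name(name, &len);   /* node we're running on */

    printf("Process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Launch it with something like mpirun -np 256 ./hello and you get one line per process telling you which node it ended up on, which is about as glamorous as the output ever gets.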
Of course, people at other universities will have bigger systems than this.