Leung Tsang
Robert J. Hiller Professor of Engineering
Professor of Electrical Engineering and Computer Science
(734) 764-7651
3228A EECS, 1301 Beal Avenue, Ann Arbor, MI 48109-2122

Computational Facilities

At the Radiation Laboratory, Tsang’s group has high-performance workstations and PCs suitable for EM modeling and for the development and testing of algorithms. In addition, the computing facilities include nodes on NSF’s XSEDE and the Flux HPC cluster at the University of Michigan.

HPC at U. of Michigan:

Flux is a Linux-based high-performance computing cluster at the University of Michigan with approximately 16,000 cores. There are five types of compute nodes:

• standard memory, with 4GB RAM per core;
• larger memory, with 25GB per core;
• GPU, with NVIDIA K20x GPUs;
• Hadoop, with 17TB of HDFS space;
• Xeon Phi, with 8 Intel Xeon Phi coprocessors.

The compute nodes are connected to each other, and to 640TB of parallel storage for scratch files, over 40Gbps InfiniBand. In addition to the standard nodes, Flux has 360 cores with larger amounts of RAM: about 25GB per core, or 1TB in a 40-core node. For researchers whose codes require large amounts of RAM or many cores in a single system, the Larger Memory Flux nodes can be a good option.
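Work on a cluster of this kind is typically submitted through a batch scheduler. The sketch below assumes a PBS/Torque-style scheduler; the job name, allocation account, queue name, resource values, and solver executable are all placeholders, not the group's actual settings:

```shell
#!/bin/bash
# Hypothetical PBS job script for a Flux-like cluster.
# Account, queue, resources, and executable are placeholders.
#PBS -N em_scattering
#PBS -A example_flux               # placeholder allocation account
#PBS -q flux                       # placeholder queue name
#PBS -l nodes=1:ppn=16,mem=60gb,walltime=24:00:00
#PBS -j oe                         # merge stdout and stderr

cd "$PBS_O_WORKDIR"                # run from the submission directory
./em_solver input.dat              # placeholder solver executable
```

The script would be submitted with `qsub`, and the scheduler places the job on a node matching the requested cores and memory.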

In addition, Tsang’s group owns four compute nodes that are used exclusively by group members. The configuration of each node is as follows:

– 2 x Intel Xeon E5-2680 v3 (24 cores total)
– 8 x 16GB DDR4-2133 (128GB total)
– 4TB 7200RPM hard drive
– ConnectX-4 EDR 100Gbps InfiniBand adapter

Radiation Laboratory Computing Facilities:

For graduate students, the laboratory has high-performance PCs (HP ProDesk 600 G1) equipped with an Intel Core i7-4790 processor, 32GB of memory, and an NVIDIA GTX 750 Ti 2GB video card. High-performance workstations are also used for parallel simulations using shared memory with OpenMP.

HPC at NSF’s XSEDE:

Prof. Tsang has been awarded computational resources from NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) under the project “Large Scale Simulations of Multiple Scattering of Waves in Random Media and Rough Surface” (TG-EAR100002). The hardware of one Stampede node is described in Table I.


Table I. Hardware specification for one node on Stampede

Network: 56Gb/s
CPU: Two 8-core Intel Xeon E5-2680 at 2.7GHz
Memory: 32GB
Filesystem: 14PB high-performance Lustre file system
Expandability: Space reserved for expansion
Coprocessor: One 61-core Intel Xeon Phi