TrAC will focus on core AI research and its translational applications to various domains. Initially, the center will conduct core artificial intelligence research while pursuing five application areas of AI.

1. Major Equipment and Computational Facilities

The success of artificial intelligence over the past decade can be attributed in large part to the availability of state-of-the-art compute resources, particularly graphics processing units (GPUs). Any TrAC member, or student of a member, wishing to perform cutting-edge AI research will therefore need access to such resources. One of TrAC's Data Scientists is also a campus champion for XSEDE (an NSF-funded cyberinfrastructure program) and can therefore provide researchers with a wide variety of compute options tailored to their needs. These options include (but are not limited to):

  • Access to the ISU HPC cluster (Nova), which includes a significant number of GPU nodes contributed by core members of TrAC. The Nova cluster has 180 CPU nodes and 20 GPU nodes, with a total of 60 Nvidia A100 GPUs (80 GB GPU RAM) and 5 Nvidia V100 32 GB GPUs. These resources are on par with most NSF-funded supercomputers in the country and should serve TrAC's AI researchers well.
  • A variety of federally (NSF) funded, geographically distributed, remote-access supercomputers whose allocations are managed by XSEDE (through the campus champion's allocation). Some of the supercomputers and cloud resources managed by XSEDE are:
    • Bridges-2 – A supercomputer housed at the Pittsburgh Supercomputing Center with 388 CPU nodes and 24 GPU nodes, each with 8 Nvidia V100 32 GB GPUs (192 GPUs in total).
    • Expanse – Housed at the San Diego Supercomputer Center with 728 CPU nodes and 52 GPU nodes, each with 4 Nvidia V100 32 GB GPUs (208 GPUs in total).
    • Anvil – Housed at Purdue University with 1,000 CPU nodes and 16 GPU nodes, with a total of 64 Nvidia A100 GPUs.
    • Jetstream2 – An NSF-funded cloud computing platform with 384 CPU nodes and 90 GPU nodes, each with 4 Nvidia A100 40 GB GPUs (360 GPUs in total).

TrAC staff will provide onboarding training, initial access (to try out and test the systems), and help with crafting computational allocation requests for large-scale access to these resources (a short GPU sanity-check sketch is included below).

  • Support will be provided to TrAC members on writing large-scale proposals for compute allocations at these supercomputers.
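
As part of onboarding, a useful first test on any of these systems is to confirm that an allocated job actually sees the expected GPUs. The following is a minimal sketch of such a check, assuming PyTorch is installed with CUDA support; cluster, partition, and queue names are site-specific and therefore omitted:

    import torch

    # Abort early if the job was not allocated a CUDA-capable GPU.
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA-capable GPU is visible to this job.")

    # List every GPU visible to the job, with its name and memory size
    # (e.g., an 80 GB A100 node on Nova vs. a 32 GB V100 node).
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")

Running this inside a batch job (rather than on a login node) confirms both that the scheduler granted the requested GPUs and that the installed deep learning stack can reach them.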

Cloud-based resources

  • Microsoft Azure
  • Nvidia (for training and education on topics related to deep learning and GPU computing).

In addition, TrAC provides consultation on which GPUs to purchase for members interested in acquiring hardware for their own needs.

2. Software

TrAC provides support for installing the following open-source Python libraries with GPU acceleration (a short verification sketch follows the list):

  • PyTorch
  • TensorFlow
  • Keras
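
Once these libraries are installed, a quick way to confirm that each framework can see the GPU is a check along the following lines (a minimal sketch, assuming the GPU-enabled builds of PyTorch and TensorFlow are installed):

    import torch
    import tensorflow as tf
    import keras

    # On a correctly configured GPU node, the first two lines should
    # report True and at least one physical GPU, respectively.
    print("PyTorch CUDA available:", torch.cuda.is_available())
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
    print("Keras backend in use:", keras.backend.backend())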

TrAC also has access to a wide array of software tools including:

  • Abaqus – Abaqus
  • Adobe – All Adobe Creative Cloud Software
  • ANSYS – ANSYS and EKIM
  • Fluent – Fluent and Gambit
  • Mechdyne/VRCO – CaveLIB, Conduit and GetReal
  • Microsoft – all Microsoft software via a Microsoft Campus Agreement
  • PTC – ProEngineer
  • PTV – VISSIM Traffic Simulation
  • Dassault Systèmes – SolidWorks
  • Siemens PLM – UGNX, SolidEdge, Teamcenter Visualization, JT Open, JT Translator, JT Utilities, Parasolid and Tecnomatix
  • Unity – Game Engine
  • Various remote meeting software including Zoom, WebEx, and Microsoft Teams