
Interpretability in AI

February 2 @ 8:00 am – February 27 @ 5:00 pm

In this course, you will learn about interpretable and explainable machine learning algorithms, a branch of machine learning and AI. The course covers the essentials of interpretability, including basic concepts, interpretable models, model-agnostic methods, and example-based explanations. You will also learn how to apply these interpretable approaches to specific real-world problems.

You will engage in hands-on activities, homework, and instructor consultations to make learning Interpretability in AI enjoyable and rewarding, and you will apply what you learn to real-world problems in science and engineering. By the end of this course, you’ll have the skills and confidence to tackle machine-learning challenges with interpretable methods.

Course At a Glance

Course Hours: 16 hours

Instructional Period: February 2 – February 27, 2026

Time to Complete Badge: 60 days

Last Day to Earn Badge: March 31, 2026

Expertise Level: Beginner/Intermediate

Advanced AI Track Course – This course is part of the Advanced AI track in the TrAC Micro-Credential pathway at Iowa State University, which includes the following courses:

  • MLOps
  • End-to-End Computer Vision
  • Generative Models
  • Mastering PyTorch
  • End-to-End Natural Language Processing
  • Interpretability in AI

Prerequisites & Audience

Prerequisites

  • Basic Python programming
  • Basic understanding of machine learning models
  • Basic understanding of deep learning models
  • Basic PyTorch programming

Audience: The course is intended for a broad audience across the software and technology industry, including software engineers, data scientists, data engineers, data analysts, research scientists, and software developers. It is designed to provide a basic understanding of Interpretability in AI and how to use these methods in practice.

Learning Outcomes

Upon completing this course, students will be able to do the following:

  • Formulate a machine learning problem with interpretable models suited to the task at hand
  • Develop basic interpretable machine learning models (a minimal sketch follows this list)
  • Develop model-agnostic methods for interpreting black-box machine learning models
  • Develop example-based explanations for black-box machine learning models
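
To make the first two outcomes concrete, here is a minimal sketch of a directly interpretable model: a logistic regression trained in PyTorch on toy data. The dataset, feature count, and training settings are illustrative assumptions rather than course material; the point is that the learned weights can be read directly as per-feature effects on the predicted log-odds.

```python
# A minimal sketch, assuming toy synthetic data: logistic regression in PyTorch
# whose learned weights are directly interpretable as per-feature log-odds effects.
import torch
import torch.nn as nn

n_features = 4
X = torch.randn(256, n_features)              # illustrative feature matrix
y = (X[:, 0] - 2 * X[:, 1] > 0).float()       # illustrative binary labels

model = nn.Linear(n_features, 1)              # logits = X @ w.T + b
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

# Each weight is the change in predicted log-odds for a one-unit change in the
# corresponding feature, which is what makes this model directly interpretable.
print("weights:", model.weight.detach().numpy().ravel())
print("bias:", model.bias.item())
```

The same reading applies to linear regression coefficients; decision trees are interpretable through their learned split rules instead.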

Assessments

  • 2 quizzes covering the basic concepts of Interpretability in AI
  • 1 coding assignment in which students develop basic interpretable machine learning models (e.g., linear regression, logistic regression, decision trees) using deep learning packages such as PyTorch
  • 1 coding assignment in which students develop model-agnostic methods, such as LIME and Shapley values, for interpreting black-box machine learning models (see the sketch after this list)
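
As a sketch of the model-agnostic idea behind the second coding assignment, the function below estimates Shapley values for a single prediction by Monte Carlo sampling, using only NumPy. The names `model_fn`, `x`, and `background` are hypothetical placeholders, and the assignment itself may rely on dedicated LIME or Shapley-value packages instead.

```python
# A minimal sketch, assuming `model_fn` is any callable mapping a 1-D feature
# vector to a scalar prediction, `x` is the instance to explain, and `background`
# is a 2-D array of reference rows drawn from the training data.
import numpy as np

def shapley_values(model_fn, x, background, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)
    for j in range(n_features):
        total = 0.0
        for _ in range(n_samples):
            z = background[rng.integers(len(background))]  # random reference row
            order = rng.permutation(n_features)            # random feature ordering
            pos = int(np.where(order == j)[0][0])
            preceding = order[:pos]                        # features "already present"
            with_j = z.copy()
            without_j = z.copy()
            with_j[preceding] = x[preceding]
            without_j[preceding] = x[preceding]
            with_j[j] = x[j]                               # add feature j from x
            total += model_fn(with_j) - model_fn(without_j)
        phi[j] = total / n_samples                         # average marginal contribution
    return phi
```

For a linear model f(x) = w · x + b, the exact Shapley value of feature j is w_j * (x_j - mean(background[:, j])), which provides a quick sanity check on the estimate.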

Course Outline

Module 1: Introduction to Interpretability in Machine Learning

Module 2: Develop Basic Interpretable Machine Learning Models

Module 3: Develop Model-Agnostic Methods

Module 4: Introduction to Example-based Explanations

Course Procedures

The course starts on February 2, 2026. All coursework must be completed by March 31, 2026, in order to earn the micro-credential badge. You will continue to have access to the course materials until January 1, 2027. The approximate time to complete this course is 16 hours.

This course has an instructional period from February 2 to February 27, 2026, during which course materials will be released weekly and live synchronous sessions will be held. You may complete the course materials at your own pace. Live Zoom meetings will be held for interactive coding sessions at a time to be determined, and recordings will be made available shortly after each meeting.

You will receive the Interpretability in AI micro-credential badge upon successful completion of the course assessments.

Course Materials

Course materials are provided within the course. No additional purchase is required.

Registration

STUDENTS

Register for the course as a 1-credit independent study; a maximum of three such TrAC courses may be taken per semester.

To register for the independent study, email benearl@iastate.edu and cc baditya@iastate.edu. Also, fill out this Google form for our records: https://forms.gle/d7PUHaqjko6sPHVg7

Note: You cannot cancel your registration after April 1 for any course.

INDUSTRY PROFESSIONALS/ISU STAFF/POSTDOCS

$500.00

ISU Professionals/Staff and Government Employees: $300

About the Instructor

Zhanhong Jiang is a data scientist in the Translational AI Center (TrAC) at Iowa State University. His research interests lie in machine learning and distributed optimization, and he has extensive experience taking AI/ML models and algorithms from theory to practice.

Contact: Nicole Hayungs