Article on our GAM(e) changer paper in the Montreal AI Ethics Institute’s weekly blog

Symbolic picture for the article.
Image credits: Montreal AI Ethics Institute

Our recent work was featured in this week’s Montreal AI Ethics Institute blog post. Read our article below, or explore the full content page of their blog. The Montreal AI Ethics Institute provides quick summaries of the latest research and reporting in AI ethics, and their posts are regularly featured in Vox, MIT Technology Review, and Fortune.

Our paper investigates a series of intrinsically interpretable ML models. More specifically, we focus on advanced extensions of generalized additive models (GAMs), in which each predictor is modeled independently in a non-linear way, producing shape functions that can capture arbitrary patterns while remaining fully interpretable.
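To illustrate the core idea of a GAM, here is a minimal sketch in NumPy: each predictor gets its own univariate shape function (here a simple polynomial smoother, fit by backfitting), so every effect can be inspected on its own. This toy example is our own illustration of the general GAM principle, not one of the specific model extensions evaluated in the paper.

```python
import numpy as np

# Toy GAM: y is modeled as intercept + f1(x1) + f2(x2), where each
# shape function fj depends on a single predictor and is therefore
# directly interpretable as a curve.
rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = np.sin(x1) + x2 ** 2 + rng.normal(0, 0.1, n)  # additive ground truth

def fit_shape(x, residual, deg=3):
    """Univariate smoother: fit a polynomial shape function to the residual."""
    return np.polynomial.Polynomial.fit(x, residual, deg)

# Backfitting: alternately refit each shape function on the partial residual.
f1 = f2 = np.polynomial.Polynomial([0.0])
intercept = y.mean()
for _ in range(10):
    f1 = fit_shape(x1, y - intercept - f2(x2))
    f2 = fit_shape(x2, y - intercept - f1(x1))

pred = intercept + f1(x1) + f2(x2)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
print(f"train RMSE: {rmse:.3f}")
```

Because each fitted `fj` is a one-dimensional function, it can simply be plotted against its predictor to read off the learned effect; the GAM extensions in the paper follow the same additive structure but use more expressive smoothers.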

Enjoy the short article version of our paper below:

Or read the full paper:


Written by Nico Hambauer on 18.11.2022