Less interpretable methods

Neural networks and ensemble methods such as bagging, random forests, and boosting can greatly increase predictive accuracy, at the cost of interpretability.
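This accuracy/interpretability trade-off can be seen in a small experiment. The sketch below is a hypothetical example (not from the course materials), assuming scikit-learn is available: on synthetic data, a random forest ensemble typically outperforms a single, more interpretable decision tree.

```python
# Hypothetical sketch: ensemble vs. single-tree accuracy on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification task with a few informative features
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single decision tree: easy to inspect, prone to overfitting
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# A random forest: averages many trees, harder to interpret as a whole
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("single tree accuracy:", tree.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))
```

The forest gains accuracy by averaging many decorrelated trees, but no single tree in it summarizes the fitted model the way one decision tree does.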

Joshua Loftus

Trees and forests

Compositional nonlinearity

Slides, notebooks, exercises (not yet active)

Slides for (tree) ensembles (PDF)

Slides for deep learning (PDF)

Notebook for ?


Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".


For attribution, please cite this work as

Loftus (2021, March 28). Neurath's Speedboat: Less interpretable methods. Retrieved from http://joshualoftus.com/ml4ds/09-uninterpretable/

BibTeX citation

@misc{loftus2021,
  author = {Loftus, Joshua},
  title = {Neurath's Speedboat: Less interpretable methods},
  url = {http://joshualoftus.com/ml4ds/09-uninterpretable/},
  year = {2021}
}