TransferLab courses

Are you already an ML expert who wants to deepen your knowledge or learn new techniques? The TransferLab team offers exciting trainings in Bayesian ML, Safe Reinforcement Learning, Anomaly Detection, and much more.

Deepen your knowledge of AI technologies

Our mission is to equip experts with knowledge, tools and skills to responsibly develop and apply the latest AI technologies to shape a future we would like to live in.

Each major topic the TransferLab team works on is covered in one or more trainings. Each training walks you through the theory and provides fully fleshed-out examples that illustrate best practices in the field.

Probabilistic model checking with Storm

This is a one-day workshop introducing the concept of probabilistic model checking and its applications.

Goals of the training include:

  • Understanding the theory behind probabilistic model checking.
  • Getting to know the most important algorithms.
  • Learning how to model systems as Markov chains and Markov decision processes.
  • Learning how to formulate queries about the behavior of a system.
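
As a small taste of the tooling, here is a minimal sketch using Storm's Python bindings (stormpy); it assumes a local PRISM file die.pm describing the Knuth-Yao die, and the property shown is an illustrative example rather than course material.

    import stormpy

    # Assumption: "die.pm" is a local PRISM model of the Knuth-Yao die,
    # a small discrete-time Markov chain that simulates a die with coin flips.
    program = stormpy.parse_prism_program("die.pm")

    # PCTL query: probability of eventually rolling a six
    # (the variables s and d are defined in the PRISM model).
    properties = stormpy.parse_properties("P=? [F (s=7 & d=6)]", program)

    # Build the Markov chain and run the model checker.
    model = stormpy.build_model(program, properties)
    result = stormpy.model_checking(model, properties[0])

    # Read off the value in the initial state (1/6 for a fair die).
    print(result.at(model.initial_states[0]))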

Methods and problems of explainable AI

This is a two-day workshop for ML practitioners who want to make their models more understandable to themselves and decision makers.

Goals of the training include:

  • Get an overview of the potential applications of explainability in machine learning.
  • Learn the taxonomy of explainability methods and their applicability to different types of models.
  • Learn how to integrate explainability considerations into the machine learning pipeline.
  • Learn about model-independent and model-specific explainability methods.
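
To make "model-independent" concrete, the following is a minimal sketch using permutation importance from scikit-learn, one of the simplest model-agnostic explainability techniques; the dataset and the random-forest model are placeholders chosen only for illustration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Model-agnostic explanation: shuffle one feature at a time and measure
    # how much the test-set score drops; large drops mean important features.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")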

An introduction to Bayesian methods in ML

This is a two-day workshop introducing Bayesian modeling using practical examples and probabilistic programming.

Goals of the training include:

  • Learn Bayesian methodology and understand how Bayesian inference can be used as a general machine learning tool.
  • Understand the computational challenges in Bayesian inference and how to overcome them.
  • Become familiar with the basics of approximate Bayesian inference.
  • Take the first steps in probabilistic programming with Pyro (see the sketch after this list).
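
As a first taste of the last point, here is a minimal sketch of a Bayesian coin-flip model in Pyro, inferring the posterior over the heads probability with NUTS; the toy data and the Beta(1, 1) prior are invented for illustration.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import MCMC, NUTS

    def model(flips):
        # Beta(1, 1) prior over the unknown heads probability.
        p = pyro.sample("p", dist.Beta(1.0, 1.0))
        with pyro.plate("data", len(flips)):
            pyro.sample("obs", dist.Bernoulli(p), obs=flips)

    flips = torch.tensor([1., 0., 1., 1., 0., 1., 1., 1.])  # toy observations

    # Approximate the posterior over p with Hamiltonian Monte Carlo (NUTS).
    mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=200)
    mcmc.run(flips)
    print("posterior mean of p:", mcmc.get_samples()["p"].mean().item())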

Classic and modern methods in planning and control

An overview of model-based planning and control for AI engineers interested in solving real-world decision and control problems using efficient methods.

Aims of the training include:

  • Fundamentals of decision making: environments, trajectories, actors, and rewards.
  • Typical and less typical control problems.
  • Planning and classical control.
  • From simulation to reality.
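
To give a flavor of the classical-control part, here is a small sketch of a discrete-time LQR controller for a double integrator, using SciPy's Riccati solver; the dynamics and cost matrices are toy values, not taken from the training.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Double integrator: state x = [position, velocity], input u = acceleration.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    Q = np.diag([1.0, 0.1])   # penalize deviation from the origin
    R = np.array([[0.01]])    # penalize control effort

    # Solve the discrete-time algebraic Riccati equation and form the LQR gain.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

    x = np.array([1.0, 0.0])  # start 1 m from the target, at rest
    for _ in range(50):
        u = -K @ x            # linear state-feedback law
        x = A @ x + B @ u     # simulate one step of the (known) dynamics
    print("state after 5 seconds:", x)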

Safe and efficient Deep Reinforcement Learning

This two-day deep dive into Deep RL is suitable for engineers who want to solve real-world control problems using efficient methods.

Aims of the training include learning the most important recent advances in Deep RL and gaining a sense of when and how RL techniques should be used (and, perhaps more importantly, when they should not). Learning content includes:

  • Fundamentals of deep, model-free, and model-based RL.
  • What problems are appropriate for an RL approach?
  • Exploration vs. exploitation.
  • On-policy, off-policy, and offline learning.
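
For orientation, the sketch below trains a standard on-policy, model-free agent (PPO from Stable-Baselines3) on the CartPole toy environment; the environment and hyperparameters are placeholders and not the workshop's actual exercises.

    import gymnasium as gym
    from stable_baselines3 import PPO

    # CartPole is a classic toy control problem: keep a pole balanced upright.
    env = gym.make("CartPole-v1")

    # PPO is an on-policy, model-free deep RL algorithm.
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)

    # Roll out the learned policy for one episode.
    obs, _ = env.reset(seed=0)
    done, episode_return = False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        episode_return += reward
        done = terminated or truncated
    print("episode return:", episode_return)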

Practical anomaly detection

A two-day introduction to unsupervised ML techniques for anomaly detection and their strengths and weaknesses in various application domains.

Goals of the training include:

  • Understand qualitative and quantitative definitions of anomalies.
  • Overview of the theoretical foundations and practical implementations of several anomaly detection algorithms.
  • Understand which algorithms are appropriate for which application domains.
  • Learn how to evaluate and compare the performance of different algorithms.
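
As a concrete starting point, the sketch below applies scikit-learn's Isolation Forest, one unsupervised detector among the several covered in the training; the synthetic data and contamination rate are invented for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    inliers = rng.normal(0.0, 1.0, size=(200, 2))    # bulk of the data
    outliers = rng.uniform(-6.0, 6.0, size=(10, 2))  # a few scattered anomalies
    X = np.vstack([inliers, outliers])

    # Unsupervised detection: points that are isolated by short paths in random
    # trees get low scores; contamination is the assumed fraction of anomalies.
    detector = IsolationForest(contamination=0.05, random_state=0)
    labels = detector.fit_predict(X)                 # -1 = anomaly, +1 = inlier
    print("points flagged as anomalous:", int((labels == -1).sum()))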

Calibration of modern classifiers

An introduction to the pitfalls of uncalibrated classifiers and modern techniques for (re)calibration.

Goals of the training include:

  • Quantitative understanding of how optimal decisions depend on all probabilities, not just the predicted class.
  • Learning to measure the (mis)calibration of models.
  • Recalibration of classifiers during training: loss functions, regularization.
  • Recalibration of classifiers a posteriori: non-parametric, parametric, Bayesian.
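
The sketch below contrasts an uncalibrated classifier with one recalibrated a posteriori by Platt scaling (a parametric method) using scikit-learn; the synthetic dataset and the choice of Gaussian naive Bayes are illustrative only.

    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV, calibration_curve
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    raw = GaussianNB().fit(X_train, y_train)  # naive Bayes is often overconfident
    platt = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=5)
    platt.fit(X_train, y_train)

    for name, clf in [("uncalibrated", raw), ("Platt-scaled", platt)]:
        prob_pos = clf.predict_proba(X_test)[:, 1]
        # Reliability-diagram data: observed frequency vs. mean predicted
        # probability per bin; a perfectly calibrated model has them equal.
        frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)
        print(name, "mean calibration gap:", round(float(np.abs(frac_pos - mean_pred).mean()), 3))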

Contact TransferLab