Tutorial sessions will take place in the same lecture hall as the lectures, from 18.00 to 19.30 on the following dates:

  • Monday 29.04.19
  • Wednesday 22.05.19
  • Wednesday 12.06.19
  • Wednesday 26.06.19
  • Wednesday 10.07.19 Change of room: due to an event organized by FAU, the last tutorial session will be held in Lecture Hall A.

Installation instructions

What we need

We need to install the following software and Python packages:

  • Python 3.7: the programming language we will use throughout this course
  • Jupyter Notebook 5.7.8: interactive development environment
  • Numpy 1.16: multidimensional arrays in Python
  • Scipy 1.3: scientific programming in Python
  • Matplotlib 3.1: easy plotting
  • Tensorflow 1.13.1 with Keras 2.2.4: machine learning framework

Hi! Today I will be your personal tutor and I will guide you through the configuration of your system! This is a computer-generated face produced by a neural network, specifically a GAN (generative adversarial network): StyleGAN (Dec 2018) by Karras et al. at Nvidia, building on the original GAN (2014) by Goodfellow et al. See here for more information.


We will use the MiniConda package and environment management system. To perform the installation, use the following procedure:

  1. Download the appropriate version of MiniConda and perform the installation with the default settings.
    Remember that, under Linux and macOS, in order to run a downloaded bash script you have to grant it execution rights
    ( chmod u+x [filename])
    and run it in the terminal with ./[filename].
  2. Open the Anaconda Prompt. You will find it in your Application Menu under Windows and macOS, while under Linux it is enough to open a bash session. Create a new environment and activate it with the commands:

    conda create --name neuralnets
    conda activate neuralnets

  3. Install the needed packages:

    conda install jupyter h5py hdf5 matplotlib mkl-service numpy scipy tensorflow

Validate the installation

You can now run Jupyter Notebook from the Application Menu. If you don't find it, you can run it by opening the Anaconda Prompt again, activating the neuralnets environment, and typing jupyter notebook .

Please run the following test notebook within Jupyter to check whether all the packages were correctly installed.
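If you would also like a quick check from the command line, here is a minimal sketch (not the test notebook itself) that reports which of the packages Python can locate; the module list mirrors the conda command above:

```python
import importlib.util

# Packages required for the course, by their import names.
required = ["jupyter", "h5py", "matplotlib", "numpy", "scipy", "tensorflow"]

def check_packages(names):
    """Return (found, missing) lists based on whether each module can be located."""
    found, missing = [], []
    for name in names:
        (found if importlib.util.find_spec(name) else missing).append(name)
    return found, missing

if __name__ == "__main__":
    found, missing = check_packages(required)
    print("Found:", ", ".join(found) or "none")
    print("Missing:", ", ".join(missing) or "none")
```

If anything shows up as missing, re-run the conda install command inside the neuralnets environment.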

Need any more help?

If you encounter a problem during the installation which you cannot solve by yourself, or have questions regarding the lectures or the exercises, please write an email to one of the assistants. We will try to take care of the problem within a reasonable time.

Tutorial 1


Suggested exercises:

Feed-forward neural network with random weight initialization. This is what a neural network with many layers, two inputs and one output looks like before training. A convenient choice of the parameters of such a network allows it to reproduce any arbitrary function.
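Such an untrained network can be sketched in a few lines of NumPy; the layer sizes and the normal weight initialization below are illustrative choices:

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass through a fully connected network with sigmoid activations."""
    a = x
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # sigmoid activation
    return a

rng = np.random.default_rng(0)
sizes = [2, 30, 30, 1]  # two inputs, two hidden layers, one output
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=n) for n in sizes[1:]]

# Evaluate the untrained network on a small grid of input points.
xs = np.stack(np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5)), axis=-1).reshape(-1, 2)
ys = forward(xs, weights, biases)
```

Plotting ys over the input plane shows the random, untrained function that gradient descent will later reshape.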

Tutorial 2


  • Approximate a 1d function
  • Logical Operations
  • How can we implement logical operations such as AND, OR, NOT, XOR with neural networks?

  • Approximate a 2d function
  • Train a neural network that approximates a given function. We manually implement the backpropagation algorithm.

  • Introduction to Keras: a simple neural network
  • Build a neural network which learns to reproduce a given image

  • Classifier: train a simple convolutional neural network to recognize numbers
  • Build a convolutional neural network that classifies the digits from the MNIST dataset. The dataset contains training, validation and test images with their correct labels. Implemented in Keras.
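As an illustration of the function-approximation exercises above, here is a minimal NumPy sketch that fits a 1d function with a one-hidden-layer network and manually implemented backpropagation; the layer size, learning rate and target function are illustrative choices:

```python
import numpy as np

# Fit a 1d function with a one-hidden-layer network trained by gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)[:, None]  # inputs, shape (100, 1)
y = np.sin(3 * x)                     # target function (illustrative)

n_hidden = 30
W1, b1 = rng.normal(size=(1, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, 1)), np.zeros(1)
lr = 0.1

losses = []
for _ in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    losses.append(np.mean((pred - y) ** 2))
    # Backward pass: manual backpropagation of the mean-squared error.
    d_pred = 2.0 * (pred - y) / len(x)
    gW2, gb2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1, gb1 = x.T @ d_h, d_h.sum(axis=0)
    W1, b1 = W1 - lr * gW1, b1 - lr * gb1
    W2, b2 = W2 - lr * gW2, b2 - lr * gb2
```

The loss curve in losses should decrease steadily; plotting pred against y shows how closely the network has learned the function.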

Suggested exercises (bonus):

  • Manually approximate a 2d function
  • Manually choose the weights of the neural network to approximate a 2d function with arbitrary precision. We can use logical operations between the outputs of the neurons. In this example we use a deep network (more than one layer).

  • Approximate a square
  • Manually choose the weights of the neural network so that it approximates a square.

  • Approximate any convex shape
  • Manually choose the weights of the neural network so that it approximates a given convex shape.

  • Simple neural network: implementation without keras
  • Build a neural network which learns to reproduce a given image

  • Simple classifier: train a simple dense neural network to recognize numbers
  • Same example as above but with a dense neural network instead of a convolutional one.

  • Distinguish between two given shapes
  • Train a network that is able to distinguish between a circle and a square

Logical operations with neural networks. The figure shows how to implement a logical AND with a single artificial neuron with a sigmoid activation function. How can we implement the other logic gates?
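An AND neuron of this kind can be written down directly; the weights below are one possible hand-picked choice, not the only one:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def and_neuron(x1, x2, w1=20.0, w2=20.0, b=-30.0):
    """Single sigmoid neuron; the weights make the output ~1 only when both inputs are 1."""
    return sigmoid(w1 * x1 + w2 * x2 + b)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"AND({x1}, {x2}) ~ {and_neuron(x1, x2):.3f}")
```

OR and NOT can be obtained by changing the weights and bias; XOR, however, is not linearly separable and needs more than one neuron.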

Backpropagation algorithm. The backpropagation algorithm is the procedure we employ to calculate the derivatives of the cost function with respect to the parameters of the network. We need these derivatives in order to update the parameters through the Gradient Descent algorithm. Without backpropagation, deep neural networks could not be trained efficiently.
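For concreteness, here is a minimal backpropagation sketch for a network with one hidden layer and a quadratic cost; the layer sizes and the cost are illustrative choices, not the course's notebook:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(x, y, W1, W2):
    """One forward/backward pass; quadratic cost C = 0.5 * sum((out - y)^2)."""
    h = sigmoid(x @ W1)   # hidden activations
    out = sigmoid(h @ W2) # network output
    cost = 0.5 * np.sum((out - y) ** 2)
    # Backward pass: propagate the error from the output layer back to the input.
    delta2 = (out - y) * out * (1 - out)    # error at the output layer
    grad_W2 = np.outer(h, delta2)
    delta1 = (delta2 @ W2.T) * h * (1 - h)  # error at the hidden layer
    grad_W1 = np.outer(x, delta1)
    return cost, grad_W1, grad_W2

# One gradient-descent step on random weights.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
cost, grad_W1, grad_W2 = forward_backward(np.array([0.5, -0.3]), np.array([0.7]), W1, W2)
W1 -= 0.1 * grad_W1
W2 -= 0.1 * grad_W2
```

A useful sanity check is to compare these analytic gradients against finite differences of the cost.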

Reproduce a given image. Train a neural network that tries to reproduce a given image. If the network is not big enough, the best possible approximation is returned. This amounts to a form of image compression whenever the number of parameters of the network is smaller than the number of pixels of the image. Try to reproduce bigger images with fewer parameters in order to get a compressed description of the image.

Convolutional neural network used to train a digit classifier. We feed in the input image as a 28x28x1 array (a single channel, i.e. b/w). We apply a first convolutional layer with 8 filters (which extract 8 different features). Then we apply a subsampling layer, which combines each 2x2-pixel region into one pixel. We repeat the same structure with another convolutional layer followed by a subsampling layer. Finally, we flatten the output of the network so that, instead of 8 channels of 7x7 pictures, we get a single vector of 392 neurons. We apply a dense layer of 10 neurons with a sigmoid activation function in order to return probabilities.
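The shape bookkeeping in this architecture can be traced with a bare-bones NumPy sketch of "same" convolution and 2x2 pooling (random filters, no training; the 3x3 kernel size is an illustrative assumption):

```python
import numpy as np

def conv2d_same(img, kernels):
    """'Same' convolution of an (H, W, C_in) image with (kh, kw, C_in, C_out) kernels."""
    H, W, _ = img.shape
    kh, kw, _, c_out = kernels.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

def maxpool2(img):
    """2x2 max pooling: each 2x2 block of pixels becomes one pixel."""
    H, W, C = img.shape
    return img.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((28, 28, 1))         # one 28x28 single-channel digit
k1 = rng.normal(size=(3, 3, 1, 8))  # first layer: 8 filters
k2 = rng.normal(size=(3, 3, 8, 8))  # second layer: 8 filters

h = maxpool2(conv2d_same(x, k1))    # 28x28x8 pooled down to 14x14x8
h = maxpool2(conv2d_same(h, k2))    # 14x14x8 pooled down to 7x7x8
flat = h.reshape(-1)                # 8 * 7 * 7 = 392 values for the dense layer
```

This reproduces exactly the 28 -> 14 -> 7 -> 392 shape flow described above; in the actual exercise the convolutions are of course provided by Keras rather than written by hand.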

Tutorial 3


  • Simple Autoencoder
  • In this tutorial, we will train a very simple autoencoder. We discretize a function at N_points points and give it as input to the autoencoder. We want the output to be the function itself. However, since there is a very small layer in the middle of the autoencoder (the bottleneck), the task is not trivial. In other words, the central layer will contain a compressed representation of the function. As input functions, we choose a set of random Gaussians with different means mu and standard deviations sigma.

  • LSTM: random text generation
  • We implement the example from Andrej Karpathy's blog on a collection of physics articles.
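The Gaussian training data for the autoencoder exercise can be generated, for instance, as follows; N_points, the sample count, and the ranges of mu and sigma are illustrative choices:

```python
import numpy as np

N_points = 64    # discretization of each function
N_samples = 200  # number of training functions
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, N_points)

# Each training sample is one Gaussian with random mean mu and width sigma,
# discretized at the N_points grid positions.
mu = rng.uniform(-0.5, 0.5, size=(N_samples, 1))
sigma = rng.uniform(0.1, 0.3, size=(N_samples, 1))
data = np.exp(-((x[None, :] - mu) ** 2) / (2.0 * sigma ** 2))
```

Each row of data is then fed to the autoencoder as both input and target, so the bottleneck must learn a compressed encoding of (mu, sigma).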

Suggested exercises (bonus):

  • Denoising Autoencoder
  • Create an autoencoder that, given images of random circles with noise, produces noiseless circles.

Autoencoder: Example of autoencoder neural network

Denoising autoencoder. Example of denoising autoencoder. Given random noisy circles, the network should produce noiseless circles.

LSTM Character-level text generation See here for more information.

Tutorial 4


  • Reinforcement Learning
  • In this tutorial, we will focus on Reinforcement Learning. We will explore the standard Q-Learning and Policy Gradient algorithms together. We apply them to a classical problem: the Cartpole. Please install the Cartpole simulation library with the command pip install gym

    Below you can find the complete notebooks:

Suggested exercises:

  • Escape from the maze
  • Apply RL to solve the challenge of finding a "treasure" as fast as possible in:
    • a fixed given labyrinth
    • an arbitrary labyrinth (in each run, the agent finds itself in a different labyrinth)
    Use the labyrinth generator described in the Wikipedia article "Maze Generation Algorithm".
    You can start from the example notebook we provide below (try to correct it).
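To give a flavour of the tabular Q-Learning approach, here is a self-contained sketch on a tiny gridworld; the 4x4 grid, deterministic moves, rewards and hyperparameters are illustrative choices, not the maze of the exercise:

```python
import random

# Tabular Q-learning on a 4x4 gridworld: start at (0, 0), "treasure" at (3, 3).
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
SIZE, GOAL = 4, (3, 3)

def step(state, a):
    """Deterministic move, clamped at the walls; reward 1 only at the treasure."""
    r, c = state
    dr, dc = ACTIONS[a]
    new = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    return new, (1.0 if new == GOAL else 0.0), new == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    def pick(s):
        # Epsilon-greedy action choice with random tie-breaking.
        if rng.random() < eps:
            return rng.randrange(4)
        best = max(Q[s])
        return rng.choice([i for i in range(4) if Q[s][i] == best])
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):  # cap episode length
            a = pick(s)
            s2, reward, done = step(s, a)
            target = reward + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])  # Q-learning update
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, max_steps=20):
    """Follow the learned greedy policy from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        s, _, done = step(s, max(range(4), key=lambda i: Q[s][i]))
        path.append(s)
        if done:
            break
    return path

if __name__ == "__main__":
    print(greedy_path(train()))
```

After training, the greedy policy walks from the start to the treasure along a shortest path; for the exercise, the gridworld would be replaced by a generated labyrinth with walls.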

Updated slides on Reinforcement Learning

Reinforcement Learning Train a network that decides the action to take based on reward.

The Cartpole Problem. The Cartpole is an inverted pendulum with its center of mass above its pivot point. On its own it is unstable and falls over, but it can be controlled by moving the cart. The goal of the problem is to keep the pole balanced by applying appropriate forces to the cart, moving it left or right. For more information, see this blog post.

Escape from the maze

Tutorial 5



Please use the section below to provide us with anonymous feedback. Thank you!