CS 479/679
Neural Networks


Winter 2025

This web page is a summary of the course. For the official details, please refer to its listing in the Undergraduate Calendar.

This course surveys how networks of neurons can perform computation. We will cover a variety of methods for designing and training neural networks. We will study some state-of-the-art methods for artificial neural networks, as well as some approaches that are guided by the biological constraints of the brain. We will look at both supervised and unsupervised learning, and methods to improve their performance.

Instructor

Jeff Orchard and Mohamed Hibat-Allah

Prerequisites

Goals

By the end of the course, students will be able to:

  • Write a program to simulate the activity of a network of neurons
  • Formulate a neural learning method as gradient-based optimization
  • Derive a neural learning method based on energy minimization
  • Encode and decode data from the activity of a population of neurons
  • Evaluate implementations, designs, and optimization strategies
  • Identify some commonalities between artificial neural networks and the brain

Textbooks

  • Theoretical Neuroscience, Dayan and Abbott, 2001 (UW library link)
  • Neural Networks and Deep Learning, Nielsen, 2017 (link)
  • Deep Learning, Goodfellow, Bengio and Courville, 2016 (link)
  • Neural Engineering, Eliasmith and Anderson, 2003 (MIT Press)

Evaluation (tentative)

The course will have several assignments, a midterm exam, and a final exam. Assignments will involve programming in Python.

For CS 479, the grade breakdown is:

  • 30% Assignments
  • 20% Midterm Exam
  • 50% Final Exam

For CS 679, the grade breakdown is:

  • 50% Assignments
  • 20% Midterm Exam
  • 30% Final Exam

Topics

Neuron Models

  • Neuron models, spiking vs. firing-rate
  • Activation functions
  • Synapses
  • Networks of neurons
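The topics above can be previewed with a minimal spiking-neuron simulation. This is an illustrative sketch of a leaky integrate-and-fire (LIF) model, not code from the course; the parameter values (time constant, threshold, input current) are invented for the example.

```python
import numpy as np

def simulate_lif(I, tau=0.02, v_th=1.0, v_reset=0.0, dt=0.001):
    """Simulate one LIF neuron driven by an input-current array I.

    Euler integration of dV/dt = (I - V) / tau, with a spike and
    reset whenever V crosses the threshold v_th.
    Returns the membrane-potential trace and the spike-time indices.
    """
    v = v_reset
    trace, spikes = [], []
    for t, i_t in enumerate(I):
        v += dt * (i_t - v) / tau   # Euler step of the membrane equation
        if v >= v_th:               # threshold crossing -> spike
            spikes.append(t)
            v = v_reset             # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold input produces regular spiking.
trace, spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Replacing the hard threshold with a smooth nonlinearity gives the firing-rate view of the same neuron, which is the other side of the spiking vs. firing-rate distinction above.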

Supervised Learning

  • Training, validation, and test sets
  • Universal approximation theorem
  • Cost functions
  • Gradient descent
  • Error backpropagation
  • Automatic differentiation
  • Overfitting and generalizability (regularization)
  • Optimization considerations (vanishing gradients, SGD)
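The core of the gradient-based view of learning can be shown in a few lines. This toy example, with an invented quadratic cost and an arbitrary learning rate, just illustrates the descent step that backpropagation supplies gradients for.

```python
import numpy as np

def grad_descent(grad, w0, lr=0.1, steps=100):
    """Minimize a cost by repeatedly stepping opposite its gradient."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad(w)   # gradient-descent update
    return w

# Toy cost C(w) = ||w - w*||^2 has gradient 2(w - w*); minimum at w*.
w_star = np.array([1.0, -2.0])
w_min = grad_descent(lambda w: 2 * (w - w_star), w0=[0.0, 0.0])
print(w_min)   # close to [1, -2]
```

In a neural network the gradient is computed by error backpropagation (or, in practice, automatic differentiation) rather than by hand, and is usually estimated on mini-batches (SGD).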

Vision

  • Your Visual System
  • Convolutional neural networks
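The operation at the heart of a convolutional layer is a small kernel slid across an image. A minimal single-channel sketch (the 8x8 image and the Sobel-style edge kernel are invented for illustration):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized window at (i, j)
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # vertical edge at column 4
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
response = conv2d(img, sobel_x)
print(response.max())                # strongest response at the edge
```

A CNN learns many such kernels per layer, loosely analogous to the oriented-edge detectors found in early visual cortex.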

Recurrent Neural Networks

  • Hopfield Networks
  • Backprop through time (BPTT)
  • minGRU
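A Hopfield network, the first topic above, can be sketched in a few lines of NumPy: store binary patterns with a Hebbian rule, then recover one from a corrupted cue. The pattern count, network size, and corruption level here are arbitrary choices for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; rows of `patterns` are +/-1 vectors."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, x, steps=10):
    """Iterate the (synchronous) sign-threshold update from cue x."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1               # break ties consistently
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 50))   # 2 stored patterns
cue = patterns[0].copy()
cue[:10] *= -1                                     # flip 10 of 50 bits
out = recall(train_hopfield(patterns), cue)
print((out == patterns[0]).mean())                 # fraction recovered
```

Each update step decreases (or leaves unchanged) an energy function, which is why the dynamics settle into the stored pattern; that energy view recurs in the unsupervised-learning topics below.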

Unsupervised Learning

  • Autoencoders
  • Vector embeddings

Advanced Autoencoders

  • Restricted Boltzmann Machines (RBMs)
  • Residual connections, Diffusion networks
  • Variational autoencoders (VAEs)

Adversarial Attacks

  • Targeted vs. untargeted attacks
  • FGSM
  • Defences: TRADES
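The FGSM step itself is a one-liner: perturb each input dimension by epsilon in the direction that increases the loss. The sketch below applies it to a toy linear classifier with invented weights; a real attack would use gradients from a trained network on an image.

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Untargeted FGSM: one signed step of size eps up the loss gradient."""
    return x + eps * np.sign(grad_x)

# Linear score s = w.x with logistic loss for true label y = +1:
# the gradient of the loss w.r.t. the input is -sigmoid(-s) * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
score = w @ x
grad_x = -(1.0 / (1.0 + np.exp(score))) * w
x_adv = fgsm(x, grad_x, eps=0.4)
print(w @ x, "->", w @ x_adv)   # the adversarial score is lower
```

Defences such as TRADES train against exactly these perturbations, trading off clean accuracy against robustness inside the epsilon-ball.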

Neural Engineering

  • Optimal linear decoding
  • Transformations, dynamics
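Optimal linear decoding can be sketched as a least-squares fit: given a population's tuning curves, find decoding weights so that a weighted sum of activities reconstructs the represented value. The rectified-linear tuning curves and population size below are made-up stand-ins for the course's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)                  # values to represent
gains = rng.uniform(0.5, 2.0, size=20)       # 20 neurons with random tuning
biases = rng.uniform(-1.0, 1.0, size=20)
encoders = rng.choice([-1.0, 1.0], size=20)  # preferred direction of each neuron

# Activity matrix: one rectified-linear tuning curve per neuron (column).
A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

# Optimal linear decoders: d = argmin ||A d - x||^2 (least squares).
d, *_ = np.linalg.lstsq(A, x, rcond=None)
x_hat = A @ d
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
print("RMSE:", rmse)
```

The same decoders, fed back through connection weights, let a recurrent population implement transformations and dynamics on the represented value.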

Advanced Topics (time permitting)

  • Biological backprop models
  • GANs