Lessons learned
I have done A LOT of teaching on topics related to data science and machine learning over the past three years. As a PhD student at the Center for Social Data Science, I developed course material for the upcoming Master's degree in Social Data Science at the University of Copenhagen. I produced lectures and exercises on programming, statistics, machine learning, deep learning, and network science, and taught them to multiple batches of students.

Early in my PhD, I also took a side job as an external lecturer at DIS, a small independent institution that offers university courses to US undergraduates studying abroad in Scandinavia. My first course was 'Computational Analysis of Big Data', and I was given more or less complete freedom over what I taught and how I taught it. I used this as an opportunity to design the type of data science course I had always wanted to take. Later I started teaching 'Artificial Neural Networks and Deep Learning', and applied very much the same formula.

Key principles

Over time, my principles for teaching have evolved a bit. I have experimented with lots of things that didn't work very well, and borrowed ideas from my mentors and colleagues. Most rewarding (and painful), though, has been trying out the things my students suggested in their post-semester feedback. The key principles I have found to be most effective are:

Effective tools

Beyond my teaching philosophy, I have also settled on a number of tools that I find make teaching easier and better:

"Hall of fame" projects

Every semester I'm blown away by the quality of some of the projects my students turn in. Most come into my courses with no practical training in data science or coding in Python, so seeing, for example, a blog post get featured on Towards Data Science is a pure delight for me. Below I have listed some of my favorites from over the years, in no particular order: