
How To Get Started with Deep Learning

Mar 25, 2020 · 5 minute read · deep learning, getting started

Photo by JESHOOTS.COM on Unsplash

I recently sent out a poll to people who subscribe to my email list and asked what they were most interested in learning.

About 86 percent said deep learning.

That blew my mind. I knew deep learning was a hot topic, but I had no idea just how interested people were in learning more.

So — I thought I would write up how I would start learning deep learning if I were to start today.

Deep Learning Specialization

Unsurprisingly, in my opinion, the best place to start is with Andrew Ng’s deep learning specialization.

Andrew Ng has an incredible gift for teaching and does a great job starting from the basics and working up to image and text processing using deep learning. The courses in this specialization are:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

What I love about his approach is that he helps you understand how to improve and structure your projects before going into more advanced architectures. I find this bottom-up approach very approachable for beginners. The specialization also does an excellent job introducing you to the concept of attention in neural networks, which is an extremely important idea for state-of-the-art architectures.

One of the downsides, in my opinion, is that the course is taught in TensorFlow. That isn’t a huge downside since TensorFlow has improved, but I much prefer PyTorch. What I did, and what I would recommend you do, is complete all the homework in PyTorch. Not only will this give you exposure to both TensorFlow and PyTorch code, it will also ensure you understand the concepts, because you won’t be able to lean too heavily on the provided code.
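To give a sense of what redoing an assignment in PyTorch looks like, here is a minimal sketch of a small classifier and training loop, roughly the kind of model the early assignments have you build from scratch. The layer sizes and random tensors are placeholders, not the actual course data.

```python
import torch
import torch.nn as nn

# A small fully connected classifier; layer sizes are placeholders.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for real data (e.g. flattened 28x28 images).
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))

for step in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backpropagation
    optimizer.step()             # gradient descent update
```

Writing this loop yourself, instead of filling in blanks in the provided notebooks, is exactly the kind of exercise that forces you to understand what each step does.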

Practical Deep Learning for Coders

Jeremy Howard and Rachel Thomas have done a lot to make deep learning more approachable.

They take what I would call a top-down approach: they start at a very high level and progressively fill in the details. I actually find this approach a bit confusing if you have no deep learning experience, which is why I recommend you start with Andrew Ng’s course.

You might think this course isn’t worth taking in addition to Andrew Ng’s, since it is also an introductory course, but the teaching style and the subject matter covered are actually fairly different. Some topics will certainly be review from Andrew’s course, but I think seeing them presented in a different way is very valuable.

This course also discusses deep learning in a very applied sense, focusing on techniques the instructors have found valuable when applying deep learning to real-world problems without Google-level compute.

The course uses a mixture of PyTorch and fastai, their own library built on top of PyTorch. Fastai is a very high-level library that lets you apply best practices and state-of-the-art models with only a few lines of code. While that is nice, I still prefer writing straight PyTorch, because it increases your understanding of what is happening underneath and tends not to be too difficult.
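To give a sense of how compact fastai code can be, here is roughly the quick-start image classifier from the fastai v2 documentation. Exact function names have shifted a bit between fastai versions, so treat this as a sketch rather than copy-paste code.

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pet dataset via fastai's helper.
path = untar_data(URLs.PETS) / "images"

# In this dataset, filenames starting with an uppercase letter are cats.
def is_cat(name): return name[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tune a pretrained ResNet with sensible defaults.
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

That is a complete, trained image classifier in a handful of lines, which is exactly the appeal and, for learning purposes, exactly why I still recommend writing the PyTorch underneath yourself.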

If you enjoy this course, you could also consider taking the second part of the course which is more advanced. The course basically walks you through how they built the fastai library, which requires you to dive more deeply into the code and fundamentals.

Implement a Paper

At this point, you should have a really solid grasp of the fundamentals of deep learning.

You’ve been taught in two different styles by two of the best teachers in the world.

I think where people usually get tripped up with deep learning is they don’t spend enough time really understanding the fundamentals. I’m not even referring to the mathematics behind the algorithms. I’m talking about the core concepts, building blocks, and issues. If you don’t feel like you fully grasped the concepts from the previous two courses, go back and review.

If you’re feeling good, move forward with implementing a paper.

This might sound scary, but it’s almost certainly not as scary as you think. In fact, one of the reasons I love PyTorch so much is that I find I can take the architecture from a paper almost as written, type it into Python using PyTorch, and it works.

One paper I would recommend is GoogLeNet. It is a bit older, so it won’t rely on unfamiliar building blocks; it is well written; and I don’t believe either course covers it in much depth.
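To make that concrete, here is a rough sketch of GoogLeNet’s core building block, the Inception module, in PyTorch. This is my own minimal rendering of the module described in the paper, not a full reproduction of the network.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """One Inception module: four parallel branches whose outputs
    are concatenated along the channel dimension."""

    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pool_proj):
        super().__init__()
        # Branch 1: 1x1 convolution
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction, then 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction, then 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: 3x3 max pooling, then 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# The "inception (3a)" configuration from the paper.
block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)  # [1, 256, 28, 28]
```

Stacking modules like this (plus the stem, auxiliary classifiers, and pooling layers) is essentially the whole network, which is part of why the paper makes such an approachable first implementation project.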

Sit down with your paper of choice and work through it until you understand it, convert it to code, and apply it to a standard dataset to make sure your implementation works. Not only will this help you understand the paper, it will also build your confidence to tackle other papers without fear.

Choose Your Own Adventure

The last step in this process is to just continue to find new papers and implement them. Deep learning is moving so quickly as a field that in order to stay on top of state-of-the-art architectures, you will need to be comfortable reading and implementing papers.

Use the confidence you built from implementing your first paper, and don’t be afraid to take the time to sit down and digest a new one. In my opinion, that is the best way to continue to learn. You’ll find that for the most impactful papers, others have written excellent blog posts that help you dissect them. Leverage those!

If you don’t find one, then write one after you understand it!

Then, slowly, over time, you will continue to build on your foundational deep learning knowledge until you find that others consider you an expert and are impressed by your depth of knowledge.

So while it is a process that takes time, it is a journey that I believe most can successfully undertake with dedication.

Interested in learning more about Python data analysis and visualization? Check out my course.
