Hey everyone! Daniel here, I'm 50% of the instructors behind the Complete Machine Learning and Data Science: Zero to Mastery course (we've nearly hit 10,000 students enrolled in the course!). I also write regularly about machine learning on my own blog and make videos on the topic on YouTube.
Welcome to the 2nd edition of Machine Learning Monthly. A 500ish word post detailing some of the most interesting things in machine learning I've found in the last month. If there is enough interest, I will keep doing these every month, so please share it with your friends!
If you missed it, you can read the previous and future issues of the Machine Learning Monthly newsletter here.
Since there's a lot going on, the utmost care has been taken to keep things to the point.
Being a Machine Learning Engineer is a fantastic career option and Machine Learning is now one of the fastest growing job markets (including Data Science). Job opportunities are plentiful, you can work around the world, and you get to solve hard problems. However, it’s hard staying up to date with the ever-evolving ecosystem.
This is where this newsletter comes in. Every month, it’ll contain some of my favourite things from the industry, keeping you up to date and helping you stay sharp without wasting your time.
The actual title of this article is Teach Yourself Programming in Ten Years, but I've replaced programming with machine learning. Machine learning has been around since the dawn of computers.
And with all of the online resources out there (including the Zero to Mastery Machine Learning course), you can get started with machine learning as quickly as you can load a browser.
But that doesn't make it any easier to learn.
Yes, you can get started as soon as you choose to. But remember learning any skill worth learning takes time. Peter Norvig reminds us of this.
I used to want to get a job at a large tech company using artificial intelligence (AI). But then I decided I'd prefer to build my own. Something in the health tech space.
This article by Andreessen Horowitz (a venture capital firm that invests in technology companies) discusses some of the roadblocks you or I might run into if/when we start AI-driven businesses.
The key point being AI businesses differ from normal software businesses because they've got an ever-moving target. Data.
This isn't to say normal software businesses don't use data, of course they do. But AI-driven businesses rely on it.
Some of the questions it raises include:
All of these are tied back to the business. A model living in a Jupyter Notebook may provide great results on the screen, but until it's deployed and out in the wild, it's not really offering business value.
Machine learning is all about experimenting. Especially in the beginning, it's about figuring out what works and what doesn't. Pure trial and error.
What will hold you back the most is the time between your experiments.
There's a reason why I sound like a broken record saying "if in doubt, run the code." It's a reminder. You can spend hours trying to understand something by looking at it and thinking about it, but it won't be until you actually run it and see what's going on that you truly understand it.
This wonderful article by Radek Osmulski talks about some of the best ways you can minimise your time between experiments.
In short:
It's always good to see how machine learning is used in production and the steps it took to get there.
Amenity Detection and Beyond — New Frontiers of Computer Vision at Airbnb by Airbnb's Data Science team walks through a proof of concept they built for using computer vision to detect amenities in photos.
If you're wondering what an amenity is, think swimming pool, coffee maker, oven, kitchen table.
When a user uploads photos to Airbnb's platform to list their property, a computer vision model looks at the pictures and adds metadata to the listing describing the amenities the property has. This not only saves the user time when setting up their listing, it also allows people to search for places with specific items.
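If you're curious what something like this could look like in code, here's a minimal sketch (my own toy version, not Airbnb's actual pipeline): run a COCO-pretrained detector from torchvision over a listing photo and map a few of its classes to amenity tags. The class-index mapping and file name are my own assumptions.

```python
# A toy amenity detector: COCO-pretrained Faster R-CNN -> amenity tags.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Map a handful of COCO class indices to amenity names (assumed subset).
COCO_TO_AMENITY = {65: "bed", 67: "kitchen table", 70: "toilet",
                   72: "tv", 79: "oven", 82: "refrigerator"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_amenities(image_path, score_threshold=0.7):
    """Return the set of amenity tags found in a listing photo."""
    image = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    return {COCO_TO_AMENITY[label.item()]
            for label, score in zip(prediction["labels"], prediction["scores"])
            if score > score_threshold and label.item() in COCO_TO_AMENITY}

# e.g. detect_amenities("listing_photo.jpg") -> {"oven", "kitchen table"}
```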
The article is not only a great description of machine learning being used in the wild, it's so well written it inspired me to create my own project replicating it. I decided to spend 42 days (6 weeks) rebuilding it with 2020-level tools. Today is day 13/42.
The video below gives an overview of the project.
Andrej Karpathy makes it back for the 2nd month in a row. This time with an article I've tagged as 'must read' for anyone getting into machine learning.
A Recipe for Training Neural Networks goes through a set of steps you can (and should) refer to before you kick off a training run.
My favourite?
Step 1. Become one with the data.
If you plan on training neural networks, I'm assigning you this article as required reading.
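To make step 1 concrete, here's the kind of thing you might do before touching a model (a minimal sketch with assumed variable names, not Karpathy's code): plot a handful of raw samples next to their labels and check the class counts for imbalance.

```python
# A minimal "become one with the data" sketch: eyeball samples + labels
# before any training run. `images` and `labels` are assumed to be
# array-like collections of image arrays and class labels.
import collections
import matplotlib.pyplot as plt

def inspect(images, labels, n=16):
    """Plot n samples with their labels, then print class counts."""
    fig, axes = plt.subplots(4, 4, figsize=(8, 8))
    for ax, image, label in zip(axes.flat, images[:n], labels[:n]):
        ax.imshow(image, cmap="gray")
        ax.set_title(str(label))
        ax.axis("off")
    plt.show()
    # Spot class imbalance and mislabelled data early.
    print(collections.Counter(labels))
```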
Weights & Biases is already an incredible tool for tracking your machine learning experiments. And now they've brought out another to help you improve them even further.
Sweeps is a platform to help you tune your model's hyperparameters at scale whilst having a full understanding of which hyperparameter is doing what.
Which hyperparameters matter most? Which combination performs best? These are the questions Sweeps helps to answer.
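To give you a feel for it, here's a minimal sketch of what a sweep might look like (the hyperparameter values, metric name and project name are my own placeholder assumptions, not from the article):

```python
# A minimal Weights & Biases sweep: define a search space, then let an
# agent run trials, logging results for each hyperparameter combination.
import wandb

sweep_config = {
    "method": "random",  # or "grid" / "bayes"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [1e-2, 1e-3, 1e-4]},
        "batch_size": {"values": [32, 64, 128]},
    },
}

def train():
    wandb.init()
    config = wandb.config  # this trial's hyperparameters
    # ... build and train your model with config.learning_rate,
    # config.batch_size, etc. ...
    wandb.log({"val_loss": 0.42})  # placeholder metric

sweep_id = wandb.sweep(sweep_config, project="my-project")
wandb.agent(sweep_id, function=train, count=10)  # run 10 trials
```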
There is nothing I love more than well-explained end-to-end project examples. And this one on OCR (optical character recognition) for receipts by Nanonets is one of the best I've seen.
Imagine you're building an application which uses computer vision to take a photo of a purchase receipt and then automatically documents the data in a structured way you can refer to later.
This article walks you through a pipeline which explains how you might do this with deep learning.
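To be clear, the article builds a proper deep learning pipeline. As a much simpler stand-in, here's a toy sketch using off-the-shelf Tesseract OCR plus a regular expression (the file name and the "total" pattern are assumptions):

```python
# A toy receipt parser: OCR the image, then regex-hunt for the total.
import re
import pytesseract
from PIL import Image

def parse_receipt(image_path):
    """Extract raw text from a receipt photo and guess the total."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # Look for a line like "TOTAL   $23.45" (pattern is an assumption).
    match = re.search(r"total\D*(\d+[.,]\d{2})", text, re.IGNORECASE)
    return {"raw_text": text, "total": match.group(1) if match else None}

# e.g. parse_receipt("receipt.jpg") -> {"raw_text": "...", "total": "23.45"}
```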
As you know, data is an integral part of machine learning. But once you've trained a model on a particular dataset, how do you know what it doesn't know?
In the case of self-driving cars, what if your model performed nearly perfectly on everything except bicycle riders at night?
First of all, this is a problem, since it's often edge cases like a bicycle rider at night which are most important.
Second, how would you figure this out? How would you know your model doesn't do too well on bicycle riders at night?
You couldn't really test this in the wild, since it may put people in danger.
Active learning to the rescue. Nvidia set up a system which would use several models to make predictions on the same scenarios and then figure out which scenarios the models disagreed on the most. They then used this knowledge of the most uncertain scenarios to improve their original models, seeing up to a 4x improvement in said scenarios!
Active learning is something I'm watching closely. And this article outlines an example of how it's being done at scale.
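Here's a toy sketch of the disagreement idea (my own illustration, not Nvidia's system): score each unlabelled scenario by how much an ensemble's predictions vary, then send the most uncertain ones off for labelling.

```python
# Ensemble-disagreement active learning in miniature.
import numpy as np

def disagreement_scores(ensemble_probs):
    """ensemble_probs: (n_models, n_samples, n_classes) softmax outputs.
    Returns one uncertainty score per sample: the variance of the
    predictions across models, averaged over classes."""
    return ensemble_probs.var(axis=0).mean(axis=-1)

# 5 models, 1000 unlabelled scenarios, 10 classes (shapes are assumptions).
probs = np.random.dirichlet(np.ones(10), size=(5, 1000))
scores = disagreement_scores(probs)
most_uncertain = np.argsort(scores)[-100:]  # send these for labelling
```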
I love this post just by the name of it. Does radioactive training data give your model superpowers? Or 3 eyes?
No to 3 eyes, but if knowing what data a model has been trained on and what data it hasn't is a superpower, then yes to that.
The Facebook AI Research (FAIR) team published an article last month detailing a way to 'radioactively tag' data.
In other words, if you're Facebook, and you've got billions of images and loads of machine learning models, how do you know which model was trained on what images?
This is where the radioactive part comes in. Radioactively tagging training data involves changing it ever so slightly so the change is identifiable but the overall structure of the data isn't lost.
In other words, imagine adding a QR code to every image in your training dataset, but in a way where the QR code doesn't visibly alter the image. Mind blown? Mine too. Check out the article for more details.
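For the curious, here's a toy, pixel-space illustration of the idea. FAIR's actual method plants the marker in the model's feature space and is far subtler; this sketch (entirely my own assumption) just adds a tiny secret perturbation to images, then tests for it by correlation.

```python
# Toy "radioactive" tagging: a secret, near-invisible mark in pixel space.
import numpy as np

rng = np.random.default_rng(seed=42)    # the "secret key"
mark = rng.standard_normal((224, 224, 3))
mark /= np.linalg.norm(mark)            # unit-norm random pattern

def tag(image, strength=2.0):
    """Add the mark; per-pixel change is ~0.005 on a [0, 1] image."""
    return image + strength * mark

def is_tagged(image, threshold=1.0):
    """Tagged images correlate strongly with the secret mark."""
    centered = image - image.mean()
    return float(np.sum(centered * mark)) > threshold

image = rng.uniform(0, 1, (224, 224, 3))
print(is_tagged(tag(image)), is_tagged(image))  # almost surely: True False
```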
Woah, we've nearly tripled our 500ish word limit. But this month was worth it.
As always, let me know if there's anything particular you liked or anything you think should be included in a future post.
In the meantime, keep learning, keep creating.
See you next month,
Daniel www.mrdbourke.com | YouTube
By the way, I'm a full time instructor with Zero To Mastery Academy teaching people Machine Learning in the most efficient way possible. You can see a couple of our courses below or see all Zero To Mastery courses by visiting the courses page.