
Machine Learning Monthly 💻🤖

Daniel Bourke

11th issue! If you missed them, you can read the previous issues of the Machine Learning Monthly newsletter here.

Hey everyone, Daniel here, I'm 50% of the instructors behind the Complete Machine Learning and Data Science: Zero to Mastery course. I also write regularly about machine learning on my own blog, as well as make videos on the topic on YouTube.

Welcome to the 11th edition of Machine Learning Monthly. A 500ish (+/-1000ish, usually +) word post detailing some of the most interesting things in machine learning I've found in the last month.

Since there's a lot going on, the utmost care has been taken to keep things to the point.

What you missed in November as a Machine Learning Engineer…


My work (in progress) 👇

A new deep learning with TensorFlow course

I've been putting together a code-first introduction to deep learning with TensorFlow course. And last week, the GitHub repo went public.

If you've done a beginner machine learning course, the upcoming deep learning with TensorFlow course will be a great follow-up.

Students of the course will get hands-on practice writing deep learning models with TensorFlow, learn common deep learning troubleshooting techniques, practice searching for how to solve a problem, and more.

The best places to get updates as the course gets ready to go live will be the GitHub repo above, my blog and my YouTube channel.

The best from the community 🙋‍♀️

Alvaro’s guide to serving PyTorch models with TorchServe

TorchServe is a tool for serving PyTorch models. In other words, it enables someone to send your model some information and have a prediction returned back to them.

In Alvaro's case, he trained a PyTorch computer vision model on a subset of data from the Food101 dataset, then used TorchServe, Docker and a few image processing libraries to demonstrate how the model classifies a sample image.

My favorite part is that he wrote the whole guide on GitHub, so the code and tutorial steps are available right there.
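If you're curious what "serving" looks like from the consumer's side, here's a minimal client sketch. Everything specific in it is an assumption for illustration: it presumes a TorchServe instance is already running on the default inference port (8080) with a model registered under the made-up name foodnet.

```python
# Hypothetical client call to a running TorchServe instance.
# Assumes a model registered as "foodnet" (made-up name) and
# TorchServe listening on its default inference port, 8080.
import requests

with open("sample_pizza.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predictions/foodnet",
        data=f.read(),
    )

print(response.json())  # e.g. class probabilities returned by the model's handler
```

The nice part of this setup is that the client needs zero PyTorch knowledge; the model, its weights and the pre/post-processing all live behind the HTTP endpoint.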

The best from the internet 🕸

TensorFlow for Mac & TensorFlow Recommenders

A few huge updates from the TensorFlow space recently.

What they are: TensorFlow Recommenders (TFRS), a TensorFlow package for building recommendation systems like the ones which power YouTube's discovery, Amazon's front page and more. And (much) faster TensorFlow performance on Mac (even on the new M1 chip!).

Why it matters: I remember trying to build a recommendation system for one of Australia's leading eCommerce stores. I started by piecing together examples from the web, then tailoring them to the specific problem I was working on (this is still a standard workflow for me). Well, it looks like TensorFlow Recommenders does basically everything I did but puts it into a nice pip-installable TensorFlow package.
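To give you a feel for the API, here's a minimal two-tower retrieval model sketch using TFRS. Treat it as illustrative rather than canonical: the feature names "user_id" and "item_id" and the vocabularies are assumptions about your data, not anything prescribed by the library.

```python
# Minimal two-tower retrieval sketch with TensorFlow Recommenders.
# Assumes you have string vocabularies `user_ids` and `item_ids` and a
# tf.data.Dataset of candidate item ids (`items_ds`) from your own data.
import tensorflow as tf
import tensorflow_recommenders as tfrs

class TwoTowerModel(tfrs.Model):
    def __init__(self, user_ids, item_ids, items_ds):
        super().__init__()
        embedding_dim = 32
        # User tower: map a user id string to an embedding vector.
        self.user_model = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(vocabulary=user_ids),
            tf.keras.layers.Embedding(len(user_ids) + 2, embedding_dim),
        ])
        # Item tower: same idea for items.
        self.item_model = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(vocabulary=item_ids),
            tf.keras.layers.Embedding(len(item_ids) + 2, embedding_dim),
        ])
        # The retrieval task bundles the loss and top-K retrieval metrics.
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(
                candidates=items_ds.batch(128).map(self.item_model)
            )
        )

    def compute_loss(self, features, training=False):
        user_embeddings = self.user_model(features["user_id"])
        item_embeddings = self.item_model(features["item_id"])
        return self.task(user_embeddings, item_embeddings)
```

From there it's standard Keras: compile with an optimizer, call fit() on a dataset of (user_id, item_id) interaction pairs, then use the trained towers to look up nearest-neighbour items for a user.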

And as for the faster TensorFlow performance on Mac, put it this way: I've written plenty of TensorFlow code but have never executed it on the hardware I'm typing these lines on (I use a Mac with a beefy GPU, but run TensorFlow code on Colab/Google Cloud GPUs). So seeing that Apple's tensorflow_macos fork of TensorFlow 2.4 will let you run TensorFlow code on Mac CPUs, GPUs and Apple's M1 chip has me eager to experiment. More specifically, to run TensorFlow code locally on my Mac (and make sure it works) then scale up to more compute on cloud services when needed. I'll be testing out the new fork in the coming weeks.
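Going off the fork's README at the time of writing (it's pre-release software, so double-check the repo before relying on this), opting into a specific device looks something like:

```python
# Device selection in Apple's tensorflow_macos fork (TensorFlow 2.4).
# Pre-release: verify against the fork's README before using.
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

mlcompute.set_mlc_device(device_name="gpu")  # "cpu", "gpu" or "any"

# After that, regular Keras code runs as usual.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
```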

Yann LeCun’s NYU deep learning course

What it is: Have you ever run a convolutional neural network? Well, Yann LeCun invented them. And his deep learning course, taught by Yann himself, Alfredo Canziani, Mark Goldstein and Zeming Lin at NYU's Center for Data Science is available online for free.

Why it matters: This is one of the most comprehensive introductions to deep learning I've ever come across. You'll need a little experience (6-12 months) in machine learning to go through it, but with a dedicated 3 months of effort, the studious machine learner would leave the course with a great overview of many of the most fundamental deep learning techniques.

Note: As with any course resource, there are always going to be more resources out there than you can handle at any given time. Best to choose one which suits the current problem you're working on and stick with it.

Machine Learning created art

What it is: Is that beautiful yet strange picture by Picasso? Or maybe Matisse? Or perhaps van Gogh? No, it's machine learning.

ML x Art (mlart.co) is a curated display of artworks created with machine learning by Emil Wallner.

Some of my favorites include listening to LSTM-generated scriptures spoken by synthesized voices played over the top of unsecured surveillance cameras, or carving body parts onto canvas using PoseNet.

Why it matters: Perhaps your art teacher told you in year 8 your drawing of a Ninja Turtle didn't fulfill the criteria of what a good drawing should look like. But now you can write code to unlock the hidden patterns of the world's best artworks. Maybe it's time to reinvigorate that inner artist. ML x Art is a great example of the broadness of use-cases for machine learning.

The most important ideas in deep learning

What it is: From AlexNet to Dropout to Encoder-Decoder Networks with Attention to Adam to GANs to ResNets to Transformers, deep learning has moved fast in the past few years. If you're not sure what some of the terms are in the previous sentence, they're what Denny Britz refers to as some of the most important catalysts in the widespread use of deep learning.

Why it matters: How do you tell if something is a good idea or not?

If it's popular?

If it works?

How about if it's stood the test of time?

There's too much going on in the world of deep learning for any one individual to keep up with it all.

So what should you do?

Instead of paying attention to everything, look for the ideas which have stood the test of time and then use those as a base to branch out from. If you're not sure where to start, Denny Britz's Deep Learning's Most Important Ideas — A Brief Historical Review post has you covered.

A great research project would be to replicate these concepts from scratch on your own.
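To make that concrete, here's what replicating one of those ideas from scratch can look like: inverted Dropout (one of the techniques on Denny's list) in plain NumPy. A toy sketch, not a reference implementation:

```python
# From-scratch sketch of inverted Dropout: randomly zero a fraction of
# activations during training and scale the survivors so the expected
# activation stays the same at inference time.
import numpy as np

def dropout(x: np.ndarray, rate: float = 0.5, training: bool = True) -> np.ndarray:
    if not training or rate == 0.0:
        return x  # dropout is a no-op at inference time
    mask = (np.random.rand(*x.shape) >= rate).astype(x.dtype)
    return x * mask / (1.0 - rate)

activations = np.ones((2, 4))
print(dropout(activations, rate=0.5))  # ~half the entries zeroed, the rest scaled to 2.0
```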

Continuous Delivery for Machine Learning and DevOps for ML Data (MLOps)

What it is: In software development, continuous delivery (CD) involves producing and releasing software in short cycles. For example, developing an update for an app and releasing it to customers immediately to see if it works. DevOps (developer operations) has a broader scope and deals with all of the practices to enable CD (and other forms of development) to happen.

Where traditional software might have only code to maintain, machine learning systems usually involve maintaining code, data and models, which, in turn, often makes things significantly harder.

In Continuous Delivery for Machine Learning, the authors discuss the process of bringing CD to machine learning applications.

DevOps for ML Data by Tecton.ai discusses the importance of maintaining a disciplined way to store data changes and updates.
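Neither post prescribes one specific tool, but the core discipline is easy to illustrate: record exactly which data and model files produced each result, so any model can be traced back to its inputs. Here's a toy sketch (my own illustration, not from either post):

```python
# Toy data/model lineage logger: fingerprint the exact files used in a
# training run so results can be traced back and reproduced.
import hashlib
import json
import time
from pathlib import Path

def file_fingerprint(path: str) -> str:
    """Short SHA-256 hash of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:12]

def log_training_run(data_path: str, model_path: str, metrics: dict) -> None:
    record = {
        "timestamp": time.time(),
        "data_hash": file_fingerprint(data_path),
        "model_hash": file_fingerprint(model_path),
        "metrics": metrics,
    }
    with open("lineage_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: log_training_run("train.csv", "model.pt", {"accuracy": 0.91})
```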

Why it matters: Model building in machine learning is becoming more and more accessible. However, every machine learning practitioner knows there are more pieces to the puzzle than just model building.

You've got:

  • Monitoring
  • Testing
  • Data collection
  • Data verification
  • Serving
  • Resource management
  • ...and more

My experience is in model building. But since I want to build machine learning applications, I'm finding myself spending a lot of time researching and experimenting with MLOps practices. Stay tuned for more on this in the future.

In the meantime, the above two posts are a great introduction to the things you'll need to take care of to get your machine learning applications into the hands of others.

Papers: How hard is your data to model?

What it is: If you wanted to improve your machine learning model, there's always been a simple way: get more data.

But what if you could figure out what kind of data to get more of?

As in, not just more data, but better data. And how about even sorting out your existing data into poor samples and good samples?

A couple of papers I've been digging into on this topic are:

  • Estimating Example Difficulty Using Variance of Gradients (VOG)
  • Data Valuation using Reinforcement Learning (DVRL)

Both show different approaches to measuring how valuable a single sample of data is or how difficult a sample of data is to learn.

For example, in Estimating Example Difficulty Using Variance of Gradients, the authors show examples of using their data difficulty score VOG (variance of gradients) to see which images were hardest for a model to learn. In samples with a low VOG score (easier to learn), the images were often clear with the target object in the centre, while samples with a high VOG score (harder to learn) were often occluded or zoomed in.


Example of easier images of a magpie versus harder images for a model to learn according to their VOG score. Source: https://arxiv.org/pdf/2008.11600.pdf
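For a feel of how a score like this might be computed, here's a rough sketch of the VOG idea. Note it's a simplification: the paper uses gradients of the pre-softmax output for the true class across training checkpoints, whereas this toy version uses the loss gradient, and it assumes you've saved a list of Keras model checkpoints yourself.

```python
# Simplified VOG-style score: variance, across training checkpoints, of
# the gradient of the loss with respect to a single input image.
# `checkpoints` is assumed to be a list of Keras models (same architecture,
# saved at different training steps) that output class probabilities.
import numpy as np
import tensorflow as tf

def vog_score(checkpoints, image, label):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    grads = []
    for model in checkpoints:
        x = tf.convert_to_tensor(image[None, ...])  # add a batch dimension
        with tf.GradientTape() as tape:
            tape.watch(x)  # needed: x is a constant, not a variable
            loss = loss_fn(tf.constant([label]), model(x))
        grads.append(tape.gradient(loss, x).numpy()[0])
    # Per-pixel variance across checkpoints, averaged into one score.
    return float(np.mean(np.var(np.stack(grads), axis=0)))
```

Higher scores flag images whose input gradients kept changing during training (harder examples); lower scores flag stable, easy ones.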

Why it matters: We're now getting to the point where the largest models aren't limited by compute power; they're limited by the amount of data we can feed them.

So in light of The Bitter Lesson, perhaps it's worth researching better methods of figuring out which data samples offer the best bang for buck, rather than just feeding our models as much data as possible.

After all, not everyone has Google-scale compute power at their disposal. Imagine you're a machine learning startup with limited resources: instead of spending as much as possible on compute power to train ML models, you could use methods like DVRL or VOG to figure out which data samples you should spend most of your time on.

Funny: That's not the ball...

What it is: During a recent soccer match, an AI-powered camera spent a great deal of time focusing on a bald linesman's head instead of the ball. A video by James Felton shows the camera jumping around trying to focus on the ball downfield whilst continually coming back to zero in on the linesman's head.

Why it matters: This is a great example of how fragile AI systems can be. You might train one of the best soccer ball classifiers in the world, but one bald head goes and ruins everything 😂.

Perhaps the developers who designed the AI-powered camera system might benefit from reading the data quality papers mentioned above. I wonder how many referees with bald heads are in their training set? Maybe a few examples could be added to the "not soccer ball" class.

You really never know how your model's going to go until it's deployed in the wild...


See you next month!

What a massive month for the ML world in November.

As always, let me know if there's anything you think should be included in a future post. Liked something here? Tell a friend!

In the meantime, keep learning, keep creating.

See you next month,

Daniel www.mrdbourke.com | YouTube

By the way, I'm a full time instructor with Zero To Mastery Academy teaching people Machine Learning in the most efficient way possible. You can see a couple of our courses below or see all Zero To Mastery courses by visiting the courses page.
