
Machine Learning Monthly Newsletter 💻🤖

Daniel Bourke

30th issue! If you missed them, you can read the previous issues of the Machine Learning Monthly newsletter here.

Daniel here, I'm 50% of the instructors behind Zero To Mastery's Machine Learning and Data Science Bootcamp course and our new TensorFlow for Deep Learning course.

I also write regularly about machine learning on my own blog and make videos on the topic on my YouTube channel.

Enough about me!

Welcome to this month's edition of Machine Learning Monthly. A 500ish (+/-1000ish, usually +) word post detailing some of the most interesting things on machine learning I've found in the last month.

What you missed in June as a Machine Learning Engineer…

My work 👇

Benchmarking PyTorch running on Apple Silicon GPU(s) (collaboration with Alex Ziskind)

Last month I showcased how to set up PyTorch to run on your Apple Silicon Mac GPUs (update: PyTorch 1.12 is out, so you no longer need the nightly versions).

This month Alex Ziskind and I teamed up to speed test a bunch of our Apple Silicon machines to see how fast the new versions of PyTorch are.

The results: PyTorch on the Apple Silicon GPU offers a fantastic speed-up and is great for experimenting, but it's not without bugs.

My brand new course is launching this week!

Sign up for the newsletter above and maybe we'll send you a special discount code when it launches 😉.

From the Internet 👇

This month’s theme is creating embeddings for everything.

What’s an embedding?

An embedding is a learnable representation of data.

The key word being learnable. Rather than representing a piece of data as a static collection of numbers (a tensor), an embedding can be updated over time and continually improved to better represent the data.

An embedding is usually a vector and you’d usually have one vector per piece of data.

For example, an image of pizza might be represented by the numbers [0.24, 0.45, 0.33...] (I’ve made these numbers up just to give an example; an embedding can be as long or short as you like, and longer usually means a better representation at the cost of more compute).
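To make "learnable" concrete, here's a tiny illustrative PyTorch sketch (sizes and item names are made up): an embedding layer is just a lookup table of vectors that gradient descent can update during training.

```python
import torch

# A lookup table of 3 items (say pizza, sushi, steak), each represented
# by an 8-dimensional learnable vector (sizes made up for illustration).
embedding = torch.nn.Embedding(num_embeddings=3, embedding_dim=8)

pizza_idx = torch.tensor([0])
print(embedding(pizza_idx))  # the current vector for "pizza"

# The vectors are model parameters, so gradient descent updates them
# during training. That's what makes the representation "learnable".
print(embedding.weight.requires_grad)  # True
```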

Once you’ve turned all of your data into embeddings, or learnable vectors, you can start to perform similarity methods on them such as the dot product, cosine similarity or nearest neighbour search to find similar examples.
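Here's what those similarity measures look like with plain NumPy (the vectors are made up, as above):

```python
import numpy as np

# Two made-up embeddings (values are illustrative only).
pizza = np.array([0.24, 0.45, 0.33])
pasta = np.array([0.26, 0.41, 0.30])

# Dot product: higher = more similar (for unnormalized vectors,
# magnitude matters too).
dot = np.dot(pizza, pasta)

# Cosine similarity: the dot product of L2-normalized vectors,
# always in [-1, 1].
cos_sim = np.dot(pizza, pasta) / (np.linalg.norm(pizza) * np.linalg.norm(pasta))

print(dot, cos_sim)
```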

This creates a forever-improving way to search your data.

Why?

Because if your data is in the form of learnable embeddings, these representations can be continually updated and improved over time.

How does this help?

Imagine searching for an article with the text β€œThe best machine learning resources”.

A simple approach would be to return articles with the exact title β€œThe best machine learning resources”.

Another approach would be to turn the articles as well as the search query into embeddings (numerical representations) and then see which article embeddings match the search query embeddings the best.
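As an illustrative sketch (not necessarily what a production search system would use), the sentence-transformers library makes this approach only a few lines of Python. The model name below is just one common choice:

```python
from sentence_transformers import SentenceTransformer, util

# Model name is an assumption; any sentence embedding model would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

articles = [
    "The best machine learning resources",
    "Top 10 pizza recipes",
    "A curated list of ML courses and books",
]
query = "The best machine learning resources"

# Turn the articles and the query into embeddings.
article_embeddings = model.encode(articles, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every article.
scores = util.cos_sim(query_embedding, article_embeddings)
print(scores)  # the ML-related articles should score highest
```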

This is only the tip of the iceberg when it comes to the idea of creating embeddings for everything, but it’s something I’ve been coming across more and more.

Just like turning data into tensors, once you get the idea of embeddings in your head, it never leaves.

The first two resources discuss this further.

1. Semantic search with embeddings: index anything by Romain Beaumont

In this article, Romain Beaumont, one of the founders of Laion AI, shares the idea of applying semantic search to almost anything.

Semantic search means using natural language to search for what you mean, rather than matching the exact text of your search.

For example, searching images themselves for β€œblue dress” rather than images explicitly labeled with β€œblue dress”.


Semantic search demonstration for different applications: text search in images, image search for matching images, recommendation based on similar movies, recommendation based on time-specific parameters (Source).
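To make the "blue dress" example concrete, here's a hedged sketch using OpenAI's CLIP via Hugging Face transformers (one possible tool for text-to-image matching, not necessarily the stack Romain's article uses; the image file is a made-up example):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dress.jpg")  # hypothetical local image
inputs = processor(text=["a blue dress", "a red car"],
                   images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# Higher score = better text/image match.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```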

2. Find anything blazingly fast with Google’s vector search technology

The best thing about turning data into vectors: you can search them, fast.

Continuing on from the example of turning everything into embeddings above, Google Cloud offers a managed solution called Vertex AI Matching Engine.

You upload your embeddings (or create them on Google Cloud) and the Vertex AI Matching Engine makes them searchable/matchable.

This is the same backend powering Google Image Search, YouTube, Google Play and more.


Example demo of using Google Cloud’s Vertex AI Matching Engine to match similar embeddings for various images. Notice how quickly it matches similar images across a database of 2 million images. Try it out for yourself at MatchIt Fast.
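Matching Engine is a managed service, but if you want a feel for what vector search does, here's a minimal local sketch using FAISS (a different, self-hosted library, used here purely for illustration):

```python
import numpy as np
import faiss  # pip install faiss-cpu

# 10,000 random 64-dim "embeddings" standing in for a real database.
dim = 64
database = np.random.random((10_000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 search; ANN indexes trade accuracy for speed
index.add(database)

query = np.random.random((1, dim)).astype("float32")
distances, ids = index.search(query, 5)  # the 5 nearest neighbours
print(ids)
```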

3. Case study: How Instacart manages 600,000 shoppers with machine learning

Instacart is a grocery delivery and pickup service with 600,000 people shopping for others on the app.

Dealing with this many people couldn’t be done without machine learning.

The article above shares how the Instacart team developed their machine learning platform to deal with almost every feature of the app.

My favourite section is towards the end where they discuss:

  • Buy vs build: Should you build your own ML tooling or buy prebuilt tools?
  • Make it flexible: Use templates to build from (so things stay consistent) and Docker to manage environments (so what works on your machine also works on my machine).
  • Make incremental progress: Start small and scale when necessary. Make sure all members of the team are on board with the different processes.

4. Machine learning design patterns

As you start to get more familiar with machine learning code, you might notice how different frameworks implement similar features in different styles.

For example, Scikit-Learn has the classic model.fit() built-in.

Whereas if you’re using pure PyTorch, you might create a function called train() or implement a fit() method on your model class yourself.
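For example, a hand-rolled train() function in pure PyTorch often looks something like this (a minimal sketch; the names and structure are my own, not from any particular framework):

```python
import torch

def train(model, dataloader, loss_fn, optimizer, epochs=5):
    """A minimal PyTorch training loop, the hand-rolled cousin of model.fit()."""
    model.train()
    for epoch in range(epochs):
        for X, y in dataloader:
            y_pred = model(X)          # 1. forward pass
            loss = loss_fn(y_pred, y)  # 2. compute the loss
            optimizer.zero_grad()      # 3. clear old gradients
            loss.backward()            # 4. backpropagate
            optimizer.step()           # 5. update the weights
        print(f"Epoch {epoch}: loss {loss.item():.4f}")
```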

Some design patterns sit slightly outside of machine learning but are still relevant to the resources above: the proxy design pattern, for instance, is used to cache the most active queries.

For example, if lots of people are searching for β€œblue dress”, it makes sense not to recompute the search every single time. Instead, you could cache the search for β€œblue dress” and the next time someone searches β€œblue dress”, they get delivered the cached results instead of brand new results.

This saves resources on the backend (fewer search queries to compute) and speeds up the experience on the frontend (quicker results).
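In Python, a minimal caching proxy can be as simple as wrapping the real search function (the functions below are hypothetical stand-ins, not from any of the linked resources):

```python
import functools

def run_search(query: str) -> list[str]:
    """Stand-in for an expensive embedding search (hypothetical)."""
    print(f"Computing results for {query!r}...")
    return [f"result for {query}"]

@functools.lru_cache(maxsize=1024)
def cached_search(query: str) -> tuple[str, ...]:
    # The cache acts as a proxy in front of the real search:
    # repeated queries are answered without recomputing.
    return tuple(run_search(query))

cached_search("blue dress")  # computes the results
cached_search("blue dress")  # served straight from the cache
```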

What kind of design pattern should you use for your application?

Well, it depends.

Usually, it’s best to figure it out by trial and error. But being aware of the different design patterns that exist can be very helpful.

The following two posts discuss many popular and useful machine learning design patterns.

5. Andrew Ng’s new Machine Learning Course

Stop the press!!!!

After years of Andrew Ng’s original course being the gold standard for anyone new to machine learning, the OG ML educator is back with a new and improved version.

And this time everything’s coded in Python instead of MATLAB or Octave (I never got into these languages).

Andrew Ng’s courses are what got me started in machine learning and I’m stoked to see the newest iteration opening up to the public.

6. OCR in the Browser with TensorFlow.js

Optical character recognition (OCR) is the process of identifying text characters with computer vision.

For example, scanning a driver’s license, reading a passport, or taking a picture of a credit card and retrieving the numbers.

I’m a big fan of models running in the browser.

Because it often means a (potentially) faster experience, and the data is kept private.

The speed comes from the data not having to be sent across the internet to a server for inference.

And the privacy comes from the data never leaving the browser.

Everything happens on the person’s device.

Until recently, deploying complex machine learning models such as OCR models (models with several stages) to the browser has been quite a challenge.

But the team at Mindee has shared how they deployed a version of their OCR model called DocTR in the browser using TensorFlow.js (a version of TensorFlow in JavaScript, the programming language of the web).

I tried it out on a screenshot of last month’s Machine Learning Monthly and the results turned out pretty good.


OCR in the browser demo with TensorFlow.js by Mindee.
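The browser demo runs on TensorFlow.js, but docTR also ships a Python API. If you'd rather try the model locally, the quickstart looks roughly like this (the file name is a made-up example):

```python
# pip install python-doctr (plus a backend such as PyTorch)
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

model = ocr_predictor(pretrained=True)  # detection + recognition stages

doc = DocumentFile.from_images("newsletter_screenshot.png")  # hypothetical file
result = model(doc)
print(result.render())  # plain-text reconstruction of the page
```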

7. Using spaCy to see if health supplements are worth it or a waste of time

spaCy is one of the best natural language processing (NLP) libraries on the market.

The use cases for NLP are almost endless.

What about seeing whether a health supplement is worth it or a waste of time by analysing its reviews?

In Healthsea: an end-to-end spaCy pipeline for exploring health supplement effects, machine learning engineer Edward Schmuhl showcases a technical approach that uses spaCy’s NLP capabilities to analyse user reviews of health supplements.
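If you've never touched spaCy, here's the smallest possible taste of a pipeline (nothing Healthsea-specific, just the basics the project builds on; the review text is made up):

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

review = "This magnesium supplement fixed my sleep but upset my stomach."
doc = nlp(review)

# Healthsea adds custom components on top of pipelines like this one;
# out of the box you get tokens, part-of-speech tags and dependencies.
for token in doc:
    print(token.text, token.pos_, token.dep_)
```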

8. Putting a custom object detection model into production by Alex

I’ve been loving reading along with Alex’s series on creating a custom object detection model from scratch, including data collection, labeling, synthetic data creation, data versioning and now deployment.

In his latest blog post, he shares ideas for how he’s going to deploy his custom redaction detection model (finding locations in a document that have been blanked out) into production so others can use it.

This is one of my favourite machine learning blog post series on the internet right now.

A fantastic example of learning MLOps by doing.

9. Chip Huyen’s new book: Designing Machine Learning Systems

You’ve just started learning machine learning, now you’d like to start building your ideas.

How do you do it?

The best option is to just start.

Figure things out on the way.

But while you’re figuring it out, reading Chip Huyen’s Designing Machine Learning Systems book along the way will help tremendously.

Chip is one of my favourite people in the world of ML.

And this book is based on years of experience as well as talking to some of the best machine learning practitioners in the business.

I’ve got a copy for myself on the way.

Once it arrives I’ll be sure to share what I’ve learned.


You can buy the book online or see an outline on GitHub.

A couple of cool Tweets

Let’s finish off with a couple of fun things from Twitter.

#1: Jeremy Jordan shares his high level overview of MLOps


I love the simplicity of this.

You could see each section being a collection of Python scripts, each with a single goal.

#2: ClearBuds: Wireless Binaural Earbuds for Learning-Based Speech Enhancement

One of the best ML demos I’ve ever seen. If you thought noise-cancelling headphones were getting good, check these out.

See you next month!

What a massive month for the ML world in June!

As always, let me know if there's anything you think should be included in a future post.

In the meantime, keep learning, keep creating, keep dancing.

See you next month, Daniel

www.mrdbourke.com | YouTube

By the way, I'm a full-time instructor with Zero To Mastery Academy teaching people Machine Learning in the most efficient way possible. You can see a couple of our courses below or check out all Zero To Mastery courses.

More from Zero To Mastery

ZTM Career Paths: Your Roadmap to a Successful Career in Tech

Whether you’re a beginner or an experienced professional, figuring out the right next step in your career or changing careers altogether can be overwhelming. We created ZTM Career Paths to give you a clear step-by-step roadmap to a successful career.

Top 7 Soft Skills For Developers & How To Learn Them

Your technical skills will get you the interview. But soft skills will get you the job and advance your career. These are the top 7 soft skills all developers and technical people should learn and continue to work on.

Python Monthly Newsletter 💻🐍

31st issue of Andrei Neagoie's must-read monthly Python Newsletter: args, kwargs, and classes, CTX, and decorators for Data Science. All this and more. Read the full newsletter to get up-to-date with everything you need to know from last month.