
Machine Learning Monthly 💻🤖

Daniel Bourke

19th issue! If you missed them, you can read the previous issues of the Machine Learning Monthly newsletter here.

Daniel here, I'm 50% of the instructors behind the Complete Machine Learning and Data Science: Zero to Mastery course and our new TensorFlow for Deep Learning course! I also write regularly about machine learning on my own blog, as well as make videos on the topic on YouTube.

Welcome to the 19th edition of Machine Learning Monthly. A 500ish (+/-1000ish, usually +) word post detailing some of the most interesting things on machine learning I've found in the last month.

Since there's a lot going on, the utmost care has been taken to keep things to the point.

What you missed in July as a Machine Learning Engineer…

My work 👇

The video version of this post can be watched here.

The Zero to Mastery TensorFlow for Deep Learning course has been completed!

Since launching a couple of months ago, I've been putting together the rest of the materials for the Zero To Mastery TensorFlow for Deep Learning course.

And big news!

They've all been completed.

From TensorFlow Fundamentals to computer vision to natural language processing to time series forecasting to passing the TensorFlow Developer Certification, this course covers it all.

All taught in a code-first way.

Meaning I write code, you write code.

With the addition of the time series and passing the TensorFlow Developer Certification sections, the course now covers 60+ hours worth of content.

For more:

Jupyter Notebooks (where all the course materials live) can be hard to load/read/search. So now all of the Zero to Mastery TensorFlow for Deep Learning course materials are available as a beautiful online and searchable book!

From the community 🙌

Skytowner Tutorials and resources

I love seeing this. A small group of people deciding that even though a system works, they could probably do better. And so they did.

Skytowner seeks to solve the following problems for (currently) Python, MySQL, pandas, NumPy, Beautiful Soup and Matplotlib:

  • Reducing the bloat of technical resources
  • A lack of concrete and simple examples
  • Outdated resources
  • Poor UI and UX of documentation

1500+ and counting. Skytowner is making documentation and coding examples beautiful again.

How?

Through small and specific articles maintained actively through community feedback and revision.

One of my favourites is the Getting Started with Python section.

I'm a big fan of the way things look on a page. It's one of the reasons I turned my new TensorFlow course into a book. The course already looked good in notebook form but it looks far better in book form.

Saed Hussain's Towards Data Science articles

Saed has done an outstanding job putting together two tutorials for deploying machine learning models on Google Cloud.

The first runs through using Google Cloud Functions, which essentially turns your machine learning model into a Python function you can send data to and get some kind of return from.

The second shows you how to deploy a machine learning model within an application using Google App Engine.

  1. Machine Learning Model as a Serverless Endpoint using Google Cloud Functions
  2. Machine Learning Model as a Serverless App using Google App Engine

Thank you for the submissions, Saed!

From the Internet 🕸

1. Story time: Building a data team at a mid-stage startup by Erik Bernhardsson

Erik's got plenty of experience in the data world. He spent 6 years at Spotify and built the first version of its recommendation engine (think "Related Artists", Radio, Discover Weekly). After Spotify, he joined Better.com, a company helping people get better mortgages, and ran the technology team for another 6 years.

Erik's latest piece describes the narrative of a smaller company going from having a complete lack of data-driven decision making to developing a data-driven culture.

At the end of day 1 in the story, the protagonist finds data all over the place (no centralised location), that is, if the data even exists. The product team isn't sure of what the data team does and the data team isn't sure of what the product team does.

Everyone wants to do machine learning but no one wants to craft well-written SQL queries.

One quote I really enjoyed on the topic of hiring data generalists (at the start of forming a data team) was:

You have a bunch of new people in the team that are more excited. Most of them are people who know a bit of software engineering, a bit of SQL, but most importantly have a deep desire to find interesting insights in the data. You think of them as “data journalists” because their goal is to find “the scoop” in the data.

In essence, if you want to do machine learning, you have to be excited about data first. No data, no machine learning.

By the end of the story, a year into building the data team, thanks to a series of small experiments and data scientists taking it upon themselves to prove the value of their cool prototypes with small-scale demos (ship the things you'd like to see and keep working on them), the CEO is now pushing for teams to use data as the source of truth. And he's even more excited to let the data teams scale up their small demos.

Read the full story on Erik's blog.

2. End-to-end ML School

How did I not know this existed?

From Python introductions to time series analysis to optimization to choosing a model to matplotlib to career advice and more.

The end-to-end ML school breaks machine learning topics into small learnable chunks. Such as covering the softmax activation function with math, code and words.

Many of the modules come with a walkthrough video, code breakdown, concept explanation and math derivation.
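To get a taste of the style, here's a minimal sketch of the softmax activation function mentioned above, written in plain NumPy (my own example, not code from the course):

# Softmax turns a vector of raw scores (logits) into a probability distribution
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability (doesn't change the output)
    exps = np.exp(logits - np.max(logits))
    # Normalise so the outputs are positive and sum to 1
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659, 0.242, 0.099]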

Check out and read through the topics catalogue. I'm sure you'll learn something new.

3. Using Machine Learning to Enhance Speech Recognition in Cochlear Implants 🦻

This one hit close to home.

My nephew was born deaf and had surgery for a Cochlear Implant (CI), a device designed to aid with hearing, when he was young. I made a video of the first time he heard any sound.

What it is: Google researchers used neural networks to improve CI speech listening comfort (0 being very poor, 1 being very good) and intelligibility (the fraction of words in a sentence correctly transcribed). Through a combination of YAMNet (a MobileNet-based architecture designed to classify 521 categories of sounds) and Conv-TasNet (a neural network which separates speech from background sounds), the researchers placed second in the first ever CI hackathon.
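If you'd like to play with one of those building blocks yourself, YAMNet is publicly available on TensorFlow Hub. Here's a minimal sketch of running it over a waveform (this is the standard TensorFlow Hub usage pattern, not code from the research itself):

import numpy as np
import tensorflow_hub as hub

# Load the pretrained YAMNet model from TensorFlow Hub
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# YAMNet expects mono float32 audio at 16 kHz in the range [-1.0, 1.0]
waveform = np.zeros(16000, dtype=np.float32)  # one second of silence

# Returns per-frame class scores, embeddings and a log mel spectrogram
scores, embeddings, spectrogram = yamnet(waveform)
print(scores.shape)  # (number_of_frames, 521)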

Why it matters: Close to half a billion people worldwide are deaf or have some form of hearing impairment. Just try to imagine describing sound to someone who's never heard before. Using machine learning to improve existing technologies such as the CI has the potential to unlock what may be a completely new sense for some people. What a superpower.

Read the full story on Google's AI blog.

4. Learn from the best in the business: The Hugging Face NLP Course 🤗

I remember when BERT (a popular transformer architecture) dropped at the end of 2018. It grabbed the natural language processing (NLP) world by the shoulders and shook it senseless.

However, due to the architecture's size and complexity, it wasn't very accessible to those outside of large companies and large research teams.

Well, times have changed. Now you can leverage Hugging Face's Transformers library to run BERT (and dozens of other pretrained models) in a few lines of code.

# Using Hugging Face Transformers to import and instantiate a pretrained BertModel class
from transformers import BertModel

PRETRAINED_MODEL = "bert-base-uncased"  # the name of any pretrained BERT checkpoint
bert_model = BertModel.from_pretrained(PRETRAINED_MODEL)
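And if you don't need the raw model, the pipeline API goes from text to prediction in even fewer lines (a minimal sketch, the example sentence is mine):

from transformers import pipeline

# Downloads a default pretrained model for the task the first time it runs
classifier = pipeline("sentiment-analysis")
print(classifier("Machine Learning Monthly is a great read!"))
# e.g. [{'label': 'POSITIVE', 'score': ...}]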

What it is: A course from the people who brought you world-class NLP models in a few lines of code, covering Transformers, Datasets, Tokenizers (turning words into numbers for machines to understand them) and Accelerate (a framework for running PyTorch training scripts on any kind of device: single GPU, multi-GPU, TPU, you name it).

What's inside?

All of the above-mentioned topics are part of the Hugging Face 🤗 ecosystem.

The course is broken down into three main sections.

Outline of the HuggingFace Course, from Introduction to Diving In to Advanced.

Source: https://huggingface.co/course/chapter1

Why it matters: What better way to learn the thing than from the people that made the thing? Dozens of companies around the world are using models powered by Hugging Face's Transformers library to power their NLP applications. Yours might be next.

If you've got a good knowledge of Python and have done an introductory deep learning course such as the Zero To Mastery TensorFlow for Deep Learning course, you'll be ready to jump into the HuggingFace NLP course.

5. Demo your ML models with Gradio

What it is: Gradio helps you demo your machine learning models. With a few lines of Python code you can go from trained model to functional demo. And even better, the demos come with shareable links (these last for 24 hours on the free version and indefinitely with Gradio Hosted), making it easy for others to interact with your work.

Using Gradio to create an interactive demo of a food recognition model. Notice the shareable link: these last for 24 hours when you first create the demo and can be used by others. See the example code used to make the demo on Google Colab.
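To show how little code is involved, here's a minimal sketch of the Gradio Interface pattern (the greet function is a stand-in for your model's predict function):

import gradio as gr

# A stand-in for your model's predict function
def greet(name):
    return "Hello " + name + "!"

# Wire the function up to a textbox input and a textbox output
demo = gr.Interface(fn=greet, inputs="text", outputs="text")

# share=True generates the temporary public link mentioned above
demo.launch(share=True)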

Why it matters: Recall from Erik's story about building a data team above that the company moved from a lack of data-driven decision making to a data-driven culture through a series of small experiments and small-scale demos (it's much easier to understand something if you can see and use it).

6. GitHub Copilot: Blessing or Curse? 🤔 Blog post by Jeremy Howard

What it is: Jeremy Howard, creator and teacher of fast.ai, got access to GitHub's Copilot, an AI-powered pair programmer, and took it for a test ride. He wrote about his experience, noting the good and the bad.

The good?

GitHub Copilot is able to write multi-line Python functions (Jeremy tested it with Python) using a docstring as a prompt.
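To make that concrete, here's a hypothetical example of the workflow: you write the signature and docstring, and Copilot proposes a body (this completion is illustrative, not one of Jeremy's actual tests):

# You write the function signature and docstring as a prompt...
def remove_duplicates(items):
    """Return a new list with duplicate items removed, preserving order."""
    # ...and Copilot suggests a multi-line completion in this style:
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result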

However...

The bad side is that quite often the code it outputs, although it looks pretty good, doesn't work or has bugs.

This makes sense though. After all, Codex, the model which powers Copilot, only gives the correct answer 29% of the time.

Jeremy brings up some other points about how programming isn't only about writing code (the current version of Copilot tends to prompt a person to write more code). Meaning, programming also involves designing, debugging and maintaining code.

Why it matters: In the last machine learning monthly, we covered GitHub Copilot and its potential use cases. However, the current version isn't without its drawbacks.

For example, Jeremy notes Copilot suggested using regex for a problem which couldn't be solved with regex.

But let's be frank, it's a phenomenal innovation (blessing). However, if it takes advantage of humans' inherent biases for automation (we tend to prefer decisions made for us, rather than making our own) and anchoring (sticking with the first decision rather than changing it), and outputs poor code, it may turn out to do more harm than good (curse).

I'm also a very big fan of Jeremy's advice for improving at code:

The best ways to get better at coding are to read code and to write code.

Read the full blog post on the fast.ai blog.

7. Hand labelling data considered harmful (the perils of hand labelling data)

What it is: Shayan Mohanty (who has spent a decade leading data engineering teams) and Hugo Bowne-Anderson (data science educator and practitioner) explore the downsides of hand labelling data. For years, hand labelling has been the gold standard for creating machine learning models. However, it's also been known for years that hand labelling has its limits.

What is hand labelling?

Imagine you've got 100,000 pictures of dogs and cats. Hand labelling could be considered going through every single photo, numbering them and adding a text label, "cat" or "dog", to each.

Of course, such an endeavour would take quite a while (problem 1: time). So you'd probably want to get some help.

But this introduces another issue.

How do you make sure you and the other people labelling the images do so with the same mindset? (problem 2: collaboration)

And even if you do have a well thought-out set of instructions for labelling data, how could you know for sure all of the people you're working with won't introduce their own twists? (problems 3 and 4: bias and errors; even ImageNet, a popular computer vision benchmark, has ~5.8% label errors)

Finally, even if you do get your data labelled, a machine learning model isn't going to know what's right or wrong. It's only going to amplify what's in there: biases, errors, you name it.

Why it matters: Despite the problems with hand labelling, Shayan and Hugo share some good news.

Many alternatives, some of which we've covered in previous issues of Machine Learning Monthly, such as semi-supervised learning, self-supervised learning, transfer learning, active learning and synthetic data generation, are gaining ground on the original gold standard supervised approach.

They conclude that a thoughtful combination of supervised and non-supervised methods can often compensate for the time/performance tradeoff found with fully hand-labelled data.

They reiterate this with:

Rather than asking, “Should I hand label my training data or should I label it programmatically?”, ask, “Which parts of my data should I hand label and which parts should I label programmatically?”
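As a rough illustration of what labelling programmatically can mean, here's a minimal sketch of keyword-based labelling functions with a simple majority vote (the labels, keywords and function names are hypothetical, not from the article):

# Hypothetical labels for the dogs and cats example above
ABSTAIN, CAT, DOG = -1, 0, 1

# Each labelling function encodes one cheap heuristic and can abstain
def lf_mentions_meow(caption):
    return CAT if "meow" in caption.lower() else ABSTAIN

def lf_mentions_bark(caption):
    return DOG if "bark" in caption.lower() else ABSTAIN

def programmatic_label(caption):
    votes = [lf(caption) for lf in (lf_mentions_meow, lf_mentions_bark)]
    votes = [v for v in votes if v != ABSTAIN]
    # Abstain when the heuristics are silent or disagree (hand label these instead)
    if not votes or len(set(votes)) > 1:
        return ABSTAIN
    return votes[0]

print(programmatic_label("A kitten meowing at the camera"))  # 0 (CAT)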

Read the full blog post on the O'Reilly Radar blog.


See you next month!

What a massive month for the ML world in July!

As always, let me know if there's anything you think should be included in a future post.

Liked something here? Tell a friend!

In the meantime, keep learning, keep creating, keep dancing.

See you next month,

Daniel

www.mrdbourke.com | YouTube

PS. You can also see video versions of these articles on my YouTube channel (usually a few days after the article goes live).

By the way, I'm a full-time instructor with Zero To Mastery Academy teaching people Machine Learning in the most efficient way possible. You can see all Zero To Mastery courses by visiting the courses page.
