6-Step Framework To Tackle Machine Learning Projects (Full Pipeline)

Daniel Bourke

Welcome to part 1 in my 7-part series on Machine Learning and Data Science. Be sure to check out the other parts in the series, as they all lead into each other.

Machine Learning is a broad topic to get to grips with. Add in the fact that the media makes it sound like magic, and it can be easy to get overwhelmed.


Well, good news! The goal of this article is to help remove some of that overwhelm by giving you an overview of the most common types of problems Machine Learning can be used for.

We won’t stop there though.

I’ll also give you a framework for approaching the deployment of your future Machine Learning proofs of concept. This way you’ll have a plan of attack when starting new ML projects, and know how to put your models to use.

Sounds good? Alright, let's dive in…

Sidenote: The topics in this post are fairly complex - especially if you’re just starting out.

If you want to deep dive into this and learn Machine Learning from scratch, then check out my complete Machine Learning and Data Science course, or watch the first few videos for free.


It’s one of the most popular, highly rated Machine Learning and data science bootcamps online, as well as the most modern and up-to-date. Guaranteed.

You'll go from a complete beginner with no prior experience to getting hired as a Machine Learning Engineer this year, so it’s helpful for ML Engineers of all experience levels.

Want a sample of the course? Well, check out the video below:

Why listen to me?

My name is Daniel Bourke, and as you can see, I'm the resident Machine Learning instructor here at Zero To Mastery.

Originally self-taught, I worked for one of Australia's fastest-growing artificial intelligence agencies, Max Kelsen, and have worked on Machine Learning and data problems across a wide range of industries including healthcare, eCommerce, finance, retail and more.

I'm also the author of Machine Learning Monthly, write my own blog on my experiments in ML, and run my own YouTube channel, which has over 7.8 million views.

Phew!

With all that out of the way, let’s get back into the article and clear up some definitions and core concepts.

How are Machine Learning and Data Science different?

On paper, these two topics seem quite similar.

Data Science is the art of collecting data and then making (hopefully) data-driven decisions. This usually involves consulting for the C-suite or management on what the data means, and then advising on the best course of action.

Whereas Machine Learning is the process of using machines to find patterns in data, to then understand something more, or to predict some kind of future event.

The core difference is that the machine can usually pick up on multiple contributing factors that a casual observer might miss, and surface new insights from that data. Rather than just figuring out what is happening, we want to find out why it's happening, and then how to use that knowledge.

We can then apply that data model to a new project (deploy it), test that it works, and see its effects.

If you’re fairly new to ML, you’re probably wondering what all that means, so let’s break these concepts down.

Understanding the Machine Learning project pipeline

A Machine Learning pipeline can be broken down into three major steps.

  1. Data collection
  2. Data modeling, and
  3. Deployment

Each of these steps directly influences the others.

1) Data collection

How you collect data will depend on your problem. We'll look at examples in a minute, but a simple example could be as basic as customer purchases in a spreadsheet.


Again, we’ll look at this more in a second.

2) Modeling

Modeling refers to using a Machine Learning algorithm to then find insights from within your collected data.

What’s the difference between a normal algorithm and a Machine Learning algorithm?

Well, let me explain with an example. Imagine you wanted to cook a roast chicken.


A normal algorithm would be similar to a recipe because it's a set of instructions on how to turn X ingredients into that honey mustard masterpiece.

A Machine Learning algorithm is very different.

Instead of having a set of instructions, you start with the ingredients and the final dish ready to go, but you’re missing the instructions on how to make it.

The Machine Learning algorithm then looks at the ingredients and the final dish and works out the correct set of instructions by testing all possible options until it finds what works.


It's roast chicken by trial, error, and connecting the dots!

Better still, it can even improve on the original recipe because it's testing all permutations…


That being said, there are many different types of Machine Learning algorithms and some perform better than others on different problems, so you have to make sure you choose the correct option.

However, the overall premise remains the same - they all have the goal of finding patterns or sets of instructions in data.

3) Deployment

Finally, we have deployment.

This section of the pipeline is focused on taking your set of instructions (i.e. the data you’ve modeled and got insights from), and then putting it to use in an application.

This application could be anything from recommending products to customers on your online store, to a hospital trying to better predict disease presence.


Simple!

That pretty much covers the 3 main parts of the Machine Learning pipeline at a high level. Sure, the specifics of these steps will be different for each project, but the principles within each remain similar.

So now that we’ve covered the basics of how the pipeline works, let’s dive into how to create a proof of concept for deploying your data model.

How to build a Machine Learning proof of concept

[Image: the steps in a full Machine Learning project]

For the sake of simplicity, I’m going to assume that you’ve already collected your data, and are looking to build a Machine Learning proof of concept with it.

With that in place, we’re going to follow these 6 steps:

  1. Problem definition — What business problem are we trying to solve? How can it be phrased as a Machine Learning problem?
  2. Data — If Machine Learning is getting insights out of data, what data do we have? How does it match the problem definition? Is our data structured or unstructured? Static or streaming?
  3. Evaluation — What defines success? Is a 95% accurate Machine Learning model good enough?
  4. Features — What parts of our data are we going to use for our model? How can what we already know influence this?
  5. Modeling — Which model should you choose? How can you improve it? How do you compare it with other models?
  6. Experimentation — What else could we try? Does our deployed model do as we expected? How do the other steps change based on what we’ve found?

Let’s dive a little deeper into each.

Step 1: Problem definition — Rephrase your business problem as a Machine Learning problem

To help decide whether or not your business could use Machine Learning, the first step is to match the business problem you’re trying to solve to a Machine Learning problem.

The five major types of Machine Learning are:

  • Supervised learning
  • Semi-supervised learning (which I’ll cover in another post)
  • Unsupervised learning
  • Transfer learning, and
  • Reinforcement learning

However, the three most commonly used in business applications are supervised learning, unsupervised learning, and transfer learning, so let’s take a closer look at them.

Supervised learning

Supervised learning is called supervised because you have data and labels.

A Machine Learning algorithm tries to learn what patterns in the data lead to the labels, and the supervised part happens during training. If the algorithm guesses a wrong label, it tries to correct itself.

For example

Let’s say that you were trying to predict the risk of heart disease in a new patient.

To model this, you have the anonymized medical records of 100 patients as the data, along with the information on whether or not they had heart disease as the label.

A Machine Learning algorithm could look at the medical records (inputs) and whether or not a patient had heart disease (outputs) and then look for patterns to figure out what other elements in those medical records lead to heart disease.

Once you’ve got a trained algorithm, you could then pass the medical records (input) of a new patient through it and get a prediction of whether or not they have heart disease (output). It’s important to remember this prediction isn’t certain. It comes back as a probability.

The algorithm says, “Based on what I’ve seen before, it looks like this new patient's medical records are 70% aligned to those who have heart disease”.
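
To make that concrete, here's a minimal sketch of what this could look like in Python with scikit-learn. The file name and column names are hypothetical placeholders, and logistic regression is just one of many models you could pick:

```python
# A minimal supervised learning sketch of the heart disease example.
# The CSV file and column names are hypothetical; assume all feature
# columns are already numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("patient_records.csv")      # anonymized medical records
X = records.drop(columns=["has_heart_disease"])   # inputs (data)
y = records["has_heart_disease"]                  # outputs (labels, 0 or 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the "supervised" part: learn patterns -> labels

# Predictions come back as probabilities, not certainties
new_patient = X_test.iloc[[0]]
probability = model.predict_proba(new_patient)[0][1]
print(f"{probability:.0%} aligned with patients who have heart disease")
```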

Unsupervised learning

Unsupervised learning is when you have data but no labels.

For example

The data could be the purchase history of your online video game store customers. Using this data, you may want to group similar customers together so you can offer them specialized deals.

You could then use a Machine Learning algorithm to group your customers by purchase history, and then after inspecting the groups, you provide the labels.

Perhaps there’s a group interested in computer games, another group who prefers console games, and another who only buys discounted older games. This is called clustering.

What’s important to remember here is the algorithm did not provide these labels. It found the patterns between similar customers but by using your domain knowledge, you provided the labels.
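
Here's a rough sketch of what that clustering could look like with scikit-learn's KMeans. The purchase columns and segment names are made up for illustration; notice the algorithm only outputs group numbers, and a human supplies the labels:

```python
# A minimal unsupervised learning (clustering) sketch.
# The CSV file, columns, and segment names are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans

purchases = pd.read_csv("purchase_history.csv")  # e.g. spend per game category
features = purchases[["pc_spend", "console_spend", "discount_spend"]]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
purchases["group"] = kmeans.fit_predict(features)  # group numbers only: 0, 1, 2

# After inspecting each group, YOU provide the labels from domain knowledge
group_names = {0: "computer gamers", 1: "console gamers", 2: "bargain hunters"}
purchases["segment"] = purchases["group"].map(group_names)
```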

Transfer learning

Transfer learning is when you take the information an existing Machine Learning model has learned and adjust it to your own problem.

Why do this?

Well, training a Machine Learning model from scratch can be expensive and time-consuming. The good news is, you don’t always have to. When Machine Learning algorithms find patterns in one kind of data, these patterns can be used in another type of data.

For example

Let’s say you’re a car insurance company and wanted to build a text classification model to classify whether or not someone submitting an insurance claim for a car accident is at fault (caused the accident) or not at fault (didn’t cause the accident).

You could start with an existing text model, one which has read all of Wikipedia and has remembered all the patterns between different words, such as, which word is more likely to come next after another. Then using your car insurance claims (data) along with their outcomes (labels), you could tweak the existing text model to your own problem.

This way, the previously trained algorithm would be able to pick up on specific phrasing that would indicate one result or another.
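
As a rough illustration, here's what starting from a pre-trained text model could look like with the Hugging Face `transformers` library (assumed installed, along with PyTorch). DistilBERT is a real pre-trained model; the claim text is hypothetical, and the actual fine-tuning loop is omitted:

```python
# A transfer learning sketch: load a model that has already learned
# the patterns of English, then point it at a 2-class claims problem.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"  # pre-trained on large text corpora
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=2,  # at fault / not at fault
)

# One (hypothetical) claim, tokenized and passed through the model.
# Fine-tuning on your labeled claims would follow; it's omitted here.
inputs = tokenizer("The other driver ran a red light.", return_tensors="pt")
logits = model(**inputs).logits  # raw scores for the two classes
print(logits)
```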

Simple so far, but let’s take this a little further…

Classification, Regression, and Recommendation

If Machine Learning can be used in your business, it’s likely it’ll fall under one of these three types of learning.

However, you might still be stuck thinking of which ML problem best fits your business problem. With that in mind, let’s break these down further and ask some questions about your issue.

Classification

Do you want to predict whether something is one thing or another?

Such as whether a customer will churn or not churn, or whether a patient has heart disease or not?

Sidenote: Classification can involve more than two classes. Two classes is called binary classification, and more than two classes is called multi-class classification, while multi-label classification is when an item can belong to more than one class.

Regression 

Maybe you want to predict a specific number of something?

Such as how much a house will sell for, or how many customers will visit your site next month?

Recommendation

Or perhaps you want to recommend something to someone?

Such as products to buy based on their previous purchases, or articles to read based on their reading history?

Defining your own business problem in Machine Learning terms

Now you know these things, your next step is to define your personal business problem in Machine Learning terms.

For example

Let’s dive deeper into the car insurance example from before.

You work for AA Insurance and receive thousands of claims per day, which your staff read to decide whether or not the person sending in the claim is at fault.

But now the number of claims is starting to come in faster than your staff can handle them.


However, you’ve got thousands of examples of past claims which are labeled at fault or not at fault.

Knowing what you know now about ML and pattern recognition, you might be thinking “Hmm… Can Machine Learning help?”

Probably, but let’s double-check to see if this problem fits into any of the three major ML problems - classification, regression, or recommendation.

One of the easiest ways to think of this is if you can rephrase the problem, using one of these terms.

For example

“We’re a car insurance company that wants to classify incoming car insurance claims into at fault or not at fault”.

See the keyword here? Classify.

It turns out that this could potentially be a Machine Learning classification problem. (I say potentially because there’s a chance it might not work, but we can test it out).

tl;dr: When it comes to defining your business problem as a Machine Learning problem, start simple, and see if you can fit your problem into one of those 3 major ML problems.

Step 2: Data — If Machine Learning is getting insights out of data, what data do you have?

OK, so we’ve assumed that the problem can be solved with Machine Learning, and now we need data to work with.

If you already have data, it’s likely it will be in one of two forms:

  • Structured, or
  • Unstructured

Also, within each of these, you’ll probably have either static or streaming data.

So what does this all mean? Well, let’s break it down.

Structured data

This refers to any data that you can organize. Think of a table of rows and columns, such as an Excel spreadsheet of customer transactions, or a database of patient records.

Columns can be numerical, such as average heart rate, categorical, such as sex, or ordinal, such as chest pain intensity.

The key thing is that it can be organized into a structure.

Unstructured data 

Unstructured data refers to anything not immediately able to be put into row and column format, such as images, audio files, or natural language text.

Static data

Static data refers to existing historical data which is unlikely to change.

This could be something like your company's past customer purchase history. You have the data and it’s not going to change.

Streaming data

Streaming data refers to data that is constantly being updated: older records may be changed, and new records are continually added.

This could be your current customer purchase information. It's always being updated as you make new sales.

What data should you collect and use?

Now, obviously, you will probably get overlaps in these data types.

Your static structured table of information may have columns that contain natural language text, or photos that can be updated constantly.

For example

If we go back to the insurance company, each claim could have multiple types of data.

  • One column may be the text comments that a customer has sent in for the claim (NLP)
  • Another might be an accompanying image they’ve sent in along with the text (Unstructured)
  • While the final column could state the outcome of the claim (Structured)
  • Also, this table gets updated with new claims or altered results of old claims daily (Streaming)

So, which data do you have, or need to collect?

Well, it all depends on your problem, and what data you have access to. Also, remember that the main goal is that you want to use the data you have to gain insights or predict something.

Let’s look at this for each ML problem type.

Data for supervised learning

For supervised learning, this could involve using the feature variables to predict the target variables.

If we look at the heart disease example from before, then a feature variable for predicting heart disease could be the person's sex, with the target variable being whether or not the patient has heart disease.

[Image: heart disease data table with ID, feature, and target columns]

Here you can see that the table is broken into:

  • The ID column (yellow, not used for building the Machine Learning model)
  • Feature variables (orange), and
  • Target variables (green)

Our goal here would be to use the Machine Learning model to find the patterns in the feature variables and predict the target variables, i.e. are they at risk of heart disease?
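
In code, separating the feature and target variables might look something like this (a pandas sketch with a hypothetical file and column names):

```python
# Split a structured table into feature variables (X) and the
# target variable (y). File and column names are hypothetical.
import pandas as pd

records = pd.read_csv("heart_disease.csv")

# The ID column identifies rows but carries no predictive pattern,
# so it's excluded along with the target itself.
X = records.drop(columns=["id", "target"])  # feature variables (orange)
y = records["target"]                       # target variable (green)
```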

Data for unsupervised learning

Whereas, for unsupervised learning, you won’t have labels, but you’ll still want to find patterns. This means that you’ll need to group together similar samples and find samples that are outliers.


Data for transfer learning

For transfer learning, your problem stays a supervised learning problem, except you’re leveraging the patterns Machine Learning algorithms have learned from other data sources separate from your own.

What data is available already that you can apply to your current problem?

Step 3: Evaluation — What defines success?

So now that you’ve defined your business problem in Machine Learning terms and have your data, the next step is to define what success looks like. Let’s break this down.

There are different evaluation metrics for ‘success’ in classification, regression, and recommendation problems. Which one you choose will depend on your goal.

For example

If we look at the insurance company example from earlier, it could be that for their assessment project to be successful, the model needs to

“Be over 95% accurate at classifying whether someone is at fault or not at fault”.


Again though, the success metric varies on the problem and end goal.

A 95% accurate model is probably good enough for predicting who’s at fault in an insurance claim, but for predicting heart disease, you’ll likely want considerably better performance.

There are also a few other things you should take into consideration for classification problems…

False negatives

The model predicts negative, but is actually positive.

In some cases, like email spam prediction, false negatives aren’t too much to worry about. But if a self-driving car's computer vision system predicts no pedestrians when there is one, this is not good.

False positives 

Model predicts positive but is actually negative.

Predicting someone has heart disease when they don’t, might seem okay at first. But not if that original assessment negatively affects the person’s lifestyle or sets them on a treatment plan they don’t need.

True negatives 

Model predicts negative, actually negative. This is good.

True positives 

Model predicts positive, actually positive. This is also good!

Precision 

What proportion of positive predictions were actually correct? A model that produces no false positives has a precision of 1.0.

Recall 

What proportion of actual positives were predicted correctly? A model that produces no false negatives has a recall of 1.0.

F1 score 

A combination of precision and recall. The closer to 1.0, the better.

Receiver operating characteristic (ROC) curve & Area under the curve (AUC) 

The ROC curve is a plot comparing true positive and false positive rates, while the AUC metric is the area under the ROC curve.

A model whose predictions are 100% wrong has an AUC of 0.0, and one whose predictions are 100% right has an AUC of 1.0.
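
If you're curious what computing these metrics looks like in practice, here's a small scikit-learn sketch using hand-made label arrays (the numbers are purely illustrative):

```python
# Classification metrics on tiny, made-up arrays of labels.
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # predicted labels
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # predicted probabilities

print("Precision:", precision_score(y_true, y_pred))  # no false positives -> 1.0
print("Recall:   ", recall_score(y_true, y_pred))     # no false negatives -> 1.0
print("F1 score: ", f1_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_prob))    # uses probabilities
```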

For regression problems (where you want to predict a number), you’ll want to minimize the difference between what your model predicts and what the actual value is.

For example

If you’re trying to predict the price a house will sell for, you’ll want your model to get as close as possible to the actual price.

To do this, use MAE or RMSE.

  • Mean absolute error (MAE) — The average difference between your model's predictions and the actual numbers
  • Root mean square error (RMSE) — The square root of the average of squared differences between your model's predictions and the actual numbers

Use RMSE if you want large errors to be more significant: predicting a house to sell for $200,000 when it actually sold for $300,000 (off by $100,000) counts as more than twice as bad as being off by $50,000. Use MAE if being off by $100,000 should be exactly twice as bad as being off by $50,000.
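
Here's a tiny sketch of both metrics on made-up house prices, using NumPy and scikit-learn:

```python
# MAE and RMSE on made-up house prices.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

actual    = np.array([300_000, 450_000, 250_000])
predicted = np.array([280_000, 500_000, 255_000])

mae  = mean_absolute_error(actual, predicted)
rmse = np.sqrt(mean_squared_error(actual, predicted))  # square root of MSE

print(f"MAE:  ${mae:,.0f}")   # all errors weighted equally
print(f"RMSE: ${rmse:,.0f}")  # larger errors punished more heavily
```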

Recommendation problems are harder to test in experimentation

One way to do so is to take a portion of your data and hide it away. Then, when your model is built, use it to predict recommendations for the hidden data and see how it lines up.

For example

Let’s say you’re trying to recommend products to customers on your online store, and you have historical purchase data from 2010–2023. You could build a model on the 2010–2022 data, use it to predict 2023 purchases, and then compare those predictions against what customers actually bought in 2023.

Then it becomes a classification problem because you’re trying to classify whether or not someone is likely to buy an item. However, traditional classification metrics aren’t the best for recommendation problems, as precision and recall have no concept of ordering.

If your Machine Learning model returned a list of 10 recommendations to be displayed to a customer on your website, you’d want the best ones displayed first, right?

  • Precision @ k (precision up to k) — The same as regular precision, except you choose the cutoff, k. For example, precision at 5 means we only care about the top 5 recommendations. You may have 10,000 products, but you can’t recommend them all to your customers.
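
Since precision @ k is simple to compute by hand, here's a minimal sketch written directly from that definition (the recommendations and purchases are made up):

```python
# Precision @ k: of the top-k recommendations, what fraction were relevant?
def precision_at_k(recommended, relevant, k=5):
    top_k = recommended[:k]                        # best-first ordering matters
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

recommended = ["mouse", "keyboard", "monitor", "webcam", "desk", "chair"]
relevant = {"keyboard", "webcam", "chair"}         # what the customer bought

print(precision_at_k(recommended, relevant, k=5))  # 2 of top 5 relevant -> 0.4
```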

tl;dr: To begin with, you may not have an exact figure for each of these. However, knowing what metrics you should be paying attention to gives you an idea of how to evaluate your Machine Learning project.

Step 4: Features — What features does your data have and which can you use to build your model?

Not all data is the same.

In fact, when you hear someone referring to ‘features’ in regard to data in Machine Learning, they’re referring to the different kinds of information within your data.


The three main types of features are categorical, continuous (otherwise known as numerical), and derived.

Categorical features 

These take one value from a set of distinct options.

For example

In an online store, whether or not someone has made a purchase would be categorical. They either have or they haven’t. Simple!

Continuous (or numerical) features 

This could be a numerical value such as the number of times logged in.

Derived features 

These are features you create from the data. Creating them is often referred to as ‘feature engineering’, and it’s how a subject matter expert takes their knowledge and encodes it into the data.

For example

You might combine the number of times logged in with timestamps to make a feature called time since last login, or turn dates into an “is a weekday” flag (yes/no).
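
Here's a small pandas sketch of both of those derived features; the file and column names are hypothetical:

```python
# Feature engineering: derive new features from raw login timestamps.
import pandas as pd

logins = pd.read_csv("logins.csv", parse_dates=["last_login"])

# Derived feature 1: time since last login, in days
logins["days_since_login"] = (pd.Timestamp.now() - logins["last_login"]).dt.days

# Derived feature 2: turn a date into a yes/no weekday flag
logins["is_weekday"] = logins["last_login"].dt.dayofweek < 5
```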

This would then help you further classify that data. You can also use features to create a simple baseline metric.

For example

A subject matter expert on customer churn may know someone is 80% likely to cancel their membership after 3 weeks of not logging in, or a real estate agent who knows the sale prices of houses might know that houses with over 5 bedrooms and 4 bathrooms sell for over $500,000.


These are simplified and don’t have to be exact, but it’s what you’re going to use to see whether Machine Learning can improve upon this assumption or not.

Some final notes on features

Text, images, and almost anything you can imagine can also be a feature. Regardless, they all get turned into numbers before a Machine Learning algorithm can model them.

Some important things to remember when it comes to features.

  • Keep them the same during experimentation (training) and production (testing) — A Machine Learning model should be trained on features that represent as close as possible to what it will be used for in a real system
  • Work with subject matter experts — What do you already know about the problem, and how can that influence what features you use? Let your Machine Learning engineers and data scientists know this
  • Are they worth it? — If only 10% of your samples have a feature, is it worth incorporating in a model? Prefer features with the most coverage, i.e. the ones most of your samples have data for
  • Perfect equals broken — If your model is achieving perfect performance, you’ve likely got feature leakage somewhere, which means the data your model was trained on is also being used to test it. No model is perfect.

Step 5: Modeling — Which model should you choose? How can you improve it? How do you compare it with other models?

So by now, you’ve:

  • Defined your problem
  • Prepared your data, and
  • Applied some evaluation criteria and features

Well, now it’s time to model!


Choosing a model

When choosing a model, you’ll want to take into consideration interpretability, ease of debugging, amount of data required, training, and prediction limitations.

  • Interpretability and ease of debugging — Why did the model make the decision it made? How can its errors be fixed?
  • Amount of data — How much data do you have? Will this change?
  • Training and prediction limitations — This ties in with the above, how much time and resources do you have for training and prediction?

To address these, I recommend that you always start simple.

A state-of-the-art model can be tempting to reach for, but if it requires 10x the compute resources to train, and prediction times are 5x longer for a 2% boost in your evaluation metric, it might not be the best choice to use.

[Image: accuracy vs. time to run trade-off]

Not great, right? So let’s look at a few options you can use.

Linear models

Linear models such as logistic regression are usually easier to interpret, are very fast for training, and predict faster than deeper models such as neural networks.

However, it’s likely your data is from the real world, and one thing for certain is that data from the real world isn’t always linear.

What then?

Decision tree models

Ensembles of decision trees and gradient-boosted algorithms (fancy words, whose definitions are not important for now) usually work best on structured data, like Excel tables and data frames.

To work with these you can look into XGBoost and CatBoost.
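
As a rough illustration, training a gradient-boosted model on a structured table could look something like this with XGBoost. The claims data and columns are hypothetical, and assume the feature columns are already numeric:

```python
# A gradient-boosted tree ensemble on structured (tabular) data.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

table = pd.read_csv("claims.csv")        # hypothetical structured data
X = table.drop(columns=["at_fault"])     # feature columns (numeric)
y = table["at_fault"]                    # labels (0 or 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = XGBClassifier(n_estimators=100, max_depth=4)  # an ensemble of boosted trees
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```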


Deep models

Deep models such as neural networks generally work best on unstructured data like images, audio files, and natural language text.

However, the trade-off is that they usually take longer to train, are harder to debug, and take longer to make predictions. That doesn’t mean you shouldn’t use them, though.

Transfer learning models

As we mentioned earlier, transfer learning is an approach that combines the strengths of deep models and linear models.

It involves taking a pre-trained deep model and using the patterns it has learned as the inputs to your linear model. This saves dramatically on training time and allows you to experiment faster.

Pre-trained models are available on PyTorch Hub and TensorFlow Hub, in various model zoos, and within the fast.ai framework.


Sidenote: These sites are also a good place to look first for building any kind of proof of concept.

What about building some other kind of model?

For building a proof of concept, it’s unlikely you’ll ever have to build your own Machine Learning model from scratch, simply because a lot of people have already written and shared these models.

With that in mind, don’t focus on trying to create a brand-new type of model. Instead, focus on preparing your inputs and outputs in a way they can be used with an existing model to make your life far easier.

This does mean however that you need to have your data and labels strictly defined and understand what problem you’re trying to solve.


Again though, don’t worry about it being absolutely perfect. Machine Learning is a constant cycle of testing and improvement.

Speaking of which…

Tuning and improving your model

Remember that a model's first results aren’t its last. Like tuning a car, Machine Learning models can be tuned to improve performance.

Tuning a model involves changing hyperparameters such as learning rate or optimizer, or model-specific architecture factors such as the number of trees for random forests and the number of and type of layers for neural networks.

Don’t freak out! It seems complex, but it’s getting easier to do. Hyperparameters used to be something a practitioner would tune by hand, but the process is increasingly becoming automated.
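
For instance, scikit-learn's GridSearchCV automates trying hyperparameter combinations for you. Here's a minimal sketch on a toy dataset, with purely illustrative values:

```python
# Automated hyperparameter tuning with grid search on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X_train, y_train = make_classification(n_samples=200, random_state=42)

param_grid = {
    "n_estimators": [100, 300],   # number of trees in the forest
    "max_depth": [5, 10, None],   # depth of each tree
}

# Tries every combination with 5-fold cross-validation
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
```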


Your focus when tuning a model

The priority for tuning and improving models should be reproducibility and efficiency. Someone should be able to reproduce the steps you’ve taken to improve performance.

However, because your main bottleneck will usually be model training time rather than a shortage of new ideas, your efforts should be dedicated to improving efficiency.

The argument for pre-trained models

Using a pre-trained model through transfer learning often has the added benefit of all of these steps being done for you already.


Step 6: Experimentation — What else could we try? How do the other steps change based on what we’ve found? Does our deployed model do as we expected?

Because Machine Learning is a highly iterative process of testing, improvements, and new data collection, let’s look at some final important points.

When it comes to testing via deployment, you’ll want to:

  • Make sure your experiments are always actionable, and
  • Minimize the time between offline experiments and online experiments. (Offline experiments are steps you take when your project isn’t customer-facing yet, whereas online experiments happen when your Machine Learning model is in production).

The less downtime, the faster you’ll improve.

Data to use

Also, all experiments should be conducted on different portions of your data.

  • Training data set — Use this set for model training, 70–80% of your data is the standard
  • Validation/development data set — Use this set for model tuning, 10–15% of your data is the standard
  • Test data set — Use this set for model testing and comparison, 10–15% of your data is the standard

These amounts can fluctuate slightly, depending on your problem and the data you have.
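
One common way to get these three sets is to call scikit-learn's train_test_split twice; here's a minimal sketch on a toy dataset:

```python
# A 70/15/15 train/validation/test split on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)

# First carve off the 70% training set...
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3)
# ...then split the remaining 30% evenly into validation and test sets
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)

print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```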

Finally, there are also some common issues that you might encounter.

Common problems

  • Poor performance on training data means the model hasn’t learned properly. Try a different model, improve the existing one, or collect more (and better) data
  • Poor performance on test data means your model doesn’t generalize well. Your model may be overfitting the training data. Use a simpler model or collect more data
  • Poor performance once deployed (in the real world) means there’s a difference between what you trained and tested your model on and what is actually happening. Revisit steps 1 & 2. Ensure your data matches up with the problem you’re trying to solve

Final thoughts: Putting it all together into a proof of concept

One of the best ways to figure out whether Machine Learning can work for your business is to build a proof of concept and test it on your own problems.

However, a proof of concept should not be seen as something that will fundamentally change how your business operates, but as an exploration into whether Machine Learning can bring your business value.

After all, you’re not after fancy solutions to keep up with the hype - You’re after solutions that add value!

Remember though that due to the nature of proof of concepts, it may turn out that Machine Learning isn’t something your business can take advantage of.


But all is not lost. The value in something not working is now you know what doesn’t work and can direct your efforts elsewhere.

If a Machine Learning proof of concept turns out well, take another step; if not, step back. Learning by doing is a faster process than just thinking about something.

Finally, remember that it’s always about the data. Without good data to begin with, no Machine Learning model will help you. If you want to use Machine Learning in your business, it starts with good data collection.

Also, deployment changes everything. A good model offline doesn’t always mean a good model online. Once you deploy a model, there’s infrastructure management, data verification, model retraining, analysis, and more. Any cloud provider has services for these but putting them together is still a bit of a dark art.

Can you apply Machine Learning to your own business problems?

Hopefully, this guide has helped you better understand how to build Machine Learning proofs of concept, apply them to your own business problems, and get them working for you.

We’ve skimmed over a lot of information here, and each of these steps could deserve an article on its own, but I wanted to give you enough information to understand, without giving you so much you couldn’t move forward.

Again, like I said up top. If you want to deep dive into this and learn Machine Learning from scratch, then check out my complete Machine Learning and Data Science course, or watch the first few videos for free.

It’s one of the most popular, highly rated Machine Learning and Data Science bootcamps online, as well as the most modern and up-to-date. Guaranteed.

You'll go from a complete beginner with no prior experience to getting hired as a Machine Learning Engineer, so it’s helpful for ML Engineers of all experience levels. Even total beginners.

And as an added bonus? You’ll be able to ask questions and get direct access to me, other students, and other Machine Learning Engineers via our private Discord channel!
