
[March 2026] AI & Machine Learning Monthly Newsletter 🤖

Daniel Bourke

In This Month's Update:


Want to become an AI/ML Engineer?

Our AI/ML Career path takes you from complete beginner (at any age!) to getting hired as a Machine Learning and/or AI Engineer 👇

Get The Full Career Path

75th issue! If you missed them, you can read the previous issues of my monthly A.I. & Machine Learning newsletter here.

Hey everyone!

Daniel here. I’m a machine learning engineer who teaches several beginner-friendly machine learning courses.

I also write regularly about machine learning on my own blog as well as make videos on the topic on YouTube.

Since there's a lot going on, I've done my best to keep things to the point.

Here's what you might have missed in March 2026 as an A.I. & Machine Learning Engineer... let's get you caught up!

My Work

  • Small Language Model fine-tuning with Hugging Face, videos coming soon to ZTM - I’ve finished the code, materials and videos for fully fine-tuning a small language model with Hugging Face Transformers: specifically, fine-tuning the Gemma 3 270M model on a custom dataset called FoodExtract-1k for structured extraction of food and drink items from text. The tutorial covers downloading the base model and dataset, training with Hugging Face TRL, and building an interactive Gradio demo for public deployment on Hugging Face Spaces. Stay tuned for the videos on ZTM soon!
  • My brother and I placed 2nd in Google’s MedGemma Impact Challenge Kaggle Competition!!! - Our entry, Sunny, is an iOS application that uses a fine-tuned version of MedGemma to help privately track skin health over time. All code and models are open-source.
  • The Power of Small Language Models talk YouTube video - I did a talk at the Queensland AI Meetup about the power of Small Language Models and how you can customize them for your own use case. The talk largely covers my experience building Sunny as well as the upcoming ZTM Fine-tuning LLMs with Hugging Face Project (see below).
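
As a taste of what the FoodExtract-1k tutorial targets, here's a toy sketch of the structured-extraction output side: validating a model's JSON completion against an item/category schema. The schema here is an assumption for illustration; the real dataset's fields may differ.

```python
import json

def validate_extraction(model_output: str) -> list[dict]:
    """Parse and validate a model's JSON output for the extraction task.

    Expects a JSON list of {"item": str, "category": "food" | "drink"}
    objects (an assumed schema, for illustration only).
    """
    items = json.loads(model_output)
    assert isinstance(items, list), "output must be a JSON list"
    for entry in items:
        assert set(entry) == {"item", "category"}, f"unexpected keys: {entry}"
        assert entry["category"] in {"food", "drink"}, f"bad category: {entry}"
    return items

# A well-formed completion for: "I had a flat white and a croissant."
output = '[{"item": "flat white", "category": "drink"}, {"item": "croissant", "category": "food"}]'
print(validate_extraction(output))
```

Checking structure like this is also a natural starting point for evals on a structured-output fine-tune: malformed JSON or out-of-schema fields are cheap, unambiguous failures to count.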

From The Internet

  • Unsloth Studio for trying and fine-tuning models in a browser/local app. Unsloth Studio is an open-source, no-code web UI for training, running, and exporting open-source LLMs locally on your machine. You can auto-create datasets from PDF, CSV, and JSON docs, then start training with real-time observability. Unsloth’s custom kernels support optimized training for LoRA, FP8, FFT, PT, and 500+ models including text, vision, audio, and embeddings. Everything runs 100% offline and locally.


  • Developer’s Guide to AI Agent Protocols. Google walks through the growing list of AI agent protocol acronyms (MCP, A2A, UCP, and more) and explains what each one does. MCP connects agents to tools and data (agent-to-tool), while A2A enables agents to discover and collaborate with each other (agent-to-agent). The guide demonstrates these protocols working together through a practical example of building a multi-step supply chain agent for a restaurant.
  • Search quality assurance with LLM as a judge by Zalando. Zalando built a search quality assurance framework using LLM-as-a-judge to evaluate search quality at scale with multi-language support. The evaluation process involves two steps: generate test queries with NER clustering and LLM translation, then evaluate the search results with an LLM judge that uses both product data and product images for its evaluation context.
  • State of Open Source AI in 2026. Hugging Face examines how the open-source AI landscape has shifted over the past year. The platform nearly doubled in users and artifacts: 11 million users, more than 2 million public models, and over 500k datasets. China now surpasses the United States in monthly and total downloads (around 41%), and robotics has emerged as the fastest-growing community with datasets growing from 1,145 in 2024 to 26,991 in 2025.
  • Be the Idiot. A post arguing that being the person who asks clarifying questions is a superpower. Teams shipping quality software are ones where asking “stupid” questions is not just tolerated but expected. Most disasters start as unasked questions. I love asking the “silly” questions.
  • Embedding Atlas by Daniel van Strien. A UV script that generates and deploys interactive embedding visualizations to Hugging Face Spaces with a single command. The resulting visualization runs entirely in the browser using WebGPU acceleration and is built on Embedding Atlas by Apple.
  • Two sensational articles from Distil Labs on fine-tuning small language models:

When does SFT + RLVR help with smaller models?. Distil Labs ran experiments on Qwen3-1.7B across 12 datasets spanning classification, function calling, and open-ended generation. RLVR reliably improves generative tasks (+2.0pp average, 6 wins and 1 tie out of 7) but provides no reliable benefit on structured tasks (-0.7pp average, 2 wins but also 2 regressions out of 5).

“The practical implication: your training recipe should match your task type. SFT alone is the right call for structured outputs; add RLVR when your task has a large output space and room to explore.”

What small language model is best for fine-tuning?. In another test, Distil benchmarked 12 models across 8 diverse tasks (classification, information extraction, open-book QA, and closed-book QA). Qwen3-4B-Instruct takes first place for best fine-tuned performance, matching a 120B+ teacher while being deployable on a single consumer GPU. For tunability (the improvement from base to fine-tuned), Llama-3.2-1B is the winner, showing the largest gains from fine-tuning.

  • Andrej Karpathy releases autoresearch to automate LLM training experiments. A 630-line Python script that lets an AI agent experiment autonomously overnight on a real LLM training setup. The loop modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards the change, and repeats. You can expect approximately 12 experiments per hour, or around 100 experiments while you sleep.


autoresearch by Andrej Karpathy automatically tries to find advancements in a small LLM training run while you sleep. The whole program lives in a 1-page markdown document you pass to your coding agent of choice.
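
The keep-or-discard loop autoresearch runs is essentially hill climbing. Here's a minimal sketch with a stubbed-out objective standing in for the 5-minute training run; the function names and toy objective are invented for illustration:

```python
import random

def run_experiment(lr: float) -> float:
    """Stand-in for a 5-minute training run: returns a validation score.
    (Toy objective with its peak at lr = 3e-4; the real loop trains an LLM.)"""
    return -(lr - 3e-4) ** 2

def autoresearch_loop(steps: int = 12, seed: int = 0) -> tuple[float, float]:
    """Propose a change, evaluate it, keep it only if the score improves."""
    rng = random.Random(seed)
    best_lr = 1e-3
    best_score = run_experiment(best_lr)
    for _ in range(steps):  # ~12 experiments per hour in the real setup
        candidate = best_lr * rng.uniform(0.5, 2.0)  # mutate the config
        score = run_experiment(candidate)
        if score > best_score:  # keep or discard
            best_lr, best_score = candidate, score
    return best_lr, best_score

lr, score = autoresearch_loop()
print(f"best lr after 12 experiments: {lr:.2e}")
```

Because a change is only kept when the score improves, the loop can never regress, which is what makes it safe to leave running unattended overnight.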

  • Mixedbread release Wholemeal Embedding Model v3. A unified omnimodal multilingual late-interaction retrieval model setting a new state of the art for search across languages, modalities, and real-world retrieval tasks. It brings best-in-class search to text, audio, images, PDFs, and videos across 100+ languages. Benchmarks show it performing better than Gemini Embedding v2 (also multimodal) across almost all tasks. One result that stuck out to me was how well BM25 did on the LIMIT benchmark, which tests the limits of semantic search; a lot of real-world large-scale data resembles it.
  • Niels Rogge writes about how Codex helped him contribute several AI models to Hugging Face. He used Codex 5.3 to port VidEoMT, a ViT-based video segmentation model, to Hugging Face Transformers, along with several other models. A process that used to take him weeks was completed in an hour or two of back-and-forth with Codex. Sure, Codex did much of the heavy lifting, but as I commented on Niels’ LinkedIn post, it’s likely that all of his experience making manual ports is what let him direct the model to write the conversion code (not to mention how much of Niels’ actual code was probably in the training data).
  • OpenAI’s guide to using skills to help with open source maintenance. OpenAI details how they use Codex to maintain the OpenAI Agents SDK repos, turning recurring engineering work such as verification, release preparation, integration testing, and PR review into repeatable workflows using repo-local skills, an AGENTS.md (a markdown file telling coding agents how to interact with your repository), and GitHub Actions. Between December 2025 and February 2026, the two repos merged 457 PRs, up from 316 in the previous three months (though a higher number of PRs doesn’t necessarily mean higher quality).

One of my favourite quotes from the piece:

“What matters more than the exact list is the pattern. Each skill has a narrow contract, a clear trigger, and a concrete output.”

  • Teaching LLMs to speak Spotify with specialized token prediction models. Spotify adapted a 1B-parameter LLM using Semantic IDs (custom sequences of tokens to represent a particular piece of content), compact discrete tokens that encode semantic similarity among catalog items. Through this process, the model learns to associate natural phrases such as melancholic piano, narrative journalism, or comedy show with specific grounded entities in Spotify’s catalog, effectively teaching the LLM to “speak” both human language and Spotify’s catalog language.


Spotify’s workflow for using semantic IDs to recommend pieces of content to people with an LLM. The model is able to generate explanations in text as well as recommend specific items by generating the semantic IDs.
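
The grounding step in Spotify's setup, mapping generated semantic-ID tokens back to real catalog items, can be sketched as a lookup over token sequences. Token values and titles below are invented; only the idea that similar items share token prefixes matches the post:

```python
# Toy catalog: each item gets a short sequence of discrete "semantic" tokens.
# Similar items share prefixes (illustrative values, not Spotify's real IDs).
CATALOG = {
    ("<sid_12>", "<sid_7>", "<sid_3>"): "Melancholic Piano Vol. 1",
    ("<sid_12>", "<sid_7>", "<sid_9>"): "Rainy Day Keys",  # shared prefix: similar item
    ("<sid_44>", "<sid_2>", "<sid_1>"): "The Narrative Journalism Hour",
}

def decode_recommendation(generated_tokens: list[str]) -> str:
    """Map a generated semantic-ID sequence back to a grounded catalog item."""
    return CATALOG[tuple(generated_tokens)]

# The LLM emits semantic-ID tokens inline with its text; we ground them:
print(decode_recommendation(["<sid_12>", "<sid_7>", "<sid_3>"]))
```

Because the IDs encode similarity, a model that gets the first tokens right but the last one wrong still lands near the intended item, which is the property that makes these tokens learnable for an LLM.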

  • Antidote by Vicki Boykis. Vicki Boykis discusses how the introduction of AI into everything can feel like the world is telling you that your thinking process is extraneous, unnecessary, and must be commoditized and compressed. The recommendation: give yourself the gift of friction. If you’ve consumed enough tokens, perhaps it’s time to create some. Creation is the cure.
  • Hamel Husain releases three excellent resources on LLM evaluations (evals) and the revenge of the data scientist:
    • The Revenge of the Data Scientist. Notes from a recent talk Hamel gave on evaluating LLMs being similar to the work a data scientist would do prior to LLMs (look at the data, plan for distribution shifts, create metrics tailored to the specific problem).
    • LLM Evals, everything you need to know. Hamel Husain and Shreya Shankar curate the most common questions from teaching 700+ engineers and PMs about AI evals. Key recommendations include: start with error analysis not infrastructure, spend 30 minutes manually reviewing 20-50 LLM outputs whenever you make significant changes, and be wary of optimizing for high eval pass rates.
    • Evals Skills. evals-skills, a set of skills for AI agent evals that guards against common mistakes when building AI systems. If you are new to evals or inheriting an existing eval pipeline, start with eval-audit, which inspects your current setup, runs diagnostic checks across six areas, and produces a prioritized list of problems with next steps.
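
The "start with error analysis" advice boils down to hand-labelling a small sample of outputs by failure mode and counting, rather than tracking a single pass rate. A minimal sketch (the sample and failure labels are invented):

```python
from collections import Counter

# Hand-labelled error analysis on a small sample of LLM outputs, per the
# "manually review 20-50 outputs" advice. Labels here are invented examples.
reviewed = [
    {"id": 1, "error": None},
    {"id": 2, "error": "hallucinated_field"},
    {"id": 3, "error": None},
    {"id": 4, "error": "wrong_format"},
    {"id": 5, "error": "hallucinated_field"},
]

def error_breakdown(samples):
    """Per-category failure counts: more actionable than a single pass rate."""
    counts = Counter(s["error"] for s in samples if s["error"])
    pass_rate = sum(s["error"] is None for s in samples) / len(samples)
    return pass_rate, counts

rate, counts = error_breakdown(reviewed)
print(f"pass rate: {rate:.0%}, top error: {counts.most_common(1)[0]}")
```

The breakdown tells you what to fix next (here, hallucinated fields), which a single aggregate pass rate hides, and it's exactly the kind of tailored metric the "data scientist" framing argues for.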

I laughed when I saw this image from Hamel’s talk (The Revenge of the Data Scientist):


  • FlexAttention + FlashAttention-4 in PyTorch. On Hopper and Blackwell GPUs, FlexAttention now has a FlashAttention-4 backend. PyTorch added support to automatically generate CuTeDSL score/mask modification functions, leading to performance gains of 1.2x to 3.2x over the existing Triton implementation on compute-bound workloads. FlexAttention lets you implement custom attention variants in a few lines of Python, no CUDA required.
  • Training an image generation model in 24 hours. Photoroom ran 24-hour experiments reaching 1-megapixel output at approximately $1,500 in compute cost, demonstrating how high-resolution diffusion experiments can be structured to lower infrastructure barriers for research teams. The code and configs, as well as the full experimental framework used throughout the PRX series, are available in the PRX repository.
  • Hugging Face releases Storage Buckets on the Hugging Face Hub. Storage Buckets are mutable, S3-like object storage you can browse on the Hub, script from Python, or manage with the hf CLI. They are built on Xet, Hugging Face’s chunk-based storage backend that deduplicates content across chunks. At the 500 TB+ tier, $8/TB/month for public storage undercuts AWS S3 Standard ($23/TB/month) by roughly 3x.
  • Reka release Reka Edge. A 7B parameter vision language model specifically engineered for physical AI applications on the edge. The model achieves 98ms time to first token (faster than human visual reaction time), uses 3x fewer input tokens, and achieves 65% faster throughput compared to leading 8B models. Notably, the model uses a 657M ConvNeXt V2 vision encoder for efficient processing of visual inputs, one of the first I’ve seen with ConvNeXt V2 as the vision encoder (the most common one I see is SigLIP2).
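
The FlexAttention item above hinges on expressing attention variants as a score-modification function. Here's a pure-Python sketch of that idea (not the PyTorch API), with causal masking written as the score_mod:

```python
import math

def attention(q, k, v, score_mod):
    """Tiny single-head attention where score_mod edits each raw score,
    mirroring FlexAttention's "custom variants in a few lines" idea
    (pure-Python sketch for illustration, not the PyTorch interface)."""
    n = len(q)
    out = []
    for i in range(n):
        # Raw dot-product scores, each passed through the user's score_mod.
        scores = [score_mod(sum(a * b for a, b in zip(q[i], k[j])), i, j)
                  for j in range(n)]
        # Numerically stable softmax over the (possibly masked) scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of values.
        out.append([sum(w * v[j][d] for j, w in enumerate(weights))
                    for d in range(len(v[0]))])
    return out

# Causal masking expressed purely as a score modification:
causal = lambda score, i, j: score if j <= i else float("-inf")
q = k = v = [[1.0, 0.0], [0.0, 1.0]]
out = attention(q, k, v, causal)
print(out[0])  # position 0 can only attend to itself
```

Swapping in a different score_mod (ALiBi-style distance penalties, sliding windows, and so on) changes the attention variant without touching the kernel, which is what lets FlexAttention fuse these variants into fast backends like FlashAttention-4.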

Open Source

  • Zed Industries release Zeta 2. Zeta 2 is a code edit prediction (next-edit suggestion) model finetuned from ByteDance-Seed/Seed-Coder-8B-Base. Given code context, edits history, and an editable region around the cursor, it predicts the rewritten content for that region.
  • Cohere releases Cohere-Transcribe under Apache 2.0. Cohere’s first audio model: a 2B encoder-decoder X-attention transformer with a Fast-Conformer encoder trained with cross entropy on 0.5M hours of curated audio transcript pairs across 14 languages. Achieves #1 spot on the open ASR leaderboard against both proprietary and open-source entrants with an average word error rate (WER) of 5.42. Also achieves majority preference across real-world annotations.

Some of my notes from the blog post (mostly on the fact they focused on the data loop with a proven architecture):

“Following Distil-Whisper, 90% of total parameters dedicated to the encoder and maintain a lightweight decoder. This asymmetry keeps the amount of autoregressive inference compute to a minimum while maintaining performance.”

“Previously models use a full LLM backbone and add audio understanding, however, this comes at an expense of inference speed and serving cost.”

💡“We chose a conventional well-tested architecture and dedicated the bulk of our model development cycles to data work.”

“Following rounds of error analysis, we augment real data with synthetic data.”


Cohere’s new transcribe model works directly with vLLM (an inference engine) for fast text generation from audio files. Source: Cohere blog.
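
Word error rate, the metric behind the leaderboard result above, is word-level edit distance divided by the reference length. A minimal implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits (sub/ins/del) to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 substitution / 6 words
```

A WER of 5.42 means roughly 5 to 6 word-level errors per 100 reference words, averaged across the leaderboard's test sets.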

  • Microsoft release Harrier V1 open-source models. A family of multilingual text embedding models achieving state-of-the-art results on the Multilingual MTEB v2 benchmark. Three sizes: 270M (Gemma 3 270M), 600M (Qwen3-0.6B), and 27B (Gemma 3 27B) parameters, all supporting a 32k token context window.
  • IBM releases a series of enterprise focused model updates and a large scale chart dataset
    • Granite 4.0 1B Speech. A compact automatic speech recognition model supporting six languages, ranking high on the OpenASR leaderboard despite using significantly fewer parameters than competing solutions.
    • Also see Granite 4.0 Vision, for business/enterprise document styles such as Chart2CSV, Chart2Summary and Chart2Code, built on ChartNet: a million-scale multimodal dataset purpose-built for chart interpretation and reasoning, spanning 24 chart types and 6 plotting libraries with 1.7 million diverse chart samples.
  • Chroma Context 1 for search retrieval from documents within an agentic system. A 20B parameter agentic search model derived from gpt-oss-20B that achieves retrieval performance comparable to frontier-scale LLMs at a fraction of the cost and up to 10x faster inference speed. Excellent writeup on data mix, training setup and objectives for building a dedicated search model.
  • Meta release SAM 3.1. SAM 3.1 Object Multiplex introduces a shared-memory approach for joint multi-object tracking that is significantly faster without sacrificing accuracy, achieving a 7x speed up at 128 objects on a single H100 GPU. A new suite of improved model checkpoints is available on Hugging Face.
  • Allen AI release MolmoWeb as a web browsing agent. An open visual web agent built on Molmo 2 (available in 4B and 8B sizes) that operates a browser by interpreting the same visual interface humans see: given a task instruction and a live webpage, the model observes through screenshots, predicts the next step, and executes browser actions such as clicking, typing, or scrolling. MolmoWeb sets a new open-weight SOTA across four major web-agent benchmarks and even surpasses agents built on proprietary models like GPT-4o. Fully open-source under Apache 2.0 with weights, training data, and code all released.


MolmoWeb is trained to interact with web pages in a visual way (exactly how humans do) rather than looking at the website code directly.

  • Allen AI release MolmoPoint. MolmoPoint replaces text-coordinate pointing with a coarse-to-fine grounding mechanism built around three special tokens: <PATCH>, <SUBPATCH>, and <LOCATION>. Because the model no longer has to memorize a coordinate system, pointing becomes easier to learn and more robust across resolutions, taking fewer tokens to express each point (down from 8 tokens to 3). Applications include robotics, computer-use agents, visual reasoning, and any setting where the model needs to connect language to specific parts of visual input.
  • Allen AI release Olmo Hybrid. A new 7B hybrid RNN model in the Olmo family from Allen AI. Uses a 3:1 pattern of three DeltaNet sublayers followed by one multihead attention sublayer, achieving roughly 2x data efficiency over Olmo 3 on core evals. On MMLU, Olmo Hybrid reaches the same accuracy as Olmo 3 using 49% fewer tokens, and shows 75% improved inference efficiency on long-context lengths.
  • Allen AI release the full Molmo2 code base. The open code for training and using Allen AI’s Molmo2 and MolmoPoint vision-language models. Molmo 2 is a family of fully open state-of-the-art VLMs that can analyze videos and multiple images at once, with variants including 4B, 8B, and Molmo2-O-7B (built on Olmo for a fully open end-to-end model flow).
  • Rednote hilab releases dots.mocr for multilingual OCR across many document types. Achieves state-of-the-art performance in standard multilingual document parsing among models of comparable size and excels at converting structured graphics (charts, UI layouts, scientific figures) directly into SVG code. A variant optimized for image-to-SVG parsing (dots.mocr-svg) is also released.
  • Mistral release Voxtral TTS 4B as CC BY-NC 4.0. Mistral’s first text-to-speech model: lightweight at 4B parameters, delivering realistic, expressive speech with natural prosody and emotional range across 9 major languages. Supports zero-shot and few-shot voice cloning using as little as 3 seconds of reference audio, with 70ms model latency for a typical 10-second voice sample. Open weights on Hugging Face under CC BY-NC 4.0 license.
  • Detect Anything in Real Time with a SAM3 model converted into a multi-class open-vocabulary model. DART is a training-free framework that converts SAM3 into a real-time multi-class detector by exploiting a structural invariant: the visual backbone is class-agnostic, producing image features independent of the text prompt. Achieves 55.8 AP on COCO val2017 (80 classes) at 15.8 FPS on a single RTX 4080, with optimizations yielding a 5.6x cumulative speedup at 3 classes, scaling to 25x at 80 classes.
  • Chandra OCR 2. A state-of-the-art OCR model from Datalab that outputs markdown, HTML, and JSON. Scores 85.9% on the olmOCR benchmark (state of the art) and 77.8% on an internal 43-language multilingual benchmark. Smaller and more accurate than Chandra 1 (9B) across every category, supporting 90+ languages with features for handling handwriting, forms, tables, math, and complex layouts.
  • NVIDIA release Nemotron Super, a 120B parameter model with fast inference. A 120B total, 12B active-parameter model using a hybrid Mamba-Transformer MoE architecture that delivers up to 5x higher throughput for agentic AI. The backbone interleaves Mamba-2 layers (for linear-time sequence processing with a 1M-token context window), transformer layers (for advanced reasoning), and Latent MoE (a new technique activating four expert specialists for the cost of one). Pretrained on 25 trillion tokens using NVFP4, NVIDIA’s 4-bit floating-point format optimized for Blackwell.
  • Also, NVIDIA release Nemotron 3 Nano 4B. The newest and most compact member of the Nemotron 3 family, pruned and distilled from Nemotron Nano 9B v2 using the Nemotron Elastic framework. The model was pruned and optimized using the ModelOpt open-source library, and the FP8 model retains 100% of the BF16 model’s median accuracy.
  • Tada text-to-speech model. Hume AI’s first open-source TTS model, TADA (Text-Acoustic Dual Alignment), resolves the mismatch between text and speech with a novel tokenization schema that synchronizes text and speech one-to-one. Generates speech at a real-time factor of 0.09 (more than 5x faster than similar grade LLM-based TTS systems) with zero hallucinations across 1000+ test samples. Blog post.
  • Context Hub by Andrew Ng for automatically integrating context into your LLM. An open tool that gives your coding agent the up-to-date API documentation it needs. Through the chub CLI, agents can instantly fetch curated, LLM-optimized markdown documentation for specific APIs. Integrates with Claude Code’s skills system and supports agent annotations so discovered workarounds are saved across sessions. Gained over 10,000 GitHub stars in its first week.
  • Google DeepMind release Simply, a JAX-based library for automating LLM research. A minimal and scalable research codebase in JAX designed as an environment where both humans and AI agents can rapidly iterate on frontier LLM research. An AI agent (which can itself be powered by an LLM served with Simply) can read the code, propose new ideas, run experiments, and iterate autonomously or under the guidance of human researchers.
  • Hugging Face release the synthetic data creation playbook. Hugging Face ran over 90 experiments and generated 1 trillion tokens to determine what makes good synthetic data, resulting in FinePhrase: 500 billion of the finest synthetic tokens. The project includes an interactive visualization showing each run as a book whose size and color represent token information.
  • Tencent creates Penguin-VL and trains a vision encoder in a VLM with an LLM initialization (outperforms SigLIP2 on VL tasks). Tencent initializes a vision encoder directly from a pretrained text LLM (Qwen3-0.6B), modified with bidirectional attention and 2D-RoPE for spatial modeling. This avoids the objective mismatch between contrastive learning and autoregressive language modeling. Even under matched data and training settings, the Penguin-encoder remains clearly superior to SigLIP2 across various image and video benchmarks.
  • FireRed-OCR. A 2B OCR model based on Qwen3-VL that achieves 92.94% overall score on OmniDocBench v1.5, significantly outperforming DeepSeek-OCR 2, OCRVerse, and massive general VLMs. Uses Format-Constrained GRPO to enforce strict syntactic validity, eliminating common errors like unclosed tables or invalid LaTeX formulas.
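
MolmoPoint's coarse-to-fine pointing (a few bullets up) can be illustrated with a toy encoder that maps a normalized point to a patch token, a sub-patch token, and a location token. The token layout and grid size below are invented; only the 3-tokens-per-point idea matches the release:

```python
def point_to_tokens(x: float, y: float, grid: int = 16) -> list[str]:
    """Encode a normalized point (0-1) as three discrete tokens, in the spirit
    of MolmoPoint's <PATCH>/<SUBPATCH>/<LOCATION> scheme (the vocabulary and
    grid here are invented for illustration; the real model's differ)."""
    # Coarse: which of grid*grid patches contains the point?
    px, py = min(int(x * grid), grid - 1), min(int(y * grid), grid - 1)
    patch = py * grid + px
    # Fine: position within that patch, again on a grid*grid subgrid.
    fx, fy = x * grid - px, y * grid - py
    sx, sy = min(int(fx * grid), grid - 1), min(int(fy * grid), grid - 1)
    subpatch = sy * grid + sx
    # A final location token could refine further; fixed here for simplicity.
    return [f"<PATCH_{patch}>", f"<SUBPATCH_{subpatch}>", "<LOCATION>"]

print(point_to_tokens(0.5, 0.5))  # 3 tokens instead of ~8 coordinate digits
```

Because each token names a region rather than a digit of a coordinate, the scheme needs no memorized coordinate system and stays meaningful when the input resolution changes, which is the robustness the release highlights.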

Papers

  • V-JEPA 2.1: dense feature understanding in video self-supervised learning. A family of self-supervised models that learn dense, high-quality visual representations for both images and videos. Uses a dense predictive loss, deep self-supervision across multiple intermediate encoder layers, and multi-modal tokenizers enabling unified training across images and videos. Achieves state-of-the-art on several challenging benchmarks including a 20-point improvement in real-robot grasping success rate over V-JEPA-2. Code on GitHub.
  • PLUM: Adapting Language Models for Industrial Scale Generative Recommendations. A YouTube paper on creating Semantic IDs for generative recommendations at billions scale. Each item in the corpus is represented by a sequence of discrete Semantic ID tokens, and the model is directly trained to generate these IDs based on user context. Adding PLUM to YouTube’s production candidate pool drove a +4.96% lift in Panel CTR for YouTube Shorts in live A/B tests.

Releases

  • trl hits 1.0! TRL v1 marks a major milestone for the Transformer Reinforcement Learning library. It’s now easily one of the most advanced LLM and VLM post-training libraries out there. If you’d like to get hands-on with it, check out the upcoming ZTM course on LLM fine-tuning with Hugging Face.
  • PyTorch 2.11 is out. Highlights include a FlashAttention-4 backend for FlexAttention on Hopper and Blackwell GPUs, Differentiable Collectives for distributed training, and performance optimizations for Intel GPUs via XPU Graph.
  • hf-mount for mounting Hugging Face Buckets and repos as local filesystems. Exposes Hugging Face Buckets and Hub repos as a local filesystem via FUSE or NFS. Files are fetched lazily on first read, so only the bytes your code actually touches hit the network. You can attach remote storage that is 100x bigger than your local machine’s disk, with read-write for Storage Buckets and read-only for models and datasets. I’m currently creating a large bucket for Nutrify to see if I can get hands-on with it. Hugging Face have really been working hard on their storage layer lately. I’m very impressed with how efficient Xet makes things.
  • Cursor release Composer 2. Cursor’s new frontier-level coding model, built on Kimi K2.5 from Moonshot AI with additional continued pretraining and reinforcement learning. Scores 61.3 on CursorBench (up from 44.2 for Composer 1.5) and is priced at $0.50/M input and $2.50/M output tokens.
  • Google release an updated Google AI Studio for building real-world, production-ready apps with Firebase integration and authentication. Google launched a full-stack vibe coding experience in Google AI Studio, pairing a new Antigravity coding agent with native Firebase integration. The agent handles real-time multiplayer functionality, database provisioning, and third-party library installation automatically. Framework support now includes Next.js alongside existing React and Angular options.
  • Google Colab MCP server. Google released an open-source Colab MCP (Model Context Protocol) Server, opening up Google Colab to be accessed directly by any AI agent. Agents can automatically create cells, write and execute Python code, generate visualizations, and format analysis live inside your Colab notebook.
  • Google release Gemini Embedding 2, a multimodal embedding model. Google’s first natively multimodal embedding model, mapping text, images, video, audio, and PDFs into a single 3,072-dimensional vector space. On MTEB English, it scores 68.32, holding the top spot by a 5.09-point margin (for now; it looks like Mixedbread’s new Wholemeal v3 embeddings have overtaken it). Available via the Gemini API and Vertex AI.
  • Google release Gemini 3.1 Flash Lite. Google’s fastest and most cost-efficient Gemini 3 series model, built for high-volume developer workloads at scale. Outperforms 2.5 Flash with a 2.5x faster time to first answer token and a 45% increase in output speed. Priced at $0.25/1M input tokens and $1.50/1M output tokens. I’ve already tried it out on several problems and found it fast and effective.
  • GPT 5.3 Instant. OpenAI’s update delivers more accurate answers, richer web search results, and a 26.8% reduction in hallucinations. Significantly reduces unnecessary refusals while toning down overly defensive or moralizing preambles before answering questions.

Videos

See you next month!

What a massive month for the ML world in March!

As always, let me know if there's anything you think should be included in a future post.

Liked something here? Share it with someone.

In the meantime, keep learning, keep creating, keep dancing.

See you next month,

Daniel

www.mrdbourke.com | YouTube

By the way, I'm also an instructor with Zero To Mastery Academy teaching people Machine Learning & AI in the most efficient way possible. You can see a few of our courses below or check out all Zero To Mastery courses.

You might like these courses

More from Zero To Mastery

The No BS Way To Getting A Machine Learning Job
19 min read

Looking to get hired in Machine Learning? Our ML expert tells you how. If you follow his 5 steps, we guarantee you'll land a Machine Learning job. No BS.

6-Step Framework To Tackle Machine Learning Projects (Full Pipeline)
30 min read

Want to apply Machine Learning to your business problems but not sure if it will work or where to start? This 6-step guide makes it easy to get started today.

How to Convince Your Boss to Pay for Your Upskilling
10 min read

Get your company to pay for your tech upskilling. Use this training request email and strategy to make it happen.