Testing is one of the most important (but so often overlooked) development tasks in software engineering.
It helps to ensure that your software is operating as expected and lowers the chance of regressions or even bigger headaches down the road.
In this detailed tutorial, I'm going to walk you through the various testing strategies available in Rust, as well as how to get the most out of them.
So let's dive in!
Sidenote: I'm going to assume you have a basic knowledge of Rust, otherwise this post may seem a little confusing.
Don't beat yourself up about it though. If you're struggling to understand the concepts here and want to get to grips with them and become a kick-ass Rust developer, then check out my Rust Programming course, where you'll learn everything you need to know to confidently use the world's most loved programming language!
Rust provides a handy built-in testing mechanism through `cargo`. Simply invoking `cargo test` will run all the tests defined in the project. However, there is an alternative test runner called `cargo-nextest` which offers a cleaner test result interface and also runs faster.
You can install it with `cargo install cargo-nextest --locked`. Once installed, tests are run using `cargo nextest run`.
Important: `cargo-nextest` is a drop-in replacement for `cargo test`. So if you choose to use `cargo-nextest`, you can substitute all `cargo test` commands in this post with `cargo nextest run`.
To create a new test, we use the `#[test]` attribute:
#[test]
fn my_test_name() { /* ... */ }
When invoking `cargo test`, the `my_test_name` function will be executed as a test.
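Running it produces a short report; the exact output varies slightly between Rust versions, but it looks roughly like this:
running 1 test
test my_test_name ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out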
Rust tests will fail in two situations:
- when the test code panics
- when the test returns an `Err` result

The standard library provides macros that will panic under the right conditions:
// check if something is true
assert!(boolean_expression, "message when false");
// check if one thing is equal to another
assert_eq!(expected, actual, "message when not equal");
// check if two things are not equal
assert_ne!(expr1, expr2, "message when equal");
// unconditional panic
panic!("message");
// example
#[test]
fn this_test_fails() {
assert_eq!(1, 2);
}
Since a test needs to panic to fail, the `.expect()` method on the `Result` and `Option` types is great for testing:
#[test]
fn this_test_fails() {
let two: Option<i32> = None;
let two = two.expect("missing two"); // panic
assert_eq!(2 + two, 4);
}
If you don't want to `.unwrap()` or `.expect()` on `Result`, you can instead change the return type of a test function to `Result<T, E>`, which will trigger a test failure whenever `Err` gets returned.
As a bonus, you will gain access to the question mark operator (`?`) in your tests:
fn some_fn() -> Result<bool, String> {
Ok(true)
}
#[test]
fn result_test() -> Result<(), String> {
// We can use question mark instead of unwrap.
// If some_fn() is `Err`, then the test will
// fail at this line.
let is_ok = some_fn()?;
if is_ok {
Ok(())
} else {
// `Err` fails the test
Err("not ok!".into())
}
}
Sometimes you may want to test that some code does panic. In these cases you can add the `#[should_panic]` attribute, which fails the test whenever the test code does not panic:
#[test]
#[should_panic]
fn panic_ok() {
panic!("test passed");
}
#[test]
#[should_panic]
fn this_fails() {
assert!(true);
}
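`#[should_panic]` also accepts an `expected` argument, which makes the test pass only if the panic message contains the given substring. This guards against a test passing because of some unrelated panic. A minimal sketch:
#[test]
#[should_panic(expected = "divide by zero")]
fn panics_with_expected_message() {
    // passes: the panic message contains the expected substring
    panic!("attempted to divide by zero");
}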
There are also times when a test takes a significant amount of time to execute. For these situations, the `#[ignore]` attribute will cause the test to get skipped when running `cargo test`:
#[test]
#[ignore]
fn only_runs_with_flags() {
std::thread::sleep(std::time::Duration::from_secs(5000));
panic!("test failed");
}
To then come back and run ignored tests, invoke `cargo` with the `--ignored` flag and then go grab a coffee ☕.
cargo test -- --ignored
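If you'd rather run the ignored tests alongside everything else in a single pass, the test harness also accepts the `--include-ignored` flag:
cargo test -- --include-ignored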
Fluent APIs like those found in Jest (for JavaScript) are a popular way to construct tests.
While not part of the standard library, the `spectral` crate provides a fluent testing API for Rust:
cargo add spectral
#[test]
fn with_spectral() {
use spectral::prelude::*;
assert_that(&1).is_equal_to(2);
let nums = vec![1, 2, 3];
assert_that(&nums).has_length(3);
assert_that(&nums).contains(1);
}
Unit testing tests individual functions or 'units' of code.
Unit tests serve two primary functions:
- confirming that a unit of code works the way you expect
- catching regressions when that code changes later
To create a unit test in Rust, we first need to create a test module and annotate it:
#[cfg(test)]
mod tests {
use super::*;
// test code goes here
}
So what's happening here?
The `#[cfg(test)]` annotation tells the Rust compiler to compile this code only when running in test mode (like when running `cargo test`). And we use a separate module for tests so the test code doesn't get mixed with the program code.
You can write your test code in a separate file if you'd like, but Rust projects tend to keep the test modules in the same file as the program code. (Using the same file provides the added benefit of being able to `assert!` on state that may not be accessible through a public interface.)
The `use super::*;` makes all the functionality in your program code available in your test module. This makes it easy to test your functions since you won't have to specify a full path to access them.
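Putting it all together, here's a minimal sketch of a single file containing both program code and its test module. The `double` function is a made-up example, and note that it isn't even `pub`: thanks to `use super::*;`, the test module can still reach it:
// private function: not visible outside this file
fn double(n: i32) -> i32 {
    n * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn doubles_positive_numbers() {
        // we can call the private function directly
        assert_eq!(4, double(2));
    }
}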
Unit tests work best when you test a single thing at a time.
Trying to put more than one assertion in a test can make the test difficult to work with, and it may not be clear what functionality is under test. Multiple assertions may also be testing something that was already done by another test.
Even though functions should do a single thing, there will still be multiple code paths that may execute in any given function. Each of these paths should have a dedicated unit test.
Wrapping up multiple paths in a single unit test makes it unclear what went wrong when the test fails.
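For instance, a made-up function with two code paths would get two tests, one per path:
fn clamp_to_ten(n: i32) -> i32 {
    if n > 10 {
        return 10; // path 1: value gets clamped
    }
    n // path 2: value passes through unchanged
}

#[cfg(test)]
mod clamp_tests {
    use super::*;

    #[test]
    fn clamps_values_above_ten() {
        assert_eq!(10, clamp_to_ten(42));
    }

    #[test]
    fn leaves_small_values_unchanged() {
        assert_eq!(7, clamp_to_ten(7));
    }
}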
Avoid `assert!` on intermediate steps. If you are testing a function named `F` that requires data `A` and `B`, then you should `assert!` only on the `F` function. However, if the intermediate steps that produce `A` and `B` return `Result` or `Option`, use `.unwrap()` or `.expect()` on these steps instead.
Why?
Because `A` and `B` should already have their own unit tests, so there is no need to `assert!` them again while trying to test function `F`.
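Here's a minimal sketch of the idea; `make_a`, `make_b`, and `f` are stand-ins for your own code:
fn make_a() -> Option<i32> { Some(40) }
fn make_b() -> Option<i32> { Some(2) }
fn f(a: i32, b: i32) -> i32 { a + b }

#[test]
fn f_combines_a_and_b() {
    // intermediate steps: expect() instead of assert!,
    // because make_a and make_b have their own unit tests
    let a = make_a().expect("failed to build A");
    let b = make_b().expect("failed to build B");

    // the single assertion targets only the function under test
    assert_eq!(42, f(a, b));
}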
Naming a test function `functionality_works` or `it_does_the_thing` isn't helpful when the test fails, because it's not clear what we are testing or what the expected behavior is.
Opt instead for names like `succeeds_with_empty_input` or `reports_error_when_invalid_syntax_encountered`.
These function names aren't used outside of test results, so don't hold back on being descriptive.
Also? If you're following an SDLC that uses a ticket/bug tracker, then including the tracker ID in the test name is a good idea too, such as `fix_1234`, `feature_42`, or `bug_99`.
This way, you can then reference the tracker to discover all the information you need about why the test is there. Huzzah!
Integration testing tests program behavior when multiple modules are linked together to perform some larger operation.
Before we dive into writing integration tests in Rust, let's take a look at what an integration test is versus a unit test.
Let's assume we have an image processing program with 3 steps:
1. Load an image from a file path
2. Apply filters to the image
3. Save the processed image to disk
All three steps can (and should) have unit tests to ensure correct functionality.
These unit tests will focus on just the step that's being tested.
Here's the thing though: all of the unit tests for each step can pass, yet the program can still fail to function as expected.
So what's the issue here?
Well, perhaps we made a mistake in how we ordered the steps in the code, or maybe there was an error at some step and we handled it wrong.
This is where integration testing comes in. An integration test will test the entire three step process by going through steps 1-3 in the correct order, and then verifying the result.
For our example, the input for the integration test is a file path to an image, and the output is a saved image with filters applied.
Unlike a unit test, an integration test can (and should) `assert!` at each step in the process. This is because we are testing an entire process, and each step in the process is significant.
While unit tests get stored in the same file as the source code under test, integration tests get stored outside of the source tree for the project.
Rust treats a `tests` directory in the crate root as an integration test directory, and a directory tree for integration testing looks like this:
<crate_root>
├── Cargo.toml
├── src/
│ └── lib.rs
└── tests/
    ├── integration_1.rs
    ├── integration_2.rs
    └── integration_3.rs
Since integration tests exist outside of the `src/` directory, they must reference the crate under test with either `use`, or with an absolute path:
// Option 1: absolute path to the crate's items
#[test]
fn it_works() {
    assert!(my_crate::some_fn());
}

// Option 2: bring the items into scope with `use`
#[test]
fn it_works() {
    use my_crate::*;
    assert!(some_fn());
}
Working with integration tests presents some differences from unit tests:
- There's no need for `#[cfg(test)]`, because integration tests are always run in a testing context
- Subdirectories within `tests/` get ignored and aren't built as integration tests. This means all integration tests must be present at the root of the `tests/` directory

Oftentimes you will want to create some shared test code for use throughout multiple tests. Shared modules are able to accomplish this:
<crate_root>
├── Cargo.toml
├── src/
│ └── lib.rs
└── tests/
├── shared/
│ └── mod.rs
├── integration_1.rs
├── integration_2.rs
└── integration_3.rs
All of your shared code can exist in `shared/mod.rs` and then be referenced in your integration tests. Since each integration test file gets treated as a separate crate, each file needs to have `mod shared;` to identify and use the shared module:
mod shared;
// now we can use `shared`
#[test]
fn it_works() {
// get some shared data
let data = shared::some_shared_data();
assert!(data);
}
If you have a large amount of integration tests, it can be helpful to create subdirectories for organization.
But wait... don't integration tests have to exist in the root `tests/` directory?
Correct! To get around this, you can place the tests into modules and then include the modules in a root-level integration file:
<crate_root>
├── Cargo.toml
├── src/
│ └── lib.rs
└── tests/
├── login/
│ ├── admin.rs
│ ├── user.rs
│ └── mod.rs
└── test_login.rs
The `test_login.rs` integration file can then include the `login` module:
// in test_login.rs
mod login;
Then, each file in `login/` gets included in `mod.rs`:
// in login/mod.rs
mod admin;
mod user;
And finally, each module included in `login/mod.rs` can have tests:
// in admin.rs
#[test]
fn admin_can_login() { /* .. */ }
// in user.rs
#[test]
fn user_can_login() { /* .. */ }
Once the module structure gets created, `cargo test` will pick up all the `#[test]` annotations in all submodules and then run the tests as usual.
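As with any other tests, you can run just a subset by passing a name filter to `cargo test`; any test whose name contains the given substring will run. For example:
cargo test admin_can_login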
Snapshot testing (also known as baseline testing) is a testing methodology that uses output saved from a prior test run and checks it against the current output of the function under test. If the current output doesn't match the previous output, then the test fails.
Snapshot tests are convenient when the output of a function is large and cumbersome to test.
Instead of running `assert!` on all aspects of the output, the output instead gets a manual review by a developer. If the output passes manual review, it is saved as a snapshot.
On subsequent runs, the test compares the current output against this previously approved snapshot.
The insta crate provides snapshot testing for Rust, and it includes a `cargo` subcommand for interactive snapshot review in the terminal.
To set up `insta` for your project, run:
cargo add --dev insta --features yaml
cargo install cargo-insta
The insta quickstart has a great example of writing a snapshot test:
fn split_words(s: &str) -> Vec<&str> {
s.split_whitespace().collect()
}
#[test]
fn test_split_words() {
let words = split_words("hello from the other side");
// we use this macro instead of the normal `assert!`
insta::assert_yaml_snapshot!(words);
}
After running `cargo test`, the test fails because the snapshot hasn't been created yet.
We can review the output with `cargo insta review` and choose whether we want to accept the snapshot, reject the snapshot, or review it later (skip).
Once accepted, the next run of the test will pass as long as the current output matches the output saved in the accepted snapshot.
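Accepted snapshots are stored as `.snap` files in a `snapshots/` directory next to the tests. For the example above, the file contents would look something like this (the header details vary by `insta` version):
---
source: src/lib.rs
expression: words
---
- hello
- from
- the
- other
- side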
Mock testing provides a way to create fake functionality under your control that mimics the behavior of actual functionality. This functionality is then grouped into a mock object (or just a mock).
Mocking is great when you need to test interactions with third-party code outside of your control. An example could be mocking a payment provider: you don't want to send actual transaction requests to the provider while testing, so instead you mock the functionality for your tests.
The mockall crate allows creation of mock objects based on functionality declared with traits (it works for `struct`s too, but traits are easier to work with).
To get started with `mockall`, add it to your project using:
cargo add --dev mockall
Before we can create a mock, we'll need a trait to work with:
use mockall::automock;
// This annotation will generate a mock struct that we can
// use in testing.
#[automock]
trait Calc {
/// adds `n` to some number stored in the struct
fn add(&self, n: u32) -> u32;
}
And we'll implement some functionality just so we can see how the trait behaves:
// we'll implement `Calc` on this struct
struct Ten;
impl Calc for Ten {
/// adds 10 to the input
fn add(&self, n: u32) -> u32 {
10 + n
}
}
// We'll test this function. If we use the `Ten` struct,
// then a function call to `add` will always add 10 to
// `n` because of the `Calc` implementation we wrote above.
fn add(calc: &dyn Calc, n: u32) -> u32 {
calc.add(n)
}
#[cfg(test)]
mod testmock {
use super::*;
#[test]
fn adds_stuff() {
use mockall::predicate;
// The struct generated by `#[automock]` is always
// called `MockX` where `X` is the name of the trait:
let mut mock = MockCalc::new();
// We need to configure the mock object.
//
// `expect_add` tells `mockall` that we are expecting
// a function call to `add`
mock.expect_add()
// and our expected input/argument to `add` is 2
.with(predicate::eq(2))
// we will only call it 1 time
.times(1)
// and it will return the result of this closure
.returning(|n| 2 + n);
// The mock has been configured and will only work if
// we call the `add` function with an argument of
// `2`. It will return the result of 2+2 because the
// `returning` closure adds `2` to whatever was input
// to `add`. In this case the input will always be `2`
// because we forced the mock to only accept `2` as
// the argument, using `predicate`. Removing `predicate`
// will allow the mock to calculate arbitrary values.
assert_eq!(4, add(&mock, 2));
}
}
For this contrived example, the mocking is a bit excessive. However, you can imagine a situation where instead of the `Ten` struct and an `add` function, we have a `Factory` struct and a `start_machine` function.
Using mocks allows us to simulate any arbitrary behaviors, so we would be able to fake the `start_machine` function in our tests.
This enables us to test the program functionality without impacting or relying on things that exist outside of our program.
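As a sketch of that scenario (the `Factory` trait and `start_machine` function are hypothetical, not from any real API):
use mockall::automock;

#[automock]
trait Factory {
    /// starts a physical machine; far too expensive
    /// and dangerous to trigger from a test
    fn start_machine(&self, id: u32) -> Result<(), String>;
}

#[test]
fn machine_starts_successfully() {
    let mut mock = MockFactory::new();
    // simulate a successful start without touching real hardware
    mock.expect_start_machine().returning(|_| Ok(()));
    assert!(mock.start_machine(7).is_ok());
}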
`mockall` has a comprehensive user guide with multiple examples for creating effective mocks.
Property testing provides a way to explore a random sampling of a predefined testing space by setting "properties" of test data.
Property tests can help find inputs that make your tests fail, but they cannot test all inputs due to the limited exploration space. When a test fails, the Rust property testing crate goes through a process called shrinking.
This reduces the input to the minimum value required to produce the error, making it easier to identify the root of the problem.
To get started with property testing in Rust, we will use the proptest crate:
cargo add --dev proptest
And we'll test this trivial function that has inappropriate use of `unwrap()`:
// This function determines if the input matches
// the pattern "abcNNN" where NNN is a number.
pub fn is_abcnum(s: &str) -> bool {
// use .as_bytes() for slice matching
let bytes = s.as_bytes();
match bytes {
[b'a', b'b', b'c', num @ ..] => {
// get numeric portion. unwrap() is OK here
// because we started with a &str in the params.
// The byte sequence `abc` lands on proper
// grapheme boundaries.
let num = std::str::from_utf8(num).unwrap();
// convert str to num. This crashes for a few reasons:
// - the input might be empty or not a number
// - the input might be a negative number
// - the input might be too large to fit into u8
let _ = num.parse::<u8>().unwrap();
true
}
_ => false,
}
}
Now that we have some code to work with, we can create a property test:
#[cfg(test)]
mod tests {
use super::is_abcnum;
use proptest::prelude::*;
// the proptest macro is needed for custom syntax in the
// test function parameters
proptest! {
#[test]
// the possible range of inputs is included in the parameters
fn prop(n in 1..100000) {
// using `n` here can be any number between 1 and 100000
is_abcnum(&format!("abc{n}"));
}
}
}
Writing a property test is like writing a regular unit test, but we also wrap it in the `proptest!` macro.
This macro provides us with additional syntax we can use in the function parameters of the test function. The syntax allows usage of a range or regular expression and follows the format `VAR in REGEX` or `VAR in x..y`, where `VAR` will be a usable variable name in the test function.
We can also include more than one set of ranges/regular expressions in a single test function:
proptest! {
#[test]
fn any_letters_any_number(letters in "[a-zA-Z]+", n in 1..100000) {
is_abcnum(&format!("{letters}{n}"));
}
#[test]
fn three_letters_any_number(letters in "[a-zA-Z]{3}", n in 1..100000) {
is_abcnum(&format!("{letters}{n}"));
}
#[test]
fn abc_any_number(n in 1..100000) {
is_abcnum(&format!("abc{n}"));
}
}
A plain `cargo test` is sufficient to run the property tests, and we'll get output similar to this (formatted for easier viewing):
thread 'tests::abc_any_number' panicked at 'Test failed:
called `Result::unwrap()` on an `Err` value:
ParseIntError { kind: PosOverflow };
minimal failing input: n = 256
The last part of the output is the important bit. It indicates that a failure occurred, and the minimal value to cause the failure is `256`.
This makes sense because we try to parse the number into a `u8`, which has a maximum value of `255`.
So using `256` will cause a crash on the `.unwrap()` call in our function.
`proptest` has a lot of different options, so check out the proptest book for more info.
Fuzz testing (or fuzzing) is a testing method which executes a function using brute-force pseudo-random inputs based on genetic algorithms.
The algorithms mutate known good input in an effort to exercise code paths in an efficient way, instead of pure brute-force.
The goal of fuzzing is to crash a program with varying input that developers and testers may not have considered when writing tests.
Rust fuzz testing uses AFLplusplus and has a `cargo` subcommand to simplify the testing process.
To install the subcommand, run:
cargo install afl
Since the goal of fuzz testing is to crash the program, we need a binary that we can run again when it crashes. `AFL` will take care of relaunching the program for us, but we do need to set up the project correctly.
For this example, we'll make a hybrid binary+library project, but for typical usage you can just `use` your crate in a new binary project.
To create a hybrid project, start with a binary project using `cargo init any_name_you_want` and then add this to `Cargo.toml`:
[lib]
path = "src/lib.rs"
name = "my_crate"
This will set up the project to also contain the library file `lib.rs`, where we can write a function to test. We'll use the same example from the property testing section since it crashes:
// src/lib.rs:
pub fn is_abcnum(s: &str) -> bool {
let bytes = s.as_bytes();
match bytes {
[b'a', b'b', b'c', num @ ..] => {
let num = std::str::from_utf8(num).unwrap();
let _ = num.parse::<u8>().unwrap();
true
}
_ => false,
}
}
And we can create the executable in `src/bin/fuzzme.rs`:
// src/bin/fuzzme.rs
#[macro_use]
extern crate afl;
fn main() {
// The `fuzz!` macro handles all the boilerplate for us.
// We just need to call our function within the macro:
fuzz!(|data: &[u8]| {
// `is_abcnum` requires &str for input, so we'll only
// try calling it if the generated data is a valid &str
if let Ok(s) = std::str::from_utf8(data) {
// Call the function. Ignore the result because all
// we care about are crashes.
let _ = my_crate::is_abcnum(s);
}
});
}
Before we can start fuzz testing, we need to provide a handful of sample inputs. These sample inputs should be working inputs that don't crash the program.
`AFL` will then take these currently working inputs and mutate them to try and cause a crash:
mkdir fuzz_samples # we'll use a folder called `fuzz_samples`
echo -n "abc123" > fuzz_samples/sample1 # any filename works
echo -n "abc12" > fuzz_samples/sample2
echo -n "abc1" > fuzz_samples/sample3
Now we are ready to build the project and begin fuzz testing:
cargo afl build
cargo afl fuzz -i fuzz_samples -o fuzz_out target/debug/fuzzme
Important: You might get warnings about system settings. If you do, you can choose to ignore the warnings by setting the indicated flags and re-running the `cargo afl fuzz` command, or you can change your system settings with the commands provided in the error messages.
Once AFL starts running, you'll see a live status console.
There is a lot of information available in the `AFL` console. You can learn about the details of the console from the `AFL` user guide, but for this example we will look just at the crashes.
Fuzz testing goes on indefinitely while it explores the problem space. Since we have some crashes already, we can exit the testing and look at the inputs that caused crashes:
$ paste fuzz_out/default/crashes/id* | sed 's/\t/\n/g'
abcq2
abcG
abc
The fuzz testing discovered that `abc` without numbers caused a failure, and `abc` followed by a letter caused a crash. If we take a look at the sample program, we can deduce that this line caused the problem:
let _ = num.parse::<u8>().unwrap();
It tries to `unwrap` the data into a `u8`, which will fail for empty input, letters, negative numbers, and numbers higher than 255. If we instead change this to:
num.parse::<u8>().is_ok()
And then remove the `true` return value, the program will no longer crash. It still doesn't work as expected (we should use `i128` or some heuristic to identify numbers), but we have at least discovered the root of the crash thanks to fuzz testing.
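For reference, the patched function would look something like this (still imperfect, as noted, but crash-free):
pub fn is_abcnum(s: &str) -> bool {
    let bytes = s.as_bytes();
    match bytes {
        [b'a', b'b', b'c', num @ ..] => {
            // from_utf8 is still fine to unwrap (see the property
            // testing section), but the parse result is now checked
            // instead of unwrapped
            let num = std::str::from_utf8(num).unwrap();
            num.parse::<u8>().is_ok()
        }
        _ => false,
    }
}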
Running the test again after the above change shows 600k executions with garbage data, but no crashes encountered.
Fuzz testing won't help make programs run as expected, but it will help discover sources of program crashes. Keep this in mind when deciding when and where to utilize fuzzing.
While not an actual way to test programs, fake data is often something you'll need when writing your tests. Rust has multiple crates to generate fake data, and there are two I'd like to highlight.
The fake crate generates fake data such as people's names, web addresses, emails, colors, addresses, UUIDs, and more.
The `fake` API offers multiple ways to create fake data, including generating entire faked data structures. From the docs:
use fake::{Dummy, Fake, Faker};
#[derive(Debug, Dummy)]
pub struct Foo {
#[dummy(faker = "1000..2000")]
order_id: usize,
customer: String,
paid: bool,
}
// somewhere inside a function or test:
let f: Foo = Faker.fake();
For more control of the data generated, there are modules that contain different kinds of fake data generators:
use fake::Fake;
use fake::faker::name::raw::{FirstName, LastName};
use fake::locales::EN;
let first: String = FirstName(EN).fake();
let last: String = LastName(EN).fake();
The `locales` module determines what region or language the fake data gets generated for.
synth is a data generation application that generates JSON data based on a schema you provide. The schema uses JSON and can fake arrays, recursive data structures, relational data, and complete objects.
We aren't able to install `synth` using `cargo`, but it does have a one-liner installation method, depending on your operating system.
After installing `synth`, create a new folder for your schema (like `hello_world`) and then create a new `hello_world.json` schema in that folder:
{
"type": "array",
"length": {
"type": "number",
"subtype": "u64",
"constant": 3
},
"content": {
"type": "object",
"username": {
"type": "string",
"faker": {
"generator": "username"
}
},
"email": {
"type": "string",
"faker": {
"generator": "safe_email"
}
}
}
}
This schema will generate an array of 3 objects. Each object will have two fields: a `username` field and an `email` field.
We can generate fake data using this schema with the command `synth generate hello_world`. The output should look something like this:
{
"hello_world": [
{
"email": "margarete@example.org",
"username": "pat_autem"
},
{
"email": "bertrand@example.com",
"username": "zelda_tempora"
},
{
"email": "miracle@example.org",
"username": "selena_autem"
}
]
}
`synth` is a comprehensive data generator with a large amount of customization options. Be sure to check out the docs if you want to learn all it has to offer.
Test code has inherent repetition due to the need to run the same functionality with modified inputs. So, instead of writing tests by hand, we can leverage macros to write tests for us!
We'll start with a function that extracts individual words from a phrase or sentence:
fn words(phrase: &str) -> Vec<&str> {
/* imagination 🌈 */
}
The implementation doesn't matter because we just want to write tests using macros:
macro_rules! test_words {
    (
        $( // begin a repetition ($)
            //
            // our tests will use this format:
            //
            // test_name : input -> expected_output ,
            //
            // (the trailing comma is required: `expr` fragments
            // may only be followed by `,`, `;`, or `=>`)
            $test_name:ident : $in:literal -> $expected:expr ,
        )+ // end repetition: at least 1 test is required (+)
    ) => {
        $( // begin repetition. All code in this block will repeat
           // for every complete match found by the matcher (above).
            #[test]
            fn $test_name() {
                // run the `words` function with the provided input ($in)
                let actual = words($in);
                // make the assertion
                assert_eq!($expected, actual);
            }
        )+ // end repetition
    };
}
If you want to use this macro in your own projects, you can copy+paste and then change the `#[test]` block. The details of macros are beyond the scope of this post, but if you want to learn more, check out The Little Book of Rust Macros, as well as my complete Rust course.
Now that we have a macro, we can invoke it for our tests:
test_words![
    ignores_period: "Hello friend." -> vec!["Hello", "friend"],
    ignores_comma: "Goodbye, friend." -> vec!["Goodbye", "friend"],
    ignores_semicolon: "end; sort of" -> vec!["end", "sort", "of"],
    ignores_question_mark: "why?" -> vec!["why"],
    separates_dashes: "extra-fun" -> vec!["extra", "fun"],

    separates_by_comma_without_space:
        "Goodbye,friend." -> vec!["Goodbye", "friend"],

    apostrophe_is_one_word:
        "let's write macros" -> vec!["let's", "write", "macros"],
];
Macros don't care about whitespace, so you can format the invocation in any way you'd like to maximize readability.
I've used multiple lines for some of the longer tests, and added additional whitespace to help break up the otherwise giant wall of text.
It's also possible to achieve a similar result to the above by using a test table.
The concept is similar to macros in that we provide the function input and the expected result. But instead of generating individual test functions, there will be a single test function that loops through the table:
#[test]
fn test_words() {
// skip auto formatting
#[rustfmt::skip]
let cases = vec![
// (input, expected output, message on failure)
("Hello friend.", vec!["Hello", "friend"], "excludes period"),
("Goodbye, friend.", vec!["Goodbye", "friend"], "excludes comma"),
("Goodbye,friend.", vec!["Goodbye", "friend"], "separates comma without space"),
("extra-fun", vec!["extra", "fun"], "separates dashes"),
("end; sort of", vec!["end", "sort", "of"], "ignores semicolon"),
("aren't macros great", vec!["aren't", "macros", "great"], "apostrophe is one word"),
("why?", vec!["why"], "ignores question mark"),
];
// `cases` is a collection of tuples, so we can destructure them in the loop
for (input, expected, assert_message) in cases {
// run the `words` function with the provided input
let actual = words(input);
// make the assertion. We _must_ include the "{}" formatting with
// the assert message, otherwise we won't know which test failed.
assert_eq!(expected, actual, "{}", assert_message);
}
}
Since all the tests are in a single function when using a table, the test harness isn't able to spawn multiple threads for the tests. If your tests don't take long to run, then it shouldn't be much of an issue, but long-running tests may be noticeably slower when using tables.
Other than the performance issue, both macros and test tables achieve the same result, so choose whichever makes sense for your situation.
Using macros may be more beneficial for complicated setup situations, since you can define custom syntax and you get multithreaded testing. Test tables, on the other hand, are simple to implement and are nice when your input is short and fits into the table without extra steps.
OK, so we've covered quite a bit there and your head may be spinning a little.
Let's recap:
- Rust has testing built in: annotate a function with `#[test]` and run `cargo test` (or `cargo nextest run`)
- Unit tests check individual functions and live in `#[cfg(test)]` modules next to the code
- Integration tests live in the `tests/` directory and verify that modules work together
- Snapshot testing (insta) compares output against a previously approved snapshot
- Mock testing (mockall) fakes external functionality so you can test in isolation
- Property testing (proptest) samples a space of inputs and shrinks failures to minimal cases
- Fuzz testing (AFL) mutates inputs to hunt for crashes
- Crates like fake and synth generate fake data for your tests
- Macros and test tables cut down on repetitive test code
Like I said, it's a lot to cover in one go, so if you have any more questions or want to learn more about Rust, then come check out my Rust Programming course.
We'll go into testing more, as well as teach you everything you need to know to confidently use the world’s most loved programming language!
You can also ask questions in the dedicated Discord server and chat with other Rust users, as well as myself!
Otherwise, good luck, and may your code always be clean! (or at least, easy to fix).
If you've made it this far, you're clearly interested in Rust so definitely check out all of my Rust posts and content: