Whether you're new to Bash or have been scripting for years, preparing for a tech interview can be nerve-wracking. What if they ask something simple that you know inside out, but your mind suddenly goes blank?
That’s exactly why preparation is key, and in this guide, we’ll cover a range of essential Bash interview questions.
We'll go from the basics of what Bash is and how it works (you'd hate to get those wrong!) to more advanced topics, along with insights into the latest updates and changes in Bash, to ensure you're not just prepared but ahead of the curve.
This way you can go into that interview with confidence that you’ll knock it out of the park.
So grab a coffee and a notepad, and let’s see how many you can answer correctly…
Sidenote: If you find that you’re struggling with the questions in this guide, or perhaps feel that you could use some more training, or simply want to build some more impressive projects for your portfolio, then check out my complete BASH course:
You'll learn Shell Scripting fundamentals, master the command line, and get the practice and experience you need to go from beginner to being able to get hired as a DevOps Engineer, SysAdmin, or Network Engineer! I guarantee that this is the most comprehensive and up-to-date online resource to learn Bash Scripting.
With that out of the way, let’s get into the questions.
Bash (Bourne Again SHell) is a command processor that typically runs in a text window where the user types commands that cause actions. It's the default shell on many Linux distributions and macOS.
Bash is essential for writing shell scripts, automating tasks, and managing system operations.
A Bash script is a text file containing commands for the Bash shell to execute. To create and execute a basic script, start by writing your script in a text editor:
#!/bin/bash
echo "Hello, World!"
The #!/bin/bash line tells the system to use the Bash shell to run the script, and echo outputs "Hello, World!" to the terminal. Save this file with a .sh extension, like script.sh.
Then, make the script executable with:
chmod +x script.sh
This command gives the file permission to be executed. Finally, run the script from the terminal with:
./script.sh
This command specifies that the script is in the current directory and should be executed, printing "Hello, World!" to the terminal.
What is the purpose of #!/bin/bash at the beginning of a script?
The #!/bin/bash line is called a shebang (no, not Ricky Martin), and it specifies the interpreter that should be used to execute the script.
This ensures that the script is executed with the correct interpreter, which is crucial for compatibility and consistency across different environments.
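As a related detail, many scripts use the env form of the shebang, which locates bash via the user's PATH instead of assuming a fixed location. A minimal sketch:
#!/usr/bin/env bash
echo "Running with Bash version: $BASH_VERSION"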
In Bash, variables are used to store data that can be referenced and manipulated throughout your script. This helps make scripts more flexible, reusable, and easier to maintain.
You can define a variable by assigning a value to a name without any spaces around the = sign:
name="John"
echo "Hello, $name"
Variables in Bash are a powerful tool to make scripts dynamic and maintainable by avoiding hard-coded values and enabling reusability of code.
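For instance, putting a path in a variable means you only have to change it in one place. A small illustrative sketch (the backup_dir path and notes.txt file are made up):
backup_dir="/tmp/backups"          # Hypothetical destination
mkdir -p "$backup_dir"             # Create it if it doesn't exist
cp notes.txt "$backup_dir/"        # Assumes notes.txt exists in the current directory
echo "Copied notes.txt to $backup_dir"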
Positional parameters in Bash are a method for passing arguments to a script or function, allowing the script to handle external inputs dynamically.
This makes the script more flexible and reusable because it can operate on different inputs without needing to modify the script itself.
Positional parameters are referenced by numbers: $1 represents the first argument passed to the script, $2 the second, and so on. These numbers allow you to access and manipulate the input directly within your script.
For example
Consider the following simple script:
#!/bin/bash
echo "First argument: $1"
echo "Second argument: $2"
If you run this script with the command ./script.sh hello world, it will output:
First argument: hello
Second argument: world
Here you can see how the script takes external input (hello and world) and processes it using positional parameters.
By using these parameters, you can write scripts that adapt to different inputs, which is essential for creating versatile and reusable Bash scripts.
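Alongside $1 and $2, Bash also provides $0 for the script name, $# for the number of arguments, and "$@" for all arguments at once. A quick sketch:
#!/bin/bash
echo "Script $0 received $# arguments"
for arg in "$@"; do
  echo "Argument: $arg"
done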
Loops in Bash are a fundamental construct that allow you to repeat a set of commands multiple times.
They are particularly useful for automating repetitive tasks, such as processing files in a directory or performing an action a specific number of times.
Bash supports three primary types of loops:
For Loop
While Loop
Until Loop
Each of these loops is suited to different scenarios, and understanding their usage is key to writing efficient and effective scripts.
For Loop
The for loop iterates over a list of items, executing a set of commands for each item in the list. This loop is ideal when you know in advance the exact number of iterations.
For example
for i in 1 2 3; do
echo "Number: $i"
done
In this example, the loop iterates three times, with i taking on the values 1, 2, and 3 in turn. The command echo "Number: $i" prints each value of i as the loop progresses.
This type of loop is often used when you need to perform the same action on a known set of items, like iterating over a list of filenames.
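To illustrate that filename use case, a for loop can expand a glob pattern directly (the *.txt pattern and the files it matches are just examples):
for file in *.txt; do
  echo "Found text file: $file"
done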
While Loop
The while loop continues to execute as long as a specified condition is true. It's commonly used when the number of iterations is not known beforehand and depends on dynamic conditions.
For example
count=1
while [ $count -le 3 ]; do
echo "Count: $count"
((count++))
done
Here, the loop starts with count set to 1 and continues running as long as count is less than or equal to 3. During each iteration, count is printed and then incremented by 1.
This loop is useful for scenarios where you need to continue looping until a certain condition changes, such as waiting for user input or processing data until a threshold is met.
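A common real-world variant is reading input line by line until none is left. A minimal sketch that assumes a file named data.txt exists:
while read -r line; do
  echo "Read: $line"
done < data.txt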
Until Loop
The until loop is similar to the while loop but with a reversed condition: it runs until the specified condition becomes true. This is useful when you want to keep executing commands until an event occurs.
For example
until [ condition ]; do
# Commands to execute
done
The until loop will execute the commands inside it repeatedly until the condition specified becomes true.
This loop is particularly useful when you're waiting for something to happen, like a process to finish or a file to be created, and you want to keep checking until that condition is met.
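For instance, here is a sketch of waiting for a file to appear (the /tmp/ready path is hypothetical):
until [ -f /tmp/ready ]; do
  echo "Waiting for /tmp/ready..."
  sleep 1
done
echo "File found, continuing."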
Bash allows you to perform arithmetic operations directly within your scripts using several different methods. This is useful for tasks like incrementing counters, calculating totals, or handling any basic mathematical operations that your script might need.
One of the most common ways to do arithmetic in Bash is by using double parentheses (( )). This syntax lets you perform calculations in a straightforward and readable way:
result=$((5 + 3))
echo "Result: $result"
In this example, the (( )) syntax calculates 5 + 3, and the result is stored in the result variable. When you run the script, it will output Result: 8.
Another method is using the expr command, which is slightly older but still widely used:
result=$(expr 5 + 3)
echo "Result: $result"
Here, expr evaluates the expression 5 + 3 and the result is assigned to the result variable. This will also output Result: 8.
You can also use the let command, which is designed for performing arithmetic operations:
let result=5+3
echo "Result: $result"
This method works similarly, and will again output Result: 8.
These different methods give you flexibility in how you write your scripts, depending on your preference or the specific needs of the task at hand.
Arrays are a way to store multiple values within a single variable, allowing you to manage lists of items like filenames, user inputs, or configuration settings efficiently.
Bash supports indexed arrays, where each element is associated with a numeric index starting from 0.
To define an array, you can list the elements inside parentheses, separated by spaces:
fruits=("Apple" "Banana" "Cherry")
Here, fruits is an array containing three elements: "Apple", "Banana", and "Cherry". You can access individual elements by referencing their index:
echo ${fruits[0]} # Outputs: Apple
This command retrieves the first element of the fruits array, which is "Apple".
You can also loop through all the elements in an array using a for loop, which is particularly useful when you need to process or display each item:
for fruit in "${fruits[@]}"; do
echo $fruit
done
In this loop, "${fruits[@]}" represents all the elements in the fruits array, and the loop will print each fruit on a new line.
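Two other operations worth knowing are getting an array's length and appending to it. A quick sketch that builds on the fruits array above:
echo ${#fruits[@]}    # Outputs: 3 (the number of elements)
fruits+=("Date")      # Append a new element to the end
echo ${fruits[3]}     # Outputs: Date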
One common way to handle errors is by using exit statuses, where a command returns 0 on success and a non-zero value on failure. You can check this exit status using $? and take action based on the result:
command1
if [ $? -ne 0 ]; then
echo "command1 failed"
exit 1
fi
If command1 fails, this script prints an error message and exits with a status of 1.
Another effective method is the set -e option, which automatically exits the script if any command returns a non-zero exit status. This ensures the script stops immediately when an error occurs:
#!/bin/bash
set -e
command1
command2
Here, if command1 fails, the script exits before running command2. However, set -e can be too strict in cases where you expect certain commands might fail and want to handle those failures without stopping the script.
To bypass this, you can use || true to selectively ignore errors:
command3 || true
This allows command3 to fail without causing the script to exit.
These techniques help you control how errors are handled, allowing your script to respond appropriately and avoid unexpected behavior.
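Many scripts pair set -e with two related options, -u (treat unset variables as errors) and -o pipefail (fail a pipeline if any command in it fails). This is a common convention rather than a hard rule, and it carries the same caveats as set -e:
#!/bin/bash
set -euo pipefail
# From here on, the script exits on errors, on unset variables,
# and on failures anywhere inside a pipeline.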
Functions in Bash allow you to group commands into a reusable block, making your scripts more modular and easier to manage.
They also help to encapsulate specific tasks, so you can call the same set of commands multiple times without repeating code.
For example
To define a function, you simply give it a name followed by a set of parentheses and then enclose the commands in curly braces:
#!/bin/bash
greet() {
echo "Hello, $1"
}
In this example, the function greet is defined to take one argument, $1, which represents the first parameter passed to the function. The echo command inside the function prints a greeting message using this argument.
You can call the function by using its name and passing any required arguments:
greet "World"
When you run this script, it will output:
Hello, World
Functions are particularly useful for encapsulating repetitive tasks or complex logic, making your scripts more organized and easier to maintain.
By breaking down your script into smaller, reusable functions, you can improve readability and make updates or changes with less effort.
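Functions can also keep working variables private with local and hand a result back via echo plus command substitution. A minimal sketch:
add() {
  local sum=$(( $1 + $2 ))   # local keeps sum out of the global scope
  echo "$sum"
}
total=$(add 2 3)
echo "Total: $total"         # Outputs: Total: 5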
What's the difference between source and ./ when executing a script?
The source command (or its shorthand .) runs a script within the current shell environment, meaning any changes to variables, functions, or the environment persist after the script finishes.
For example
source script.sh
# or
. script.sh
On the other hand, ./ runs the script in a new subshell, which is a separate process. Any changes made by the script do not affect the current shell environment:
./script.sh
Use source when you need the script to modify the current shell environment, and use ./ when you want to run the script in isolation.
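A quick way to see the difference, assuming a small executable script called set_env.sh (a hypothetical name) whose only job is to export a variable:
# Contents of set_env.sh (made executable with chmod +x set_env.sh):
#   #!/bin/bash
#   export MY_VAR="hello"

./set_env.sh
echo "$MY_VAR"       # Empty: the export happened in a child process and was lost
source set_env.sh
echo "$MY_VAR"       # Prints: hello, because the variable now exists in the current shell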
Conditional statements in Bash allow you to control the flow of your script based on specific conditions. The if, elif, else, and case constructs are commonly used for this purpose.
For example
Using case might look like this:
case $variable in
pattern1)
echo "Pattern 1 matched"
;;
pattern2)
echo "Pattern 2 matched"
;;
*)
echo "No pattern matched"
;;
esac
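An if/elif/else chain works along the same lines. Here's a minimal sketch; the count variable and thresholds are just for illustration:
count=7
if [ "$count" -gt 10 ]; then
  echo "Count is greater than 10"
elif [ "$count" -gt 5 ]; then
  echo "Count is greater than 5 but not more than 10"
else
  echo "Count is 5 or less"
fi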
These structures allow you to perform different actions depending on the outcomes of conditions, making your script more dynamic and adaptable.
What is grep in Bash?
grep is a command-line utility used to search for patterns in files. It’s highly versatile and can be used in various ways.
For example
We could use it to search for a pattern in a file:
grep "pattern" file.txt
To perform a case-insensitive search:
grep -i "pattern" file.txt
To search recursively through directories:
grep -r "pattern" /path/to/directory
Or to count the number of matching lines:
grep -c "pattern" file.txt
grep is essential for filtering text, searching logs, and extracting specific information from files.
What is cron?
cron is a job scheduler that lets you run scripts at specified times or intervals.
You manage cron jobs using the crontab file, which lists the scheduled tasks.
For example
To edit the crontab file:
crontab -e
A cron job is defined by a line with five time fields followed by the command:
* * * * * /path/to/script.sh
For example
This job runs a script every day at 5:00 AM:
0 5 * * * /path/to/script.sh
cron is ideal for automating repetitive tasks, such as backups or system maintenance.
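Two other handy details: you can list your current jobs with:
crontab -l
And the five fields (minute, hour, day of month, month, day of week) accept step values, so this crontab entry would run a script every 15 minutes:
*/15 * * * * /path/to/script.sh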
Command-line arguments allow you to pass input to your script, making it more dynamic.
The simplest way to access these arguments is through positional parameters like $1, $2, etc.
For example
#!/bin/bash
echo "Script name: $0"
echo "First argument: $1"
echo "Second argument: $2"
For more complex argument handling, you can use getopts to process options:
while getopts ":a:b:" opt; do
case $opt in
a) echo "Option A with value: $OPTARG" ;;
b) echo "Option B with value: $OPTARG" ;;
\?) echo "Invalid option: -$OPTARG" ;;
esac
done
This allows you to create scripts that accept flags and options, similar to many command-line utilities.
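Assuming that loop lives in a script named opts.sh (a hypothetical name), invoking it might look like this:
./opts.sh -a foo -b bar
# Option A with value: foo
# Option B with value: bar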
What is the trap command in Bash?
The trap command is used to catch and handle signals, allowing you to define actions that should be taken when a script receives a signal like SIGINT (Ctrl+C).
This is crucial for ensuring that your script can perform cleanup tasks or execute critical code even when interrupted.
For example
To catch the SIGINT signal:
#!/bin/bash
trap "echo 'Script interrupted'; exit" INT
while true; do
echo "Running..."
sleep 1
done
If you press Ctrl+C while this script is running, it will print "Script interrupted" and exit gracefully.
You can also use trap to handle multiple signals or more complex tasks.
For example
Handling the EXIT signal ensures that a specific command runs whenever the script exits, regardless of the exit status:
trap "echo 'Cleaning up...'; rm -f /tmp/tempfile" EXIT
This trap command removes a temporary file when the script finishes, whether it completes successfully or is interrupted.
trap is also useful for managing signals like HUP (hangup) to restart services or reload configurations.
Process substitution is a powerful tool for advanced data processing, allowing you to streamline complex command chains and manage data more flexibly in your scripts.
It also allows you to use the output of a command as if it were a file, making it possible to pass data between commands that expect file inputs. This feature is particularly useful for comparing command outputs or chaining commands together.
There are two main forms of process substitution:
Input process substitution (<(...))
The output of the command inside (...) is treated as a file, which can be read by another command.
For example
If we wanted to compare the contents of two directories:
diff <(ls dir1) <(ls dir2)
Here, ls dir1 and ls dir2 are executed in subshells, and their outputs are compared by diff as if they were files.
Output process substitution (>(...))
Here the outer command writes its output to a file-like object, and the command inside (...) reads that data as its input.
For example
You can redirect the output of a command into another process:
tee >(gzip > output.gz) < input.txt
In this example, tee writes the contents of input.txt to both the standard output and a gzip-compressed file, output.gz.
What is exec in Bash?
The exec command in Bash replaces the current shell process with a specified command, meaning the shell stops running and the command takes over completely.
This is useful for improving performance by avoiding the overhead of creating a new process.
For example
#!/bin/bash
exec ls -l
echo "This will not be printed"
In this script, exec ls -l replaces the shell, so the echo command never executes.
exec is also powerful for manipulating file descriptors, as it allows you to redirect input/output streams, which is particularly useful for managing how a script interacts with files or other processes.
For example
You can redirect standard output to a file throughout the script:
exec > output.log
echo "This will be logged in output.log"
Or you can redirect standard input from a file:
exec < input.txt
cat # This will read from input.txt
By using exec to manage file descriptors, you can fine-tune how your scripts handle input and output, making them more efficient and flexible.
A subshell is a child process created by the parent shell to execute commands in a separate environment.
Commands enclosed in parentheses () are executed in a subshell, meaning any changes to the environment (like variables or the working directory) are isolated from the parent shell.
For example
If you want to temporarily change the working directory and execute some commands without affecting the main shell:
(
cd /new/directory
echo "Current directory in subshell: $(pwd)"
)
echo "Current directory in parent shell: $(pwd)"
Here, the cd command only changes the directory within the subshell, and the parent shell remains unaffected.
Subshells are also useful for running commands in parallel, which can improve the efficiency of your scripts:
(
sleep 2
echo "Task 1 completed"
) &
(
sleep 1
echo "Task 2 completed"
) &
wait
This allows multiple tasks to run concurrently, which is particularly useful in scripts that perform time-consuming operations.
You can manage background processes in Bash to run tasks concurrently without blocking the terminal. To start a process in the background, append an & to the command:
./long_running_script.sh &
If you need to bring a background process to the foreground, use fg:
fg %1
To see all background jobs, use jobs:
jobs
You can also terminate a specific job with kill followed by the job number:
kill %1
In addition to managing background jobs, you can use disown to detach a job from the terminal, allowing it to continue running even if the terminal is closed:
./long_running_script.sh &
disown %1
You can also run multiple commands simultaneously in the background, which is useful for multitasking:
command1 &
command2 &
wait
This way, you can perform several tasks in parallel while still using the terminal for other commands.
A Here Document allows you to pass a block of text to a command directly within a script, which is useful for creating files or passing multi-line input.
cat <<EOF
This is a
multiline string
EOF
In this example, everything between <<EOF and EOF is treated as input to the cat command. Here Documents are useful for scripting complex text output or generating configuration files.
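Since generating configuration files is such a common use, here's a minimal sketch that redirects a Here Document into a file (the file name and settings are made up):
cat > app.conf <<EOF
host=localhost
port=8080
user=$USER
EOF
Note that quoting the delimiter (<<'EOF') would prevent variable expansion, so $USER would be written literally.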
Command substitution allows you to capture the output of a command and use it as input for another command. This can be done using either backticks or the preferred $(command) syntax:
current_date=$(date)
echo "Today is $current_date"
This captures the output of date and assigns it to current_date. Command substitution is essential for making scripts dynamic by embedding command outputs within other commands.
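The older backtick form does the same job, but $( ) is generally preferred because it nests cleanly without escaping. A quick comparison:
current_date=`date`                     # Legacy backtick syntax
file_count=$(ls "$(pwd)" | wc -l)       # $( ) nests without any escaping
echo "Files here: $file_count"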
Output redirection in Bash allows you to control where the output of a command is sent, which is essential for managing data flow in your scripts. Common redirection operators include:
> : Redirects output to a file, overwriting it.
>> : Appends output to a file without overwriting.
2> : Redirects standard error to a file.
&> : Redirects both standard output and error to a file.
For example
You can redirect standard output to a file and errors to another file:
echo "Hello, World!" > output.txt
ls non_existent_file 2> error.log
If you want to suppress output, you can redirect it to /dev/null, effectively discarding it:
command > /dev/null 2>&1
Here, both the standard output and error are redirected to /dev/null.
To combine standard output and error streams into the same file, you can use 2>&1:
command > output.log 2>&1
This ensures that both outputs are captured in a single log file, making it easier to troubleshoot issues.
A symbolic link is a file that points to another file or directory, acting like a shortcut. You create a symbolic link using ln -s:
ln -s target_file link_name
Symbolic links are useful for managing shared resources, organizing files, and creating easy access points in your file system.
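To inspect links in a script, readlink prints a link's target and the -L test checks whether a path is a symbolic link. A quick sketch using the link_name from above:
readlink link_name            # Prints: target_file
if [ -L link_name ]; then
  echo "link_name is a symbolic link"
fi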
You can check for the existence of a file or directory using conditional expressions:
if [ -f "file.txt" ]; then
echo "File exists."
else
echo "File does not exist."
fi
if [ -d "directory" ]; then
echo "Directory exists."
else
echo "Directory does not exist."
fi
This ensures your script handles files and directories appropriately, preventing errors by checking their existence first.
What's the difference between > and >> in Bash?
Both > and >> are used for output redirection. The > operator overwrites the target file, while >> appends to the file without overwriting it:
echo "Hello, World!" > file.txt # Overwrites file.txt
echo "Hello again!" >> file.txt # Appends to file.txt
These operators control how output is managed, either replacing existing content or adding to it.
Signals are used to control processes, and handling them properly ensures your script can respond to interruptions or termination requests.
For example
The trap command lets you specify commands to execute when a signal is received, like so:
#!/bin/bash
trap "echo 'Signal received'; exit" SIGINT SIGTERM
while true; do
echo "Running..."
sleep 1
done
This script handles SIGINT (Ctrl+C) and SIGTERM by printing a message and exiting gracefully. Handling signals is crucial for managing resources and ensuring clean termination of scripts.
Bash provides several built-in methods for manipulating strings, which are essential for tasks like formatting text, parsing input, and preparing data within scripts.
You can extract a portion of a string by specifying the starting position and length. The syntax ${str:start:length} is used for this purpose:
str="Hello, World!"
echo ${str:7:5} # Output: World
Here, the substring starting at position 7 (zero-based) with a length of 5 characters is extracted, resulting in "World".
Bash allows you to replace occurrences of a substring within a string using the ${str/old/new} syntax:
echo ${str/World/Bash} # Output: Hello, Bash!
This replaces the first occurrence of "World" with "Bash". To replace all occurrences, you can use ${str//old/new}.
You can also convert strings to uppercase or lowercase. The syntax ${str^^} converts the entire string to uppercase, while ${str,,} converts it to lowercase:
echo ${str^^} # Output: HELLO, WORLD!
echo ${str,,} # Output: hello, world!
This is particularly useful for normalizing text, ensuring consistency in data processing, and preparing strings for case-sensitive operations.
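One more expansion that comes up constantly is getting a string's length with ${#str}:
echo ${#str}   # Outputs: 13 (the length of "Hello, World!")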
What do the && and || operators do in Bash?
The && and || operators in Bash are logical operators that control the flow of commands based on the success or failure of the previous command:
&& : Executes the next command only if the previous one succeeds (i.e., returns an exit status of 0).
|| : Executes the next command only if the previous one fails (i.e., returns a non-zero exit status).
For example
mkdir new_dir && cd new_dir # Change directory only if mkdir succeeds
command || echo "Command failed" # Print message if command fails
These operators allow you to create more robust scripts by chaining commands conditionally.
You can also combine && and || to create more complex logic in a single line:
command && echo "Success" || echo "Failure"
In this case, if command succeeds, it prints "Success"; if it fails, it prints "Failure".
This combination can be particularly useful for concise error handling and decision-making within scripts.
In Bash, you can use a for loop to iterate over files in a directory, allowing you to perform operations on each file.
This is particularly useful for batch processing tasks such as renaming files, converting formats, or applying the same command to multiple files.
For example
#!/bin/bash
for file in /path/to/directory/*; do
echo "Processing $file"
done
This loop processes each file in the specified directory. The * wildcard matches all files, but you can also use more specific patterns, like *.txt, to only process certain types of files.
If your file names contain spaces, it’s important to quote the variable to prevent issues:
for file in /path/to/directory/*; do
echo "Processing \"$file\""
done
For more complex directory structures, you might prefer using a while loop with find to iterate over files recursively:
#!/bin/bash
find /path/to/directory -type f | while read -r file; do
echo "Processing $file"
done
This approach finds all files under the specified directory and processes them, regardless of depth.
It’s particularly useful when dealing with nested directories or when you need more control over which files are processed.
There you have it - 30 of the most common Bash questions and answers that you might encounter in a DevOps interview that involves Bash.
What did you score? Did you nail all 30 questions? If so, it might be time to move from studying to actively interviewing!
Didn't get them all? Got tripped up on a few? Don't worry; I'm here to help.
If you want to fast-track your Bash knowledge and interview prep, and get as much hands-on practice as possible, then check out my complete BASH course:
Like I said earlier, you'll learn Shell Scripting fundamentals, master the command line, and get the practice and experience you need to go from beginner to being able to get hired as a DevOps Engineer, SysAdmin, or Network Engineer!
Plus, once you join, you'll have the opportunity to ask questions in our private Discord community from me, other students and working DevOps professionals.
If you join or not, I just want to wish you the best of luck with your interview!