Introduction
What is Command Line Training?
Command line training equips users with the ability to interact with an operating system through textual commands rather than graphical interfaces. It teaches the syntax, structure, and execution flow of shell environments, enabling precise control over files, processes, and system resources.
The training focuses on practical competencies: navigating directories, managing files, automating tasks, and configuring system settings. Mastery of these skills reduces reliance on point‑and‑click tools, improves efficiency, and supports scripting for repetitive operations.
A typical curriculum includes:
- Introduction to the shell prompt and basic command syntax.
- File system navigation (`cd`, `ls`, `pwd`).
- File manipulation (`cp`, `mv`, `rm`, `touch`).
- Text processing utilities (`grep`, `sed`, `awk`).
- Permission and ownership management (`chmod`, `chown`).
- Process monitoring and control (`ps`, `top`, `kill`).
- Script creation with conditional logic and loops.
- Environment customization through configuration files.
Completion of command line training provides a foundation for advanced automation, remote system administration, and integration with development workflows.
Why is Command Line Training Important?
Command‑line proficiency enables rapid interaction with operating systems, allowing tasks that would require multiple graphical steps to be completed with a single instruction. Mastery of basic commands reduces the time needed for file manipulation, system monitoring, and software installation, directly increasing productivity.
Automation becomes feasible when users can compose scripts that repeat reliable sequences without manual oversight. Scripts executed from the shell provide consistent outcomes across diverse environments, supporting reproducible workflows and minimizing human error.
Key advantages of command‑line training include:
- Direct access to system resources and configuration files.
- Ability to manage remote machines via secure protocols without graphical interfaces.
- Enhanced troubleshooting skills through real‑time diagnostic commands.
- Competitive edge in technical roles where scripting and infrastructure automation are standard expectations.
- Foundation for advanced topics such as container orchestration, cloud provisioning, and DevOps pipelines.
Essential Navigation Commands
1. Changing Directories
1.1 cd
The `cd` command changes the current working directory of the shell process. Execution of `cd` updates the environment variable that tracks the active directory (`PWD`), affecting subsequent file operations and command executions.
Syntax: `cd [directory]`. When `directory` is omitted, the command switches to the user's home directory. Paths may be absolute (starting with `/`) or relative to the present directory. Quotation marks are required if the path contains spaces or special characters.
Typical usage includes:
- `cd /var/log` - move to an absolute location.
- `cd ..` - ascend one level in the directory hierarchy.
- `cd ../../src` - ascend two levels, then enter `src`.
- `cd` - return to the home directory.
- `cd "My Documents"` - change to a directory whose name includes a space.
Because `cd` must modify the shell's own state, it is implemented as a shell built-in; an external binary could not change the invoking shell's working directory. Errors such as "No such file or directory" appear when the specified path does not exist, in which case the shell retains the previous directory. Effective use of `cd` streamlines navigation in any command-line environment.
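In scripts, a failed directory change should normally stop execution before later commands run in the wrong place. A minimal defensive pattern (the path is illustrative):

```bash
cd /var/log || exit 1   # abort the script if the directory change fails
```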
1.2 pwd
The `pwd` command displays the absolute pathname of the current working directory, allowing users and scripts to ascertain their location in the filesystem hierarchy. It returns a string that begins with the root (`/`) and reflects the directory structure traversed to reach the active directory.
Typical usage requires no arguments:
```bash
pwd
```
The command supports two options that affect how symbolic links are resolved:
- `-L` (default) prints the logical pathname, preserving symbolic links as they were used to reach the directory.
- `-P` prints the physical pathname, resolving all symbolic links to their target directories.
Example demonstrating both options:
```bash
cd /tmp
ln -s /var/log link_to_log
cd link_to_log
pwd       # logical path: /tmp/link_to_log
pwd -P    # physical path: /var/log
```
In scripting environments, `pwd` is often combined with command substitution to store the directory path in a variable:

```bash
CURRENT_DIR=$(pwd)
```
This technique enables scripts to reference files relative to the starting location, regardless of subsequent directory changes.
When integrating `pwd` with other commands, consider pipe compatibility. The output is a plain text line terminated by a newline character, making it suitable for tools such as `grep`, `awk`, or `xargs` without additional parsing.

```bash
find "$(pwd)" -type f -name "*.log"
```
Consistent use of `pwd` contributes to reliable navigation and file management across interactive sessions and automated processes.
2. Listing Contents
2.1 ls
The `ls` command lists directory contents, providing a quick overview of files and subdirectories. By default it displays names in alphabetical order (in columns on a terminal, one per line when piped), using the current working directory as the target.
Common options extend its usefulness:
- `-l` - long format; shows permissions, ownership, size, and modification timestamp.
- `-a` - includes entries beginning with a dot, revealing hidden files.
- `-h` - human-readable sizes when used with `-l`; converts bytes to K, M, etc.
- `-R` - recursive listing; traverses subdirectories and prints their contents.
- `-t` - sorts by modification time, newest first.
- `-S` - sorts by file size, largest first.
- `-r` - reverses the sort order applied by other options.
Combining flags is straightforward; for instance, `ls -lah` produces a detailed, human-readable view that includes hidden items. Specifying a path after the options lists a different directory, e.g., `ls /var/log`. The command accepts multiple paths, allowing simultaneous inspection of several locations. Proper use of `ls` forms a fundamental skill for anyone learning command-line navigation and file management.
2.2 ls -l
The `ls -l` command produces a detailed directory listing, displaying each entry on a separate line with the following fields:

- File type and permission bits (e.g., `-rw-r--r--`).
- Number of hard links.
- Owner name.
- Group name.
- File size in bytes.
- Modification timestamp (month, day, and time or year).
- File or directory name.
- Optional symbolic-link target (preceded by `->`).
Key characteristics:
- The leading character indicates the entry type: `-` for regular files, `d` for directories, `l` for symbolic links, and other letters for special files.
- Permission bits are divided into three triads (owner, group, others), each containing read (`r`), write (`w`), and execute (`x`) flags.
- The link count reflects how many directory entries reference the inode; a value greater than one often signals hard links.
- Owner and group fields help enforce access control policies.
- Size is presented in bytes; for human-readable units, combine `-l` with `-h`.
- The timestamp reflects the most recent modification; the format changes based on file age.
Common variations:
- `ls -lh` - human-readable sizes (e.g., `4.2K`).
- `ls -la` - include hidden files (names beginning with `.`).
- `ls -lt` - sort by modification time, newest first.
- `ls -lS` - sort by size, largest first.
- `ls -lR` - recursive listing of subdirectories.
Typical usage examples:
```bash
ls -l /var/log
ls -lh *.conf
ls -la ~/projects
```
Understanding the column layout and frequently paired options enables rapid assessment of file attributes, ownership, and permissions, essential for system administration and troubleshooting tasks.
2.3 ls -a
The `ls -a` command lists directory contents, including entries whose names begin with a dot. These hidden files and directories are omitted by default, so the `-a` flag is indispensable for a complete view of the filesystem state.
When invoked without additional arguments, `ls -a` displays:

- `.` - the current directory
- `..` - the parent directory
- All hidden entries (e.g., `.bashrc`, `.git`)
- Regular files and sub-directories
Typical usage patterns:
- `ls -a /path/to/dir` - show every item in the specified directory.
- `ls -a -l` - combine with long format to reveal permissions, ownership, size, and timestamps for hidden items.
- `ls -a -h` - produce human-readable file sizes alongside hidden entries (effective together with `-l`).
Key behaviors:
- Sorting follows the same rules as plain `ls`; hidden entries are interleaved according to name unless overridden by `-X`, `-t`, etc.
- Symbolic links that are hidden are displayed with the `->` notation when combined with `-l`.
- The command respects the `LC_COLLATE` locale, affecting the order of dot-prefixed names.
Practical considerations:
- Use `ls -a` before editing configuration files to verify their existence.
- Combine with `grep` (e.g., `ls -a | grep '^\.config'`) to filter specific hidden items.
- In scripts, avoid relying on the presence of `.` and `..` when processing output; they are always included with `-a` (use `-A` to list hidden entries without them).
Overall, `ls -a` provides a straightforward mechanism to audit the full contents of any directory, ensuring that hidden resources are visible for troubleshooting, configuration, and security checks.
File and Directory Manipulation
1. Creating Files and Directories
1.1 touch
The `touch` utility creates an empty file or updates the timestamps of an existing file. Its basic syntax is:

- `touch filename` - creates `filename` if it does not exist; otherwise sets the access and modification times to the current moment.
- `touch -a filename` - updates only the access time.
- `touch -m filename` - updates only the modification time.
- `touch -c filename` - does not create a new file; timestamps are changed only if the file already exists.
- `touch -t [[CC]YY]MMDDhhmm[.ss] filename` - sets a specific date and time, where the optional century and seconds may be omitted.
Common use cases include:
- Initialising placeholder files for scripts or build processes.
- Resetting timestamps to force recompilation in make‑based workflows.
- Adjusting file metadata to align with backup or archiving policies.
When combined with wildcards, `touch` can modify many files simultaneously; for example, `touch *.log` updates all log files in the current directory. The command respects the current user's permissions; attempting to modify a file owned by another user without appropriate rights results in an error.
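As an illustration of the `-t` format described above, the following sets both timestamps to a specific moment; the file name is illustrative:

```bash
# CCYY MM DD hh mm .ss -> 2024-01-31 09:30:00
touch -t 202401310930.00 release_notes.txt
```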
1.2 mkdir
The `mkdir` command creates one or more directories in the file system. It accepts a path argument that may be absolute or relative to the current working directory. When multiple directory names are supplied, each is created sequentially.

Key syntax elements:

- `mkdir [options] directory_name`
- `mkdir -p /path/to/new/dir` creates intermediate directories as needed.
- `mkdir -m 755 newdir` sets the access mode of the new directory at creation time.
Common options:
- `-p` - creates parent directories that do not exist.
- `-m` - defines the permission bits (e.g., `755`, `700`).
- `-v` - prints a message for each directory created.
- `-Z` - assigns a SELinux security context.
Practical examples:
- `mkdir project` creates a folder named `project` in the current location.
- `mkdir -p src/main/java` generates the nested structure `src/main/java`, creating `src` and `main` if they are missing.
- `mkdir -m 700 private` creates `private` with read, write, and execute permissions limited to the owner.
Best practices:
- Use `-p` when scripting to avoid errors caused by missing parent directories.
- Apply `-m` to enforce appropriate security settings at creation, reducing the need for subsequent `chmod` commands.
- Combine `-v` with batch operations to verify successful directory creation.
2. Copying and Moving
2.1 cp
The `cp` utility copies files and directories, forming a fundamental part of any command-line workflow. It reads source items and creates exact replicas at the destination, preserving data integrity and enabling rapid file management.

Key aspects of `cp`:
- Basic syntax: `cp [options] source destination`. When multiple sources are specified, the destination must be an existing directory.
- Recursive copy: `-R` or `-r` copies directories and their contents, traversing the entire hierarchy.
- Preserve attributes: `-p` retains mode, ownership, and timestamps; `-a` combines `-R`, `-p`, and additional flags to duplicate a file tree faithfully.
- Force overwrite: `-f` removes existing destination files before copying, avoiding prompts.
- Interactive mode: `-i` requests confirmation before overwriting each file, useful for cautious operations.
- Verbose output: `-v` lists each file as it is processed, helping track progress in large transfers.
- Sparse file handling: `--sparse=always` creates sparse copies for files containing long sequences of zero bytes, optimizing storage.
Typical use cases:
- Duplicate a single file: `cp file.txt backup.txt`.
- Mirror a directory structure: `cp -a /src/ /dst/`.
- Update a directory, copying only sources that are newer than their counterparts in the target: `cp -u source/* target/`.
Understanding these options equips users to execute reliable file replication tasks across diverse environments.
2.2 mv
The `mv` command relocates or renames files and directories within a filesystem. When the source and target are on the same filesystem, it simply updates the directory entries, leaving the original data blocks untouched, which makes the operation fast and reliable.

Typical syntax: `mv [options] source target`. When `target` denotes an existing directory, `source` is placed inside it; otherwise, `source` is renamed to `target`.
Common options:
- `-i` - prompts before overwriting an existing file.
- `-n` - prevents overwriting without prompting.
- `-f` - forces overwrite, suppressing prompts.
- `-u` - moves only when the source is newer than the destination or when the destination does not exist.
- `-v` - displays each move operation.
Practical examples:
- Rename a file: `mv report.txt final_report.txt`.
- Move multiple files into a directory: `mv *.log /var/log/archive/`.
- Rename a directory while preserving its contents: `mv old_project new_project`.
- Prevent accidental overwrite: `mv -n config.cfg backup/config.cfg`.
Safety considerations include using `-i` or `-n` in scripts that handle critical data, specifying absolute paths to avoid ambiguity, and verifying that wildcard expansions match the intended set of files before execution.
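One way to verify a wildcard expansion before committing to a move is to preview it with `echo`; the paths are illustrative:

```bash
echo mv *.log /var/log/archive/   # prints the expanded command without executing it
mv *.log /var/log/archive/        # run only after the preview looks correct
```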
3. Deleting Files and Directories
3.1 rm
The `rm` utility permanently removes files and directories from a filesystem. It unlinks directory entries rather than moving anything to a recycle bin; once the last link to a file is gone, its data is freed, so recovery is difficult without specialized tools.

Typical syntax: `rm [options] file…`. Common options include:
- `-i` - prompts for confirmation before each deletion.
- `-f` - forces removal, ignoring nonexistent files and suppressing prompts.
- `-r` or `-R` - recursively deletes a directory and its contents.
- `-d` - removes empty directories.
- `--preserve-root` - prevents accidental deletion of the root directory (enabled by default).
Safety considerations:
- Use `-i` or `-I` when deleting multiple items to avoid unintended loss.
- Verify the target path with `ls` before executing `rm`.
- Combine `-f` with `-r` only when the operation is intentional and reviewed.
Examples:
- Remove a single file: `rm report.txt`
- Delete several files matching a pattern: `rm *.log`
- Erase a non-empty directory: `rm -r old_backup`
- Force removal without prompts: `rm -f temp.tmp`
In training environments, `rm` appears frequently because managing temporary files and cleaning up test directories is routine. Mastery of its options and disciplined use reduces the risk of data loss while maintaining efficient workflow.
3.2 rmdir
The `rmdir` utility removes empty directories from the file system. It operates only on directories that contain no files or sub-directories; attempting to delete a non-empty directory triggers an error (removing non-empty trees requires `rm -r`).

Typical syntax:

```bash
rmdir [options] directory_name
```

Key options include:

- `-p` - removes the specified directory and its parent directories, stopping when a non-empty directory is encountered.
- `-v` - displays each directory name as it is removed, providing immediate feedback.
- `--ignore-fail-on-non-empty` - suppresses the error for non-empty directories, useful in scripts that must continue regardless.
Example usage:

- Delete a single empty folder: `rmdir temp_folder`
- Remove a nested empty path: `rmdir -p project/build/output`
- Show progress while tolerating non-empty directories: `rmdir -v --ignore-fail-on-non-empty old_logs`
When integrating `rmdir` into automation, combine it with checks that confirm emptiness, such as `test -d` and `find` commands, to prevent unintended data loss. The command's exit status is zero on success and non-zero on failure, enabling reliable error handling in scripts.
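A minimal sketch of such an emptiness check, assuming an illustrative directory named `staging`:

```bash
# remove the directory only when it exists and contains nothing (ls -A omits . and ..)
if [ -d staging ] && [ -z "$(ls -A staging)" ]; then
    rmdir staging
fi
```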
Viewing and Editing Files
1. Viewing File Content
1.1 cat
The cat utility concatenates and displays file contents, serving as a fundamental tool for quick inspection and simple file merging. Its default behavior reads one or more files sequentially and writes the data to standard output, making it ideal for verifying file contents without opening an editor.
Typical usage patterns include:
- `cat file.txt` - output the entire file.
- `cat file1.txt file2.txt > combined.txt` - merge two files into a new file.
- `cat -n file.txt` - prepend line numbers to each output line.
- `cat -b file.txt` - number only non-blank lines.
- `cat -s file.txt` - squeeze multiple consecutive blank lines into a single blank line.
- `cat -E file.txt` - display a `$` at the end of each line, revealing trailing spaces.
- `cat -T file.txt` - represent tab characters as `^I`.
When used with redirection operators, cat can create or extend files (`>` and `>>`) and feed data into pipelines (`|`). Combining cat with other commands, such as `grep` or `awk`, enables rapid filtering and processing of text streams without intermediate storage.
1.2 less
The `less` utility provides a flexible pager for viewing text files and command output without loading the entire content into memory. It supports forward and backward navigation, search patterns, and line numbering, making it suitable for large logs and source code inspection.
Key capabilities include:
- Screen navigation - press `Space` to advance one screen, `b` to move back, and `q` to quit.
- Search - type `/pattern` to locate forward occurrences, `?pattern` for reverse search, and `n`/`N` to repeat the search in respective directions.
- Line control - `g` jumps to the beginning, `G` to the end, while a number typed before `g` (e.g., `50g`) moves directly to that line.
- Display options - `-N` prefixes each line with its number, `-S` disables line wrapping, and `-X` prevents clearing the screen on exit.
- Pipe integration - combine with other commands, e.g., `dmesg | less -R`, to retain color codes in the output.
When invoked as `less -F filename`, the program displays the entire file and exits if it fits on a single screen, bypassing the pager. The `-R` flag preserves raw control characters, essential for viewing colored logs. The `-i` option makes pattern matching case-insensitive unless the pattern contains uppercase characters.
Understanding these options enables efficient examination of extensive text streams, reducing the need for external editors and minimizing system resource consumption.
1.3 head
The head utility displays the initial portion of a file or stream, enabling quick inspection of data without opening the entire source. It is a standard tool in Unix‑like environments and is frequently included in training curricula for command‑line proficiency.
Typical invocation follows the pattern:
```bash
head [options] [file...]
```
If no file is specified, head reads from standard input. By default, it outputs the first ten lines of each input.
Common options:
- `-n <count>` - show the first count lines; with GNU head, a leading "-" (e.g., `-n -5`) prints all but the last count lines.
- `-c <bytes>` - display the first bytes of each file.
- `-q` - suppress header lines when processing multiple files.
- `-v` - always print header lines, even for a single file.
Practical examples:
- `head -n 5 logfile.txt` - view the top five entries of a log file.
- `head -c 1000 archive.tar` - extract the first kilobyte of a binary archive for verification.
- `head -n -5 data.csv` - print everything except the last five lines (GNU head); to start output at line 20 instead, use `tail -n +20 data.csv`.
When combined with pipelines, head limits the volume of data passed to subsequent commands, conserving resources and improving performance. For instance, `grep error largefile.log | head -n 20` returns only the first twenty matching lines, preventing unnecessary processing of the entire log.
1.4 tail
The `tail` utility displays the final lines of a file, allowing rapid inspection of recent log entries, output streams, or any text source where the newest data appear at the end. Its default behavior prints the last ten lines, but the command accepts parameters that modify the amount and format of output.
Typical usage follows the pattern:
```bash
tail [options] [file...]
```
Key options include:
- `-n <count>` - prints the specified number of lines from the end of each file.
- `-c <bytes>` - outputs the last count bytes rather than lines.
- `-f` - follows the file, appending new data to the display as the file grows; useful for real-time monitoring.
- `-F` - similar to `-f` but reopens the file if it is rotated or recreated, preventing interruptions in continuous logs.
- `--max-unchanged-stats=<n>` - when following by name, reopens the file after n status checks without change, balancing rotation detection against system load.
Common scenarios:
- Monitoring server logs: `tail -f /var/log/syslog` shows incoming entries without reopening the file.
- Extracting recent records from a dataset: `tail -n 20 data.csv` reveals the latest twenty rows.
- Debugging output streams: piping `tail -c 200` after a command isolates the final segment of binary or textual output.
Understanding these options equips users to retrieve recent information efficiently, maintain visibility on evolving files, and integrate `tail` into scripts for automated diagnostics.
2. Basic Text Editors
2.1 nano
Nano is a lightweight, terminal-based text editor frequently introduced to beginners because it requires no mode switching. The program launches with the command `nano filename`, creating the file if it does not exist. Once opened, the cursor appears at the top left corner, ready for input.
Key operations are accessed through on‑screen shortcuts displayed at the bottom of the window. The most frequently used commands include:
- Ctrl + O - write the current buffer to disk (prompting for confirmation if the file already exists).
- Ctrl + X - exit the editor; if unsaved changes exist, Nano asks whether to save.
- Ctrl + K - cut the entire current line; repeated presses remove successive lines.
- Ctrl + U - paste the most recently cut text at the cursor position.
- Ctrl + W - initiate a forward search; the query appears after the prompt.
- Ctrl + \ - perform a replace operation across the document.
Additional options enhance functionality. The `-c` flag constantly displays the cursor position (line and column), aiding alignment tasks. The `-m` flag enables mouse support in compatible terminals, allowing point-and-click selection. The `-r N` option hard-wraps lines at column N, which is useful for maintaining readable source code width.
Configuration can be persisted in a user-specific file, `~/.nanorc`. Common directives placed there include:

- `set nowrap` - disables automatic line wrapping.
- `set tabsize 4` - defines the visual width of tab characters.
- `include "/usr/share/nano/*.nanorc"` - loads syntax-highlighting definitions for many languages.
Effective use of Nano relies on remembering the Ctrl‑key shortcuts, employing the optional flags for specific tasks, and customizing the nanorc file to match personal workflow. Mastery of these basics prepares users for more advanced editing environments while keeping the learning curve minimal.
2.2 vim (Basic Usage)
Vim is a modal text editor widely used for rapid command‑line editing. Mastery of its core commands accelerates workflow in any development environment.
- `vim filename` - open or create a file
- `i` - enter Insert mode to type text
- `Esc` - return to Normal mode for command execution
- `:w` - write (save) changes to the file
- `:q` - quit the editor
- `:wq` or `ZZ` - save and quit in one step
- `:q!` - quit without saving
In Normal mode, navigation relies on single-character commands. The cursor moves left, down, up, and right with `h`, `j`, `k`, and `l` respectively. Word-wise movement uses `w` (forward) and `b` (backward). Line navigation includes `0` (start of line), `$` (end of line), and `gg` (first line) or `G` (last line).

Editing operations are performed without leaving Normal mode. Deleting text uses `x` (character), `dw` (word), or `dd` (entire line). Copying and pasting employ `yy` (yank line) and `p` (paste after cursor) or `P` (paste before cursor). Undo is triggered by `u`; redo uses `Ctrl-r`.
Search and substitution are executed from Normal mode using the search operators and the colon command line.

- `/pattern` - search forward for pattern
- `?pattern` - search backward for pattern
- `:%s/old/new/g` - replace old with new throughout the file
These commands constitute the essential toolkit for effective Vim usage. Mastery enables swift file manipulation and seamless integration into command‑driven development pipelines.
Permissions and Ownership
1. Understanding Permissions
1.1 chmod
The `chmod` utility modifies file-system permission bits, allowing owners and administrators to define read, write, and execute rights for the user, group, and others.
Key elements of the command:
- Syntax: `chmod [options] mode file…`
- Mode specification:
  - Octal: a three-digit number (e.g., `755`) where each digit represents permissions for user, group, and others.
  - Symbolic: combinations of `u`, `g`, `o`, `a` with `+`, `-`, `=` and the letters `r`, `w`, `x` (e.g., `u+rwx,g+rx,o-r`).
- Common options:
  - `-R` - apply changes recursively to directories and their contents.
  - `-v` - output a diagnostic for each processed file.
  - `-c` - report only when a change occurs.
Typical usage scenarios:
- Grant execute permission to a script for the owner only: `chmod u+x script.sh`.
- Set standard web-server permissions on a document root: `chmod 755 /var/www/html`.
- Remove write access for group and others on a configuration file: `chmod go-w config.cfg`.
- Apply uniform read/write permissions to all files in a project directory: `chmod -R 664 project/` (see the caveat and safer sketch below).
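Note that the last example has a common pitfall: `-R 664` also strips the execute bit from the directories themselves, making them untraversable. A safer pattern, sketched here with an illustrative path, uses `find` to set file and directory modes separately:

```bash
find project/ -type f -exec chmod 664 {} +   # files: rw-rw-r--
find project/ -type d -exec chmod 775 {} +   # directories keep execute, so they stay traversable
```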
Best practices:
- Use symbolic notation when adjusting specific bits to avoid unintentionally altering unrelated permissions.
- Combine `-R` with careful selection of mode to prevent privilege escalation in nested directories.
- Verify changes with `ls -l` or `stat` to ensure the intended permission set is in effect.
`chmod` remains a fundamental tool for managing access control on Unix-like systems, enabling precise definition of who may read, modify, or execute each file.
1.2 chown
The `chown` command modifies the owner and, optionally, the group associated with a file system object. It is a fundamental tool for managing access rights on Unix-like systems.

Typical syntax: `chown [OPTIONS] USER[:GROUP] FILE…`. The argument `USER` may be a username or a numeric UID; `GROUP` may be a name or a GID and follows a colon. When only `USER` is given, the group is left unchanged; the form `:GROUP` changes the group alone. Paths can be absolute or relative.
- `-R, --recursive` - apply changes to directories and all contained entries
- `-c, --changes` - report only when a modification occurs
- `-v, --verbose` - display each processed file
- `-f, --silent` - suppress most error messages
- `--reference=RFILE` - use ownership of `RFILE` as the target attributes
Example 1: `chown alice report.txt` sets the owner to alice, leaving the group unchanged.
Example 2: `chown bob:staff /var/log/*.log` assigns owner bob and group staff to all log files.
Example 3: `chown -R root:root /etc/ssh` recursively enforces root ownership on the SSH configuration directory.
Proper use of `chown` prevents unauthorized modifications and aligns file ownership with service requirements. Restrict execution to privileged accounts, verify target ownership before applying recursive changes, and combine with `chmod` to enforce complete access control. Regular audits of ownership settings help maintain a secure environment.
2. Viewing Permissions
2.1 ls -l (Revisited)
The `ls -l` command presents directory contents in a detailed, column-based format. Each line displays file permissions, link count, owner, group, size in bytes, modification timestamp, and filename. Permissions are encoded as a ten-character string, where the first character indicates the file type and the subsequent nine characters represent read, write, and execute rights for user, group, and others. The link count reflects hard links to the inode. Owner and group identifiers identify the account responsible for the file. Size is shown in bytes unless modified by additional options. The timestamp follows the locale's date-time format and may include the year or the time depending on the file's age.
- `-h` - convert size to human-readable units (K, M, G).
- `-a` - include entries beginning with a dot, which are hidden by default.
- `-S` - sort output by file size, largest first.
- `-t` - sort by modification time, newest first.
- `--color=auto` - apply color coding to differentiate file types.
- `-r` - reverse the sorting order applied by other options.
Combining flags yields tailored listings; for example, `ls -lah --color=auto` displays all files with human-readable sizes and visual cues, facilitating rapid assessment of directory contents. The predictable column layout also supports parsing in scripts, although filenames containing whitespace require care.
Searching and Filtering
1. Finding Files
1.1 find
The `find` utility locates files and directories by traversing a directory hierarchy, applying criteria defined by the user. It operates directly on the file system, eliminating the need for external indexing tools.
Typical syntax:
```bash
find [starting-point...] [expression]
```
Key components:
- Starting point - one or more directories where the search begins; `.` denotes the current directory.
- Expression - a combination of tests, actions, and operators that filter results.
Common options and tests:
- `-name pattern` - matches file names against a shell-style pattern (case-sensitive).
- `-iname pattern` - same as `-name` but case-insensitive.
- `-type [f|d|l]` - restricts results to regular files (`f`), directories (`d`), or symbolic links (`l`).
- `-size [+|-]N[cwbkMG]` - selects files based on size; `+` means greater than, `-` less than, and `N` is a numeric value with optional unit.
- `-mtime [+|-]N` - filters by modification time; `+N` older than N days, `-N` newer than N days.
- `-perm mode` - matches files with specific permission bits; prefix `/` for any of the bits, `-` for all of the bits, and no prefix for an exact match.
- `-exec command {} \;` - runs a command on each matched file; `{}` is replaced by the file path.
- `-print` - outputs the full path of each match (default action if no other action is specified).
Example usage:
- Locate all Python scripts in the home directory: `find ~/ -type f -name "*.py"`
- Remove empty directories under `/var/log`: `find /var/log -type d -empty -exec rmdir {} \;`
- List files larger than 100 MiB modified within the last 7 days: `find /data -type f -size +100M -mtime -7 -print`
When combined with logical operators (`-and`, `-or`, `!`), `find` can express complex criteria in a single command line. Mastery of this utility is essential for any training program that emphasizes command-line proficiency.
2. Filtering Output
2.1 grep
`grep` searches text using regular expressions, returning lines that match a pattern. It reads from standard input or files supplied as arguments, making it suitable for quick inspection and integration into scripts.
Typical syntax:
```bash
grep [options] pattern [file...]
```
Key options:
- `-i` - ignore case differences.
- `-v` - invert match, showing non-matching lines.
- `-r` or `-R` - recursively search directories.
- `-n` - prefix each line with its line number.
- `-c` - output only the count of matching lines.
- `-E` - enable extended regular expressions.
- `-F` - treat the pattern as a literal string.
Common usage examples:
- Find occurrences of "error" in a log file, case-insensitively: `grep -i error /var/log/syslog`
- List lines that do not contain the word "debug" across multiple files: `grep -v debug *.conf`
- Count how many times "TODO" appears in source code recursively: `grep -rc TODO src/`
Performance tips:
- Pipe output to `grep` for filtering intermediate results, e.g., `ps aux | grep nginx`.
- Use `--exclude` or `--include` to limit file types during recursive searches.
- Combine `-E` with alternation (`|`) to search for several patterns in a single command.
Understanding `grep` fundamentals equips users to extract information efficiently, automate diagnostics, and streamline data processing tasks.
Process Management
1. Viewing Processes
1.1 ps
The `ps` utility displays information about active processes on a Unix-like system. It reads the kernel's process table and presents selected fields in a tabular format, allowing administrators to verify program execution, resource consumption, and process hierarchy.
Typical usage patterns include:
- `ps` - shows processes owned by the current user in the current terminal session.
- `ps -e` or `ps -A` - lists all processes regardless of owner.
- `ps -ef` - provides a full-format listing with UID, PID, PPID, start time, and command line.
- `ps aux` - presents a BSD-style output with CPU and memory percentages, status flags, and command arguments.
- `ps -eo pid,comm,%cpu,%mem` - customizes columns to display only the process identifier, command name, and resource percentages.
Combining options refines the output. For example, `ps -ef | grep httpd` filters the full list for web server instances, while `ps -eo pid,ppid,cmd --sort=-%mem | head -n 5` identifies the five most memory-intensive processes. The command's flexibility makes it essential for routine system monitoring and troubleshooting.
1.2 top
The `top` utility provides a live view of process activity and resource consumption, enabling trainees to monitor system behavior while executing commands. Its core function is to refresh a table of processes at configurable intervals, displaying CPU, memory, and I/O metrics in real time.

Beyond the interactive display, top supports custom column sets saved through its configuration file and a batch mode that outputs data in a parsable format for automated analysis. These capabilities make it suitable for instructional scenarios that require precise, repeatable observations.
Typical invocation follows the pattern `top [options]`. The most relevant options for training purposes include:

- `-d <seconds>` - sets the refresh delay; a shorter interval yields finer granularity.
- `-b` - activates batch mode, directing output to standard output without interactive controls.
- `-n <count>` - limits the number of refresh cycles; useful for scripted demonstrations.
- `-p <pid>` - restricts the display to a specific process identifier, allowing focused analysis.
- `-c` - toggles between command name and full command line display, clarifying script execution paths.
Interpreting the display requires attention to the summary area at the top, which aggregates CPU load, memory usage, and swap activity. The process list below ranks entries by CPU consumption by default; sorting can be altered with interactive keys (`M` for memory, `T` for time). Trainees should correlate spikes in these metrics with the commands they execute, using the detailed columns to pinpoint resource-intensive processes and to adjust workloads accordingly.
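Combining the batch options above yields repeatable captures for later analysis; a sketch with an illustrative output file:

```bash
top -b -d 2 -n 3 > top_snapshot.txt   # three samples, two seconds apart, written to a file
```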
2. Terminating Processes
2.1 kill
The `kill` command terminates one or more processes identified by their process IDs (PIDs); signals may be specified numerically or by symbolic name. Its basic syntax is `kill [options] pid…`; the most common option, `-9`, forces immediate termination, bypassing graceful shutdown procedures. Without an explicit signal, `kill` sends the default `SIGTERM`, allowing the target process to perform cleanup before exiting.
In training pipelines, `kill` is employed to stop stalled or runaway jobs that consume resources without producing results. Automated scripts often monitor job logs and invoke `kill -9` when a timeout threshold is exceeded, ensuring that compute nodes remain available for subsequent tasks.
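A minimal sketch of that escalation pattern, assuming `$pid` holds the job's process ID:

```bash
kill -TERM "$pid"                                  # ask the process to shut down cleanly
sleep 10                                           # grace period for cleanup
kill -0 "$pid" 2>/dev/null && kill -KILL "$pid"    # force only if it is still alive
```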
Key considerations include:
- Verify the PID belongs to the intended training process; accidental termination of system services can cause instability.
- Prefer `SIGINT` (signal 2) or `SIGTERM` (signal 15) before resorting to `SIGKILL` (signal 9) to allow orderly resource release.
- Record termination events in audit logs to facilitate post-mortem analysis.
Proper use of `kill` maintains efficiency in large-scale training environments by preventing resource lock-up and enabling rapid reallocation of compute capacity.
Networking Commands
1. Basic Network Information
1.1 ip
The `ip` utility replaces legacy networking tools and provides a unified interface for configuring network interfaces, routing tables, and address management. It operates on the kernel's networking stack, allowing administrators to query and modify network parameters without restarting services.
Key functions include:
- Displaying interface details with `ip link show` or `ip a`.
- Assigning IPv4 or IPv6 addresses using `ip address add <address>/<prefix> dev <interface>`.
- Removing addresses via `ip address del <address>/<prefix> dev <interface>`.
- Managing routes through `ip route add <dest>/<prefix> via <gateway> dev <interface>` and `ip route del`.
- Adjusting interface state with `ip link set <interface> up` or `down`.
- Configuring neighbor (ARP/ND) entries using `ip neigh`.
Common options enhance precision:
- `-4` or `-6` restrict operations to IPv4 or IPv6.
- `dev` specifies the target interface.
- `scope` defines address visibility (global, link, host).
- `metric` influences route selection priority.
Practical usage examples:
- Assign a static address: `ip address add 192.168.10.20/24 dev eth0`
- Activate an interface: `ip link set eth0 up`
- Add a default gateway: `ip route add default via 192.168.10.1 dev eth0`
- Remove an obsolete route: `ip route del 10.0.0.0/8`
The command's syntax is consistent across Linux distributions, enabling rapid adoption in training environments. Mastery of `ip` equips users to diagnose connectivity issues, implement network policies, and automate configuration scripts with reliability.
1.2 ping
The `ping` command tests network reachability by sending ICMP echo requests to a target host and measuring the round-trip time of each reply. It verifies connectivity, identifies latency issues, and helps isolate routing problems.
Typical syntax:
```bash
ping [options] destination
```
Key options include:
- `-c <count>` - send a specific number of packets.
- `-i <seconds>` - set the interval between packets.
- `-s <size>` - define payload size in bytes.
- `-W <seconds>` - specify the wait time for each reply.
Example usage:
```bash
ping -c 5 -i 0.5 -s 64 example.com
```
The output displays:
- Sequence number of each packet.
- Time‑to‑live (TTL) value.
- Round‑trip time (RTT) in milliseconds.
- Packet loss statistics after completion.
Interpretation guidelines:
- Consistently low RTT values indicate a healthy connection.
- High variability or spikes suggest congestion or unstable links.
- Packet loss above 1 % typically warrants further investigation, such as traceroute or checking firewall rules.
Common pitfalls:
- Blocking of ICMP traffic by firewalls can produce false negatives.
- Using default packet size on low‑bandwidth links may inflate RTT measurements.
- Forgetting to specify a count may cause `ping` to run indefinitely, consuming resources unnecessarily.
2. Remote Access
2.1 ssh
Secure Shell (ssh) provides encrypted remote access to a host over a network. The command initiates a client session, authenticates the user, and establishes a secure channel for command execution or file transfer. Syntax follows the pattern `ssh [options] user@host`, where `user` denotes the remote account and `host` specifies the target address.
Common options enhance functionality and security:
- `-i <identity_file>` - use a specific private key for authentication.
- `-p <port>` - connect to a non-default port.
- `-L <local_port>:<remote_host>:<remote_port>` - create a local port forwarding tunnel.
- `-R <remote_port>:<local_host>:<local_port>` - create a remote port forwarding tunnel.
- `-C` - enable compression to reduce bandwidth usage.
- `-N` - open a tunnel without executing a remote command.
Typical usage examples:
- `ssh alice@example.com` - start an interactive session as alice.
- `ssh -i ~/.ssh/id_rsa user@example.com -p 2222` - connect using a custom key on port 2222.
- `ssh -L 8080:internal.service:80 user@gateway` - forward local port 8080 to a service behind a firewall.
Understanding these parameters allows reliable, secure remote management and forms a core component of any command‑line proficiency toolkit.
Package Management
1. Installing Software (e.g., Debian/Ubuntu)
1.1 apt
The apt utility is the default package manager for Debian-based Linux distributions. It consolidates functions of older tools such as `apt-get` and `apt-cache`, offering a streamlined interface for installing, updating, and removing software packages.
Typical operations include:
- `apt update` - refreshes the local package index, ensuring the system knows about the latest versions available from configured repositories.
- `apt upgrade` - upgrades all installed packages to the newest versions that satisfy dependency constraints.
- `apt install <package>` - retrieves and installs the specified package along with any required dependencies.
- `apt remove <package>` - deletes the package while preserving its configuration files.
- `apt purge <package>` - removes the package and all associated configuration data.
- `apt autoremove` - eliminates packages that were installed as dependencies but are no longer needed.
- `apt search <pattern>` - queries the package database for entries matching the given pattern.
- `apt show <package>` - displays detailed information about a package, including version, description, and dependencies.
Advanced options enhance control:
- `-y` or `--yes` - automatically answers "yes" to prompts, useful for scripted installations.
- `--no-install-recommends` - restricts installation to essential dependencies, avoiding optional recommended packages.
- `-t <release>` - selects a specific release (e.g., `stable`, `testing`) when installing a package.
- `--dry-run` - simulates an operation without making changes, allowing verification of outcomes.
Effective use of apt requires regular execution of `apt update` to keep the package index current, followed by `apt upgrade` to apply security patches and bug fixes promptly. Combining these commands with `apt autoremove` maintains a clean system by discarding orphaned libraries.
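A typical non-interactive sequence combining these commands and options (the package name is illustrative):

```bash
sudo apt update
sudo apt install -y --no-install-recommends curl
sudo apt autoremove -y
```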
2. Installing Software (e.g., Fedora/CentOS)
2.1 dnf or yum
DNF and YUM are command‑line package managers for RPM‑based Linux distributions. DNF supersedes YUM in recent Fedora releases, while YUM remains standard on older Red Hat, CentOS, and Amazon Linux versions. Both tools handle installation, removal, updating, and querying of software packages, relying on repository metadata to resolve dependencies.
Essential commands (applicable to both managers) include:
- `install <package>` - adds the specified package and required dependencies.
- `remove <package>` - deletes the package and any unneeded dependencies.
- `update` - upgrades all installed packages to the latest versions available in configured repositories.
- `info <package>` - displays detailed information about a package, such as version, description, and source repository.
- `list installed` - enumerates all packages currently present on the system.
- `search <keyword>` - finds packages whose names or descriptions contain the given keyword.
Key differences:
- DNF uses a faster dependency resolver and provides a more stable API; YUM relies on an older resolver that may be slower on large repositories.
- DNF supports the `--best` and `--allowerasing` options for finer control over upgrade behavior; YUM offers comparable functionality through `--skip-broken`.
- Output formatting in DNF is more concise, while YUM includes additional progress details.
Best practices:
- Prefer DNF on systems released after Fedora 22 or on any distribution that lists DNF as the default manager.
- Use YUM on legacy platforms where DNF is unavailable or unsupported.
- Run `dnf clean all` or `yum clean all` periodically to purge cached metadata and prevent stale information from affecting transactions.
- Apply the `--refresh` flag (e.g., `dnf --refresh upgrade`) before large updates to ensure the latest repository data is used; on legacy YUM, which lacks this flag, `yum clean expire-cache` serves a similar purpose.
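The two maintenance practices above combine into a short routine; a sketch for DNF-based systems:

```bash
sudo dnf clean all           # purge cached metadata
sudo dnf --refresh upgrade   # fetch fresh metadata, then apply updates
```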
Advanced Concepts
1. Redirecting Input/Output
Redirecting input and output enables command-line tools to read from or write to files, pipelines, or other streams without manual interaction. By altering the default data flow, scripts become more flexible and automation‑friendly.
- `>` writes standard output to a file, overwriting existing content.
- `>>` appends standard output to a file, preserving existing data.
- `<` feeds a file's contents into a command as standard input.
- `<<` creates a here-document, supplying inline text as input.
- `2>` captures standard error in a file, separating it from normal output.
- `2>>` appends standard error to a file.
- `&>` directs both standard output and standard error to the same destination.
Practical examples illustrate typical usage. `grep "error" logfile.txt > errors.txt` stores matching lines, while `cat < config.cfg` feeds configuration data directly into a command. Combining operators, `command > out.txt 2> err.txt` isolates successful results from diagnostics. The `tee` utility duplicates output: `command | tee log.txt` writes to a file while preserving the pipeline.
Effective redirection relies on precise quoting to prevent word splitting and on explicit error handling. Redirecting to `/dev/null` discards unwanted streams, for instance `command 2>/dev/null`. When constructing complex pipelines, parentheses group redirections, ensuring the intended scope: `(cmd1 | cmd2) > combined.txt`. Mastery of these operators forms a foundational skill for reliable scripting and system administration.
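Of the operators listed above, the here-document is the least self-explanatory, so a minimal sketch may help; the file name is illustrative:

```bash
# everything between the EOF markers becomes cat's standard input
cat <<EOF > greeting.txt
Hello, $USER
EOF
```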
2. Piping Commands
Piping commands enable the seamless flow of output from one command directly into the input of another, eliminating the need for intermediate files. The most common operator, the vertical bar (`|`), connects two processes so that the left-hand command's standard output becomes the right-hand command's standard input. For example, `ls -l | grep "^d"` lists directories only.
Advanced piping utilities extend this concept:
- `tee` duplicates a stream, writing it to a file while still passing it along the pipeline. Example: `ps aux | tee processes.txt | grep ssh`.
- `xargs` builds and executes command lines from standard input, useful when the receiving command does not accept piped data. Example: `find . -name "*.log" | xargs rm -f`.
- `&` runs a command in the background, allowing the shell to continue without waiting for completion. Example: `tar -czf archive.tar.gz /data &`.
- Redirection operators (`>`, `>>`, `2>`) can be combined with pipes to separate standard output and error streams. Example: `make 2> errors.log | tee build.log`.
Effective use of these constructs streamlines data processing, reduces disk I/O, and fosters modular command composition. Mastery of piping forms a critical component of any comprehensive command‑line training curriculum.
3. Aliases and Custom Commands
Aliases compress frequently used command strings into short identifiers, reducing typing effort and minimizing error risk. They are defined by assigning a name to a command line, optionally including arguments. In most Unix-like shells, the syntax is `alias name='command'`. For example, `alias ll='ls -alF'` creates a shortcut that lists directory contents with detailed formatting. To make an alias permanent, place the definition in a startup file such as `~/.bashrc` or `~/.zshrc`; the shell reads the file on each session start.

Custom commands extend the concept by encapsulating multiple steps into a single executable script. Create a script file, add a shebang line (`#!/usr/bin/env bash` for Bash scripts), write the desired logic, and set executable permission with `chmod +x scriptname`. Store the script in a directory listed in the `PATH` environment variable, such as `~/bin`, to invoke it from any location without specifying a path.
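A minimal hypothetical script following this pattern, saved as `~/bin/backup` and made executable with `chmod +x ~/bin/backup`, after which `backup notes.txt` works from any directory:

```bash
#!/usr/bin/env bash
# backup: copy a file alongside itself with a timestamp suffix
set -eu
cp -p "$1" "$1.$(date +%Y%m%d%H%M%S).bak"
```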
Key practices for managing aliases and custom commands:
- Keep definitions concise; avoid overly long or ambiguous names.
- Document each entry with a comment describing purpose and parameters.
- Group related aliases in the same configuration file for easier maintenance.
- Use function definitions instead of simple aliases when parameter handling is required (see the sketch after this list).
- Regularly audit the collection to remove obsolete entries and prevent conflicts.
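Where arguments are needed, a shell function replaces an alias cleanly; a common illustrative example for `~/.bashrc`:

```bash
# mkcd: create a directory (including parents) and switch into it
mkcd() {
    mkdir -p "$1" && cd "$1"
}
```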
Version control systems support command shortcuts as well. Git, for instance, allows alias creation via `git config --global alias.st status`. This technique applies the same principles: short, descriptive names that map to longer commands, stored in the user's configuration file.
Effective use of aliases and custom commands streamlines workflow, enforces consistency, and accelerates task execution across development environments.