How to Loop Through Command Output

• 4 min read
bash

Quick Answer: Loop Through Command Output in Bash

Use command | while IFS= read -r line; do ... done to process each line of a command's output. This preserves whitespace and handles arbitrarily long lines. For arrays, use mapfile -t array < <(command), then loop over the array. Avoid for item in $(command) because it word-splits on whitespace.

Quick Comparison: Command Output Loop Methods

Method         Syntax                  Whitespace   Performance   Best For
while read     cmd | while read        Preserved    Fast          Line-by-line processing
for with $()   for item in $(cmd)      Lost         Slower        Simple lists, no spaces
mapfile        mapfile -t arr          Preserved    Fast          Array processing
Process subst  while read < <(cmd)     Preserved    Fast          Variables persist

Bottom line: Use while read for most cases. Use process substitution < <(cmd) if you need variables to persist after the loop.


Looping Through Command Output

Processing the output of commands is one of the most common scripting tasks. Whether you’re parsing log files, processing lists of files, or extracting data, Bash gives you several clean ways to loop through command results.

Method 1: Basic Command Output Loop

The simplest approach relies on word splitting:

#!/bin/bash

# Loop through each whitespace-separated word of the command output.
# Caution: this splits on spaces, tabs, and newlines, so it is only
# safe when no item contains whitespace.
for item in $(ls /tmp); do
  echo "File: $item"
done

When to Use Basic Command Output Loop

  • Command output is space-separated words
  • Filenames don’t have spaces
  • You want simple, readable code
  • You’re processing lists of items

Method 2: Handling Multiline Output

Use while read for each line:

#!/bin/bash

# Get command output and loop through each line
ps aux | while IFS= read -r line; do
  echo "Process: $line"
done

This is better because it preserves whitespace and handles long lines correctly.

When to Use while read

  • You’re processing log files
  • Lines contain multiple spaces or tabs
  • You need reliable whitespace preservation
  • You’re parsing structured text

Practical Examples

Process Files

#!/bin/bash

# Loop through files found by find.
# Caution: this word-splits, so it only works when no path contains
# spaces - see "Handling Special Characters" below for a robust version.
for file in $(find . -name "*.txt"); do
  echo "Processing: $file"
  wc -l "$file"
done

When to Use Process Files Pattern

  • You’re finding and processing multiple files
  • You want to apply the same operation to each file
  • You need relative or full file paths
  • You’re batch processing documents
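
If the files live under the current tree, a glob with globstar (bash 4+) avoids parsing command output entirely and is safe with spaces in names. A minimal sketch, using a temporary demo tree so it is self-contained:

```shell
#!/bin/bash

# Globbing recurses without parsing command output, so filenames with
# spaces arrive intact. A demo tree is created in a temporary directory.
shopt -s globstar nullglob   # ** recurses (bash 4+); nullglob skips empty matches

demo=$(mktemp -d)
mkdir -p "$demo/docs"
printf 'one\n' > "$demo/notes.txt"
printf 'two\n' > "$demo/docs/report with spaces.txt"

count=0
for file in "$demo"/**/*.txt; do
  count=$((count + 1))
  echo "Processing: $file"
done

echo "Matched $count files"
rm -rf "$demo"
```

nullglob matters here: without it, an unmatched pattern would be passed to the loop as a literal string.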

Parse CSV Data

#!/bin/bash

# Read CSV and process each line
csv_file="data.csv"

while IFS=',' read -r name age email; do
  echo "Name: $name, Age: $age, Email: $email"
done < "$csv_file"

When to Use CSV Parsing

  • You’re processing structured data
  • You have delimited fields
  • You need to extract specific columns
  • You’re importing/exporting data

Looping Through grep Results

#!/bin/bash

# Find lines matching a pattern and process each
grep "ERROR" /var/log/syslog | while read -r line; do
  echo "Error found: $line"
done

When to Use grep Looping

  • You’re filtering log files
  • You need to find matching lines
  • You’re searching for patterns
  • You want to process matches

Processing Command Output with Index

Combine command output looping with counters:

#!/bin/bash

count=0
find . -name "*.log" | while read -r logfile; do
  ((count++))
  echo "[$count] Processing: $logfile"
done

When to Use Indexed Output Looping

  • You need line numbers
  • You want progress tracking
  • You’re showing position information
  • You need first/last item handling
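
For first/last handling you need the total count before the loop starts, so one approach is to collect the output into an array first. A sketch, with printf standing in for a real command:

```shell
#!/bin/bash

# Collect output into an array so the total is known up front,
# then mark the first and last items. printf stands in for a real command.
mapfile -t items < <(printf 'alpha\nbeta\ngamma\n')

last=$(( ${#items[@]} - 1 ))
for i in "${!items[@]}"; do
  marker=""
  [ "$i" -eq 0 ] && marker=" (first)"
  [ "$i" -eq "$last" ] && marker=" (last)"
  echo "[$((i + 1))] ${items[$i]}$marker"
done
```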

Backup Directories

#!/bin/bash

# Get the list of directories and back up each one
mkdir -p backups

du -sh */ | while read -r size dir; do
  echo "Backing up $dir (Size: $size)"
  tar czf "backups/${dir%/}.tar.gz" "$dir"
done

When to Use Backup Pattern

  • You’re backing up multiple directories
  • You want to track sizes or counts
  • You’re archiving groups of files
  • You need organized output

Piping to while read

A common pitfall: when you pipe into while read, the loop body runs in a subshell, so variables modified inside it are lost after the loop:

#!/bin/bash

total=0
count=0

# Count file sizes (broken: the piped loop runs in a subshell)
find . -type f | while read -r file; do
  size=$(stat -c%s "$file")   # GNU stat; on macOS/BSD use stat -f%z
  total=$((total + size))
  ((count++))
done

echo "Files: $count, Total: $total bytes"  # Prints 0s - the updates were lost

Note: Variables modified inside the while loop are local to the subshell. Use process substitution instead:

#!/bin/bash

total=0
count=0

# Better approach - no subshell
while read -r file; do
  size=$(stat -c%s "$file")
  total=$((total + size))
  ((count++))
done < <(find . -type f)

echo "Files: $count, Total: $total bytes"

When to Use Variable Persistence

  • You’re accumulating totals or counts
  • Variables need to exist after the loop
  • You’re avoiding subshell scope issues
  • You want process substitution approach
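
Another option for keeping variables alive is shopt -s lastpipe (bash 4.2+), which runs the final command of a pipeline in the current shell. It only takes effect when job control is off, which is the default in scripts. A minimal sketch:

```shell
#!/bin/bash

# lastpipe runs the last command of a pipeline in the current shell,
# so the while loop's variables survive. Requires bash 4.2+ and job
# control off (the default for non-interactive scripts).
shopt -s lastpipe

total=0
printf '3\n4\n5\n' | while read -r n; do
  total=$((total + n))
done

echo "Total: $total"
```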

Processing with mapfile

Store command output in an array:

#!/bin/bash

# Get list of processes into array
mapfile -t processes < <(ps aux | tail -n +2 | awk '{print $11}')

# Loop through array
for process in "${processes[@]}"; do
  echo "Process: $process"
done

When to Use mapfile

  • You want array-based processing
  • You need multiple passes through data
  • You want random access to items
  • You’re building arrays from commands
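
Once the output is in an array, you get counting, direct indexing, slicing, and as many passes as you like. A short sketch, with printf standing in for a real command:

```shell
#!/bin/bash

# An array allows counting, direct indexing, slicing, and repeated passes.
# printf stands in for a real command here.
mapfile -t lines < <(printf 'first\nsecond\nthird\n')

echo "Count: ${#lines[@]}"
echo "First: ${lines[0]}"
echo "Last:  ${lines[-1]}"        # negative indexing needs bash 4.3+
echo "Slice: ${lines[@]:1:2}"     # second and third items
```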

Handling Special Characters

Be careful with spaces and special characters:

#!/bin/bash

# Dangerous - breaks on spaces
for file in $(ls); do
  echo "$file"  # May break if filenames have spaces
done

# Better - handles spaces and tabs
while IFS= read -r file; do
  echo "$file"  # Still breaks if a filename contains a newline
done < <(ls)

# Best - find with NUL-delimited output handles any filename
find . -maxdepth 1 -type f -print0 | while IFS= read -r -d '' file; do
  echo "$file"  # Handles spaces, newlines, and other special characters
done

When to Use Special Character Handling

  • Filenames might have spaces/special chars
  • You’re processing untrusted input
  • You need robust filename handling
  • You’re finding files from find command
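
On bash 4.4+, mapfile can read NUL-delimited records directly, pairing with find -print0 to collect arbitrary filenames into an array. A sketch using a temporary demo directory:

```shell
#!/bin/bash

# bash 4.4+: mapfile -d '' reads NUL-delimited records, pairing with
# find -print0 to capture any filename safely - even with newlines.
demo=$(mktemp -d)
touch "$demo/plain.txt" "$demo/with space.txt"

mapfile -t -d '' files < <(find "$demo" -type f -print0)

for file in "${files[@]}"; do
  echo "Found: $file"
done
echo "Total: ${#files[@]}"
rm -rf "$demo"
```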

Log Analysis

#!/bin/bash

# Analyze web server logs
logfile="/var/log/apache2/access.log"
count=0
errors=0

while IFS= read -r line; do
  ((count++))

  # Count error responses (5xx)
  if echo "$line" | grep -q " 5[0-9][0-9] "; then
    ((errors++))
  fi
done < "$logfile"

echo "Total requests: $count"
echo "Server errors: $errors"

When to Use Log Analysis Pattern

  • You’re analyzing access/error logs
  • You need to count or filter events
  • You’re monitoring for errors
  • You’re generating reports from logs
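
Piping each line through echo | grep forks two processes per line, which adds up on large logs. Bash's built-in regex match does the same test with no forks. A variant of the analysis above, with a made-up sample log generated inline:

```shell
#!/bin/bash

# Bash's [[ ... =~ ... ]] matches in-process, avoiding the per-line
# echo | grep forks. The log contents here are a made-up sample.
logfile=$(mktemp)
printf 'GET /a 200 -\nGET /b 503 -\nGET /c 500 -\n' > "$logfile"

count=0
errors=0
re=' 5[0-9][0-9] '

while IFS= read -r line; do
  count=$((count + 1))
  if [[ $line =~ $re ]]; then
    errors=$((errors + 1))
  fi
done < "$logfile"

echo "Total requests: $count"
echo "Server errors: $errors"
rm -f "$logfile"
```

Keeping the regex in a variable sidesteps quoting pitfalls inside [[ ]].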

Combining Multiple Commands

#!/bin/bash

# Show the five processes using the most CPU
ps aux | sort -k3 -rn | head -5 | while read -r line; do
  echo "High CPU: $line"
done

When to Use Command Chaining

  • You need to filter and process output
  • You want to use pipes and sorting
  • You’re combining multiple filters
  • You’re selecting top/bottom results

Performance Considerations

#!/bin/bash

file_list="/tmp/files.txt"

# Slow - reads the whole file at once and word-splits it,
# losing line boundaries in the process
for line in $(cat "$file_list"); do
  : # Process...
done

# Fast - reads directly from the file, one line at a time
while IFS= read -r line; do
  : # Process...
done < "$file_list"

# Use process substitution to avoid subshells
while IFS= read -r line; do
  : # Process...
done < <(command)

Quick Reference

# Basic loop through command output (word-splits; items must not contain spaces)
for item in $(command); do
  echo "$item"
done

# Better - handles whitespace, and variables persist after the loop
while IFS= read -r line; do
  echo "$line"
done < <(command)

# Process CSV
while IFS=',' read -r field1 field2 field3; do
  echo "$field1, $field2, $field3"
done < file.csv

# Find and process files
find . -name "*.txt" | while read -r file; do
  echo "$file"
done

Summary

Looping through command output is fundamental to Bash scripting. Use while read with proper quoting for robust, reliable processing of command output, logs, and data files. Choose while read for line-by-line processing with whitespace preservation, use process substitution < <(cmd) if variables must persist after the loop, and prefer mapfile when you need array-based access to multiple items.