
How to Read File into Variable

• 1 min read
Tags: bash · file reading · variables · command substitution · input handling

Quick Answer: Read File into Variable in Bash

To read an entire file into a variable, use content=$(cat filename.txt) or, more efficiently, content=$(<filename.txt), which avoids forking an external cat process. Note that command substitution strips any trailing newlines. Always quote the variable when you expand it (echo "$content") to preserve the interior newlines.
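The quoting point, and the fact that command substitution strips the trailing newline, can be seen with a throwaway file (the /tmp path is just for illustration):

```shell
# Create a small sample file
printf 'Line 1\nLine 2\nLine 3\n' > /tmp/sample.txt

# Command substitution strips the trailing newline
content=$(</tmp/sample.txt)
echo "${#content}"        # 20 characters, not 21

# Quoted: interior newlines survive
echo "$content"

# Unquoted: word splitting collapses newlines into spaces
echo $content             # Line 1 Line 2 Line 3
```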

Quick Comparison: File Reading Methods

Method         Speed     Best For           Memory   Newlines
$(< file)      Fastest   Whole-file reads   Good     Preserved
$(cat file)    Fast      Small files        Good     Preserved
read -r line   Medium    Line-by-line       Low      Parsed
mapfile        Fast      Array storage      Good     Preserved

Bottom line: Use $(< file) for speed; use read for memory efficiency with large files.


Bash offers several ways to read file contents into variables and to process parts of a file. Whether you need the entire file in memory, line-by-line processing, or access to specific sections, choosing the right reading method is essential.

Method 1: Read Entire File into Variable

Use command substitution to read the complete file content.

# Read entire file using command substitution
content=$(cat file.txt)

# Or more efficiently without cat
content=$(<file.txt)

# With error handling
if [ -f "file.txt" ]; then
  content=$(<file.txt)
  echo "File read successfully"
else
  echo "File not found"
  exit 1
fi

Example:

$ cat myfile.txt
Line 1
Line 2
Line 3

$ content=$(<myfile.txt)
$ echo "$content"
Line 1
Line 2
Line 3

$ echo "Length: ${#content}"
Length: 20

Method 2: Read Line by Line

For large files or when you need to process one line at a time without loading everything into memory.

# Read line by line
while IFS= read -r line; do
  echo "Processing: $line"
done < file.txt

# Or using read with specific variables
while IFS=',' read -r id name email; do
  echo "User: $name"
done < data.csv

Example:

#!/bin/bash

while IFS= read -r line; do
  echo "Length: ${#line} - Content: $line"
done < myfile.txt

# Output:
# Length: 6 - Content: Line 1
# Length: 6 - Content: Line 2
# Length: 6 - Content: Line 3

Method 3: Read Specific Number of Lines

Extract a range of lines from a file.

# Read first N lines
head -n 5 file.txt

# Read last N lines
tail -n 5 file.txt

# Read lines between start and end
sed -n '5,10p' file.txt

# Read lines into variable
first_lines=$(head -n 5 file.txt)

Example:

# File with 10 lines
$ head -n 3 myfile.txt
Line 1
Line 2
Line 3

$ tail -n 2 myfile.txt
Line 9
Line 10

$ sed -n '3,5p' myfile.txt
Line 3
Line 4
Line 5

Method 4: Read Until Marker or Condition

Read file content up to a specific marker or condition.

# Read until empty line
while IFS= read -r line; do
  [ -z "$line" ] && break
  echo "$line"
done < file.txt

# Read until specific pattern
while IFS= read -r line; do
  [ "$line" = "END" ] && break
  echo "$line"
done < file.txt

# Skip until marker, then read
skip=true
while IFS= read -r line; do
  [ "$line" = "START" ] && skip=false && continue
  [ "$skip" = true ] && continue
  echo "$line"
done < file.txt
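The skip-until-marker logic can be tried without a separate file by feeding a here-document into the loop (markers and sample lines are illustrative):

```shell
# Collect only the lines between START and END markers
result=""
skip=true
while IFS= read -r line; do
  [ "$line" = "START" ] && skip=false && continue
  [ "$line" = "END" ] && break
  [ "$skip" = true ] && continue
  result+="$line "
done <<'EOF'
header
START
alpha
beta
END
footer
EOF

echo "$result"   # only alpha and beta, the lines between the markers
```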

Method 5: Read into Array

Store each line as an array element for indexed access.

# Read entire file into array
mapfile -t lines < file.txt

# Or using loop
declare -a lines
while IFS= read -r line; do
  lines+=("$line")
done < file.txt

# Access lines
echo "${lines[0]}"      # First line
echo "${lines[-1]}"     # Last line
echo "${#lines[@]}"     # Total lines

Example:

$ mapfile -t lines < myfile.txt
$ echo "Total lines: ${#lines[@]}"
Total lines: 3

$ echo "First line: ${lines[0]}"
First line: Line 1

$ echo "Last line: ${lines[-1]}"
Last line: Line 3

Method 6: Read Specific Parts

Extract specific fields or columns from file content.

# Read and extract fields from CSV
while IFS=',' read -r id name age; do
  echo "ID: $id, Name: $name"
done < data.csv

# Read and extract using awk
awk '{print $2}' file.txt

# Extract specific columns
cut -d',' -f1,3 data.csv
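A quick end-to-end check of the field-extraction commands, using a tiny hypothetical CSV:

```shell
# Hypothetical CSV for illustration
printf 'id,name,age\n1,Alice,30\n2,Bob,25\n' > /tmp/data.csv

# Column 2 with cut, skipping the header line via tail
names=$(tail -n +2 /tmp/data.csv | cut -d',' -f2)
echo "$names"

# Same idea with awk; -F sets the field separator, NR > 1 skips the header
ages=$(awk -F',' 'NR > 1 {print $3}' /tmp/data.csv)
echo "$ages"
```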

Method 7: Read and Transform Content

Read file and modify content while processing.

# Read and convert to uppercase
while IFS= read -r line; do
  echo "${line^^}"  # Bash 4+
done < file.txt

# Read and replace patterns
while IFS= read -r line; do
  echo "${line/old/new}"
done < file.txt

# Read, filter, and transform
while IFS= read -r line; do
  [ -z "$line" ] && continue  # Skip empty
  echo ">> $line"
done < file.txt
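The uppercase transform can be verified with a two-line sample file (path and contents are illustrative):

```shell
printf 'hello\nworld\n' > /tmp/words.txt

out=""
while IFS= read -r line; do
  out+="${line^^} "          # uppercase each line (Bash 4+)
done < /tmp/words.txt

echo "$out"
```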

Practical Examples

Example 1: Read and Count File Statistics

#!/bin/bash

file="$1"

if [ ! -f "$file" ]; then
  echo "File not found"
  exit 1
fi

# Read file content
content=$(<"$file")

# Calculate statistics
total_lines=$(echo "$content" | wc -l)
total_chars=${#content}
total_words=$(echo "$content" | wc -w)

echo "File Statistics for: $file"
echo "Lines: $total_lines"
echo "Words: $total_words"
echo "Characters: $total_chars"

Output:

File Statistics for: myfile.txt
Lines: 10
Words: 42
Characters: 256

Example 2: Read CSV and Process Specific Columns

#!/bin/bash

csv_file="$1"

echo "Processing CSV file: $csv_file"
echo "===================="

while IFS=',' read -r id name email age; do
  # Skip header
  [ "$id" = "id" ] && continue

  # Skip invalid entries
  { [ -z "$id" ] || [ -z "$name" ]; } && continue

  # Process and display
  printf "ID: %2d | Name: %-15s | Age: %2d\n" "$id" "$name" "$age"
done < "$csv_file"

Input:

id,name,email,age
1,John Smith,john@example.com,30
2,Jane Doe,jane@example.com,25
3,Bob Johnson,bob@example.com,35

Output:

Processing CSV file: data.csv
====================
ID:  1 | Name: John Smith      | Age: 30
ID:  2 | Name: Jane Doe        | Age: 25
ID:  3 | Name: Bob Johnson     | Age: 35

Example 3: Read Configuration File

#!/bin/bash

# Read configuration file
config_file="$1"

# Declare associative array for config
declare -A config

while IFS='=' read -r key value; do
  # Skip comments and empty lines
  [[ "$key" =~ ^#.* ]] && continue
  [ -z "$key" ] && continue

  # Trim whitespace
  key=$(echo "$key" | xargs)
  value=$(echo "$value" | xargs)

  # Store in array
  config["$key"]="$value"
done < "$config_file"

# Access configuration
echo "Database Host: ${config[db_host]}"
echo "Database Port: ${config[db_port]}"
echo "Database Name: ${config[db_name]}"

Input (config.conf):

# Database Configuration
db_host=localhost
db_port=5432
db_name=myapp_db

Output:

Database Host: localhost
Database Port: 5432
Database Name: myapp_db

Example 4: Read File Sections

#!/bin/bash

# Read file in sections separated by blank lines
file="$1"
section=0
content=""

while IFS= read -r line; do
  if [ -z "$line" ]; then
    # End of section
    if [ -n "$content" ]; then
      ((section++))
      echo "=== Section $section ==="
      echo "$content"
      echo ""
      content=""
    fi
  else
    # Add line to current section
    content+="$line"$'\n'
  fi
done < "$file"

# Don't forget last section
if [ -n "$content" ]; then
  ((section++))
  echo "=== Section $section ==="
  echo "$content"
fi
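The splitting logic can be checked against a small two-section file; this compact version only counts sections (file path and contents are illustrative):

```shell
# Two sections separated by one blank line
printf 'a\nb\n\nc\n' > /tmp/sections.txt

section=0
content=""
while IFS= read -r line; do
  if [ -z "$line" ]; then
    if [ -n "$content" ]; then
      section=$((section + 1))   # close out the finished section
      content=""
    fi
  else
    content+="$line"$'\n'
  fi
done < /tmp/sections.txt

# The last section has no trailing blank line
[ -n "$content" ] && section=$((section + 1))

echo "Sections: $section"   # Sections: 2
```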

Example 5: Read and Merge File Parts

#!/bin/bash

# Merge multiple input files
output_file="$1"
shift  # Remove first argument

echo "Merging files..."

# Read and concatenate all files
while [ $# -gt 0 ]; do
  if [ -f "$1" ]; then
    # || catches a last line with no trailing newline
    while IFS= read -r line || [ -n "$line" ]; do
      echo "$line"
    done < "$1"
  else
    echo "Warning: File not found - $1" >&2
  fi
  shift
done > "$output_file"

echo "Files merged into: $output_file"

Usage:

bash script.sh output.txt file1.txt file2.txt file3.txt

Example 6: Function to Read File Parts

#!/bin/bash

# Function to read specific range of lines
read_lines() {
  local file="$1"
  local start="${2:-1}"
  local end="${3:-$(wc -l < "$file")}"

  if [ ! -f "$file" ]; then
    echo "Error: File not found"
    return 1
  fi

  sed -n "${start},${end}p" "$file"
}

# Function to read first N lines
read_first() {
  local file="$1"
  local count="${2:-5}"

  head -n "$count" "$file"
}

# Function to read last N lines
read_last() {
  local file="$1"
  local count="${2:-5}"

  tail -n "$count" "$file"
}

# Usage
read_lines "myfile.txt" 5 10
read_first "myfile.txt" 3
read_last "myfile.txt" 2

Example 7: Read and Process Large File

#!/bin/bash

# Process large file line-by-line (memory efficient)
large_file="$1"

# Counter for progress
count=0

while IFS= read -r line; do
  ((count++))

  # Process line
  if [ $((count % 1000)) -eq 0 ]; then
    echo "Processed $count lines..."
  fi

  # Do actual processing
  length=${#line}
  if [ "$length" -gt 100 ]; then
    echo "Long line at $count: ${line:0:50}..."
  fi

done < "$large_file"

echo "Total lines processed: $count"

Performance Comparison

For reading file content:

Method            Speed      Memory   Best For
Read entire file  Fast       High     Small files
Line by line      Slower     Low      Large files
Using mapfile     Fast       Medium   Indexed access
head/tail         Very fast  Low      Specific sections

Best choice: Line-by-line for large files, entire file for small ones.
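One way to sanity-check the comparison above on your own machine is to time the two whole-file reads; exact timings vary by system, but both should yield identical content (file path and size are illustrative):

```shell
# Build a moderately sized file to compare against
seq 1 10000 > /tmp/big.txt

# time both whole-file reads; $(< file) skips the cat fork
time content1=$(cat /tmp/big.txt)
time content2=$(</tmp/big.txt)

# Both methods read identical bytes
[ "$content1" = "$content2" ] && echo "identical content"
```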

Important Considerations

File Size Matters

Be aware of memory usage:

# For small files, reading entirely is fine
small_file=$(<small.txt)

# For large files, use line-by-line processing
while IFS= read -r line; do
  process "$line"
done < large_file.txt

Error Handling

Always check file existence:

if [ ! -f "$file" ]; then
  echo "Error: File not found"
  exit 1
fi

content=$(<"$file") || {
  echo "Error reading file"
  exit 1
}

Preserving Whitespace

Use IFS= with read -r to preserve whitespace:

# Preserves whitespace
while IFS= read -r line; do
  echo ">>$line<<"
done < file.txt

# Trims whitespace
while read line; do
  echo ">>$line<<"
done < file.txt
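The difference is easy to demonstrate with a single indented line (path is illustrative):

```shell
printf '   indented\n' > /tmp/ws.txt

IFS= read -r keep < /tmp/ws.txt   # leading spaces kept
read -r trim < /tmp/ws.txt        # default IFS strips them

echo ">>$keep<<"   # >>   indented<<
echo ">>$trim<<"   # >>indented<<
```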

Key Points

  • Use $(<file) to read entire files efficiently
  • Use while read -r line for line-by-line processing
  • Use mapfile for array storage of lines
  • Choose method based on file size and processing needs
  • Always check file existence before reading
  • Use IFS= with read -r to preserve formatting
  • For large files, process line-by-line to save memory
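One gotcha worth keeping in mind alongside these points: read returns a nonzero status on a final line that lacks a trailing newline, so a plain while read loop silently drops it. The common idiom || [ -n "$line" ] catches it:

```shell
# Last line deliberately has no trailing newline
printf 'one\ntwo' > /tmp/nonl.txt

plain=0
while IFS= read -r line; do
  plain=$((plain + 1))
done < /tmp/nonl.txt

fixed=0
while IFS= read -r line || [ -n "$line" ]; do
  fixed=$((fixed + 1))
done < /tmp/nonl.txt

echo "$plain vs $fixed"   # 1 vs 2
```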

Quick Reference

# Read entire file
content=$(<file.txt)

# Read line by line
while IFS= read -r line; do
  echo "$line"
done < file.txt

# Read into array
mapfile -t lines < file.txt

# Read first N lines
head -n 5 file.txt

# Read last N lines
tail -n 5 file.txt

# Read lines between
sed -n '5,10p' file.txt

# Read with field splitting
while IFS=',' read -r f1 f2 f3; do
  echo "$f1"
done < file.csv
Complete Example

#!/bin/bash

file="$1"

# For small files: entire content
if [ -s "$file" ]; then
  content=$(<"$file")
  echo "File size: ${#content} chars"
fi

# For processing: line by line
while IFS= read -r line; do
  # Process $line
  echo "Processing: $line"
done < "$file"

# For array storage
mapfile -t lines < "$file"
echo "Total lines: ${#lines[@]}"