How to Count Occurrences in Bash
Quick Answer: Count Pattern Matches in Bash
To count matching lines, use grep -c "pattern" file.txt. To count individual pattern occurrences (a line can match more than once), use grep -o "pattern" file.txt | wc -l. For strings in variables, remove every match with ${variable//pattern/} and divide the length difference by the pattern length.
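The variable trick works by measuring how much shorter the string gets once every match is deleted; a minimal sketch (the sample text and pattern are illustrative):

```shell
#!/bin/bash
# Count occurrences of a pattern inside a variable: delete every
# match, then divide the length difference by the pattern length.
text="ab cd ab ef ab"
pattern="ab"
stripped="${text//$pattern/}"   # every "ab" removed
count=$(( (${#text} - ${#stripped}) / ${#pattern} ))
echo "$count"                   # prints 3
```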
Quick Comparison: Pattern Counting Methods
| Method | Syntax | Count | Best For | Speed |
|---|---|---|---|---|
| grep -c | grep -c pattern file | Lines | Files | Fast |
| grep -o + wc | grep -o pattern file \| wc -l | Occurrences | Individual matches | Medium |
| awk | awk '/pattern/ {count++} END {print count}' file | Custom | Complex logic | Medium |
Bottom line: Use grep -c for line counts; use grep -o | wc -l for pattern occurrences.
Count occurrences of patterns in files and strings. Learn different methods using grep, awk, sed, and bash built-ins.
Method 1: Count Matching Lines with grep (Simplest)
The simplest way to count lines matching a pattern:
grep -c "pattern" file.txt
# Example: Count errors in log file
grep -c "ERROR" app.log
# Output: 15
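One subtlety worth knowing: grep -c still prints 0 when nothing matches, but it exits with status 1, so capture the printed count rather than testing the exit code (a quick demonstration with inline sample input):

```shell
# grep -c prints a count even when it is zero, but the exit
# status is 1 on no match -- capture the output, not the status.
count=$(printf 'alpha\nbeta\n' | grep -c "gamma")
status=$?
echo "count=$count status=$status"   # count=0 status=1
```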
Grep Count Examples
Test file (access.log):
[2026-02-21] User login successful
[2026-02-21] User login failed
[2026-02-21] User login successful
[2026-02-21] User login failed
[2026-02-21] User login failed
# Count successful logins
grep -c "successful" access.log
# Count failed logins
grep -c "failed" access.log
# Count total 2026-02-21 entries
grep -c "2026-02-21" access.log
# Case-insensitive count
grep -ci "error" app.log
Output (first three commands):
2
3
5
Count Pattern Occurrences (not lines)
When you need to count individual matches, not just matching lines:
# Count total word occurrences
grep -o "error" file.txt | wc -l
# Count pattern matches that may appear multiple times per line
echo "apple apple banana" | grep -o "apple" | wc -l
# Output: 2
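An alternative that avoids the pipe to wc: awk's gsub returns how many substitutions it made on each line, so summing its return values counts every occurrence in a single pass (a sketch with inline input):

```shell
# Count all "apple" occurrences, including repeats on one line,
# by summing gsub's per-line substitution counts.
printf 'apple pie\napple apple\n' |
  awk '{ n += gsub(/apple/, "") } END { print n }'   # prints 3
```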
Using awk for Counting
Awk can count based on conditions:
# Count lines where field exceeds value
awk '$2 > 100' data.txt | wc -l
# Count matching pattern (+0 prints 0 instead of a blank line when nothing matches)
awk '/ERROR/ {count++} END {print count+0}' logfile.txt
# Count with multiple conditions
awk '$2 > 100 && $3 < 50 {count++} END {print count}' data.txt
Practical awk Examples
Test file (sales.txt):
John 150 2
Jane 200 1
Bob 75 3
Alice 180 2
# Count sales over 100
awk '$2 > 100 {count++} END {print count}' sales.txt
# Count by condition with message
awk '$2 > 100 {high++} $2 <= 100 {low++} END {print "High:", high, "Low:", low}' sales.txt
Output:
3
High: 3 Low: 1
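Beyond fixed conditions, awk can tally per-key counts with an associative array; this sketch counts how many sales rows share each value of the third field (the data is inlined from sales.txt above):

```shell
# Group-by count: one counter per distinct value of field 3
printf 'John 150 2\nJane 200 1\nBob 75 3\nAlice 180 2\n' |
  awk '{ n[$3]++ } END { for (k in n) print k, n[k] }' | sort
```

With the sample rows this prints one line per key: 1 1, 2 2, 3 1.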
Count Occurrences in a String
#!/bin/bash
text="apple apple banana apple cherry"
pattern="apple"
# Method 1: Using grep -o
count=$(echo "$text" | grep -o "$pattern" | wc -l)
echo "Count: $count"
# Method 2: Using sed (count = characters removed / pattern length)
stripped=$(echo "$text" | sed "s/$pattern//g")
count=$(( (${#text} - ${#stripped}) / ${#pattern} ))
echo "Sed count: $count"
# Method 3: Pure bash (no external tools)
count=0
remaining="$text"
while [[ "$remaining" == *"$pattern"* ]]; do
((count++))
remaining="${remaining#*$pattern}"
done
echo "Bash count: $count"
Output:
Count: 3
Sed count: 3
Bash count: 3
Practical Example: Traffic Analysis
#!/bin/bash
# File: analyze_traffic.sh
logfile="$1"
if [ ! -f "$logfile" ]; then
echo "Usage: $0 <logfile>"
exit 1
fi
echo "=== Traffic Analysis ==="
echo ""
# Count different HTTP status codes
echo "HTTP Status Codes:"
grep -o "HTTP/1\.1 [0-9]*" "$logfile" | cut -d' ' -f2 | sort | uniq -c
echo ""
echo "Request Types:"
grep -oE "GET|POST|PUT|DELETE" "$logfile" | sort | uniq -c
echo ""
echo "Total Requests:"
wc -l < "$logfile"
echo ""
echo "Unique IPs:"
grep -o "^[0-9.]*" "$logfile" | sort -u | wc -l
Usage with sample log:
192.168.1.1 GET /index.html HTTP/1.1 200
192.168.1.2 POST /api/users HTTP/1.1 201
192.168.1.1 GET /index.html HTTP/1.1 200
192.168.1.3 GET /about.html HTTP/1.1 404
Output:
=== Traffic Analysis ===
HTTP Status Codes:
2 200
1 201
1 404
Request Types:
3 GET
1 POST
Total Requests:
4
Unique IPs:
3
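The same sort | uniq -c idiom extends to a "top talkers" report; this sketch counts requests per client IP on inline sample lines (the IPs are illustrative, field layout as in the log above):

```shell
# Requests per client IP, busiest first
printf '192.168.1.1 GET /\n192.168.1.2 GET /\n192.168.1.1 POST /\n' |
  awk '{ print $1 }' | sort | uniq -c | sort -rn | head -5
```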
Count Matches in Multiple Files
#!/bin/bash
pattern="$1"
search_dir="."
if [ -z "$pattern" ]; then
echo "Usage: $0 <pattern> [directory]"
exit 1
fi
[ -n "$2" ] && search_dir="$2"
echo "Searching for: $pattern"
echo "Directory: $search_dir"
echo ""
total=0
while IFS= read -r file; do
count=$(grep -c "$pattern" "$file" 2>/dev/null)
count=${count:-0}
if [ "$count" -gt 0 ]; then
printf '%4d : %s\n' "$count" "$file"
total=$((total + count))
fi
done < <(find "$search_dir" -type f \( -name "*.txt" -o -name "*.log" \))
echo ""
echo "Total matches: $total"
Usage:
$ chmod +x count_matches.sh
$ ./count_matches.sh "ERROR" /var/log
Output:
Searching for: ERROR
Directory: /var/log
15 : /var/log/app.log
8 : /var/log/error.log
23 : /var/log/system.log
Total matches: 46
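grep itself can produce per-file counts recursively with -rc, and awk can sum them; a self-contained sketch using a throwaway directory (the paths and contents are illustrative):

```shell
#!/bin/bash
# grep -rc prints "path:count" per file; awk sums the counts.
dir=$(mktemp -d)
printf 'ERROR one\nok\nERROR two\n' > "$dir/a.log"
printf 'ERROR three\n' > "$dir/b.log"
grep -rc "ERROR" "$dir" |
  awk -F: '{ total += $NF } END { print total+0 }'   # prints 3
rm -rf "$dir"
```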
Count with Context
# -c suppresses normal output, so context flags like -B/-A are ignored
# Show matches with context, then count them separately:
grep -B2 -A2 "ERROR" app.log
grep -c "ERROR" app.log
# Count unique matches
grep -o "pattern" file.txt | sort -u | wc -l
# Count with weight
awk '/ERROR/ {count += 2} /WARNING/ {count += 1} END {print count}' log.txt
Using wc for Line Counting
# Count total lines
wc -l file.txt
# Count words
wc -w file.txt
# Count bytes
wc -c file.txt
# Pipeline counting
grep "pattern" file.txt | wc -l
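Bear in mind that wc -l counts newline characters, not lines of text, so a final line without a trailing newline is not counted (a quick check with printf):

```shell
# Three lines of text, but only two newline characters
printf 'one\ntwo\nthree' | wc -l   # prints 2
```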
Combine Multiple Patterns
#!/bin/bash
file="$1"
echo "Error count: $(grep -c "ERROR" "$file")"
echo "Warning count: $(grep -c "WARNING" "$file")"
echo "Info count: $(grep -c "INFO" "$file")"
echo "Debug count: $(grep -c "DEBUG" "$file")"
echo ""
# A line matching more than one pattern is counted once
echo "Total issues: $(grep -c -E "ERROR|WARNING|DEBUG" "$file")"
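Instead of one grep invocation per level, you can extract the level tokens once and let uniq -c do the tallying; a sketch on inline sample lines (the log messages are illustrative):

```shell
# One pass: pull out every level token, then count duplicates
printf 'INFO start\nERROR boom\nERROR again\nWARNING hmm\n' |
  grep -oE "ERROR|WARNING|INFO|DEBUG" | sort | uniq -c | sort -rn
```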
Performance Tips
For large files, avoid unnecessary extra processes:
# grep -c is faster than piping matches to wc -l
grep -c "pattern" file.txt
# Count several patterns in one pass instead of running grep once per pattern
awk '/pat1/ {c1++} /pat2/ {c2++} /pat3/ {c3++} END {print c1+0, c2+0, c3+0}' file.txt
Common Mistakes
- Confusing line count with occurrence count - a line can have multiple matches
- Forgetting quotes around patterns - spaces break matching
- Not using -c efficiently - it's much faster than piping to wc
- Case sensitivity issues - use -i for case-insensitive matching
- Regex special characters - escape them: \. for dot, \* for asterisk
Key Points
- Use grep -c for counting matching lines (fastest)
- Use grep -o | wc -l for counting individual occurrences
- Use awk for conditional counting on fields
- Always quote your patterns
- Test with sample data first
Summary
Counting matches is crucial for log analysis and monitoring. Use grep -c for simplicity, awk for complex conditions, and combine tools for powerful analysis. Always verify counts with sample data.