AWK Commands
AWK pattern scanning and text processing commands.
Basic Output
awk '{print}' file.txt  # Print all lines from the file
awk '{print $0}' file.txt  # Print the entire line (same as {print})
awk '{print $1}' file.txt  # Print the first field of each line
awk '{print $NF}' file.txt  # Print the last field of each line
awk '{print $1, $3}' file.txt  # Print the first and third fields
awk '{print $1 ":" $2}' file.txt  # Print fields with a custom separator
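A quick sketch of the basic output commands on made-up sample data (the file `users.txt` and its contents are hypothetical):

```shell
# Create a small sample file (illustration only).
printf 'alice 30 admin\nbob 25 user\n' > users.txt

awk '{print $1}' users.txt        # first field of each line
# alice
# bob

awk '{print $1 ":" $3}' users.txt # join fields 1 and 3 with a colon
# alice:admin
# bob:user
```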
Field Separators
awk -F"," '{print $1}' file.csv  # Use comma as the field separator
awk -F":" '{print $1}' /etc/passwd  # Use colon as the field separator
awk -F"\t" '{print $1}' file.tsv  # Use tab as the field separator
awk 'BEGIN {FS=","} {print $1}' file.csv  # Set the field separator in a BEGIN block
awk 'BEGIN {OFS=","} {print $1, $2}' file.txt  # Set the output field separator
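A short sketch of input and output separators together, using an invented two-column CSV (`sample.csv` is hypothetical):

```shell
# Sample CSV (illustration only).
printf 'name,age\nalice,30\n' > sample.csv

awk -F"," '{print $1}' sample.csv   # split on commas, print column 1
# name
# alice

# FS controls how input is split; OFS controls how print joins fields.
awk 'BEGIN {FS=","; OFS=" | "} {print $1, $2}' sample.csv
# name | age
# alice | 30
```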
Pattern Matching
awk '/pattern/ {print}' file.txt  # Print lines matching the pattern
awk '!/pattern/ {print}' file.txt  # Print lines not matching the pattern
awk '/start/,/end/ {print}' file.txt  # Print lines between two patterns (inclusive)
awk '$1 ~ /pattern/ {print}' file.txt  # Print if the first field matches the pattern
awk '$1 !~ /pattern/ {print}' file.txt  # Print if the first field does not match
awk '/error|warning/ {print}' log.txt  # Match multiple patterns with OR
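Pattern matching in action on a small invented log file (`app.log` and its lines are made up):

```shell
# Sample log (illustration only).
printf 'error disk full\ninfo all ok\nwarning low mem\n' > app.log

awk '/error|warning/ {print}' app.log  # whole-line match with alternation
# error disk full
# warning low mem

awk '$1 ~ /^info$/ {print $2, $3}' app.log  # anchor the match to field 1 only
# all ok
```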
Line Selection
awk 'NR==5 {print}' file.txt  # Print the 5th line
awk 'NR>=5 && NR<=10 {print}' file.txt  # Print lines 5 through 10
awk 'NR%2==1 {print}' file.txt  # Print odd-numbered lines
awk 'NR%2==0 {print}' file.txt  # Print even-numbered lines
awk 'END {print NR}' file.txt  # Print the total number of lines
awk 'NR>1 {print}' file.txt  # Skip the first line (header)
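NR-based selection, sketched on a four-line file with a header row (`data.txt` is invented for the example):

```shell
# Header plus three data rows (illustration only).
printf 'header\na\nb\nc\n' > data.txt

awk 'NR>1 {print}' data.txt    # drop the header line
# a
# b
# c

awk 'END {print NR}' data.txt  # line count, comparable to wc -l
# 4
```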
Conditional Operations
awk '$3 > 100 {print}' file.txt  # Print lines where the 3rd field is > 100
awk '$2 == "active" {print}' file.txt  # Print lines where the 2nd field equals "active"
awk 'length($0) > 80 {print}' file.txt  # Print lines longer than 80 characters
awk '$1 != "" {print}' file.txt  # Print lines where the first field is not empty
awk 'NF > 3 {print}' file.txt  # Print lines with more than 3 fields
awk '$5 >= 50 && $5 <= 100 {print}' file.txt  # Print if the 5th field is between 50 and 100 (inclusive)
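A numeric-comparison sketch on an invented two-column file (`vals.txt` and its values are made up):

```shell
# Name/value pairs (illustration only).
printf 'x 120\ny 80\nz 150\n' > vals.txt

awk '$2 > 100 {print $1}' vals.txt  # names whose value exceeds 100
# x
# z
```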
Calculations
awk '{sum += $1} END {print sum}' file.txt  # Sum the values in the first column
awk '{sum += $1; count++} END {print sum/count}' file.txt  # Average of the first column
awk 'NR==1 {max=$1} $1>max {max=$1} END {print max}' file.txt  # Find the maximum value (initializing from line 1 also handles all-negative data)
awk '{total += $2} END {print total}' file.txt  # Sum the values in the second column
awk '{print $1, $2*1.1}' file.txt  # Multiply the second field by 1.1
awk '{print NR, $0}' file.txt  # Add line numbers to the output
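Sum and average, sketched on a small invented column of numbers (`nums.txt` is hypothetical):

```shell
# One number per line (illustration only).
printf '10\n20\n30\n' > nums.txt

awk '{sum += $1} END {print sum}' nums.txt              # total
# 60

awk '{sum += $1; count++} END {print sum/count}' nums.txt  # mean
# 20
```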
String Operations
awk '{print toupper($0)}' file.txt  # Convert to uppercase
awk '{print tolower($0)}' file.txt  # Convert to lowercase
awk '{print length($0)}' file.txt  # Print the length of each line
awk '{print substr($1, 1, 3)}' file.txt  # Print the first 3 characters of field 1
awk '{gsub(/old/, "new"); print}' file.txt  # Replace all occurrences on each line
awk '{sub(/old/, "new"); print}' file.txt  # Replace only the first occurrence on each line
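The string functions, sketched on a one-line invented file (`text.txt` is made up):

```shell
# Sample text (illustration only).
printf 'hello world\n' > text.txt

awk '{print toupper($0)}' text.txt       # case conversion
# HELLO WORLD

awk '{gsub(/o/, "0"); print}' text.txt   # gsub replaces every match on the line
# hell0 w0rld

awk '{sub(/o/, "0"); print}' text.txt    # sub replaces only the first match
# hell0 world
```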
BEGIN & END Blocks
awk 'BEGIN {print "Header"} {print} END {print "Footer"}' file.txt  # Add a header and footer
awk 'BEGIN {FS=","; OFS="|"} {print $1, $2}' file.csv  # Convert CSV to pipe-delimited
awk 'BEGIN {count=0} /pattern/ {count++} END {print count}' file.txt  # Count matching lines
awk 'BEGIN {print "Name,Score"} {print $1","$2}' file.txt  # Add a CSV header
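The CSV-to-pipe conversion, sketched end to end on an invented input (`in.csv` is hypothetical):

```shell
# Sample CSV (illustration only).
printf 'alice,30\nbob,25\n' > in.csv

# BEGIN runs before any input is read, so FS and OFS apply to every line.
awk 'BEGIN {FS=","; OFS="|"} {print $1, $2}' in.csv
# alice|30
# bob|25
```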
Advanced Techniques
awk '{a[$1]++} END {for (i in a) print i, a[i]}' file.txt  # Count occurrences of each value in field 1
awk '!seen[$0]++' file.txt  # Remove duplicate lines (keeps the first occurrence)
awk '{print $2, $1}' file.txt  # Swap the first and second fields
awk 'NF {print $NF}' file.txt  # Print the last field of non-empty lines
awk '{for(i=1;i<=NF;i++) print $i}' file.txt  # Print each field on its own line
awk '{s=""; for(i=NF;i>=1;i--) s=s $i " "; print s}' file.txt  # Reverse the field order
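Deduplication and counting with associative arrays, sketched on an invented file with repeats (`dup.txt` is made up; the pipe through sort is only to make the unordered for-in output deterministic):

```shell
# Lines with duplicates (illustration only).
printf 'a\nb\na\na\n' > dup.txt

awk '!seen[$0]++' dup.txt   # first occurrence of each line survives
# a
# b

# Count each distinct line; for-in iterates in no guaranteed order, so sort.
awk '{a[$0]++} END {for (k in a) print k, a[k]}' dup.txt | sort
# a 3
# b 1
```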
Working with Files
awk '{print > ($1 ".txt")}' file.txt  # Split the file by the first field's value (parentheses keep the redirection target unambiguous)
awk 'FNR==1 {print FILENAME}' *.txt  # Print each file's name as it is read
awk 'FNR==NR {a[$1]; next} $1 in a' file1 file2  # Print lines from file2 whose first field appears in file1
awk '{print FILENAME, $0}' *.log  # Prepend the filename to each line
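The two-file FNR==NR idiom, sketched as a simple filter join on invented files (`allow.txt` and `users.txt` are hypothetical): while reading the first file, FNR equals NR, so its keys are loaded into the array; for the second file the condition is false and `$1 in a` does the filtering.

```shell
# Keys to keep, then the data to filter (illustration only).
printf 'alice\ncarol\n' > allow.txt
printf 'alice 30\nbob 25\ncarol 41\n' > users.txt

awk 'FNR==NR {a[$1]; next} $1 in a' allow.txt users.txt
# alice 30
# carol 41
```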