raku.gg / one-liners

CSV One-Liners

2026-04-01

CSV files are everywhere. Raku's built-in string processing makes it easy to parse, transform, and analyze CSV data right from the command line without installing any modules.

Basic CSV Reading

Print all rows, splitting on commas:
raku -ne 'say .split(",").raku' data.csv
Print a specific column (0-indexed):
raku -ne 'say .split(",")[1]' data.csv

Selecting Columns

Print the first and third columns (e.g. name and email from a user CSV):
raku -ne 'my @f = .split(","); say "@f[0],@f[2]"' users.csv
Reorder columns:
raku -ne 'my @f = .split(","); say @f[2,0,1].join(",")' data.csv

Filtering Rows

Show rows where the third column equals a value:
raku -ne 'my @f = .split(","); .say if @f[2] eq "active"' users.csv
Show rows where a numeric column exceeds a threshold (this assumes no header row, since .Num fails on non-numeric text):
raku -ne 'my @f = .split(","); .say if @f[3].Num > 100' sales.csv
Filter rows matching a pattern in any column:
raku -ne '.say if .contains("Toronto")' locations.csv

Skipping the Header

Skip the first line and process the rest:
raku -ne 'state $first = True; if $first { $first = False; next }; say .split(",")[0]' data.csv
A cleaner approach: read the file's lines and slice off the header:
raku -e 'for "data.csv".IO.lines[1..*] -> $l { say $l.split(",")[0] }'

Aggregating Data

Sum a numeric column:
raku -e 'say "data.csv".IO.lines[1..*].map(*.split(",")[2].Num).sum'
Average of a column:
raku -e 'my @v = "data.csv".IO.lines[1..*].map(*.split(",")[2].Num); say @v.sum / @v.elems'
Min and max:
raku -e 'my @v = "data.csv".IO.lines[1..*].map(*.split(",")[2].Num); say "min={@v.min} max={@v.max}"'

Counting by Category

Group and count by a column:
raku -e 'my %c; for "data.csv".IO.lines[1..*] { %c{.split(",")[1]}++ }; .say for %c.sort(-*.value)'
This is useful for questions like "how many users per country?" or "how many sales per product?".
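As a quick sanity check, you can feed the same counting one-liner some inline data on stdin (the file name and rows here are made up for illustration):
printf 'name,country\nana,CA\nbob,US\ncarol,CA\n' | raku -e 'my %c; %c{.split(",")[1]}++ for lines()[1..*]; .say for %c.sort(-*.value)'
This should print CA => 2 followed by US => 1, most frequent category first.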

Sorting CSV

Sort by the second column (alphabetically):
raku -e 'my @h = "data.csv".IO.lines; say @h[0]; .say for @h[1..*].sort(*.split(",")[1])'
Sort by a numeric column (descending):
raku -e 'my @h = "data.csv".IO.lines; say @h[0]; .say for @h[1..*].sort(-*.split(",")[2].Num)'

Adding a Column

Append a calculated column:
raku -ne 'my @f = .split(","); @f.push(@f[1].Num * @f[2].Num); say @f.join(",")' prices.csv
Example: if the second column (index 1) is quantity and the third (index 2) is unit price, this appends a total column.
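A quick check with inline data (no header row, so .Num succeeds on every line; the item names are illustrative):
printf 'pen,3,1.50\npad,2,4.00\n' | raku -ne 'my @f = .split(","); @f.push(@f[1].Num * @f[2].Num); say @f.join(",")'
This prints pen,3,1.50,4.5 and pad,2,4.00,8.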

Removing Columns

Remove column 2 (0-indexed):
raku -ne 'my @f = .split(","); @f.splice(2, 1); say @f.join(",")' data.csv

Handling Quoted Fields

Simple CSV one-liners break on quoted fields containing commas. For basic quoted CSV (note that this keeps the surrounding quotes on quoted fields):
raku -ne 'say .comb(/ <-[,"]>+ | \" <-["]>* \" /).raku' data.csv
For serious CSV with embedded commas and escaped quotes, use a grammar or the Text::CSV module. But for clean, simple CSV (which most CSV is), the .split(",") approach works fine.
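As a sketch of the grammar route, here is a minimal grammar for a single CSV line (the names CSV-Line, field, quoted, and bare are made up for this example, not a standard API):
raku -e '
    grammar CSV-Line {
        token TOP    { <field>* % "," }
        token field  { <quoted> | <bare> }
        token quoted { \" <-["]>* \" }
        token bare   { <-[,"]>* }
    }
    my $m = CSV-Line.parse(q{id,"Smith, Jane",42});
    say $m<field>.map(*.Str) if $m;
'
This keeps "Smith, Jane" together as one field; stripping the quotes and handling doubled "" escapes is left as an exercise, or to Text::CSV. If you have that module installed (zef install Text::CSV) and your version exports the csv() helper, robust parsing is roughly: raku -MText::CSV -e '.say for csv(in => "data.csv")'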

Converting Formats

CSV to tab-separated:
raku -pe '$_ = .split(",").join("\t")' data.csv
Tab-separated to CSV:
raku -pe '$_ = .split("\t").join(",")' data.tsv
CSV to a simple markdown table (you still need to add the |---| separator row under the header yourself):
raku -ne 'say "| " ~ .split(",").join(" | ") ~ " |"' data.csv

Joining Two CSV Files

A simple join on the first column:
raku -e '
    my %lookup = "lookup.csv".IO.lines[1..*].map({ my @f = .split(","); @f[0] => @f[1] });
    for "main.csv".IO.lines -> $line {
        my @f = $line.split(",");
        @f.push(%lookup{@f[0]} // "N/A");
        say @f.join(",");
    }'

Deduplicating

Remove duplicate rows:
raku -e 'my @l = "data.csv".IO.lines; say @l[0]; .say for @l[1..*].unique'
Remove duplicates based on a key column:
raku -e 'my %seen; for "data.csv".IO.lines -> $l { my $k = $l.split(",")[0]; $l.say unless %seen{$k}++; }'

Quick Statistics

A summary one-liner for a numeric column:
raku -e '
    my @v = "data.csv".IO.lines[1..*].map(*.split(",")[2].Num);
    say "count: {@v.elems}";
    say "sum:   {@v.sum}";
    say "mean:  {@v.sum / @v.elems}";
    say "min:   {@v.min}";
    say "max:   {@v.max}";
    say "range: {@v.max - @v.min}";
'
These one-liners cover the most common CSV operations you would need on the command line. For anything more complex, consider writing a short Raku script or using the Text::CSV module for robust parsing.