Async HTTP Requests
2026-04-06
Making HTTP requests is one of the most common real-world tasks in programming. Raku's concurrency primitives let you fire off multiple requests in parallel, dramatically reducing total wait time when you need to talk to several endpoints.
Basic HTTP with Cro::HTTP
The most popular HTTP client in Raku is Cro::HTTP::Client:
use Cro::HTTP::Client;
my $client = Cro::HTTP::Client.new;
my $response = await $client.get('https://httpbin.org/get');
my $body = await $response.body-text;
say $body;
Cro's client is async by default. Every request returns a Promise.
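Because every request is an ordinary Promise, anything that works on a Promise works on a request. A minimal offline sketch of chaining with .then (no HTTP involved; a start block stands in for the client call):

```raku
# Offline sketch: .then attaches a continuation to a Promise.
# The continuation receives the settled Promise as its argument.
my $p       = start { 21 };               # stands in for $client.get(...)
my $doubled = $p.then({ .result * 2 });   # runs once $p is kept
say await $doubled;                       # 42
```

Cro responses compose the same way; the only difference is that the Promise is kept with an HTTP response object instead of a number.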
Parallel Requests with Promises
Fire off multiple requests simultaneously. Because the waits overlap, total wall-clock time is roughly that of the slowest single request rather than the sum of all of them:
use Cro::HTTP::Client;
my $client = Cro::HTTP::Client.new;
my @urls = (
    'https://httpbin.org/delay/1',
    'https://httpbin.org/delay/2',
    'https://httpbin.org/delay/1',
    'https://httpbin.org/delay/3',
);
my $start = now;
my @promises = @urls.map: -> $url {
    start {
        my $resp = await $client.get($url);
        my $body = await $resp.body-text;
        "$url: {$body.chars} bytes"
    }
};
my @results = await @promises;
say $_ for @results;
say "Total time: {(now - $start).round(0.1)}s";
Simple HTTP Without Dependencies
If you do not want to install Cro, Raku can still make basic HTTP requests using built-in socket support, or by shelling out to curl:
sub http-get(Str $url --> Str) {
    my $proc = run 'curl', '-sS', $url, :out;
    $proc.out.slurp(:close)
}
my @urls = <
    https://httpbin.org/ip
    https://httpbin.org/user-agent
    https://httpbin.org/headers
>;
my @promises = @urls.map: -> $url {
    start { $url => http-get($url) }
};
for await @promises -> $pair {
    say "=== {$pair.key} ===";
    say $pair.value.substr(0, 200);
    say "";
}
Handling Timeouts
Add timeout handling to prevent hanging on slow endpoints:
sub fetch-with-timeout(Str $url, Real $timeout = 10 --> Str) {
    my $work = start {
        my $proc = run 'curl', '-sS', '--max-time', $timeout.Str, $url, :out;
        $proc.out.slurp(:close)
    };
    my $timer = Promise.in($timeout);
    await Promise.anyof($work, $timer);
    if $work.status == Kept {
        $work.result
    } elsif $work.status == Broken {
        $work.result   # re-throws whatever actually broke the request
    } else {
        die "Request to $url timed out after {$timeout}s";
    }
}
try {
    say fetch-with-timeout('https://httpbin.org/delay/2', 5);
    CATCH { default { say "Failed: {.message}" } }
}
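The key line is Promise.anyof, which is kept as soon as either promise settles; the work promise's status then tells you which one won. An offline sketch with timers standing in for curl:

```raku
# Offline sketch: anyof settles as soon as the first Promise does.
my $slow  = Promise.in(2).then({ 'work finished' });   # stand-in for the request
my $timer = Promise.in(0.1);                           # the timeout
await Promise.anyof($slow, $timer);
say $slow.status;    # Planned: the timer settled first
```

Note that anyof cannot cancel the losing promise; the curl process keeps running until its own --max-time expires, which is why the sub passes the timeout to curl as well.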
Rate-Limited Parallel Requests
When hitting an API, you often need to limit concurrency to avoid being throttled:
sub fetch-throttled(@urls, Int :$max-concurrent = 5) {
    my $semaphore = Channel.new;
    $semaphore.send(True) for ^$max-concurrent;
    my @promises = @urls.map: -> $url {
        start {
            $semaphore.receive;
            LEAVE $semaphore.send(True);
            my $proc = run 'curl', '-sS', $url, :out;
            my $body = $proc.out.slurp(:close);
            $url => $body
        }
    };
    await @promises
}
my @urls = (1..20).map: { "https://httpbin.org/get?n=$_" };
my @results = fetch-throttled(@urls, max-concurrent => 3);
say "Fetched {+@results} URLs";
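The hand-rolled Channel semaphore is the most explicit approach, but Raku's race can bound concurrency on its own: :degree caps the number of simultaneous workers and :batch(1) hands each worker one item at a time. An offline sketch with a sleep standing in for the network call:

```raku
# Offline sketch: .race runs the map bodies in parallel, at most :degree at once.
my @results = (1..9).race(:batch(1), :degree(3)).map: -> $n {
    sleep 0.05;       # stand-in for an HTTP round trip
    $n * $n
};
say @results.sort.join(',');   # race does not preserve order, so sort
```

Prefer the Channel semaphore when per-request setup or teardown matters; prefer race when you just need a bounded parallel map.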
Retry Logic
Add retries for unreliable endpoints:
sub fetch-with-retry(Str $url, Int :$max-retries = 3, Real :$delay = 1.0 --> Str) {
    my $attempts = 0;
    loop {
        $attempts++;
        try {
            my $proc = run 'curl', '-sS', '--fail', $url, :out, :err;
            my $body = $proc.out.slurp(:close);
            $proc.err.slurp(:close);
            # run returns a Proc without throwing on a non-zero exit unless
            # it is sunk, so check the exit code explicitly to trigger a retry
            die "curl exited with code {$proc.exitcode}" if $proc.exitcode;
            return $body;
            CATCH {
                default {
                    if $attempts >= $max-retries {
                        die "Failed after $max-retries attempts: {.message}";
                    }
                    say "Attempt $attempts failed, retrying in {$delay}s...";
                    sleep $delay;
                }
            }
        }
    }
}
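A fixed delay makes every retry hammer a struggling server at the same steady rate. A common refinement is exponential backoff with jitter. This sketch (the helper name and parameters are my own, not from any library) only computes the delay; you would pass its result to sleep in place of the fixed $delay above:

```raku
# Sketch: exponential backoff with jitter.
# The delay doubles each attempt, is capped, then randomly scaled to
# 50-100% of its value so simultaneous clients spread out.
sub backoff-delay(Int $attempt, Real :$base = 1.0, Real :$cap = 30.0 --> Real) {
    my $delay = min($cap, $base * 2 ** ($attempt - 1));
    $delay * (0.5 + rand / 2)
}
say backoff-delay($_).round(0.01) for 1..5;
```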
Collecting Results as They Arrive
Use a Channel to process results as they come in rather than waiting for all to complete:
my @urls = (1..10).map: { "https://httpbin.org/delay/{(1..3).pick}" };
my $results = Channel.new;
my @promises = @urls.map: -> $url {
    start {
        my $start = now;
        my $proc = run 'curl', '-sS', $url, :out;
        my $body = $proc.out.slurp(:close);
        $results.send("$url completed in {(now - $start).round(0.1)}s ({$body.chars} bytes)");
    }
};
start { await @promises; $results.close };
for $results.list -> $result {
    say $result;
}
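A react/whenever block is the more idiomatic way to consume a Channel: it runs a handler per message and exits when the channel closes. A self-contained sketch with locally produced messages standing in for curl output:

```raku
# Offline sketch: react/whenever drains a Channel reactively.
my $chan = Channel.new;
my @seen;
start {
    $chan.send("result $_") for 1..3;
    $chan.close;                 # closing the channel ends the whenever loop
}
react {
    whenever $chan -> $msg {
        @seen.push($msg);
        say $msg;
    }
}
```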
POST Requests
Sending data with POST:
sub http-post(Str $url, Str $body, Str :$content-type = 'application/json' --> Str) {
    my $proc = run 'curl', '-sS', '-X', 'POST',
        '-H', "Content-Type: $content-type",
        '-d', $body,
        $url, :out;
    $proc.out.slurp(:close)
}
my @payloads = (1..5).map: { sprintf('{"id": %d, "name": "item%d"}', $_, $_) };
my @promises = @payloads.map: -> $payload {
    start {
        http-post('https://httpbin.org/post', $payload)
    }
};
my @responses = await @promises;
say "Sent {+@responses} requests";
Practical Example: API Health Checker
Tie the pieces together: query every endpoint in parallel and record each one's status code and response time:
sub check-endpoints(@endpoints) {
    my @promises = @endpoints.map: -> %ep {
        start {
            my $start = now;
            my $proc = run 'curl', '-sS', '-o', '/dev/null',
                '-w', '%{http_code}', '--max-time', '5',
                %ep<url>, :out, :err;
            my $code = $proc.out.slurp(:close);
            my $elapsed = now - $start;
            {
                name   => %ep<name>,
                url    => %ep<url>,
                status => $code,
                time   => $elapsed.round(0.001),
                ok     => $code eq '200',
            }
        }
    };
    await @promises
}
my @endpoints = (
    { name => 'httpbin', url => 'https://httpbin.org/get' },
    { name => 'example', url => 'https://example.com' },
);
my @results = check-endpoints(@endpoints);
for @results -> %r {
    my $icon = %r<ok> ?? 'OK' !! 'FAIL';
    say "[$icon] {%r<name>}: HTTP {%r<status>} ({%r<time>}s)";
}
Raku's concurrency model makes parallel HTTP requests natural and composable. Whether you use Cro for a full-featured async HTTP client or shell out to curl for simplicity, the pattern is the same: wrap each request in a start block and await the results.