You have a spreadsheet of prospects. Names, LinkedIn URLs, maybe a company column. You need full profiles: work experience, education, current title. Here is how to build a complete enrichment pipeline using nothing but curl, jq, and the ScrapeLinkedIn API.

No Python, no SDKs, no dependencies beyond what is already on your machine.

What We're Building

CSV → ScrapeLinkedIn API → JSON → jq → Enriched CSV

A shell script that reads a CSV of LinkedIn URLs, sends them to the ScrapeLinkedIn batch endpoint, polls until complete, and outputs a clean enriched CSV. Five steps, one script, zero external dependencies.

Prerequisites

Getting your API key

Three commands. Register, verify your email, and generate a key.

# 1. Register with your email
curl -s -X POST https://api.scrapelinkedin.com/api/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com"}'

# 2. Verify with the 6-digit code sent to your inbox
curl -s -X POST https://api.scrapelinkedin.com/api/v1/auth/verify \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "code": "123456"}'

# 3. Generate your API key (save the token from step 2)
curl -s -X POST https://api.scrapelinkedin.com/api/v1/auth/api-key \
  -H "Authorization: Bearer YOUR_TOKEN"

Save the API key that comes back. It starts with li_live_. You will use it for every request.
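If you script the signup as well, you can pull the key straight out of the step-3 response with jq. The data.api_key field name below is an assumption; adjust it to the actual response shape:

```shell
# Hypothetical response shape -- the "api_key" field name is assumed
RESPONSE='{"data": {"api_key": "li_live_abc123"}}'

# Extract the key and export it for every later request
SCRAPELINKEDIN_API_KEY=$(echo "$RESPONSE" | jq -r '.data.api_key')
export SCRAPELINKEDIN_API_KEY
echo "$SCRAPELINKEDIN_API_KEY"
```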

Step 1: Prepare Your Input

Start with a CSV. The only required column is the LinkedIn URL. Here is an example:

# prospects.csv
name,linkedin_url,company
Jane Smith,https://linkedin.com/in/janesmith,Acme Corp
John Doe,https://linkedin.com/in/johndoe,Globex Inc
Sarah Chen,https://linkedin.com/in/sarahchen,Initech
Mike Ross,https://linkedin.com/in/mikeross,Hooli

Extract the LinkedIn URLs into a JSON array that the batch endpoint expects:

# Extract URLs from column 2, skip header, format as JSON array
# (naive comma split: assumes no quoted commas inside fields)
URLS=$(tail -n +2 prospects.csv | cut -d',' -f2 | jq -R . | jq -s .)

# Verify the output
echo "$URLS"
# ["https://linkedin.com/in/janesmith","https://linkedin.com/in/johndoe",...]

Step 2: Batch Scrape

Send the URL array to the batch endpoint. This queues all profiles for scraping in a single request (up to 1,000 URLs).

# Submit the batch
RESPONSE=$(curl -s -X POST https://api.scrapelinkedin.com/api/v1/scrape/batch \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -d "{\"urls\": $URLS}")

# Extract the batch ID
BATCH_ID=$(echo "$RESPONSE" | jq -r '.data.batch_id')
echo "Batch submitted: $BATCH_ID"

The response includes a batch_id and the total number of profiles queued. Each URL costs 1 credit. Failed lookups are automatically refunded.
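Since each URL costs a credit, it pays to drop duplicates before submitting. jq's unique does this in one step (it also sorts the array, which is harmless here because every result carries its own linkedin_url):

```shell
# Example array containing a duplicate
URLS='["https://linkedin.com/in/janesmith","https://linkedin.com/in/johndoe","https://linkedin.com/in/janesmith"]'

# De-duplicate before the URLs consume credits (unique also sorts)
URLS=$(echo "$URLS" | jq 'unique')
echo "$URLS" | jq 'length'
```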

Step 3: Poll for Results

Profiles take 20 to 30 seconds each. For a batch, poll the status endpoint until all profiles are done.

# Poll every 15 seconds until the batch is complete
while true; do
  STATUS=$(curl -s https://api.scrapelinkedin.com/api/v1/scrape/batch/$BATCH_ID \
    -H "X-API-Key: $SCRAPELINKEDIN_API_KEY")

  COMPLETED=$(echo "$STATUS" | jq '.data.completed')
  TOTAL=$(echo "$STATUS" | jq '.data.total')
  echo "Progress: $COMPLETED / $TOTAL"

  # Check if all profiles are done
  if [ "$COMPLETED" -eq "$TOTAL" ]; then
    echo "Batch complete."
    echo "$STATUS" | jq '.data.results' > profiles.json
    break
  fi

  sleep 15
done

When the loop exits, profiles.json contains the full array of profile results.
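Before transforming, it is worth tallying results by status to see how many profiles failed. The sample file below stands in for a real batch output, written to a separate name so it does not clobber your profiles.json:

```shell
# Stand-in for a real batch result (kept separate from profiles.json)
cat > profiles_sample.json <<'EOF'
[
  {"linkedin_url": "https://linkedin.com/in/janesmith", "status": "completed"},
  {"linkedin_url": "https://linkedin.com/in/johndoe",   "status": "completed"},
  {"linkedin_url": "https://linkedin.com/in/mikeross",  "status": "failed"}
]
EOF

# Count profiles per status
jq 'group_by(.status) | map({(.[0].status): length}) | add' profiles_sample.json

# List the URLs that did not complete, e.g. to retry them later
jq -r '.[] | select(.status != "completed") | .linkedin_url' profiles_sample.json
```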

Step 4: Transform with jq

Now extract the fields you need and output a flat CSV. Select only completed profiles to skip any that failed.

# Build enriched CSV from completed profiles
echo "linkedin_url,full_name,experience,education,status" > enriched.csv

jq -r '.[] | select(.status == "completed") |
  [
    .linkedin_url,
    (.profile_data.full_name // ""),
    (.profile_data.experience // "" | gsub(","; ";")),
    (.profile_data.education // "" | gsub(","; ";")),
    .status
  ] | @csv' profiles.json >> enriched.csv

# Check the result
wc -l enriched.csv
# 5 enriched.csv (header + 4 profiles)

The gsub(","; ";") call replaces commas inside fields with semicolons. jq's @csv already quotes fields that contain commas, but swapping them for semicolons keeps the output friendly to tools that split naively on commas, like cut. The // "" fallbacks prevent jq from erroring on profiles that are missing a field. Adjust the field list to match your needs.
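The field list is easy to extend. For example, to add the current job title mentioned at the start of this article: the profile_data.current_title field name below is a guess, so verify it against the actual response schema. Shown here against an illustrative sample:

```shell
# Illustrative sample result; the real shape comes from your batch output
cat > sample_profile.json <<'EOF'
[{"linkedin_url": "https://linkedin.com/in/janesmith",
  "status": "completed",
  "profile_data": {"full_name": "Jane Smith", "current_title": "VP of Sales"}}]
EOF

# "current_title" is an assumed field name -- check the API docs
jq -r '.[] | select(.status == "completed") |
  [.linkedin_url, .profile_data.full_name, (.profile_data.current_title // "")] |
  @csv' sample_profile.json
```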

Step 5: Putting It All Together

Here is the complete enrich.sh script. It reads your API key from the environment, validates inputs, runs the full pipeline, and writes the enriched CSV.

#!/usr/bin/env bash
# enrich.sh — Enrich a CSV of LinkedIn prospects via ScrapeLinkedIn API
set -euo pipefail

# ── Config ──────────────────────────────────────────────────
API_BASE="https://api.scrapelinkedin.com/api/v1"
POLL_INTERVAL=15

# ── Validate ────────────────────────────────────────────────
if [ -z "${SCRAPELINKEDIN_API_KEY:-}" ]; then
  echo "Error: SCRAPELINKEDIN_API_KEY is not set." >&2
  echo "Run: export SCRAPELINKEDIN_API_KEY=li_live_YOUR_KEY" >&2
  exit 1
fi

if [ $# -lt 1 ]; then
  echo "Usage: ./enrich.sh <input.csv> [output.csv]" >&2
  exit 1
fi

INPUT_CSV="$1"
OUTPUT_CSV="${2:-enriched.csv}"

if [ ! -f "$INPUT_CSV" ]; then
  echo "Error: File not found: $INPUT_CSV" >&2
  exit 1
fi

# ── Step 1: Extract URLs ────────────────────────────────────
# Note: the cut -d',' split assumes no quoted commas in fields.
echo "Extracting LinkedIn URLs from $INPUT_CSV..."
URLS=$(tail -n +2 "$INPUT_CSV" | cut -d',' -f2 | jq -R . | jq -s .)
COUNT=$(echo "$URLS" | jq 'length')
echo "Found $COUNT URLs."

if [ "$COUNT" -eq 0 ]; then
  echo "Error: No URLs found in column 2." >&2
  exit 1
fi

# ── Step 2: Submit batch ────────────────────────────────────
echo "Submitting batch of $COUNT profiles..."
RESPONSE=$(curl -s -X POST "$API_BASE/scrape/batch" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -d "{\"urls\": $URLS}")

BATCH_ID=$(echo "$RESPONSE" | jq -r '.data.batch_id')

if [ "$BATCH_ID" = "null" ] || [ -z "$BATCH_ID" ]; then
  echo "Error: Batch submission failed." >&2
  echo "$RESPONSE" | jq . >&2
  exit 1
fi

echo "Batch ID: $BATCH_ID"

# ── Step 3: Poll for results ────────────────────────────────
echo "Polling for results every ${POLL_INTERVAL}s..."
while true; do
  STATUS=$(curl -s "$API_BASE/scrape/batch/$BATCH_ID" \
    -H "X-API-Key: $SCRAPELINKEDIN_API_KEY")

  # Default to sentinels so a malformed response cannot look complete
  COMPLETED=$(echo "$STATUS" | jq '.data.completed // -1')
  TOTAL=$(echo "$STATUS" | jq '.data.total // 0')

  if [ "$TOTAL" -eq 0 ]; then
    echo "Error: unexpected status response:" >&2
    echo "$STATUS" >&2
    exit 1
  fi

  echo "  Progress: $COMPLETED / $TOTAL"

  if [ "$COMPLETED" -eq "$TOTAL" ]; then
    echo "$STATUS" | jq '.data.results' > profiles.json
    break
  fi

  sleep "$POLL_INTERVAL"
done

# ── Step 4: Transform to CSV ────────────────────────────────
echo "linkedin_url,full_name,experience,education,status" > "$OUTPUT_CSV"

jq -r '.[] | select(.status == "completed") |
  [
    .linkedin_url,
    (.profile_data.full_name // ""),
    (.profile_data.experience // "" | gsub(","; ";")),
    (.profile_data.education // "" | gsub(","; ";")),
    .status
  ] | @csv' profiles.json >> "$OUTPUT_CSV"

ENRICHED=$(tail -n +2 "$OUTPUT_CSV" | wc -l | tr -d ' ')
echo "Done. $ENRICHED profiles enriched -> $OUTPUT_CSV"

Usage

chmod +x enrich.sh
export SCRAPELINKEDIN_API_KEY="li_live_YOUR_KEY"
./enrich.sh prospects.csv

The script writes enriched.csv by default, or you can pass a second argument for a custom output path: ./enrich.sh prospects.csv output/enriched.csv.
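The batch endpoint caps a single request at 1,000 URLs, so bigger lists need splitting first. Here is a sketch that cuts the input into header-carrying chunks that enrich.sh can consume unchanged (demo data is generated inline):

```shell
# Demo input: header + 2,500 data rows
{ echo "name,linkedin_url,company"
  seq 1 2500 | sed 's|.*|Person &,https://linkedin.com/in/person&,Acme|'; } > big_list.csv

# Split the data rows into chunks of 1,000, then re-attach the header
HEADER=$(head -n 1 big_list.csv)
tail -n +2 big_list.csv | split -l 1000 - chunk_

for f in chunk_??; do
  { echo "$HEADER"; cat "$f"; } > "$f.csv" && rm "$f"
  # then: ./enrich.sh "$f.csv" "enriched_${f}.csv"
done

ls chunk_*.csv
```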

Cost Breakdown

ScrapeLinkedIn charges $0.01 per successful lookup. Failed lookups are refunded. Cached results (within 24 hours) are free.

Leads    Cost    Time
100      $1      2-3 min
500      $5      8-10 min
1,000    $10     15-20 min

If you re-run the same list, cached profiles return instantly at no cost. Only new or expired URLs consume credits.
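A quick worst-case cost estimate (assuming zero cache hits) can be computed straight from the input file:

```shell
# Demo input: header + 2 data rows
printf 'name,linkedin_url,company\nJane,https://linkedin.com/in/janesmith,Acme\nJohn,https://linkedin.com/in/johndoe,Globex\n' > prospects_demo.csv

# Rows minus header, at $0.01 per successful lookup
awk 'END { printf "%d leads, worst case $%.2f\n", NR - 1, (NR - 1) * 0.01 }' prospects_demo.csv
```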

Tips for Production

A few things to harden before running this at scale:

- Quote-aware parsing: cut -d',' assumes no commas inside quoted CSV fields. If your export has them, pre-clean the file or extract the URL column with a proper CSV parser.
- Bound the polling loop: add a maximum attempt count so a stalled batch fails loudly instead of hanging the script forever.
- Lean on the cache: re-running the same list within 24 hours returns cached profiles for free, so iterating on the jq transform costs nothing.
- Start small: signup includes 5 free credits with no credit card required, enough to verify the pipeline end to end before committing a full list.