This guide is for the queue-only version of Edge Worker. For everything else, see the main docs.

Create your first worker

We will create a simple web scraping worker that listens for URLs to scrape, fetches the content, and logs the results.

Web scraping is a perfect use case for task queue workers because of built-in retries and the ability to fetch websites in parallel.

Setting it up is straightforward and takes just a few minutes.

  1. First, create a new Edge Function using the Supabase CLI, as shown below.
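
    With the CLI installed, the command looks like this (replace <function-name> with a name of your choice; you will reuse it in step 3):

    npx supabase functions new <function-name>

    Then, replace the generated function's content with this code: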

    import { EdgeWorker } from "jsr:@pgflow/edge-worker";

    // The handler runs once per message pulled from the queue
    EdgeWorker.start(async (payload: { url: string }) => {
      const response = await fetch(payload.url);
      console.log("Scraped website!", {
        url: payload.url,
        status: response.status,
      });
    });
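
    If your version supports it, EdgeWorker.start can take a second configuration argument. The option names below are assumptions, not confirmed API, so treat this as a sketch and verify against the configuration reference:

    import { EdgeWorker } from "jsr:@pgflow/edge-worker";

    EdgeWorker.start(
      async (payload: { url: string }) => {
        // ...same handler as above
      },
      {
        queueName: "tasks", // assumed option: queue to poll (default 'tasks')
        maxConcurrent: 10, // assumed option: max messages processed in parallel
        retryLimit: 5, // assumed option: retries before a message is given up on
      },
    );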
  2. Start the Edge Runtime with the following command:

    npx supabase functions serve

    This makes Supabase listen for incoming HTTP requests, but does not start your worker yet.
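
    Note that the local runtime may verify JWTs on incoming requests. If the request in the next step is rejected with 401 Unauthorized, either pass your anon key in an Authorization header or disable verification while developing:

    npx supabase functions serve --no-verify-jwt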

  3. Start the worker by sending an HTTP request to your new Edge Function (replace <function-name> with your function name):

    curl -X POST http://localhost:54321/functions/v1/<function-name>

    This will boot a new instance and start your worker:

    [Info] worker_id=<uuid> [WorkerLifecycle] Ensuring queue 'tasks' exists...
  4. Your worker is now polling for messages on the tasks queue (which was automatically created during startup).

    Send a test message:

    SELECT pgmq.send(
      queue_name => 'tasks',
      msg => '{"url": "https://example.com"}'::jsonb
    );

    The message will be processed almost immediately, and you should see output like the following:

    [Info] worker_id=<uuid> [ExecutionController] Scheduling execution of task 1
    [Info] Scraped website! { url: "https://example.com", status: 200 }
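
    Because the worker fetches websites in parallel, you can also enqueue several URLs at once. Assuming your pgmq version includes send_batch (present in recent pgmq releases), a sketch:

    SELECT pgmq.send_batch(
      queue_name => 'tasks',
      msgs => ARRAY[
        '{"url": "https://example.com"}',
        '{"url": "https://example.org"}'
      ]::jsonb[]
    );

    Each message becomes its own task with its own retries, so one failing URL does not block the others.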