
pgflow vs Trigger.dev

Technical Criteria       | pgflow                                     | Trigger.dev
Architecture Model       | Database-centric orchestration             | Background job execution and orchestration
Infrastructure Requirements | Zero additional (just Supabase)         | Hosted service with dashboard or self-hosted
Workflow Definition      | TypeScript DSL with explicit dependencies  | TypeScript functions with tasks and subtasks
Supabase Integration     | Built-in                                   | Requires manual configuration
Type System              | End-to-end type safety                     | Standard TypeScript typing
Failure Handling         | Automatic per-step retries                 | Configurable per-task retries
Event Handling           | No explicit hooks                          | Rich hooks (init, cleanup, onSuccess, onFailure)
Developer Experience     | DSL-based API                              | Function-based API with lifecycle hooks
Maturity Level           | Active development                         | Production-ready
Monitoring Tools         | SQL queries for visibility                 | Comprehensive web dashboard
Execution Structure      | Database-driven DAG                        | Task/subtask hierarchy
Concurrency Management   | Simple per-flow limits                     | Advanced queues and limits
Scheduling Capabilities  | Built into Supabase (pg_cron)              | Built-in cron scheduler

Both systems provide reliable task execution with proper retries and error handling. The key difference is who controls the workflow orchestration: the database (pgflow) or the task execution platform (Trigger.dev).

Reasons to choose pgflow:

  • Committed to Supabase - Want a workflow solution that lives entirely inside your Supabase project
  • DAG-based workflows - Need to express complex data dependencies as a directed acyclic graph
  • SQL visibility - Prefer direct database access to monitor workflow state via SQL queries
  • Lightweight deployment - Need minimal infrastructure with no additional services to maintain
  • PostgreSQL transactions - Want to leverage database transactions throughout workflows
  • Edge Function execution - Want to run workflow steps as Supabase Edge Functions
  • Explicit data flow - Prefer declarative data dependencies over imperative control flow
Reasons to choose Trigger.dev:

  • User-friendly dashboard - Need comprehensive UI for monitoring and managing workflow runs
  • Task relationships - Prefer parent-child task relationships over explicit dependency graphs
  • Advanced concurrency - Need sophisticated queuing with per-tenant concurrency controls
  • Rich lifecycle hooks - Want init, cleanup, onSuccess, and onFailure lifecycle hooks
  • Delay capabilities - Need to schedule tasks for future execution with precise timing
  • Framework integration - Desire tight integration with Next.js and other frameworks
  • Immediate production use - Need a mature, battle-tested solution ready for production

pgflow puts PostgreSQL at the center of your workflow orchestration. Workflows are defined in TypeScript but compiled to SQL migrations with all orchestration logic running directly in the database. The database decides when tasks are ready to execute based on explicit dependencies.

// In pgflow, the database orchestrates the workflow
new Flow<{ url: string }>({
  slug: 'analyze_website',
})
  .step(
    { slug: 'extract' },
    async (input) => {
      /* extract data */
    }
  )
  .step(
    { slug: 'transform', dependsOn: ['extract'] },
    async (input) => {
      /* transform using extract results */
    }
  );
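The dependency rule the compiled SQL enforces can be sketched in plain TypeScript (a simplified illustration of the idea, not pgflow's actual implementation): a step is ready to execute only once every step it depends on has completed.

```typescript
// Simplified model of database-driven orchestration: given a DAG of
// step dependencies, compute which steps are ready to run.
type Dag = Record<string, string[]>; // step slug -> slugs it depends on

function readySteps(dag: Dag, completed: Set<string>): string[] {
  return Object.entries(dag)
    .filter(([slug]) => !completed.has(slug)) // not yet done
    .filter(([, deps]) => deps.every((d) => completed.has(d))) // deps satisfied
    .map(([slug]) => slug);
}

// With nothing completed, only 'extract' is ready; once 'extract'
// completes, 'transform' becomes ready.
readySteps({ extract: [], transform: ["extract"] }, new Set()); // → ["extract"]
```

In pgflow this evaluation happens inside PostgreSQL each time a step completes, which is why no external scheduler is needed.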

Trigger.dev provides a task-based system where you define individual tasks with explicit handlers and lifecycle hooks. Tasks can be triggered manually or from other tasks.

// In Trigger.dev, you define tasks with handlers and hooks
import { task } from "@trigger.dev/sdk/v3";

export const extractData = task({
  id: "extract-data",
  // Optional initialization hook
  init: async (payload) => {
    return { client: createApiClient() };
  },
  run: async (payload: { url: string }, { init }) => {
    // Use resources from init
    return await init.client.fetch(payload.url);
  },
});

// Parent task that coordinates workflow
export const analyzeWebsite = task({
  id: "analyze-website",
  run: async (payload: { url: string }) => {
    // Extract data and wait for result
    const data = await extractData.triggerAndWait({
      url: payload.url,
    }).unwrap();
    // Process data and return result
    return processData(data);
  },
});
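Besides being triggered from a parent task as above, tasks can also be triggered manually from your own backend code. A hedged sketch, assuming the tasks above live in a hypothetical `./trigger/extract` module and using the v3 SDK's `tasks.trigger` helper:

```typescript
// Sketch: manually triggering a task from backend code. The file path
// and function name here are illustrative, not from the original doc.
import { tasks } from "@trigger.dev/sdk/v3";
import type { extractData } from "./trigger/extract";

export async function startExtraction(url: string) {
  // Enqueues a run on the Trigger.dev platform and returns a handle
  const handle = await tasks.trigger<typeof extractData>("extract-data", { url });
  return handle.id;
}
```

The type-only import keeps the task's payload type checked at the call site without bundling the task code into your backend.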

Both systems provide reliable execution of workflow tasks:

  • Both can properly handle retries and error recovery
  • Both track execution state and progress
  • Both provide TypeScript type safety
  • Both support parallel execution of independent tasks
  • Both allow the scheduling/triggering of workflow execution
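The per-step (pgflow) and per-task (Trigger.dev) retry behavior both systems advertise boils down to the same pattern, sketched here generically (a minimal illustration, not either system's internal implementation):

```typescript
// Generic retry-with-exponential-backoff pattern: re-run a failing
// step up to maxAttempts times, doubling the delay between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

The difference is where this loop lives: in pgflow the database tracks attempts and re-queues the step; in Trigger.dev the platform re-executes the task according to its retry configuration.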

The difference is architectural:

  • pgflow: PostgreSQL orchestrates when steps run based on dependencies
  • Trigger.dev: Platform manages task execution with flexible triggering options
pgflow's Supabase integration:

  • Native integration - Built specifically for Supabase ecosystem
  • Zero infrastructure - Uses Supabase Edge Functions and PostgreSQL
  • Simple setup - Single command (npx pgflow install) sets up all required components
  • Direct database access - All workflow state directly accessible in your Supabase database
Trigger.dev's Supabase integration:

  • Possible integration - Can work with Supabase but not specifically designed for it
  • Additional infrastructure - Requires either hosted service or self-hosted instance
  • Connection - Requires connecting Supabase to Trigger.dev infrastructure
  • Separate dashboard - Uses its own dashboard for monitoring and management