Idempotent Webhook Handling in Laravel

If you need webhook idempotency in Laravel to prevent duplicate processing, this guide gives you practical patterns you can ship now. You will implement dedupe keys, short-circuit duplicate attempts, and keep background jobs replay-safe.

Even when delivery looks successful, retries can still arrive. Your endpoint should treat duplicates as normal and make side effects happen exactly once from your app’s perspective.

Examples use Laravel/PHP, but the model applies to any webhook receiver.

Reference implementation: Idempotent inbox pattern (GitHub Pages)

Why duplicates happen (even when things “work”)

Timeout then success → sender retries

A common path is a duplicate event after a timeout that actually succeeded: your app completes the work, but the sender never receives your success response in time, so it retries.

Network flaps → sender can’t confirm delivery

Packet loss, transient TLS issues, or upstream proxy resets can break response confirmation even when your app already processed the payload.

Parallel retries (multiple in-flight attempts)

When retries overlap, two attempts can hit your endpoint at nearly the same time. Without a dedupe guard, both can process.

Mini incident: A team charged the same order twice because attempt 1 timed out at the edge while the charge call succeeded. Attempt 2 arrived 20 seconds later and replayed the same side effect.

Prove it to yourself: Trigger one event from Sample Project, then simulate a timeout once—watch a retry come through, and confirm your dedupe blocks duplicates.

Distinguish two kinds of idempotency

SendPromptly ingestion idempotency (Idempotency-Key, 24h TTL)

SendPromptly ingestion idempotency protects event creation at ingestion time. Review Ingestion idempotency (Idempotency-Key with 24-hour TTL) and Required ingestion headers (Authorization + Idempotency-Key) when publishing events.
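
When publishing, the ingestion call shape looks roughly like this. A hedged sketch: the endpoint URL and config key are placeholders, and the Idempotency-Key must be generated once per logical create action and reused on retries.

use Illuminate\Support\Facades\Http;
use Illuminate\Support\Str;

// Generate once per logical "create event" action and persist it alongside
// the action, so a retry of the same action sends the same key.
$idempotencyKey = (string) Str::uuid();

Http::withHeaders([
    'Authorization'   => 'Bearer ' . config('services.sendpromptly.token'), // placeholder config key
    'Idempotency-Key' => $idempotencyKey, // deduped at ingestion for 24 hours
])->post('https://api.sendpromptly.example/events', [ // placeholder URL
    'event_key' => 'order.created',
    'payload'   => ['order_id' => 'O-1001'],
]);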

Your webhook receiver idempotency (dedupe per payload/event)

Receiver idempotency protects processing after delivery attempts arrive at your endpoint. This is separate from sender-side guarantees and should follow Webhook delivery rules (2xx success + retries).

| Layer | Scope | You should enforce |
| --- | --- | --- |
| Ingestion idempotency | Event creation at API ingest | Stable Idempotency-Key per create action |
| Webhook receiver idempotency | Event processing in your app | Dedupe key + unique guard + idempotent writes |

Dedupe strategy options

Use a deterministic dedupe key and a unique index so concurrent attempts cannot both insert. For most teams, deduplicating webhook events by event ID with a database unique index is the safest default.
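
A minimal migration sketch for the inbox table used later in this guide; columns other than dedupe_key and its unique index are illustrative:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::create('webhook_inbox', function (Blueprint $table) {
            $table->id();
            $table->string('source');
            $table->string('dedupe_key', 64)->unique(); // the durable dedupe guard
            $table->json('payload');
            $table->string('status')->default('pending');
            $table->timestamp('received_at');
            $table->timestamps();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('webhook_inbox');
    }
};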

Redis SETNX + TTL (fast path)

Redis can reject duplicates quickly before any DB work. For idempotency-key storage in Redis, use SET key value NX EX <seconds> and keep the TTL long enough to cover delayed retries.

Exactly-once illusion vs practical “at-least-once”

Webhook systems are at-least-once by design. The practical goal is “process at least once, apply side effects once” using idempotent storage and handlers.

Common gotcha: Redis-only dedupe can fail open during evictions or failover. Keep a database unique constraint as the durable source of truth.

Implementation (Laravel)

Assumption: when a stable upstream delivery ID is not available in the payload, your dedupe key is derived from the exact raw request bytes.

Compute dedupe key

Prefer the X-SP-Message-Id header (if present) as your primary dedupe/correlation key. If it is not available, build the key from $request->getContent() so that identical raw bytes always produce the same key and duplicates of the same payload collide.
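
A minimal sketch of the key derivation, using the header named above and falling back to a hash of the exact raw bytes:

// Prefer the upstream message ID when present; otherwise hash the raw body.
// Hashing the raw bytes (not re-encoded JSON) keeps retries colliding.
$dedupeKey = $request->header('X-SP-Message-Id')
    ?? hash('sha256', $request->getContent());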

Upsert + short-circuit duplicates

use Illuminate\Database\QueryException;
use Illuminate\Support\Facades\DB;

$raw = $request->getContent();
$dedupeKey = hash('sha256', $raw);

// If the insert fails on the unique dedupe_key index, it's a duplicate attempt.
try {
    DB::table('webhook_inbox')->insert([
        'source' => 'sendpromptly',
        'dedupe_key' => $dedupeKey,
        'payload' => $raw, // store the raw JSON string; decode when processing
        'received_at' => now(),
        'status' => 'pending',
        'created_at' => now(),
        'updated_at' => now(),
    ]);
} catch (QueryException $e) {
    // Ideally confirm this is a unique-constraint violation before treating
    // it as a duplicate, and rethrow other database errors.
    return response()->json(['duplicate' => true], 200);
}

dispatch(new \App\Jobs\ProcessWebhookInbox($dedupeKey));

return response()->json(['accepted' => true], 200);

Optional fast-path guard:

use Illuminate\Support\Facades\Redis;

$key = "webhook:dedupe:$dedupeKey";

// SET ... EX ... NX: only succeeds if the key does not already exist (24h TTL).
// Argument order mirrors the underlying SET command: value, EX, seconds, NX.
$ok = Redis::set($key, 1, 'EX', 86400, 'NX');

if (!$ok) {
    return response()->json(['duplicate' => true], 200);
}

Process once in background

Queue processing should be idempotent too, so job retries do not repeat external side effects. Pair this with DLQ + replay with idempotency for safe recovery.
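
A minimal sketch of the job, assuming the ProcessWebhookInbox class dispatched above; the guarded status transition is what keeps retries and concurrent workers from double-processing:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class ProcessWebhookInbox implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public string $dedupeKey) {}

    public function handle(): void
    {
        // Claim the row with a conditional update: only one worker (or retry)
        // can move it from 'pending' to 'processing'.
        $claimed = DB::table('webhook_inbox')
            ->where('dedupe_key', $this->dedupeKey)
            ->where('status', 'pending')
            ->update(['status' => 'processing', 'updated_at' => now()]);

        if ($claimed === 0) {
            return; // already claimed or processed; safe to exit quietly
        }

        $row = DB::table('webhook_inbox')
            ->where('dedupe_key', $this->dedupeKey)
            ->first();
        $payload = json_decode($row->payload, true);

        // ... apply side effects idempotently here (see checklist below) ...

        DB::table('webhook_inbox')
            ->where('dedupe_key', $this->dedupeKey)
            ->update(['status' => 'processed', 'updated_at' => now()]);

        // A production version also needs a failure path that resets stranded
        // 'processing' rows or routes them to a DLQ for replay.
    }
}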

Test steps:

  1. Send the same signed payload twice.

curl -i -X POST "http://localhost:8000/webhooks/sendpromptly" \
  -H "Content-Type: application/json" \
  -H "X-SP-Timestamp: 1700000000" \
  -H "X-SP-Signature: <valid_signature_here>" \
  --data '{"event_key":"order.created","payload":{"order_id":"O-1001"}}'

  2. Re-run the exact same command.

Expected: first call 200 {"accepted":true}, second call 200 {"duplicate":true} (or similar), and only one job processes.
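
To automate the same check, a minimal feature-test sketch, assuming signature verification is faked or disabled in the test environment and using the route and table names from above:

use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Support\Facades\DB;
use Tests\TestCase;

class WebhookDedupeTest extends TestCase
{
    use RefreshDatabase;

    public function test_duplicate_delivery_is_short_circuited(): void
    {
        $body = ['event_key' => 'order.created', 'payload' => ['order_id' => 'O-1001']];

        // postJson encodes the body identically both times, so the raw bytes
        // (and therefore the dedupe hash) match.
        $first  = $this->postJson('/webhooks/sendpromptly', $body);
        $second = $this->postJson('/webhooks/sendpromptly', $body);

        $first->assertOk()->assertJson(['accepted' => true]);
        $second->assertOk()->assertJson(['duplicate' => true]);

        // Exactly one inbox row, no matter how many attempts arrived.
        $this->assertSame(1, DB::table('webhook_inbox')->count());
    }
}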

Replay safety checklist

Side-effect boundaries (billing, emails, inventory)

Mark every side effect boundary explicitly and require an idempotency guard before charging cards, sending emails, decrementing inventory, or mutating irreversible state.

Idempotent writes (upserts) not inserts

Use upserts, uniqueness constraints, and state-transition guards so replaying the same event does not create duplicate rows or actions.
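
For example, a hypothetical order write using Laravel's upsert, keyed on an external ID (table and column names are illustrative) so a replay updates the existing row instead of inserting a second one:

use Illuminate\Support\Facades\DB;

DB::table('orders')->upsert(
    [[
        'external_id' => $payload['order_id'],
        'status'      => 'created',
        'updated_at'  => now(),
    ]],
    ['external_id'],           // conflict target: needs a unique index
    ['status', 'updated_at']   // columns updated when the row already exists
);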

Common failure modes checklist:

  • No unique constraint means concurrent attempts can both process.
  • Dedupe key built from decoded-then-re-encoded JSON can change the bytes, so a retry of the same payload hashes differently and slips through.
  • Too-short TTL allows delayed retries to reprocess later.
  • Dedupe key built from an overly broad or wrong subset can collide unrelated events.
  • Non-idempotent side effects (charge card, send email) run again with no guard.
  • Queue retry duplicates re-run side effects because the job itself is not idempotent.

Key takeaways

  • Duplicate delivery attempts are normal in at-least-once systems.
  • Use a durable unique key in storage as your primary dedupe control.
  • Add Redis SETNX as an optimization, not as your only guarantee.
  • Make background jobs idempotent, not just the HTTP endpoint.
  • Validate behavior with repeated payload tests before production rollout.

Verify in Message Log: Ensure your endpoint returns 2xx consistently while your app processes exactly once.