If you are seeing “make pagination missing records” in Make (formerly Integromat), the problem is almost never random. It is typically a predictable interaction between the API’s paging model, your sorting/filtering strategy, and how your scenario iterates pages during Make troubleshooting.
In practice, the stakes are high: a skipped page can silently drop invoices, CRM updates, support tickets, or inventory deltas. That is why you need both a correct pagination strategy and a proof of completeness, not just “it ran without errors.”
Beyond missed items, pagination fixes can introduce a second failure mode: retry behavior and replay runs can create duplicates if you do not design idempotent writes. So the right approach addresses missing records and duplicate writes together.
Below is a field-tested method to diagnose the root cause, implement stable paging, validate coverage, and harden the scenario so it stays correct as your dataset grows and changes.
What does “make pagination missing records” mean in Make, and how do you confirm it fast?
It means your scenario completes successfully but returns fewer distinct items than the source system actually contains for the same query window, so the gap is logical rather than a runtime error.
To move from suspicion to certainty, you need a quick verification loop that compares what Make processed against what the API claims exists, and that is the first anchor step in make troubleshooting.

What are the “tell-tale symptoms” that records are being skipped?
The strongest signal is a repeatable mismatch between expected count and processed count, especially when the API supports a “total,” “count,” or “has_more/next” indicator.
Next, look for a “hole” in ordering—IDs jump forward, timestamps skip a range, or you notice that items around page boundaries never appear.
- Boundary gaps: items you know exist around page 2/page 3 never show up.
- Inconsistent results: rerunning the same query window yields a different subset.
- Silent truncation: Make run ends without errors, but your downstream store is incomplete.
How do you confirm missing records with a minimal “control” test?
Use a narrow time window or a filter that produces 2–5 pages, then compare page-by-page output. If the API provides a stable unique key (e.g., id), you can export keys from Make and compare to the source keys for the same filter.
To make this practical without code blocks, use a table-based checklist and keep the test deterministic.
This table helps you confirm whether the gap comes from the API, from your paging logic, or from unstable sorting.
| Control check | What you compare | What it proves |
|---|---|---|
| API total vs processed | API “total/count” vs number of unique IDs written | Whether you are missing items at all |
| Page boundary diff | Last item on page N vs first item on page N+1 | Whether offset/cursor is drifting or sorting is unstable |
| Rerun consistency | Same window, same filter, same page size | Whether data mutations are invalidating offset pagination |
Which single “metadata field” should you always log in Make during diagnosis?
Log a stable identifier per item (e.g., id) and the pagination state (offset/page/cursor token). If you cannot reconstruct which cursor produced which records, you cannot isolate where the gap begins.
To connect this to later fixes, you will reuse the same logged fields as your “proof-of-completeness” artifacts.
Why do APIs skip pages or repeat items when you paginate in Make (make troubleshooting)?
APIs typically skip or repeat items when your pagination model is not stable under change, meaning records are inserted/updated/deleted while you are paging, or your sort order is not deterministic.
To fix this cleanly, you must understand whether you are dealing with offset/limit pagination, page-number pagination, or cursor-based pagination—and then align your Make loop to that model.

How does offset/limit pagination cause “missing records” under live data?
Offset/limit effectively says “skip the first N rows.” If new rows are inserted before your current offset (or rows are deleted), the “meaning” of offset N shifts, and you can skip items or see duplicates.
However, the real break often comes from unstable sorting: if the API sorts by a non-unique field (like updated_at) without a tie-breaker (like id), results can reorder between requests.
- Offset drift: insertions push rows forward; you jump past items.
- Deletion shift: deletions pull rows backward; you re-read items.
- Non-deterministic sort: ties reorder; boundaries become unreliable.
According to Gusto’s Embedded Engineering Blog (November 2025), offset pagination is simple but can become unreliable under dynamic datasets, while cursor-based pagination is typically more predictable for “continue from last item” flows.
When does page-number pagination break in practice?
Page-number pagination breaks for the same reasons as offset: “page 3” is just an offset calculation. If the underlying collection changes during the run, page boundaries shift and you can miss items.
To bridge into solutions, you should treat page numbers as a convenience UI concept, not a correctness guarantee for automation.
Why is cursor-based pagination usually safer for Make scenarios?
Cursor-based pagination uses a “next” token that points to a position in the result set. It is usually safer because it tells the server, “continue from here,” rather than “skip N.”
That said, cursor safety depends on the cursor semantics: some cursors are time-based, others are opaque; you must store and reuse the token correctly inside your Make loop.
How do you implement reliable pagination in Make (HTTP module) without missing records?
You implement reliable pagination by choosing the API’s intended paging mechanism, iterating until the API signals completion, and persisting the paging state so retries and reruns do not skip or replay unseen pages.
To make that work in Make, build the loop around a single source of truth: the cursor/offset value you store, increment, and log on every request.

How do you build a safe loop for cursor-based APIs?
Start with an initial request (no cursor), capture the “next” token, and continue requesting while the token exists. The critical point is: never “compute” next; always use what the API returned.
This table gives a practical blueprint for mapping cursor pagination into Make variables and module outputs.
| Loop component | What you store in Make | Why it prevents missing records |
|---|---|---|
| Current cursor | Scenario variable (e.g., cursor) | Ensures every request continues from the prior position |
| Next cursor | Parsed response field (e.g., next, next_cursor) | Uses API’s canonical continuation pointer |
| Stop condition | Next cursor is empty / has_more=false | Prevents premature exit and infinite loops |
| Progress log | Cursor + first/last item id per page | Creates an audit trail to locate gaps |
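In Make you would build this loop with a Repeater or a self-referencing scenario variable, but the control flow is easiest to verify as a short sketch first. The following Python sketch assumes a hypothetical `fetch_page(cursor)` helper and the response field names `items` and `next_cursor`; adapt both to your actual API.

```python
def paginate_cursor(fetch_page):
    """Collect every item by following the API's continuation token.

    fetch_page(cursor) is a hypothetical callable returning a dict like
    {"items": [...], "next_cursor": "..." or None}. The field names are
    illustrative; match them to your API's response.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        # Never compute the next cursor yourself; reuse what the API returned.
        cursor = page.get("next_cursor")
        if not cursor:  # canonical stop signal: no continuation token
            break
    return items

# Simulated three-page API: the None key is the initial (cursorless) request.
pages = {
    None: {"items": [1, 2], "next_cursor": "c2"},
    "c2": {"items": [3, 4], "next_cursor": "c3"},
    "c3": {"items": [5], "next_cursor": None},
}
assert paginate_cursor(lambda c: pages[c]) == [1, 2, 3, 4, 5]
```

The same shape maps onto Make directly: the `cursor` variable is your scenario variable, `fetch_page` is the HTTP module, and the `if not cursor` test is your loop’s filter/stop condition.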
How do you handle offset pagination safely when cursor is not available?
If you must use offset, stabilize the ordering. Sort by a unique, monotonic key when possible (e.g., created_at + id tie-breaker), and freeze the dataset with a time-bounded window.
Next, increment offset by the exact number of items returned (not always the requested limit), because some APIs return fewer items on intermediate pages due to filtering or permissions.
- Stabilize sort: add a deterministic tie-breaker if the API supports it.
- Fix the window: “created between T0 and T1” to reduce live churn.
- Advance correctly: offset = offset + items_returned.
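The bullet points above can be sketched as a loop; this is a minimal illustration assuming a hypothetical `fetch(offset, limit)` helper that may return fewer items than requested on any page:

```python
def paginate_offset(fetch, limit=2):
    """Offset pagination that advances by the number of items actually
    returned, not by the requested limit.

    fetch(offset, limit) is a hypothetical callable returning a list.
    Some APIs return short pages mid-run (filtering, permissions), so
    advancing by `limit` would skip rows.
    """
    items, offset = [], 0
    while True:
        page = fetch(offset, limit)
        if not page:           # only safe as a stop signal if the API has
            break              # no separate has_more/next indicator
        items.extend(page)
        offset += len(page)    # advance by what came back, not by limit
    return items

data = list(range(7))
assert paginate_offset(lambda o, l: data[o:o + l]) == data
```

Pair this with a frozen time window and a deterministic sort, as described above; the advancing rule alone does not protect you from live insertions before your current offset.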
What is the simplest “end-to-end correctness” test after you change pagination?
Rerun the same frozen window twice and compare: (1) unique IDs count, (2) min/max timestamps, and (3) whether every downstream write is idempotent. If any of those vary, your pagination is still unstable.
To deepen the test, you can add a reconciliation run that only counts IDs, without writing business effects.
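A minimal sketch of that reconciliation comparison, assuming you have exported the unique IDs from two reruns of the same frozen window (the field names are illustrative):

```python
def reconcile(run_a_ids, run_b_ids):
    """Compare two reruns of the same frozen window.

    Any asymmetric difference means pagination is still unstable:
    a stable window must yield the same unique-ID set every time.
    """
    a, b = set(run_a_ids), set(run_b_ids)
    return {
        "only_in_a": sorted(a - b),
        "only_in_b": sorted(b - a),
        "stable": a == b,
    }

report = reconcile([1, 2, 3, 4], [1, 2, 4])
assert report["only_in_a"] == [3] and not report["stable"]
```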
How can you prove you did not miss records before writing to the destination system?
You can prove coverage by adding a lightweight completeness layer: reconcile counts, checkpoint pages, and detect gaps in the ID/timestamp sequence before committing writes that are hard to undo.
After that, you convert the proof into operational telemetry so future runs self-diagnose rather than silently failing.

Which “proof signals” are most reliable across APIs?
The most reliable signals are those that come from the API itself (total/next/has_more) and those you compute from stable identifiers (unique id sets). Avoid relying solely on Make bundle counts if your flow includes filters/routers.
- API-reported total: best when available, but confirm it respects your filters.
- Unique ID cardinality: robust when id is stable and unique.
- Boundary continuity: last id of page N should logically precede first id of page N+1 under a stable sort.
How do checkpointing and “resume tokens” prevent silent gaps?
Checkpointing means you store the last successful pagination state (cursor/offset and a boundary id). If the scenario times out or fails, you resume from the checkpoint instead of restarting and hoping the API returns the same ordering.
To connect this to Make operations, store checkpoints in a Data Store, a database, or even a low-friction sheet—what matters is durability and traceability.
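As a sketch of the checkpoint shape, here is the logic you would implement with a Make Data Store; the JSON file below is only a stand-in for that durable store, and the field names are illustrative:

```python
import json
import os

CHECKPOINT_FILE = "pagination_checkpoint.json"  # stand-in for a Make Data Store

def save_checkpoint(cursor, last_id):
    """Persist the last successful paging state after each page is
    fully written downstream, so a failed run can resume from here."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"cursor": cursor, "last_id": last_id}, f)

def load_checkpoint():
    """Return the saved state, or a fresh start if no checkpoint exists."""
    if not os.path.exists(CHECKPOINT_FILE):
        return {"cursor": None, "last_id": None}
    with open(CHECKPOINT_FILE) as f:
        return json.load(f)

save_checkpoint("c42", "item-1007")
state = load_checkpoint()
assert state["cursor"] == "c42" and state["last_id"] == "item-1007"
```

The key design point: write the checkpoint only after the page’s downstream writes succeed, so a resume never skips a page that was read but not yet persisted.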
When should you block writes and run a reconciliation-only pass?
Block writes when (1) you are migrating a scenario, (2) the API is known to be eventually consistent, or (3) you cannot guarantee stable sorting. In those cases, first collect IDs and validate coverage, then replay writes from the validated set.
That two-phase approach reduces the blast radius of a pagination mistake to a “retryable read,” not a permanent data loss.
What Make scenario design mistakes commonly lead to missing pages (make troubleshooting)?
The most common causes are premature stop conditions, incorrect variable scoping, and rate-limit behavior that truncates or alters page results without throwing a hard error.
To fix them, you need to treat pagination as a first-class control flow with explicit state, rather than an implicit “repeat until it works.”

How do stop conditions accidentally cut pagination short?
A classic bug is stopping when “items returned < limit,” even though some APIs return fewer items for intermediate pages due to permissions, filters, or partial availability. Another is ignoring the API’s “next” field and assuming the last page is when “page == total_pages.”
To prevent this, always use the API’s canonical stop signal (has_more/next is empty) when provided.
How do variable scopes and routers break paging state in Make?
When you update the cursor/offset inside one route but read it from another, your loop can reuse an old cursor and either repeat pages or skip forward unexpectedly. This is especially common when pagination is mixed with conditional routers and error handlers.
To keep state consistent, centralize cursor updates in one place and feed all routes from the same “current page” output.
How do rate limits and timeouts turn into “missing records” instead of visible errors?
Some APIs respond to overload with partial data, empty pages, or inconsistent paging tokens. If your Make scenario treats empty pages as “done,” you silently stop early.
To harden this, add explicit handling: if the API returns an empty page but still returns a next token or indicates more data, retry with backoff rather than stopping.
How do you avoid duplicates while fixing pagination so retries do not corrupt data?
You avoid duplicates by making downstream writes idempotent: each source record maps to exactly one destination record, and retries resolve to “update existing” rather than “create new.”
From there, you can safely re-run backfills and recovery jobs without turning a missing-record fix into a data-quality incident.

What is the safest idempotency model for Make scenarios?
The safest model is: use a stable source key (or a derived composite key) as a unique constraint in the destination, then perform upserts. If the destination cannot enforce uniqueness, implement a lookup-first pattern and only create when absent.
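A minimal sketch of the lookup-first upsert pattern, using a dict as a stand-in for a destination that cannot enforce uniqueness itself:

```python
def upsert(store, key, record):
    """Lookup-first write: create only when the key is absent,
    otherwise update in place, so a retry resolves to "update
    existing" rather than "create new".
    """
    if key in store:
        store[key].update(record)   # retry/replay lands here
    else:
        store[key] = dict(record)
    return store[key]

db = {}
upsert(db, "src-1", {"name": "Ada", "status": "new"})
upsert(db, "src-1", {"status": "processed"})   # simulated retry
assert len(db) == 1 and db["src-1"]["status"] == "processed"
```

In Make, the equivalent is a "Search records" module followed by a router that branches to "Update" when a match exists and "Create" otherwise, keyed on the stable source identifier.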
In real operations, this discipline is worth standardizing across scenarios, because it prevents both duplicates and silent partial writes during incident recovery.
According to Stripe’s Engineering Blog (February 2017), idempotency keys enable safe retries by making repeated requests return the same outcome for the same key rather than duplicating side effects.
Where do duplicates usually sneak in when you “fix missing pages”?
Duplicates often appear when you backfill a wider window after changing pagination, but your destination write is still “create-only.” Another common trigger is an error handler that retries the same bundle without a unique write constraint.
In this context, teams report the symptom as “make duplicate records created” even though the root cause is usually lack of idempotent writes or missing dedup keys in the destination mapping.
How do you design a dedup key when the API has no stable id?
Use a composite key built from immutable fields (e.g., external_reference + created_at + type). If immutability is uncertain, incorporate a content hash (normalized JSON fields) to stabilize identity across runs.
This table shows common dedup key strategies and when each one is appropriate.
| Dedup strategy | Key example | Best for |
|---|---|---|
| Native ID | source_id | Most APIs with stable identifiers |
| Composite immutable | order_number + created_at | Systems with strong business keys |
| Content hash | sha256(normalized_fields) | Event streams without stable IDs |
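The content-hash strategy from the table can be sketched as follows; the field names are illustrative, and the normalization rules (trim, lowercase) are assumptions you should adjust to your data:

```python
import hashlib
import json

def dedup_key(record, fields):
    """Derive a stable identity from normalized immutable fields when
    the API exposes no native id. Normalization (strip + lowercase)
    keeps cosmetic differences from producing different keys.
    """
    normalized = {k: str(record.get(k, "")).strip().lower() for k in fields}
    payload = json.dumps(normalized, sort_keys=True)  # deterministic order
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

a = {"order_number": "A-100 ", "created_at": "2024-01-01", "amount": 5}
b = {"order_number": "a-100", "created_at": "2024-01-01", "amount": 7}
# Same business identity -> same key, even though a mutable field differs
assert dedup_key(a, ["order_number", "created_at"]) == \
       dedup_key(b, ["order_number", "created_at"])
```

Note that only fields listed in `fields` contribute to identity, which is exactly why the key must be built from immutable fields: including a mutable field like `amount` would make the same record hash differently across runs.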
How do you handle missing fields and “empty payload” issues during pagination in Make?
You handle missing fields by validating schema expectations per page, using safe defaults for optional fields, and distinguishing between “no items on this page” and “items exist but fields are null/omitted.”
Once that is clear, you can keep pagination correct while preventing downstream modules from failing or misinterpreting empty data.

Why does “empty payload” often get mistaken for “end of pagination”?
Some APIs return empty arrays temporarily (eventual consistency, permission filtering, throttling) while still providing a next token. If your Make loop stops on empty arrays, you can miss the remaining pages.
To avoid that, key your stop condition to the API’s continuation signal, not to the payload length alone.
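That decision rule is small enough to express directly; this sketch assumes a parsed response dict with illustrative `items`/`next_cursor` field names:

```python
def stop_or_retry(page):
    """Decide the loop action from the API's continuation signal,
    not from payload length alone."""
    if page.get("next_cursor"):
        # More data is signalled: an empty items list is a retryable
        # condition (throttling, eventual consistency), not "done".
        return "retry" if not page["items"] else "continue"
    return "stop"

assert stop_or_retry({"items": [], "next_cursor": "c9"}) == "retry"
assert stop_or_retry({"items": [1], "next_cursor": "c9"}) == "continue"
assert stop_or_retry({"items": [1], "next_cursor": None}) == "stop"
```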
How do you keep downstream modules stable when fields are missing or null?
Use defensive mapping: treat optional fields as optional, provide placeholders for required downstream inputs, and branch only when a field is truly absent versus empty. In Make, this usually means mapping with defaults and placing validation logic before write operations.
Some teams label this symptom “make missing fields empty payload” because the visible break happens in mapping, but the underlying cause is inconsistent field presence across pages or objects.
According to the Postmark Engineering Blog (June 2019), webhook delivery is commonly “at least once,” so receivers should expect retries and missing/late data and design idempotent processing rather than assuming exactly-once behavior.
When should you fail fast instead of patching with defaults?
Fail fast when the missing field indicates corruption or a contract break (for example, an id field is missing, or a timestamp that defines ordering is absent). In that case, stop and alert, because continuing can create incorrect paging state and compound data loss.
When the field is optional (e.g., phone, secondary address, description), defaulting is acceptable as long as you log it for later review.
Up to this point, the focus has been on making pagination correct and auditable for typical Make scenarios. What follows is a compact set of advanced guardrails that help at scale: high volume, slow APIs, long runs, and frequent reruns.
Advanced guardrails for long-running pagination jobs in Make
You harden long-running pagination jobs by combining observability, adaptive throttling, and reconciliation so that a slow or flaky API cannot silently degrade your coverage over time.
After that, you convert these guardrails into reusable templates so future scenarios inherit the same correctness without re-learning the same failure modes.

How do you instrument pagination runs so you can replay safely?
Assign each run a correlation ID and log: (1) paging state per request, (2) first/last item id per page, (3) destination upsert key used, and (4) retry attempts. Store these logs outside the scenario if you need long retention.
This lets you replay from a specific cursor and prove that a replay run only “fills gaps,” not duplicates existing rows.
How do you tune throttling, backoff, and timeouts without losing pages?
Use an exponential backoff strategy on transient errors (429/5xx), but ensure the retry reuses the same cursor/offset and does not advance state unless the response is valid. Also, treat empty pages during overload as “retryable” if a continuation signal indicates more data.
If your scenario must run within a tight execution window, split the workload: paginate IDs first, then process records in batches.
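The retry rule above (back off exponentially, but never advance the cursor on a failed response) can be sketched like this; `TransientError` and `fetch(cursor)` are hypothetical stand-ins for a 429/5xx response and your HTTP call:

```python
import time

class TransientError(Exception):
    """Stand-in for a 429 / 5xx response from the API."""

def fetch_with_backoff(fetch, cursor, max_attempts=5, base_delay=0.01):
    """Retry transient failures with exponential backoff while reusing
    the SAME cursor, so paging state never advances on a bad response.
    """
    delay = base_delay
    for attempt in range(max_attempts):
        try:
            return fetch(cursor)        # identical cursor on every attempt
        except TransientError:
            time.sleep(delay)
            delay *= 2                  # d, 2d, 4d, ...
    raise RuntimeError(f"gave up after {max_attempts} attempts at {cursor!r}")

# Simulated API that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_fetch(cursor):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError()
    return {"items": [7], "next_cursor": None}

assert fetch_with_backoff(flaky_fetch, "c7")["items"] == [7]
assert attempts["n"] == 3
```

In Make, the same idea maps onto a Break error handler with incremental retry intervals; the important part is that the scenario variable holding the cursor is only updated after a valid response.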
How do you reconcile completeness over days or weeks without re-reading everything?
Run a periodic “delta reconciliation” job that compares counts and boundary markers for fixed windows (e.g., daily created_at ranges). If a day’s window has fewer processed IDs than expected, queue a targeted backfill for that day only.
This table shows what to track in a delta reconciliation job and how it helps you find gaps quickly.
| Metric | Tracked per | How it helps |
|---|---|---|
| Expected count | Time window | Detects under-collection without scanning all IDs |
| Min/Max id or timestamp | Time window | Detects boundary truncation |
| Missing page markers | Run logs | Pinpoints the exact cursor/offset where gaps began |
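A minimal sketch of the under-collection check from the table, assuming you can obtain an expected count per window (e.g., from the API’s filtered total) and the unique IDs you actually processed:

```python
def find_gap_windows(expected_by_day, processed_ids_by_day):
    """Return the windows whose processed unique-ID count falls short
    of the expected count, i.e., candidates for targeted backfill."""
    gaps = []
    for day, expected in expected_by_day.items():
        got = len(set(processed_ids_by_day.get(day, [])))
        if got < expected:
            gaps.append({"day": day, "expected": expected, "got": got})
    return gaps

expected = {"2024-05-01": 3, "2024-05-02": 2}
processed = {"2024-05-01": ["a", "b", "b"], "2024-05-02": ["x", "y"]}
# Duplicate "b" collapses to one unique ID, exposing the missing third item.
assert find_gap_windows(expected, processed) == [
    {"day": "2024-05-01", "expected": 3, "got": 2}
]
```

Counting distinct IDs (rather than raw bundles) is what makes this robust: duplicates from retries do not mask a genuine gap.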
FAQ: What should you do when the API is “correct” but Make still misses records?
If the API is stable but you still see make pagination missing records, the issue is usually in control flow (stop conditions, variable scope), error handling (empty page treated as done), or downstream routing that drops bundles.
- Do you stop early? Ensure stop conditions rely on has_more/next, not payload length alone.
- Do you mutate state in multiple routes? Centralize cursor updates and log them.
- Do you drop bundles downstream? Verify routers/filters are not filtering out items unintentionally.
- Do retries change behavior? Ensure retries reuse the same paging state and writes are idempotent.

