Understanding Stored Requests in Prebid Server: A Practical Guide for Publishers

Configuring Prebid Server can feel daunting—especially when your header bidding setup spans dozens (or hundreds) of placements, formats, and partner requirements. Maintaining gigantic, repetitive auction request payloads is both error-prone and a major time sink for publisher ad ops teams.
Stored Requests offer a smarter way to centralize, reuse, and efficiently manage auction configuration. Whether you’re running a single site or a large multi-property operation, understanding Stored Requests in Prebid Server is essential to streamline operations and achieve more predictable revenue outcomes.
What Are Stored Requests in Prebid Server?
Stored Requests let you move repetitive, static configuration for auctions out of each incoming HTTP call and into the server’s backend or file system. Instead of sending the full OpenRTB payload on every request, you reference an ID and let Prebid Server merge in the necessary settings at runtime. This dramatically reduces data transfer, centralizes config management, and cuts down setup errors.
How Stored Requests Work in Practice
Imagine you have a set of display placements (say, a 728×90 leaderboard and a 160×600 skyscraper) across multiple pages. Rather than copying their full configurations into every bid request, you save each setup under a specific ID (such as ‘home-leaderboard’ or ‘sidebar-skyscraper’). Future auction calls reference these IDs, and Prebid Server dynamically looks up and injects the details—banner sizes, bidder params, price granularity, and more—at the time of auction. If any parameters need to vary per request (like placement ID), these can be provided in the HTTP call and will override the stored values where specified.
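As a concrete sketch, an incoming auction call can carry little more than a reference. Prebid Server looks up stored configuration via the ext.prebid.storedrequest extension; the host, IDs, and sizes here are illustrative:

```json
{
  "id": "auction-123",
  "imp": [
    {
      "id": "1",
      "ext": {
        "prebid": {
          "storedrequest": { "id": "sidebar-skyscraper" }
        }
      }
    }
  ]
}
```

At auction time, the server fetches the stored Imp definition for ‘sidebar-skyscraper’ and merges it into this skeleton, so the caller never ships banner sizes or bidder params over the wire.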
Implementing Stored Requests: Configuration and Use Cases
Prebid Server supports multiple storage backends for Stored Requests, including the local filesystem, HTTP endpoints, and databases like Postgres. Which backend you choose will shape your workflow and operational overhead.
Filesystem Example
For small or test environments, you can configure Prebid Server to read static JSON files directly from local storage. Set the ‘stored_requests.filesystem.enabled’ flag, then place your request definitions in structured folders by ID. Each auction HTTP request then references these IDs, and the server merges the contents as needed.
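A minimal filesystem setup might look like the following. Exact key names and default paths vary between Prebid Server versions and between the Go and Java implementations, so treat this as a sketch and check your version’s documentation:

```yaml
# Illustrative Prebid Server config fragment (key names may differ by version)
stored_requests:
  filesystem:
    enabled: true
    directorypath: ./stored_requests/data/by_id
```

With this in place, a file such as ./stored_requests/data/by_id/home-leaderboard.json would hold the stored definition, and incoming requests reference it by the filename-derived ID.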
Database and HTTP Backends
As your operation scales, moving Stored Requests to a database or an API endpoint becomes essential. For example, a larger publisher might store all site and bidder configs in Postgres, enabling centralized updates. Prebid Server queries these backends for the needed IDs, dramatically easing fleet-wide changes and supporting dynamic, real-time updates without server restarts.
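A database-backed setup typically pairs connection details with a fetch query that resolves the IDs referenced in each auction. The keys and query placeholders below are illustrative (they differ between the Go and Java implementations and across versions), so verify them against your deployment’s docs:

```yaml
# Illustrative sketch of a Postgres-backed Stored Requests config
stored_requests:
  database:
    connection:
      driver: postgres
      host: db.example.com   # hypothetical host
      port: 5432
      dbname: prebid
      username: prebid_ro
    fetcher:
      # Placeholder syntax for the ID list varies by implementation
      query: SELECT id, requestData, type FROM stored_requests WHERE id IN %REQUEST_ID_LIST%
```

The operational win is that updating a row in this table changes behavior fleet-wide on the next fetch, with no server restart or redeploy.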
Combining Stored Request Types
You’re not limited to storing entire auction payloads. You can separate and store just the BidRequest (global auction parameters), just the Imp objects (per-placement details), or both. Prebid Server applies global settings first, then merges in line-item (Imp) details based on incoming references. This supports highly reusable, DRY (Don’t Repeat Yourself) configurations. For example, the general auction may have a standard timeout and targeting settings, while each placement pulls from its own Imp template.
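Combining both levels looks like this in an incoming request: one ID for global auction defaults, another per Imp. The IDs are illustrative:

```json
{
  "id": "auction-456",
  "ext": {
    "prebid": {
      "storedrequest": { "id": "site-defaults" }
    }
  },
  "imp": [
    {
      "id": "1",
      "ext": {
        "prebid": {
          "storedrequest": { "id": "home-leaderboard" }
        }
      }
    }
  ]
}
```

Here ‘site-defaults’ might carry the timeout and targeting settings shared by every auction on the site, while ‘home-leaderboard’ carries only the placement-specific banner formats and bidder params.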
Merging Logic and Error Handling
When both the stored request and the HTTP payload provide the same fields, Prebid Server applies a JSON Merge Patch (RFC 7396): fields in the live HTTP request overwrite those from storage. This is both a benefit (per-request flexibility) and a potential pitfall (if field names clash or the stored data falls out of sync).
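To make the merge semantics concrete, here is a minimal Python sketch of RFC 7396 merge-patch behavior. This illustrates the rules only; it is not Prebid Server’s actual implementation, and the field values are invented:

```python
def merge_patch(target, patch):
    """RFC 7396 JSON Merge Patch: objects merge recursively,
    arrays and scalars replace wholesale, and null deletes a key."""
    if not isinstance(patch, dict):
        return patch  # non-object patch replaces the target entirely
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # explicit null removes the field
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# Stored config: a timeout and one banner imp.
stored = {"tmax": 500,
          "imp": [{"id": "1", "banner": {"format": [{"w": 300, "h": 250}]}}]}
# Live request: its own id and a tighter timeout.
incoming = {"id": "auction-123", "tmax": 300}

merged = merge_patch(stored, incoming)
# The live tmax wins; the stored imp array survives untouched.
```

Note the array rule in particular: had the live request included its own "imp" list, it would have replaced the stored list entirely rather than merging with it.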
Common Pitfalls and Debugging Tips
A typical mistake is unintentionally overwriting essential stored values with partial or conflicting HTTP data. If a field such as the Imp ID or banner format is absent from the live request, Prebid Server falls back to the stored value; if present, the live value takes precedence. Bear in mind that while nested objects merge recursively, arrays and scalar values are replaced wholesale rather than combined element by element—a live request that sends a single banner format silently discards the entire stored format list, which can lead to unexpected auction behavior.
To debug, inspect the actual OpenRTB payload Prebid Server generates after merging. Most errors stem from ID mismatches or unexpected parameter overrides in live requests. Centralized version control for stored configs is highly recommended.
Scaling and Real-Time Management with Caching and Events
High-traffic publishers need efficient access and update strategies for stored configs. Prebid Server allows fetching, caching, and event-listening layers for maximum performance and flexibility. You can use in-memory caches to minimize database hits and define event listeners to update or invalidate cached data in real time.
Real World Example: Hybrid Backend with In-Memory Cache
A common setup involves using a primary database (e.g., Postgres) for persistent config storage, a fallback HTTP endpoint for redundancy, and in-memory caching in Prebid Server nodes for speed. Event listeners—such as polling an update endpoint—ensure the freshest configs. As new configs are saved or invalidated, Prebid Server can push those changes instantly into its cache, reducing hot patching or manual restarts.
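An in-memory cache layer for this hybrid setup might be configured along these lines. As with the earlier fragments, the key names and defaults are illustrative and version-dependent, so confirm them against your implementation’s reference:

```yaml
# Illustrative cache config sketch (key names vary by version/implementation)
stored_requests:
  in_memory_cache:
    type: lru          # bounded LRU cache in front of the database
    ttl-seconds: 300   # entries expire after 5 minutes
```

Tuning the TTL is the key trade-off: a short TTL picks up config changes quickly at the cost of more backend hits, while a long TTL leans on event-driven invalidation to stay fresh.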
What This Means for Publishers
For publishers, Stored Requests are a game-changer in operational efficiency. Centralizing your auction and placement config permits easier version control, reduces the risk of misconfigurations, and allows site or ad ops teams to push global changes without updating multiple client integrations or rolling restarts. They’re particularly useful for large portfolios, fast-changing ad partnerships, or multi-site setups where consistency and speed matter for revenue and troubleshooting.
Practical Takeaway
To get started with Stored Requests, assess your current header bidding payloads for repetitive or static configs—these are prime candidates for centralization. Roll out file-based stored requests for a proof of concept, then migrate to a database or API-backed model for production scale. Always implement robust versioning and change tracking to ensure any updates are intentional and traceable.
Train your ad ops teams to interpret merge logic: fields sent via live requests always win, but improper overrides or missing IDs are frequent sources of error. Leverage Prebid Server’s caching and event modules to keep live nodes in sync without downtime. Finally, document your ID conventions and config standards across your operation for consistent, scalable header bidding management.