    Developer workflow: custom consent code verified by a runtime privacy audit


    Own Your Consent Layer: Vibe Code, AI, and Runtime Proof With SecureSpells

    The black box: when configuration and reality diverge

    In EU/EEA contexts, reviewers care about observable behaviour—requests and scripts before valid consent—not only a CMP admin toggle. Treat consent like correctness: explicit gating, clean browser tests, and a runtime audit on staging (then production) as the channel that shows what still fires too early.

    The shift: code you ship vs. a script you rent

    Large third-party CMP bundles can add weight to the critical path and hide behaviour behind a hosted configuration UI. None of that removes your responsibility for what executes on your origin. A practical middle path for engineering-led teams:

    1. Prompt for a small, auditable first-party module (vanilla JS or your framework’s equivalent)—you keep the diff in Git.
    2. Deploy to staging with the same tag inventory you run in production.
    3. Verify with a runtime audit (SecureSpells or equivalent)—evidence beats dashboard toggles.
    4. Iterate until pre-consent network and storage match your policy.

    That is “fast iteration with taste,” with one non-negotiable: the browser trace is the referee. Skipping a commercial CMP for a leaner first-party gate does not mean skipping ongoing checks—monitoring and re-scanning (including subscription-style schedules; see why subscription beats a one-off GDPR audit) are about proof after every deploy, not about renting someone else’s banner code.

    Test-driven privacy (in the sense used here) means: define the behaviour you want (no non-essential trackers before consent), measure it in a real browser, change code when the measurement fails, repeat. It is not a legal framework; it is an engineering discipline paired with GDPR/ePrivacy expectations around storage and access before consent.

    Scope: EU/EEA GDPR and ePrivacy Directive (cookies and similar identifiers on websites). UK GDPR follows the same broad principles for consent and transparency. This article is educational, not legal advice.


    Runtime compliance audit

    An automated check that loads your URL in a real browser session and inspects what executes or leaves the browser (for example script lifecycle and network requests) under a defined consent state—contrasted with static HTML or cookie-inventory-only tools in runtime vs static scanning.

    Evidence-first consent

    An implementation stance where “done” means you can point to reproducible technical proof that non-essential tags do not initialise or send data before consent—not only that a banner renders.

    Default deny (technical)

    A loading strategy where third-party analytics and marketing bootstraps are inert until an explicit branch (user accept / granular allow) runs; often combined with delayed tag-manager injection or type="text/plain" patterns for controlled activation.


    The trade-off: performance and control vs proof and maintenance

    Swapping a vendor CMP (Cookiebot, OneTrust, or another hosted product) for a first-party consent layer is a familiar engineering trade-off: you keep third-party JavaScript off the critical path and hold full control in Git, but you inherit the burden of proof, UX obligations, and ongoing maintenance. Building the banner UI is a fraction of the work; the rest is records, withdrawal, granularity, geo logic, and a verification loop every time marketing ships a tag.

    Technically: A strict default deny (do not load GTM or marketing scripts until your gate runs) is often faster to reason about than trusting a remote script to “auto-block” after it has already booted.

    Legally and operationally: This article focuses on runtime enforcement—what cookies and requests appear before valid consent. It does not write your privacy policy, reclassify vendors’ cookies for you, or track every change in how Google or Meta label “essential” vs “marketing.” Those questions stay with your organisation and qualified advisors.


    Why “toggle the CMP switch” fails engineering review

    Commercial CMPs can be the right operational choice for many organisations. The failure mode for developers is different: the CMP becomes a black box. Marketing edits containers, A/B tests swap snippets, and preview mode does not match production. Without a runtime check, you discover problems via complaint, DPA correspondence, or a competitor’s screenshot—not from your own pipeline.

    Common gaps (see also how trackers bypass cookie consent):

    | Failure pattern | What static checks often miss |
    | --- | --- |
    | Tag Manager loads early | GTM container present on first paint; tags fire before your banner logic runs |
    | “Consent mode” without hard gate | Signals update, but the tag still initialised too early for your risk model |
    | Race on fast connections | Script injects between your small inline script and a deferred gate |
    | Cached consent | Second visit looks “clean” while first visit is not |

    None of that is visible from “we installed the script.” It is visible from execution order and network evidence in a controlled run.


    The workflow: code → scan staging → read findings → refactor → re-scan

    Treat a runtime audit like a failing test in CI:

    1. Freeze the surface — Same staging URL, private window, no prior accept, same CMP/geo variant you ship.
    2. Run the audit — Execute a runtime scan against that URL (SecureSpells runs an isolated browser session and returns prioritised findings: severity, narrative, remediation guidance, and technical detail where available).
    3. Pick one hypothesis — Example: “Facebook or Meta endpoints should not appear before accept.” Match the finding to load order, GTM trigger, or your custom gate.
    4. Change one thing — Prefer load-order and injector control over cosmetic banner tweaks.
    5. Re-scan — Same profile. If the finding moves to passed for the checks you care about, you have a baseline; if marketing ships a new container, you repeat.

    This is the same feedback loop as test-driven development: red → green → refactor, except the “test” is a browser execution trace, not only a unit test on your consent state machine (you should still unit-test pure logic where it pays off).


    Lightweight implementation patterns (illustrative)

    The following patterns are teaching sketches, not drop-in production libraries. Your stack (SSR, hydration, third-party embeds, service workers) changes the correct design.

    Default deny for third-party injectors

    If Google Tag Manager (or another tag hub) is the root of early fires—often the same class of third-party trackers and GDPR compliance risks that bypass consent at the edges—do not load its bootstrap until after consent. That is stricter than “load GTM and let it internally wait,” and it is often the difference between hope and control.
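    A minimal sketch of that stricter gate, assuming a consent object your own module owns; the container id and the marketing category name are placeholders, not your real configuration:

    ```javascript
    // Hypothetical container id — replace with your own.
    const GTM_ID = "GTM-XXXXXXX";

    // Pure decision: the ONLY branch allowed to inject the GTM bootstrap.
    // Boolean() guards against null/undefined consent objects.
    function shouldInjectGtm(consent) {
      return Boolean(consent && consent.marketing === true);
    }

    // Side effect, guarded so the module is safe to import in SSR or tests.
    // Returns true only when the bootstrap was actually appended.
    function injectGtm(consent) {
      if (!shouldInjectGtm(consent)) return false;
      if (typeof document === "undefined") return false; // non-browser env
      const s = document.createElement("script");
      s.async = true;
      s.src = "https://www.googletagmanager.com/gtm.js?id=" + GTM_ID;
      document.head.appendChild(s);
      return true;
    }
    ```

    Keeping the decision in one pure function means the gate itself is unit-testable, while the runtime audit remains the referee for what the page actually loaded.
    
    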

    type="text/plain" for controlled activation

    A widely documented pattern is to keep marketing tags inert in the DOM until consent flips their type to text/javascript (or injects a fresh script). Pair that with a single owner module that decides when the flip is allowed—scatter if (consent) checks across dozens of templates and you will lose consistency.
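    A sketch of that single-owner module; the category attribute name is an assumption, and note that changing the type attribute on an already-parsed script does not execute it—you replace the inert node with a live clone:

    ```javascript
    // Pure selection: which inert descriptors may be activated under this
    // consent state. In the browser you would build `scripts` from
    // document.querySelectorAll('script[type="text/plain"]') plus a
    // data-consent-category attribute (an illustrative convention).
    function scriptsToActivate(scripts, consent) {
      return scripts.filter(
        (s) => s.type === "text/plain" && consent[s.category] === true
      );
    }

    // Browser-only: flipping `type` in place does not run the script, so
    // clone the element with an executable type and swap it in.
    function activate(el) {
      const live = document.createElement("script");
      for (const { name, value } of Array.from(el.attributes)) {
        if (name !== "type") live.setAttribute(name, value);
      }
      live.type = "text/javascript";
      el.replaceWith(live);
    }
    ```
    
    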

    MutationObserver as defence in depth

    A MutationObserver can alert you when new <script> nodes appear. It is not a guarantee: execution order, parser-inserted scripts, and third-party strategies vary. Use observers to catch regressions during development; treat the runtime audit as the authority for “does this URL still leak pre-consent in our scanner profile?”

    Observer flow (simplified): document parses → your bootstrap runs early → third parties try to append <script> → observer rewrites or blocks according to consent state → runtime audit confirms whether anything still slipped through (including prefetch and non-script injectors the observer never saw). Spacing in the figure below uses ASCII | and v so the columns line up in any monospace font.

    [HTML parse]
         |
         v
    [Your inline gate: default deny] -----------------------+
         |                                                   |
         v                                                   |
    [DOM ready / GTM inject? / marketing partial?]           |
         |                                                   |
         v                                                   v
    [MutationObserver: new <script> nodes]                   |
         |                                                   |
         v                                                   v
    [Rewrite / defer until consent]          [SecureSpells: runtime audit]
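    The observer stage of that flow can be sketched as follows. The host list is illustrative, not a complete tracker inventory, and—per the caveat above—the callback may fire after a fast script has already begun fetching, which is exactly why the runtime audit stays the authority:

    ```javascript
    // Illustrative blocklist — maintain your own from audit findings.
    const BLOCKED_HOSTS = [
      "googletagmanager.com",
      "connect.facebook.net",
      "snap.licdn.com",
    ];

    // Pure matcher: exact host or subdomain of a blocked host.
    // Relative URLs resolve against a dummy base and therefore never match.
    function isBlockedSrc(src, blockedHosts) {
      let host;
      try {
        host = new URL(src, "https://example.invalid").hostname;
      } catch {
        return false;
      }
      return blockedHosts.some((h) => host === h || host.endsWith("." + h));
    }

    // Defence in depth, not a guarantee: neutralise late-injected scripts.
    function installScriptObserver(consentState) {
      if (typeof MutationObserver === "undefined") return null; // non-browser env
      const observer = new MutationObserver((mutations) => {
        for (const m of mutations) {
          for (const node of m.addedNodes) {
            if (
              node.tagName === "SCRIPT" && node.src &&
              isBlockedSrc(node.src, BLOCKED_HOSTS) && !consentState.marketing
            ) {
              node.type = "text/plain";           // inert until consent
              node.dataset.delayedSrc = node.src; // remember the original URL
              node.removeAttribute("src");
            }
          }
        }
      });
      observer.observe(document.documentElement, { childList: true, subtree: true });
      return observer;
    }
    ```
    
    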

    The non-negotiables of a custom stack

    If you treat custom consent as a product feature (versioned, tested, monitored) rather than a one-off UI tweak, it can be more inspectable than a black-box CMP—provided you close the gaps below. Skip any of them and you mostly built theatre.

    A. Proof of consent (the Art. 7 gap)

    Hosted CMPs typically store consent logs for you. If you go DIY, you must be able to demonstrate what was accepted, when, and under which policy text (GDPR Art. 7(1); see EUR-Lex). A flag in localStorage alone is a weak audit trail.

    Client-only is not enough for a serious posture: ship a small server endpoint that writes append-only rows (timestamp, policy/banner version, category booleans, pseudonymous session or authenticated user id). What you log is itself personal data in many setups—retention, security, and lawful basis need a real review, not a blog comment.
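    A sketch of the client half of that trail; the /api/consent route, field names, and category set are assumptions about your backend, not a standard:

    ```javascript
    // Build the append-only consent record the server will store.
    // `now` is injectable so the record is deterministic in tests.
    function buildConsentRecord(categories, policyVersion, sessionId, now) {
      return {
        timestamp: (now || new Date()).toISOString(),
        policyVersion: policyVersion,
        categories: {
          performance: categories.performance === true,
          marketing: categories.marketing === true,
        },
        sessionId: sessionId, // pseudonymous UUID or authenticated user id
      };
    }

    // Browser-only. Surface failures: a lost receipt should block
    // activation of non-essential tags, not be silently ignored.
    function postConsentRecord(record) {
      return fetch("/api/consent", {
        method: "POST",
        credentials: "same-origin",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record),
      }).then((res) => {
        if (res.status !== 204) throw new Error("consent receipt not stored");
      });
    }
    ```
    
    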

    B. Withdrawal as easy as acceptance (UX)

    GDPR expects that withdrawing consent is as easy as giving it. Your UI should include a persistent control (footer link, small “Privacy / cookies” chip, or equivalent) that re-opens the same panel after the first decision—not a one-time modal users cannot find again.

    C. Granularity that maps to code

    “Accept all” as the only meaningful path is a regulatory red flag. Your toggles (analytics, marketing, preferences, or whatever you actually implement) must each map to different load branches in code. If a toggle does not change network behaviour, it is decoration.
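    One way to keep that mapping honest is to register each category with its loader in a single table, so a toggle without a loader is visibly decoration. A sketch, with hypothetical loader names:

    ```javascript
    // Each toggle maps to exactly one loader branch; the strings stand in
    // for real bundle/pixel bootstraps in your codebase.
    const LOADERS = {
      analytics: () => "load analytics bundle",
      marketing: () => "load marketing pixels",
      preferences: () => "load preference widgets",
    };

    // Run only the branches the user enabled, and return which categories
    // actually fired — so tests (and audits) can compare intent with behaviour.
    function runConsentBranches(consent, loaders) {
      const ran = [];
      for (const [category, load] of Object.entries(loaders)) {
        if (consent[category] === true) {
          load();
          ran.push(category);
        }
      }
      return ran;
    }
    ```
    
    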

    D. Geo, TCF, and “who sees the banner?”

    Geo: If you do not want to integrate a geo service, the conservative engineering choice is often to show the same banner to everyone and rely on categories—annoying for some non-EU visitors, but fewer “dark regions” where EU users slip through without a choice. If you deploy on an edge platform, country headers your host documents (e.g. x-vercel-ip-country on Vercel) can drive routing—never invent IP logic without reading that vendor’s privacy contract.
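    A sketch of the header-driven branch, assuming an edge platform that lowercases header names and sets a documented country header such as x-vercel-ip-country; the country set is deliberately incomplete and illustrative, not an authoritative EEA list:

    ```javascript
    // Illustrative only — extend with the full EEA list (and decide on the
    // UK separately) before using anything like this for real routing.
    const EEA_PLUS = new Set(["DE", "FR", "IE", "NL", "SE", "GB"]);

    // Conservative default: when geo is unknown, show the banner.
    function needsBanner(headers) {
      const country = (headers["x-vercel-ip-country"] || "").toUpperCase();
      if (!country) return true;
      return EEA_PLUS.has(country);
    }
    ```
    
    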

    IAB TCF: Programmatic stacks that require the IAB Transparency and Consent Framework expect a TC string and related APIs. Major CMPs implement that machinery; a hand-rolled banner usually does not. If TCF-driven revenue matters, either keep a TCF-capable CMP for the ads surface or budget serious engineering—not a weekend prompt.

    E. Vendor discovery and classification (the “cookie list”)

    Commercial CMPs automatically scan and categorise new cookies into your public policy. If you remove the CMP, you need a vendor registry. A custom gate does not know the legal difference between an “essential” security token and a “marketing” pixel.

    The SecureSpells bridge: Scheduled runtime audits replace the CMP crawler as the engineering loop: full audits discover new cookies, trace script lifecycle in the run, and surface new or changed vendors so nothing hits production quietly. That yields a technical mapping (script → observed behaviour → evidence your team can label) so legal can update the policy without flying blind—category labels remain a legal decision, not something the scanner invents. For how to compare tool classes before you standardise on a pipeline, see best cookie audit tools (2026).

    F. Cross-border data transfers (GDPR Chapter V)

    A perfect technical gate does not solve data transfer law. If your custom banner waits for consent before loading a US-hosted analytics tool, execution can be correct while the transfer still needs a documented mechanism (for example Standard Contractual Clauses, adequacy, or another approved safeguard—your counsel picks the instrument).

    The SecureSpells bridge: You cannot manage what you do not observe. Audits surface where requests terminate (hostnames and, where the engine exposes it, geographic or regional signals for endpoints). Use that to connect front-end behaviour to your privacy team’s transfer risk assessments—evidence for humans, not a substitute for the legal paperwork.

    Custom vs hosted CMP (honest sketch)

    | Dimension | Custom (first-party gate + runtime audit) | Cookiebot / OneTrust-style CMP |
    | --- | --- | --- |
    | Critical-path JS | Usually minimal if you keep the module small | Often a larger third-party bootstrap |
    | Control of load order | Full (your repo) | Limited to what the vendor exposes |
    | Consent log storage | You build and operate it | Vendor-hosted (check contract & exit) |
    | Vendor & cookie discovery | Discovered via scheduled SecureSpells audits | Often bundled with the CMP crawler |
    | Cross-border transfer alerts | SecureSpells endpoint geolocation (where surfaced) | Vendor database / manual mapping |
    | TCF / programmatic depth | You must implement or hybridise | Typically handled for supported stacks |
    | Setup effort | High (UI + gate + API + scans) | Lower for standard sites |

    For product-level sketches (not legal advice), see SecureSpells vs Cookiebot and SecureSpells vs OneTrust.


    Evidence-driven refactor: read the finding, then change code

    BLUF: Map each runtime finding to one concrete change (load order, injector gate, consent branch). Fix several unrelated symptoms at once and you will not know what worked.

    When SecureSpells reports an issue, the card typically includes severity, title, description, recommendation, and often technical details—exact strings and UI labels depend on engine version; always read what your run actually returned. If the narrative references a class of third-party request, confirm in DevTools that the same class appears under the same consent state you used in the scan—then adjust gating.

    | If the evidence suggests… | Aim your change at… |
    | --- | --- |
    | Analytics or ads domains on first load | Bootstrap of the tag hub or early tag registration |
    | Script in DOM before interaction | Gate running too late relative to injection |
    | Different behaviour on repeat visit | Cached consent, service worker, or returning-user fast path |

    After each deploy to staging, re-scan the same URL. When findings you care about sit in the passed bucket, tag the commit or release: that is your reproducible baseline.


    LLM prompts and raw findings: make the model read evidence, not vibes

    Fast iteration with an assistant is useful only if the ground truth is in the context window. After a SecureSpells run, open a finding card → Technical Details (raw JSON / structured payload). On the public demo report, long arrays are truncated the same way as on other public scans so the page stays readable; paid tiers can expose the full list. Copy the snippet your UI actually shows—then paste it into your editor chat with explicit constraints.

    Paste bundle (copy with every prompt):

    URL: [staging or production URL]
    
    Consent state: [e.g. first visit, no click yet / Reject all / Accept analytics only]
    
    Environment: [incognito yes/no, geography, ad blocker on/off]
    
    Raw Technical Details: [paste JSON from SecureSpells finding card; trim keys if context limit is tight]

    Example prompt — gate LinkedIn / analytics cookies

    You are editing our site’s consent layer.
    
    Below is a pre-consent Technical Details excerpt from SecureSpells (EU profile; reject not yet applied).
    
    Do not suggest adding more trackers.
    
    Propose the smallest change to our loader order or type="text/plain" pipeline so _li_ss and related tracking cookies are not set until analytics consent is true.
    
    If information is missing, ask one clarifying question instead of guessing.

    Example prompt — tag manager bootstrap

    Here is SecureSpells raw JSON for a FAIL on first paint. Our stack uses Google Tag Manager.
    
    Give a step-by-step change list:
    
    (1) What to remove or defer from index.html.
    
    (2) What to wrap in a consent callback.
    
    (3) What to verify in DevTools Network after the change.
    
    Assume we ship on staging first.

    Example prompt — diff-only

    Treat the JSON as a failing test.
    
    Output a unified-diff style description of edits to our existing consent-bootstrap.js (pseudocode OK) so thirdPartyTrackingCount goes to zero pre-consent.
    
    Do not add new third-party scripts.

    Representative Technical Details excerpt (demo-style truncation)

    Exact keys and nesting depend on engine version and check type; always copy from your report. The fragment below illustrates how truncated lists appear in public views (including the interactive demo):

    {
      "status": "FAIL",
      "cookies": {
        "tracking": [
          "_li_ss",
          "_pctx",
          "_pcid",
          "...and 8 more items hidden. Purchase Full Audit to unlock."
        ],
        "essential": ["sd-session-id"],
        "consent": [
          "_ketch_consent_v1_",
          "_swb_consent_",
          "usprivacy",
          "...and 5 more items hidden. Purchase Full Audit to unlock."
        ],
        "__truncated_keys__": "...and 3 more keys hidden. Purchase Full Audit to unlock."
      },
      "thirdPartyTrackingCount": 1
    }

    Use this block to anchor the model: names you must not set early (tracking bucket), consent storage keys you may still need for the CMP itself (consent bucket—treat separately from marketing firing), and the aggregate signal (thirdPartyTrackingCount). Ask the assistant to explain which script likely wrote each named cookie, then map that to your tag inventory.
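    A small helper for that triage step; the key names mirror the excerpt above, but your engine version may differ, so treat the shape as an assumption:

    ```javascript
    // Extract the tracking-bucket cookie names you must not set early,
    // dropping truncation markers that public reports append
    // ("...and N more items hidden...").
    function trackingCookiesToGate(finding) {
      const tracking = (finding.cookies && finding.cookies.tracking) || [];
      return tracking.filter((name) => !name.startsWith("...and "));
    }
    ```
    
    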

    Guardrails for any generated code

    • Reject suggestions that load marketing pixels “for debugging” before consent.
    • Require the assistant to name load order (inline bootstrap vs deferred bundle vs tag manager).
    • Re-run SecureSpells on staging after applying the patch; paste the new JSON into the same thread so the model sees the delta.

    The default-deny master prompt (copy into Cursor / ChatGPT)

    Use this when you want a single aggressive spec the model must satisfy—then trim anything that does not match your stack. Safety note: rewriting real src URLs into data: placeholders can break restoration, prefetch semantics, and CSP; many teams prefer type="text/plain" + data-* attributes that store the original URL until consent. Treat any “strip src to block prefetch” idea as optional hard mode, not a default.

    Act as a senior web privacy engineer. Output a single vanilla JS module (no npm dependencies) that:
    
    1. Runs immediately at the top of <head> (inline or first synchronous file) before marketing tags.
    
    2. Installs a MutationObserver on document.documentElement for added <script> nodes.
    
    3. If src matches analytics/ads hostnames you list (e.g. google-analytics, googletagmanager, connect.facebook, static.hotjar.com, snap.licdn.com), neutralise execution until marketing consent is true.
    
       Prefer type="text/plain" and moving the original URL to data-delayed-src.
    
       Only if the user insists, discuss stripping src to block prefetch and document trade-offs.
    
    4. Renders a minimal banner UI with separate toggles for Performance and Marketing; default is deny for both.
    
    5. On Accept, activates delayed scripts in a deterministic order and POSTs a JSON consent record to /api/consent with:
    
       - timestamp
       - policy version string
       - boolean map { performance, marketing }
       - pseudonymous session id you generate client-side
    
    6. Includes comments telling the team to paste the latest SecureSpells Technical Details JSON into the PR when behaviour changes.
    
    Refuse to add trackers that fire before consent.
    
    Ask one clarifying question if the host list is unknown.

    Prove it on your URL: Run a free runtime cookie audit on staging or production when your gate ships—especially after GTM or CMP changes.

    Also available: free cookie audit tool for a structured pass on the same page.


    Consent Mode, CMPs, and custom code can coexist

    If you use Google Consent Mode v2, read what Consent Mode v2 means for GDPR so you know what the signals do and do not prove; you still need to validate that tags and requests match your policy—signals and behaviour can diverge. Use the Google Consent Mode v2 test guide as a checklist alongside runtime scans, not instead of them.

    If you use a commercial CMP, custom code often still matters at the edges: single-page app navigations, lazy-loaded marketing components, and partner pixels added outside the CMP template. Runtime audits catch those integration seams.


    Honest boundaries (keeps trust high)

    A runtime auditor answers: “What did this browser run do on this URL under this profile?” It gives technical enforcement and an evidence loop. It is not a legal operating system on its own.

    Consent ≠ automatically valid lawful basis for every processing activity. A custom gate plus SecureSpells still does not replace:

    • Legal classification: Whether a specific tracker is “essential” vs “non-essential” (a legal judgment, not something the gate decides).
    • Data subject rights (DSRs): Your backend must still handle access, erasure, and other requests your regime requires (for example GDPR Articles 15 and 17 where they apply).
    • Records of processing activities (RoPA): Internal documentation under GDPR Article 30 where it applies.
    • Data protection impact assessments (DPIAs): Where high-risk processing triggers them.
    • Transfer mechanisms: SCCs and other safeguards for third-country transfers—even when the technical gate behaves perfectly.
    • Accessibility and UX for the banner itself: See cookie banner compliance as a separate lens from “did scripts fire early?”

    Say that once in security and privacy reviews; technical and legal stakeholders both respect the boundary. You ship the undeniable browser trace; they own lawful basis, contracts, and filings.


    Methodology and sources

    Consent records when the CMP database is yours

    Under GDPR Art. 7(1), where consent is your lawful basis, you must be able to show that valid consent was obtained—see the consolidated GDPR on EUR-Lex (Article 7). A DIY banner still needs an evidence trail you control: not a screenshot of the UI, but durable records (who toggled what, which policy version applied, when).

    Pattern (engineering sketch, not legal sign-off):

    | Field | Role |
    | --- | --- |
    | Timestamp | When the choice was stored |
    | Policy / banner version | Which copy and categories the user saw |
    | Category map | e.g. { performance: false, marketing: true } |
    | Pseudonymous id | Random session UUID or authenticated subject id from your auth |
    | User agent (optional) | Debugging only; define retention and minimisation with counsel |

    Avoid treating “hashed IP” as a magic compliance trick: an IP address can still be personal data, and processing it needs a lawful basis and a retention policy.

    Building the consent “receipt” API (LLM prompt)

    Use this to scaffold a server-owned trail (adapt paths to Laravel, Rails, etc.):

    Create a POST /api/consent route (example: Next.js App Router) that:
    
    - Validates JSON body: domain, policyVersion, categories (booleans for performance, marketing), sessionId (UUID from client), optional userAgent.
    
    - Inserts an append-only row into Postgres with server created_at.
    
    - Does not store full IP by default. If you must store truncated or hashed values for abuse tracing, add a TODO for legal review of lawful basis and retention.
    
    - Returns 204 on success.
    
    - Provides a vanilla fetch from the banner on Accept / Save: JSON body, credentials: 'same-origin', and error handling that does not fire non-essential tags on failure.

    That pattern is the DIY substitute for “Cookiebot stored the receipt”—you now host the database, backups, and access reviews.

    Store append-only rows in your infrastructure (Postgres, Supabase, Vercel KV, etc.). That removes third-party lock-in for records, while a runtime auditor still answers “did the site behave accordingly?” Align retention, security, and access controls with your DPA and privacy notice—this article does not replace that work.

    Last updated: 2026-04-22.


    Runtime audits as the linter for privacy code

    In a traditional workflow, “green” meant the marketing portal showed a checkmark. In an evidence-first workflow, a failed SecureSpells run is closer to a compiler error for privacy behaviour: something on the URL still disagrees with your policy under the tested consent state. A clean run is strong technical evidence for that profile—not a legal guarantee for every processing activity, but materially stronger than guessing from a hosted CMP dashboard alone.

    A custom gate is not a set-and-forget asset. Marketing adds pixels, vendors change injection patterns, and your MutationObserver allowlist can go stale. That is why continuous or scheduled runtime audits belong in the same engineering system as your consent code—same way you would not ship payment logic without tests. Wire scans into staging on every material PR and production on a cadence you can defend; when a new tag appears, fix the gate, then let the scan prove the regression is gone.

