SDUI Analytics: Measuring What Matters

With server-driven UI, you control every pixel from the server. That means you can instrument analytics at a depth traditional mobile apps can only dream of. Here's exactly what to measure, and how.

1. Why SDUI Changes the Analytics Game

In traditional mobile development, analytics is bolted on. You ship a build, sprinkle trackEvent() calls throughout your code, and hope someone remembers to instrument the new feature before it goes live. With server-driven UI, the dynamic is fundamentally different.

Every UI the user sees was decided by the server. That means the server already knows exactly what it sent: which components, in what order, with what configuration, for which user. You don't need to hope a client-side event fires. The server has a complete record of every rendering decision it made.

This creates a natural analytics layer that doesn't exist in traditional apps.

Yet most teams underinvest in measurement after adopting SDUI. They build the rendering pipeline, ship server-driven screens, and then… measure the same surface-level metrics they always did. That's leaving the most powerful analytics capability of SDUI on the table.

The Core Insight

SDUI doesn't just change how you build UI; it changes what you can measure. The server's rendering decisions are a first-class data source. Treat them that way.

2. Key Metrics for SDUI Teams

If you're running an SDUI system, these are the metrics that separate teams who are guessing from teams who know. We break them into three categories: performance, reliability, and engagement.

Performance Metrics

- Render Time (p50): 142 ms (server response + client render)
- Cache Hit Rate: 87% (layout cache effectiveness)

Reliability Metrics

- Fallback Rate: 0.3% (unknown component renders)

Engagement Metrics

- Interaction Rate: 34% (users acting on components)

For a deeper dive into how these metrics connect to SDUI performance optimization, see our dedicated guide.

3. Automatic Exposure Tracking

This is the capability that makes analytics teams fall in love with SDUI, and it's a core differentiator for Pyramid.

In traditional experimentation, tracking exposures is a pain. You define an experiment, assign variants, and then you need to fire an exposure event when the user actually sees the variant. Miss the event? Your experiment data is polluted. Fire it too early? Same problem.

With SDUI, every component render is a tracked exposure. The server decides to show Variant B of a hero banner to User #4821. That decision is logged server-side the instant it's made. When the client renders it and confirms visibility, you have a guaranteed, deduplicated exposure record.

Server Decision Log
{
  "user_id": "u_4821",
  "screen": "home",
  "timestamp": "2026-03-31T14:22:01Z",
  "components": [
    {
      "id": "hero_banner",
      "experiment": "homepage_hero_v2",
      "variant": "B",
      "position": 0,
      "exposure_logged": true
    },
    {
      "id": "product_carousel",
      "experiment": null,
      "variant": "default",
      "position": 1
    }
  ]
}

No extra instrumentation. No "did someone remember to call trackExposure()?" The rendering pipeline is the exposure tracking pipeline.

This matters for two reasons:

  1. Zero instrumentation overhead for new experiments. When you add a new A/B test, exposures are tracked automatically. Your experimentation velocity is no longer bottlenecked by analytics engineering.
  2. Guaranteed accuracy. You can't forget to track an exposure, and you can't accidentally track one before the user sees it (visibility confirmation from the client closes the loop).

Teams using SDUI for growth engineering often cite automatic exposure tracking as the single biggest productivity gain.

4. Building an SDUI Analytics Pipeline

A production-grade SDUI analytics pipeline has three layers. Here's the architecture that works at scale:

Layer 1: Client-Side Event Collection

The client captures what the server can't observe directly: actual user behavior.

Client SDK (Kotlin)
class SDUIAnalyticsObserver(
    private val analytics: AnalyticsClient
) : ComponentLifecycleObserver {

    override fun onComponentVisible(
        component: SDUIComponent,
        metadata: RenderMetadata
    ) {
        analytics.track("sdui.component.visible", mapOf(
            "component_id" to component.id,
            "component_type" to component.type,
            "screen" to metadata.screen,
            "position" to metadata.position,
            "render_time_ms" to metadata.renderDuration,
            "experiment" to component.experiment,
            "variant" to component.variant,
            "layout_version" to metadata.layoutVersion
        ))
    }

    override fun onComponentInteraction(
        component: SDUIComponent,
        action: String
    ) {
        analytics.track("sdui.component.interaction", mapOf(
            "component_id" to component.id,
            "component_type" to component.type,
            "action" to action
        ))
    }
}

Layer 2: Server-Side Decision Logging

Every layout response the server generates is logged with full context: what was sent, why it was sent, and to whom. This includes experiment assignments, targeting rules that matched, and any personalization signals used.

Server Decision Log (Python)
def log_layout_decision(user, screen, layout):
    decision_event = {
        "event": "sdui.layout.served",
        "user_id": user.id,
        "screen": screen,
        "layout_hash": layout.content_hash(),
        "component_count": len(layout.components),
        "experiments": [
            {
                "experiment_id": c.experiment_id,
                "variant": c.variant,
                "component_id": c.id
            }
            for c in layout.components
            if c.experiment_id
        ],
        "targeting_signals": user.targeting_context(),
        "cache_status": layout.cache_status,
        "compute_time_ms": layout.compute_duration_ms
    }
    event_bus.publish(decision_event)

Layer 3: Aggregation & Dashboards

Join client events with server decisions. The layout_version or layout_hash is your join key: it connects "here's what the server sent" with "here's what the user did."

Your aggregation layer should compute per-component visibility rates, interaction rates, and render-time percentiles, all joined against the server's decision log.
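As a sketch of the join itself, the snippet below groups client events under the server decision that produced them, keyed on layout_hash. The event shapes and field names here are illustrative assumptions, not a fixed schema:

```python
# Illustrative join of server decision logs with client events on layout_hash.
# Field names are assumptions, not a fixed schema.
def join_on_layout_hash(server_decisions, client_events):
    """Group each client event under the server decision that produced it."""
    by_hash = {
        d["layout_hash"]: {"decision": d, "client_events": []}
        for d in server_decisions
    }
    for event in client_events:
        record = by_hash.get(event["layout_hash"])
        if record is not None:  # drop events with no matching decision
            record["client_events"].append(event)
    return by_hash

decisions = [{"layout_hash": "abc123", "screen": "home", "component_count": 4}]
events = [
    {"layout_hash": "abc123", "event": "sdui.component.visible"},
    {"layout_hash": "abc123", "event": "sdui.component.interaction"},
]
joined = join_on_layout_hash(decisions, events)
print(len(joined["abc123"]["client_events"]))  # 2
```

In production this join runs in your warehouse, but the shape is the same: decisions on one side, behavior on the other, layout_hash in the middle.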

5. Component-Level Analytics

This is where SDUI analytics gets genuinely powerful. Because every component is defined server-side with a stable identifier, you can track engagement per component across your entire user base.

Think about what this enables: you can compare reach, interaction rate, and render performance for every component type across your entire user base, in a single query.

Component Performance Query (SQL)
SELECT
    component_type,
    COUNT(DISTINCT CASE WHEN event = 'visible' THEN user_id END)
        AS users_seen,
    COUNT(DISTINCT CASE WHEN event = 'interaction' THEN user_id END)
        AS users_interacted,
    ROUND(
        COUNT(DISTINCT CASE WHEN event = 'interaction' THEN user_id END)::numeric /
        NULLIF(COUNT(DISTINCT CASE WHEN event = 'visible' THEN user_id END), 0),
        3
    ) AS interaction_rate,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY render_time_ms)
        AS p50_render_ms
FROM sdui_events
WHERE screen = 'home'
  AND date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY component_type
ORDER BY users_seen DESC;

This query gives you a clear picture: which components are earning their place on the screen, and which need to be rethought. For product managers evaluating SDUI, component-level analytics provides the data to make layout decisions with confidence rather than instinct.

6. A/B Testing Metrics in SDUI

SDUI makes experimentation metrics dramatically cleaner. Here's why, and what to track.

Automatic Cohort Assignment

The server assigns users to experiment variants when it builds the layout. No client-side randomization, no SDK initialization races, no "user saw control before the experiment loaded" problems. Assignment is deterministic, server-controlled, and instant.
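A minimal sketch of what deterministic assignment can look like, using stable hashing; the experiment name and 50/50 split here are hypothetical:

```python
# Sketch of deterministic, server-side variant assignment via stable hashing.
# The experiment ID and two-way split are illustrative assumptions.
import hashlib

def assign_variant(user_id: str, experiment_id: str, variants=("A", "B")) -> str:
    """The same user + experiment always hashes to the same variant."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Assignment is stable across calls: no client-side randomization races.
assert assign_variant("u_4821", "homepage_hero_v2") == assign_variant("u_4821", "homepage_hero_v2")
```

Because the hash is a pure function of user and experiment, the server can recompute the assignment at layout-build time with no stored state.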

Guaranteed Exposure Tracking

As covered in Section 3, every render is a tracked exposure. For A/B testing specifically, this solves the intent-to-treat problem: you know exactly who was exposed to each variant, with no gaps.

Key Experimentation Metrics

For each active experiment, track exposure counts per variant, conversion rate per variant, a statistical significance indicator, and days running. These are the same figures the experiment overview panel in Section 8 surfaces.

Server-Side Metric Computation

Because the server knows the assignment, it can compute experiment metrics server-side, joining assignment logs with conversion events in your data warehouse. No reliance on client-side event delivery. Your experiment results are exactly as reliable as your backend data pipeline.
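As an in-memory stand-in for that warehouse join, the sketch below computes conversion rate per variant from assignment and conversion records; the field names are assumptions:

```python
# Sketch: per-variant conversion rates from assignment logs joined with
# conversion events. In practice this is a warehouse query; field names
# here are illustrative assumptions.
from collections import defaultdict

def conversion_by_variant(assignments, conversions):
    converted_users = {c["user_id"] for c in conversions}
    exposed = defaultdict(set)
    for a in assignments:
        exposed[a["variant"]].add(a["user_id"])
    return {
        variant: len(users & converted_users) / len(users)
        for variant, users in exposed.items()
    }

assignments = [
    {"user_id": "u_1", "variant": "A"},
    {"user_id": "u_2", "variant": "A"},
    {"user_id": "u_3", "variant": "B"},
]
conversions = [{"user_id": "u_2"}]
print(conversion_by_variant(assignments, conversions))  # {'A': 0.5, 'B': 0.0}
```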

7. Common Analytics Mistakes in SDUI

After working with dozens of teams adopting SDUI, these are the mistakes we see repeatedly:

❌ Not Tracking Fallbacks

When the client encounters an unknown component type and renders a fallback, that's an invisible failure. If you're not tracking fallback renders, you have no idea that 2% of your users are seeing a blank space where your new carousel should be. Track every fallback render with the unknown component type and the client version.
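One way to make the fallback itself emit the signal, sketched in Python for brevity (the registry and `track` helper are hypothetical stand-ins for a client SDK):

```python
# Sketch of fallback tracking in a component registry. Python stands in for
# the client SDK here; the registry and `track` helper are illustrative.
KNOWN_COMPONENTS = {"hero_banner", "product_carousel"}
tracked = []

def track(event, props):
    tracked.append((event, props))

def resolve_component(component_type: str, client_version: str) -> str:
    if component_type in KNOWN_COMPONENTS:
        return component_type
    # The fallback itself is the signal: log the unknown type + client version.
    track("sdui.component.fallback", {
        "unknown_type": component_type,
        "client_version": client_version,
    })
    return "fallback_placeholder"

resolve_component("fancy_new_carousel", "ios-4.2.0")
print(tracked[0][1]["unknown_type"])  # fancy_new_carousel
```

With this in place, a spike in fallback events immediately tells you which component type failed and which client versions are affected.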

❌ Ignoring Render Performance

You optimized your server response time to 50ms. Great. But the client takes 400ms to parse and render a complex layout. End-to-end, users feel 450ms, and you're only measuring half of it. Instrument client-side render time, not just server response time.

❌ Missing Client Events for Server Components

The server knows what it sent. But it doesn't know if the user actually saw it (maybe they navigated away), or how they interacted with it. Client-side visibility and interaction events are essential; server logs alone are not enough.

❌ Double-Counting Exposures

User opens screen → exposure logged. User backgrounds app, returns → same layout renders again → another exposure? If you're not deduplicating, your experiment data is inflated. Deduplicate exposures per user per session, keyed on layout version + experiment + variant.
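A minimal sketch of that dedup rule, keyed exactly as described (the key shape is an assumption):

```python
# Sketch of per-session exposure deduplication, keyed on user + session +
# layout version + experiment + variant. Key shape is an assumption.
def make_dedup_filter():
    seen = set()

    def should_log(user_id, session_id, layout_version, experiment, variant):
        key = (user_id, session_id, layout_version, experiment, variant)
        if key in seen:
            return False  # repeat render in the same session: skip
        seen.add(key)
        return True

    return should_log

should_log = make_dedup_filter()
print(should_log("u_4821", "s_1", "v3", "homepage_hero_v2", "B"))  # True
print(should_log("u_4821", "s_1", "v3", "homepage_hero_v2", "B"))  # False (re-render)
print(should_log("u_4821", "s_2", "v3", "homepage_hero_v2", "B"))  # True (new session)
```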

❌ Not Correlating Server and Client Events

Server logs say you sent Layout V3. Client logs show interaction events. But without a shared identifier linking them, you can't answer "did users who received Layout V3 interact more than those who got V2?" Include a layout_id or request_id in both server logs and client events.

8. The SDUI Analytics Dashboard

Here's what a well-designed SDUI analytics dashboard looks like. We recommend four panels:

[Dashboard mockup: "📊 SDUI Analytics - Home Screen" with Today/7d/30d range toggles; KPI cards for p50 render (142ms), cache hit (87.3%), fallback rate (0.3%), and interaction rate (34.1%), each with a trend delta; a render-time-by-component chart (Hero, Carousel, Banner, Grid, Search, Footer); and a 7-day component visibility heatmap.]

Panel 1: Real-Time Health

Top-level KPIs: p50/p90/p99 render time, error rate, fallback rate, cache hit rate. Use sparklines for the last 24 hours. Alert thresholds: fallback rate > 1%, error rate > 0.5%, p99 render time > 1s.
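Those thresholds are easy to encode as a simple check; the threshold values come from the text above, while the metrics dict shape is an assumption:

```python
# Sketch: evaluate the alert thresholds described above against current
# metrics. Threshold values are from the text; the dict shape is assumed.
THRESHOLDS = {
    "fallback_rate": 0.01,   # alert if > 1%
    "error_rate": 0.005,     # alert if > 0.5%
    "p99_render_ms": 1000,   # alert if > 1s
}

def check_alerts(metrics: dict) -> list:
    """Return the names of all metrics currently over their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

print(check_alerts({"fallback_rate": 0.003,
                    "error_rate": 0.002,
                    "p99_render_ms": 1240}))  # ['p99_render_ms']
```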

Panel 2: Component Performance Table

Sortable table of every component type with columns: render count, visibility rate, interaction rate, avg render time, error count. This is your at-a-glance health check for every component in your system.

Panel 3: Experiment Overview

Active experiments with exposure counts per variant, conversion rate per variant, statistical significance indicator, and days running. Link directly to detailed experiment analysis.

Panel 4: Schema Adoption

Stacked area chart showing what percentage of requests use each schema version over time. This tells you when old component types can be safely deprecated, which is critical for maintaining a clean SDUI architecture.
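The calculation behind such a chart is just a per-day share of requests by schema version; a minimal sketch with an assumed event shape:

```python
# Sketch of the schema-adoption calculation behind the stacked area chart:
# per-day share of requests by schema version. Event shape is an assumption.
from collections import Counter, defaultdict

def schema_share_by_day(requests):
    daily = defaultdict(Counter)
    for r in requests:
        daily[r["date"]][r["schema_version"]] += 1
    return {
        day: {version: count / sum(counts.values())
              for version, count in counts.items()}
        for day, counts in daily.items()
    }

requests = [
    {"date": "2026-03-30", "schema_version": "v2"},
    {"date": "2026-03-30", "schema_version": "v3"},
    {"date": "2026-03-30", "schema_version": "v3"},
    {"date": "2026-03-31", "schema_version": "v3"},
]
print(round(schema_share_by_day(requests)["2026-03-30"]["v3"], 2))  # 0.67
```

When a version's share flatlines near zero for long enough, the components it carried are candidates for deprecation.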

9. How Pyramid Approaches Analytics

Everything in this article reflects how we've built analytics into Pyramid from day one. It's not an afterthought; it's core infrastructure.

Built-In Exposure Tracking

Every component render through Pyramid automatically generates an exposure event. No configuration needed. If a component is part of an experiment, the exposure is tagged with the experiment ID and variant. It just works.

Component-Level Events

Pyramid's client SDKs ship with visibility observers and interaction tracking out of the box. Drop in the SDK and you immediately get component visibility events, interaction events, and per-component render timings, with no additional wiring.

Integration with Existing Analytics

Pyramid doesn't replace your analytics stack; it feeds into it. Native integrations with Amplitude, Mixpanel, Segment, and a generic webhook interface mean Pyramid events flow directly into whatever tools your team already uses.

Pyramid Analytics Config
pyramid:
  analytics:
    auto_exposure: true
    visibility_threshold: 0.5   # 50% of component visible
    dedup_window: "session"      # per-session dedup
    integrations:
      - type: amplitude
        api_key: "${AMPLITUDE_KEY}"
        events:
          - pyramid.component.visible
          - pyramid.component.interaction
      - type: webhook
        url: "https://your-pipeline.com/events"
        batch_size: 100
        flush_interval_ms: 5000

Pre-Built Dashboard Templates

Pyramid ships with Grafana and Looker dashboard templates that implement the four-panel layout described in Section 8. Import them, connect your data source, and you have production-grade SDUI analytics in minutes, not weeks.

Curious how this all fits together? Use the SDUI ROI Calculator to estimate the measurement impact for your specific team size and experiment velocity.

Stop guessing. Start measuring.

Pyramid gives you automatic exposure tracking, component-level analytics, and pre-built dashboards, all out of the box. No extra instrumentation required.

Join the Waitlist →
