1. Why SDUI Changes the Analytics Game
In traditional mobile development, analytics is bolted on. You ship a build, sprinkle trackEvent() calls throughout your code, and hope someone remembers to instrument the new feature before it goes live. With server-driven UI, the dynamic is fundamentally different.
Every UI the user sees was decided by the server. That means the server already knows exactly what it sent: which components, in what order, with what configuration, for which user. You don't need to hope a client-side event fires. The server has a complete record of every rendering decision it made.
This creates a natural analytics layer that doesn't exist in traditional apps:
- Every UI change is a logged event. When you update a layout for 10% of users, that decision exists in your server logs.
- No instrumentation gaps. New components are tracked the moment you add them to a layout, with no waiting for a client deploy.
- Server + client correlation. You can join "what I sent" with "what the user did" for a complete picture.
Yet most teams underinvest in measurement after adopting SDUI. They build the rendering pipeline, ship server-driven screens, and then measure the same surface-level metrics they always did. That's leaving the most powerful analytics capability of SDUI on the table.
SDUI doesn't just change how you build UI; it changes what you can measure. The server's rendering decisions are a first-class data source. Treat them that way.
2. Key Metrics for SDUI Teams
If you're running an SDUI system, these are the metrics that separate teams who are guessing from teams who know. We break them into three categories: performance, reliability, and engagement.
Performance Metrics
- Server response time – How long the SDUI endpoint takes to compute and return a layout. Break this down by screen and by complexity (number of components).
- Client render time – Time from receiving the JSON to pixels on screen. This reveals whether your client renderer is efficient.
- End-to-end render time – The metric users actually feel: server response + network + client render. Track p50, p90, and p99.
- Cache hit rate – What percentage of layout requests are served from cache vs. computed fresh? Low hit rates mean you're doing unnecessary work.
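The percentile tracking above is easy to compute once you have raw timing samples. A minimal sketch, assuming each sample is a `(server_ms, network_ms, client_ms)` triple (the tuple shape and function names are illustrative, not a fixed format):

```python
# Sketch: p50/p90/p99 end-to-end render time from raw timing samples.
# Uses nearest-rank percentiles; production systems typically use a
# streaming sketch (t-digest or similar) instead of sorting everything.

def percentile(sorted_values, p):
    """Nearest-rank percentile on a pre-sorted list."""
    if not sorted_values:
        raise ValueError("no samples")
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * len(sorted_values)) - 1))
    return sorted_values[k]

def end_to_end_percentiles(samples):
    # End-to-end = server response + network + client render.
    totals = sorted(s + n + c for (s, n, c) in samples)
    return {p: percentile(totals, p) for p in (50, 90, 99)}

samples = [(50, 30, 120), (60, 40, 200), (45, 25, 90), (80, 60, 400)]
stats = end_to_end_percentiles(samples)  # {50: 200, 90: 540, 99: 540}
```

With only four samples p90 and p99 collapse to the max, which is exactly why tail percentiles need large sample counts to be meaningful.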
Reliability Metrics
- Fallback rate – How often the client encounters a component type it doesn't recognize and renders a fallback. This spikes after server-side changes that outpace client adoption.
- Error rate – Failed layout fetches, malformed responses, rendering crashes. Track separately for network errors vs. parsing errors vs. render errors.
- Schema version adoption – What percentage of your active users are on each schema version? This tells you when it's safe to deprecate old component types.
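Schema version adoption falls straight out of your layout-request logs. A minimal sketch; the event field names (`user_id`, `schema_version`) are assumptions, not a fixed format:

```python
# Sketch: share of distinct active users on each schema version,
# computed from layout-request events.

def schema_adoption(events):
    users_by_version = {}
    for e in events:
        users_by_version.setdefault(e["schema_version"], set()).add(e["user_id"])
    total_users = len({e["user_id"] for e in events})
    return {v: len(users) / total_users
            for v, users in sorted(users_by_version.items())}

events = [
    {"user_id": "u1", "schema_version": "v3"},
    {"user_id": "u2", "schema_version": "v3"},
    {"user_id": "u3", "schema_version": "v2"},
    {"user_id": "u3", "schema_version": "v2"},  # repeat requests don't inflate the share
]
adoption = schema_adoption(events)  # {"v2": 1/3, "v3": 2/3}
```

Counting distinct users rather than raw requests matters here: one chatty client on an old version shouldn't delay a deprecation.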
Engagement Metrics
- Component visibility rate – What percentage of rendered components actually appear in the viewport? A component below the fold that nobody scrolls to is invisible.
- Component interaction rate – Of the components seen, how many are tapped, swiped, or otherwise engaged with?
- Layout completion rate – Do users scroll through the entire server-driven layout, or drop off midway?
For a deeper dive into how these metrics connect to SDUI performance optimization, see our dedicated guide.
3. Automatic Exposure Tracking
This is the capability that makes analytics teams fall in love with SDUI, and it's a core differentiator for Pyramid.
In traditional experimentation, tracking exposures is a pain. You define an experiment, assign variants, and then you need to fire an exposure event when the user actually sees the variant. Miss the event? Your experiment data is polluted. Fire it too early? Same problem.
With SDUI, every component render is a tracked exposure. The server decides to show Variant B of a hero banner to User #4821. That decision is logged server-side the instant it's made. When the client renders it and confirms visibility, you have a guaranteed, deduplicated exposure record.
Server Decision Log
{
  "user_id": "u_4821",
  "screen": "home",
  "timestamp": "2026-03-31T14:22:01Z",
  "components": [
    {
      "id": "hero_banner",
      "experiment": "homepage_hero_v2",
      "variant": "B",
      "position": 0,
      "exposure_logged": true
    },
    {
      "id": "product_carousel",
      "experiment": null,
      "variant": "default",
      "position": 1
    }
  ]
}
No extra instrumentation. No "did someone remember to call trackExposure()?" The rendering pipeline is the exposure tracking pipeline.
This matters for two reasons:
- Zero instrumentation overhead for new experiments. When you add a new A/B test, exposures are tracked automatically. Your experimentation velocity is no longer bottlenecked by analytics engineering.
- Guaranteed accuracy. You can't forget to track an exposure, and you can't accidentally track one before the user sees it (visibility confirmation from the client closes the loop).
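The "visibility confirmation closes the loop" idea can be sketched in a few lines. This is an illustrative model, not Pyramid's implementation; all class and method names are assumptions:

```python
# Sketch: an exposure is finalized only when a server-side decision
# is matched by a client-side visibility confirmation.

class ExposureLedger:
    def __init__(self):
        self._decisions = {}   # (user_id, experiment) -> variant served
        self._exposures = []   # confirmed exposures

    def record_decision(self, user_id, experiment, variant):
        """Server logged that it served this variant in a layout."""
        self._decisions[(user_id, experiment)] = variant

    def confirm_visible(self, user_id, experiment):
        """Client reported the component entered the viewport.
        Only decisions the server actually made can become exposures."""
        variant = self._decisions.get((user_id, experiment))
        if variant is not None and (user_id, experiment, variant) not in self._exposures:
            self._exposures.append((user_id, experiment, variant))
        return variant

ledger = ExposureLedger()
ledger.record_decision("u_4821", "homepage_hero_v2", "B")
seen = ledger.confirm_visible("u_4821", "homepage_hero_v2")   # "B": confirmed
ghost = ledger.confirm_visible("u_9999", "homepage_hero_v2")  # None: no decision, no exposure
```

Note the asymmetry: a decision without a confirmation is a "served but unseen" record, while a confirmation without a decision is discarded as noise.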
Teams using SDUI for growth engineering often cite automatic exposure tracking as the single biggest productivity gain.
4. Building an SDUI Analytics Pipeline
A production-grade SDUI analytics pipeline has three layers. Here's the architecture that works at scale:
Layer 1: Client-Side Event Collection
The client captures what the server can't observe directly: actual user behavior.
Client SDK (Kotlin)
class SDUIAnalyticsObserver(
    private val analytics: AnalyticsClient
) : ComponentLifecycleObserver {

    override fun onComponentVisible(
        component: SDUIComponent,
        metadata: RenderMetadata
    ) {
        analytics.track("sdui.component.visible", mapOf(
            "component_id" to component.id,
            "component_type" to component.type,
            "screen" to metadata.screen,
            "position" to metadata.position,
            "render_time_ms" to metadata.renderDuration,
            "experiment" to component.experiment,
            "variant" to component.variant,
            "layout_version" to metadata.layoutVersion
        ))
    }

    override fun onComponentInteraction(
        component: SDUIComponent,
        action: String
    ) {
        analytics.track("sdui.component.interaction", mapOf(
            "component_id" to component.id,
            "component_type" to component.type,
            "action" to action
        ))
    }
}
Layer 2: Server-Side Decision Logging
Every layout response the server generates is logged with full context: what was sent, why it was sent, and to whom. This includes experiment assignments, targeting rules that matched, and any personalization signals used.
Server Decision Log (Python)
def log_layout_decision(user, screen, layout):
    decision_event = {
        "event": "sdui.layout.served",
        "user_id": user.id,
        "screen": screen,
        "layout_hash": layout.content_hash(),
        "component_count": len(layout.components),
        "experiments": [
            {
                "experiment_id": c.experiment_id,
                "variant": c.variant,
                "component_id": c.id
            }
            for c in layout.components
            if c.experiment_id
        ],
        "targeting_signals": user.targeting_context(),
        "cache_status": layout.cache_status,
        "compute_time_ms": layout.compute_duration_ms
    }
    event_bus.publish(decision_event)
Layer 3: Aggregation & Dashboards
Join client events with server decisions. The layout_version or layout_hash is your join key: it connects "here's what the server sent" with "here's what the user did."
Your aggregation layer should compute:
- Visibility rate per component = visible events / served events
- Interaction rate per component = interaction events / visible events
- Conversion by experiment variant = goal events / exposure events
- Performance distributions = render time percentiles by screen, component type, and device class
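The first two rates above can be sketched with a simple aggregation over served layouts and client events. Field names (`components`, `event`, `component_id`) are assumptions for illustration; a real pipeline would do this join in the warehouse:

```python
# Sketch: per-component visibility and interaction rates from
# server decision logs ("served") and client events.

def component_rates(served, client_events):
    served_n, visible, interacted = {}, {}, {}
    # Denominator: how many times each component was served.
    for layout in served:
        for c in layout["components"]:
            served_n[c] = served_n.get(c, 0) + 1
    # Numerators: client-side visibility and interaction counts.
    for e in client_events:
        bucket = visible if e["event"] == "visible" else interacted
        bucket[e["component_id"]] = bucket.get(e["component_id"], 0) + 1
    return {
        c: {
            "visibility_rate": visible.get(c, 0) / n,
            "interaction_rate": interacted.get(c, 0) / max(visible.get(c, 0), 1),
        }
        for c, n in served_n.items()
    }

served = [{"layout_hash": "abc", "components": ["hero", "carousel"]}] * 4
client_events = [
    {"event": "visible", "component_id": "hero"},
    {"event": "visible", "component_id": "hero"},
    {"event": "interaction", "component_id": "hero"},
]
rates = component_rates(served, client_events)
# hero: visibility 0.5, interaction 0.5; carousel: visibility 0.0
```

Note the denominators differ deliberately: visibility is measured against what was served, interaction against what was actually seen.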
5. Component-Level Analytics
This is where SDUI analytics gets genuinely powerful. Because every component is defined server-side with a stable identifier, you can track engagement per component across your entire user base.
Think about what this enables:
- Which components get seen? Measure viewport visibility for every component in every layout. If your "Featured Products" carousel is below the fold for 60% of users, maybe it should be higher.
- Which components drive action? Track taps, swipes, scrolls, and completions per component type. A component with high visibility but low interaction is dead weight.
- Component-level conversion attribution. Connect component interactions to downstream conversions. "Users who interacted with the comparison table component convert at 2.3x the rate of those who didn't."
Component Performance Query (SQL)
SELECT
    component_type,
    COUNT(DISTINCT CASE WHEN event = 'visible' THEN user_id END)
        AS users_seen,
    COUNT(DISTINCT CASE WHEN event = 'interaction' THEN user_id END)
        AS users_interacted,
    ROUND(
        COUNT(DISTINCT CASE WHEN event = 'interaction' THEN user_id END)::numeric /
        NULLIF(COUNT(DISTINCT CASE WHEN event = 'visible' THEN user_id END), 0),
        3
    ) AS interaction_rate,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY render_time_ms)
        AS p50_render_ms
FROM sdui_events
WHERE screen = 'home'
  AND date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY component_type
ORDER BY users_seen DESC;
This query gives you a clear picture: which components are earning their place on the screen, and which need to be rethought. For product managers evaluating SDUI, component-level analytics provides the data to make layout decisions with confidence rather than instinct.
6. A/B Testing Metrics in SDUI
SDUI makes experimentation metrics dramatically cleaner. Here's why, and what to track.
Automatic Cohort Assignment
The server assigns users to experiment variants when it builds the layout. No client-side randomization, no SDK initialization races, no "user saw control before the experiment loaded" problems. Assignment is deterministic, server-controlled, and instant.
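Deterministic server-side assignment is usually implemented by hashing a stable user identifier with the experiment name. A minimal sketch of the idea (the salt format and variant list are illustrative):

```python
# Sketch: deterministic variant assignment. The same user always lands
# in the same bucket for a given experiment, with no client-side
# randomization and no stored assignment table required.
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

v1 = assign_variant("u_4821", "homepage_hero_v2")
v2 = assign_variant("u_4821", "homepage_hero_v2")  # identical: assignment is stable
```

Salting the hash with the experiment name keeps bucketing independent across experiments, so a user's variant in one test doesn't correlate with their variant in another.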
Guaranteed Exposure Tracking
As covered in Section 3, every render = tracked exposure. But for A/B testing specifically, this solves the intent-to-treat problem. You know exactly who was exposed to each variant, with no gaps.
Key Experimentation Metrics
- Sample Ratio Mismatch (SRM) – Is your 50/50 split actually 50/50? SRM indicates bugs in assignment logic. With SDUI, you can check this server-side before any client data arrives.
- Exposure-to-conversion latency – Time from seeing the variant to converting. SDUI gives you precise exposure timestamps.
- Variant render performance – Does Variant B render slower? If so, observed conversion differences might be performance-driven, not design-driven.
- Interaction depth by variant – Are users engaging differently with each variant? Track scroll depth, component interactions, and time-on-screen per variant.
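The SRM check above is a chi-square goodness-of-fit test against the expected split. A stdlib-only sketch for the two-variant case; the 3.84 cutoff is the 95% critical value for one degree of freedom:

```python
# Sketch: flag a suspicious sample ratio between two variants.
# For a 50/50 experiment, large deviations from equal counts
# produce a large chi-square statistic.

def srm_suspect(count_a, count_b, expected_ratio=0.5, critical=3.84):
    total = count_a + count_b
    exp_a = total * expected_ratio
    exp_b = total - exp_a
    chi2 = (count_a - exp_a) ** 2 / exp_a + (count_b - exp_b) ** 2 / exp_b
    return chi2 > critical

ok = srm_suspect(5000, 5050)    # chi2 ~ 0.25 -> False, split looks healthy
bad = srm_suspect(5000, 6000)   # chi2 ~ 90.9 -> True, investigate assignment
```

Because the server owns assignment, this check can run on decision logs alone, surfacing assignment bugs before a single client event lands.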
Server-Side Metric Computation
Because the server knows the assignment, it can compute experiment metrics server-side, joining assignment logs with conversion events in your data warehouse. No reliance on client-side event delivery. Your experiment results are exactly as reliable as your backend data pipeline.
7. Common Analytics Mistakes in SDUI
After working with dozens of teams adopting SDUI, these are the mistakes we see repeatedly:
✗ Not Tracking Fallbacks
When the client encounters an unknown component type and renders a fallback, that's an invisible failure. If you're not tracking fallback renders, you have no idea that 2% of your users are seeing a blank space where your new carousel should be. Track every fallback render with the unknown component type and the client version.
✗ Ignoring Render Performance
You optimized your server response time to 50ms. Great. But the client takes 400ms to parse and render a complex layout. End-to-end, users feel 450ms, and you're only measuring half of it. Instrument client-side render time, not just server response time.
✗ Missing Client Events for Server Components
The server knows what it sent. But it doesn't know if the user actually saw it (maybe they navigated away), or how they interacted with it. Client-side visibility and interaction events are essential; server logs alone are not enough.
✗ Double-Counting Exposures
User opens screen → exposure logged. User backgrounds the app and returns → the same layout renders again → another exposure? If you're not deduplicating, your experiment data is inflated. Deduplicate exposures per user per session, keyed on layout version + experiment + variant.
✗ Not Correlating Server and Client Events
Server logs say you sent Layout V3. Client logs show interaction events. But without a shared identifier linking them, you can't answer "did users who received Layout V3 interact more than those who got V2?" Include a layout_id or request_id in both server logs and client events.
8. The SDUI Analytics Dashboard
Here's what a well-designed SDUI analytics dashboard looks like. We recommend four panels:
Panel 1: Real-Time Health
Top-level KPIs: p50/p90/p99 render time, error rate, fallback rate, cache hit rate. Use sparklines for the last 24 hours. Alert thresholds: fallback rate > 1%, error rate > 0.5%, p99 render time > 1s.
Panel 2: Component Performance Table
Sortable table of every component type with columns: render count, visibility rate, interaction rate, avg render time, error count. This is your at-a-glance health check for every component in your system.
Panel 3: Experiment Overview
Active experiments with exposure counts per variant, conversion rate per variant, statistical significance indicator, and days running. Link directly to detailed experiment analysis.
Panel 4: Schema Adoption
Stacked area chart showing what percentage of requests use each schema version over time. This tells you when old component types can be safely deprecated, which is critical for maintaining a clean SDUI architecture.
9. How Pyramid Approaches Analytics
Everything in this article reflects how we've built analytics into Pyramid from day one. It's not an afterthought; it's core infrastructure.
Built-In Exposure Tracking
Every component render through Pyramid automatically generates an exposure event. No configuration needed. If a component is part of an experiment, the exposure is tagged with the experiment ID and variant. It just works.
Component-Level Events
Pyramid's client SDKs ship with visibility observers and interaction tracking out of the box. Drop in the SDK and you immediately get:
- pyramid.component.rendered – when a component is inflated
- pyramid.component.visible – when it enters the viewport (configurable threshold)
- pyramid.component.interaction – taps, swipes, and custom actions
- pyramid.component.error – render failures and fallbacks
Integration with Existing Analytics
Pyramid doesn't replace your analytics stack; it feeds into it. Native integrations with Amplitude, Mixpanel, Segment, and a generic webhook interface mean Pyramid events flow directly into whatever tools your team already uses.
Pyramid Analytics Config
pyramid:
  analytics:
    auto_exposure: true
    visibility_threshold: 0.5   # 50% of component visible
    dedup_window: "session"     # per-session dedup
    integrations:
      - type: amplitude
        api_key: "${AMPLITUDE_KEY}"
        events:
          - pyramid.component.visible
          - pyramid.component.interaction
      - type: webhook
        url: "https://your-pipeline.com/events"
        batch_size: 100
        flush_interval_ms: 5000
Pre-Built Dashboard Templates
Pyramid ships with Grafana and Looker dashboard templates that implement the four-panel layout described in Section 8. Import them, connect your data source, and you have production-grade SDUI analytics in minutes, not weeks.
Curious how this all fits together? Use the SDUI ROI Calculator to estimate the measurement impact for your specific team size and experiment velocity.
Stop guessing. Start measuring.
Pyramid gives you automatic exposure tracking, component-level analytics, and pre-built dashboards, all out of the box. No extra instrumentation required.
Join the Waitlist →
Further Reading
- What Is Server-Driven UI? β The fundamentals of SDUI and why it matters
- SDUI Architecture Patterns β Design patterns for production SDUI systems
- SDUI for Growth Engineering β Using SDUI to accelerate growth experimentation
- SDUI Testing Strategies β How to test server-driven UI effectively
- SDUI Performance Optimization β Making SDUI fast at scale