Real User Monitoring Implementation: From Setup to Analysis
Understanding how real users experience your website or application is critical for maintaining performance and improving user satisfaction. While our previous comparison of Odown vs. New Relic examined monitoring solutions more broadly, this guide focuses specifically on implementing Real User Monitoring (RUM) - a powerful approach that captures actual user interactions with your digital properties.
Unlike synthetic monitoring, which simulates user behavior, RUM collects data from real visitors as they navigate your site or application. This provides invaluable insights into performance as experienced by your actual users across different devices, browsers, and network conditions.
This technical guide walks through the complete process of implementing RUM, from initial setup to meaningful analysis of the collected data.
Configuring Effective Real User Monitoring
Successful RUM implementation starts with proper configuration. Let's explore the key components and setup considerations.
JavaScript Implementation Fundamentals
Most RUM solutions use JavaScript to collect user experience data. The implementation typically follows these steps:
1. Adding the Monitoring Script
The foundation of RUM is a JavaScript snippet that needs to be added to your website or application:
```html
<script>
// Create a performance observer to capture key metrics
const observer = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    // Send performance data to your collection endpoint
    sendToAnalytics(entry);
  });
});

// Observe specific performance metrics
observer.observe({
  entryTypes: ['navigation', 'resource', 'paint', 'layout-shift', 'largest-contentful-paint']
});

// Function to send data to your analytics endpoint
function sendToAnalytics(performanceEntry) {
  // Implement data transmission logic here
  fetch('https://your-analytics-endpoint.com/collect', {
    method: 'POST',
    body: JSON.stringify(performanceEntry),
    headers: {
      'Content-Type': 'application/json'
    }
  });
}
</script>
```
This simplified example demonstrates the basic approach. In practice, you'll likely use a RUM service provider's script that handles the complexity for you.
2. Script Placement Considerations
Where you place the monitoring script significantly impacts data collection:
- Head placement: Enables earlier monitoring but may impact initial page load performance
- End of body: Reduces impact on critical rendering path but misses some early metrics
- Async/defer attributes: Balance between performance impact and data collection
Most RUM providers recommend placing their script in the `<head>` with async or defer attributes:

```html
<!-- Other head elements -->
<script async src="https://your-rum-provider.com/tracking.js"></script>
```
3. Configuration Options
Modern RUM solutions offer various configuration options to tailor data collection:
```javascript
window.RUMConfig = {
  applicationId: 'YOUR_APP_ID',
  sampleRate: 100, // Percentage of sessions to monitor
  reportingEndpoint: 'https://analytics.example.com/collect',
  captureErrors: true,
  captureResourceTimings: true,
  capturePageLoadTimings: true,
  anonymizeIp: true,
  allowedDomains: ['yourdomain.com', 'cdn.yourdomain.com']
};
```
Key configuration parameters to consider include:
- Sampling rate: Percentage of user sessions to monitor (100% provides complete data but increases costs)
- Data collection scope: Which metrics and interactions to capture
- Error tracking: Whether to include JavaScript errors in monitoring
- Domain restrictions: Limiting monitoring to specific domains or subdomains
- Custom event tracking: Configuration for tracking business-specific interactions
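Sampling is typically implemented as a single random check per session against the configured rate, with the decision cached so a session is either fully monitored or not at all. A minimal sketch (`shouldSampleSession` is an illustrative helper, not part of any provider's API):

```javascript
// Decide once per session whether to collect RUM data.
// sampleRate is a percentage in [0, 100].
function shouldSampleSession(sampleRate) {
  return Math.random() * 100 < sampleRate;
}

// Typical usage: check once at startup and cache the decision
const config = { sampleRate: 100 };
const isSampled = shouldSampleSession(config.sampleRate);
if (isSampled) {
  // initialize observers and start reporting
}
```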
Core Web Vitals Measurement
Google's Core Web Vitals have become essential performance metrics, affecting both user experience and search rankings. Properly configuring RUM to capture these metrics is crucial.
Largest Contentful Paint (LCP)
LCP measures loading performance - specifically when the largest content element becomes visible.
```javascript
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP:', lastEntry.startTime);
  sendMetric('lcp', lastEntry.startTime);
}).observe({type: 'largest-contentful-paint', buffered: true});
```
First Input Delay (FID)
FID measures interactivity - the time from when a user first interacts with your page to when the browser can begin processing that interaction.
```javascript
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  entries.forEach((entry) => {
    console.log('FID:', entry.processingStart - entry.startTime);
    sendMetric('fid', entry.processingStart - entry.startTime);
  });
}).observe({type: 'first-input', buffered: true});
```
Cumulative Layout Shift (CLS)
CLS measures visual stability - quantifying how much layout elements shift during the page lifecycle.
```javascript
let cumulativeLayoutShift = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Only count layout shifts without recent user input
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
  }
  console.log('Current CLS:', cumulativeLayoutShift);
  sendMetric('cls', cumulativeLayoutShift);
}).observe({type: 'layout-shift', buffered: true});
```
For each Core Web Vital, it's important to:
- Set appropriate thresholds aligned with Google's recommendations (e.g., LCP under 2.5s)
- Segment data by device type, as mobile and desktop have different expectations
- Establish baseline measurements before making optimizations
- Track changes over time to identify improvements or regressions
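Google publishes a "good" and a "poor" boundary for each Core Web Vital (for example, LCP is good at or below 2.5 s and poor above 4 s). A small helper for bucketing collected values into those ratings might look like this (`classifyWebVital` is an illustrative name, not a standard API):

```javascript
// Google's published thresholds: [good-at-or-below, poor-above]
// LCP, FID, and INP are in milliseconds; CLS is unitless.
const WEB_VITAL_THRESHOLDS = {
  lcp: [2500, 4000],
  fid: [100, 300],
  cls: [0.1, 0.25],
  inp: [200, 500]
};

// Bucket a measured value into 'good' / 'needs-improvement' / 'poor'
function classifyWebVital(metric, value) {
  const [good, poor] = WEB_VITAL_THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```

Applied to the p75 of each segment (mobile vs. desktop), this gives the same rating Google's tooling reports.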
User Journey Performance Tracking
Beyond individual page metrics, understanding performance across complete user journeys provides deeper insights.
Configuring Multi-Page Tracking
To track user journeys across multiple pages:
- Maintain session continuity: Use consistent session IDs across page navigations
- Implement route change detection: For single-page applications, track virtual page views
- Connect related interactions: Associate clicks, form submissions, and page loads within a journey
```javascript
let previousPath = window.location.pathname;

// Using MutationObserver to detect DOM changes that might indicate route changes
const observer = new MutationObserver(() => {
  if (window.location.pathname !== previousPath) {
    // Route has changed
    console.log('Route changed from', previousPath, 'to', window.location.pathname);
    // Capture performance data for this new route
    captureVirtualPageView({
      previousPath: previousPath,
      currentPath: window.location.pathname,
      timestamp: performance.now()
    });
    previousPath = window.location.pathname;
  }
});

// Start observing changes to the body element
observer.observe(document.body, {
  childList: true,
  subtree: true
});
```
Custom Interaction Tracking
To gain deeper insights into user journeys, track custom interactions:
```javascript
function trackUserInteraction(element, interactionType) {
  element.addEventListener(interactionType, (event) => {
    const interactionData = {
      type: interactionType,
      element: event.target.tagName,
      id: event.target.id,
      class: event.target.className,
      path: getElementPath(event.target),
      timestamp: performance.now(),
      pageUrl: window.location.href
    };
    sendInteractionData(interactionData);
  });
}

// Apply to critical interactions
document.querySelectorAll('.critical-button').forEach(button => {
  trackUserInteraction(button, 'click');
});

document.querySelectorAll('form').forEach(form => {
  trackUserInteraction(form, 'submit');
});
```
User Journey Context
Enhance journey tracking by including contextual information:
- Attribution data: Referrer, campaign, and source information
- User segments: Anonymous cohort information (avoid personally identifiable information)
- Technical context: Device, browser, and connection information
- Business context: Customer status, subscription type, or other non-sensitive business data
This contextual information helps analyze performance patterns across different user segments and journeys.
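A context object along these lines can be attached to every journey event. The field names below are illustrative; the important property is that every value is coarse-grained and non-identifying:

```javascript
// Build a coarse, non-identifying context object for journey events.
// All inputs are assumed to be pre-bucketed (e.g. 'mobile', not a device ID).
function buildJourneyContext({ referrer, campaign, deviceClass, connectionType, customerTier }) {
  return {
    attribution: { referrer: referrer || 'direct', campaign: campaign || null },
    technical: { deviceClass, connectionType },
    business: { customerTier } // e.g. 'free' / 'pro', never an account ID
  };
}
```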
Geographic Performance Distribution Analysis
User location significantly impacts experienced performance. Configuring RUM to analyze geographic distribution provides critical insights.
Capturing Location Data
To effectively analyze geographic performance:
- IP geolocation: Usually handled by your RUM provider (server-side)
- Privacy-conscious approach: Store general location (city/region level) rather than precise coordinates
- Regulatory compliance: Ensure GDPR and similar regulations are followed in location data processing
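IP truncation is normally done server-side before storage. As an illustration, zeroing the final octet of an IPv4 address keeps the data useful for regional analysis while removing host-level precision:

```javascript
// Anonymize an IPv4 address by zeroing its last octet.
// IPv6 anonymization (truncating to a /48 or /64 prefix) is analogous.
function anonymizeIPv4(ip) {
  const octets = ip.split('.');
  if (octets.length !== 4) return null; // not a valid IPv4 shape
  octets[3] = '0';
  return octets.join('.');
}
```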
Connection Type Analysis
Network connectivity varies significantly by location. Capture connection information when available:
```javascript
function getConnectionInfo() {
  if (navigator.connection) {
    return {
      effectiveType: navigator.connection.effectiveType, // 4g, 3g, 2g, slow-2g
      rtt: navigator.connection.rtt, // Round-trip time
      downlink: navigator.connection.downlink, // Bandwidth estimate
      saveData: navigator.connection.saveData // Data-saving mode
    };
  }
  return null;
}

// Include this information in your RUM data
const performanceData = {
  // Standard metrics
  pageLoadTime: performance.now(),
  // ...
  // Connection information
  connection: getConnectionInfo(),
  // Location will typically be added server-side
};
```
CDN and Edge Performance
If your application uses CDN or edge network distribution, configure your RUM solution to:
- Track the serving location/node for each request
- Measure performance differences between geographic regions and edge locations
- Identify misconfigurations in your distribution network
This data helps optimize content delivery network configurations and edge deployments.
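If your CDN exposes its serving node through a `Server-Timing` response header, the edge location can be read from each resource's `serverTiming` entries. The metric name `cdn-pop` below is hypothetical; the actual name varies by provider:

```javascript
// Extract a CDN edge/pop identifier from Server-Timing entries.
// Each entry has the shape { name, duration, description }.
// 'cdn-pop' is an assumed metric name; check your CDN's documentation.
function getEdgeLocation(serverTimingEntries) {
  const popEntry = (serverTimingEntries || []).find(e => e.name === 'cdn-pop');
  return popEntry ? popEntry.description : null;
}
```

In the browser this would be fed from `performance.getEntriesByType('resource')[i].serverTiming`.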
Collecting Meaningful User Experience Metrics
Effective RUM implementation requires collecting the right metrics to provide actionable insights.
Technical Performance Metrics
A comprehensive RUM implementation should collect these technical performance metrics:
Navigation Timing Metrics
The Navigation Timing API provides detailed timings for the full page load process:
```javascript
function captureNavigationTiming() {
  const navigation = performance.getEntriesByType('navigation')[0];
  if (navigation) {
    return {
      // DNS lookup time
      dnsTime: navigation.domainLookupEnd - navigation.domainLookupStart,
      // TCP connection time
      connectionTime: navigation.connectEnd - navigation.connectStart,
      // TLS negotiation time (if applicable)
      tlsTime: navigation.secureConnectionStart > 0 ?
        navigation.connectEnd - navigation.secureConnectionStart : 0,
      // Time to first byte (TTFB)
      ttfb: navigation.responseStart - navigation.requestStart,
      // Document download time
      downloadTime: navigation.responseEnd - navigation.responseStart,
      // DOM processing time
      domProcessingTime: navigation.domInteractive - navigation.responseEnd,
      // DOM Content Loaded event (navigation entries start at time 0, so
      // startTime replaces the legacy navigationStart attribute)
      domContentLoadedTime: navigation.domContentLoadedEventEnd - navigation.startTime,
      // Load event
      loadEventTime: navigation.loadEventEnd - navigation.loadEventStart,
      // Total page load time
      totalPageLoadTime: navigation.loadEventEnd - navigation.startTime
    };
  }
  return null;
}
```
Resource Timing Metrics
Resource Timing data helps identify slow-loading assets:
```javascript
function captureResourceTiming() {
  const resources = performance.getEntriesByType('resource');
  return resources.map(resource => {
    return {
      name: resource.name,
      type: resource.initiatorType,
      startTime: resource.startTime,
      duration: resource.duration,
      transferSize: resource.transferSize,
      decodedBodySize: resource.decodedBodySize,
      protocol: resource.nextHopProtocol
    };
  });
}
```
Paint Timing Metrics
Paint timing captures when key visual elements appear:
```javascript
function capturePaintTiming() {
  const paintMetrics = performance.getEntriesByType('paint');
  const result = {};
  paintMetrics.forEach(metric => {
    if (metric.name === 'first-paint') {
      result.firstPaint = metric.startTime;
    } else if (metric.name === 'first-contentful-paint') {
      result.firstContentfulPaint = metric.startTime;
    }
  });
  return result;
}
```
Long Tasks
Identify tasks that may block the main thread and cause poor interactivity:
```javascript
function monitorLongTasks() {
  if ('PerformanceObserver' in window) {
    const observer = new PerformanceObserver((list) => {
      const entries = list.getEntries();
      entries.forEach((entry) => {
        console.log('Long task detected:', entry.duration, 'ms');
        sendMetric('longTask', {
          duration: entry.duration,
          startTime: entry.startTime,
          attribution: entry.attribution
        });
      });
    });
    observer.observe({ entryTypes: ['longtask'] });
  }
}
```
User Experience Metrics
Beyond technical performance, collect metrics that directly reflect user experience:
Interaction to Next Paint (INP)
INP measures overall responsiveness to user interactions and has since replaced FID as the official Core Web Vital for interactivity:
```javascript
let interactionEvents = [];

function captureINP() {
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      // Capture the interaction data
      interactionEvents.push({
        startTime: entry.startTime,
        processingStart: entry.processingStart,
        processingEnd: entry.processingEnd,
        duration: entry.duration,
        target: entry.target ? entry.target.nodeName : null,
        name: entry.name
      });
      // Report the 75th percentile after accumulating enough data
      if (interactionEvents.length >= 10) {
        const sortedDurations = interactionEvents
          .map(e => e.duration)
          .sort((a, b) => a - b);
        const p75Index = Math.floor(sortedDurations.length * 0.75);
        const inp = sortedDurations[p75Index];
        sendMetric('INP', inp);
      }
    }
  }).observe({
    type: 'event',
    buffered: true,
    durationThreshold: 16 // minimum duration to consider (in ms)
  });
}
```
Rage Clicks
Detect user frustration through rapid repeated clicks:
```javascript
function detectRageClicks() {
  let clickTimes = [];
  let clickLocations = [];
  document.addEventListener('click', (event) => {
    const now = performance.now();
    const position = {
      x: event.clientX,
      y: event.clientY
    };
    // Add the current click
    clickTimes.push(now);
    clickLocations.push(position);
    // Only keep clicks from the last 3 seconds
    const recentClickThreshold = now - 3000;
    clickTimes = clickTimes.filter(time => time >= recentClickThreshold);
    clickLocations = clickLocations.slice(-clickTimes.length);
    // If we have 3+ clicks in a small area in quick succession
    if (clickTimes.length >= 3) {
      const isCloseClicks = areClicksClose(clickLocations, 30); // 30px radius
      if (isCloseClicks) {
        console.log('Rage click detected!');
        sendMetric('rageClick', {
          count: clickTimes.length,
          target: event.target.tagName,
          path: getElementPath(event.target),
          pageUrl: window.location.href
        });
      }
    }
  });
}

// Helper function to check if clicks are close to each other
function areClicksClose(positions, threshold) {
  const xCoords = positions.map(p => p.x);
  const yCoords = positions.map(p => p.y);
  const xVariance = calculateVariance(xCoords);
  const yVariance = calculateVariance(yCoords);
  return Math.sqrt(xVariance) < threshold && Math.sqrt(yVariance) < threshold;
}

// Calculate variance helper
function calculateVariance(array) {
  const n = array.length;
  const mean = array.reduce((a, b) => a + b) / n;
  return array.map(x => Math.pow(x - mean, 2)).reduce((a, b) => a + b) / n;
}
```
Form Abandonment
Track form interactions and abandonment:
```javascript
function trackFormAbandonment() {
  document.querySelectorAll('form').forEach(form => {
    // Track form field interactions
    form.querySelectorAll('input, select, textarea').forEach(field => {
      field.addEventListener('change', () => {
        // Mark this form as having been interacted with
        form.dataset.interacted = 'true';
      });
    });
    // Track submission
    form.addEventListener('submit', () => {
      // Reset the interaction flag on successful submission
      delete form.dataset.interacted;
    });
    // Check for abandonment when user navigates away
    window.addEventListener('beforeunload', () => {
      if (form.dataset.interacted === 'true') {
        // User interacted with form but didn't submit
        sendMetric('formAbandonment', {
          formId: form.id,
          formAction: form.action,
          pageUrl: window.location.href
        });
      }
    });
  });
}
```
Scroll Depth
Measure how far users scroll on your pages:
```javascript
function trackScrollDepth() {
  let maxScrollPercentage = 0;
  let breakpoints = [25, 50, 75, 90, 100];
  let reachedBreakpoints = {};
  // Initialize breakpoints
  breakpoints.forEach(point => {
    reachedBreakpoints[point] = false;
  });
  // Listen for scroll events
  window.addEventListener('scroll', () => {
    const scrollHeight = document.documentElement.scrollHeight - window.innerHeight;
    if (scrollHeight <= 0) return; // page fits in the viewport; nothing to measure
    const scrollPosition = window.scrollY;
    const scrollPercentage = Math.round((scrollPosition / scrollHeight) * 100);
    // Update max scroll percentage
    if (scrollPercentage > maxScrollPercentage) {
      maxScrollPercentage = scrollPercentage;
      // Check if we've hit any breakpoints
      breakpoints.forEach(point => {
        if (!reachedBreakpoints[point] && maxScrollPercentage >= point) {
          reachedBreakpoints[point] = true;
          sendMetric('scrollDepth', {
            percentage: point,
            pageUrl: window.location.href,
            timestamp: performance.now()
          });
        }
      });
    }
  }, { passive: true });
  // Report final scroll depth on page unload
  window.addEventListener('beforeunload', () => {
    sendMetric('finalScrollDepth', {
      percentage: maxScrollPercentage,
      pageUrl: window.location.href
    });
  });
}
```
Privacy and Consent Considerations
Proper RUM implementation requires careful attention to privacy:
Consent Management
Integrate RUM with your consent management platform:
```javascript
function initializeRUM() {
  // Check if we have consent
  if (hasUserConsent('analytics')) {
    initializeFullRUM();
  } else {
    // Initialize minimal RUM (no user-specific data)
    initializeMinimalRUM();
    // Listen for consent changes
    subscribeToConsentChanges((categories) => {
      if (categories.includes('analytics')) {
        initializeFullRUM();
      }
    });
  }
}

function initializeFullRUM() {
  // Full RUM implementation with all metrics
  // ...
}

function initializeMinimalRUM() {
  // Minimal RUM implementation with only anonymous, aggregated metrics
  // ...
}
```
Data Minimization
Implement data minimization practices:
- Avoid personal data: Never collect personally identifiable information
- IP anonymization: Truncate IP addresses before storage
- Session-level data: Use session IDs instead of user IDs where possible
- Limited retention: Define and enforce data retention policies
```javascript
function prepareRUMData(data) {
  // Create a copy to avoid modifying the original
  const sanitizedData = { ...data };
  // Remove potentially sensitive data
  delete sanitizedData.userAgent;
  // Sanitize URLs
  if (sanitizedData.pageUrl) {
    sanitizedData.pageUrl = removeQueryParameters(
      sanitizedData.pageUrl,
      ['email', 'name', 'user', 'token']
    );
  }
  // Ensure no form values are captured
  if (sanitizedData.formData) {
    sanitizedData.formFields = Object.keys(sanitizedData.formData).length;
    delete sanitizedData.formData;
  }
  return sanitizedData;
}

// Helper to remove sensitive query parameters
function removeQueryParameters(url, paramsToRemove) {
  const urlObj = new URL(url);
  paramsToRemove.forEach(param => {
    urlObj.searchParams.delete(param);
  });
  return urlObj.toString();
}
```
Regulatory Compliance
Ensure compliance with relevant regulations:
- GDPR: Implement appropriate consent management for EU users
- CCPA/CPRA: Honor do-not-sell requests for California residents
- ePrivacy: Address cookie requirements for EU users
- Child protection: Consider COPPA if your site may be used by children
Document your compliance approach within your RUM implementation plan.
Analyzing RUM Data for Performance Optimization
Collecting data is only valuable if you analyze it effectively to drive improvements.
Data Aggregation and Visualization
Effective RUM analysis requires proper data handling:
Statistical Analysis
When analyzing RUM data, focus on these statistical approaches:
- Percentiles over averages: Use p50 (median), p75, p90, and p95 rather than mean values
- Distribution analysis: Examine the full distribution of performance metrics
- Segmentation: Analyze data across different dimensions (device, location, etc.)
- Trend analysis: Track changes over time to identify improvements or regressions
```javascript
function calculatePercentiles(metricValues, percentiles = [50, 75, 90, 95, 99]) {
  // Sort the values
  const sortedValues = [...metricValues].sort((a, b) => a - b);
  const results = {};
  percentiles.forEach(p => {
    // Clamp the index so p = 100 still maps to the last element
    const index = Math.min(
      Math.floor((p / 100) * sortedValues.length),
      sortedValues.length - 1
    );
    results[`p${p}`] = sortedValues[index];
  });
  return results;
}
```
Visualization Approaches
Effective RUM data visualization includes:
- Heatmaps: Geographic performance visualization
- Histograms: Distribution of metric values
- Time-series charts: Performance trends over time
- Scatter plots: Correlation between metrics
- Funnel visualizations: Performance across user journey steps
Geographic Performance Distribution Analysis
Analyzing performance across locations provides valuable insights:
Regional Performance Patterns
Look for patterns in performance across regions:
- Identify high-latency regions: Look for locations with consistently poor performance
- Compare similar regions: Analyze variations between similar markets
- Correlate with infrastructure: Compare performance with CDN/hosting locations
- Time-of-day analysis: Examine performance variations by local time
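The regional comparison above reduces to a simple aggregation: group collected samples by region and compare a high percentile per region. A sketch, assuming samples shaped like `{ region, lcp }` (an illustrative schema, not a standard one):

```javascript
// Group RUM samples by region and compute p75 LCP per region.
// Samples are assumed to look like { region: 'eu-west', lcp: 2100 }.
function p75ByRegion(samples) {
  const byRegion = {};
  for (const s of samples) {
    (byRegion[s.region] = byRegion[s.region] || []).push(s.lcp);
  }
  const result = {};
  for (const [region, values] of Object.entries(byRegion)) {
    const sorted = values.slice().sort((a, b) => a - b);
    const index = Math.min(Math.floor(sorted.length * 0.75), sorted.length - 1);
    result[region] = sorted[index];
  }
  return result;
}
```

Regions whose p75 sits well above the global figure are the first candidates for CDN or infrastructure investigation.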
Connection Type Impact
Analyze how different connection types affect experience:
- Mobile network performance: Compare 3G, 4G, and 5G experiences
- Fixed broadband analysis: Evaluate fiber, cable, and DSL performance
- Correlation with device types: Examine how devices perform on various connections
- Regional connection variations: Identify regions with poor connectivity infrastructure
User Journey Performance Tracking
Analyzing performance across user journeys provides business-relevant insights:
Critical Path Analysis
Identify and analyze performance on business-critical paths:
- Conversion funnel performance: Analyze each step in conversion processes
- Login/authentication flows: Examine performance of authentication processes
- Search and navigation paths: Analyze common navigation patterns
- Checkout procedures: Carefully monitor e-commerce checkout performance
Correlation with Business Metrics
Connect performance data with business outcomes:
- Conversion rate correlation: Analyze how performance impacts conversion
- Bounce rate relationship: Correlate performance with bounces
- Session duration impact: Examine how performance affects engagement
- Revenue impact modeling: Estimate financial impact of performance improvements
This business context makes performance data more actionable and helps prioritize optimizations.
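As a starting point for the correlation analyses above, a plain Pearson coefficient between paired arrays of a performance metric and a business metric (hypothetical data, e.g. per-session load time and conversion flag) already shows direction and strength of the relationship:

```javascript
// Pearson correlation coefficient between two equal-length numeric arrays.
// A negative value between load time and conversion suggests slower pages convert less.
function pearson(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - meanX;
    const dy = ys[i] - meanY;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}
```

Correlation is not causation, so a strong coefficient should prompt a controlled experiment rather than a conclusion.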
Optimization Prioritization
Use RUM data to prioritize performance improvements:
Impact Assessment
Evaluate potential optimizations based on:
- User impact scope: Percentage of users affected by an issue
- Performance improvement potential: Expected metric improvements
- Business value alignment: Impact on critical business flows
- Implementation complexity: Resource requirements for fixing issues
- Maintenance considerations: Long-term sustainability of optimizations
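One way to rank candidates against these criteria is a simple weighted score. The weights and the 1-to-5 scales below are purely illustrative; the point is making the trade-off explicit rather than the exact numbers:

```javascript
// Score a candidate optimization on 1-5 criteria.
// Higher expected benefit and lower complexity yield a higher score.
// Weights are illustrative; tune them to your organization's priorities.
function scoreOptimization({ userImpact, improvement, businessValue, complexity }) {
  const benefit = 0.4 * userImpact + 0.3 * improvement + 0.3 * businessValue;
  return benefit / complexity;
}

// Sort candidates by descending score
function prioritize(candidates) {
  return candidates.slice().sort((a, b) => scoreOptimization(b) - scoreOptimization(a));
}
```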
Iterative Improvement Process
Implement a continuous improvement cycle:
- Baseline measurement: Establish current performance
- Hypothesis development: Identify potential improvements
- Implementation: Deploy changes (ideally A/B tested)
- Validation: Measure impact on RUM metrics
- Refinement: Adjust approach based on results
This data-driven approach ensures that optimization efforts deliver meaningful improvements.
Implementation Best Practices
Based on experience implementing RUM across various organizations, consider these best practices:
Phased Implementation
Rather than implementing everything at once, consider a phased approach:
- Phase 1: Basic page load metrics and Core Web Vitals
- Phase 2: Resource timing and user journey tracking
- Phase 3: Custom event tracking and user experience metrics
- Phase 4: Advanced correlation and business impact analysis
This approach allows you to gain value quickly while building toward a comprehensive implementation.
Error Handling and Resilience
Ensure your RUM implementation doesn't create problems:
```javascript
(function() {
  try {
    // RUM implementation code
    initializeRUM();
  } catch (error) {
    // Log the error but don't affect user experience
    console.error('RUM initialization error:', error);
    // Attempt to report the error if possible
    try {
      fetch('/rum-error-log', {
        method: 'POST',
        body: JSON.stringify({
          message: error.message,
          stack: error.stack,
          timestamp: new Date().toISOString()
        }),
        headers: {
          'Content-Type': 'application/json'
        }
      });
    } catch (e) {
      // Silently fail if error reporting fails
    }
  }
})();
```
Key principles include:
- Fail gracefully: RUM should never impact user experience
- Self-monitoring: Track and report RUM script performance
- Throttling mechanisms: Limit resource usage in problematic scenarios
- Automatic recovery: Implement retry logic for transient issues
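The throttling principle can be as simple as a counter-based limiter wrapped around metric transmission. A sketch (the limit of events per window, and the injectable clock used here for testability, are illustrative choices):

```javascript
// Limit how many RUM events may be sent within a time window.
// Events beyond the budget are silently dropped rather than queued.
function createThrottle(maxEvents, windowMs, now = () => Date.now()) {
  let windowStart = now();
  let count = 0;
  return function allow() {
    const t = now();
    if (t - windowStart >= windowMs) {
      windowStart = t; // start a fresh window
      count = 0;
    }
    return ++count <= maxEvents;
  };
}
```

Usage: `const allow = createThrottle(100, 60000);` and then guard every send with `if (allow()) sendMetric(...)`.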
Performance Budget Integration
Connect RUM data with performance budgets:
- Define metric thresholds: Establish acceptable limits for key metrics
- Automated monitoring: Alert when metrics exceed thresholds
- Trend analysis: Track metrics against budgets over time
- Release integration: Check performance budgets during deployment
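The budget check itself can be automated by comparing each tracked metric's aggregated value against its budgeted limit and flagging violations. The metric names and budget values below are examples:

```javascript
// Compare measured metric values against a performance budget.
// Returns the list of metrics that exceeded their budgeted limit.
function checkBudget(measured, budget) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budget)) {
    if (measured[metric] !== undefined && measured[metric] > limit) {
      violations.push({ metric, value: measured[metric], limit });
    }
  }
  return violations;
}
```

A CI or deployment step can then fail (or alert) whenever `checkBudget` returns a non-empty list.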
This integration helps maintain performance standards across development cycles.
Conclusion
Implementing Real User Monitoring effectively transforms abstract performance concerns into concrete, actionable data. By properly configuring RUM, collecting meaningful metrics, and analyzing the results strategically, you can continuously improve user experience based on real-world usage patterns.
Remember that RUM implementation is not a one-time project but an ongoing process. Technology evolves, user expectations change, and new performance metrics emerge. Regularly review and update your RUM approach to ensure it continues to provide valuable insights.
For organizations looking to implement comprehensive monitoring solutions that include both synthetic monitoring and RUM capabilities, consider exploring Odown's monitoring solutions to understand how our platform can help you gain deeper visibility into both availability and user experience.