How to Track Trial User Activity for SaaS Analytics

A developer's guide to tracking and analyzing trial user activity in SaaS. Covers the five key metrics, event tracking implementation with code examples, building a trial health score, identifying at-risk users, using activity data to trigger interventions, and creating conversion-predictive models that turn analytics into revenue.

By TrialMoments Team · 14 min read · Updated Mar 2026

Trial user activity tracking is the practice of instrumenting and analyzing every meaningful action a free trial user takes in your SaaS product. Unlike general product analytics, trial-specific tracking focuses on behaviors that predict conversion: which features they use, how often they return, how quickly they reach first value, and how their engagement changes over the trial period. SaaS companies that implement data-driven trial management convert 3x more trial users to paid compared to those that treat all trial users the same.

The challenge is not collecting data—most products already have analytics—but knowing which data matters for trial conversion, how to structure it for actionable insights, and how to turn those insights into timely interventions. This guide covers the complete pipeline: from choosing metrics and implementing event tracking to building a trial health score and using it to trigger conversion actions. We also show how TrialMoments uses activity signals to optimize the timing of its conversion moments.

The 5 Key Metrics for Trial User Activity

Not all user activity is equally predictive of conversion. These five metrics, when tracked together, give you the clearest picture of trial health and conversion likelihood. Each metric answers a different question about the user's trial journey. For a broader view of trial user activation, see our dedicated guide.

1. Feature Usage Depth

Question: How many features has the user tried, and how deeply?

Track both breadth (how many distinct features used) and depth (how many actions within each feature). A user who creates one project is less engaged than one who creates three, adds tasks, and sets due dates. Depth predicts conversion better than breadth.

Benchmark: Users who engage with 3+ core features convert at 2.5x the rate of those who use only 1.
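
To make breadth and depth concrete, here is a minimal sketch. The `FeatureEvent` shape is an assumption for illustration, not part of this guide's tracking schema; adapt it to the events your product actually emits.

```typescript
// Hypothetical event shape; adapt to your own tracking schema.
interface FeatureEvent {
  featureName: string;
  action: string; // e.g. 'create', 'edit', 'share'
}

// Breadth = distinct features touched; depth = average actions per feature.
export function featureUsageDepth(events: FeatureEvent[]) {
  const actionsPerFeature = new Map<string, number>();
  for (const e of events) {
    actionsPerFeature.set(e.featureName, (actionsPerFeature.get(e.featureName) ?? 0) + 1);
  }
  const breadth = actionsPerFeature.size;
  const depth = breadth > 0 ? events.length / breadth : 0;
  return { breadth, depth };
}
```

A user with many actions concentrated in one feature scores high on depth but low on breadth; tracking both catches that asymmetry.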

2. Session Frequency

Question: How often does the user return?

Session frequency during the trial is the strongest leading indicator of conversion. Users who return daily in the first week convert at 4x the rate of those who log in once and do not return. Track sessions per day and the gap between sessions.

Benchmark: 3+ sessions in the first 7 days predicts conversion with 70% accuracy.
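
Both numbers can be derived from the session-start timestamps your tracker records. This helper is illustrative (the function name and return shape are ours, not a standard API):

```typescript
// Sessions per day and the largest gap (hours) between consecutive sessions,
// computed from a list of ISO session-start timestamps.
export function sessionStats(sessionStarts: string[], trialDays: number) {
  const times = sessionStarts
    .map(t => new Date(t).getTime())
    .sort((a, b) => a - b);
  let maxGapHours = 0;
  for (let i = 1; i < times.length; i++) {
    maxGapHours = Math.max(maxGapHours, (times[i] - times[i - 1]) / 3_600_000);
  }
  return {
    sessionsPerDay: times.length / Math.max(1, trialDays),
    maxGapHours,
  };
}
```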

3. Time-to-First-Value (TTFV)

Question: How quickly does the user experience core value?

Measure the elapsed time from sign-up to the first meaningful action (your activation event). Users who reach value in under 30 minutes convert at 3x the rate of those who take more than 48 hours. If your median TTFV exceeds 24 hours, focus on reducing trial churn through onboarding improvements.

Benchmark: Best-in-class products achieve TTFV under 10 minutes.
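
As a small sketch, TTFV is just the delta between sign-up and the first activation event. The function below assumes ISO timestamp strings (matching the tracking layer later in this guide) and returns `null` when the user has not yet reached value:

```typescript
// Minutes from sign-up to the first activation event; null if not yet reached.
export function timeToFirstValueMinutes(
  signupAt: string,
  firstActivationAt: string | null
): number | null {
  if (!firstActivationAt) return null;
  const ms = new Date(firstActivationAt).getTime() - new Date(signupAt).getTime();
  return Math.round(ms / 60_000);
}
```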

4. Activation Events

Question: Has the user completed actions that predict conversion?

Activation events are the 2-3 specific actions that most strongly correlate with conversion. They differ by product: for a project management tool, it might be "create a project and invite a teammate"; for an analytics tool, "create a custom dashboard." Identify them by comparing the behavior of users who converted vs. those who did not.

Benchmark: Users who complete all activation events convert at 5-8x the baseline rate.

5. Engagement Velocity

Question: Is usage increasing, stable, or declining?

Engagement velocity measures the rate of change in product usage over time. A user whose daily actions are increasing is far more likely to convert than one whose usage is flat or declining, even if both have the same total number of actions. Calculate it as the ratio of last 3 days' activity to first 3 days' activity.

Benchmark: Engagement velocity above 1.0 (increasing usage) correlates with 2x conversion rate.
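
The ratio needs a zero-division guard for users who were inactive early. This sketch mirrors the fallback used in the health score code later in this guide (1.5 for users who started late but are now active, 0 for inactive users):

```typescript
// Ratio of recent activity to early activity, with guards for zero
// early-period activity.
export function engagementVelocity(
  actionsFirst3Days: number,
  actionsLast3Days: number
): number {
  if (actionsFirst3Days > 0) return actionsLast3Days / actionsFirst3Days;
  // No early activity: treat current activity as a positive late start.
  return actionsLast3Days > 0 ? 1.5 : 0;
}
```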

Implementing Event Tracking for Trials

Event tracking for trial users requires a structured approach. Every event needs standard properties (user ID, trial day, plan) plus event-specific properties. Here is a production-ready event tracking layer that works with Segment, Mixpanel, Amplitude, or a custom backend.

The Trial Event Tracker

// lib/trial-analytics.ts

interface TrialContext {
  userId: string;
  trialStartDate: string;
  trialEndDate: string;
  plan: string;
}

export interface TrackingEvent {
  event: string;
  properties: Record<string, unknown>;
  timestamp: string;
}

export class TrialAnalytics {
  private context: TrialContext;
  private providers: AnalyticsProvider[];

  constructor(context: TrialContext, providers: AnalyticsProvider[]) {
    this.context = context;
    this.providers = providers;
  }

  private getTrialDay(): number {
    const start = new Date(this.context.trialStartDate).getTime();
    return Math.floor((Date.now() - start) / (1000 * 60 * 60 * 24)) + 1;
  }

  private getDaysRemaining(): number {
    const end = new Date(this.context.trialEndDate).getTime();
    return Math.max(0, Math.ceil((end - Date.now()) / (1000 * 60 * 60 * 24)));
  }

  track(event: string, properties: Record<string, unknown> = {}) {
    const enrichedEvent: TrackingEvent = {
      event,
      properties: {
        ...properties,
        // Standard trial properties added to every event
        userId: this.context.userId,
        plan: this.context.plan,
        trialDay: this.getTrialDay(),
        trialDaysRemaining: this.getDaysRemaining(),
        isTrialUser: this.context.plan === 'trial',
        timestamp: new Date().toISOString(),
      },
      timestamp: new Date().toISOString(),
    };

    // Send to all configured providers
    this.providers.forEach(provider => {
      try {
        provider.track(enrichedEvent);
      } catch (error) {
        console.error(`Analytics provider error: ${error}`);
      }
    });
  }

  // Convenience methods for common trial events
  trackFeatureUsed(featureName: string, depth: number = 1) {
    this.track('feature_used', { featureName, depth });
  }

  trackSessionStart() {
    this.track('session_start', {
      referrer: document.referrer,
      url: window.location.href,
    });
  }

  trackActivationEvent(eventName: string) {
    this.track('activation_event', {
      activationEvent: eventName,
      timeSinceSignup: Date.now() - new Date(this.context.trialStartDate).getTime(),
    });
  }

  trackUpgradeIntent(source: string) {
    this.track('upgrade_intent', { source });
  }
}

export interface AnalyticsProvider {
  track(event: TrackingEvent): void;
}

Provider Implementations

Here are adapters for the most common analytics tools. The adapter pattern means you can switch providers without touching your tracking calls.

// lib/analytics-providers.ts
import type { AnalyticsProvider, TrackingEvent } from './trial-analytics';

// Segment adapter (window.analytics is injected by the Segment snippet)
export class SegmentProvider implements AnalyticsProvider {
  track(event: TrackingEvent) {
    (window as any).analytics?.track(event.event, event.properties);
  }
}

// Mixpanel adapter (window.mixpanel is injected by the Mixpanel snippet)
export class MixpanelProvider implements AnalyticsProvider {
  track(event: TrackingEvent) {
    (window as any).mixpanel?.track(event.event, event.properties);
  }
}

// Custom API adapter (send to your own backend)
export class CustomAPIProvider implements AnalyticsProvider {
  private endpoint: string;
  private buffer: TrackingEvent[] = [];
  private flushTimeout: ReturnType<typeof setTimeout> | null = null; // portable in browser and Node

  constructor(endpoint: string) {
    this.endpoint = endpoint;
  }

  track(event: TrackingEvent) {
    this.buffer.push(event);

    // Batch events and flush every 5 seconds
    if (!this.flushTimeout) {
      this.flushTimeout = setTimeout(() => this.flush(), 5000);
    }
  }

  private async flush() {
    const events = [...this.buffer];
    this.buffer = [];
    this.flushTimeout = null;

    if (events.length === 0) return;

    try {
      await fetch(this.endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ events }),
      });
    } catch (error) {
      // Re-buffer failed events and schedule a retry
      this.buffer.unshift(...events);
      this.flushTimeout = setTimeout(() => this.flush(), 5000);
    }
  }
}

Initialization in your app:

// app/providers.tsx
import { TrialAnalytics } from '@/lib/trial-analytics';
import { SegmentProvider, CustomAPIProvider } from '@/lib/analytics-providers';

// `user` comes from your auth/session context
const trialAnalytics = new TrialAnalytics(
  {
    userId: user.id,
    trialStartDate: user.trialStartDate,
    trialEndDate: user.trialEndDate,
    plan: user.plan,
  },
  [
    new SegmentProvider(),
    new CustomAPIProvider('/api/analytics/events'),
  ]
);

// Track a feature being used
trialAnalytics.trackFeatureUsed('dashboard-builder', 3);

// Track an activation event
trialAnalytics.trackActivationEvent('first-report-created');

// Track session start
trialAnalytics.trackSessionStart();

Building a Trial Health Score

A trial health score condenses multiple activity signals into a single 0-100 number that predicts conversion likelihood. It answers the question: "Is this trial user on track to convert?" The score drives automated interventions: low-score users get re-engagement emails, high-score users get well-timed upgrade prompts.

// lib/trial-health-score.ts

export interface UserActivity {
  totalSessions: number;
  sessionsLast3Days: number;
  sessionsFirst3Days: number;
  featuresUsed: number;
  activationEventsCompleted: number;
  totalActivationEvents: number;
  minutesSinceLastSession: number;
  trialDayNumber: number;
  trialLengthDays: number;
}

interface HealthScoreResult {
  score: number;          // 0-100
  level: 'critical' | 'at-risk' | 'healthy' | 'champion';
  factors: HealthFactor[];
}

interface HealthFactor {
  name: string;
  score: number;
  weight: number;
  description: string;
}

export function calculateTrialHealthScore(
  activity: UserActivity
): HealthScoreResult {
  const factors: HealthFactor[] = [];

  // Factor 1: Session frequency (weight: 30%)
  const expectedSessions = activity.trialDayNumber * 0.7; // ~5 sessions/week
  const sessionScore = Math.min(100,
    (activity.totalSessions / Math.max(1, expectedSessions)) * 100
  );
  factors.push({
    name: 'Session Frequency',
    score: sessionScore,
    weight: 0.30,
    description: `${activity.totalSessions} sessions in ${activity.trialDayNumber} days`,
  });

  // Factor 2: Feature depth (weight: 25%)
  const featureScore = Math.min(100, (activity.featuresUsed / 5) * 100);
  factors.push({
    name: 'Feature Depth',
    score: featureScore,
    weight: 0.25,
    description: `${activity.featuresUsed} features explored`,
  });

  // Factor 3: Activation progress (weight: 25%)
  const activationScore = activity.totalActivationEvents > 0
    ? (activity.activationEventsCompleted / activity.totalActivationEvents) * 100
    : 0;
  factors.push({
    name: 'Activation Progress',
    score: activationScore,
    weight: 0.25,
    description: `${activity.activationEventsCompleted}/${activity.totalActivationEvents} events`,
  });

  // Factor 4: Engagement velocity (weight: 20%)
  const velocity = activity.sessionsFirst3Days > 0
    ? activity.sessionsLast3Days / activity.sessionsFirst3Days
    : activity.sessionsLast3Days > 0 ? 1.5 : 0;
  const velocityScore = Math.min(100, velocity * 50);
  factors.push({
    name: 'Engagement Velocity',
    score: velocityScore,
    weight: 0.20,
    description: velocity >= 1.0 ? 'Increasing usage' : 'Declining usage',
  });

  // Calculate weighted score
  const score = Math.round(
    factors.reduce((sum, f) => sum + f.score * f.weight, 0)
  );

  // Determine health level
  let level: HealthScoreResult['level'];
  if (score >= 70) level = 'champion';
  else if (score >= 50) level = 'healthy';
  else if (score >= 30) level = 'at-risk';
  else level = 'critical';

  return { score, level, factors };
}

Critical (0-29)

User is disengaged and very unlikely to convert.

Action: Send re-engagement email with value reminder and offer a call with customer success.

At-Risk (30-49)

User has shown some engagement but is not activating.

Action: Trigger in-app message highlighting unused features. Offer guided tour of key feature.

Healthy (50-69)

User is engaged and progressing through activation.

Action: Continue standard trial experience. Surface social proof and upgrade prompts at natural moments.

Champion (70-100)

User is highly engaged and likely to convert.

Action: Present well-timed upgrade prompt. Offer annual plan discount. Show trial expiration message with countdown.
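
The four tiers above can be wired directly to automated actions. In this sketch the action names are placeholders for your own email and in-app triggers, not a real API, and the thresholds match the bands listed above:

```typescript
type HealthLevel = 'critical' | 'at-risk' | 'healthy' | 'champion';

// Placeholder action identifiers; map these to your messaging system.
const INTERVENTIONS: Record<HealthLevel, string> = {
  critical: 'send_reengagement_email',
  'at-risk': 'show_feature_highlight_message',
  healthy: 'continue_standard_trial',
  champion: 'show_upgrade_prompt',
};

// Same thresholds as calculateTrialHealthScore's level bands.
export function interventionFor(score: number): string {
  if (score >= 70) return INTERVENTIONS.champion;
  if (score >= 50) return INTERVENTIONS.healthy;
  if (score >= 30) return INTERVENTIONS['at-risk'];
  return INTERVENTIONS.critical;
}
```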

Identifying and Saving At-Risk Trial Users

At-risk trial users are those whose activity signals predict they will churn before converting. Identifying them early—within 48-72 hours of disengagement—gives you a window to intervene before they mentally abandon your product. This is one of the most impactful strategies for reducing trial drop-off.

// lib/at-risk-detection.ts
import type { UserActivity } from './trial-health-score';

interface AtRiskSignal {
  type: 'session_gap' | 'no_activation' | 'declining_usage' | 'shallow_engagement';
  severity: 'warning' | 'critical';
  message: string;
  recommendedAction: string;
}

export function detectAtRiskSignals(
  activity: UserActivity
): AtRiskSignal[] {
  const signals: AtRiskSignal[] = [];

  // Signal 1: Session gap (no login in 3+ days)
  const hoursInactive = activity.minutesSinceLastSession / 60;
  if (hoursInactive > 72) {
    signals.push({
      type: 'session_gap',
      severity: hoursInactive > 120 ? 'critical' : 'warning',
      message: `User has not logged in for ${Math.floor(hoursInactive / 24)} days`,
      recommendedAction: 'Send re-engagement email with specific feature highlight',
    });
  }

  // Signal 2: No activation events by trial midpoint
  const isPassedMidpoint = activity.trialDayNumber > activity.trialLengthDays / 2;
  if (isPassedMidpoint && activity.activationEventsCompleted === 0) {
    signals.push({
      type: 'no_activation',
      severity: 'critical',
      message: 'Past trial midpoint with zero activation events',
      recommendedAction: 'Trigger guided onboarding or offer 1:1 demo call',
    });
  }

  // Signal 3: Declining engagement velocity
  if (activity.sessionsFirst3Days > 0 && activity.sessionsLast3Days === 0) {
    signals.push({
      type: 'declining_usage',
      severity: 'critical',
      message: 'User was active initially but has stopped returning',
      recommendedAction: 'Send "we noticed you haven\'t been back" email with value recap',
    });
  }

  // Signal 4: Shallow feature engagement
  if (activity.trialDayNumber >= 3 && activity.featuresUsed <= 1) {
    signals.push({
      type: 'shallow_engagement',
      severity: 'warning',
      message: `Explored ${activity.featuresUsed} feature(s) in 3+ days`,
      recommendedAction: 'Show in-app prompt to explore related features',
    });
  }

  return signals;
}

Intervention Timing Matters

The most effective intervention window is 48-72 hours after the at-risk signal appears. Earlier than 48 hours may feel pushy (the user might just be busy). Later than 72 hours and the user has likely moved on mentally. Automate these interventions: when the health score drops below your threshold, trigger the appropriate action immediately. Manual follow-up is too slow for trial timelines. For optimizing trial length, activity data tells you whether your trial is too long or too short.

Building a Conversion-Predictive Model

With enough trial data (200+ completed trials is a reasonable minimum), you can build a simple logistic regression model that predicts conversion probability for each active trial user. This is not machine learning complexity—it is basic statistics applied to your trial metrics.

// lib/conversion-prediction.ts

interface PredictionInput {
  sessionsFirstWeek: number;
  featuresUsed: number;
  activationScore: number;   // 0-1
  engagementVelocity: number; // ratio
  trialDay: number;
}

interface PredictionResult {
  probability: number;       // 0-1
  confidence: 'low' | 'medium' | 'high';
  topFactors: string[];
}

// Coefficients derived from historical trial data
// In production, retrain monthly with new data
const MODEL_WEIGHTS = {
  intercept: -2.5,
  sessionsFirstWeek: 0.35,
  featuresUsed: 0.28,
  activationScore: 1.8,
  engagementVelocity: 0.45,
  trialDay: -0.05,  // Later in trial = slightly lower probability
};

export function predictConversion(
  input: PredictionInput
): PredictionResult {
  // Logistic regression: P(convert) = sigmoid(w * x + b)
  const z =
    MODEL_WEIGHTS.intercept +
    MODEL_WEIGHTS.sessionsFirstWeek * input.sessionsFirstWeek +
    MODEL_WEIGHTS.featuresUsed * input.featuresUsed +
    MODEL_WEIGHTS.activationScore * input.activationScore +
    MODEL_WEIGHTS.engagementVelocity * input.engagementVelocity +
    MODEL_WEIGHTS.trialDay * input.trialDay;

  const probability = 1 / (1 + Math.exp(-z));

  // Determine confidence based on trial day (more data = higher confidence)
  let confidence: PredictionResult['confidence'];
  if (input.trialDay >= 7) confidence = 'high';
  else if (input.trialDay >= 3) confidence = 'medium';
  else confidence = 'low';

  // Identify top contributing factors
  const contributions = [
    { name: 'Session frequency', value: MODEL_WEIGHTS.sessionsFirstWeek * input.sessionsFirstWeek },
    { name: 'Feature usage', value: MODEL_WEIGHTS.featuresUsed * input.featuresUsed },
    { name: 'Activation progress', value: MODEL_WEIGHTS.activationScore * input.activationScore },
    { name: 'Engagement trend', value: MODEL_WEIGHTS.engagementVelocity * input.engagementVelocity },
  ].sort((a, b) => Math.abs(b.value) - Math.abs(a.value));

  return {
    probability: Math.round(probability * 100) / 100,
    confidence,
    topFactors: contributions.slice(0, 2).map(c => c.name),
  };
}

The model outputs a probability between 0 and 1. Users with probability above 0.6 are strong conversion candidates and should receive timely upgrade prompts. Users below 0.3 need intervention. The coefficients above are starting points: calibrate them with your actual trial data by running a logistic regression on your historical conversions. Even a basic model improves targeting accuracy by 15% compared to treating all trial users the same.
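
As a hedged sketch of that calibration step, plain batch gradient descent is enough to fit logistic weights from historical trials; no ML library is required. The feature order is up to you (it should match how you assemble `PredictionInput`), and this is illustrative rather than production-grade (no regularization, convergence check, or empty-input guard):

```typescript
// One completed trial: feature vector plus the conversion outcome.
interface TrialRecord {
  features: number[];
  converted: boolean;
}

// Fit logistic-regression weights by batch gradient descent.
// Assumes a non-empty dataset with consistent feature dimensions.
export function fitLogistic(
  data: TrialRecord[],
  lr = 0.1,
  epochs = 500
): { intercept: number; weights: number[] } {
  const dim = data[0].features.length;
  let intercept = 0;
  const weights = new Array<number>(dim).fill(0);

  for (let epoch = 0; epoch < epochs; epoch++) {
    let gradB = 0;
    const gradW = new Array<number>(dim).fill(0);
    for (const { features, converted } of data) {
      const z = intercept + features.reduce((s, x, i) => s + weights[i] * x, 0);
      const p = 1 / (1 + Math.exp(-z)); // predicted conversion probability
      const err = p - (converted ? 1 : 0);
      gradB += err;
      for (let i = 0; i < dim; i++) gradW[i] += err * features[i];
    }
    intercept -= (lr / data.length) * gradB;
    for (let i = 0; i < dim; i++) weights[i] -= (lr / data.length) * gradW[i];
  }
  return { intercept, weights };
}
```

Refit monthly (as the code comments above suggest) and drop the resulting values into `MODEL_WEIGHTS`.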

Using Activity Data to Trigger Interventions

The real value of trial analytics is not the dashboards—it is the automated actions triggered by the data. Here is a framework for mapping health score levels and signals to specific interventions.

Email Interventions

Day 2, health < 30: "Getting started" email with link to quickest path to value. Include a specific action they have not taken yet.
Day 5, no activation: Email highlighting the #1 feature converted users love. Include a direct link to that feature.
72 hours inactive: "We noticed you have not been back" email. Keep it short, empathetic, and action-oriented.

In-App Interventions

Health 50-69, activated: Show subtle trial countdown timer as a banner. User has value context; urgency drives conversion.
Health 70+, returning user: Display upgrade prompt with social proof: "92% of users like you upgrade to Pro."
Feature gate hit, health 40+: Use feature flagging to show a blocked-state upgrade prompt. User is engaged enough to care.
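
The email and in-app triggers above reduce to a small rules table. This sketch uses hypothetical field and action names; the point is the shape (predicate plus action), not the specific API:

```typescript
// Snapshot of a trial user's current activity; field names are assumptions.
interface ActivitySnapshot {
  trialDay: number;
  healthScore: number;
  hoursInactive: number;
  activated: boolean;
}

interface InterventionRule {
  id: string;
  matches: (a: ActivitySnapshot) => boolean;
  action: string; // placeholder identifier for your messaging system
}

// Rules mirroring the email triggers listed above.
const RULES: InterventionRule[] = [
  { id: 'getting-started', matches: a => a.trialDay >= 2 && a.healthScore < 30, action: 'email_getting_started' },
  { id: 'no-activation', matches: a => a.trialDay >= 5 && !a.activated, action: 'email_top_feature' },
  { id: 'inactive-72h', matches: a => a.hoursInactive >= 72, action: 'email_we_miss_you' },
];

export function pendingInterventions(a: ActivitySnapshot): string[] {
  return RULES.filter(r => r.matches(a)).map(r => r.action);
}
```

Keeping the rules in data rather than scattered `if` statements makes it easy to add, A/B test, or retire triggers without touching the evaluation loop.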

How TrialMoments Uses Activity Signals

TrialMoments integrates with your activity data to time its conversion moments optimally. Instead of showing upgrade prompts on a fixed schedule (day 3, day 7, day 12), it uses engagement signals to identify the right moment for each individual user.

// Initialize TrialMoments with activity-aware config
TrialMoments.init({
  accountId: 'your-id',
  trialEndDate: '2026-04-15',
  upgradeUrl: '/upgrade',
});

// TrialMoments uses these signals automatically:
// - Time remaining in trial (countdown urgency)
// - User engagement level (when to show prompts)
// - Feature interaction history (what to highlight)
// - Session patterns (optimal timing)

// You can also send explicit signals:
TrialMoments.triggerBlockedFeature('advanced-analytics');
// Shows a conversion prompt specifically about advanced analytics
// timed to the user's engagement level and trial position

Activity-Aware vs. Time-Based Prompts

Time-Based (Traditional)

  • Day 1: Welcome message
  • Day 7: Midpoint reminder
  • Day 12: Expiration warning
  • Day 14: Trial ended

Same prompts for all users regardless of engagement.

Activity-Aware (TrialMoments)

  • After activation event: Upgrade context
  • High engagement + midpoint: Countdown timer
  • Feature gate hit: Feature-specific prompt
  • Approaching expiration: Urgency-calibrated CTA

Prompts timed to individual engagement patterns.

The result: upgrade prompts appear when the user has context and motivation, not on arbitrary calendar dates. This approach to using trial data for conversion consistently outperforms fixed-schedule messaging.

Turn Trial Activity Data into Conversions

TrialMoments uses engagement signals to time conversion moments perfectly. Countdown timers, upgrade prompts, and feature-block modals—all optimized based on user activity. 30KB bundle, 5-minute integration.

FAQ: Trial User Activity Tracking

What are the most important metrics to track for trial users?

The five most important trial user metrics are: (1) Feature usage depth, which measures how many core features a user has tried and how deeply they have engaged with each, (2) Session frequency, which tracks how often the user returns during the trial period, (3) Time-to-first-value, the elapsed time from sign-up to the user's first meaningful action, (4) Activation events, specific actions that correlate strongly with conversion like creating a project or inviting a teammate, and (5) Engagement velocity, the rate at which the user increases their product usage over time. Together these five metrics predict trial-to-paid conversion with approximately 85% accuracy when combined into a health score model.

How do I build a trial health score?

A trial health score is a composite metric that predicts whether a trial user will convert to paid. To build one, first identify 3-5 behaviors that correlate with conversion using historical data (feature usage, session frequency, activation events). Assign each behavior a weight based on its correlation strength. Score each user from 0-100 by evaluating their behaviors against these weighted criteria. For example, if feature usage correlates 2x more strongly with conversion than session frequency, give it twice the weight. Recalculate the score daily. Users scoring below 40 are at risk of churning and should receive intervention. Users above 70 are likely to convert and should receive well-timed upgrade prompts.

How do I identify at-risk trial users before they churn?

Identify at-risk trial users by monitoring three warning signals: (1) Declining session frequency, where the user logged in daily in week one but has not returned in 3 or more days, (2) Shallow feature usage, where the user signed up but has not completed any activation events, and (3) No engagement with core features by trial midpoint. Set up automated alerts when a user's trial health score drops below a threshold (typically 30-40 out of 100). The most effective intervention window is 48-72 hours after disengagement begins, before the user mentally abandons the product. Interventions include targeted emails, in-app messages highlighting unused features, and personalized onboarding outreach.

What analytics tools should I use for trial user tracking?

For trial user tracking, you need two types of tools: an event tracking layer and an analysis layer. For event tracking, use Segment, Rudderstack, or a custom event bus to capture user actions and send them to multiple destinations. For analysis, Mixpanel and Amplitude are purpose-built for product analytics with cohort analysis and funnel visualization. PostHog is a strong open-source alternative. For lightweight implementations, a custom events table in your database with SQL queries works for products under 5,000 trial users. The key is to separate event collection from analysis so you can change your analytics tool without rewriting instrumentation code.

How does TrialMoments use activity data for conversion?

TrialMoments uses trial user activity signals to time its conversion interventions optimally. Rather than showing upgrade prompts on a fixed schedule, TrialMoments analyzes engagement patterns to identify the moments when a user is most receptive to a conversion message. For example, it might show a countdown timer after a user completes a key activation event (when they have experienced value) rather than on their first login (when they have not). This activity-aware timing means upgrade prompts appear when the user has context and motivation, resulting in higher conversion rates compared to time-based-only approaches. You handle the analytics instrumentation; TrialMoments uses the signals to optimize when and how conversion moments appear.

Ready to Convert More Trial Users with Data?

TrialMoments uses activity signals to time conversion moments perfectly. Deploy in 5 minutes, see data-driven conversion improvements in days.

Get Started with TrialMoments