You are running two Facebook ad campaigns (Campaign 1 and Campaign 2) to drive users to a product that offers a free trial followed by the ability to book and join a class. Users land on one of three Landing Page (LP) variants, with an equal likelihood of being assigned to any variant. The objective is to determine which campaign and LP variant should be used to scale up the campaign based on user behavior, conversion rates, and performance. When a user lands on an LP, they follow a funnel and can take the following actions in the product:
1. Sign up
2. Book a class
3. Join a class
4. Become a member
Analyze the data in the attached sheet and identify the best recommendation: which campaign and landing page variant should be used to scale up the campaign.
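Since the attached sheet is not reproduced here, the sketch below assumes a hypothetical per-user export with columns campaign, lp_variant, and 0/1 funnel flags (signed_up, booked_class, joined_class, became_member); it shows one way to compare step-by-step conversion across the campaign and LP combinations.

import pandas as pd

# Assumed schema: one row per user who landed on an LP, with 0/1 flags marking
# how far down the funnel they progressed. The file name is hypothetical.
df = pd.read_csv("campaign_funnel.csv")

funnel_steps = ["signed_up", "booked_class", "joined_class", "became_member"]

# Share of landed users reaching each funnel step, per campaign x LP variant.
summary = (
    df.groupby(["campaign", "lp_variant"])[funnel_steps]
      .mean()
      .mul(100)
      .round(2)
)
summary["users"] = df.groupby(["campaign", "lp_variant"]).size()

# Rank combinations by end-of-funnel conversion (membership rate).
print(summary.sort_values("became_member", ascending=False))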
Insights Copilot is a Gen AI-based product designed to provide contextual insights from structured and unstructured data sources, enabling users to query in natural language. The tool delivers summarized responses, generates insights, and visualizes data across multiple domains, ensuring quick and efficient information retrieval. Identify essential growth metrics for Insights Copilot, focusing on acquisition, retention, engagement, and monetization, and propose ways to measure them effectively.
Identify and measure essential growth metrics, including acquisition, retention, engagement, and monetization, to evaluate the product's performance and growth.
Analyze usage data to categorize user behaviors (e.g., core, casual, power users) and define activation metrics to assess user engagement and feature adoption.
Evaluate retention trends and Product-Market Fit (PMF) using usage and feedback data, identify gaps in analytics, and recommend suitable tools for tracking and improvement.
In the dashboard above, I have provided insights for some of the metrics discussed in the explanation below. There are 839 total queries submitted by 27 unique users, averaging 31.07 queries per user. The overall success rate is high at 94.76%, with a failure rate of just 5.24%.
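As a quick consistency check on these figures: 839 queries / 27 users ≈ 31.07 queries per user, and a 94.76% success rate corresponds to roughly 795 successful and 44 failed queries.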
We can analyse user behaviour by categorizing users based on how they interact with queries, dividing them into three types. Power Users, with more than 30 queries per month, demonstrate high engagement and complex behaviour, with usage spanning multiple departments. Core Users, who fall within 6-30 queries per month, exhibit a more routine usage pattern focused on specific departments. Casual Users, with fewer than 6 queries per month, show less engagement, often with simpler queries and lower success rates.
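As a sketch of this segmentation, assuming a hypothetical query log with one row per query and columns user_id and query_ts:

import pandas as pd

queries = pd.read_csv("query_log.csv", parse_dates=["query_ts"])  # assumed export

# Queries per user per calendar month.
monthly = (
    queries.assign(month=queries["query_ts"].dt.to_period("M"))
           .groupby(["user_id", "month"])
           .size()
           .rename("queries_per_month")
           .reset_index()
)

# Thresholds mirror the definitions above: <6 Casual, 6-30 Core, >30 Power.
monthly["segment"] = pd.cut(
    monthly["queries_per_month"],
    bins=[0, 5, 30, float("inf")],
    labels=["Casual", "Core", "Power"],
)

print(monthly.groupby("segment", observed=True)["user_id"].nunique())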
To assess user activation, I would define metrics such as Primary Activation (the first successful query), Secondary Activation (5+ queries in the first month), and Full Activation. By tracking these behaviours, we can refine product development and marketing strategies, ensuring alignment with user needs and enhancing engagement over time.
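A rough sketch of how these activation metrics could be computed from the same hypothetical query log (with a status column marking success or failure); Full Activation is left out because its criterion is not specified above:

import pandas as pd

queries = pd.read_csv("query_log.csv", parse_dates=["query_ts"])
total_users = queries["user_id"].nunique()

# Primary Activation: users with at least one successful query.
primary = queries.loc[queries["status"] == "success", "user_id"].nunique()

# Secondary Activation: users with 5+ queries within a month of their first query.
first_seen = queries.groupby("user_id")["query_ts"].transform("min")
first_month = queries[queries["query_ts"] < first_seen + pd.DateOffset(months=1)]
secondary = int(first_month.groupby("user_id").size().ge(5).sum())

print(f"Primary activation rate:   {primary / total_users:.1%}")
print(f"Secondary activation rate: {secondary / total_users:.1%}")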
To assess retention trends, I'd monitor week-over-week (WoW) retention. Key indicators include an increasing percentage of power users, stable success rates, and consistent user activity over time; power-user growth matters in particular because these users drive long-term engagement. Regular tracking of query success rates is crucial, since consistent performance suggests users are deriving value from the product. For evaluating Product-Market Fit (PMF), I'd focus on the percentage of power users, engagement depth (average queries per active user), success-rate trends, and the proportion of active departments; the implementation can be seen in the dashboard above. These indicators help show whether users are increasingly finding value and whether the product has achieved fit with its target market.
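A minimal sketch of the WoW retention calculation on the same hypothetical query log:

import pandas as pd

queries = pd.read_csv("query_log.csv", parse_dates=["query_ts"])
queries["week"] = queries["query_ts"].dt.to_period("W")

# Set of active users per week.
weekly_users = queries.groupby("week")["user_id"].agg(set).sort_index()

# WoW retention: share of the previous week's active users who return this week.
retention = {}
for prev_week, curr_week in zip(weekly_users.index[:-1], weekly_users.index[1:]):
    prev_set, curr_set = weekly_users[prev_week], weekly_users[curr_week]
    retention[str(curr_week)] = len(prev_set & curr_set) / len(prev_set)

print(pd.Series(retention).round(3))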
The current usage data may have gaps in areas such as user path analysis, session-level
metrics, and feature-specific engagement. Without a clear view of the user journey, it can be
difficult to understand where friction points exist. Additionally, understanding user satisfaction
through direct feedback and categorizing errors is crucial for refining the product. Among the
product analytics tools, I recommend Amplitude for its advanced tracking capabilities,
including cohort analysis, custom event tracking, and AI-powered insights, which would allow
a deep dive into user behaviour and feature usage patterns. Google Analytics, while cost-effective, lacks the B2B-focused features needed for more granular analysis. Usersnap is
useful for gathering visual feedback and session recordings but falls short on analytics. By
selecting Amplitude, ABI Insights Copilot can address current data gaps and drive more
informed decision-making.
The success of the chat bar feature can be tracked using key metrics like Query Completion
Rate, Average Response Time, and Query Refinement Rate. Query Completion Rate
measures how often users successfully complete their queries, indicating the feature’s
effectiveness. Average Response Time reflects the speed at which the feature resolves
queries, while Query Refinement Rate shows how frequently users modify their queries, which
may indicate user frustration or the need for feature improvement. Feature Adoption, which
tracks the percentage of users who actively use the chat bar, is another critical metric. Impact
analysis should include User Engagement Metrics, such as Queries per Session and Session
Duration, to assess overall interaction with the feature. Churn analysis can identify early
indicators of disengagement, like declining query frequency or increasing error rates. By
regularly reviewing these metrics, integrating user feedback, and iterating based on insights,
the chat bar can be continuously improved to meet user expectations and enhance
satisfaction.
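These metrics could be instrumented along the following lines, assuming a hypothetical chat-bar event log with columns user_id, session_id, query_ts, status, response_time_s, and an is_refinement flag marking re-phrased queries:

import pandas as pd

events = pd.read_csv("chat_bar_events.csv", parse_dates=["query_ts"])
total_product_users = 27  # unique users from the dashboard figures above

completion_rate = (events["status"] == "success").mean()
avg_response_time = events["response_time_s"].mean()
refinement_rate = events["is_refinement"].mean()
feature_adoption = events["user_id"].nunique() / total_product_users

# Engagement metrics: queries per session and session duration.
sessions = events.groupby("session_id")["query_ts"].agg(["size", "min", "max"])
queries_per_session = sessions["size"].mean()
avg_session_duration = (sessions["max"] - sessions["min"]).mean()

print(f"Query Completion Rate: {completion_rate:.1%}")
print(f"Average Response Time: {avg_response_time:.1f}s")
print(f"Query Refinement Rate: {refinement_rate:.1%}")
print(f"Feature Adoption:      {feature_adoption:.1%}")
print(f"Queries per Session:   {queries_per_session:.1f}")
print(f"Average Session Duration: {avg_session_duration}")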
The metrics used and their calculations are given below:
Engagement Depth = Avg Queries per Active User
Success Rate Trend = Successful Queries / Total Queries
User Satisfaction Score = (Positive Feedback / Total Feedback) × 100
Success Rate = (Completed queries in department / Total queries in department) × 100
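For clarity, the same formulas can be expressed as small Python helpers; the sample call uses the dashboard's total query and unique user counts.

def engagement_depth(total_queries, active_users):
    # Average queries per active user
    return total_queries / active_users

def success_rate_trend(successful_queries, total_queries):
    return successful_queries / total_queries

def user_satisfaction_score(positive_feedback, total_feedback):
    return positive_feedback / total_feedback * 100

def department_success_rate(completed_in_department, total_in_department):
    return completed_in_department / total_in_department * 100

print(round(engagement_depth(839, 27), 2))  # 31.07, matching the dashboard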