Executive Summary
Providing business intelligence and analytics consulting for a technology company, delivering SQL-based insights and interactive dashboards that support product and operational decision-making. Built Power BI dashboards tracking product KPIs (user engagement, feature adoption, retention) for product management and executive stakeholders. Wrote SQL queries analysing user behaviour patterns, conversion funnels, and feature usage to inform product roadmap priorities. Developed data quality validation frameworks ensuring 99%+ accuracy for executive reporting and created self-service analytics solutions reducing ad-hoc report requests by 60%.
Returning to Syilum With Fresh Perspective
Coming back to Syilum after my Signet Jewelers engagement was eye-opening. At Signet, I'd built financial consolidation pipelines processing 15.2M+ records, automated Databricks workflows, and delivered M&A intelligence for $450M+ in acquisitions. Returning to Syilum, I brought that enterprise rigor to product analytics. The first thing I did was audit the existing reporting: I found 23 dashboards with subtly different KPI definitions. "Active user" meant different things on different reports. My first deliverable wasn't a new dashboard—it was a governed KPI dictionary that everyone agreed on. That unglamorous work became the foundation for everything I've built since.
Scope
Ongoing freelance BI consulting: Power BI dashboards, SQL analytics, data quality frameworks, and self-service reporting solutions for product and executive teams.
Key Deliverables
Product KPI dashboards (engagement, retention, feature adoption), conversion funnel analysis, user behaviour SQL queries, validated executive reports with 99%+ accuracy.
Impact
60% reduction in ad-hoc report requests through self-service analytics; 99%+ data accuracy; actionable product roadmap recommendations from SQL-driven user behaviour insights.
Business Challenge
- Fragmented Product Metrics: Product KPIs (engagement, feature adoption, retention) were tracked inconsistently across multiple tools with no single source of truth for executive decision-making
- Ad-Hoc Overload: Product and engineering teams submitted frequent one-off data requests, consuming analyst time and creating bottlenecks for strategic work
- Data Quality Concerns: Executive reports lacked formal validation, leading to occasional discrepancies that eroded stakeholder confidence in the data
- Limited Self-Service: Business users relied entirely on analysts for every data question, creating dependency and slowing product decisions
- Conversion Visibility Gap: No structured analysis of user conversion funnels or feature usage patterns to inform product roadmap priorities
Product KPI Dashboard
Interactive Power BI dashboards tracking user engagement, feature adoption, retention cohorts, and conversion funnels for product management and executive stakeholders.
Feature Adoption Trends (Last 6 Months)
Why I Start With Feature Adoption, Not Revenue
Most product dashboards lead with revenue or active users. I learned at Syilum that those are lagging indicators—by the time revenue dips, the underlying problem (low feature adoption, poor retention) has been festering for weeks. I structured the dashboard to surface leading indicators first: which features are users actually adopting? Where do they drop off? The Advanced Analytics module sitting at 22% adoption immediately told the product team something was wrong with the onboarding flow for that feature. We traced it to a missing tooltip in the setup wizard. A $0 fix that has already pushed adoption from 22% to 38% month over month.
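The trend view behind those adoption numbers can be sketched as a monthly adoption query. This is a simplified illustration against the same analytics.events schema used in the funnel analysis below; counting monthly active users from the raw events table is an assumption, not the production logic.

-- Monthly feature adoption: 7-day-use adopters as a share of monthly active users
SELECT
    DATE_TRUNC('month', e.event_date) AS month,
    e.feature_id,
    COUNT(DISTINCT CASE WHEN e.event_name = 'feature_used_7d' THEN e.user_id END) AS adopters,
    COUNT(DISTINCT e.user_id) AS monthly_active_users,
    ROUND(
        COUNT(DISTINCT CASE WHEN e.event_name = 'feature_used_7d' THEN e.user_id END)::FLOAT
        / NULLIF(COUNT(DISTINCT e.user_id), 0) * 100, 1
    ) AS adoption_rate_pct
FROM analytics.events e
WHERE e.event_date >= DATEADD('month', -6, CURRENT_DATE)
GROUP BY 1, 2
ORDER BY 1, 2;

Plotting adoption_rate_pct per feature over the six-month window gives the trend chart directly, without a per-feature custom report.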
Solution Architecture
- Product KPI Dashboards: Built Power BI dashboards with DAX measures tracking user engagement scores, feature adoption rates, retention cohorts, and conversion funnels; configured scheduled refresh and row-level security for team-specific views
- SQL Analytics Engine: Wrote SQL queries in Snowflake analysing user behaviour patterns, conversion funnels, and feature usage; created reusable query templates for common product analysis patterns (cohort retention, funnel drop-off, engagement scoring)
- Data Quality Framework: Developed automated validation checks ensuring 99%+ accuracy for executive reporting; implemented row count reconciliation, null detection, outlier flagging, and cross-report consistency checks
- Self-Service Analytics Layer: Created Power BI semantic model enabling product managers and engineering leads to independently explore data; built guided report templates with embedded documentation reducing ad-hoc requests by 60%
- Product Roadmap Intelligence: Partnered with product and engineering teams translating business questions into SQL analysis and actionable recommendations; delivered weekly insights briefs identifying user behaviour trends and feature prioritization signals
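One of the reusable query templates mentioned above, cohort retention, might look like the following sketch. It reuses the analytics.events table from the funnel query; using each user's first-seen week as the signup cohort is an assumption made for illustration.

-- Weekly cohort retention: share of each signup cohort still active N weeks later
WITH cohorts AS (
    -- Assumption: first-seen week stands in for the signup week
    SELECT user_id, DATE_TRUNC('week', MIN(event_date)) AS cohort_week
    FROM analytics.events
    GROUP BY user_id
),
activity AS (
    SELECT DISTINCT user_id, DATE_TRUNC('week', event_date) AS active_week
    FROM analytics.events
),
cohort_size AS (
    SELECT cohort_week, COUNT(*) AS n
    FROM cohorts
    GROUP BY 1
)
SELECT
    c.cohort_week,
    DATEDIFF('week', c.cohort_week, a.active_week) AS weeks_since_signup,
    COUNT(DISTINCT a.user_id) AS active_users,
    ROUND(COUNT(DISTINCT a.user_id)::FLOAT / NULLIF(MAX(s.n), 0) * 100, 1) AS retention_pct
FROM cohorts c
JOIN activity a ON a.user_id = c.user_id
JOIN cohort_size s ON s.cohort_week = c.cohort_week
GROUP BY 1, 2
ORDER BY 1, 2;

Templating the grain (weekly vs. monthly) and the activity definition is what made the same query reusable across product questions.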
The Self-Service Breakthrough
The 60% reduction in ad-hoc requests didn't come from fancy technology—it came from understanding what people actually needed. I spent a week logging every data request: who asked, what they wanted, how often, and what they did with the answer. Patterns emerged immediately. 70% of requests were variations of the same 8 questions: "How many active users this week?", "What's the conversion rate for feature X?", "Show me retention by cohort." Instead of building 8 custom reports, I built one semantic model with guided exploration paths. PMs could filter by feature, time range, and user segment without writing a single query. The remaining 30% of requests were genuinely complex—those became my strategic analysis work instead of repetitive SQL queries.
Sample SQL — Conversion Funnel Analysis
SQL — Feature Adoption Funnel by User Segment
-- Conversion funnel: feature discovery → activation → adoption
WITH funnel AS (
    SELECT
        u.user_segment,
        COUNT(DISTINCT CASE WHEN e.event_name = 'feature_viewed' THEN e.user_id END) AS discovered,
        COUNT(DISTINCT CASE WHEN e.event_name = 'feature_activated' THEN e.user_id END) AS activated,
        COUNT(DISTINCT CASE WHEN e.event_name = 'feature_used_7d' THEN e.user_id END) AS adopted
    FROM analytics.events e
    JOIN analytics.dim_users u ON e.user_id = u.user_id
    WHERE e.event_date >= DATEADD('day', -30, CURRENT_DATE)
      AND e.feature_id = 'dashboard_builder'
    GROUP BY 1
)
SELECT
    user_segment,
    discovered,
    activated,
    adopted,
    ROUND(activated::FLOAT / NULLIF(discovered, 0) * 100, 1) AS activation_rate,
    ROUND(adopted::FLOAT / NULLIF(activated, 0) * 100, 1) AS adoption_rate
FROM funnel
ORDER BY discovered DESC;
Why Funnel Queries Need NULLIF
That NULLIF in the query isn't just defensive coding—it's a lesson I learned the hard way. Early in my career, a division-by-zero error in a scheduled report went unnoticed for three days because the error was silently caught by the BI tool and displayed as "0% conversion." Three days of executive decisions based on a broken metric. Now every ratio calculation gets NULLIF protection, and I build alerting into the validation framework that flags any metric that drops to exactly zero—because zero is almost never a real business metric; it's almost always a data issue.
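A minimal version of that zero-flagging check, as one query in the validation framework. The analytics.kpi_daily table name and its columns are illustrative assumptions for this sketch, not the production schema.

-- Flag KPIs that landed at exactly zero yesterday: almost always a pipeline
-- break or a broken filter, almost never a real business value
SELECT
    metric_name,
    metric_date,
    metric_value
FROM analytics.kpi_daily
WHERE metric_date = CURRENT_DATE - 1
  AND metric_value = 0
ORDER BY metric_name;

Any rows returned here feed the alerting step, so a silently zeroed metric surfaces within one refresh cycle instead of three days later.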
Results
Self-Service Adoption
60% reduction in ad-hoc report requests through self-service Power BI semantic model; product managers independently explore KPIs without analyst dependency.
Data Quality
99%+ report accuracy achieved through automated validation framework; zero data discrepancy escalations since implementation.
Product Intelligence
SQL-driven user behaviour analysis directly informed product roadmap priorities; identified feature adoption gaps leading to targeted UX improvements.
The Report Nobody Asked For (That Everyone Uses)
My most impactful deliverable wasn't in any SOW. While building the feature adoption dashboard, I noticed an interesting pattern in the SQL data: users who engaged with 3+ features in their first week had 4x higher 90-day retention than single-feature users. I built a "First Week Feature Breadth" report and shared it during a product review. The VP of Product immediately asked: "Can we identify users in their first week who haven't hit 3 features yet?" That question led to a targeted in-app nudge campaign for new users. It's now one of the most-viewed reports in the entire Power BI workspace. Sometimes the best analytics work isn't answering the question you were asked—it's surfacing the question nobody thought to ask.
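The analysis behind that report can be sketched roughly as follows. It reuses analytics.events from the earlier queries; treating each user's first event date as their signup date, and the analytics.user_retention table with its retained_90d flag, are assumptions made for illustration.

-- First Week Feature Breadth: distinct features touched in the first 7 days
WITH first_seen AS (
    SELECT user_id, MIN(event_date) AS signup_date  -- assumption: first event ≈ signup
    FROM analytics.events
    GROUP BY user_id
),
first_week AS (
    SELECT
        e.user_id,
        COUNT(DISTINCT e.feature_id) AS features_week_1
    FROM analytics.events e
    JOIN first_seen s ON s.user_id = e.user_id
    WHERE e.event_date < DATEADD('day', 7, s.signup_date)
    GROUP BY e.user_id
)
SELECT
    CASE WHEN f.features_week_1 >= 3 THEN '3+ features' ELSE '1-2 features' END AS breadth,
    COUNT(*) AS users,
    ROUND(AVG(CASE WHEN r.retained_90d THEN 1 ELSE 0 END) * 100, 1) AS retention_90d_pct
FROM first_week f
JOIN analytics.user_retention r ON r.user_id = f.user_id  -- assumed retention table
GROUP BY 1;

The VP's follow-up question—which current first-week users haven't hit three features yet—is the same query with the date window anchored to today and the retention join dropped.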
Challenges & Lessons
- KPI Governance First: Standardizing metric definitions across teams before building dashboards prevented downstream confusion and rework.
- Self-Service Requires Guardrails: Unrestricted data access led to misinterpretation; guided exploration paths with embedded context improved data literacy.
- Validation is a Feature: Treating data quality checks as first-class deliverables (not afterthoughts) built executive trust and eliminated escalation incidents.