Focus Areas

Last updated: January 2026

My Favorite Problems

Like many of my friends, I take great inspiration from Richard Feynman's idea of having "favorite problems." The story goes that Feynman's advice for becoming a "genius" was to keep a dozen favorite problems in mind and filter new data and information against them. While I'm not religious about maintaining exactly 12, I do regularly update my "favorite problems" and treat them as focus areas for research, experimentation, and writing over some long-ish time horizon.

Currently, my favorite problems include:

Technology and Human Flourishing
Meditation and Awakening
Writing and Ideas
Health

By Day

At Datadog, my remit is creating practitioner-focused content on experimentation, feature flagging, and product analytics. That could mean writing blog posts and giving talks, helping craft educational courses and workshops, collaborating on research, interviewing interesting folks for the Outperform podcast, or even hosting community meetups. My area of speciality is the scaling of experimentation programs across organizations, but I regularly act as an editor, co-author, or general collaborator to my colleagues who are experts in statistics, engineering, or other fields.

The specific topics we're focused on are informed by triangulating what customers are thinking about, the product features we're building, and where the space is being pushed forward by innovative thinking and research from both academia and industry. A few current highlights:

Proxy Metrics
Solid business decisions often require evaluating long-term metrics (LTV, retention, subscriptions), but experiments need short-term signals to make timely decisions. How do you select and correct for proxy metrics to enable this?
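
To make the problem concrete, here's a rough sketch (not how we do it at Datadog, just an illustration) of one common sanity check: looking at how well a candidate proxy's movement in past experiments tracked the long-term metric you actually care about. All column names and numbers below are made up.

```python
# Hypothetical sketch: scoring a candidate proxy against a long-term metric
# using historical experiment results. Column names and data are invented.
import pandas as pd

# One row per past experiment: the short-term proxy delta observed during the
# test, and the long-term metric delta measured after a sufficient holdout.
history = pd.DataFrame({
    "experiment_id": ["exp_01", "exp_02", "exp_03", "exp_04", "exp_05"],
    "proxy_delta": [0.021, -0.004, 0.013, 0.000, 0.030],      # e.g. week-1 activation lift
    "long_term_delta": [0.018, -0.002, 0.009, 0.001, 0.024],  # e.g. 6-month retention lift
})

# A simple first check: how strongly do proxy movements track the
# long-term outcome across experiments?
correlation = history["proxy_delta"].corr(history["long_term_delta"])

# A naive correction factor (slope of long-term delta on proxy delta)
# to translate a proxy lift into an implied long-term lift.
slope = history["long_term_delta"].cov(history["proxy_delta"]) / history["proxy_delta"].var()

print(f"proxy/long-term correlation: {correlation:.2f}")
print(f"implied long-term lift per unit of proxy lift: {slope:.2f}")
```

Of course, the hard parts are everything this glosses over: which past experiments were even run long enough to measure the long-term outcome, and how much to trust a correction factor fit on a handful of data points.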

Policy Analysis
What is the best statistical inference regime for testing? Currently, many organizations use one-size-fits-all statistical policies (p < 0.05, 80% power) across all teams and use cases, but a growing body of research and practical experience suggests that traditional statistical approaches are often a poor fit for the context and parameters of online A/B testing.
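
As a rough illustration of what's at stake (illustrative numbers only, not a recommendation), here's a back-of-the-envelope sample-size calculation showing how much the default alpha/power policy "costs" relative to a looser policy a team running low-risk changes might reasonably choose.

```python
# Back-of-the-envelope sample size per arm for a two-sided z-test on a
# conversion rate. Numbers are illustrative, not from any particular product.
from scipy.stats import norm

def n_per_arm(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate per-arm sample size to detect a relative lift on a rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) ** 2 * pooled_var / (p2 - p1) ** 2

# Under the one-size-fits-all policy, detecting a 2% relative lift on a 5%
# baseline needs on the order of 750k users per arm...
print(round(n_per_arm(0.05, 0.02)))

# ...while alpha=0.10 and 70% power for a low-risk change cuts that
# requirement by roughly 40%.
print(round(n_per_arm(0.05, 0.02, alpha=0.10, power=0.70)))
```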

Root Causing
Nobody likes seeing dead-neutral experiment results. The current “state of the art” is to slice and dice results by every segment imaginable. But what if the changes aren't explained by segment heterogeneity? How can we get a better understanding of what drives our experiment results?
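
For a sense of what slice-and-dice looks like in practice, here's a hypothetical sketch: per-segment lifts plus a Cochran's Q-style heterogeneity check. When Q comes back unremarkable, the segments aren't telling you much, which is exactly where this question starts.

```python
# Hypothetical sketch of the "slice and dice" approach: per-segment lifts
# plus a simple heterogeneity check, to see whether segments explain a
# flat overall result. Column names and data are invented.
import pandas as pd
from scipy.stats import chi2

# Per-segment summary of one experiment: conversions and users per arm.
segments = pd.DataFrame({
    "segment": ["mobile", "desktop", "new_users", "returning"],
    "treat_conv": [520, 410, 300, 630], "treat_n": [10000, 8000, 6000, 12000],
    "ctrl_conv":  [480, 455, 260, 655], "ctrl_n":  [10000, 8000, 6000, 12000],
})

p_t = segments["treat_conv"] / segments["treat_n"]
p_c = segments["ctrl_conv"] / segments["ctrl_n"]
effect = p_t - p_c                                   # per-segment absolute lift
se2 = p_t * (1 - p_t) / segments["treat_n"] + p_c * (1 - p_c) / segments["ctrl_n"]

# Cochran's Q: do segment-level effects differ more than sampling noise allows?
w = 1 / se2
pooled = (w * effect).sum() / w.sum()
q = (w * (effect - pooled) ** 2).sum()
p_value = chi2.sf(q, df=len(segments) - 1)

print(segments.assign(lift=effect.round(4)))
print(f"heterogeneity Q={q:.2f}, p={p_value:.3f}")
```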

Reducing Experiment Restarts
We lose as much time to bugs and restarts that force us to discard experimental data and start over as we gain from the most powerful statistical methods (CUPED, sequential testing). How can observability and statistical diagnostic tools detect issues within minutes or hours?
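
One example of the kind of diagnostic I have in mind (a common one across the industry, not anything specific to Datadog's product) is a sample ratio mismatch check: a quick chi-square test on assignment counts that can flag broken bucketing or logging within hours instead of at readout time.

```python
# A sample ratio mismatch (SRM) check is one common early diagnostic for the
# kind of bug that forces a restart: if assignment counts drift from the
# intended split, something upstream is broken. Illustrative numbers only.
from scipy.stats import chisquare

def srm_check(observed_counts, intended_ratios, alpha=0.001):
    """Chi-square test of observed assignment counts vs. the intended split."""
    total = sum(observed_counts)
    expected = [total * r for r in intended_ratios]
    stat, p_value = chisquare(observed_counts, f_exp=expected)
    return p_value, p_value < alpha

# A 50/50 test that has drifted to roughly 49.5/50.5 after a few hours of traffic.
p, is_srm = srm_check([99_000, 101_000], [0.5, 0.5])
print(f"p={p:.2e}, sample ratio mismatch detected: {is_srm}")
```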

Offline Evaluation / Simulated Experiments
AI has turbocharged development speed, demanding faster and more comprehensive testing. Could agentic simulated traffic help QA experiments or even predict their outcomes? How could offline AI evals be improved to better align with online A/B test results?

Self-Service Experimentation Infrastructure
The oldest topic in experimentation platforms remains one of the biggest: running and launching tests asks too much of the end user. Too much statistical literacy is required, too many decisions need to be made, and there's UI/UX friction and cultural blockers like approvals. How does more intentional product thinking solve for this?

Let's connect!

If these questions are of interest to you too, feel free to reach out, or even proactively grab a time on my calendar to get introduced (no need to reach out first). I would love to discuss these themes and jam on ideas with anyone, especially folks with perspectives or backgrounds different from mine.

More about contacting me on the homepage.