Focus Areas
Last updated: January 2026
My Favorite Problems
Like many of my friends, I take great inspiration from Richard Feynman's idea of having "favorite problems." The story goes that Feynman's advice on becoming a "genius" was to keep a dozen favorite problems in mind and filter new data and information against them. While I'm not religious about maintaining exactly twelve, I do regularly update my "favorite problems" and treat them as focus areas for research, experimentation, and writing over some long-ish time horizon.
Currently, my favorite problems include:
Technology and Human Flourishing
- What does it mean to use technology “skillfully”?
- How could individuals rethink their relationship with technology from the ground up, being more intentional about cost-benefit tradeoffs? How do we diagnose and address current problems in our consumption?
- How do we craft new technology that explicitly enables human flourishing? And can we use experimentation to get there?
- What are the overlaps between mindfulness/spiritual development and technology? How might mindfulness help us use technology more intentionally? And how might technology support or accelerate spiritual development?
Meditation and Awakening
- What meditative practices, among other tools, will help me most efficiently and effectively develop the skills of concentration, sensory clarity, and equanimity? (see: Shinzen Young, "What is Mindfulness?")
- How do I bring mindfulness plus Buddhist ethics into different aspects of daily life? How do I best "show up?"
- What does it mean to "grow up" and "clean up" in addition to "waking up"? What is the work required? (see: Wilber)
Writing and Ideas
- How do I continue to improve my ability to formulate, communicate, and collaborate on ideas via writing and speaking? How might AI contribute to the "end of the knowledge economy", and what could a new emergent "wisdom economy" look like?
- In what ways does writing parallel or overlap with spiritual practice? (see: Natalie Goldberg et al.)
Health
- What is the most efficient and effective 80/20 approach to physical health and longevity?
By Day
At Datadog, my remit is creating practitioner-focused content on experimentation, feature flagging, and product analytics. That could mean writing blog posts and giving talks, helping craft educational courses and workshops, collaborating on research, interviewing interesting folks for the Outperform podcast, or even hosting community meetups. My area of specialty is the scaling of experimentation programs across organizations, but I regularly act as an editor, co-author, or general collaborator to my colleagues who are experts in statistics, engineering, or other fields.
The specific topics we're focused on are informed by triangulating what customers are thinking about, the product features we're building, and where the space is being pushed forward by innovative thinking and research from both academia and industry. A few current highlights:
Proxy Metrics
Sound business decisions often hinge on long-term metrics (LTV, retention, subscriptions), but experiments need short-term signals to conclude in a timely way. How do you select, validate, and correct for proxy metrics to enable this?
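One common way to vet a candidate proxy (not necessarily how we do it at Datadog) is to check how well its short-term lifts tracked later long-term lifts across past experiments. A minimal sketch, with entirely hypothetical lift data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical history: each pair is (short-term proxy lift,
# long-term metric lift measured well after the experiment ended).
history = [(0.02, 0.015), (0.05, 0.04), (-0.01, -0.02),
           (0.00, 0.001), (0.03, 0.025)]
proxy_lifts = [p for p, _ in history]
longterm_lifts = [l for _, l in history]

score = pearson(proxy_lifts, longterm_lifts)
print(f"proxy/long-term correlation: {score:.2f}")
```

A proxy whose historical lifts correlate strongly with the long-term metric is a reasonable starting candidate, though correlation across experiments is only one piece of the selection and correction problem.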
Policy Analysis
What is the best statistical inference regime for testing? Many organizations currently apply one-size-fits-all statistical policies (p < 0.05, 80% power) across all teams and use cases, but a growing confluence of research and practical experience suggests these traditional defaults are poorly suited to the context and parameters of online A/B testing.
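To see why the policy choice matters, consider how required sample size moves with alpha and power under the standard two-sided z-test approximation. The numbers below are illustrative only, not a recommendation of any particular policy:

```python
from statistics import NormalDist

def n_per_arm(mde, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided z-test on a
    difference in means: n = 2 * ((z_{1-a/2} + z_power) * sigma / mde)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return 2 * ((z_alpha + z_power) * sigma / mde) ** 2

# Default policy vs. a looser policy a low-risk team might tolerate
# (hypothetical mde/sigma values).
print(round(n_per_arm(mde=0.01, sigma=0.5)))                          # p < 0.05, 80% power
print(round(n_per_arm(mde=0.01, sigma=0.5, alpha=0.20, power=0.60)))  # looser policy
```

The looser policy needs a small fraction of the traffic, which is exactly the tradeoff a one-size-fits-all regime never lets teams make explicitly.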
Root Causing
Nobody likes seeing dead-neutral experiment results. The current “state of the art” is to slice and dice results by every segment imaginable. But what if changes aren't explained by segment heterogeneity? How can we get a better understanding of what drives our experiment results?
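The segment-heterogeneity case is easy to illustrate: an experiment can look dead-neutral overall while hiding offsetting effects in subgroups. A toy sketch with hypothetical segment names and lifts:

```python
# Hypothetical per-segment results: opposite-signed lifts that cancel
# out in the pooled estimate.
segments = {
    "new users":       {"n": 4000, "lift": +0.03},
    "returning users": {"n": 6000, "lift": -0.02},
}

total_n = sum(s["n"] for s in segments.values())
overall_lift = sum(s["n"] * s["lift"] for s in segments.values()) / total_n

print(f"overall lift: {overall_lift:+.3f}")  # ~0.000: looks dead-neutral
for name, s in segments.items():
    print(f"  {name}: {s['lift']:+.3f} (n={s['n']})")
```

When slicing like this turns up nothing, the interesting question above kicks in: the explanation must lie somewhere other than observable segment membership.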
Reducing Experiment Restarts
We lose as much time to bugs and restarts, which force us to discard experimental data and start over, as we gain from the most powerful statistical methods (CUPED, sequential testing). How can observability and statistical diagnostic tools detect issues within minutes or hours?
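One widely used statistical diagnostic of this kind (offered as an example, not a description of any particular product) is the sample ratio mismatch (SRM) check: a chi-square test on whether traffic actually split at the configured ratio. Counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def srm_pvalue(control_n, treatment_n, expected_ratio=0.5):
    """Chi-square (df=1) goodness-of-fit test for sample ratio
    mismatch: did traffic really split at expected_ratio?"""
    total = control_n + treatment_n
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = ((control_n - exp_c) ** 2 / exp_c
            + (treatment_n - exp_t) ** 2 / exp_t)
    # For df=1, the chi-square survival function reduces to the
    # two-sided normal tail probability of sqrt(chi2).
    return 2 * (1 - NormalDist().cdf(sqrt(chi2)))

# A nominal 50/50 split that drifted to ~50.5/49.5
p = srm_pvalue(505_000, 495_000)
print(f"SRM p-value: {p:.2e}")  # tiny -> likely a bug; pause before trusting results
```

Because SRM only needs assignment counts, it can fire long before any outcome metric matures, which is what makes it a candidate for the minutes/hours detection window.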
Offline Evaluation / Simulated Experiments
AI has turbocharged development speed, demanding faster and more comprehensive testing. Could agentic simulated traffic help QA experiments, or even predict their outcomes? How could offline AI evals be improved to better align with online A/B test results?
Self-Service Experimentation Infrastructure
The oldest topic in experimentation platforms remains one of the biggest: launching and running tests asks too much of the end user. Too much statistical literacy is required, too many decisions need to be made, and there are UI/UX frictions and cultural blockers like approvals. How can more intentional product thinking solve for this?
Let's connect!
If these questions are at all of interest to you too, feel free to reach out, or grab a time on my calendar to get introduced (no need to reach out first). I would love to discuss these themes and jam on ideas with anyone, especially folks with perspectives or backgrounds different from mine.
More about contacting me on the homepage.