Chris Moody

Founder building world-changing ML products

Skunkworked & scaled Style Shuffle ($200M+/yr), now building Popgot.com.

Open-source: Barnes–Hut t-SNE in scikit-learn; Research: LDA2Vec.


About Chris Moody

I am a founder building world-changing ML products. I blend hard science with scrappy product instincts to create systems people love. My work spans AI, RecSys, NLP, and vision.

I'm currently building Popgot.com, using LLMs to save families money on basic daily essentials. Previously, I founded Gumtap, where I built a billion-scale vector database on commodity object storage.

At Stitch Fix I conceived, prototyped, and skunk-worked Style Shuffle into production—a Tinder-for-clothes game that became a daily habit. It pulled in 10B+ signals, boosted engagement ~30×, and drove over $200M in incremental revenue.

I also ship research: I contributed the Barnes-Hut t-SNE implementation to scikit-learn (O(N²) → O(N log N)) and created LDA2Vec, and I teach at major conferences. My preferred stack is PyTorch, Python, Modal, and Supabase, with React and TypeScript.


Style Shuffle

In 2018 at Stitch Fix, we prided ourselves on personalization, but we lacked the interaction data to drive it. I created Style Shuffle, a Tinder-for-clothes game that let clients rapidly express their style preferences. Launched in March 2018, it became a daily habit for millions of clients, generating 10+ billion interactions from 6M+ players and fueling our Latent Style models. It drove ~30× engagement, 100× more data per user, and over $200M/year in incremental, A/B-tested revenue. Style Shuffle turned Stitch Fix from a monthly service into a daily experience, earned press in WIRED, Fast Company, and Quartz, and turned me from a scientist/engineer into a founder focused on world-changing ML/AI products.


Popgot

I'm building Popgot.com, an AI shopping agent that saves families and small businesses money on basics. Think: toothpaste, laundry pods, hand soap.

Popgot finds hidden shrinkflation and calculates true unit prices across Amazon, Walmart, Costco, Target, and more. It cleans up messy catalogs with LLM pipelines that parse product text, read photos, and normalize data so comparisons are actually fair. AI crawls thousands of SKUs, breaks down ingredient lists, and generates shopping guides that surface 40%+ savings.
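
To make the unit-price idea concrete, here is a tiny, self-contained sketch. The listings, conversion table, and field names below are hypothetical examples, not Popgot's actual data or pipeline.

    # Hypothetical listings: the real pipeline extracts these fields from
    # retailer catalogs with LLMs. Sizes are converted to a common unit
    # (ounces here) so prices can be compared fairly.
    UNIT_TO_OZ = {"oz": 1.0, "lb": 16.0}

    listings = [
        {"retailer": "RetailerA", "price": 7.49, "size": 40, "unit": "oz"},
        {"retailer": "RetailerB", "price": 11.99, "size": 5, "unit": "lb"},
    ]

    def unit_price(listing):
        # Normalize the package size to ounces, then divide price by it.
        ounces = listing["size"] * UNIT_TO_OZ[listing["unit"]]
        return listing["price"] / ounces

    # Cheapest-per-ounce first.
    for item in sorted(listings, key=unit_price):
        print(f'{item["retailer"]}: ${unit_price(item):.3f}/oz')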

On the backend: Cloudflare Workers, Supabase, ClickHouse, and Modal power large-scale crawls, structured LLM classification, and affiliate integrations.
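
As a sketch of the structured-classification piece, here is what schema-validated LLM output can look like, assuming a Pydantic model and a stubbed-out LLM call. The schema fields and the call_llm stand-in are illustrative, not Popgot's actual code.

    from pydantic import BaseModel

    class ProductAttributes(BaseModel):
        # Fields the pipeline asks the model to extract from listing text.
        brand: str
        item_count: int
        net_weight_oz: float
        category: str

    def call_llm(prompt: str) -> str:
        # Stand-in for a real LLM call; returns canned JSON so the sketch runs.
        return '{"brand": "Acme", "item_count": 3, "net_weight_oz": 12.5, "category": "hand soap"}'

    def classify(raw_title: str) -> ProductAttributes:
        raw_json = call_llm(
            f"Return JSON with brand, item_count, net_weight_oz, and category for: {raw_title}"
        )
        # Pydantic validates and types the output, so malformed responses fail loudly.
        return ProductAttributes.model_validate_json(raw_json)

    print(classify("Acme Hand Soap, 3-pack, 12.5 oz total"))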

The goal is simple: make transparent, trustworthy shopping the default, the Wirecutter for daily essentials.

Price List with Unit Prices

Demonstrates a price list showing unit prices correctly extracted and calculated (per ounce, pod, sheet) across multiple retailers, making true cost comparisons transparent.

Custom Annotation UI

Shows our custom annotation interface for fine-tuning LLMs. This UI enables efficient data labeling and model training to improve product classification and data extraction accuracy.

Deep Research Mode

Demonstrates real-time web scraping that builds a classifier model on the fly for specific queries, then automatically generates a comprehensive spreadsheet with structured data and insights.


Machine Learning

I contributed the Barnes-Hut t-SNE approximation to scikit-learn, reducing the algorithm's complexity from O(N²) to O(N log N), so my code is likely on your machine right now. I used Kingma's variational tricks to make t-SNE also represent uncertainties (it's interesting to see that commingled with the optimization process). I've also tinkered with Poincaré embeddings for t-SNE: the key idea is that hyperbolic embeddings can represent "more space" as a function of depth, which intuitively models hierarchies.
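
For reference, this is roughly how the Barnes-Hut variant is invoked in scikit-learn today; the random input below is purely illustrative.

    import numpy as np
    from sklearn.manifold import TSNE

    # Illustrative data: 1,000 points in 50 dimensions.
    X = np.random.RandomState(0).rand(1000, 50)

    # method="barnes_hut" selects the O(N log N) approximation; angle is the
    # Barnes-Hut trade-off (lower is more accurate, higher is faster).
    embedding = TSNE(n_components=2, method="barnes_hut", angle=0.5,
                     perplexity=30, random_state=0).fit_transform(X)
    print(embedding.shape)  # (1000, 2)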

LDA2Vec was an early attempt at building interpretable methods: the key idea was to extend word2vec to document embeddings, but give those document embeddings a new loss function derived from Dirichlet distributions so they read like sparse topic mixtures. Separately, I had fun fitting 3D body-shape meshes from multiple frames to try to solve try-at-home body sizing.
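
A minimal sketch of that Dirichlet term in PyTorch (the function and parameter names here, like doc_topic_logits, alpha, and lam, are illustrative; in LDA2Vec this term is added to the usual skip-gram negative-sampling loss):

    import torch
    import torch.nn.functional as F

    def dirichlet_loss(doc_topic_logits, alpha=0.7, lam=1.0):
        # Turn per-document topic logits into mixture proportions p_jk.
        proportions = F.softmax(doc_topic_logits, dim=-1)
        # Dirichlet log-likelihood term: (alpha - 1) * sum_k log p_jk.
        # With alpha < 1, dense mixtures are penalized, so each document
        # concentrates on a few topics and stays interpretable, like LDA.
        log_p = torch.log(proportions + 1e-12)
        return -lam * ((alpha - 1.0) * log_p).sum(dim=-1).mean()

    # Toy usage: 8 documents, 20 topics; add this to the word-embedding loss.
    logits = torch.randn(8, 20, requires_grad=True)
    dirichlet_loss(logits).backward()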


Products

I have created many side projects, all revolving around strong, data-driven feedback loops. To get there, I've built engaging full-stack apps with game-like frontends, scalable data backends, and model deployment and logging infrastructure.

Corner Champ was an app for learning crowd-sourced home valuations, addressing the ~5% accuracy gap in Zillow's estimates that matters for home buyers placing offers.

Hype the Like served as "Tinder for Product Analytics": a Shopify-compatible tool enabling zero-risk product exploration by showing merchants which items to stock before investing in inventory.

Style Tyles offered Stitch Fix clients a sneak peek at potential stylist selections, allowing them to create personalized style boards through an interactive picking experience.


Conferences

I've spoken at numerous machine learning and AI conferences, sharing insights on recommendation systems, NLP, and scalable machine learning architectures.


Writing

I've written extensively about natural language processing and machine learning techniques, contributing to the technical blog at Stitch Fix and sharing research insights. My most popular posts include "A Word is Worth a Thousand Vectors", which explores Word2Vec embeddings, and "LDA2Vec: Mixing Dirichlet Topic Models and Word Embeddings", introducing a novel approach to combining topic modeling with word embeddings.

I've also written about advanced techniques in "Word Tensors", exploring higher-order word embedding methods, and provided practical guidance in "Stop Using Word2Vec", discussing when and why to move beyond traditional word embedding approaches.