Deep learning TL and investor. Currently building at Google Research and investing on the weekends with First Cap. Grad @ Stanford AI ('14) and YC (S16).
The world is full of places where thoughtful time or money can deliver 10x returns on happiness and efficiency. Many of the best entrepreneurs and CEOs also tend to think like investors (Stripe, etc.).
Started and help run First Cap, a private ~61-person investment group that has delivered strong returns for members. Investments have historically returned 30%-170% in 1-2 months with a max drawdown of 7%, and the group has been running for over two years through several macro periods. We're a bunch of nice graduates of top PhD programs (Stanford, MIT, etc.), successful entrepreneurs and financial experts building scalable, diversified investment strategies even our parents can invest in.
For angel investments, I'm a Venture Partner at Pioneer Fund and occasionally advise VCs like Zhenfund. Love meeting great people, especially those working in women's health, D2C health, genomics, deep RL and autonomy.
Investing your time as an entrepreneur can be much more scientific and efficient than it is today. I wrote a collection of articles to put first principles and empirical grounding behind growing yourself through early stage, which will hopefully increase the hit rate of the ecosystem (see Essays). I hope it's helpful to others starting out.
ML Research & Product
Despite the massive volume of papers today, there are still many underinvested areas as ML scales out. I especially enjoy those where we're just figuring out the machine learning needed to unlock a novel product spec (PhDs, meet PMs, in a world of uncertainty^2), and working across the stack (frontend, ML, data, backend) with a team through launch and beyond. Some projects:
Unlocking human potential via personalized language learning modules for job seekers, powered by AI. 1.2 years of hard ML x early product and engineering. Currently launched in Indonesia.
Keyword spotting, voice activity detection and large-vocabulary speech recognition used to be treated as separate tasks. We published a model that handles all three in a single architecture with less retraining, and that also happens to double VAD accuracy.
In the past, I've also published on speech recognition at Stanford (see Publications).
Supporting women through the menopause transition. Co-founded the company (thanks to Khosla for an incubating EIR opportunity), wrote the first website and mission statement, scaled the team from 2 to 5, and talked to a bunch of clients. This ran for several years with an even larger team, and helped attract significant attention to the space that continues to this day.
It used to be that we didn't know much about health between doctor's visits. Shipped the first ML models for continuous pulse tracking on the Baseline watch, letting 10,000 volunteers track their pulse across all activities, including through noise from running, walking and outdoor conditions (a Google X-backed project).
Art pricing has small datasets and an exponentially-scaled output space, which makes it challenging for the models found in much of the ML literature. Shipped a completely new neural architecture in a month that contributed a ~X0% relative increase in accuracy while requiring fewer features than prior models. It was adopted by Arthena, and its descendants were still in use in 2021.
Get in touch
Say hi on Twitter