Rewind AI: Shaping the Future of Personalized Artificial Intelligence
If deep tech is any new technology that gives people superpowers, large language models certainly qualify. With their capacity to process enormous amounts of data, recognize complex patterns, and surface valuable insights, these models have the potential to give humans information-processing superpowers.
Rewind AI leverages large language models to help you remember everything you’ve seen, said, or heard before. Founded by Dan Siroker, Brett Bejcek, and Paul Stamatiou in 2020, Rewind AI raised a $10M seed round in late 2022 from Andreessen Horowitz, First Round Capital, and Vela Partners, and a $12M Series A at a $350 million valuation in summer 2023 from New Enterprise Associates, after an unconventional public fundraising campaign.
Learn more about the future of personalized artificial intelligence from our interview with the co-founder and CEO, Dan Siroker:
Why Did You Start Rewind AI?
Back in my 20s, my hearing deteriorated to the point that I needed a hearing aid. Although my natural hearing was gone, using the hearing aid felt like gaining a superpower. As humans, we’re constrained by the biological matter we’re made of: oftentimes we don’t appreciate enough the superpowers we already have, and we don’t know what superpowers technology could give us.
Since that moment when I got the hearing aid, I’ve been on the hunt for technologies that augment human superpowers. In particular, I wondered: what if there were an equivalent for human memory? Human memory gets overwritten by new experiences and generally worsens over time, so even after just a week, something like 90% of memories are lost. What if we could stop that? The answer is Rewind AI, a personalized AI to help you remember everything you’ve read, seen, and experienced.
How Does Personalized AI Work?
Rewind AI captures everything you type, say, or hear. We compress and store all the data and then use large language models to retrieve information and put it into context. For example, before this interview, I asked Rewind AI, “How do I know Benjamin?” And it reminded me of how we got in touch.
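The capture-store-retrieve loop described above can be sketched roughly as follows. This is a hypothetical illustration, not Rewind’s actual implementation: the `Capture`/`MemoryStore` classes, the keyword-overlap scoring (standing in for real semantic search), and the prompt template are all assumptions for the sake of the example.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Capture:
    timestamp: datetime
    source: str  # e.g. "screen", "microphone"
    text: str

@dataclass
class MemoryStore:
    captures: list = field(default_factory=list)

    def add(self, capture: Capture) -> None:
        self.captures.append(capture)

    def search(self, query: str, top_k: int = 3) -> list:
        # Naive keyword-overlap scoring stands in for real semantic search.
        terms = set(re.findall(r"\w+", query.lower()))
        scored = []
        for c in self.captures:
            overlap = len(terms & set(re.findall(r"\w+", c.text.lower())))
            if overlap > 0:
                scored.append((overlap, c))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [c for _, c in scored[:top_k]]

def build_prompt(query: str, store: MemoryStore) -> str:
    # Retrieved captures become the only context the model answers from.
    context = "\n".join(
        f"[{c.timestamp:%Y-%m-%d}] ({c.source}) {c.text}"
        for c in store.search(query)
    )
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Asking `build_prompt("How do I know Benjamin?", store)` would pull only the captures that mention Benjamin into the prompt, which is the “very specific context” idea described below.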
We’re primarily using two large language models, GPT-4 by OpenAI and Claude by Anthropic, as these are the two most advanced and useful for us. We’re not fine-tuning these models but instead provide very specific context in the prompts. Training these models has been an enormous effort, so I expect we’ll see only a few giant, general-purpose models in the future. However, there will be many smaller models for more specific tasks. For example, we use Whisper, a language model developed and open-sourced by OpenAI, to turn audio into words.
As computing hardware continues to advance, it is becoming increasingly easy to run language models locally. Apple Silicon, with its M1 and M2 chips, has already made a leap forward. With the more powerful M2 Ultra chip, you can already run large language models smoothly on your Mac. I think the future of large language models is hosting them locally.
We just did a demo at the Intel Innovation 2023 conference, where I shared the stage with Intel CEO Pat Gelsinger and demonstrated Rewind AI on Windows, running Llama 2 entirely locally. You can watch the demo on our YouTube channel. Right now, the quality is not good enough, but a major inflection point will come when open-source models become as good as GPT-4 and can be run locally, which has many advantages for performance and privacy.
The main challenge right now is clever prompt engineering: it’s amazing how much you can achieve with that. Prompt engineering opens a whole new field, as large language models behave non-deterministically, unlike traditional code. It takes a bit of trial and error, but once you get the prompting right, it works beautifully.
Large language models are trained to reason, not to remember facts. So we’re not relying on knowledge that the large language models might have picked up during training. We include the facts right in the prompt and tell the large language model to refer to the sources. This way, it can’t hallucinate or make things up, as all the necessary facts are contained in the prompt. It’s all about how we provide the facts and prompt the large language model to take raw, unstructured data and turn it into useful answers.
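The grounding approach described above can be sketched as a prompt builder. This is a hypothetical sketch, not Rewind’s actual prompt: the function name, the `(source_id, text)` fact format, and the exact wording of the instructions are assumptions.

```python
def grounded_prompt(question, facts):
    """Build a prompt whose only knowledge source is the supplied facts.

    facts: list of (source_id, text) pairs. The instructions tell the
    model to cite fact numbers and to admit when the answer is not
    present, which makes fabrication much harder.
    """
    numbered = "\n".join(f"[{source_id}] {text}" for source_id, text in facts)
    return (
        "Answer the question using ONLY the numbered facts below, and cite "
        "fact numbers like [1]. If the facts do not contain the answer, "
        "reply exactly: I don't know.\n\n"
        f"Facts:\n{numbered}\n\n"
        f"Question: {question}"
    )
```

Because every fact arrives with a source ID, the model’s answer can be traced back to the capture it came from instead of to whatever it absorbed during training.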
Going from GPT-3.5 to GPT-4 has been a game changer for us, as the context window of the large language model grew a lot. It can now process longer inputs and return 100 results at once instead of just 10. Also, Claude’s extremely long context window of 100K tokens has been a big breakthrough, making it much easier to process long meeting notes. It is likely that we won’t need much larger context windows, as window size is a tradeoff with other performance metrics and costs.
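Staying inside a fixed context window means budgeting: a minimal sketch, assuming retrieved snippets arrive ordered most-relevant-first and using the rough four-characters-per-token rule of thumb rather than a real tokenizer. The function name and defaults are illustrative, not from Rewind.

```python
def fit_to_context(snippets, max_tokens=100_000, chars_per_token=4):
    """Keep leading snippets whose rough token estimate fits the window.

    Assumes snippets are ordered most-relevant-first. The four-characters-
    per-token ratio is a common rule of thumb for English text, not an
    exact tokenizer count.
    """
    budget_chars = max_tokens * chars_per_token
    kept, used = [], 0
    for snippet in snippets:
        cost = len(snippet)
        if used + cost > budget_chars:
            break  # the window is full; drop the remaining snippets
        kept.append(snippet)
        used += cost
    return kept
```

With a 100K-token window like Claude’s, long meeting transcripts fit whole; with a smaller window, the least relevant snippets are simply dropped.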
Once a new user signs up, Rewind starts from a blank page, but typically, after a couple of days, we have enough data to provide useful answers. Having more user data is all about having context. We don’t train on user data, but having more data—not just text but images and audio from in-person conversations—helps us understand what has been captured and how it is connected.
How Did You Evaluate Your Startup Idea?
I started by building the product I wanted to have that could solve an actual problem for me. My validation comes from other people finding our product genuinely helpful in retaining memory, not because some consultants said the market is big and promising. We’re not trying to beat an incumbent but rather create a new category, solving a previously unsolved problem. So it all comes down to having a deep understanding of the problem. Deep within me, I wanted to improve my memory and felt that there would be a business around it.
Most founders wait too long before they launch. We launched the first version painfully early. There’s a certain percentage of the market, your early adopters, who love even an early version of your product and are willing to tolerate bugs. If these early adopters don’t exist, you’re not working on a real problem, and the risk of not working on a real problem far outweighs the risk of turning people off with bugs early on.
It’s okay if bugs turn some people off: they’re not your early adopters. Find the first few hundred people who really love your product and focus on doing a few things really well for them!
For us, this meant focusing on just one or two features initially. We didn’t capture meetings or implement payments from the start; we focused on text and on Apple users with Macs, since the M1 and M2 chips provided plenty of processing power.
From there, we expanded the product, closely monitoring our annual recurring revenue as the north star metric. It’s easy to fool yourself into thinking you’re solving a problem that isn’t one. Ultimately, what matters is that someone is willing to pay for the solution. Avoid vanity metrics that look good even when the business is doing poorly, or that become an end in themselves.
What Advice Would You Give Fellow Deep Tech Founders?
My best advice is to build something people want. It may sound trivial, but it’s really not. Most people talk to their potential customers in totally the wrong way. Read the book The Mom Test; it’s great. It will help you better understand what people care about so that you build something they want. Focus most of your time on understanding the problem instead of building solutions in search of a problem.