Reka was founded in 2023 by Dani Yogatama, Yi Tay, and Che Zheng — researchers whose collective pedigree reads like a tour of the most important AI labs on the planet. Yogatama spent years at DeepMind working on language understanding and reasoning. Yi Tay was a senior researcher at Google Brain (later Google DeepMind), known for his work on efficient Transformers, scaling laws, and the UL2 unified language learner. Zheng brought deep engineering expertise from building large-scale systems. The founding thesis was straightforward but ambitious: the next generation of AI models shouldn't bolt on multimodal capabilities after the fact. Instead, they should be natively multimodal from the start — trained from the ground up to process text, images, video, and audio in a unified architecture. That conviction attracted early funding and a team of researchers who believed the "add vision later" approach used by most labs was fundamentally limiting.
The technical distinction Reka draws is between models that are "multimodal" because someone fine-tuned a vision encoder on top of a text model, and models that are multimodal because multiple modalities were woven into pre-training from day one. Their flagship models — Reka Core, Reka Flash, and the smaller Reka Edge — were designed to handle text, images, video, and audio natively. This isn't just a marketing claim; it shows up in capabilities like video understanding, where the model can reason about temporal sequences rather than just captioning individual frames. Reka Flash, their mid-size model, became notable for punching well above its weight on multimodal benchmarks, often matching or exceeding models with several times its parameter count. The team published its technical report in April 2024, showing competitive results against GPT-4V, Gemini Pro, and Claude 3 Sonnet across a range of tasks — a remarkable achievement for a company that was barely a year old.
Reka raised a $58 million Series A led by DST Global and Radical Ventures, with participation from SoftBank and notable angel investors. By AI lab standards, this was modest — the kind of money that buys a few months of serious GPU time, not the multi-billion-dollar war chests that OpenAI, Anthropic, and xAI have accumulated. Reka compensated by being unusually efficient: the team stayed small (under 30 people for much of its first year), the models were trained on carefully budgeted compute, and product shipped quickly. The company launched an API and a consumer-facing assistant called Reka Playground, but the real play has always been the models themselves — frontier-class multimodal AI for developers and enterprises who need more than text-only reasoning. Reka has also released its smaller models as open weights, following the familiar pattern of using open releases to build developer mindshare while keeping the most capable models proprietary.
In mid-2024, reports emerged that Snowflake was in advanced talks to acquire Reka for approximately $1 billion. The deal made strategic sense on both sides: Snowflake needed in-house AI capabilities to compete with Databricks (which had acquired MosaicML for $1.3 billion the year before), and Reka needed the distribution, compute resources, and enterprise relationships a major data platform could provide. For Reka's founders, an acquisition would have offered a path to deploying their multimodal models at massive scale inside Snowflake's data cloud, where customers already store the unstructured data — images, documents, video — that multimodal models are uniquely suited to process. Subsequent reports indicated the talks ended without a completed deal, but the episode underscored a broader trend in AI: standalone research labs, no matter how talented, face enormous pressure either to raise billions independently or to find a strategic home where their technology can reach customers without burning through capital on go-to-market. Reka's story, from founding to billion-dollar acquisition talks in roughly eighteen months, remains one of the fastest trajectories in AI company history.