Liquid AI, co-founded by renowned robotics expert Daniela Rus and a team of researchers from MIT, has developed a new AI model called the Liquid Foundation Model (LFM), powered by liquid neural networks. This architecture promises greater efficiency and adaptability than traditional transformer-based AI models.
Liquid AI recently emerged from stealth mode, securing $37.5 million in seed funding from investors such as OSS Capital, Automattic, and angel investors like GitHub's Tom Preston-Werner and Shopify's Tobias Lütke. This backing values Liquid AI at $303 million. The founding team includes Ramin Hasani (CEO), Mathias Lechner (CTO), and Alexander Amini (Chief Scientific Officer), all of whom have made significant contributions to the development of liquid neural networks.
Unlike traditional models, liquid neural networks are designed with flexibility in mind. Inspired by the nervous system of simple organisms such as the roundworm C. elegans, these networks consist of neurons governed by differential equations that describe their behavior over time. The "liquid" aspect refers to the model's ability to adapt its dynamics to new environments or changing conditions, offering more flexibility than static models like GPT-3 or GPT-4.
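The idea of neurons governed by time-dependent equations can be illustrated with a single liquid time-constant (LTC) neuron, the building block described in the research of Hasani, Lechner, and colleagues. This is a minimal sketch for intuition, not Liquid AI's actual implementation: the gate parameters (`w`, `b`), the sigmoid nonlinearity, and the simple Euler integration step are assumptions chosen for clarity.

```python
import math

def ltc_step(x, i_t, dt=0.01, tau=1.0, w=1.0, b=0.0, a=1.0):
    """One Euler step of a single liquid time-constant neuron.

    State follows  dx/dt = -x/tau + f(I) * (a - x),
    where f is a bounded nonlinearity of the input. Because the
    input-dependent term scales the decay of x, the effective time
    constant shifts with the data -- the "liquid" behavior.
    """
    f = 1.0 / (1.0 + math.exp(-(w * i_t + b)))  # sigmoid gate in (0, 1)
    dx = -x / tau + f * (a - x)
    return x + dt * dx

# Drive the neuron with a step input and watch the state adapt:
# zero input for the first half, then a constant input of 1.0.
x = 0.0
trace = []
for t in range(200):
    i_t = 0.0 if t < 100 else 1.0
    x = ltc_step(x, i_t)
    trace.append(x)
```

Because the state is continuous in time, the same neuron keeps integrating whatever sequence it is fed, which is what allows these networks to react to changing inputs rather than to a single static snapshot.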
Liquid neural networks bring several key advantages over traditional AI models. One of the most compelling features is their smaller size and reduced computational requirements. For instance, while GPT-3 contains 175 billion parameters, a liquid neural network designed for drone navigation might require only 20,000 parameters and 20 neurons. This drastic reduction in complexity allows liquid neural networks to perform well on resource-constrained devices like the Raspberry Pi, making them suitable for edge deployments such as mobile applications, autonomous vehicles, and drones.
In addition to their compact size, liquid neural networks are highly interpretable. With fewer parameters, it’s easier to understand the function of individual neurons, which enhances transparency — a crucial aspect in industries requiring high accountability, such as healthcare and finance.
Perhaps the most significant advantage of liquid neural networks is their ability to learn and adapt in real time. While most AI models process static snapshots of data, liquid neural networks analyze sequences of data, allowing them to adjust to new information dynamically. This makes them particularly useful in unpredictable environments, such as changing weather conditions for autonomous driving or navigating unfamiliar terrains.
Liquid neural networks have already demonstrated impressive performance across various fields. In 2022, Rus and her team tested liquid neural networks on drones, training them using data from a professional drone pilot. The models outperformed traditional algorithms in navigating complex outdoor environments, such as dense forests and city neighborhoods, even in noisy and unpredictable conditions. Notably, the liquid neural network was the only model that could generalize to new, unseen scenarios without requiring additional fine-tuning.
These results highlight the potential applications of liquid neural networks in areas like search and rescue, wildlife monitoring, and autonomous deliveries. Beyond these use cases, liquid neural networks are poised to revolutionize industries that rely on analyzing fluctuating data over time, such as financial markets, electric power grids, and medical diagnostics.
Liquid AI has introduced three LFM variants at launch:
- LFM-1B: A dense model with 1.3 billion parameters optimized for resource-constrained environments.
- LFM-3B: A mid-tier model with 3.1 billion parameters, ideal for edge deployments like mobile devices and drones.
- LFM-40B: A high-capacity model with 40.3 billion parameters for complex applications on cloud servers.
These models offer state-of-the-art performance while requiring significantly less memory compared to transformer-based models. For example, the LFM-3B model uses only 16 GB of memory, whereas similar-sized models, such as Meta's Llama, require over 48 GB. This efficiency makes LFMs particularly attractive for businesses looking to integrate AI into applications that need to run on low-resource hardware.
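A back-of-envelope calculation shows roughly how parameter count translates into memory. The sketch below computes only the space needed to hold the weights at a given numeric precision; the 16 GB and 48 GB figures quoted above are runtime figures that also include inference-time overhead such as activations and context caches, so raw weight size is just a lower bound. The precision choices (fp16, int8) are illustrative assumptions, not Liquid AI's published configuration.

```python
def weight_memory_gb(n_params, bytes_per_param):
    """Rough lower bound: memory needed to hold the weights alone."""
    return n_params * bytes_per_param / 1024**3

# Parameter counts from the three LFM variants listed above.
for name, n in [("LFM-1B", 1.3e9), ("LFM-3B", 3.1e9), ("LFM-40B", 40.3e9)]:
    fp16 = weight_memory_gb(n, 2)  # 16-bit floats: 2 bytes per parameter
    int8 = weight_memory_gb(n, 1)  # 8-bit quantized: 1 byte per parameter
    print(f"{name}: ~{fp16:.1f} GB at fp16, ~{int8:.1f} GB at int8")
```

At fp16, the 3.1-billion-parameter model's weights alone come to roughly 5.8 GB, which is why the gap between a 16 GB and a 48 GB runtime footprint for similar-sized models comes down to how efficiently each architecture handles everything beyond the weights.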