Industry:
Use Case: AI companion
Employees: 7
Location: The Netherlands
Tidalflow is an Amsterdam-based AI app studio making expert health guidance accessible to everyone. Its first product, Lila, helps women take control of (peri)menopause — tackling fatigue, weight gain, and other symptoms without hormone therapy. Thousands of women already trust Lila to reclaim their energy, rebuild their health, and feel like themselves again — because every woman deserves to thrive, not just cope.

Tidalflow’s first step into LLM software development was building a React Native app on OpenAI’s API. They quickly realized that managing an LLM-powered product this way wasn’t scalable: the team had to handle parameters, token usage, and message creation manually in the product’s backend, which made iterating on prompts and adjusting LLM configurations a slow and cumbersome process.
Every adjustment to LLM configurations, prompts, or parameters required a backend redeploy, turning even minor changes into a time-consuming exercise that slowed the team’s iterative workflow.
Without a streamlined way to experiment with prompts and configurations, the team faced long feedback loops. Every change demanded engineering effort, making it difficult to iterate rapidly on user feedback and performance.
Manually managing token usage, system messages, and user inputs in the backend made handling dynamic variables complex, and made it challenging to control and maintain the performance of LLMs within the app. The sketch below illustrates the pattern.
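To make the problem concrete, here is a minimal sketch of the hardcoded pattern described above, assuming a Node.js/TypeScript backend and the official openai package; the prompt text, model choice, and parameter values are illustrative assumptions, not Tidalflow’s actual code. Because everything lives in the source file, any tweak to wording or parameters forces a backend redeploy.

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Prompt text and parameters are baked into the codebase, so every tweak
// (wording, temperature, token budget) means editing and redeploying.
const SYSTEM_PROMPT =
  "You are Lila, a supportive coach for women navigating (peri)menopause."; // illustrative

export async function askLila(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",   // model choice hardcoded
    temperature: 0.7,  // sampling parameters hardcoded
    max_tokens: 512,   // token budget managed by hand
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0].message.content ?? "";
}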
Orchestrating LLMs with Orq.ai

Tidalflow realized they needed robust tooling to scale their LLM-powered app. Initially, they tried building an in-house solution compatible with their product’s backend to log API calls and manage key aspects of LLM orchestration: memory management, variable storage, retry mechanisms, and output routing. After three weeks of development, they abandoned the idea — it was too tedious to build and would have demanded significant ongoing maintenance on top of their existing LLM-powered products. After evaluating the available tooling, they chose Orq.ai to manage end-to-end LLM orchestration and decouple prompt engineering from their codebase. Since then, Tidalflow credits Orq.ai as a defining platform in their tech stack, one that has helped them accelerate time-to-market and scale GenAI functionality in Lila AI.
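By contrast, once prompts and parameters are managed in an orchestration platform, the backend only references a deployment by key. The sketch below is a hypothetical illustration of that decoupling, not Orq.ai’s actual SDK: the invokeDeployment helper, endpoint URL, deployment key, and response shape are all invented for this example; see Orq.ai’s documentation for the real API. With this shape, prompt wording, model choice, and parameters can change in the platform without touching or redeploying backend code.

// Hypothetical client for a managed-deployment API; illustrative only,
// not Orq.ai's actual SDK. Consult its documentation for the real calls.
async function invokeDeployment(
  key: string,
  inputs: Record<string, string>
): Promise<string> {
  const res = await fetch("https://api.example.com/v2/deployments/invoke", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.ORCHESTRATOR_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ key, inputs }),
  });
  const data = await res.json();
  return data.output; // response shape assumed for illustration
}

// The backend no longer contains prompt text or model parameters:
// it supplies the dynamic variables and lets the platform resolve the rest.
export async function askLila(userMessage: string): Promise<string> {
  return invokeDeployment("lila-chat", { user_message: userMessage });
}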

"We no longer have to build this whole other product to orchestrate LLMs – Orq.ai does that for us."
Kyle Kinsey
Founding Engineer @ Tidalflow
While 2024 brought a lot for the team at Tidalflow, they are excited for 2025. Their top priority is to keep growing Lila AI's base of active users.
In doing so, they plan to double down on the parts of the platform that users love and find innovative ways to deliver even more value to users. The team is eager to tackle these new challenges knowing that they have a robust full-stack LLMOps platform in Orq.ai.

"Orq.ai saved us from having to build systems ourselves to orchestrate LLMs. We're excited to continue using it to help us improve and scale Lila AI for our growing customer base."
Kyle Kinsey
Founding Engineer @ Tidalflow