Case Study

How Tidalflow delivers innovative GenAI-powered apps with Orq.ai

Discover how Orq.ai’s platform helps Tidalflow accelerate its product’s time-to-market and deliver reliable LLM-based features for its growing user base.

Industry: Health care

Use Case: AI companion

Employees: 7

Location: The Netherlands

Company Overview

Tidalflow is an Amsterdam-based AI app studio making expert health guidance accessible to everyone. Its first product, Lila, helps women take control of (peri)menopause, tackling fatigue, weight gain, and other symptoms without hormone therapy. Thousands of women already trust Lila to reclaim their energy and health and feel like themselves again, because every woman deserves to thrive, not just cope.

3x

Easier Prompt Engineering

4x

Faster Time to Market

2x

Better Team Collaboration

Challenges

Managing LLM-based features from a product’s codebase is a nightmare

Kyle Kinsey

Founding Engineer @ Tidalflow

"Since we were building using OpenAI's API, we had to manage all parameters, token,
temperature, and prompts in our codebase. This was a real headache since we also had to figure out
a framework to structure everything and, at the same time, maintain that framework as we scaled."

"Since we were building using OpenAI's API, we had to manage all parameters, token, temperature, and prompts in our codebase. This was a real headache since we also had to figure out a framework to structure everything and, at the same time, maintain that framework as we scaled."

"Since we were building using OpenAI's API, we had to manage all parameters, token, temperature, and prompts in our codebase. This was a real headache since we also had to figure out a framework to structure everything and, at the same time, maintain that framework as we scaled."

Tidalflow’s first step into LLM software development was building a React Native app on top of OpenAI’s API. The team quickly realized that managing an LLM-powered solution this way wasn’t scalable: parameters, token usage, and message creation all had to be handled manually in the product’s backend, and every change required a full backend redeploy. As a result, iterating on prompts and adjusting LLM configurations became a slow and cumbersome process.
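To make the pain point concrete, here is a minimal sketch of the kind of hard-coded setup described above, assuming a Node/TypeScript backend that calls OpenAI directly; the model, prompt text, and helper name are illustrative, not Tidalflow's actual code.

```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// The prompt and every generation parameter live in the backend source,
// so changing any of them means another backend deployment.
const SYSTEM_PROMPT =
  "You are Lila, a supportive health companion. Keep answers short and practical.";

export async function askLila(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // hard-coded model choice
    temperature: 0.7,     // hard-coded sampling parameter
    max_tokens: 300,      // hard-coded token budget
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```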

Frequent Backend Deployments

Tidalflow’s team had to redeploy their backend every time they needed to adjust LLM configurations, prompts, or parameters. This made even minor changes time-consuming and slowed down the team’s iterative workflow.

Hard-Coded Prompts

Storing prompts directly in the backend made it hard to manage and adjust them quickly. Fine-tuning responses, adapting to user feedback, and iterating on AI behavior all required heavy engineering involvement.

Tidalflow needed to move fast to stay ahead of the competition and reach its ambitious growth targets. Not being able to experiment quickly and improve the product was an obstacle that threatened to stunt growth in the short term.

Slow Iterations

Without a streamlined way to experiment with prompts and configurations, Tidalflow’s team faced long feedback loops. Every change required engineering effort, making it difficult to rapidly iterate based on user feedback and performance.

Variable Input Management

Manually managing token usage, system messages, and user inputs within the backend made handling dynamic variables complex. This made it challenging to control and maintain the performance of LLMs within their app.

Solution

Orchestrating LLMs with Orq.ai

Kyle Kinsey

Founding Engineer @ Tidalflow

"At first, I started building a system in-house that would log our API calls through OpenAI. But it took way too much time and made it difficult to focus on scaling Lila AI. Our search for an alternative led us to Orq.ai. "

"Because Orq.ai is an end-to-end platform to integrate GenAI into products, we can focus fully on building and scaling Lila AI, which is what’s most important to us. "

Tidalflow realized they needed robust tooling to scale their LLM-powered app. Initially, they tried developing an in-house solution compatible with their product’s backend to log API calls and manage key aspects of LLM orchestration, including memory management, variable storage, retry mechanisms, and output routing. However, after three weeks of development, they abandoned the idea — it was too tedious to build and would demand significant ongoing maintenance on top of their existing LLM-powered products. After searching for tooling, they chose Orq.ai to manage end-to-end LLM orchestration and decouple prompt engineering from their codebase. Since then, Tidalflow credits Orq.ai for being a defining platform in their tech stack that has helped them accelerate time-to-market and scale GenAI functionalities in Lila AI.
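For a sense of what that in-house orchestration layer would have involved, below is a rough sketch of a hand-rolled wrapper covering retries, fallback routing, and call logging, assuming the same Node/TypeScript backend and OpenAI client; the function and model names are illustrative, and these are the concerns Orq.ai now handles for Tidalflow instead.

```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Retries, fallback routing, and logging are exactly the kind of plumbing the
// team would otherwise have had to build and maintain alongside the product.
export async function askWithFallback(
  userMessage: string,
  models: string[] = ["gpt-4o", "gpt-4o-mini"], // primary model plus a fallback
  maxRetries = 2,
): Promise<string> {
  for (const model of models) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        const completion = await client.chat.completions.create({
          model,
          messages: [{ role: "user", content: userMessage }],
        });
        // Manual call logging: model, attempt, and token usage.
        console.log(JSON.stringify({ model, attempt, usage: completion.usage }));
        return completion.choices[0]?.message?.content ?? "";
      } catch (err) {
        console.error(`Call failed (model=${model}, attempt=${attempt})`, err);
        await new Promise((r) => setTimeout(r, 500 * (attempt + 1))); // crude backoff
      }
    }
  }
  throw new Error("All models and retries exhausted");
}
```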

Kyle Kinsey

Founding Engineer @ Tidalflow

"We no longer have to build this whole other product to orchestrate LLMs – Orq.ai does that for us."

Structured Outputs

Template Variables

Deployments

Fallback Models

Logs

Experiments

Structured Outputs

With structured outputs, Tidalflow configures the AI models they use to generate output that always follows their JSON Schema. This helps shorten both the post-processing and error-handling time for their team.

Response Formats

JSON Mode

JSON Schema

OpenAI Models

Function Calling

Kyle Kinsey, Founding Engineer @ Tidalflow

"Structured outputs has become something we rely on. Since we always expect a certain output, it's really simplified our rollout."


SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

SDK & API

Prompt Management

RAG-as-a-Service

Deployment

LLM Evaluation

LLM Observability

LLM Orchestration

Simplify API management by orchestrating LLM capabilities into your application through one unified programming interface.

API Management

Client Libraries

Rate Limit Configs

Webhooks

Fallbacks & Retries

What’s Next?

While 2024 brought a lot for the team at Tidalflow, they are excited about 2025. Their top priority is to continue growing the base of active users for Lila AI.

In doing so, they plan to double down on the parts of the platform that users love and find innovative ways to deliver even more value. The team is eager to tackle these new challenges knowing they have a robust full-stack LLMOps platform in Orq.ai.

Kyle Kinsey

Founding Engineer @ Tidalflow

"Orq.ai saved us from having to build systems ourselves to orchestrate LLMs.
We're excited to continue using it to help us improve and scale Lila AI for our growing customer base."

The end-to-end platform for LLM app lifecycle management

Start building AI apps with Orq.ai

Take a 14-day free trial. Start building AI products with Orq.ai today.