Case Study

How Tidalflow delivers innovative GenAI-powered apps with Orq.ai

Discover how Orq.ai’s platform helps Tidalflow accelerate its product’s time-to-market and deliver reliable LLM-based features for its growing user base.


Company Overview

Tidalflow is an Amsterdam-based AI app studio making expert health guidance accessible to everyone. Its first product, Lila, helps women take control of (peri)menopause, tackling fatigue, weight gain, and other symptoms without hormone therapy. Thousands of women already trust Lila to reclaim their energy and health and feel like themselves again, because every woman deserves to thrive, not just cope.

Challenges

Managing LLM-based features from a product’s codebase is a nightmare


Kyle Kinsey

Founding Engineer @ Tidalflow

"Since we were building using OpenAI's API, we had to manage all parameters, token, temperature, and prompts in our codebase. This was a real headache since we also had to figure out a framework to structure everything and, at the same time, maintain that framework as we scaled."

Tidalflow’s first step into LLM software development was building a React Native app on OpenAI’s API. The team quickly realized that managing an LLM-powered solution this way wasn’t scalable: parameters, token usage, and message creation were all handled manually in the product’s backend, so every change, however small, required a backend redeploy. As a result, iterating on prompts and adjusting LLM configurations became a slow, cumbersome process.
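The pattern described above can be sketched as follows. The model name, parameters, and prompt text here are illustrative placeholders, not Tidalflow’s actual values:

```python
# Hard-coded LLM configuration: changing the prompt, model, or any
# parameter below requires editing code and redeploying the backend.
HARD_CODED_CONFIG = {
    "model": "gpt-4o",      # pinned in code
    "temperature": 0.7,     # tuned by hand; redeploy to change
    "max_tokens": 512,
}

# Illustrative system prompt, baked into the codebase.
SYSTEM_PROMPT = (
    "You are Lila, a supportive health coach. "
    "Answer questions about menopause symptoms."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion request body from hard-coded values."""
    return {
        **HARD_CODED_CONFIG,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```

Because every value in this sketch lives in code, tuning the temperature or rewording the prompt means another deploy.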

Frequent Backend Deployments


Tidalflow’s team had to redeploy their backend every time they adjusted LLM configurations, prompts, or parameters. Even minor changes became time-consuming, slowing down development and iteration.

Hard-Coded Prompts


Storing prompts directly in the backend made them hard to manage and adjust quickly. As a result, fine-tuning responses, adapting to user feedback, and iterating on AI behavior all required heavy engineering involvement.

Slow Iterations


Without a streamlined way to experiment with prompts and configurations, Tidalflow’s team faced long feedback loops. Every change required engineering effort, making it difficult to rapidly iterate based on user feedback and performance.

Variable Input Management


Token usage, system messages, and user inputs were all managed manually in the backend, which made handling dynamic variables complex and LLM performance within the app hard to control and maintain.

Solution

Orchestrating LLMs with Orq.ai


Kyle Kinsey

Founding Engineer @ Tidalflow

"At first, I started building a system in-house that would log our API calls through OpenAI. But it took way too much time and made it difficult to focus on scaling Lila AI. Our search for an alternative led us to Orq.ai."

Tidalflow realized they needed robust tooling to scale their LLM-powered app. Initially, they tried developing an in-house solution compatible with their product’s backend to log API calls and manage key aspects of LLM orchestration, including memory management, variable storage, retry mechanisms, and output routing. However, after three weeks of development, they abandoned the idea — it was too tedious to build and would demand significant ongoing maintenance on top of their existing LLM-powered products. After searching for tooling, they chose Orq.ai to manage end-to-end LLM orchestration and decouple prompt engineering from their codebase. Since then, Tidalflow credits Orq.ai for being a defining platform in their tech stack that has helped them accelerate time-to-market and scale GenAI functionalities in Lila AI.

Kyle Kinsey

Founding Engineer @ Tidalflow

"The response from our team was overwhelming. You go to Orq.ai, you change the prompt, and boom – it’s already in production. It’s amazing to have this flexibility."

Structured Outputs

Template Variables

Deployments

Fallback Models

Logs

Experiments

Structured Outputs

With structured outputs, Tidalflow configures its AI models to generate output that always conforms to a predefined JSON Schema, shortening both post-processing and error handling for the team.

Response Formats

OpenAI Models

JSON Mode

Function Calling

JSON Schema

Kyle Kinsey, Founding Engineer @ Tidalflow

"Structured outputs has become something we rely on. Since we always expect a certain output, it's really simplified our rollout."
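A minimal sketch of this setup, assuming OpenAI-style structured outputs (the schema below is a hypothetical example, not Tidalflow’s actual one):

```python
import json

# Hypothetical JSON Schema for a structured model response. Passed as a
# `response_format`, it constrains the model to emit matching JSON only.
RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "name": "symptom_checkin",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "symptom": {"type": "string"},
                "severity": {"type": "integer"},
                "advice": {"type": "string"},
            },
            "required": ["symptom", "severity", "advice"],
            "additionalProperties": False,
        },
    },
}

def parse_response(raw: str) -> dict:
    """Parse a model reply and verify the keys the schema guarantees."""
    data = json.loads(raw)
    required = RESPONSE_FORMAT["json_schema"]["schema"]["required"]
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"response missing keys: {missing}")
    return data
```

Because the model’s output is constrained to match the schema, the parsing step above rarely hits its error branch, which is what shortens post-processing and error handling.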


What’s Next?


Kyle Kinsey

Founding Engineer @ Tidalflow

"Orq.ai saved us from having to build systems ourselves to orchestrate LLMs. We're excited to continue using it to help us improve and scale Lila AI for our growing customer base."

The end-to-end platform for LLM app lifecycle management


SDK & API

Prompt Management

Experimentation

LLM Evaluation

LLM Observability

AI Gateway

RAG

Deployments


Create an account and start building today.
