Case Study

Odyss nails its AI development workflow with Orq.ai

Odyss needed to ship Generative AI products fast. Discover how Orq.ai’s no-code platform enables them to accelerate their LLMOps workflow and deliver reliable AI use cases for their clients.

Industry:

Software Development

Use Case:

Co-pilots

Employees:

5-10

Location:

The Netherlands

Company Overview

Odyss is an AI development agency that builds custom co-pilots for companies looking to transform their operations with AI. Having built dozens of tailor-made AI solutions for companies like AstraZeneca, Farm Frites and Spacewell, Odyss works hand in hand with their clients to co-create complex LLM-powered technology for niche use cases.

15+

AI products shipped

5x

Better collaboration

3x

Faster LLMOps workflows

Challenges

Dependency bottlenecks block teams from shipping AI faster

“We needed to separate our backend development from our frontend/low-code developers.”

— Philip Gast, Co-founder @ Odyss

Before using Orq.ai, Odyss's back-end developer was the only person on the team who could work with the OpenAI API. He had to make every user and system prompt change himself, which slowed down his workflow and kept him from focusing on other tasks. It also meant that the front-end developers and non-technical teams could not experiment with different prompts independently.

Prompt

Risky Workflows

Because prompts were hard-coded into their products, Odyss had no safe environment where the team could rapidly test and iterate on user and system prompts.

Productivity

Scarce Resources

Odyss’s back-end developer was constantly tied down by prompt management tasks. The team had to find a way to free up his time for more strategic development work.

Workflows

Time-to-Market

To scale their operations, Odyss needed to speed up the team's entire Generative AI product development workflow and ship AI solutions to their clients faster.

Experiment

Collaboration

Odyss needed to improve the collaboration between its technical and non-technical teams so that everyone could test AI use cases and continuously refine AI products.

Solution

The solution with Orq.ai

"We wanted a solution in which our back-end developer got an API that could be adjusted and changed by front-end/low-code developers. Orq perfectly bridged this gap."

— Philip Gast, Co-founder @ Odyss

Odyss's search for a solution led them to Orq.ai. Now, the entire team can work together on Generative AI use cases and ship them to market fast. Where prompts were previously managed solely by the back-end developer, Orq.ai's AI gateway lets front-end developers and domain experts independently experiment with, test, and adjust prompts from the front end.
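To make the pattern concrete, here is a minimal sketch of the change: a prompt that used to be hard-coded in back-end code is instead resolved through a managed gateway deployment, so prompt edits no longer require a back-end release. The endpoint URL, deployment key, and payload and response fields below are hypothetical placeholders for illustration, not Orq.ai's actual API.

```python
# Minimal sketch of decoupling prompts from back-end code via a gateway.
# NOTE: the endpoint URL, deployment key, and payload/response fields are
# hypothetical placeholders for illustration; they are not Orq.ai's actual API.
import os
import requests


def ask_copilot(question: str) -> str:
    """Call a managed deployment by key. Previously the system prompt was
    hard-coded here; now it lives in the platform, where front-end developers
    and domain experts can edit and test it without a back-end deploy."""
    resp = requests.post(
        "https://gateway.example.com/v1/deployments/invoke",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['GATEWAY_API_KEY']}"},
        json={
            "key": "client-copilot",            # deployment key managed in the platform UI
            "inputs": {"question": question},   # variables injected into the managed prompt
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]    # hypothetical response shape
```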

Experiment

Prompt Management Tooling

Orq.ai enables Odyss's entire team to carry out end-to-end prompt management workflows. That way, front-end developers and non-technical teams do not have to depend on a back-end developer to work on AI use cases.

Workflows

Production Use Cases & API Deployments

Since Orq.ai's prompt management tooling supports production use cases, Odyss's team can experiment with Generative AI hypotheses and take them to production, all in the same platform.

Security & Privacy

Robust Data Security

Since Orq.ai is SOC 2 compliant, Odyss can build Generative AI use cases for their clients within their own cloud and VPC environments.

AI Monitoring

Monitoring & Observability

Odyss also uses Orq.ai to view a log of all deployment data in one place. That way, the team can monitor the responses generated by their AI in a single source of truth.

What’s Next?

Equipped with Generative AI-native tooling, Odyss can continue scaling its operations and building reliable, custom-made LLM-powered solutions for its clients.

They are excited about Orq.ai’s product roadmap and the features that will be released soon.

Odyss believes these features will facilitate greater control over large language models and further enhance the performance of AI products.


"We love Orq's platform and recommend all our clients to build products with them. If anyone wants to migrate their existing code and prompts to Orq, we would be more than happy to assist."

— Thom van Lieshout, Co-founder @ Odyss

Platform

Odyss loves Orq.ai's platform features

Start building AI features with Orq.ai

Take a 14-day free trial. Start building AI features with Orq.ai today.
