Odyss nails its AI development
workflow with Orq.ai
Industry:
Use Case: Co-pilots
Employees: 5-10
Location: The Netherlands
Odyss is an AI development agency that builds custom co-pilots for companies looking to transform their operations with AI. Having built dozens of tailor-made AI solutions for companies like AstraZeneca, Farm Frites and Spacewell, Odyss works hand in hand with their clients to co-create complex LLM-powered technology for niche use cases.
Dependency bottlenecks block
teams from shipping AI faster
“We needed to separate our backend development from our frontend/low-code developers.”
— Philip Gast, Co-founder, Odyss
Before using Orq.ai, Odyss's back-end developer was the only person on the team who could work with the OpenAI API, so every user and system prompt change had to go through him. This slowed down his workflow and kept him from focusing on other tasks, and it meant that the front-end developers and non-technical teams could not independently experiment with different prompts.
Prompt
Because prompts were hard-coded into products, Odyss did not have a safe test environment where the team could rapidly test and experiment with user and system prompts.
Productivity
Odyss's back-end developer was constantly tied up with prompt management tasks. The team needed a way to free up his time for more strategic development work.
Workflows
To scale their operations, Odyss needed to speed up the team's entire Generative AI product development workflow and ship AI solutions to their clients faster.
Experiment
Odyss needed to improve the collaboration between its technical and non-technical teams so that everyone could test AI use cases and continuously refine AI products.
"We wanted a solution in which our back-end developer got an API that could be adjusted and changed by front-end/low-code developers. Orq perfectly bridged this gap."
— Philip Gast, Co-founder, Odyss
Odyss's search for a solution led the team to Orq.ai. Now the entire team can collaborate on Generative AI use cases and ship them to market fast. Where prompts were once managed solely by the back-end developer, Orq.ai's AI gateway lets front-end developers and domain experts independently experiment with, test, and adjust prompts from the front-end.
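The shift Odyss describes — moving prompts out of hard-coded back-end logic and into a shared, editable store — can be sketched as follows. This is an illustrative example only, not Orq.ai's actual SDK or API; the store, keys, and function names are hypothetical:

```python
import string

# Before: a prompt hard-coded in back-end source — every wording change
# requires a back-end developer and a redeploy.
HARD_CODED_SYSTEM_PROMPT = "You are a helpful co-pilot for ${domain}."

def fetch_prompt(key: str, store: dict) -> string.Template:
    """Simulate pulling the latest prompt version from a managed store.

    In a real setup this would be a call to a prompt-management platform;
    here it is a plain dict so the sketch stays self-contained.
    """
    return string.Template(store[key])

# After: "managed" prompts that front-end or non-technical teammates
# can edit without touching back-end code.
prompt_store = {
    "copilot-system": "You are a helpful co-pilot for ${domain}.",
}

template = fetch_prompt("copilot-system", prompt_store)
rendered = template.substitute(domain="logistics")
print(rendered)  # -> You are a helpful co-pilot for logistics.
```

The point of the pattern is that the prompt text lives behind a key, so updating it is a data change rather than a code change.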
Experiment
No-Code Prompt Management Tooling
Orq.ai enables Odyss's entire team to carry out end-to-end prompt management workflows. That way, front-end developers and non-technical teams do not have to depend on a back-end developer to work on AI use cases.
Workflows
Production Use Cases & API Deployments
Since Orq.ai has a prompt management tool for production use cases, Odyss's team can easily experiment with Generative AI hypotheses and take them to production, all in the same platform.
Security & Privacy
Robust Data Security
Since Orq.ai is SOC 2 compliant, Odyss can build Generative AI use cases for its clients within their own cloud and VPC environments.
AI Monitoring
Monitoring & Observability
Odyss also uses Orq.ai to view a log of all deployment data in one place, giving the team a single source of truth for monitoring the responses their AI generates.
Equipped with Generative AI-native tooling, Odyss can continue scaling its operations and building reliable, custom-made LLM-powered solutions for its clients. The team is excited about Orq.ai's product roadmap and upcoming features, which Odyss believes will provide greater control over large language models and further enhance the performance of its AI products.
"We love Orq's platform and recommend all our clients to build products with them. If anyone wants to migrate their existing code and prompts to Orq, we would be more than happy to assist."
— Thom van Lieshout, Co-founder, Odyss
Odyss loves Orq.ai's
platform features
Prompt Management
Full-cycle prompt management for reliable AI products.
Experiment
One place for teams to safely experiment with LLM prompts.
Knowledge Bases (RAG)
Optimize LLM output with custom RAG workflows.
Deploy
Safely bring complex LLM prompts to production.
Observe
End-to-end insights to power up your AI system.
Optimize
Out-of-the-box tooling to refine AI products.