Case Study

How Quin accelerates AI-powered healthcare with Orq.ai

Quin rebuilt their patient support chatbot in-house, cutting deployment cycles and enabling product managers to experiment alongside engineers.

Quin

Quin is a Dutch healthcare technology company improving patient access to primary care through AI-powered solutions that help general practices manage patient intake more efficiently.

Industry:

HealthTech

Use Case:

Patient Support Chatbot

Employees:

51-200

Location:

The Netherlands

Key outcomes

Impact at a glance

10X

Better collaboration

5X

Better AI iterations

4X

Faster time to market

Company Overview

Quin is fighting to improve healthcare access in the Netherlands on two fronts: primary care (patient to general practice) and secondary care (general practice to specialist). On the primary care side, they're tackling a critical problem: general practices, especially in cities, face overwhelming inflows while GP availability remains limited. Their AI-powered chatbot helps patients get the assistance they need while reducing pressure on practice phone lines and consultation queues.

Challenges

Low-code limitations were blocking iteration and creating deployment anxiety

Quin's first chatbot was built on a low-code platform designed for embedding conversational interfaces in websites. While it helped them validate product-market fit quickly, the platform's constraints became clear as the team scaled their AI capabilities.

The core issue was testing. The low-code system made it difficult to run offline and online evaluations, turning every deployment into a nerve-wracking event. "It was always an exciting time to click on the publish button because you never really knew," explains Gareth Steyn, Principal Software Engineer for Applied GenAI at Quin. "You tried your best to test things, but it felt like crossing your fingers and hoping nothing broke."

The team built their own eval scaffolding to address this, but the solution was tightly coupled to the low-code platform. When the vendor changed APIs, which happened frequently, the entire eval harness would fail. "All of a sudden we'd get 100 eval failures, but that wasn't possible," Gareth recalls. "They were false positives caused by API changes, not actual problems with our prompts."

Beyond testing, the platform limited collaboration. Product managers couldn't experiment with prompts or run tests independently. Every change required engineering intervention, creating bottlenecks in an area where rapid iteration should be the norm.

The team knew they needed to bring development in-house, but they didn't want to rebuild basic infrastructure. "I didn't want us to waste time building things we could take off the shelf," Gareth says. "Things like RAG, monitoring, observability, proper logging - all the LLMOps pieces you need when building AI solutions."

Quin had evaluated Orq.ai two years earlier but chose the faster low-code path for initial validation. Now, with product-market fit proven and technical limitations mounting, it was time to revisit the decision.

Gareth Steyn

Principal Software Engineer - Applied GenAI

It was always an exciting time to click on the publish button because you never really knew. You tried your best to test things, but it felt like crossing your fingers and hoping nothing broke.

Solution

Building a production AI stack without rebuilding infrastructure

Quin's engineering team designed their migration around a clear principle: focus engineering effort on business value, not commodity infrastructure. Orq.ai became the foundation for their in-house rebuild.

The platform eliminated several weeks of infrastructure work from the project plan. "I was blown away because we didn't have to build our own knowledge base anymore," Gareth notes. The team also gained built-in observability, tracing, and logging without custom implementation.

More importantly, Orq.ai changed how the team worked together. Product managers could now experiment with prompts directly using the experimentation tool, then hand off tangible results to engineers. "The PM can tweak or experiment and show me, 'This is what I want. Can you do that?'" Gareth explains. "Without Orq, you'd be making tweaks on a test branch, deploying to a test environment, then playing around. But then you can't easily go back to see results. These small things accumulate and make everything harder."

The platform also introduced clean separation of concerns to Quin's architecture. The chatbot handles deterministic logic while Orq.ai manages the generative AI components. "Now we can deploy things more independently," Gareth says. "You don't need a massive pipeline or change process just to update a prompt."

The eval framework was another major win. Quin replaced their brittle custom scaffolding with Orq.ai's testing tools, eliminating the false positives that had plagued their previous system. Engineers could now validate changes with confidence before deployment.

Gareth Steyn

Principal Software Engineer - Applied GenAI

Don't get caught up in the thrill of building a knowledge base pipeline or implementing your own observability and tracing. It takes you away from what's important, and as always, what's important is getting product value.

Conclusion

Faster iteration, confident deployments, and capacity for innovation

Quin deployed their rebuilt chatbot on schedule, completing the migration from the low-code platform to a production-ready in-house system in three months. The new architecture unlocked faster iteration cycles and eliminated the deployment anxiety that came with their previous eval framework.

Collaboration between product and engineering improved dramatically. Where product managers previously had near-zero ability to experiment with prompts independently, they can now run tests and refine approaches before engineering implementation. "It's infinitely better because collaboration on the previous solution was near zero," Gareth says.

The team is using the time savings to expand their AI capabilities. With infrastructure concerns off their plate and smoother PM-engineer workflows, they're exploring new applied GenAI products to help the healthcare industry.

What’s Next?

Expanding AI capabilities across healthcare workflows

Quin plans to release the full new version of their chatbot in the next quarter. Beyond that, the engineering and product teams are identifying additional opportunities to apply GenAI across their healthcare platform.

The freed-up engineering capacity means Quin can focus on innovation rather than maintenance, exploring new ways AI can improve healthcare access across the Netherlands.

Gareth Steyn

Principal Software Engineer - Applied GenAI

Orq unlocks a lot of time and ability. I'm hoping we can come up with new ideas on how to use applied GenAI for more products that can help the industry.

Create an account and start building today.