© 2025 Fractional AI
Beghou is a life sciences consulting and technology partner that helps teams build strong data and technology foundations, drive operational excellence, and engage providers, payers, and patients. One of its key offerings is Beghou Arc, a platform used to rapidly build data-driven application modules that turn client data into dashboards, field operations tools, and CRMs. The resulting applications are delivered to end users at those companies, who interpret, edit, and act on the data.
In addition to delivering advanced AI capabilities to its clients, Beghou has made significant internal investments in AI to streamline engineering workflows, enhance scalability, and enable consultants and technical specialists to focus on higher-value development and client problem-solving.
To rapidly scale their internal capabilities, Beghou’s AI experts collaborated with Fractional AI to build a system to reduce the specialized engineering time and training required to configure Beghou Arc for new client systems and updates. The goal of the project was to streamline engineering workflows, accelerate client onboarding, and strengthen the foundation for scalable platform-based delivery.
Historically, configuring the Beghou Arc platform for each new client or system refresh has required extensive manual effort from specially skilled engineers. A Beghou Arc site is built programmatically (server-side rendering on a C# backend) entirely from the contents of a database. The same database that holds client data also contains a variety of metadata_* tables that share the same schema across all Beghou Arc sites and hold the configuration data for each individual site.

Beghou Arc features a natural language configuration UI that allows site configurators to make changes to their sites (such as editing a grid or creating columns) without directly accessing the underlying database or writing SQL. However, this natural-language configuration is only a lightweight layer over the underlying implementation, so making changes remains quite complex. As a result, onboarding new users into the system still requires significant time and effort.

The following scenario illustrates a typical use case for Beghou Arc functionality.
The top of the “Manager Review” page typically features a search bar. On one particular Manager Review page there is no search bar, but a site configurator wants to add one that searches (filters) over the Id and Name columns.
Key Takeaway: The similarity of the grids and rows adds complexity when making changes manually in Beghou Arc.
Beghou's clients are experts in specialized pharmaceutical and healthcare domains and require precise, domain-specific insights. As a result, Beghou Arc is an extremely detailed and granular system: more than 100 metadata_* tables determine every aspect of a Beghou Arc site, with well over 1,500 columns in total (not counting customer tables and views).
While Beghou Arc's depth and configurability are key to delivering tailored solutions for clients, this level of sophistication also adds complexity for engineering teams managing multiple projects.
To maintain Beghou’s high delivery standards amid growth, the team sought ways to scale configuration processes and accelerate time-to-value, freeing engineers to focus on advancing platform capabilities and client innovation.
Fractional was initially tasked with creating a focused pilot project to establish the feasibility of incorporating AI into Beghou’s workflow.
The scope of the pilot project was to build a backend service that would enable Beghou Arc’s frontend to feature an AI assistant in the form of a chatbot or copilot. The project was limited to grid operations, i.e., operations on tabular data such as that shown in Figure 1. The backend service had to handle tasks such as:
Engineers estimated a 12-week project timeline, budgeting for the intricacies of deploying a production service into the Beghou Arc environment and substantial integration efforts. While deliberately focused on grid operations, the team hoped to substantially exceed the contracted scope in terms of assistant capabilities.
Final deliverables included a backend, frontend, and evaluation frameworks.

For the backend, the team used a FastAPI server. The core endpoint was /chat (with a /chat_stream variant for streaming via server-sent events), alongside other endpoints for storing and retrieving user data and for executing proposed SQL solutions.
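As an illustration of how the streaming variant can deliver incremental output, the sketch below formats standard server-sent-events frames. The frame syntax is the SSE wire format; the event names and payloads are hypothetical, not Beghou Arc's actual protocol.

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: an event name plus a JSON data payload."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Hypothetical events a /chat_stream response might emit
stream = "".join([
    sse_frame("token", {"text": "Adding a search bar..."}),
    sse_frame("done", {"sql": "-- proposed SQL here"}),
])
```

In FastAPI, a generator yielding such frames would typically be wrapped in a StreamingResponse with the text/event-stream media type.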
The team also leveraged:
Other libraries included: dirty-equals, inline-snapshot, pre-commit, Pydantic, Pyright, Rich, Ruff, Typer, SQLModel, testcontainers, uv, sqlglot.
The backend system was a single agent guided by a relatively static system prompt and provided just one tool, query_mssql_database, that it could use to execute read-only queries to inspect a Beghou Arc instance. The purpose of the read-only queries is to establish a built-in safety protocol by allowing the agent to understand the existing configuration before proposing any changes. Critically, the agent never executes changes directly; an engineer must review and approve before the system executes.
The system prompt is assembled from a handful of static files, separated mostly for developer convenience, and includes XML-encoded examples that are traces of full runs, from user prompt to final solution, including all tool calls. To every LLM call, the team appends user context that it gathers deterministically (based on which Beghou Arc site and page the user is viewing), letting the agent know useful details such as which grid, page, tab, and datasource are likely relevant to the request.
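A minimal sketch of that assembly, assuming the static files live in one directory and the context is a flat key/value mapping (file naming and the XML tag scheme here are invented for illustration):

```python
from pathlib import Path

def build_system_prompt(prompt_dir: Path, context: dict[str, str]) -> str:
    """Concatenate the static prompt files, then append per-request user
    context gathered deterministically from the user's current location."""
    parts = [p.read_text() for p in sorted(prompt_dir.glob("*.md"))]
    context_block = "\n".join(f"<{key}>{value}</{key}>" for key, value in context.items())
    return "\n\n".join(parts) + "\n\n<user_context>\n" + context_block + "\n</user_context>"
```

Keeping the static portion stable and isolating the per-request details in one trailing block makes the prompt easy to diff and cache between calls.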
Pydantic AI completely handles tool calling; the developer calls agent.run(...) and receives a structured response. Internally, the agent is provided with information about the query tool and can call it up to 15 times, getting back either query results or an error message.
The agent response for successful requests is an object that contains a reasoning block, an explanation designed to be shown in the chat interface, and a SQL script that it proposes will solve the user’s request.
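The overall shape of this loop can be approximated with a stdlib-only sketch. The real system delegates the loop to Pydantic AI's agent.run with a live LLM; the stubbed model interface, field names, and helper functions below are assumptions for illustration.

```python
from dataclasses import dataclass

MAX_TOOL_CALLS = 15  # budget of read-only query_mssql_database calls

@dataclass
class AgentResponse:
    reasoning: str    # internal reasoning block
    explanation: str  # shown to the user in the chat interface
    sql: str          # proposed solution; never executed without engineer approval

def run_agent(model, run_query) -> AgentResponse:
    """Ask the model; when it requests a read-only query, execute it and feed
    back the results (or the error message); stop on a final structured answer."""
    history: list[dict] = []
    for _ in range(MAX_TOOL_CALLS):
        step = model(history)
        if step["type"] == "tool_call":
            try:
                result = run_query(step["sql"])
            except Exception as exc:
                result = f"error: {exc}"
            history.append({"query": step["sql"], "result": result})
        else:
            return AgentResponse(**step["response"])
    raise RuntimeError("tool-call budget exhausted")
```

Returning a structured object rather than free text is what lets the frontend render the explanation in chat while routing the SQL into the separate review-and-approve flow.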
Each chat is long-lived, stored in the database with a UUID session ID, and new messages that come in are appended to all previous messages so that the agent is provided with the entire history of the conversation.
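That session handling can be sketched as follows (in-memory here for brevity; the real service persists sessions to the database, and the class and method names are invented):

```python
import uuid

class SessionStore:
    """Keeps the full conversation history per session so that every agent
    call is given all prior messages."""

    def __init__(self):
        self._sessions: dict[str, list[dict]] = {}

    def create(self) -> str:
        session_id = str(uuid.uuid4())  # UUID session ID
        self._sessions[session_id] = []
        return session_id

    def append(self, session_id: str, role: str, content: str) -> list[dict]:
        history = self._sessions[session_id]
        history.append({"role": role, "content": content})
        return history  # the agent receives the entire history
```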
Originally, the Beghou Arc team intended to create their own UI and call the Fractional backend. After positive initial feedback at the 7-week mark, the Beghou Arc team asked to convert the demo application for collecting user feedback (built in Streamlit) into a fully developed application. Because the Fractional team had already built a Streamlit frontend before producing the market-ready application, the transition to full functionality was more seamless. The frontend was developed in vanilla JavaScript, CSS, and HTML, with the help of LLM-assisted coding.


Building evaluation frameworks (or “evals”) for the Beghou Arc agent proved to be another complex task: the primary output of the agent is a SQL statement that should solve a problem, and it is difficult to determine deterministically whether an arbitrary SQL statement accomplishes an arbitrary task. Furthermore, to accomplish the task the agent needs to be able to run arbitrary queries against a database with actual data.
The solution uses snapshots of databases from four different Beghou Arc demo sites and starts a containerized MSSQL Server instance that serves them as if they were real Beghou Arc sites. The agent queries the databases on that server; once it responds, its proposed solution is executed, and a deterministic check then runs against the database to confirm accuracy and return a score. The transaction is always rolled back, so the databases remain unchanged for other eval test cases running simultaneously or afterwards.
In other words, these "evals" compare the agent's SQL proposals against expected outputs for dozens of configuration scenarios – from simple column hiding to complex bulk changes. Every modification to the agent's prompts or logic gets tested against this suite, preventing regressions and measuring improvements.
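The execute-check-rollback pattern at the heart of this harness can be sketched with the stdlib sqlite3 module (the real harness runs against a containerized MSSQL Server via testcontainers; the metadata_grid table and the check query below are invented for illustration):

```python
import sqlite3

def score_proposal(conn: sqlite3.Connection, proposed_sql: str, check_sql: str) -> int:
    """Execute the agent's proposed SQL, run a deterministic check query,
    then roll back so the snapshot database is unchanged for other cases."""
    cur = conn.cursor()
    try:
        cur.execute(proposed_sql)                    # apply the proposed change
        passed = cur.execute(check_sql).fetchone()[0]  # deterministic accuracy check
        return 1 if passed else 0
    finally:
        conn.rollback()  # snapshot stays pristine for the rest of the suite
```

Because scoring never commits, many eval cases can share the same snapshot database without interfering with one another.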
Tooling choices were crucial to the success of the project, with GPT-5 mini emerging as optimal for its balance of performance, cost, and response speed. Auth0 was straightforward to adopt and streamlined the user login process, and Logfire proved extremely powerful, with built-in integrations that let the team debug production crashes.
Additional tooling insights crystallized throughout the course of the project:
The Beghou Arc copilot enabled Beghou to complete common configuration tasks in minutes, leading to:
By streamlining a core part of the delivery process, Beghou can now take on more projects with the same resources and deliver value to clients faster.