
Automating Custom Software Implementations with AI

Who is Beghou

Beghou is a life sciences consulting and technology partner that helps teams build strong data and technology foundations, drive operational excellence, and engage providers, payers, and patients. One of their key offerings is Beghou Arc, a platform used to rapidly build data-driven application modules that turn client data into dashboards, field operations tools, and CRMs. The end products are made available to end users at those companies who then interpret, edit, and use the data. 

In addition to delivering advanced AI capabilities to its clients, Beghou has made significant internal investments in AI to streamline engineering workflows, enhance scalability, and enable consultants and technical specialists to focus on higher-value development and client problem-solving. 

To rapidly scale their internal capabilities, Beghou’s AI experts collaborated with Fractional AI to build a system to reduce the specialized engineering time and training required to configure Beghou Arc for new client systems and updates. The goal of the project was to streamline engineering workflows, accelerate client onboarding, and strengthen the foundation for scalable platform-based delivery. 

Introduction to Beghou Arc 

Historically, configuring the Beghou Arc platform for each new client or system refresh has required extensive manual effort from specially skilled engineers. A Beghou Arc site is built programmatically (server-side rendering on a C# backend) entirely from the contents of a database. The same database that holds client data also contains a variety of metadata_* tables that use the same schema across all Beghou Arc sites to hold configuration data for each individual site. 

Figure 1: Example of a Beghou Arc site, as seen by an end-user. 

Beghou Arc features a natural language configuration UI that allows site configurators to make changes to their sites (such as editing a grid or creating columns) without directly accessing the underlying database or writing SQL. However, the NLP configuration only provides a lightweight layer over the underlying implementation, so the process of making changes remains quite complex. As a result, onboarding new users into the system still requires significant time and effort. 

Figure 2: A table as seen inside the configuration UI. Each column in the displayed table is a row in the metadata_UIGridColumns table with 30 different columns controlling different configuration options. 

Looking Under the Hood 

The following scenario shows a potential use case for Beghou Arc functionalities. 

The top of the “Manager Review” page typically features a search bar. On one particular Manager Review page, there is no search bar, but a site configurator wants to add one and have it search (filter) over the Id and Name columns. 

  1. The Manager Review page is a row in the metadata_UIPages table. To add a search bar, the site configurator must set its FilterCollection_Id to specify the filter collection it will use. 
  2. The Manager Review page is classified as a type of “grid”, so it has a Grid_Id (13) that is a foreign key into the metadata_UIGrids table. 
  3. The corresponding grid is a row in the metadata_UIGrids table with the Name “Manager Summary” and a DataSource_Id of 18, which is a foreign key into the metadata_UIDataSources table. 
  4. The matching row in the metadata_UIDataSources table has a MetadataEntity_Id1 of 61. It also has a MetadataEntity_Id column, but that one is deprecated and should be ignored, which poses a potential risk for both developers and AI systems. 
  5. The matching row in the metadata_MetadataEntities table has the name TerritoryView, which means that grid 13 pulls its data from the actual database view TerritoryView and displays it to the user in the rendered table. 
  6. To create a new filter collection, the site configurator must create a new row in metadata_UIFilterCollections, but first needs to create a field set by adding a new row in metadata_UIFieldSets, since filter collections have FieldSet_Id as a required foreign key. 
  7. To have the search bar filter over the Id and Name columns, the site configurator must join metadata_UIGridColumns with metadata_UIDataFields to learn that Id and Name are the DisplayName values for data fields 157 and 158. From there, they can insert two rows into metadata_FieldSetFields with those data field Ids and the newly created field set. 
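The steps above can be sketched end to end. The following is a runnable illustration using SQLite stand-ins for the metadata tables described in the walkthrough; the real platform runs on SQL Server, the real tables have many more columns, and the specific schema here is a simplified assumption:

```python
import sqlite3

# Minimal stand-ins for the metadata tables (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE metadata_UIFieldSets (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE metadata_UIFilterCollections (
    Id INTEGER PRIMARY KEY, Name TEXT,
    FieldSet_Id INTEGER NOT NULL REFERENCES metadata_UIFieldSets(Id));
CREATE TABLE metadata_FieldSetFields (
    FieldSet_Id INTEGER REFERENCES metadata_UIFieldSets(Id),
    DataField_Id INTEGER);
CREATE TABLE metadata_UIPages (
    Id INTEGER PRIMARY KEY, Name TEXT, FilterCollection_Id INTEGER);
""")
conn.execute("INSERT INTO metadata_UIPages (Id, Name) VALUES (7, 'Manager Review')")

# Step 1: create the field set the filter collection will reference.
cur = conn.execute(
    "INSERT INTO metadata_UIFieldSets (Name) VALUES ('Manager Review Search')")
fieldset_id = cur.lastrowid

# Step 2: point the field set at data fields 157 (Id) and 158 (Name).
conn.executemany(
    "INSERT INTO metadata_FieldSetFields (FieldSet_Id, DataField_Id) VALUES (?, ?)",
    [(fieldset_id, 157), (fieldset_id, 158)])

# Step 3: create the filter collection and attach it to the page.
cur = conn.execute(
    "INSERT INTO metadata_UIFilterCollections (Name, FieldSet_Id) VALUES (?, ?)",
    ("Manager Review Search", fieldset_id))
conn.execute("UPDATE metadata_UIPages SET FilterCollection_Id = ? WHERE Id = 7",
             (cur.lastrowid,))
```

Even in this stripped-down form, a single UI change touches four tables and two foreign-key chains, which is exactly the complexity the walkthrough describes.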

Key Takeaway: Because so many similar-looking grids and rows are spread across interrelated metadata tables, making changes manually in Beghou Arc is complex and error-prone. 

Key Challenge: Balancing Customized Sophistication and Scalability 

Beghou's clients are experts in specialized pharmaceutical and healthcare domains, thus requiring precise, domain-specific insights. As a result, Beghou Arc is an extremely detailed and granular system. There are over 100 metadata_* tables that determine every aspect of a Beghou Arc site, with well over 1,500 columns in total (not counting customer tables and views). 

While Beghou Arc's depth and configurability are key to delivering tailored solutions for clients, this level of sophistication also adds complexity for engineering teams managing multiple projects. 

To maintain Beghou’s high delivery standards amid growth, the team sought ways to scale configuration processes and accelerate time-to-value, freeing engineers to focus on advancing platform capabilities and client innovation. 

Objective 

Fractional was initially tasked with creating a focused pilot project to establish the feasibility of incorporating AI into Beghou’s workflow. 

The scope of the pilot project was to build a backend service that would enable Beghou Arc’s frontend to feature an AI assistant in the form of a chatbot or copilot. The project was limited to grid operations, such as the tabular data shown in Figure 1. The backend service had to handle tasks such as: 

  • “hide the last 6 columns” 
  • “move the ‘Customer ID’ column to the second position” 
  • “add a new column for ‘Sales Volume’” (assuming that is the name of a data field that already exists on the same datasource as the current table) 
  • “double the width of the street address column and half the width of the state column” 

Engineers estimated a 12-week project timeline, budgeting for the intricacies of deploying a production service into the Beghou Arc environment and substantial integration efforts. While deliberately focused on grid operations, the team hoped to substantially exceed the contracted scope in terms of assistant capabilities. 

Deliverables 

Final deliverables included a backend, frontend, and evaluation frameworks. 

Figure 3: Flow chart of the final deliverable. 

Backend 

For the backend, the team used a FastAPI server. The core endpoint was /chat (with a /chat_stream variant for streaming via SSE), alongside endpoints for storing and retrieving user data and for executing proposed SQL solutions. 

The team also leveraged: 

  • Multiple application database backends: SQLite for local development, Postgres on Heroku, Azure SQL Server in Beghou’s Azure environment, and a local SQL Server instance for running evals against a copy of a test Beghou Arc site 
  • Docker to containerize our service 
  • Logfire for tracing and logging 
  • Auth0 for user authentication and logins 
  • Pydantic AI to simplify agent development and make it easy to try alternate providers 

Other libraries included: dirty-equals, inline-snapshot, pre-commit, Pydantic, Pyright, Rich, Ruff, Typer, SQLModel, testcontainers, uv, sqlglot. 

The backend system was a single agent guided by a relatively static system prompt and provided just one tool, query_mssql_database, that it could use to execute read-only queries to inspect a Beghou Arc instance. The purpose of the read-only queries is to establish a built-in safety protocol by allowing the agent to understand the existing configuration before proposing any changes. Critically, the agent never executes changes directly; an engineer must review and approve before the system executes. 
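One lightweight way to enforce that a query tool stays read-only is to gate it on the statement type before execution. This is a sketch under stated assumptions: the real query_mssql_database tool targets SQL Server and may guard differently (for example, by parsing with sqlglot or using database permissions); SQLite is used here only to keep the example self-contained:

```python
import sqlite3

# Naive guard: accept only SELECT statements and CTEs. A production
# version could parse the statement properly or rely on DB-level permissions.
READ_ONLY_PREFIXES = ("select", "with")

def query_database(sql: str, conn: sqlite3.Connection) -> list[tuple]:
    """Tool body: run a query, rejecting anything that isn't a read."""
    if not sql.strip().lower().startswith(READ_ONLY_PREFIXES):
        raise ValueError("tool is read-only: only SELECT/WITH statements allowed")
    return conn.execute(sql).fetchall()

# Demo data mirroring the walkthrough above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadata_UIGrids (Id INTEGER, Name TEXT)")
conn.execute("INSERT INTO metadata_UIGrids VALUES (13, 'Manager Summary')")
```

Rejected writes surface as tool errors, which the agent sees and can recover from, while actual changes stay behind the human-review gate.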

The system prompt is assembled from a handful of static files, separated mostly for developer convenience, and includes XML-encoded examples that are traces from full runs (from user prompt to final solution, including all tool calls). To every LLM call, the team appends user context gathered deterministically (based on which Beghou Arc site and page the user is viewing), which tells the agent useful details such as which grid, page, tab, and datasource are likely relevant to the request. 
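The assembly described above can be sketched as two small functions. The function names and exact string formats are illustrative assumptions; only the ingredients (static prompt files, XML example traces, deterministic per-request context) come from the article:

```python
def build_system_prompt(static_parts: list[str], example_traces: list[str]) -> str:
    """Concatenate static prompt files, then XML-encoded example traces."""
    examples = "\n".join(f"<example>{trace}</example>" for trace in example_traces)
    return "\n\n".join([*static_parts, examples])

def build_user_context(site: str, page: str, grid_id: int, datasource_id: int) -> str:
    """Deterministically gathered context appended to every LLM call."""
    return (f"The user is viewing site {site!r}, page {page!r} "
            f"(grid {grid_id}, datasource {datasource_id}).")

prompt = build_system_prompt(
    ["You are a Beghou Arc configuration assistant.",
     "Never execute changes directly; always propose a SQL script for review."],
    ["<user_prompt>hide the last column</user_prompt>"],
)
context = build_user_context("DemoSite", "Manager Review", 13, 18)
```

Keeping the context deterministic means the agent never has to spend tool calls rediscovering which page the user is on.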

Pydantic AI completely handles tool calling; the developer calls agent.run(...) and receives a structured response. Internally, the agent is provided with information about the query tool and can call it up to 15 times, getting back either query results or an error message. 
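The loop Pydantic AI manages can be hand-rolled for illustration. This is a sketch of the general pattern, not Pydantic AI's actual implementation; the callable signatures are assumptions:

```python
MAX_TOOL_CALLS = 15  # the cap described above

def run_agent(user_prompt: str, call_model, run_query) -> str:
    """Bounded tool loop: the model either requests a query or returns a
    final answer. Query errors are fed back to the model as tool results."""
    messages = [("user", user_prompt)]
    for _ in range(MAX_TOOL_CALLS):
        action, payload = call_model(messages)
        if action == "final":
            return payload
        try:
            result = run_query(payload)          # model asked for a query
        except Exception as exc:                 # errors go back, not up
            result = f"error: {exc}"
        messages.append(("tool", str(result)))
    raise RuntimeError("tool-call budget exhausted")
```

The error-as-result behavior matters: a failed query becomes information the agent can use to correct its next attempt rather than a hard failure.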

The agent response for successful requests is an object that contains a reasoning block, an explanation designed to be shown in the chat interface, and a SQL script that it proposes will solve the user’s request. 
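With Pydantic AI, such a structured response is typically declared as a Pydantic model. The field names below are illustrative; the article only says the response contains reasoning, a user-facing explanation, and a proposed SQL script:

```python
from pydantic import BaseModel

class AgentSolution(BaseModel):
    reasoning: str    # the agent's internal rationale
    explanation: str  # shown to the user in the chat interface
    sql_script: str   # proposed change, executed only after engineer approval

solution = AgentSolution(
    reasoning="Grid 13 backs the Manager Review page; column 42 is the target.",
    explanation="I will hide the requested column.",
    sql_script="UPDATE metadata_UIGridColumns SET Hidden = 1 WHERE Id = 42",
)
```

Validating the output against a schema means malformed model responses fail fast instead of producing an unparseable script.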

Each chat is long-lived, stored in the database with a UUID session ID, and new messages that come in are appended to all previous messages so that the agent is provided with the entire history of the conversation. 
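A minimal sketch of that persistence pattern, assuming a simple message table (the real service stores richer message objects; SQLite stands in for the application database):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE chat_messages (
    session_id TEXT, seq INTEGER, role TEXT, content TEXT)""")

def append_message(session_id: str, role: str, content: str) -> None:
    """Append a message to a session, preserving order via a sequence number."""
    seq = conn.execute(
        "SELECT COALESCE(MAX(seq), -1) + 1 FROM chat_messages WHERE session_id = ?",
        (session_id,)).fetchone()[0]
    conn.execute("INSERT INTO chat_messages VALUES (?, ?, ?, ?)",
                 (session_id, seq, role, content))

def load_history(session_id: str) -> list[tuple[str, str]]:
    """Return the full conversation, which is replayed to the agent each turn."""
    return conn.execute(
        "SELECT role, content FROM chat_messages WHERE session_id = ? ORDER BY seq",
        (session_id,)).fetchall()

session = str(uuid.uuid4())
append_message(session, "user", "hide the last 6 columns")
append_message(session, "assistant", "Here is a proposed script...")
```

Replaying the whole history keeps the agent stateless between requests: all conversational state lives in the database, keyed by the session UUID.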

Frontend 

Originally, the Beghou Arc team intended to create their own UI and call the Fractional backend. After initial positive feedback at the 7-week mark, the Beghou Arc team asked to convert the Streamlit demo application used for collecting user feedback into a fully developed application. Because the Fractional team had already built the Streamlit frontend before starting on a market-ready application, the transition to full functionality was seamless. The front end was developed in vanilla JavaScript, CSS, and HTML, with help from LLM-assisted coding. 

 

Figures 4 & 5: The Streamlit frontend 

Evaluation Frameworks

Building evaluation frameworks (or “evals”) for the Beghou Arc agent proved to be another complex task: the primary output of an agent is a SQL statement that should solve a problem, and it is difficult to deterministically determine whether an arbitrary SQL statement accomplishes an arbitrary task. Furthermore, in order to accomplish the task the agent needs to be able to run arbitrary queries against a database with actual data. 

The solution uses snapshots of databases from four different Beghou Arc demo sites and starts a containerized MSSQL Server instance that serves them as if they were real Beghou Arc sites. The agent queries the databases on that server, and once it responds, its proposed solution is executed. Once the proposed solution is executed, a deterministic check to confirm accuracy runs on the database and returns a score. The transaction is always rolled back, so the databases remain static for the other eval test cases running simultaneously or afterwards. 
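The execute-check-rollback cycle can be sketched as follows. This is a simplified illustration using SQLite (the real harness runs against containerized MSSQL snapshots, and the naive semicolon split stands in for proper script execution):

```python
import sqlite3

def run_eval(conn: sqlite3.Connection, proposed_sql: str, check) -> float:
    """Apply the agent's proposed script inside a transaction, score it with
    a deterministic check, then roll back so the snapshot stays pristine."""
    conn.execute("BEGIN")
    try:
        for stmt in proposed_sql.split(";"):  # naive split; real scripts need a parser
            if stmt.strip():
                conn.execute(stmt)
        return check(conn)
    finally:
        conn.execute("ROLLBACK")

# A toy snapshot and test case (real evals run against demo-site copies).
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE metadata_UIGridColumns (Id INTEGER, Hidden INTEGER DEFAULT 0)")
conn.execute("INSERT INTO metadata_UIGridColumns VALUES (1, 0)")

score = run_eval(
    conn,
    "UPDATE metadata_UIGridColumns SET Hidden = 1 WHERE Id = 1",
    check=lambda c: float(c.execute(
        "SELECT Hidden FROM metadata_UIGridColumns WHERE Id = 1").fetchone()[0]),
)
```

Because the rollback runs in a finally block, the database is restored even when the proposed script or the check raises, which is what lets eval cases share the same snapshots safely.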

In other words, these "evals" compare the agent's SQL proposals against expected outputs for dozens of configuration scenarios – from simple column hiding to complex bulk changes. Every modification to the agent's prompts or logic gets tested against this suite, preventing regressions and measuring improvements. 

Tools 

The choice of tools was crucial to the project's success, with GPT-5 mini emerging as optimal for its balance of performance, cost, and response speed. Auth0 was straightforward to adopt and streamlined the user login process, and Logfire proved extremely powerful, with built-in integrations that allowed the team to debug production crashes. 

Additional tooling insights crystallized over the course of the project: 

  • gpt-5 excelled at parallel tool calling and was sometimes far more efficient than other models, making 5 queries at once and requiring only 2 or 3 LLM calls instead of 4-7. 
  • Low or medium reasoning effort often outperformed high for gpt-5, o3, and o4-mini, seemingly because high reasoning effort caused the models to “overthink” problems and do more than they should. 
  • When teaching the agent how to handle a task, adding an actual trace of a successful execution to the system prompt as an XML example, without additional explanatory text, worked better than adding detailed, step-by-step instructions. 
  • Lowering temperature from 1.0 to 0.4 almost halved the error rate and made the agent more consistent overall. 

Project’s Impact 

The Beghou Arc copilot enabled Beghou to complete common configuration tasks in minutes, leading to: 

  • Greater leverage (junior engineers can handle routine work, freeing up senior staff for complex changes). 
  • Faster delivery (tasks that once took hours or days can be completed in minutes). 
  • Quicker onboarding (new engineers can ramp faster with guided AI support). 
  • Consistent quality (built-in review processes ensure outputs meet technical and client standards). 
  • Scalability (a foundation that can expand across teams and eventually to client-facing use). 

By streamlining a core part of the delivery process, Beghou can now take on more projects with the same resources and deliver value to clients faster.