
Automating Custom Software Implementations: Beghou’s AI Copilot for Faster Configurations

Beghou partnered with Fractional AI to build an intelligent copilot that automates complex configuration workflows in Beghou Arc — the company’s flagship life sciences data platform. The AI system dramatically reduces engineering time for new client implementations, completing configuration tasks up to 10× faster, while streamlining onboarding and creating a foundation for scalable platform delivery.

"Fractional AI was instrumental in helping us streamline engineering workflows and deliver tailored, tech-enabled solutions to clients faster – while maintaining rigor and excellence Beghou is known for. Now our team can focus more of its time on innovation and solving complex client challenges."
- Dan Cardinal, CTO, Beghou

Who is Beghou?

Beghou is a life sciences consulting and technology firm helping biopharma organizations build strong data and technology foundations, drive operational excellence, and engage providers, payers, and patients.

One of Beghou’s core offerings is Beghou Arc, a connected, flexible commercialization platform that helps life sciences clients unify and activate their data, tech, and execution across commercial planning, field deployment and operations, incentive compensation, customer engagement, and decision intelligence. Each Arc instance is uniquely configured for the client’s business logic — an approach that delivers precision and flexibility but has historically demanded deep engineering expertise and extensive manual effort.

The Challenge: Balancing Custom Sophistication with Scalability

Each Beghou Arc site is programmatically generated from a web of interrelated metadata tables — more than 100 in total, spanning over 1,500 columns. While this design allows precise customization, it also introduces steep complexity for engineers configuring new sites or updating existing ones.

Even with Beghou’s natural-language configuration UI, site setup and modification required substantial manual work. Adding something as simple as a search bar to a “Manager Review” page could require joining multiple metadata tables, tracking foreign keys across UIGrids, UIFilterCollections, and UIDataFields, and verifying deprecated fields.

To maintain Beghou’s delivery standards while scaling its client base, the team sought a way to:

  • Reduce the manual engineering required for site configuration
  • Accelerate onboarding and refresh cycles
  • Enable junior engineers to execute configuration changes safely and consistently 

The Solution: Beghou Arc AI Copilot

Fractional and Beghou jointly developed an AI system that could translate natural-language configuration requests into validated SQL proposals — all without executing any changes automatically.

The copilot operates as a backend service integrated with Beghou Arc’s UI. Engineers or configurators can issue commands such as:

  • “Hide the last six columns."
  • “Add a new column for ‘Sales Volume.’”
  • “Move the ‘Customer ID’ column to the second position.”
  • “Double the width of the street address column and halve the width of the state column.”

Each request triggers a reasoning workflow that interprets the user’s intent, inspects the current site configuration through read-only database queries, and generates a proposed SQL script. The engineer remains fully in control, reviewing every proposed change before execution.

This approach preserves expert oversight while automating the most repetitive and error-prone aspects of Arc configuration.
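
As a rough illustration of this propose-and-review loop, the sketch below shows how a natural-language request might travel from intent to a reviewed SQL proposal. It is a minimal, hypothetical outline: the function and class names, the placeholder inspection step, and the sample SQL are invented for illustration and are not Beghou's production code.

    # Minimal, hypothetical sketch of the propose-and-review loop.
    # All names, fields, and SQL are illustrative, not Beghou's actual code.
    from dataclasses import dataclass

    @dataclass
    class ConfigProposal:
        reasoning: str     # the agent's reasoning trace
        explanation: str   # human-readable summary shown to the engineer
        sql: str           # proposed SQL; never executed automatically

    def inspect_site(request: str, site: str) -> dict:
        """Placeholder for the read-only inspection of the current configuration."""
        return {"site": site, "columns": ["CustomerID", "Name", "Region"]}

    def draft_proposal(request: str, state: dict) -> ConfigProposal:
        """Placeholder for the LLM step that turns intent plus observed state into SQL."""
        return ConfigProposal(
            reasoning=f"Observed columns {state['columns']} for request: {request}",
            explanation="Move the Customer ID column to display position 2.",
            sql="UPDATE UIDataFields SET DisplayOrder = 2 WHERE FieldName = 'CustomerID';",
        )

    def handle_request(request: str, site: str) -> ConfigProposal:
        state = inspect_site(request, site)    # read-only: nothing is modified
        return draft_proposal(request, state)  # SQL is proposed, not applied

    proposal = handle_request("Move the 'Customer ID' column to the second position", site="demo")
    print(proposal.explanation)
    print(proposal.sql)
    # The engineer reviews the proposal and decides whether to execute it.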

Impact

The Beghou Arc AI Copilot now enables engineers to complete configuration tasks in minutes instead of hours or days, delivering measurable improvements across the board:

  • Leverage: Senior developers can spend more time on advanced design and problem-solving, as AI-guided automation accelerates routine configuration work. 
  • Speed: Routine updates complete up to 10× faster, reducing project delivery timelines.
  • Consistency: Structured review and read-only safety layers ensure SQL proposals align with Beghou’s standards.
  • Onboarding: New engineers ramp faster with contextual AI support.
  • Scalability: The architecture sets the foundation for broader AI-assisted configuration across Beghou’s portfolio and future client-facing tools.

Looking Under the Hood

Architecture Overview

At its core, the system is powered by a FastAPI backend orchestrating a GPT-5-mini-based agent optimized for low-latency tool use. The agent operates within strict boundaries:

  • A read-only query_mssql_database tool allows inspection of Arc instances without risk of unintended modification.
  • The system prompt contains detailed XML traces of full runs — from user input to validated SQL output — providing the agent with concrete procedural patterns.
  • Context injection dynamically assembles relevant metadata about the current page, grid, and datasource to ground the agent’s reasoning.
  • The agent returns structured objects containing three elements: a reasoning trace, a human-readable explanation, and the proposed SQL statement.
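
A condensed sketch of how these pieces could fit together is shown below: a FastAPI endpoint, the read-only query_mssql_database tool, and the three-part structured response. The endpoint path, model fields, and stubbed database call are assumptions made for illustration rather than Beghou's actual implementation.

    # Illustrative only: the endpoint path, field names, and the stubbed query
    # tool are assumptions, not Beghou's production service.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class CopilotRequest(BaseModel):
        session_id: str     # ties the turn to stored conversation history
        message: str        # the engineer's natural-language instruction
        page_context: dict  # metadata about the current page, grid, and datasource

    class CopilotResponse(BaseModel):
        reasoning: str      # the agent's reasoning trace
        explanation: str    # human-readable description of the change
        proposed_sql: str   # SQL proposal awaiting engineer review

    def query_mssql_database(sql: str) -> list[dict]:
        """Read-only tool exposed to the agent; stubbed here. In production it
        would run SELECT statements against the Arc instance and nothing else."""
        assert sql.lstrip().lower().startswith("select"), "read-only tool"
        return []

    @app.post("/copilot/propose", response_model=CopilotResponse)
    def propose(req: CopilotRequest) -> CopilotResponse:
        # The real agent loop would call query_mssql_database (often several
        # queries in parallel) before drafting SQL grounded in what it saw.
        _ = query_mssql_database("SELECT TOP 1 * FROM UIGrids")
        return CopilotResponse(
            reasoning="Inspected UIGrids for the current page.",
            explanation=f"Proposed change for: {req.message}",
            proposed_sql="-- proposed SQL would appear here",
        )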

Every session is stored with full conversational history to maintain context and auditability.
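
The XML run traces mentioned above act as worked examples baked directly into the system prompt. A toy version, with invented tag names, IDs, and SQL, might look like this:

    # Hypothetical example trace embedded in the system prompt. Tag names,
    # IDs, and SQL are invented; only the overall pattern (user input ->
    # read-only queries -> proposed SQL) reflects the real system.
    EXAMPLE_TRACE = """
    <example>
      <user_request>Hide the last six columns.</user_request>
      <tool_call name="query_mssql_database">
        SELECT FieldName, DisplayOrder FROM UIDataFields WHERE GridId = 42
      </tool_call>
      <tool_result>18 columns returned; the last six are positions 13-18</tool_result>
      <proposed_sql>
        UPDATE UIDataFields SET IsVisible = 0
        WHERE GridId = 42 AND DisplayOrder > 12;
      </proposed_sql>
    </example>
    """

    SYSTEM_PROMPT = (
        "You translate Arc configuration requests into SQL proposals. "
        "Never execute changes; only propose them for review.\n" + EXAMPLE_TRACE
    )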

Evaluation Frameworks

Evaluating SQL-generating agents presents unique challenges: correctness is often context-specific, and small syntax differences can mask logical errors. To achieve deterministic testing, Fractional designed a custom containerized evaluation environment:

  • Snapshots of four Beghou Arc demo sites populate isolated MSSQL containers.
  • Each eval case runs a full agent query loop, executes the proposed SQL, verifies outcomes against expected results, and rolls back the transaction.
  • This allows parallel, non-destructive regression testing across dozens of configuration scenarios — from simple column adjustments to multi-table updates.
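
As a concrete sketch, a single eval case might look like the following, assuming a pyodbc connection to a snapshot-loaded MSSQL container. The connection string, table and column names, and the run_agent placeholder are all assumptions for illustration.

    # Sketch of one eval case against a snapshot-loaded MSSQL container.
    # Connection details, schema names, and run_agent() are placeholders.
    import pyodbc

    CONN_STR = (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=localhost,1433;DATABASE=arc_demo_site_1;"
        "UID=sa;PWD=<password>;TrustServerCertificate=yes"
    )

    def run_agent(request: str) -> str:
        """Placeholder for the full agent query loop; returns proposed SQL."""
        return "UPDATE UIDataFields SET IsVisible = 0 WHERE DisplayOrder > 12;"

    def eval_case(request: str, check_sql: str, expected: int) -> bool:
        conn = pyodbc.connect(CONN_STR, autocommit=False)
        try:
            proposed = run_agent(request)                  # full agent loop
            cur = conn.cursor()
            cur.execute(proposed)                          # apply inside the open transaction
            actual = cur.execute(check_sql).fetchone()[0]  # verify the outcome
            return actual == expected
        finally:
            conn.rollback()                                # leave the snapshot untouched
            conn.close()

    passed = eval_case(
        request="Hide the last six columns.",
        check_sql="SELECT COUNT(*) FROM UIDataFields WHERE IsVisible = 1",
        expected=12,
    )
    print("PASS" if passed else "FAIL")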

The evals serve as both QA and continuous improvement infrastructure, providing immediate feedback when prompt, reasoning, or model changes affect reliability.

Model Insights

Through extensive experimentation, the team surfaced several key observations:

  • Parallel tool calling: GPT-5 mini often executed five queries concurrently, reducing total reasoning steps from 6–7 to 2–3.
  • Moderate reasoning beats overthinking: Lower reasoning effort frequently outperformed higher settings, which caused the model to pursue unnecessary steps.
  • Example traces outperform long instructions: Embedding successful execution traces as XML examples yielded more stable results than verbose “how-to” prompts.
  • Temperature tuning: Lowering temperature from 1.0 to 0.4 nearly halved error rates and improved determinism.
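
Distilled into configuration, these findings might amount to a handful of settings along the lines of the sketch below. The field names are illustrative and do not correspond to any particular SDK; the values reflect the observations above.

    # Illustrative summary of the tuned settings; field names are invented
    # and do not map to a specific vendor API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentSettings:
        model: str = "gpt-5-mini"
        reasoning_effort: str = "low"      # lower effort outperformed higher settings
        temperature: float = 0.4           # down from 1.0; roughly halved error rates
        parallel_tool_calls: bool = True   # batch read-only queries in a single step
        max_tool_rounds: int = 3           # 2-3 rounds sufficed with parallel calls

    print(AgentSettings())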

Together, these patterns formed a repeatable methodology for building safe, efficient, and explainable AI systems.