Bentley Copilot: AI Tutor That Learns the Software to Speed Infrastructure Design

  • Thread Author
At the 2025 Bentley Illuminate conference an engineer from VHB demonstrated a deceptively simple application of generative AI—an internal Copilot agent that learns the software so engineers don’t have to—and the result is already saving real time while pointing to how design offices and construction sites may reorganize around digital assistants in the near future.

Background​

What happened at Bentley Illuminate and who is involved​

In June 2025, Bentley’s Illuminate event in Atlanta showcased emerging workflows for infrastructure delivery, and one session quickly became a talking point: Kyle Rosenmeyer, Model‑Based Design Lead at VHB, presented a custom AI agent he built using Microsoft Copilot Studio that helps engineers ask natural‑language questions about Bentley tools—OpenRoads, ProjectWise, MicroStation—and receive step‑by‑step guidance. The agent was demonstrated live and subsequently packaged as a repeatable blueprint that Rosenmeyer is sharing with peers across the industry.
VHB has publicly highlighted the tool internally and to Bentley’s Premier Scholars network as “Bentley Copilot” or “Copilot for Bentley,” describing it as a Teams‑embedded assistant that replaces tedious web searches and manual tutorial hunting with an on‑demand tutor inside the collaboration environment.

Why the story matters now​

Engineering software—CAD, model‑based design suites, and document management systems—carries a steep learning curve. Time spent learning tool mechanics is time diverted from design and engineering judgment. Rosenmeyer’s approach reframes the problem: use an AI agent to ground knowledge in authoritative documentation plus community Q&A and deliver it conversationally, reducing the friction of learning software features and troubleshooting issues. This is not a marginal productivity hack; it speaks to workforce efficiency, onboarding speed, and the way domain knowledge is surfaced inside enterprise systems.

Overview of the tool and how it was built​

Core components: Copilot Studio, system prompt, and knowledge sources​

The agent was assembled on Microsoft Copilot Studio, Microsoft’s low‑code platform for building internal AI assistants. Copilot Studio exposes connectors and grounding mechanisms that let organizations index web content, internal docs, video transcripts, and knowledge bases as the agent’s reference points. Rosenmeyer emphasized two build elements as decisive: a carefully crafted system prompt that defines the assistant’s tone and role, and a multi‑source grounding strategy that includes Bentley’s official documentation and the community forums where engineers exchange practical solutions.
The specific knowledge sources Rosenmeyer used were:
  • Bentley’s main documentation pages and product manuals for OpenRoads, ProjectWise, MicroStation.
  • Bentley blog posts and published user stories that clarify workflows and feature intent.
  • Bentley’s YouTube channel for how‑to demonstrations and recorded sessions.
  • Communities forums and user‑generated Q&A where engineers post real troubleshooting steps and workarounds (Rosenmeyer called this “messy but gold”).
  • Publicly available knowledge curated into the agent’s retrieval index.
Together these sources allow the Copilot agent to ground responses in authoritative content while leveraging the practical experience captured in forum threads—improving both accuracy and applicability for the everyday engineer.

The “system prompt” and behavior tuning​

The agent’s behavior was shaped by a single, human‑authored system prompt that encodes domain expertise, role expectations, and guardrails: be concise, prefer documented workflows, explain step sequences, and ask clarifying questions when the user’s request is ambiguous. Rosenmeyer credits the prompt design for much of the agent’s usefulness—effectively turning the model from a generic conversationalist into a software tutor and troubleshooter. This pattern—prompt + curated grounding + retrieval—mirrors emerging best practices for reliable enterprise copilots.

Deployment: Teams‑first and tenant security​

VHB deployed the agent inside Microsoft Teams, which keeps interactions inside the corporate tenant and benefits from existing identity, compliance, and governance controls. Deploying in Teams also reduces context switching: engineers ask the agent in the same chat environment they use for project coordination, giving the assistant immediate situational relevance and auditability. Rosenmeyer described the deployment as “stupid easy” in the sense that the technical barriers were low for someone comfortable with URL-based grounding and a well‑constructed prompt.

Measured outcomes: the time savings claim and caveats​

Claimed impact: hundreds of hours saved​

Rosenmeyer reports the agent personally saved him more than 100 hours in the past year, and a VHB survey suggested other users save one to two hours per week on average. Those are meaningful productivity gains when aggregated across an engineering team—one hour saved per engineer per week becomes dozens of full‑time equivalents across large firms. The public accounts combine Rosenmeyer’s personal testimony, VHB’s internal reporting, and the session’s workshop outputs where many participants built working Copilot tools in under an hour.

Cautions on the numbers​

These time‑savings figures should be read with standard journalistic skepticism: the 100‑hour number is Rosenmeyer’s personal metric and the survey methodology (sample size, selection bias, question framing) has not been publicly audited. Save‑rate claims are common in early Copilot stories and while they indicate real potential, they are sensitive to measurement design and user behavior. Treat the estimates as directional evidence of value rather than independently‑verified ROI.

Why this approach works: strengths and enablers​

1) Low‑code, high‑leverage platform​

Copilot Studio’s low‑code model lets domain experts assemble agents without deep engineering resources. That democratization matters in consultancy‑heavy industries where practitioners know the domain but not the cloud stack. Rosenmeyer’s workshop at Bentley’s Premier Scholars program reportedly produced dozens of custom Copilots in an hour—demonstrating how rapidly firms can prototype useful assistants.

2) Grounding in authoritative and peer content​

The agent combines official docs with community Q&A, giving it the dual benefits of correctness (from vendor docs) and pragmatic workarounds (from forum posts). Retrieval‑based grounding reduces hallucination risk and enables the assistant to quote or link to the originating material when asked for source evidence, which is crucial in regulated or technical contexts.

3) Integrated user experience reduces switching costs​

Embedding the assistant in Teams means engineers don’t need to context‑switch to a browser search or a separate knowledge base. This UX advantage compounds: answers arrive in the same conversation thread where the issue was raised, making it simpler to convert guidance into actual CAD actions or project decisions.

4) Rapid knowledge transfer and onboarding​

The assistant functions as an instant tutor for specific tasks—how to create a corridor in OpenRoads, where a ProjectWise workflow stores a revision, or how to apply a MicroStation tool. For firms hiring new graduates or cross‑training staff, a conversational guide that can explain tasks on demand shortens the ramp and preserves institutional best practices.

Technical and operational risks​

Hallucination and incorrect guidance​

Even grounded agents can produce plausible but incorrect answers. In infrastructure design, an incorrect configuration or misunderstood step can lead to rework or, in extreme cases, field safety risks. Every AI suggestion must be treated as advisory until validated by a qualified engineer. Firms should enforce human‑in‑the‑loop sign‑offs for any AI output that affects design deliverables or regulatory submissions.

Data governance and leakage​

Copilot Studio agents rely on content connectors. Misconfigured connectors or overly broad permissions can expose sensitive project data to the model or to unintended audiences. Enterprises must apply sensitivity labels, DLP rules, and tenant‑level guardrails before indexing project repositories into agent ground truth. Microsoft’s tenant grounding reduces exposure but does not absolve the organization from rigorous governance.

Vendor lock‑in and architectural dependence​

A Copilot‑based approach tightens a firm’s dependency on Microsoft’s cloud and agent tooling. That concentration can accelerate development initially but raises commercial negotiation and architectural portability risks later. Firms should plan export strategies for knowledge artifacts and consider layered architectures if vendor independence is strategically important.

Cost and consumption surprises​

Copilot Studio uses metered message or consumption models. High‑volume or autonomous agent scenarios can generate significant message counts and unexpected charges. Track and control agent consumption, and test message economics in a pilot before broad rollout.

Skill erosion and over‑reliance​

If engineers become dependent on conversational prompts for routine checks, firms risk the slow erosion of base skills. Organizations must balance automation with intentional skill retention programs that preserve core judgement and verification capabilities.

Governance checklist for firms considering a similar Copilot​

  • Confirm legal and contractual allowances for indexing customer or project content.
  • Apply sensitivity labels and Data Loss Prevention policies before enabling connectors.
  • Scope a pilot to a non‑critical team with clear KPIs: time saved, remediation events, accuracy rate.
  • Require provenance: require agents to cite the specific doc/video/forum link used to compose the answer.
  • Designate mandatory human approvals for any AI suggestion that modifies design deliverables, approvals, or field instructions.
  • Monitor message consumption; set budget alerts and rate limits.
  • Keep a “prompt library” with approved system prompts and example queries to standardize behavior.
  • Run red‑team tests for prompt‑injection and adversarial inputs.

Bigger implications: from “learn CAD” to “talk to your assistant”​

The design office as a conversation layer​

Rosenmeyer’s prediction—that in five years engineers won’t learn CAD in the traditional way but will instead talk to assistants—captures a plausible trajectory. If software literacy becomes conversational rather than procedural, onboarding and role design will shift. Instead of long CAD training courses, firms may maintain curated Copilot personas and conversational playbooks that replicate institutional methods and standards. That shift could accelerate model‑based design adoption and reduce the friction of moving from 2D plan sets to data‑rich 3D models.

Preparing for “Physical AI” and construction automation​

Rosenmeyer links the teammate‑assistant concept to a broader horizon: as autonomous equipment and robotics arrive on job sites—what he calls Physical AI—the design and operation workflows must be fully digitized and integrated with AI to manage the infrastructure data layer. A future where excavators or pavers accept task commands from a digital twin will require consistent data schemas, auditable decision trails, and tightly governed agent actions. That’s both an opportunity (dramatic productivity gains) and a systemic risk (automation without governance leads to failure modes at scale).

Impact on jobs and skills​

This transformation reframes roles rather than simply eliminating them. The valuable engineering skills will migrate from operating menus and toolbars to specifying intent, validating AI recommendations, and supervising autonomous execution. New hybrid roles—AI‑fluent engineers, data stewards, and agent designers—will become critical. Firms that invest in reskilling early will have a competitive advantage in both talent attraction and project delivery.

Practical blueprint: how to pilot an engineering Copilot (step‑by‑step)​

  • Select a narrow use case: e.g., “How to create a corridor in OpenRoads Designer” or “ProjectWise: where to store revised submissions.”
  • Prepare curated content: export vendor docs, community FAQs, and short tutorial videos into a sanitized index.
  • Draft a system prompt: define persona, tone, guardrails (ask clarifying questions, always cite sources, flag high‑risk recommendations).
  • Build a Teams agent in Copilot Studio and ground it on the prepared content.
  • Run a 4–8 week pilot with volunteers; instrument metrics: average time to answer, number of human validations, and user satisfaction.
  • Add provenance and an “explain like I’m a senior engineer” toggle for high‑risk answers.
  • Iterate prompts and grounding sources based on errors and user feedback.
  • Document governance rules and roll out to a second cohort once safety metrics are stable.

Critical perspective: what’s overhyped and what’s legitimately new​

  • Overhyped: the idea that Copilot agents will replace engineers in design decisions. The immediate value is in lowering friction and surfacing institutional knowledge; critical engineering judgments and liability‑bearing approvals remain human responsibilities.
  • Legitimately new: the speed at which domain experts can prototype and deploy role‑based agents using low‑code platforms. The ability to combine structured vendor docs with community knowledge in a single ground‑truth mechanism is practically transformative for onboarding and troubleshooting.
  • Hard to verify: company‑level productivity claims should be evaluated with careful measurement design. Rosenmeyer’s 100‑hour saving is credible as a personal metric; firm‑wide ROI requires broader, independent validation.

Strategic recommendations for engineering and IT leaders​

  • Treat Copilot Studio pilots as data projects not only UI projects: content hygiene and indexing quality determine assistant usefulness.
  • Prioritize provenance: insist agents always provide traceable citations back to the document, video, or thread used to compose answers.
  • Design human checkpoints for safety‑critical outputs and enforce auditable sign‑offs before any AI‑suggested change becomes contractual or field‑directed.
  • Monitor economics and message consumption; forecast costs for high‑volume teams and consider message packs or governance limits.
  • Build a cross‑functional center of excellence that includes engineering leads, IT, legal, and data governance to scale successful pilots safely.

Conclusion​

Rosenmeyer’s Copilot for Bentley is a practical, replicable demonstration of how AI agents can reduce friction in expert software use, producing immediate time savings and enabling faster onboarding for model‑based design workflows. The story is important not because it introduces an entirely new technology, but because it shows a low‑cost, scalable pattern—prompt engineering, curated grounding, and Teams deployment—that other firms can replicate today. At the same time, the case highlights the perennial tradeoffs of agentic AI: governance, provenance, consumption costs, and the need to preserve human judgement where it matters most. If firms get the data hygiene, validation layers, and human checkpoints right, these assistants will be a practical productivity multiplier; if they don’t, they risk false confidence and brittle automation at scale. The responsible path is clear: pilot fast, measure carefully, govern tightly, and treat agents as partners—powerful helpers that extend, but do not substitute, professional engineering judgement.

Source: Construction & Property News Engineer Builds AI Agent to Master Software, Saving Hundreds of Hours and Hinting at the Future of Infrastructure Design - Construction & Property News