Shive‑Hattery’s cautious, company‑wide embrace of AI — from Microsoft Copilot for every employee to specialist generative tools for early-stage design — shows how a mid‑sized engineering firm can accelerate routine work while keeping human judgment front and center. The Cedar Rapids firm’s experience, chronicled in a local Engineers Week feature, illustrates a pragmatic middle path: use AI to speed information access, transcribe and summarize meetings, and produce draft designs, but insist that experienced engineers validate outputs, curate data, and own client conversations.
Source: thegazette.com, "Local engineers embrace AI tools to boost productivity, streamline processes"
Background: why AI matters to engineering firms now
Engineering firms face two concurrent pressures: a chronic workforce shortfall and growing demand for faster, more integrated design and documentation workflows. Recent industry research by the American Council of Engineering Companies’ Research Institute quantifies a troubling structural gap — roughly 18,000 fewer engineers entering the workforce than exiting it each year — driven by retirement, retention challenges, and a shrinking pipeline of domestic graduates. That gap is already shaping hiring, project backlogs, and the search for productivity gains.

At the same time, advances in generative AI and conversational assistants (branded by major vendors as copilots or agentic platforms) have matured from research demos into practical productivity tools embedded in everyday office apps and specialized design workflows. Large firms such as PwC and WSP have publicly documented enterprise deployments of Microsoft Copilot and related Azure AI capabilities at scale, reporting millions of Copilot actions and measurable time savings across knowledge workflows. These deployments signal that AI adoption in professional services is moving from pilot to production—creating a template smaller engineering firms can follow, but also a set of governance, security, and talent questions they must address.
How engineering teams are using AI today
Common, high‑value use cases
Across interviews and deployments, several patterns repeat in productive, low‑risk AI usage:
- Knowledge retrieval and search: AI accelerates access to past designs, standards, and meeting minutes by surfacing relevant documents faster than manual search. Shive‑Hattery reports improved speed finding information, especially for onboarding and internal knowledge transfer.
- Transcription and summarization: Automatic meeting transcription plus concise summaries reduce administrative burden and create searchable records for later reference.
- Document drafting and style alignment: AI writing coaches help standardize proposals, reports, and client communications to firm style guides, letting senior engineers focus on technical judgment rather than editing.
- Early‑stage concept generation: Specialized generative tools can draft floorplans or layout options from input dimensions, enabling designers to quickly explore alternatives before detailed engineering. Shive‑Hattery explicitly distinguishes these architecture‑specific tools from general chatbots and treats them as ideation accelerants.
- Automation of repetitive tasks: From extracting tabular data to templating calculations and generating standard drawings or schedules, low‑risk automation frees staff to spend more time on analysis and client engagement. Enterprise case studies show measurable hours recovered when Copilot is well integrated into workflows.
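As a concrete illustration of the last point, extracting tabular data from free‑form meeting notes is the kind of low‑risk automation the article describes. The sketch below is a minimal, hypothetical Python example; the pipe‑delimited format, field names, and sample notes are assumptions for illustration, not any firm's actual templates.

```python
import re

def extract_schedule_rows(text):
    """Pull 'item | quantity | unit' lines out of free-form notes.

    Illustrative only: a real workflow would use a pattern matched
    to the firm's own minute and schedule templates.
    """
    rows = []
    for line in text.splitlines():
        m = re.match(r"\s*(.+?)\s*\|\s*(\d+(?:\.\d+)?)\s*\|\s*(\w+)\s*$", line)
        if m:
            item, qty, unit = m.groups()
            rows.append({"item": item, "quantity": float(qty), "unit": unit})
    return rows

notes = """
Discussed steel order for Phase 2:
W12x26 beams | 14 | ea
Anchor bolts | 120 | ea
Deck area | 3200.5 | sqft
"""

for row in extract_schedule_rows(notes):
    print(row)
```

Because the output is structured records rather than prose, a reviewer can spot-check quantities quickly, which keeps this squarely in the "low-risk automation" category.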
Where AI augments rather than replaces expert judgment
The most consistent refrain from experienced engineering leaders is that AI is a force multiplier for productivity, not a substitute for experience. AI can draft, search, and summarize; it cannot reliably replace client interviews, site understanding, or the contextual judgment that determines whether a design is appropriate for the community it serves. Shive‑Hattery’s leadership emphasizes “human in the loop” curation and pairing younger staff (who are AI‑savvy) with senior engineers (who apply judgment) as a repeatable adoption pattern.

Case study: Shive‑Hattery’s approach — democratize, experiment, govern
A company‑wide strategy, not an isolated pilot
Shive‑Hattery has taken a deliberate, inclusive approach: roll out Copilot and workplace AI broadly, encourage experimentation, and cultivate an internal culture of measured use. The firm’s CIO describes the goal as democratizing AI — giving everyone the same baseline tools so beneficial use cases can emerge organically across departments. This contrasts with a tightly restricted, role‑by‑role rollout and signals a belief that adoption friction is often cultural rather than technical.

Practical measures the firm uses
- Universal access to a common AI model to reduce platform fragmentation and simplify governance.
- Pairing juniors with seniors so AI tasks are scoped by less experienced staff and validated by experienced engineers.
- Using specialized generative tools for architecture concepting on top of general copilots — marking a pragmatic separation between ideation tools and core engineering models.
- Hiring for digital roles (e.g., Digital Solutions Specialist positions that explicitly reference Copilot Studio and low‑code/no‑code integration) to operationalize AI safely.
The measurable benefits — what data shows
Enterprise case studies and internal dashboards at large professional services firms report the kinds of productivity gains engineering teams should expect when AI is properly integrated:
- PwC reports billions of Copilot actions across tens or hundreds of thousands of users and cites recovered capacity in the hundreds of thousands of hours during peak months after scaled deployment.
- Focused deployments in technology integrators and AEC firms show reduced time on routine documentation, faster validation cycles in early engineering phases, and clearer knowledge retention for onboarding. These outcomes are consistent across vendor case studies and independent consulting analyses.
Known limitations and real risks
AI delivers utility, but it brings clear, research‑backed limitations engineers must treat as constraints rather than bugs to be worked around.

Hallucinations and overconfident errors
Large language models frequently generate plausible‑sounding but incorrect outputs — a phenomenon commonly called hallucination. Recent research demonstrates that LLMs can return confident but wrong answers even on carefully designed math or factual tasks, and adversarial or out‑of‑distribution prompts can sharply raise hallucination rates. Medical and technical domains are particularly vulnerable because an incorrect but convincing output can lead to dangerous downstream actions if not checked. Engineers must therefore verify AI outputs against primary sources and calculations.

Weaknesses in rigorous numerical reasoning
Multiple academic studies and practical evaluations show LLMs struggle with multi‑step arithmetic and rigorous symbolic reasoning unless augmented with calculation tools or step‑checking strategies. Techniques such as chain‑of‑thought prompting, tool‑augmented execution (e.g., calling a calculator or symbolic solver), and self‑consistency checks improve but do not eliminate errors. For safety‑critical engineering calculations, human verification — and, where appropriate, independent software tools certified for engineering computations — remains mandatory.

Data quality, privacy, and governance
AI systems are only as trustworthy as the data they access. Firms must ensure:
- Source provenance and version control for standards and codes.
- Tenant‑aware, enterprise governance when Copilot or agents are connected to firm repositories.
- Strong controls over what project data is exposed to third‑party models, including contractual and technical safeguards. Enterprise case studies highlight the centrality of Responsible AI governance when scaling Copilot across thousands of employees.
Legal and ethical exposure
Using generative tools without clear audit trails or human sign‑off can expose firms to professional liability. Engineering is a licensed profession: final stamped calculations and design decisions must remain the responsibility of licensed engineers. AI can create draft work, but statutes of professional practice and ethical standards require human ownership of final decisions.

Workforce implications: skills, roles, and hiring
New competencies every engineer should have
AI doesn’t require entrants to be model builders, but modern engineers should be fluent in a compact set of practical capabilities:
- Information literacy with AI: crafting effective prompts, validating outputs, and identifying when to escalate to human review.
- Data curation and provenance: knowing which internal repositories are authoritative and how to keep them clean and versioned.
- Tool orchestration: integrating copilots with document management, project schedules, and BIM/CAD environments.
- Ethical and legal awareness: understanding professional liability and privacy constraints.
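A small script can make the first of these competencies concrete: flagging every sentence in an AI draft that contains a numeric claim, so a reviewer checks each one against a primary source. This is an illustrative sketch, not an established verification standard; the sample draft and the simple sentence splitter are assumptions.

```python
import re

def flag_numeric_claims(draft):
    """Return sentences containing digits, for mandatory human review.

    A deliberately crude screen: it catches every numeric claim at the
    cost of some false positives, which is the right trade-off when the
    follow-up action is human review rather than automatic rejection.
    """
    sentences = re.split(r"(?<=[.!?])\s+", draft.strip())
    return [s for s in sentences if re.search(r"\d", s)]

draft = ("The beam spans 14.2 m. Loads follow ASCE 7. "
         "No deflection issues are expected.")

for claim in flag_numeric_claims(draft):
    print("VERIFY:", claim)
```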
Mentorship as an adoption multiplier
Pairing younger hires (comfortable with AI tools from university and personal use) with seasoned engineers (who apply judgment and client context) creates a practical, reliable learning loop. This model accelerates adoption while preserving quality control and transfers tacit knowledge that AI cannot replicate. Shive‑Hattery’s practice of cross‑generational pairing is a replicable pattern for other firms.

Governance and practical deployment checklist
To move from experimentation to reliable production use, engineering firms should treat AI adoption like any other technology modernization: plan for governance, security, and measurable outcomes.
- Define the value cases you want to augment (e.g., search, transcription, document drafting).
- Map data flows: what internal and external repositories will Copilot or agents access? Apply the principle of least privilege.
- Establish clear human‑in‑the‑loop rules: which outputs require senior review, which are auto‑accepted as drafts.
- Create an adoption playbook and training plan that includes prompt‑crafting basics, red‑flag behaviors (hallucination markers), and escalation paths.
- Instrument and measure: track Copilot actions, time saved, and error rates to support continuous improvement.
- Legal check: ensure licensing, client confidentiality, and professional liability are addressed before using AI in deliverables.
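The human‑in‑the‑loop rules in this checklist can be made explicit and auditable in code. The sketch below is a minimal illustration, assuming a firm classifies AI outputs into categories; the category and policy names are hypothetical.

```python
# Hypothetical review-routing rules: output categories map to the
# review an output must receive before it leaves the firm.
REVIEW_POLICY = {
    "meeting_summary":    "auto_accept_as_draft",
    "proposal_text":      "peer_review",
    "design_calculation": "licensed_engineer_signoff",
}

def required_review(output_category):
    """Fail closed: an unrecognized output type gets the strictest review."""
    return REVIEW_POLICY.get(output_category, "licensed_engineer_signoff")

print(required_review("meeting_summary"))    # auto_accept_as_draft
print(required_review("foundation_design"))  # licensed_engineer_signoff
```

Defaulting unknown categories to the strictest tier is the key design choice: new AI use cases start under full review and are only relaxed deliberately, which matches the "establish clear human‑in‑the‑loop rules" step above.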
Practical engineering workflows that work well today
- Use AI for exploratory tasks: draft layouts, alternative schematics, or initial materials research that accelerate human iteration cycles. Treat AI outputs as prototypes, not final designs.
- Connect Copilot to controlled knowledge bases (standards, previous projects) to improve retrieval accuracy; avoid feeding proprietary calculations without governance.
- Automate administrative chores (meeting minutes, document formatting, RFI tracking) to free licensed engineers for technical decision‑making.
- Employ automated numerical checks: when AI produces calculations, run the same numbers through independent engineering software or scripts before acceptance. This two‑step verification guards against plausible but incorrect outputs.
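The two‑step verification in the last bullet can be as simple as recomputing the number independently and comparing within a tolerance. A minimal Python sketch, using a standard simply‑supported beam formula purely for illustration (the inputs and the "AI‑reported" value are assumed):

```python
import math

def verify_against_independent_calc(ai_value, independent_value, rel_tol=1e-6):
    """Accept an AI-produced number only if an independent
    computation agrees within the given relative tolerance."""
    return math.isclose(ai_value, independent_value, rel_tol=rel_tol)

# Midspan bending moment of a simply supported beam under uniform
# load: M = w * L**2 / 8 (assumed example inputs).
w, L = 12.0, 8.0            # kN/m, m
ai_reported_moment = 96.0   # value an assistant might return, kN*m
independent = w * L**2 / 8

print(verify_against_independent_calc(ai_reported_moment, independent))
```

In practice the independent value would come from certified engineering software or a reviewed in‑house script, not from the same model that produced the number being checked.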
The ethics and community responsibility angle
Engineering is a public‑safety profession. Any tool that changes workflows must be evaluated against ethical obligations to clients, end users, and the community. That includes:
- Guarding against complacency where AI outputs are accepted uncritically.
- Ensuring that AI enables more equitable service delivery rather than creating opaque, unexplainable designs.
- Investing regained productivity into better client engagement, resilience planning, and community‑centered design rather than simply driving billable hours.
Where to watch next: technology and policy trends that will matter
- Model‑assisted verification tools: expect a surge in products that combine LLMs with verified numerical engines, symbolic solvers, and provenance tracking to reduce hallucination risk for technical work. Recent academic work explores self‑consistency checks and pedagogical chain‑of‑thought prompting to improve mathematical reliability.
- Enterprise governance frameworks: large rollouts from firms such as PwC and WSP show that responsible scaling is achievable but requires investment in policy, training, and tenant‑aware architectures. Smaller firms should borrow these governance patterns and adapt them to scale.
- Regulatory and licensure clarifications: professional societies and licensing boards will likely issue more detailed guidance about AI use in deliverables and stampable work. Engineering leaders should monitor and participate in those conversations.
Conclusion: AI as amplification, not replacement
The practical lesson emerging from Shive‑Hattery’s local example and from large enterprise deployments is simple and actionable: use AI to remove routine friction, but keep experienced engineers responsible for judgment, context, and client care. AI can speed retrieval, draft communications, and even sketch design options, but it cannot replace the conversations that define a project’s constraints or the professional accountability that signs off on engineered work.

Adopted thoughtfully, with clear governance, paired mentorship, and technical safeguards for numerical reliability, AI becomes a productivity multiplier that helps firms meet the twin challenges of a talent shortage and a rising project load. That is the balanced promise engineering leaders should pursue: smarter processes that let humans do the parts of engineering machines still can’t — understanding people, protecting communities, and applying professional judgment.