Vibe coding is the practical, human‑driven practice of telling a modern AI what you want and letting it produce working automation, scripts, and small applications — and, as Brian Tankersley and Randy Johnston argued on the Accounting Technology Lab podcast, it is already changing how firms build internal “digital plumbing.”
Overview
Vibe coding is not a formal programming methodology; it’s a working name for a cluster of behaviors enabled by large language models (LLMs) and code‑trained assistants. At its core it replaces the old model of “hire a developer, wait months, receive a deliverable” with a conversational, iterative loop: describe an objective in plain language (often with detailed examples), receive generated code or automation, run and test it, then iterate on errors or desired changes.
This approach leans on two technical realities:
- Modern LLMs are trained on massive corpora of code and documentation, and can reliably generate idiomatic code snippets, integration glue, and API calls. GitHub Copilot, for example, explicitly targets code generation and agentic coding workflows.
- Productivity suites and automation platforms (Microsoft 365, Power Automate, many cloud APIs) now surface model integrations and sanctioned connectors so AI can reason over authorized enterprise data and orchestrate multi‑step workflows. Microsoft’s Copilot family has extended model choice and agent tooling to enterprise customers, and Copilot Studio lets organizations build, test, and control agents.
Brian Tankersley walked listeners through concrete firm‑level examples on the Accounting Technology Lab episode: flattening hierarchical JSON for Excel, building a URL shortener that integrates with Power Automate, and mapping database fields for accounting system conversions — all created or assisted by AI rather than hand‑coded. Those granular examples capture exactly how vibe coding shifts who can deliver automation inside a practice.
What exactly is “vibe coding”?
The pattern, in plain language
- A person (often a non‑developer) defines a business problem in natural language, including sample data and a description of the desired output.
- An LLM or code assistant returns a set of files, scripts, or step‑by‑step instructions that implement the task (Python scripts, Power Automate flows, Power Platform components, shell scripts, small web endpoints).
- The user runs the generated code in a controlled test environment, captures runtime errors or behavior, and posts the outputs back to the assistant.
- The assistant iterates until the outputs meet the business requirement.
This is not “no work.” It does require systems thinking, basic familiarity with editors/IDEs, and the discipline to supervise results. But it dramatically reduces the friction to prototyping and producing internal automations.
How it differs from low‑code/no‑code
Low‑code/no‑code platforms provide graphical building blocks and deliberate constraints. Vibe coding sits beside those platforms: it gives users the ability to generate custom code or connectors when low‑code blocks don’t exist, and it turns natural‑language prompts into executable code that ties systems together. In practice, firms will mix and match: use low‑code screens and AI‑generated glue where needed.
Why accountants should pay attention now
- Rapid prototyping: Internal utilities that used to take days or weeks (flattening messy input files, automating link creation, generating engagement letters) can be assembled in hours. Brian’s URL shortener + Power Automate example showed how editorial work can be eliminated by combining an AI‑generated integration with a form trigger and a flow.
- Expanded capability without deep hiring: Small firms that can’t afford full‑time developers can create practical “digital plumbing” that moves data between systems and automates repetitive tasks.
- Lower experimentation cost: Because the output is disposable and inexpensive to iterate, staff can “embrace the suck” — try, break, refine — to find practical automation wins.
- Strategic uplift: After a few successful internal projects, firms can standardize patterns and migrate matured automations into production‑grade builds with developer oversight.
These upsides are already visible in practice across accounting automation projects and early Copilot deployments in business users’ workflows. At the same time, Microsoft and other vendors are adding model choices and connectors that make it safer to let sanctioned AI read authorized content.
Real use cases from the podcast — what they look like in practice
1) Flattening JSON for Excel
Problem: Public websites or APIs often return hierarchical JSON (arrays nested under objects). Analysts need a flat table (one row per city with its state) for Excel, Power Query, or simple CSV exports.
Vibe coding approach:
- Provide the hierarchical sample and the desired flattened columns to the assistant.
- Request a Python script (pandas) or Power Query M code that flattens arrays into rows and repeats parent keys down the rows.
- Run the script locally or in a sandbox. If errors occur (schema edge cases, encoding issues), paste the error into the assistant’s thread and ask for fixes.
Why it matters: It turns a common data‑prep bottleneck into a repeatable, auditable script that non‑developers can maintain. Brian used exactly this technique to convert a directory of firms and cities into a flat spreadsheet.
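As a concrete illustration, a generated flattening script often looks like the sketch below. The nested structure, field names, and output columns here are illustrative assumptions, not the actual data from the episode:

```python
import pandas as pd

# Illustrative sample: states at the top level, cities nested in arrays.
data = [
    {"state": "Tennessee", "cities": [{"name": "Nashville", "id": 1},
                                      {"name": "Memphis", "id": 2}]},
    {"state": "Kansas", "cities": [{"name": "Wichita", "id": 3}]},
]

# json_normalize expands each nested city into its own row and
# repeats the parent "state" key down the rows.
df = pd.json_normalize(data, record_path="cities", meta=["state"])
df = df.rename(columns={"state": "State", "name": "City", "id": "CityID"})

# utf-8-sig prepends a BOM so Excel opens the CSV with correct encoding.
df[["State", "City", "CityID"]].to_csv("cities_flat.csv", index=False,
                                       encoding="utf-8-sig")
```

If the real feed has missing keys or deeper nesting, pasting the traceback back into the assistant and asking for error handling is exactly the iteration loop described above.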
2) URL shortener + Power Automate integration
Problem: Long ceremony‑laden links (YouTube, vendor portals) are difficult to use in marketing and client communications. The firm wants a branded short domain and automatic creation of short links from a spreadsheet or form.
Vibe coding approach:
- Use an open‑source URL shortener as the backend (or ask the AI to generate one).
- Ask the assistant to write a Power Automate flow that:
- Accepts a Microsoft Form or Excel row with a long URL and alias.
- Calls the shortener API (with authentication).
- Returns the short link and stores it in a list or Excel file.
- Iterate until the flow reliably processes submissions end to end.
Why it matters: Time savings on bulk publishing and better brand control. Brian described a workflow that created hundreds of episode links automatically, saving hours of manual work.
3) Mapping database fields for accounting system conversions
Problem: Migrating from Sage 100 (on‑prem ERP) to Sage Intacct (cloud ERP) requires mapping schema differences: chart of accounts, customer records, open AR, and so on.
Vibe coding approach:
- Provide table/field descriptions (or export samples) from the source and a target template.
- Ask the assistant to suggest field mappings and SQL or ETL steps to extract, transform, and stage data.
- Produce scripts or API call skeletons to load staged CSVs into the target system via API.
Why it matters: System conversions are historically complex and expensive. Using AI to accelerate schema discovery and mapping cuts initial analysis time and produces a tested list of fields to reconcile — not a finished migration, but a starting point. Tankersley reported that different LLMs surfaced different field sets (e.g., 51 vs. 77 fields), underscoring the need for human review.
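In code form, the reviewable artifact is usually a mapping table plus a gap report. A sketch, assuming illustrative (not verified) Sage 100 and Intacct field names:

```python
# AI-suggested mapping, Sage 100 field -> Intacct field. Every entry here is
# illustrative and must be human-reviewed before any data is migrated.
FIELD_MAP = {
    "ACCTCODE": "ACCOUNTNO",
    "ACCTDESC": "TITLE",
    "ACCTTYPE": "ACCOUNTTYPE",
}

def map_row(source_row: dict) -> tuple[dict, list[str]]:
    """Translate one source record; return (mapped row, unmapped fields)."""
    mapped, unmapped = {}, []
    for field, value in source_row.items():
        if field in FIELD_MAP:
            mapped[FIELD_MAP[field]] = value
        else:
            unmapped.append(field)  # surface gaps for human review
    return mapped, unmapped

row, gaps = map_row({"ACCTCODE": "4000", "ACCTDESC": "Sales",
                     "SEGMENT2": "East"})
# gaps now holds ["SEGMENT2"] -- the kind of discrepancy the 51-vs-77
# field comparison between models exposed.
```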
Strengths and practical benefits (what firms gain)
- Speed: Prototypes that once took a developer sprints to produce can often be assembled in hours.
- Accessibility: Accountants, operations staff, and managers with domain knowledge (but not coding fluency) can own automation initiatives.
- Iterative learning: Conversational feedback loops let teams learn by doing, accelerating internal capability building.
- Cost efficiency: Reduced external contracting for many repetitive or one‑off automations.
- Composability: Small automations become reusable bricks in a larger automation architecture (mortar) — connecting CRM, document storage, and reporting spreadsheets.
Risks, failure modes, and why you must be cautious
Vibe coding lowers the barrier to production code, which creates familiar but serious risks:
- Data leakage and model telemetry: Sending sensitive client data to public models (or misconfigured enterprise models) can leak PII or confidential financials. Even enterprise copilots have had security incidents and misconfigurations; firms must treat AI outputs as potentially fallible and control what data models can see. Recent product incidents illustrate the real risk of model exposure to confidential content.
- Hallucinations and inaccurate logic: Models sometimes invent field mappings, API parameters, or SQL that looks plausible but is wrong. Human validation and automated test suites are essential.
- Security for public‑facing code: Tankersley recommended keeping early vibe‑coding outputs internal (engagement letters, internal URL shorteners, data mappings) until they have developer review and security hardening. Public‑facing endpoints expose a much broader threat surface.
- Governance gaps: Who owns the generated code? How is it versioned, audited, and deployed? Without DevOps controls, firms can accumulate brittle scripts that break silently.
- Compliance and professional liability: CPAs and firms remain responsible for the accuracy and security of client work products. AI assistance does not absolve professional responsibilities.
- Vendor and model diversity: Different LLMs produce different outputs (as the Sage example showed). That variability requires cross‑checking and sometimes keeping multiple model options available. Microsoft has begun supporting multiple model choices in Copilot, explicitly enabling Anthropic’s Claude alongside OpenAI models — a practical signal that model choice matters.
Practical guardrails: A minimally sufficient governance checklist
- Establish an “AI sandbox” environment separate from production systems.
- Prohibit copying client PII into public LLMs without redaction or explicit legal approval.
- Require human review for all generated code before any public deployment.
- Log prompts, model responses, and iterations as part of an audit trail.
- Use enterprise offerings (Copilot, Copilot Studio, controlled MCP connectors) where model access and entitlements are centrally managed.
- Add automated validation tests for data migrations and conversions; fail fast on schema mismatches.
- Maintain a small developer review pipeline for security hardening and to refactor successful scripts into maintainable code.
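The “fail fast on schema mismatches” item can be as small as an assertion run before any load. A minimal sketch, with illustrative target column names:

```python
# Illustrative required columns for staged GL rows.
EXPECTED_COLUMNS = {"ACCOUNTNO", "TITLE", "ACCOUNTTYPE"}

def validate_staged_rows(rows: list[dict]) -> None:
    """Raise immediately if any staged row is missing required columns."""
    for i, row in enumerate(rows):
        missing = EXPECTED_COLUMNS - row.keys()
        if missing:
            raise ValueError(f"row {i}: missing columns {sorted(missing)}")

validate_staged_rows([
    {"ACCOUNTNO": "4000", "TITLE": "Sales", "ACCOUNTTYPE": "incomestatement"},
])  # passes silently; a bad row would stop the migration before any load
```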
How to run a safe, measurable vibe‑coding pilot (step‑by‑step)
- Scope a low‑risk internal use case (example: a branded URL shortener or an engagement‑letter generator).
- Create a dedicated sandbox tenant or environment, and restrict data to anonymized or synthetic samples.
- Select two model pathways: (a) internal enterprise Copilot/approved model; (b) an external general LLM for comparison. Track differences in outputs and error rates. Microsoft’s expanded model choice can be used here to evaluate multiple models under one admin control plane.
- Write a detailed prompt template. Include:
- The business objective
- Sample inputs and desired outputs
- Target runtime environment (Python 3.x, Power Automate, Power Apps, etc.)
- Security constraints (no external calls with client data, no secrets in code)
- Iterate: run, capture errors, paste errors back into the same conversation thread, ask for fixes.
- Add unit tests or simple validation scripts to verify outputs.
- Have a developer or security reviewer audit before a production release.
- Measure: time saved, number of errors per iteration, and downstream maintenance burden.
Sample prompt patterns (practical templates)
Below are patterns — not copy/pasted commands — you can adapt in any model UI or Copilot Studio agent creation workflow:
- Problem to script selector: “I have a hierarchical JSON sample (attached) with states at top level and cities nested in arrays. Produce a Python 3.11 script using pandas to flatten into a CSV with columns [State, City, CityID]. Add robust error handling for missing fields and an option to write UTF‑8 BOM for Excel.”
- API connector builder: “Create a Power Automate flow that accepts a Microsoft Form with fields [alias, long_url], calls this authenticated REST POST endpoint to create a short link, stores the response in a SharePoint list, and returns the short link to the form submitter.”
- Conversion mapper: “Given the Sage100 chart_of_accounts export (sample rows) and the Sage Intacct GL API field list (sample), produce a field mapping table and a staged transformation plan to produce target CSVs suitable for Intacct’s import API. Highlight data issues we should validate before import.”
These patterns are the “prompts as code” of the vibe‑coding workflow: specific, example‑driven, and iteration friendly.
Where enterprise tools fit and why model choice matters
Large vendors are rapidly building features to make vibe coding safer and more enterprise ready. GitHub Copilot has evolved into an IDE‑centric coding agent that can run tests, generate PRs, and be controlled by organization policy. Microsoft has also expanded model choices in Microsoft 365 Copilot and Copilot Studio, enabling Anthropic’s Claude as a selectable reasoning engine for enterprise agents — an acknowledgement that different models bring complementary strengths.
The practical implication: firms should favor supported, auditable agent platforms that let admins control which models and connectors agents can use, while preserving logs and governance.
A candid look at the ethical and professional edge cases
- Auditability: Generative code can be opaque. Maintain prompt logs and code diffs. If a client‑reporting script misreports a balance, the firm must show how the output was generated and validated.
- Bias and fairness: AI may mishandle outlier data (unusual chart of accounts codes, legacy vendors), producing incorrect mappings. Always treat outputs as first drafts, not final audit‑ready transformations.
- Regulatory risk: Certain industries (financial services, healthcare) restrict data flows and outsourcing. Confirm contractual and regulatory constraints before using any external model.
Transition plan: from vibe coding prototype to production grade
- Prototype in the sandbox → Validate with tests → Developer hardening (security, error handling, observability) → Deploy behind an internal API or approved platform → Monitor with telemetry and maintain prompt/code repository.
This two‑track approach keeps the experimentation benefits of vibe coding while ensuring that persistent, client‑facing, or critical automations receive the engineering rigor they need.
Final recommendations for accounting firms
- Start small, internal, and measurable. Choose projects with clear KPIs (time saved, tasks automated).
- Use enterprise Copilot/Copilot Studio or approved VPS models where possible; make model choice an administrative decision.
- Insist on prompt and output logging — treat prompts as documentation.
- Build a lightweight “AI review board” with technical and compliance representatives to approve escalations to production.
- Train staff to read and iterate on generated code; consider a short internal apprenticeship where juniors work with senior reviewers.
- When in doubt, keep sensitive and public‑facing systems with traditional development until you can demonstrate hardened controls.
Conclusion
Vibe coding is not a silver bullet — but it is a practical lever that empowers accounting teams to build the connective tissue between systems, automate tedious steps, and accelerate conversion and onboarding work that used to bottleneck practices. The podcast discussion by Tankersley and Johnston captures the pragmatic, experimental spirit: use AI to build internal tooling fast, iterate on fixes, and bring engineering oversight in when a tool needs to become public or mission‑critical.
Technically, the plumbing is now within reach: GitHub Copilot and Copilot coding agents can produce and test code; Microsoft’s Copilot Studio and expanded model choices let administrators govern agent behavior; and standard platforms like Power Automate provide the connectors that make small automations valuable.
But success is not automatic. Firms must pair curiosity with controls: enable internal experimentation, require human validation, maintain auditable logs, and escalate to developers for production hardening. With those guardrails, vibe coding will be a durable skill in the accountant’s toolkit — an approachable way to "suck less" at automation, learn by doing, and relieve staff from repetitive tasks so they can focus on higher‑value advisory work.
Source: CPA Practice Advisor
Vibe Coding with AI - Accounting Technology Lab Podcast - Feb. 2026