
Tony Lin’s argument is simple and urgent: artificial intelligence is not a passing trend for accounting and business schools — it is a structural shift that demands rethinking curriculum, assessment, and the day‑to‑day practice of accounting.
Background / Overview
Artificial intelligence has long been part of the technological backdrop of business; its formal roots trace back to the 1950s, and its public profile surged again with the 2022 arrival of consumer-friendly generative systems such as ChatGPT. The Dartmouth summer workshop in 1956 is widely regarded as the field’s founding moment, while ChatGPT’s November 2022 release kicked off a new wave of mainstream adoption and organizational experimentation. Rowan University assistant professor Tony Lin captures both facts in a single, useful frame: AI has a deep history, but generative and agentic capabilities are the developments that have made AI visible — and operational — in the everyday workflows of firms and classrooms. Lin links probabilistic language models to the accountant’s core job of gathering evidence and telling a coherent story: with the right data and governance, AI can answer routine questions about performance, profitability and outlook, and can even automate multistep workflows via agentic systems.
Why this matters to business and accounting programs
Accounting is fundamentally an evidence-based narrative: collect invoices, verify entries, reconcile differences, and present a story that stakeholders can act upon. That work is both data-intensive and rules-driven — precisely the space where AI is showing early practical impact.
- Automation of routine tasks: AI can automate invoice capture, matching, reconciliations and routine journal entries, freeing human accountants to focus on judgment, exceptions and advisory work. Professional bodies have moved rapidly to support practitioners with training and toolkits to adopt AI safely.
- Faster analysis and forecasting: Large language models (LLMs) and retrieval-augmented systems accelerate narrative synthesis and scenario analysis, enabling finance teams to produce faster quarterly commentary and rolling forecasts.
- New governance and compliance demands: Using LLMs in audited or regulatory contexts introduces auditability, provenance and model‑risk questions that accountants must be able to evaluate and document. Professional guidance and symposia now center these concerns.
What Lin is saying — unpacked
Tony Lin’s comments in the Rowan Today profile emphasize three connected ideas:
- AI’s deep history and recent acceleration. Lin reminds educators that AI’s intellectual lineage goes back decades, even if generative systems made the technology a household phrase only recently. Contextualizing AI historically helps avoid faddish approaches and encourages measured curriculum design.
- Concrete utility in accounting. Lin argues that accounting’s evidence‑based outputs — invoices, orders, audits, reconciliations — are perfect targets for probabilistic models and retrieval systems that transform raw documents into explanations and forecasts. This is not speculative; accounting organizations and vendors are already piloting and productizing these exact workflows.
- Agentic AI as a force multiplier. Lin uses the metaphor of four research assistants performing discrete tasks (literature review, data collection, analysis, summary) to show how a single agentic AI platform can orchestrate multistep processes. Early agentic systems such as Auto‑GPT and other agent frameworks demonstrated the idea: chain tasks, use tools, iterate until a goal is reached — albeit with a lot of human supervision still required.
Agentic AI: promise and reality
What is “agentic AI”?
Agentic AI refers to systems that can plan and execute multi‑step tasks with some independence — composing calls to APIs, retrieving documents, running analyses, and producing outputs without requiring a new prompt at each step. Public interest in the term spiked when open‑source projects like Auto‑GPT and BabyAGI demonstrated the pattern of “think → act → observe → plan” in April 2023, generating both excitement and caution.
Where agents help in accounting and finance
- Aggregating and reconciling data across ERP, CRM and email to identify exceptions for human review.
- Preparing first‑draft financial narratives and variance analyses that accountants then verify and finalize.
- Automating repetitive tax provision calculations and flagging unusual adjustments for deeper inquiry.
- Orchestrating due diligence tasks in M&A scenarios where many documents, stakeholders and checklists must be coordinated.
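The “think → act → observe → plan” loop behind these use cases can be reduced to a short sketch. The reconciliation tool, the ledger data, and the stopping rule below are invented for illustration; a production agent would delegate planning to an LLM and call real connectors, but the bounded loop and the audit trail are the essential pattern.

```python
# Minimal sketch of a plan -> act -> observe agent loop.
# The "tool" and its data are hypothetical; only the loop shape is the point.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    log: list = field(default_factory=list)  # audit trail of every step taken

    def plan(self, observations):
        # Toy planner: stop as soon as a reconciliation exception is found.
        if any("exception" in o for o in observations):
            return None  # goal reached
        return "scan_ledger"

    def act(self, step):
        # Toy tool call: flag ledger rows that disagree with the bank feed.
        ledger = {"INV-001": 100, "INV-002": 250}
        bank = {"INV-001": 100, "INV-002": 245}
        mismatches = [k for k in ledger if ledger[k] != bank.get(k)]
        return f"exception: {mismatches}" if mismatches else "no exception"

    def run(self, max_steps=5):
        observations = []
        for _ in range(max_steps):  # bounded loop: agents must not run forever
            step = self.plan(observations)
            if step is None:
                break
            result = self.act(step)
            observations.append(result)
            self.log.append((step, result))
        return observations

agent = Agent(goal="find reconciliation exceptions")
print(agent.run())  # the flagged mismatch goes to a human for review
```

Note the two deliberate constraints: `max_steps` caps runaway iteration, and `log` records every step so the run is auditable after the fact — both recurring requirements in the governance discussion below.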
Limits and current reality
Early agentic implementations are promising but fragile. They routinely need human oversight to prevent hallucinations, handle edge cases, and enforce compliance. Many open‑source and early commercial prototypes required substantial orchestration, hard-coded rules and human approvals — they are not yet reliable stand‑alone professionals. The industry’s consensus is pragmatic: agents can augment workflows but must be scoped, permissioned and auditable.
Evidence and verification: cross-checking Lin’s practical claims
Three key factual claims from Lin’s remarks deserve explicit verification.
- Claim: “AI dates back to the 1950s.” Verified: The Dartmouth Summer Research Project (1956) and earlier foundational work (Alan Turing’s 1950 paper) are the standard historical anchors for AI’s origin.
- Claim: “Generative AI became a buzzword in 2022 with the rise of systems like ChatGPT.” Verified: ChatGPT’s public release in late November 2022 triggered a rapid adoption cycle and broad public awareness. The November 2022 launch is well established in contemporary reporting and product histories.
- Claim: “Agentic AI can combine multiple research tasks into a single platform.” Verified with caveats: early agent examples (Auto‑GPT, BabyAGI) and commercial activity demonstrate the architectural pattern for chaining tasks; however, real-world deployments typically require additional tooling, governance, and human‑in‑the‑loop controls. In short: agentic concepts are real, but production maturity varies and needs strong oversight.
Curriculum implications: what should business schools teach?
Business schools must balance two priorities: preserve core accounting knowledge and develop AI‑literate practitioners. A practical curriculum blueprint follows.
Core components to add or strengthen
- Prompt engineering and model literacy: teach students how to craft prompts, use retrieval‑augmented generation (RAG), and evaluate model outputs for accuracy and bias.
- Model risk and auditability: train students to document data provenance, maintain audit logs, and create human‑in‑the‑loop signoffs for finance outputs.
- Agent choreography: exercises in orchestrating multi‑agent workflows, defining agent roles (researcher, data retriever, analyzer, summarizer) and building escalation paths.
- Ethics, regulation and professional responsibility: coverage of confidentiality, client consent, and sectoral compliance (tax, audit standards).
- Hands‑on tool access: partnerships and sandboxes that grant students time-limited access to enterprise tools (Copilot, cloud AI services) under controlled data conditions. Professional organizations and accelerator programs are already building this practical infrastructure for firms and schools.
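An “agent choreography” classroom exercise can start from something as small as the following sketch: four hypothetical roles (researcher, data retriever, analyzer, summarizer) handing work down a pipeline, with an escalation rule that routes risky findings to a human instead of the summarizer. The role functions, data, and 20% threshold are invented for illustration; in coursework each role would be backed by an LLM or a real data connector.

```python
# Toy four-role pipeline with an explicit escalation path.
# All roles, data and thresholds are illustrative assumptions.

def researcher(question):
    return {"question": question, "sources": ["10-K filing", "trial balance"]}

def data_retriever(task):
    task["rows"] = [{"account": "revenue", "q1": 120, "q2": 90}]
    return task

def analyzer(task):
    row = task["rows"][0]
    change = (row["q2"] - row["q1"]) / row["q1"]
    task["finding"] = {"account": row["account"], "change": change}
    # Escalation rule: swings over 20% go to a human, not straight to drafting.
    task["escalate"] = abs(change) > 0.2
    return task

def summarizer(task):
    f = task["finding"]
    return f"{f['account']} moved {f['change']:+.0%} quarter over quarter"

def run_pipeline(question):
    task = analyzer(data_retriever(researcher(question)))
    if task["escalate"]:
        return "ESCALATED to human reviewer: " + summarizer(task)
    return summarizer(task)

print(run_pipeline("Why did revenue change?"))
```

Students can then be asked to justify where the escalation threshold sits and what governance artifact (decision log, sign-off record) each handoff should produce.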
Pedagogy and assessment changes
- Replace some traditional take‑home essays with AI‑augmented portfolios: students must show what prompts they used, how they validated outputs, and why final conclusions are defensible.
- Emphasize human verification artifacts: model cards, decision logs, and incident playbooks should become deliverables in capstone projects.
- Use collaborative, cross‑functional projects where students build an agentic workflow, document the governance controls, and measure outcome quality.
Risks and governance: where leaders should be worried
Tony Lin and multiple professional and industry analyses flag the same classes of risk: hallucinations and overconfidence, data leakage, model bias/opacity, vendor lock‑in, and regulatory exposure.
- Hallucinations and factual errors: LLMs can produce plausible but incorrect outputs; unchecked use in financial reporting or tax advice can cause material misstatements. Institutions need verification playbooks and human signoffs.
- Data security and leakage: agentic systems that access email, drive shares, and ERPs broaden the attack surface. Data classification, least privilege access, and tenant‑level controls are essential before production rollout.
- Bias and explainability: models trained on opaque corpora can encode and amplify biases. For regulated financial and forensic contexts, model cards, audit trails and explainability checks are now being recommended by professional bodies.
- Concentration and vendor lock‑in: deep integration with a single productivity stack (vendor‑supplied agent runtime, memory store and connectors) increases switching costs. Design for portability and open standards where possible.
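The least‑privilege and auditability points above can be made concrete with a small sketch: each agent holds an explicit permission scope, and every connector call is checked and logged before any data is touched. The scope names, agents and connectors are invented for illustration, not a real product API.

```python
# Sketch of least-privilege checks for agent tool access, with an audit log.
# Agent names and permission strings are hypothetical.

audit_log = []

AGENT_SCOPES = {
    "invoice-agent": {"erp:read"},
    "drafting-agent": {"erp:read", "docs:write"},
}

def authorize(agent, permission):
    """Check a connector call against the agent's scope and log the decision."""
    allowed = permission in AGENT_SCOPES.get(agent, set())
    audit_log.append({"agent": agent, "permission": permission, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent} lacks {permission}")
    return True

authorize("invoice-agent", "erp:read")        # permitted: within scope
try:
    authorize("invoice-agent", "email:read")  # denied: email is out of scope
except PermissionError as e:
    print(e)
```

Denials are logged as well as grants, so the audit trail shows attempted out‑of‑scope access — exactly the evidence an auditor or model‑risk reviewer would ask for.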
Practical playbook for firms and educators
Below is an operational checklist that combines Lin’s classroom orientation with enterprise best practice.
- Define intent and risk appetite: pick one measurable business KPI (reduce invoice-processing time by X; lower manual reconciliation headcount by Y).
- Start narrow: pilot a human‑in‑the‑loop agent that performs a single task (e.g., extract invoice fields, suggest reconciling entries) and require human verification.
- Instrument everything: maintain audit logs, prompt histories, and cost telemetry for each agent action.
- Enforce data boundaries: classify data used for model tuning; enforce least privilege and non‑training guarantees where vendor support exists.
- Scale by orchestration: once the pilot shows durable value, design multi‑agent handoffs with explicit escalation rules and rollback procedures.
- Teach and hire for verification roles: create roles such as “AI steward,” “agent ops,” and “model verifier” with clear career ladders and compensation parity.
- Secure sandboxed tooling access for students with anonymized or synthetic datasets.
- Require governance deliverables as part of technical assignments.
- Partner with professional bodies for guest lectures and certification pathways.
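The “start narrow” and “instrument everything” items can be combined in one pilot sketch: an agent proposes extracted invoice fields, every action is recorded in telemetry, and nothing posts without a named human approver. The field names, the regex extractor, and the telemetry shape are illustrative assumptions, not a specific vendor workflow.

```python
# Human-in-the-loop invoice pilot: propose, instrument, require sign-off.
import re

telemetry = []  # audit log / prompt history / cost telemetry, per the checklist

def extract_invoice_fields(text):
    """Propose invoice fields from raw text; a human verifies before posting."""
    matches = {
        "invoice_no": re.search(r"Invoice\s+#(\S+)", text),
        "total": re.search(r"Total:\s*\$?([\d.]+)", text),
    }
    proposal = {k: (m.group(1) if m else None) for k, m in matches.items()}
    telemetry.append({"action": "extract", "proposal": proposal})
    return proposal

def post_entry(proposal, approved_by=None):
    """Refuse to post unless a human reviewer has signed off."""
    if approved_by is None:
        raise RuntimeError("human sign-off required before posting")
    telemetry.append({"action": "post", "approved_by": approved_by})
    return {"posted": True, **proposal}

doc = "Invoice #A-17 ... Total: $412.50"
proposal = extract_invoice_fields(doc)
entry = post_entry(proposal, approved_by="j.doe")
print(entry)
```

The design choice to make `approved_by` mandatory mirrors the checklist: the human gate is enforced in code, not left to process documentation, and the telemetry list gives the pilot its measurement baseline.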
Strengths and opportunities
- Acceleration of high‑value work: AI reduces time spent on rote tasks and increases capacity for advisory and strategic work.
- Democratization of capabilities: small accounting teams and solo practitioners can access analytic and drafting capabilities previously reserved for larger firms.
- Pedagogical momentum: real-world AI projects in curricula create demonstrable employer value and improve graduate employability if curricula adapt quickly.
Weaknesses, blind spots and verification caveats
- Vendor metrics require scrutiny. Productivity claims published by vendors are directional; institutions should pilot and measure in their own environment. Many headline numbers come from self‑selected early adopters.
- Equity and access: institutions with fewer resources risk leaving students behind; equitable access to enterprise sandboxes matters.
- Environmental and operational costs: agentic systems increase inference workloads and cloud spend; cost observability is essential.
What universities should measure (KPIs for curriculum change)
- Time‑to‑competency for new AI‑augmented tasks (weeks from introduction to baseline proficiency).
- Student portfolio quality: number of AI governance artifacts (model cards, audit logs, agent playbooks) produced per capstone.
- Employer readiness: percent of graduates hired into roles that require AI orchestration or verification skills.
- Ethical compliance readiness: number of students who can pass a compliance simulation requiring human sign‑off for high‑risk outputs.
A final, practical verdict
Tony Lin’s message is neither alarmist nor complacent. It is pragmatic: AI’s technical lineage is deep, its recent capabilities are transformational for routine accounting work, and preparing students for a world where they must orchestrate and verify AI agents is now a curricular imperative. Academic leaders should adopt a test‑and‑measure approach: start small with governance, instrument outcomes, iterate curriculum to emphasize verification and ethical use, and build employer partnerships that convert training into job readiness. The transition is manageable when approached as organizational redesign rather than a software licensing decision. Those who treat AI as a faculty and process problem — not merely a product purchase — will produce graduates who add immediate, verifiable value in the marketplace.
Recommended next steps for business schools and accounting programs
- Build a one‑semester, hands‑on module titled “Human + Agent: Applied Accounting Workflows” that requires students to produce an agentic workflow, governance artifacts, and a validated output.
- Establish partnerships with CPA bodies and cloud vendors to obtain sandboxed environments and cloud credits with strict data protections.
- Create faculty fellowships to bring practitioners into the classroom for co‑teaching and to keep course content current.
- Publish program KPIs annually to ensure outcomes are transparent and aligned with employer needs.
The future Lin envisions is not one in which AI replaces accountants, but one where accountants who can supervise, verify and govern AI will be more valuable than ever.
Source: Rowan Today, “Tuning into the potential of AI in business”