AI@Carson Workshop: Practical Prompt Engineering for Business Students

Washington State University’s Carson College of Business made a focused, practical entry into campus AI education this week with its inaugural two‑day AI@Carson Workshop, an immersive bootcamp that combined prompt‑engineering labs, ethics case studies, and discipline‑specific sessions designed to move business students from curiosity to competence in working with generative AI.

Background

Washington State University’s Carson College has for several years signaled a strategic push to integrate technological fluency into business curricula. The AI@Carson Workshop brought that strategy into an active‑learning format: over two days at the Spark Academic Innovation Hub, roughly sixty students from across the college participated in hands‑on labs and faculty‑led breakouts aimed at teaching not only how to use generative AI tools, but how to evaluate and supervise their outputs responsibly.
The program was organized and launched by faculty in the Department of Management, Information Systems, and Entrepreneurship, with explicit goals that mirror broader higher‑education priorities: build AI literacy, teach practical prompt engineering and tool‑use, and cultivate a critical, ethical mindset toward large language models (LLMs) and AI agents. Students completing the workshop received certificates attesting to competency in AI literacy and responsible use.

What the workshop did — a quick recap

  • Two full days of immersive, instructor‑led activities in an active learning space.
  • A kickoff on prompt engineering with hands‑on practice and meta‑prompting techniques.
  • An ethics workshop focused on misinformation, bias, and unintended consequences of over‑reliance on generative AI.
  • Discipline‑specific breakout sessions (marketing, accounting, finance, hospitality, HR, IT) that tied AI capabilities to real business workflows.
  • Demonstrations of AI agents and how they can support repetitive tasks such as audit work or data summarization.
  • Certificates for participants who completed the training modules.

Why this matters: AI literacy in business education

Higher education is under intense pressure to produce graduates who can work with AI tools from day one. Employers increasingly expect familiarity with generative AI, Copilot‑style assistants, and basic prompt design; universities are responding by embedding short courses, workshops, and modules into undergraduate and professional programs.
The Carson College’s approach — short, intense, hands‑on training tied to domain examples — reflects best practices for adult learning and workplace readiness. The model emphasizes three practical outcomes:
  • Teach students to get useful results from today’s tools through better prompts and iterative refinement.
  • Teach students to recognize errors and biases and to verify outputs rather than accept them at face value.
  • Teach students to design workflows that pair AI output with human review and domain expertise.
These same outcomes are the core of professional prompt‑engineering and executive education programs now offered by many institutions, and they align with industry demand for graduates who can responsibly apply AI to business problems.

Prompt engineering: the new workplace literacy

What students learned

The workshop began with prompt engineering basics and progressed into meta‑prompting tactics — asking AI not just for answers but to help craft the next, better prompt. Instruction encouraged a mindset of experimentation: treat the model like a junior analyst or intern, iterate, give feedback, and use follow‑ups to refine outputs.
Key lessons emphasized to students (a short code sketch follows the list):
  • Be specific about inputs (role, constraints, desired format).
  • Use examples and counter‑examples to guide the model.
  • Chain prompts to break a problem into smaller, verifiable steps.
  • Record and reuse successful prompt patterns as templates.
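To make these lessons concrete, here is a minimal sketch of role framing, meta-prompting, and a two-step prompt chain against an OpenAI-compatible chat API. The model name, prompts, and helper function are illustrative assumptions, not materials from the workshop.

    # Minimal sketch: role framing, a meta-prompting step, and a two-step prompt chain.
    # Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment variable;
    # the model name and prompt text are illustrative, not workshop materials.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # hypothetical choice; any chat-capable model works

    def ask(prompt: str, role: str = "You are a careful junior business analyst.") -> str:
        """Send one prompt with an explicit role and return the model's reply."""
        response = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": role},
                      {"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Meta-prompting: ask the model to improve a rough prompt before using it.
    draft_prompt = "Summarize Q3 sales trends for a store manager."
    better_prompt = ask(
        "Rewrite this prompt so it specifies audience, length, output format, and the data "
        f"the answer must cite. Return only the improved prompt:\n{draft_prompt}"
    )

    # Chaining: run the improved prompt, then a follow-up pass that lists claims to verify.
    summary = ask(better_prompt)
    claims_to_check = ask(
        "List any claims in the following summary that should be verified against "
        f"the underlying data before it is shared:\n{summary}"
    )
    print(better_prompt, summary, claims_to_check, sep="\n\n")

The same pattern scales down to a chat interface: each step can simply be typed as a follow-up message, which is how the workshop's "treat the model like a junior analyst" framing plays out in practice.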

Why prompt engineering matters

Prompt engineering is rapidly evolving from a niche technical trick into a broadly demanded workplace skill. Well‑constructed prompts directly influence output quality, reduce hallucinations, and make AI tools more reliable for business tasks like summarization, draft generation, data extraction, and basic analysis.
Universities and executive education providers now offer certificate courses in prompt engineering — the practice is being taught as a concrete, repeatable skillset that combines clear writing, logical decomposition of tasks, and verification techniques. For business students, that means improved employability and immediate productivity gains when they enter roles that use generative AI in day‑to‑day workflows.

AI ethics and critical thinking: more than a checkbox

Ethics session highlights

A central thread of the workshop was ethical reasoning: small‑group scenarios forced students to confront real tradeoffs — misinformation, biased recommendations, privacy risks, and the reputational consequences of deploying unverified AI outputs in client or stakeholder communications.
Faculty framed generative AI as a “people pleaser” — systems optimized to produce plausible, persuasive text that may nevertheless be inaccurate. Students practiced critical evaluation techniques: source triangulation, red‑team prompts to expose weaknesses, and rule‑based checks for sensitive content.
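As a small illustration of what a rule-based check can look like in practice (an assumed example, not the workshop's own material), a pre-send filter can flag obviously sensitive strings before a draft ever reaches an external model:

    # Minimal sketch of a rule-based pre-send check for sensitive content.
    # The patterns and policy are illustrative assumptions; real deployments need
    # organization-specific rules and enterprise-grade data handling.
    import re

    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def flag_sensitive(text: str) -> list[str]:
        """Return labels for any sensitive patterns found in the text."""
        return [label for label, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    draft = "Email jane.doe@example.com the client's SSN 123-45-6789 before the audit call."
    issues = flag_sensitive(draft)
    if issues:
        print("Hold for human review before sending to an AI tool:", issues)

A filter like this only catches what it is written to catch, so it complements, rather than replaces, the source triangulation and red-team prompting the students practiced.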

The educational value

Ethics sessions are not an optional add‑on. For business learners, the ability to spot unreliable outputs and to design human‑in‑the‑loop controls is core to responsible deployment. The workshop’s case‑based approach mapped directly to managerial decisions students will face: when to accept an AI suggestion, when to escalate for human review, and how to disclose AI use in deliverables.

Breakouts by discipline: practical, field‑specific training

One strength reported from the event was the discipline‑specific breakout model. Instead of generic demos, students experienced AI applications in context:
  • Accounting and auditing: demonstrations of AI agents that automate repetitive checks, draft working papers, or summarize transaction histories — paired with warnings about the need for manual validation.
  • Marketing: using generative AI to create campaign drafts, audience segmentation suggestions, and A/B test ideas — with attention to brand voice and data privacy.
  • Finance: scenario exploration of AI in modeling, stress testing, and report drafting — with an emphasis on validation of numerical outputs.
  • Hospitality and service industries: using AI for personalized guest communication, dynamic content creation, and operational checklists.
  • Human resources: AI‑assisted job description drafting, candidate screening aids, and concerns about bias amplification.
  • Information systems: technical sessions on tooling, agent frameworks, and integrating LLMs into workflows.
Breaking content into discipline‑specific tracks made the training immediately actionable. Students left with workflows they could imagine applying to internships and early‑career roles.
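As a sketch of the pairing the accounting breakout stressed, where automated checks feed a manual-validation step, here is a minimal example; the field names, threshold, and sign-off rule are assumptions for illustration, not the agents demonstrated at the event.

    # Minimal sketch: deterministic audit checks feeding a human review gate.
    # Field names, thresholds, and the sign-off rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        txn_id: str
        amount: float
        approver: str

    def run_checks(txns: list[Transaction], limit: float = 10_000.0) -> list[str]:
        """Flag items that need manual validation before any AI-drafted summary is trusted."""
        flags, seen_ids = [], set()
        for t in txns:
            if t.amount > limit:
                flags.append(f"{t.txn_id}: amount {t.amount:,.2f} exceeds review limit")
            if not t.approver:
                flags.append(f"{t.txn_id}: missing approver")
            if t.txn_id in seen_ids:
                flags.append(f"{t.txn_id}: duplicate transaction id")
            seen_ids.add(t.txn_id)
        return flags

    transactions = [Transaction("T-001", 12_500.00, "kim"), Transaction("T-002", 830.00, "")]
    exceptions = run_checks(transactions)
    # An LLM could draft the working-paper narrative from these flags, but the draft
    # stays pending until a human auditor signs off on every flagged item.
    print("\n".join(exceptions) or "No exceptions found")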

Certificates and credentials: signaling competence

Participants who completed the two‑day program received certificates acknowledging completion and competency in AI literacy and responsible use. These short‑form credentials are increasing in value as employers seek candidates who can both use tools and supervise their output.
Certificates from a recognized college of business serve several functions:
  • Provide a visible credential for résumés and LinkedIn profiles.
  • Standardize a baseline of skills among students within a program.
  • Act as a first step toward deeper curricular integration (credit‑bearing courses or minors).
That said, short certificates are not a substitute for deeper technical understanding; they are a pragmatic complement to domain training when designed to emphasize judgment and verification.

Strengths of the AI@Carson approach

  • Experiential learning model: Active labs and role play emulate workplace conditions and accelerate skill acquisition.
  • Interdisciplinary orientation: Connecting AI capability to concrete business tasks helps students see relevance and applicability.
  • Ethics integrated with practice: Ethical reasoning is taught in the same session as tool practice — that pairing increases retention and real‑world readiness.
  • Faculty‑led, domain‑expert instruction: Faculty from accounting, marketing, finance, and IS delivered contextualized training that bridges theory and practice.
  • Credentialing: Certificates give students a tangible signal of competence that employers can recognize.
  • Human‑in‑the‑loop emphasis: The program repeatedly reinforced that AI is a collaborator — not a replacement — which aligns with safe deployment principles.

Risks, gaps, and areas to watch

No training program can eliminate AI risk entirely. Several potential gaps and hazards are worth highlighting:
  • Overconfidence and automation bias: Short demos that generate impressive outputs can create a false sense of safety. Students need repeated practice in verification under varied, adversarial conditions to counter automation bias.
  • Hallucinations and factual errors: LLMs can produce plausible but incorrect statements — the reported “models are wrong about 20% of the time” is a useful heuristic, but the reality is complex: hallucination rates vary widely by model, task, prompt, and evaluation method. Mitigation depends on model choice, grounding strategies, and robust verification workflows.
  • Data privacy and IP risk: Using proprietary datasets or past client information with public LLMs can create leakage or licensing issues unless institutions teach safe data handling and use of enterprise‑grade APIs with proper contracts.
  • Equity and access: Short, popular workshops may primarily reach already engaged students. Scaling access across cohorts and campuses is necessary to avoid creating an AI literacy divide among graduates.
  • Credential inflation: As many institutions begin issuing micro‑certificates, employers will need to distinguish between superficial badges and rigorous, assessed competencies.
  • Regulatory and compliance blind spots: Students may be left unprepared for sector‑specific regulatory requirements (finance, healthcare, legal) unless programs include compliance modules tailored to those domains.
Each of these risks can be mitigated with deliberate curricular design, stronger assessment, and alignment with institutional policy on data use and academic integrity.

Verifying core claims — what’s evidence and what’s context

The workshop’s core claims — practical prompt engineering, ethics training, discipline‑specific labs, and issuance of completion certificates — are consistent with best practices and what many universities are rolling out in response to employer demand.
Two important technical claims deserve explicit note:
  • “LLMs can be wrong about 20% of the time.” That figure functions as an illustrative estimate rather than a universal constant. Empirical studies and third‑party tests show wide variation: some benchmarks report hallucination rates in single‑digit percentages for high‑end models on narrow factual tests, while other tests and models yield error rates that exceed 20–30% on different question types or for smaller models. The takeaway for students is that error rates depend on the model, the prompt, and the task; the safe practice is to assume fallibility and verify critical outputs.
  • Prompt engineering materially improves outcomes. Academic reviews and industry training programs have documented that structured prompts, few‑shot examples, chain‑of‑thought decompositions, and meta‑prompts all measurably reduce error and improve relevance in many use cases. The pedagogical emphasis on iterative prompt refinement is therefore evidence‑based and appropriate.
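As one small illustration of the few-shot technique, here is a sketch in which worked examples steer the model toward a consistent label format; the classification task, examples, and model name are assumptions, not a documented workshop exercise.

    # Minimal sketch of a few-shot prompt: worked examples set the expected output format.
    # Task, examples, and model name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    FEW_SHOT = [
        {"role": "system", "content": "Classify customer feedback as Positive, Negative, or Mixed. Answer with one word."},
        {"role": "user", "content": "Checkout was fast and the staff were friendly."},
        {"role": "assistant", "content": "Positive"},
        {"role": "user", "content": "Great prices, but the app crashed twice during payment."},
        {"role": "assistant", "content": "Mixed"},
    ]

    def classify(feedback: str) -> str:
        """Append the new item after the worked examples and return the model's label."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=FEW_SHOT + [{"role": "user", "content": feedback}],
        )
        return response.choices[0].message.content.strip()

    print(classify("Delivery took three weeks and nobody answered my emails."))

Adding or swapping the worked examples is usually the fastest way to steer output format and tone without changing anything about the underlying model.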
Where specific numbers are cited (attendance size, exact wording of faculty quotes, certificate issuance), those are properly presented as the university’s reported facts from the workshop.

Practical recommendations for business programs

To make workshops like AI@Carson maximally effective and sustainable, business schools should consider these design patterns:
  • Build a scaffolded pathway: short workshops → for‑credit modules → capstone projects that require AI governance plans.
  • Make verification routine: require students to document source checks, red‑team prompts, and a validation checklist for any AI‑generated deliverable (a minimal checklist sketch follows this list).
  • Include legal and compliance training: even basic modules on data licensing, PII handling, and sector rules will reduce downstream risk.
  • Measure outcomes: collect pre/post tests, workplace placement feedback, and employer satisfaction to quantify the value of credentials.
  • Expand access: rotate workshops across cohorts and campuses, and provide asynchronous materials for students who can’t attend in person.
  • Partner with enterprise tools: where possible use enterprise APIs or on‑prem solutions that preserve data governance for assignments involving real business data.
  • Build faculty capacity: invest in faculty development so instructors can lead advanced sessions without outsourcing.
These steps will help shift AI literacy from one‑off events to integrated, accountable curricular innovation.
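One way to make the verification recommendation concrete is a small, structured checklist that travels with every AI-assisted deliverable. The fields and approval rule below are an assumed design, not a checklist the program prescribes.

    # Minimal sketch of a validation checklist record for an AI-assisted deliverable.
    # The fields and the approval rule are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class ValidationChecklist:
        deliverable: str
        sources_checked: list[str] = field(default_factory=list)
        red_team_prompts_run: int = 0
        facts_verified_by: str = ""   # name of the human reviewer
        ai_use_disclosed: bool = False

        def ready_to_submit(self) -> bool:
            """A deliverable clears review only when every item is satisfied."""
            return (bool(self.sources_checked)
                    and self.red_team_prompts_run > 0
                    and bool(self.facts_verified_by)
                    and self.ai_use_disclosed)

    checklist = ValidationChecklist(
        deliverable="Q3 market-entry memo",
        sources_checked=["company 10-K", "industry report"],
        red_team_prompts_run=2,
        facts_verified_by="student analyst",
        ai_use_disclosed=True,
    )
    print("Ready to submit:", checklist.ready_to_submit())

Requiring the disclosure flag also operationalizes the workshop's point about disclosing AI use in deliverables.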

What students should take away

  • Learn to prompt deliberately: precise instructions, role framing, and stepwise decomposition improve results.
  • Assume outputs are provisional: verify facts, check sources, and do not publish AI text without human review.
  • Think in terms of workflows: pair AI agents with human checkpoints rather than handing off end‑to‑end responsibility.
  • Focus on transferable judgment: the cognitive skills that make prompts work — clarity of thought, problem decomposition, and skepticism — are assets regardless of which tool is current.

Conclusion

The Carson College’s inaugural AI@Carson Workshop is a practical, well‑targeted model for how business schools can teach students to use generative AI responsibly. Its strengths are clear: hands‑on prompt labs, ethics integrated with practice, and discipline‑specific breakouts that translate technology into business value. The program also highlights broader institutional challenges — how to scale training, how to measure competence, and how to guard against overconfidence and privacy risk.
For universities, the imperative is to move beyond demos and build curricula that teach both how to get value from AI and how to manage the technology’s limitations. For students, the imperative is to develop judgment, the single most valuable skill when machines begin generating the work: knowing when an answer is useful, when it is wrong, and how to fix it. The AI@Carson Workshop is a promising prototype of that balanced, workplace‑ready approach.

Source: WSU Insider https://news.wsu.edu/news/2025/12/0...ural-ai-literacy-and-skill-building-workshop/