AI Literacy in Action: Carson College’s Two‑Day AI Workshop

The Carson College of Business recently staged a focused, two‑day immersion into practical AI literacy—an inaugural AI@Carson Workshop held Nov. 1–2 at WSU Pullman’s Spark Academic Innovation Hub that brought roughly 60 students together to learn prompt engineering, ethics, discipline‑specific AI workflows, and human‑in‑the‑loop verification practices.

Background​

Washington State University’s Carson College has joined a growing list of business schools moving beyond abstract conversations about artificial intelligence toward hands‑on, skill‑oriented literacy that prepares students to use, evaluate, and supervise generative AI tools in real workplace scenarios. The Spark Academic Innovation Hub provided a collaborative, active‑learning setting that matched the program’s experiential intent and the college’s stated goals to pair technical fluency with ethical judgment. This workshop is not an isolated experiment; it reflects a broader shift across higher education: short, intensive workshops and micro‑credentials are being used to bridge the gap between student curiosity and employer expectations for immediate AI competency. Institutional pilots, managed campus GPTs, and vendor-coordinated Copilot deployments are all part of this movement to embed AI in the student experience while attempting to control data governance and pedagogical risk.

What happened at AI@Carson: a day‑by‑day, hands‑on recap​

The program structure balanced technical practice, ethical reflection, and domain relevance.
  • Day 1: Kickoff session on prompt engineering and meta‑prompting; live labs for students to draft and iterate prompts against modern LLM interfaces.
  • Day 1–2: Ethics case studies and small‑group scenarios exploring misinformation, bias, and unintended consequences of over‑reliance on generative AI.
  • Day 2: Discipline‑specific breakouts (marketing, accounting, finance, hospitality, HR, information systems) led by faculty and graduate instructors; demonstrations of how AI agents and copilots can augment workflows.
  • Completion: Students who finished the program received certificates documenting AI literacy and responsible‑use competencies.
Faculty framed the core learning outcomes succinctly: teach students to craft better prompts, to treat model outputs as drafts that require verification, and to design human‑in‑the‑loop controls for domain‑sensitive tasks. As Robert Crossler, chair of WSU’s Department of Management, Information Systems, and Entrepreneurship and founder of the event, put it: the goal is to prepare students to “think about [AI] and understand what it does well, where it falls short and how to use it responsibly.”

Learning to craft better AI prompts: pedagogy and practice​

Why prompt engineering was front and center​

Prompt engineering—deliberately crafting the instructions given to large language models—is now a core workplace literacy for anyone using generative assistants such as Microsoft Copilot, OpenAI chat models, or Google’s Gemini. The Carson Workshop began here for a reason: better prompts materially improve output quality, reduce error rates, and make downstream verification simpler. Faculty taught students both practical prompt patterns (role framing, constraints, examples) and meta‑prompting techniques—asking the model to propose improved prompts or to check its own answers. Microsoft’s own guidance echoes the workshop’s emphasis: structured prompts with explicit context, examples, and expected output formats lead to more accurate, usable responses in Copilot and enterprise LLM deployments. That practice is applicable whether students are drafting marketing copy, summarizing financial statements, or crafting audit working papers.
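The structured‑prompt and meta‑prompting patterns described above can be sketched in a few lines of code. The `build_prompt` and `meta_prompt` helpers below are illustrative assumptions, not functions from the workshop materials or any vendor SDK:

```python
# Minimal sketch of the structured-prompt pattern: explicit role, task,
# context, examples, and expected output format. Field names are illustrative.

def build_prompt(role, task, context, examples=None, output_format=None):
    """Assemble a structured prompt string from its parts."""
    parts = [f"You are {role}.", f"Task: {task}", f"Context:\n{context}"]
    if examples:
        shots = "\n".join(f"Example input: {i}\nExample output: {o}"
                          for i, o in examples)
        parts.append(f"Examples:\n{shots}")
    if output_format:
        parts.append(f"Respond in this format: {output_format}")
    return "\n\n".join(parts)

def meta_prompt(draft_prompt):
    """Meta-prompting: ask the model to critique and improve the prompt itself."""
    return ("Review the prompt below. Point out ambiguities, then propose "
            "a revised version with clearer constraints.\n\n" + draft_prompt)

prompt = build_prompt(
    role="an entry-level financial analyst",
    task="summarize these meeting notes into a one-page executive briefing",
    context="Q3 revenue notes...",
    output_format="headline, three bullet points, cited sources",
)
```

The point of the template is repeatability: once a role, context, and output format produce reliable results, the same scaffold can be reused across assignments.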

Classroom tactics that worked​

  • Start with a clear role and outcome: “Act as an entry‑level analyst and summarize these notes into a one‑page executive briefing, citing sources.”
  • Use few‑shot examples and counter‑examples to steer model style and correctness.
  • Chain complex tasks into smaller verifiable steps (extract → verify → synthesize).
  • Save and reuse prompt templates that produced reliable outputs.
These are repeatable techniques that translate directly to workplace tasks: resume optimization, first drafts of client memos, data summarization, and idea generation. The workshop’s labs asked students to iterate quickly, treat outputs like intern drafts, and annotate places that needed human review—an approach Crossler summarized as training the model the way you’d train a junior analyst.
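The chain‑and‑annotate tactic above (small verifiable steps, with every output logged and treated as an intern draft awaiting review) might look like the following sketch. `call_model` is a hypothetical stand‑in for whatever LLM API a class actually uses:

```python
# Sketch of the extract -> verify -> synthesize chain: break a task into
# small steps, log each prompt/output pair, and flag every output for
# human review before it is accepted.

def call_model(prompt):
    # Placeholder: a real deployment would call an LLM API here.
    return f"[draft response to: {prompt[:40]}...]"

def run_chain(steps, log):
    """Run prompt steps in order, feeding each output into the next prompt."""
    result = ""
    for name, template in steps:
        prompt = template.format(previous=result)
        result = call_model(prompt)
        log.append({"step": name, "prompt": prompt,
                    "output": result, "needs_review": True})
    return result

log = []
steps = [
    ("extract", "Extract the key figures from the notes."),
    ("verify", "List claims in {previous} that need a source check."),
    ("synthesize", "Draft a briefing from the verified points: {previous}"),
]
final = run_chain(steps, log)
```

The log doubles as the prompt‑and‑output documentation the workshop asked students to submit, and the `needs_review` flag encodes the human‑in‑the‑loop expectation directly in the workflow.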

AI ethics and critical thinking: not optional​

One of the central threads of AI@Carson was ethical reasoning: small‑group scenarios probed misinformation, bias amplification, privacy leaks, and the reputational risks of deploying unverified AI outputs. Richard Johnson, associate professor, framed the models as “people pleasers” that return plausible‑sounding answers which are not always correct—an apt caution emphasizing evaluation, not blind trust.

Practical verification techniques taught​

  • Source triangulation: require multiple, independent references for factual claims.
  • Red‑team prompting: craft adversarial prompts to test model brittleness.
  • Rule‑based checks: apply deterministic tests for categories like PII, legal citations, or numerical totals.
  • Documentation: require students to submit prompt‑and‑output logs and a short reflection explaining verification steps.
These classroom practices mirror recommended institutional safeguards in many campus pilots and industry guidance: verification and provenance must be built into workflows rather than being afterthoughts.
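As one concrete illustration of the deterministic, rule‑based checks listed above, here is a minimal sketch of a PII scan and a totals reconciliation. The regex patterns are illustrative examples, not a complete PII taxonomy:

```python
import re

# Deterministic checks to run on model outputs before acceptance:
# a simple PII scan and a line-item totals reconciliation.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_pii(text):
    """Return the PII categories detected in a model output."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def totals_match(line_items, reported_total, tolerance=0.01):
    """Deterministic check: do the line items sum to the reported total?"""
    return abs(sum(line_items) - reported_total) <= tolerance

draft = "Contact jane.doe@example.com about invoice 114."
flags = scan_pii(draft)                     # ['email']
ok = totals_match([120.50, 79.25], 199.75)  # True
```

Because these checks are rule‑based rather than model‑based, they give the same answer every time, which makes them a reliable gate in front of probabilistic output.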

Discipline breakouts: tailoring AI to the field​

One of the workshop’s strengths was applying AI concepts to domain problems rather than staying generic. Faculty‑led breakouts demonstrated how the same underlying models require different guardrails, verification checks, and acceptance thresholds across fields.
  • Accounting & auditing: AI agents can automate tedious checks, draft working papers, and summarize transaction histories, but auditors must always validate source data and computation logic. Beau Barnes stressed that AI should enhance workflows, not replace judgment.
  • Marketing: Generative models are useful for campaign ideation and copy drafts but demand strict brand‑voice constraints and privacy filters when using customer data.
  • Finance: Scenario‑based modeling benefits from AI for draft narratives and stress‑test idea generation, but numeric outputs must be validated with deterministic formulas or reconciled against canonical datasets.
  • HR & Hospitality: AI speeds content personalization and candidate‑screening drafts, but these areas raise distinct bias, fairness, and customer‑experience risks.
Organizing instruction by discipline made the training immediately actionable for students who will apply these skills in internships and entry‑level positions.

Certificates and signaling: what a two‑day badge really means​

Participants who completed the two‑day training received certificates acknowledging competencies in AI literacy and responsible use. Such micro‑credentials do several pragmatic things: they standardize a baseline of skills within a cohort, give students a résumé‑ready artifact, and act as a launchpad for deeper, credit‑bearing curricular integration. Debbie Compeau, Carson College dean, framed the workshop as experiential training that supports technological fluency. However, micro‑certificates are not a panacea. They are most valuable when tied to demonstrable outcomes—portfolio artifacts, employer feedback, or assessed rubrics—not merely attendance. The real credibility comes from evidence of capability, not a badge alone.

Critical analysis: strengths, practical value, and enduring risks​

Notable strengths​

  • Experiential pedagogy: Active labs emulate workplace tasks, accelerating skill acquisition and making learning stick. Students learned by doing rather than by passively observing.
  • Domain contextualization: Breakouts by discipline made AI relevant and actionable in marketing, accounting, finance, HR and IT—avoiding the one‑size‑fits‑all trap.
  • Ethics integrated with practice: Teaching ethical reasoning in the same sessions where students use tools increases retention and encourages reflexive verification habits.
  • Human‑in‑the‑loop emphasis: Repeated messaging that AI is a collaborator, not a replacement, aligns with safe deployment principles and preserves professional judgment.

Systemic and pedagogical risks​

  1. Overconfidence and automation bias. Short, impressive demos can create a false sense of safety; students may trust plausible outputs without adequate verification. Empirical evaluations show hallucination rates vary widely across tasks and models; treating a single heuristic (for example, “20% error”) as a universal constant is misleading. Workshop speakers used the 20% figure as a heuristic—useful for instilling caution—but real‑world error rates depend on model, prompt, and task.
  2. Hallucinations and factual errors. Contemporary research and independent evaluations demonstrate that hallucination rates vary dramatically—some benchmarks report single‑digit rates for narrow tasks, while broader, open‑ended evaluations can show error rates well above 20–30% depending on model and metric. Mitigation requires retrieval‑augmented generation, chain‑of‑thought prompting, and explicit provenance strategies. Workshops that teach only surface prompting without grounding techniques leave students exposed.
  3. Data privacy and contractual exposure. Pasting proprietary client data or student records into consumer models can produce leakage or licensing risk. Universities must pair skill training with clear data‑handling rules: tenant‑bound Copilot access, DLP policies, and explicit contractual non‑training clauses for vendor services. Public announcements of “enterprise” tools are insufficient unless the procurement details are confirmed.
  4. Credential inflation and employer signal dilution. As many institutions issue micro‑certificates, employers will need to distinguish rigorously assessed credentials from attendance badges. The value of a certificate is tied to documented, reproducible evidence of skill.
  5. Equity of access and faculty readiness. Short workshops often reach already‑engaged students; scaling access across cohorts and campuses is necessary to avoid an AI literacy divide. Similarly, adjuncts and contingent instructors risk being left behind without faculty‑wide professional development.
  6. Vendor lock‑in and governance blind spots. Heavy dependence on a single vendor API or enterprise stack can create long‑term lock‑in risks and fragile curricular dependencies. Institutions should design exit strategies and insist on auditable contract terms that clarify data usage and retention.

Verifying the headline claims and the “20%” heuristic​

The workshop headline facts—dates (Nov. 1–2), location (Spark Academic Innovation Hub), attendance (≈60 students), faculty leads and quoted remarks—are confirmed in the Carson College release published on Dailyfly and in the campus Spark facility description. The attendance and quotes reported are university‑sourced facts from the event. The oft‑repeated classroom heuristic that “these models can be wrong about 20% of the time” is pedagogically useful but technically imprecise. Recent academic evaluations and independent model leaderboards show hallucination/error rates that vary by model family, task type, dataset, and evaluation metric—ranging from single digits on narrow factual benchmarks up to 30–80% on adversarial or open‑ended tasks in some studies. Treat the 20% figure as a cautionary rule of thumb rather than an empirically fixed property of all LLMs. Students should be taught to assume fallibility and to verify critical outputs.

Practical recommendations for institutions and instructors​

  1. Build a scaffolded pathway: run short workshops → offer for‑credit modules → require capstone projects that include an AI governance plan and reproducible prompt logs.
  2. Make verification routine: require source triangulation, red‑team prompts, and explicit verification checklists for any AI‑generated deliverable.
  3. Pair skill training with procurement safeguards: use enterprise tenant access, enforce DLP and Purview controls for sensitive data, and insist on contractual non‑training clauses where needed.
  4. Measure outcomes: collect pre/post assessments, track employer feedback on graduates’ AI readiness, and monitor DLP and integrity incidents.
  5. Expand access and faculty capacity: rotate workshops, publish asynchronous materials, and invest in faculty development so instructors across ranks can lead AI‑aware courses.
  6. Make micro‑credentials evidence‑based: tie certificates to assessed artifacts (portfolios, graded projects, interviews) rather than attendance alone.

What students gained—and what to expect next​

Participants reported practical wins: better prompt patterns, concrete AI‑augmented workflows, and an ethic of verification. Student feedback highlighted the value of domain‑specific sessions—accounting students saw how agents could reduce repetitive audit tasks; marketing students practiced brand‑safe content generation; finance students explored narrative generation paired with numerical validation. These are the precise, employable outcomes that make short workshops valuable when paired with follow‑up practice and assessment. For institutions, the real work begins after the event ends: converting awareness into durable curricular changes, governance frameworks, and institutionally provisioned sandboxes where students can practice safely. Short events seed capability; sustained governance and assessment convert that seed into reliable graduate competence.

Conclusion​

The Carson College’s inaugural AI@Carson Workshop represents the pragmatic model many business schools are adopting: short, experiential sessions that teach prompt craft, verification, and domain‑specific application while stressing ethical reasoning and human oversight. It demonstrates how a college can move quickly to give students practical skills without downplaying risks.
Yet the event also highlights how many institutions must couple skill development with robust governance: clear data controls, assessed credentialing, faculty training, and sustained access for all cohorts. When those pieces are assembled, workshops like AI@Carson can deliver real competitive advantage—graduates who can use AI effectively and judge it responsibly rather than simply mimic its outputs. The immediate takeaway for educators and employers is simple and decisive: treat prompt engineering and ethics as paired competencies, measure student competence with artifacts and verification evidence, and build institutional guardrails that make adoption both productive and safe.

Source: Dailyfly News WSU Carson College hosts inaugural AI literacy and skill-building workshop