Marquette AI Task Force Survey: Guiding Responsible AI Adoption Campus-wide

Marquette’s new AI Task Force has opened the conversation on campus by sending a short, targeted survey to all faculty and staff asking how artificial intelligence is — or isn’t — being used across teaching, clinical education, research, administrative operations and student success. The responses will shape policy, training and technology choices going forward.

Background and overview​

Marquette’s announcement describes a formally constituted Artificial Intelligence Task Force with an executive committee and five distinct workgroups: teaching and learning, clinical teaching and learning, research, administrative operations, and wellness and student success. The Task Force’s stated mission is to identify where policies, support structures, training, or resources should be updated so that campus adoption of AI aligns with the university’s Catholic, Jesuit values and operational needs. Faculty and staff were told to expect a brief survey by email; the institution defines AI for this effort broadly — from standalone generative tools such as Microsoft Copilot and ChatGPT to embedded AI features inside existing platforms like D2L Brightspace, Blackbaud, EHR systems and Slate.
This is not an isolated administrative exercise. Across higher education, institutions are moving from ad hoc use toward managed adoption models that combine procurement controls, training, syllabus-level guidance, and staged pilot programs. Industry and sector surveys show strong appetite among higher-education professionals to expand AI use, but also growing concerns about bias, privacy, and governance — a mixed signal that makes Marquette’s campuswide inventory both timely and necessary.

Why Marquette’s survey matters​

It creates a baseline for practical decisions​

A short, representative survey answers a core administrative question: what is actually happening on campus today? Inventory data — who is using which tools, for what tasks, and with what levels of institutional support — is the foundation for any responsible rollout. Without it, universities often face a “shadow AI” problem in which staff or faculty use consumer tools for work tasks because no sanctioned alternative exists, creating avoidable privacy and compliance risk. Marquette’s decision to gather employee perspectives across multiple operational domains is a business-critical first step.

It centers values and local context​

Marquette has explicitly framed the Task Force’s remit in the language of its Jesuit mission and values, which matters. Governance decisions about AI are not purely technical; they are ethical and pedagogical. Declaring institutional principles at the outset — and asking the campus community for input — helps align policy outcomes with mission-driven priorities, from equitable access to student-centered pedagogy and care for clinical confidentiality. That framing also means the Task Force will need to translate values into actionable safeguards (e.g., syllabus statements, vendor contract requirements, and clinical-data protections).

It reduces operational and legal blind spots​

Modern AI features can surface inside many campus systems — learning management systems, donor and finance platforms, EHRs, admissions CRMs — and each integration brings its own data-flow and retention questions. A campuswide survey that asks about embedded tool usage can expose places where sensitive information might be entering third-party models or where contract language is missing or ambiguous. This kind of discovery is a precondition for negotiating the contractual terms that protect student and research data.

What the survey will and will not do — realistic expectations​

  • The survey is a listening and discovery tool, not a policy fix. Expect it to surface patterns and risks, not immediately solve them.
  • Results will be reported in aggregate and participation is voluntary; respondents were told not to include confidential or controlled information in their answers. That protects compliance but can undercount sensitive but important uses (e.g., small research labs using cloud models for subject data).
  • Providing a name is optional, and people can opt into follow‑up conversations or focus groups — an important design choice that lets the Task Force gather qualitative context after the initial quantitative sweep.
Caveat: the announcement as published appears to contain a typographical error in the sender email address it quotes; readers and respondents should verify the sending address in their official Marquette inboxes before acting on any email request. This is a small but consequential operational point: phishing risk increases when legitimate communications contain typos in email addresses.

What higher education research and sector surveys tell us — two independent corroborations​

  • Sector survey evidence of rapid adoption and anxiety. Ellucian’s survey of higher‑education professionals found that institutional AI adoption has accelerated significantly and that 93% of respondents expect to expand their AI use for work within two years — but concerns about privacy and model bias are rising in parallel. This pattern mirrors what many campuses report: enthusiasm for productivity gains combined with requests for clear guardrails.
  • EDUCAUSE’s field work and guidance. EDUCAUSE’s recent research on AI’s impact in higher education consolidates the same core themes: institutions need definition, governance, training and operational controls; they must define what counts as AI for their policies; and they must pair any provisioning with literacy efforts that teach verification, prompt craft, and ethical constraints. EDUCAUSE’s work offers operational templates and cautions that are directly applicable to Marquette’s task force.
Taken together, these independent views validate Marquette’s tactic: inventory first, then policy and procurement reforms informed by real campus practice.

Strengths of Marquette’s approach​

  • Cross‑functional workgroups. Splitting the Task Force into domain-specific teams (teaching, clinical, research, operations, and student success) reduces the risk of one-size-fits-all rules that ignore the needs particular to each area, such as HIPAA‑adjacent clinical workflows or IRB‑protected research. That structure mirrors recommended practice in the field.
  • Mission alignment. Centering decisions in institutional values helps move the conversation from vendor-driven procurement to mission-driven adoption. This reduces the risk of reactive, PR-driven choices and supports sustainable, pedagogy-aligned integration.
  • Community engagement. By soliciting input and offering opt‑in follow-ups, Marquette both builds legitimacy for its eventual policies and creates channels for faculty and staff to surface use cases and concerns. This inclusive approach is a known success factor in campus policy creation.

Key risks and blind spots the Task Force must address​

1. Shadow AI and data leakage​

Employees and instructors will default to consumer tools when sanctioned alternatives are absent. That creates leakage risk: research data, student records, or protected health information can find their way into vendor systems without contractual protections. Institutions must inventory unsanctioned usage and provide immediate, safe alternatives.

2. Contractual ambiguity (training, retention, audit rights)​

Vendor terms vary dramatically: some enterprise offerings promise non‑training clauses, others do not. Institutions need explicit contract terms about whether prompts and uploads are used to refine models, how long logs are retained, and what audit rights the institution has. These are negotiable items that materially affect privacy and IP exposure.

3. Pedagogy and academic integrity​

Tools that scaffold writing or problem solving can improve access and feedback — but they can also create integrity and assessment problems if instructors do not redesign assignments and evaluation rubrics. Institutions need a layered approach: syllabus-level AI-use statements, redesigned assessments that require process evidence, and training for faculty on how to verify and evaluate AI-assisted work.

4. Clinical and research compliance​

Clinical teaching environments and research involving human subjects need strict controls. Any use of third‑party models with patient or subject data must be governed by legal review, data-use agreements, and technical isolation. Failure here risks HIPAA, FERPA, or IRB violations. Marquette’s explicit clinical workgroup is an appropriate recognition of that risk, but the Task Force must connect recommendations to legal and IRB workflows quickly.

5. Equity and access​

AI can amplify inequities if support tools are available unevenly across student populations or if accessibility features are not provisioned. Governance must include an equity lens so that adopted AI supports inclusive pedagogy rather than substituting for it.

Tactical checklist for Marquette’s Task Force (practical, prioritized steps)​

  • Inventory: Map tools in use (consumer and enterprise), list data classes involved, and identify who is using them and why; a minimal record sketch follows this list.
  • Rapid mitigation: Publish immediate, clear guidance on what not to paste into public AI tools (e.g., student PII, clinical notes, unreleased research data).
  • Contract standards: Draft non‑training, retention, tenant isolation, SSO and audit-log clauses for procurement; insist on exportable logs and exit provisions.
  • Pilot safe tools: Offer enterprise‑tenanted, non‑training sandbox options for teaching and administrative pilots (e.g., Copilot inside the campus tenant or hosted "private GPT" options).
  • Faculty development: Launch short, practical modules on prompt craft, hallucination checks, and assessment redesign; pair these with small course redesign grants.
  • Syllabus language: Create templated AI-use statements departments can adopt and adapt.
  • Privacy & compliance lane: Build a fast escalation pathway for clinical and research teams to request secure environments or exceptions.
  • Measurement plan: Define KPIs for pilots (adoption rates, time saved in admin tasks, student learning outcomes, incidents of misuse) and a cadence for review.
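To make the inventory step concrete, here is a minimal sketch of how a tool-inventory record might be structured. It is illustrative only: the field names, data classes, and example entry are assumptions for this article, not Marquette’s actual schema.

```python
# Minimal sketch of a campus AI tool-inventory record (illustrative assumptions only).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ToolInventoryEntry:
    tool_name: str                      # e.g., "Microsoft Copilot", "ChatGPT"
    embedded_in: Optional[str]          # host platform such as "D2L Brightspace", or None if standalone
    owning_unit: str                    # department or office reporting the use
    use_case: str                       # short description of the task the tool supports
    data_classes: List[str] = field(default_factory=list)  # e.g., "public text", "student identifiers"
    sanctioned: bool = False            # covered by an institutional contract or campus tenant?
    non_training_clause: Optional[bool] = None              # unknown until contract/legal review


# Hypothetical example entry, for illustration only.
example = ToolInventoryEntry(
    tool_name="ChatGPT",
    embedded_in=None,
    owning_unit="Admissions",
    use_case="Drafting routine applicant email responses",
    data_classes=["public text", "student identifiers"],
    sanctioned=False,
)
```

A structure like this makes the later steps easier: records where sensitive data classes coincide with sanctioned=False flag immediate-mitigation targets, and the contract fields feed directly into procurement review.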

Suggested questions the survey should (and should not) include​

Good discovery surveys balance breadth with specificity. Recommended question areas:
  • Which AI tools do you use for work? (free-text + multiple choice including common names)
  • For each tool, what data types are you inputting? (public text, course materials, student identifiers, clinical notes, research data)
  • How often do you use AI tools for work tasks? (never / occasionally / weekly / daily)
  • What benefits have you observed? (time savings, improved feedback, accessibility)
  • What concerns do you have? (privacy, bias, accuracy, integrity)
  • Do you need training or sanctioned alternatives? (yes/no + topic selection)
  • Would you volunteer for a follow-up focus group? (opt‑in)
Avoid asking respondents to paste any sensitive examples or documents into the survey itself; request descriptions only and offer follow-up interviews to explore particular use cases more deeply.
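As a rough illustration of how those question areas could be encoded for consistent analysis, the sketch below lays out a few items as structured data; the wording, response options, and field names are assumptions, not the actual instrument.

```python
# Illustrative encoding of the recommended question areas (assumed wording and options).
survey_items = [
    {
        "id": "tools_used",
        "prompt": "Which AI tools do you use for work?",
        "type": "multi_select_with_other",
        "options": ["Microsoft Copilot", "ChatGPT", "Embedded AI in an existing platform", "Other (free text)"],
    },
    {
        "id": "data_types",
        "prompt": "For each tool, what data types are you inputting?",
        "type": "multi_select",
        "options": ["public text", "course materials", "student identifiers", "clinical notes", "research data"],
    },
    {
        "id": "frequency",
        "prompt": "How often do you use AI tools for work tasks?",
        "type": "single_select",
        "options": ["never", "occasionally", "weekly", "daily"],
    },
    {
        "id": "follow_up",
        "prompt": "Would you volunteer for a follow-up focus group?",
        "type": "single_select",
        "options": ["yes", "no"],
    },
]
```

Keeping response options as closed lists wherever possible makes the aggregate reporting promised to respondents straightforward, while a free-text "Other" field still surfaces tools the Task Force has not anticipated.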

What good governance looks like (operational principles)​

  • Transparency. Publish redacted contract summaries and a tool catalog so the campus knows which tools are sanctioned and under what conditions.
  • Accountability. Assign stewardship responsibilities (e.g., IT for procurement and tenant config; libraries for digital literacy; IRB/legal for research-compliant use).
  • Proportionality. Match controls to risk: simple summarization tasks may need light governance, while clinical or human‑subjects research demands isolation and legal safeguards.
  • Iterative policy. Treat policies as living documents that evolve with vendor capabilities, research findings, and campus needs. Pilot → measure → refine.

The role of pedagogy: teach students how—not just whether—to use AI​

Marquette’s teaching workgroup has an unusual opportunity: to convert an operational risk into a teaching moment. The literature and systematic reviews show that the most durable benefits of AI come when instruction intentionally teaches how to verify AI outputs, how to integrate AI into a research workflow responsibly, and how to document process evidence. Assignments that require drafts, annotated AI interactions, and oral defenses preserve learning outcomes while allowing students to leverage AI as a scaffold rather than a shortcut. Embedding short AI-literacy modules into courses scales this learning.

Measuring success — metrics the Task Force should track​

  • Adoption: percent of faculty/staff using sanctioned AI tools; distribution of use by domain (a computation sketch follows this list).
  • Risk incidents: reported near-misses, data exposures, or policy violations.
  • Pedagogical outcomes: changes in graded-learning outcomes for classes that used scaffolded AI supports.
  • Equity metrics: access parity across student demographics.
  • Contract metrics: percentage of new contracts containing non‑training clauses and stronger retention/audit terms.
  • Training uptake: percentage of faculty who complete baseline AI‑literacy modules.
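As a sketch of how a few of those metrics could be computed from survey or pilot data, the snippet below defines adoption, shadow-AI, and training-uptake rates; the response fields are assumptions for illustration.

```python
# Illustrative KPI calculations over hypothetical survey responses (field names are assumptions).
from dataclasses import dataclass
from typing import List


@dataclass
class Response:
    uses_ai: bool                 # respondent reports using any AI tool for work
    uses_sanctioned_tool: bool    # at least one reported tool is on the sanctioned catalog
    completed_training: bool      # finished the baseline AI-literacy module


def adoption_rate(responses: List[Response]) -> float:
    """Share of all respondents using at least one sanctioned AI tool."""
    return sum(r.uses_sanctioned_tool for r in responses) / len(responses) if responses else 0.0


def shadow_ai_rate(responses: List[Response]) -> float:
    """Share of AI users relying only on unsanctioned tools -- a proxy for shadow-AI risk."""
    ai_users = [r for r in responses if r.uses_ai]
    return sum(not r.uses_sanctioned_tool for r in ai_users) / len(ai_users) if ai_users else 0.0


def training_uptake(responses: List[Response]) -> float:
    """Share of respondents who completed the baseline AI-literacy module."""
    return sum(r.completed_training for r in responses) / len(responses) if responses else 0.0
```

Tracked at the same review cadence as the pilots, these simple rates show whether sanctioned options are actually displacing consumer tools.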

Final assessment — opportunity, not inevitability​

Marquette’s AI Task Force survey is a constructive, properly sequenced move: inventory first, then policy and procurement, paired with training and pedagogical redesign. That sequence follows sector best practice and responds to the twin pressures institutions face — rapid, decentralized user adoption and rising expectations from students and staff for both capability and safeguards. If the Task Force follows through with clear procurement standards, immediate mitigations for sensitive data, and mandatory, practice‑focused faculty development, Marquette can convert what many campuses experience as a governance headache into a reputational and pedagogical advantage.
At the same time, the Task Force must avoid the common pitfalls: treating AI only as a productivity tool, under-resourcing follow‑through, or allowing procurement wins to outpace contractual protections. The next months — survey analysis, focused follow‑ups, and public articulation of procurement and syllabus expectations — will show whether Marquette’s approach leads to responsible, mission-aligned adoption or to a patchwork of local solutions and policy confusion.
Marquette’s community has been invited to participate; taking the survey is a low‑cost way for faculty and staff to shape how the university learns to use AI responsibly and to ensure that adoption reflects both practical needs and institutional values.

Conclusion​

The Task Force’s campuswide survey is a pragmatic and timely first step toward coherent AI governance at Marquette. It aligns with what sector research and higher‑education surveys recommend: start with an accurate inventory, pair procurement with contractual guardrails, invest in short, applied faculty training, and redesign assessments so AI supports learning rather than erasing it. The work ahead is operationally complex and ethically consequential, but the framework Marquette has signaled — cross‑functional workgroups, a public survey, and stated mission alignment — puts the university in the right posture to manage both the promise and the risk of AI. Faculty and staff responses to the survey will be the decisive input that turns this posture into policy and practice.

Source: AI Task Force launches campuswide survey on employee use of artificial intelligence | Marquette Today
 
