ETH Zurich is moving AI in teaching from experimentation into a more structured, institution-wide support model. The latest Staffnet guidance makes clear that lecturers are no longer expected to navigate generative AI alone; instead, they are being offered workshops, managed tools, and project funding to redesign assessment and classroom practice around the technology. The message is pragmatic rather than promotional: use generative AI where it helps, but do so through approved, data-protected channels and with clear pedagogical intent.
Overview
The timing matters. Universities across Europe are still working out how to respond to ChatGPT-style tools, and many institutions are stuck between two extremes: either banning AI outright or allowing it with little support. ETH is taking a third path by building a framework in which lecturers can test ideas, access vetted tools, and share practice through organized programs rather than improvising individually. That approach reflects a broader shift in higher education, where the question is no longer whether AI will affect teaching, but how quickly institutions can adapt without weakening quality or trust.

ETH’s current posture is also notable because it treats AI not as a niche IT topic, but as a teaching and learning issue. The Staffnet piece points to Daniel Flück and the Unit for Teaching and Learning as key enablers, while also tying the initiative to broader structures such as Innovedum, the Learning & Teaching Fair, and the university’s managed cloud ecosystem. In other words, the AI discussion is being embedded in pedagogical development, rather than siloed in technical support.
That is significant for both staff and students. For lecturers, the new model lowers the barrier to experimentation by combining training, tool access, and funding. For students, it signals that AI will be used more openly in coursework, but within a framework that emphasizes responsible use, structured feedback, and assessment redesign rather than simple automation.
The move also reflects an institutional maturity that many universities are still chasing. ETH is effectively saying that if AI is going to reshape learning, then the university should shape that change deliberately. That means building workflows, policies, and exemplars before the technology becomes normalized by default.
Booster workshops: a practical way to rethink assessment
The first resource highlighted by ETH is the Booster workshop format, which is designed to help lecturers rethink assessment and teaching in light of generative AI. The model is hands-on and time-bound: lecturers spend three hours preparing, then attend a four-hour workshop, and later test their concept with students. That sequence is important because it shifts the conversation from abstract concern to applied redesign.

The workshop concept is especially relevant because assessment is where AI creates the deepest tension. If students can generate fluent text, code, or analysis in seconds, then traditional take-home tasks may no longer measure what instructors think they measure. ETH’s response is not to retreat into suspicion, but to invite lecturers to redesign the task itself so that student understanding remains visible.
Why the workshop model matters
Booster workshops work because they compress theory, collaboration, and implementation into one cycle. Participants learn from each other, which is often more valuable than a top-down lecture on AI policy. They also leave with a concept they can test quickly, which makes the training feel actionable rather than generic.

That matters in a university environment where time is scarce and teaching staff are already overloaded. A compact format is more likely to get traction than a long certification path, especially when the topic is changing so rapidly. The workshops are therefore not just a training product; they are a mechanism for organizational change.
Key advantages include:
- Rapid ideation for assessment redesign
- Peer exchange across departments and disciplines
- Immediate classroom testing after the workshop
- Lower entry threshold for lecturers new to AI
- Focus on real teaching problems, not just tools
- Better alignment between pedagogy and policy
Assessment redesign as the real battleground
The deeper issue is not whether AI can help students complete assignments faster. It is whether assessment methods are still fit for purpose in a world where the first draft of nearly anything can be machine-generated. ETH’s workshops implicitly acknowledge that universities may need more oral defenses, in-class work, staged submissions, reflective components, and assessment formats that reward process over polished output.

That is a difficult change, but also a necessary one. If institutions do not redesign assessment, they risk creating a gap between the skills they claim to assess and the behaviors students actually use. ETH seems to understand that responsible AI use starts with assessment design, not with detection tools.
Managed tools: Copilot, Gemini, NotebookLM, and protected access
The second resource ETH highlights is access to managed AI tools that have been legally and technically vetted. These include Microsoft Copilot, Google Gemini, and NotebookLM, with access through ETH accounts and license structures designed to reduce data-risk exposure. The university’s framing is careful: not all AI tools are suitable for all data, and staff are expected to use the approved services that match the sensitivity of their material.

That distinction is crucial. In higher education, one of the biggest barriers to AI adoption has been uncertainty about what happens to uploaded data, prompts, and generated outputs. By offering a protected environment, ETH gives lecturers a safer way to explore AI without having to rely on consumer-grade accounts or unclear privacy promises. That is especially important for teaching materials, unpublished work, student submissions, and internal documentation.
What protected access changes
ETH’s managed approach does more than simplify login. It creates a policy-backed environment where the institution can say, with some confidence, that certain data will not be repurposed for model training under the relevant account conditions. That does not eliminate every risk, but it gives staff a much clearer operational baseline than public free-tier tools.

It also reduces friction. When a lecturer can use an ETH-managed account, the decision to test AI in teaching becomes less about setting up a separate account and more about integrating a tool into existing workflows. That is often the difference between occasional experimentation and sustained adoption.
The key implications are:
- Lower privacy risk for approved use cases
- Clearer governance around data handling
- Simpler onboarding through existing ETH accounts
- More realistic classroom experimentation
- Better alignment with institutional compliance rules
- Reduced dependence on ad hoc consumer tools
Google AI Pro and the new license layer
One of the most interesting details in the Staffnet article is the mention of the newly available Google AI Pro license for Gemini. That suggests ETH is not merely offering baseline access but is actively expanding premium functionality where the institutional demand justifies it. In practice, that can mean better model access, stronger integration with cloud workflows, and more room for course-specific experimentation.

This is a meaningful development because it shows ETH is treating AI as a managed service ecosystem rather than a single chatbot. Universities that take this route can support different use cases more effectively: basic brainstorming for staff, document interaction for teaching, and more specialized applications for learning materials or course support.
A few likely effects stand out:
- Lecturers can test AI more confidently within approved boundaries.
- Departments can standardize around common tools instead of fragmented alternatives.
- Students may encounter more consistent guidance on what counts as acceptable AI use.
- IT and teaching support can work from a shared policy base.
- The university can adapt as model offerings evolve without starting from scratch.
Ethel and course-specific tutoring
The third resource in the ETH package is the ETHEL project, which points to course-specific chatbots and exercise tutors. This is where the institution’s AI strategy becomes more ambitious, because it moves beyond generic productivity tools and into tailored teaching support. Course-specific bots can answer questions about course materials, guide students through exercises, and provide feedback in ways that are more contextual than a general-purpose AI assistant.

ETH has already been experimenting with this direction for some time. The newer version of Ethel, as described in previous Staffnet coverage, is meant to support students through feedback and guided practice, not just answer queries. That matters because tutoring is one of the few educational domains where generative AI has obvious value if it stays grounded in course content and pedagogical intent.
Why course-specific bots are different
A general chatbot can be useful, but it can also be misleading, overconfident, or simply too broad. A course-specific tutor, by contrast, can be built around the syllabus, exercise sheets, and lecture material that define the learning objectives. That makes it more relevant and potentially more reliable, especially when the system uses retrieval-based methods tied to known sources.

The educational value is obvious. Students get help at the moment of difficulty, which is often when learning is most effective. Lecturers also gain an additional support layer that can reduce repetitive questions and make larger classes more manageable.
Still, the model works only under disciplined conditions:
- The bot must be anchored to approved course material
- Responses should be clearly bounded by the content base
- The tutor should support learning, not replace it
- Lecturers need oversight over what the bot says
- Students need guidance on how to use it responsibly
- Privacy and data handling must stay central
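To make the first of those conditions concrete, here is a minimal retrieval-grounded sketch in Python. The corpus, the word-overlap scoring, and the refusal message are invented placeholders for illustration, not how Ethel is actually implemented; a real system would use embeddings and a language model, but the bounding principle is the same: answer only from known course material.

```python
def tokenize(text):
    """Lowercased word set, used as a crude relevance signal."""
    return set(text.lower().split())

def retrieve(question, corpus, top_k=1):
    """Rank course snippets by word overlap with the question."""
    q = tokenize(question)
    scored = sorted(corpus, key=lambda s: len(q & tokenize(s)), reverse=True)
    return scored[:top_k]

def answer(question, corpus):
    """Answer only from retrieved course material; refuse otherwise."""
    hits = retrieve(question, corpus)
    if not hits or not (tokenize(question) & tokenize(hits[0])):
        # Stay bounded by the content base instead of improvising.
        return "Not covered in the course material."
    return f"Based on the course notes: {hits[0]}"

# Hypothetical course snippets standing in for lecture material.
corpus = [
    "Gradient descent updates parameters in the direction of the negative gradient.",
    "Matrix multiplication is associative but not commutative.",
]
print(answer("How does gradient descent update parameters?", corpus))
```

The useful property of this shape is the explicit refusal path: when retrieval finds nothing relevant, the tutor says so rather than generating a fluent but ungrounded answer.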
The pedagogical upside and the trade-off
The upside is obvious: students can receive immediate, repeated help without waiting for office hours, and lecturers can scale support while preserving much of the human quality of instruction. That can be especially valuable in gateway courses, large STEM classes, or exercise-heavy subjects where students often need incremental feedback.

The trade-off is equally clear. If the bot becomes too convenient, students may stop struggling productively with problems on their own. There is also the risk of over-reliance on AI explanations, especially when a student accepts a fluent answer without checking whether it matches the course logic. In that sense, tutoring is not just a feature; it is a pedagogical responsibility.
Innovedum funding: turning experiments into projects
A further ETH recommendation is funding through the Innovedum Fund, which supports teaching innovation projects, including those focused on AI. This is a powerful mechanism because it turns interest into structured action. Rather than simply telling lecturers to “innovate,” ETH provides a channel for designing, testing, and evaluating new approaches in a formal framework.

The latest project guidance shows that the AI focus area has expanded from “AI for Teaching and Learning” to include “Assessment with AI.” That expansion is revealing. It means ETH now sees assessment not as a side issue but as one of the central places where AI must be integrated thoughtfully. It also suggests that the university expects more faculty-led experimentation in how AI changes evaluation, feedback, and performance measurement.
Why funding matters more than slogans
Innovation in teaching often fails not because people lack ideas, but because they lack time, money, and institutional backing. Funding lets lecturers buy out effort, coordinate with colleagues, build prototypes, and collect evidence of impact. Without that support, AI experimentation tends to stay local and unsustained.

Innovedum also creates visibility. Projects funded through the program can be shared, compared, and reused, which helps build a culture of evidence rather than hype. Over time, that can make the difference between isolated pilots and institutional practice.
Benefits of the funding route include:
- Dedicated resources for experimentation
- Formal recognition of teaching innovation
- Opportunities to evaluate outcomes
- Stronger scaling potential if projects succeed
- Knowledge sharing across ETH communities
- Better linkage between innovation and policy
From pilot to practice
What makes this funding model especially effective is the expectation that projects move beyond pilot status. ETH is not only interested in proofs of concept; it wants teaching ideas that can be implemented, refined, and spread. That matters because many AI initiatives in universities never reach the point where they change routine teaching. Funding tied to implementation is therefore more strategic than innovation theater.

It also encourages a more mature conversation about success. Not every project should aim to automate more. Some may aim to make learning more transparent, improve student reflection, or create better feedback loops. That broader definition is healthier than a simplistic productivity metric.
Learning & Teaching Fair: the institution’s public AI forum
The Learning & Teaching Fair 2026 gives ETH a public stage for the kinds of ideas it is trying to cultivate. With AI as the focus for this year’s fair, the event becomes more than a showcase; it becomes a forum for debating what meaningful and responsible AI use in education should look like. The timing is appropriate, since the challenge now is less about awareness and more about implementation quality.

The fair matters because it brings students and lecturers into the same conversation. That is particularly important in AI, where users often hold different assumptions about convenience, fairness, academic integrity, and skill development. A shared forum helps surface those tensions before they harden into policy conflicts.
Why visibility changes behavior
When teaching innovations are visible, they become easier to discuss, compare, and emulate. That social effect can be just as important as funding. If lecturers see colleagues using AI in a thoughtful way, they are more likely to test similar ideas rather than assume they are out on a limb.

The fair also reinforces the idea that AI should be evaluated in context. A tool that works in one course may be a poor fit in another. By creating a cross-disciplinary showcase, ETH can highlight the difference between generic enthusiasm and disciplined design.
Notable themes around the fair include:
- Meaningful use of AI in teaching
- Responsible integration into curricula
- Student preparation for AI-rich workplaces
- Sharing of course-level innovations
- Dialogue between students and instructors
- Visibility for non-AI innovations as well
Data protection and governance: the quiet foundation
The strongest signal in ETH’s AI approach is not the tools themselves, but the governance language surrounding them. The university repeatedly emphasizes that AI use must be legally and technically vetted, and that staff should use the right service for the right type of data. That framing is what makes large-scale adoption feasible in an academic institution with sensitive teaching material and personal information.

This governance layer matters because universities are full of mixed-risk content. Some material is public and low-risk, while other material includes unpublished research, student work, personal data, or internal assessments. A one-size-fits-all AI policy would be too blunt to be useful. ETH’s more granular approach is therefore a sign of maturity.
Responsible use is a systems problem
It is tempting to think of AI safety as a user education issue alone. In reality, it is a systems problem involving account management, licensing, approved service lists, data classification, and staff training. ETH’s managed tool strategy suggests it understands this. Users can only behave responsibly if the institution gives them safe defaults.

That approach also helps avoid the common “shadow AI” problem, where staff quietly adopt consumer tools because the approved alternatives are too cumbersome. By making the sanctioned path more usable, ETH reduces the incentive to go around the rules.
Core governance principles include:
- Use approved tools for institutional work
- Avoid uploading sensitive data to unvetted services
- Check data-use terms before relying on an AI platform
- Match the tool to the data classification
- Verify AI output critically
- Keep human accountability for teaching decisions
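The principle of matching the tool to the data classification can be made concrete with a small lookup. The classification levels and the tool map below are hypothetical examples for illustration, not ETH's actual approved-service list:

```python
# Hypothetical mapping from data classification to permitted AI tools.
# Levels and tool names are illustrative placeholders only.
APPROVED_TOOLS = {
    "public": {"Copilot", "Gemini", "NotebookLM"},
    "internal": {"Copilot", "Gemini"},
    "confidential": set(),  # e.g. student submissions: no external AI service
}

def tools_for(classification):
    """Return the set of AI tools approved for a classification level."""
    if classification not in APPROVED_TOOLS:
        raise ValueError(f"Unknown classification: {classification}")
    return APPROVED_TOOLS[classification]

print(sorted(tools_for("internal")))
```

The design point is that the safe default is encoded once, centrally, so individual staff do not have to re-derive the policy for every upload decision.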
How this compares with broader university trends
ETH’s approach sits somewhere between caution and ambition, which is probably the right place to be. Across higher education, many institutions have adopted AI guidance, but fewer have paired it with sufficient staff development and approved infrastructure. ETH’s package is stronger because it combines policy, tools, and pedagogical support in one ecosystem.

That combination also gives ETH a competitive advantage in the academic labor market. Lecturers increasingly expect institutions to offer not just slogans about innovation, but actual support for modern teaching practice. When a university provides workshops, managed tools, and funding, it signals seriousness. That can matter in recruiting and retaining faculty who want to experiment responsibly.
The strategic implication for ETH
ETH’s move is not only about current teaching quality. It is also about institutional resilience. If AI becomes a standard feature of study and work, universities that have already built support structures will adapt faster than those still debating basic principles. ETH appears to be investing early in that capability.

This is also a reputational play. A university associated with careful, well-governed AI use in education can position itself as a leader rather than a follower. That is especially valuable in a field where public skepticism about AI is still strong and examples of irresponsible use can travel fast.
The broader competitive implications include:
- Better faculty confidence in adopting AI
- More coherent institutional policy
- Stronger reputation for innovation
- Potential attraction of forward-looking staff
- A model other universities may imitate
- Reduced fragmentation in AI adoption
Strengths and Opportunities
ETH’s current AI-in-teaching strategy is compelling because it is concrete, not rhetorical. It links staff development, compliant infrastructure, and financial support in a way that makes responsible adoption easier. That combination gives lecturers a realistic path from curiosity to implementation, which is exactly what most universities still lack.

- Clear pedagogical framing rather than pure technology hype
- Data-protected access to approved AI tools
- Practical workshop formats that lead to classroom experimentation
- Funding pathways that can turn ideas into projects
- Course-specific tutoring options like Ethel for targeted support
- Visible community exchange through the Learning & Teaching Fair
- A governance-first model that should reduce ad hoc tool use
Risks and Concerns
The main risk is that enthusiasm could outrun implementation quality. AI can make teaching appear more modern without necessarily improving learning outcomes, and universities are vulnerable to mistaking adoption for impact. ETH’s own emphasis on responsible use suggests it understands this danger, but the challenge will be maintaining rigor as uptake grows.

- Over-reliance on AI in assessment or tutoring
- Uneven lecturer uptake across faculties and disciplines
- Confusion over what tools are approved for which data
- False confidence in AI-generated answers
- Policy drift as tools and licenses change quickly
- Potential equity gaps if some courses adopt AI much faster than others
- Administrative complexity in maintaining safe access and guidance
Looking Ahead
The next phase for ETH will be moving from structured availability to measured impact. The university has already laid out the ingredients: booster workshops, managed tools, tutoring bots, funding, and community events. What matters now is whether those ingredients produce better teaching, better learning, and more defensible assessment practices.

The likely pressure points are familiar. Lecturers will want clearer examples, better templates, and evidence that AI-supported teaching actually improves outcomes. Students will want consistency across courses, so that acceptable AI use does not vary in confusing ways from one class to another. And leadership will need to keep balancing innovation with caution, because the regulatory and technical landscape will keep shifting.
What to watch next:
- How many lecturers adopt the booster workshop concepts in real courses
- Whether Google AI Pro and similar licenses gain meaningful traction
- How Ethel evolves as a course-specific teaching assistant
- What kinds of Innovedum projects are funded in the AI space
- Whether assessment formats change in visible, durable ways
- How ETH updates its guidance as models and policies evolve
In that sense, the real story is not that ETH wants lecturers to use AI. It is that ETH wants them to use it well, inside a system designed to protect data, preserve academic standards, and encourage innovation with purpose. That is a far more durable strategy than either blanket enthusiasm or blanket prohibition, and it may prove to be the model that other universities eventually follow.
Source: ethz.ch, “Tools and workshops for AI in teaching”