Harvard’s grading wake-up call — and a syndicated classroom essay by Dr. Jessica A. Johnson — have reopened a national conversation about what grades are for, how post‑COVID pedagogies reshaped classroom expectations, and how colleges can preserve rigor without abandoning the formative gains of active learning.
Background: why this moment matters
The debate is not merely academic. In October 2025 the Office of Undergraduate Education at Harvard published a data‑rich update, arguing that grade distributions have become compressed and top‑heavy, with A‑range grades rising sharply over two decades and now representing a majority of recorded grades. The report frames the problem as one of signal loss: when too many students receive A’s, grades no longer distinguish mastery or reliably guide employers and graduate programs. National coverage — including a widely read Chronicle of Higher Education feature headlined “Grading Is Broken” — amplified the debate and highlighted the political and cultural friction around changing long‑standing assessment practices.
That piece and the Harvard report together crystallized a set of claims now central to public discussion: that pandemic emergency grading and “no‑fail” policies normalized leniency; that active‑learning pedagogies and project‑based evaluation can compress grade distributions even while improving learning; and that Gen Z students now expect more interactive, scaffolded work with frequent formative feedback. Dr. Jessica Johnson’s syndicated column — a practitioner’s reflection from a composition instructor — echoes many of these themes through classroom vignettes: the use of peer tutoring, explicit AI policies, Gen Z–centered assignments, and a belief in combining one‑on‑one coaching with clear grading standards. Her piece is representative of instructors who seek both rigor and relevance in a post‑COVID, AI‑enabled classroom.
What the evidence says about active learning, COVID grading shifts, and grade distributions
Active learning improves outcomes — and sometimes compresses grades
The strongest, longest‑standing evidence in the teaching and learning literature supports active‑learning strategies: a landmark meta‑analysis of hundreds of STEM studies found that active learning increases exam performance (roughly the equivalent of a half‑letter grade) and substantially reduces failure rates compared with traditional lecturing. These gains are robust across settings and class sizes and are commonly cited in support of participatory, project‑based instruction. That is the core pedagogical trade‑off driving the current debate: better instruction can raise baseline performance, which will often push more students into higher grade bands. In a system calibrated for sorting rather than mastery, that compression looks like inflation; in a system aiming for universal competence, it looks like success. The data show both realities are possible simultaneously.
Emergency pandemic grading created precedent
During spring 2020 and beyond, thousands of institutions adopted emergency measures — pass/fail options, lifted restrictions on grade‑type changes, and other accommodations — to avoid penalizing students facing access barriers. Universities from the University of Pennsylvania to the University of Colorado and many state systems temporarily relaxed grading norms; these actions were intentional equity measures at the time, but they also set precedents students and families remember. Those emergency policies were rarely intended as permanent shifts, yet their memory and local policy remnants now shape student expectations and faculty judgments about fairness and workload.
Harvard’s report: compression at the top, and a governance problem
Harvard’s report provides an unusually granular institutional case study: A‑grade prevalence moved from roughly a quarter of grades in 2005 to around 60% by 2025, and median GPA figures have climbed in recent classes. The report frames the problem as systemic — driven by incentives (student satisfaction metrics and enrollment competition), faculty evaluation anxieties, and uneven departmental practices — and recommends collective, cross‑faculty responses rather than blaming individual students. The Harvard figures matter because elite institutions shape public narratives. But Harvard is not unique: trends toward higher average grades have been documented across many U.S. institutions over the past two decades, even though the causes and scale vary widely.
What Jessica Johnson and frontline instructors are reporting
Dr. Johnson’s column offers a practical, classroom‑level view: requiring peer tutoring, insisting on human‑graded essays (not AI), modeling responsible AI use for preparatory tasks, and designing Gen Z–relevant prompts to motivate students. These tactics illustrate a middle path many instructors favor: hold students to clear standards while redesigning assignments so effort maps onto demonstrable learning.
Her classroom anecdote also highlights two realities that policy documents often miss:
- Students want relevance and interactivity; many respond positively when assignments let them write about contemporary issues and demonstrate applied skills.
- Some students prefer to do work without AI help when given clear expectations — but many will also use powerful tools unless coursework requires process evidence (drafts, reflection, oral defense).
Strengths in the emerging consensus: what’s worth preserving
1. Active learning’s evidence base
Active learning produces measurable learning gains, particularly in STEM fields. That improvement is real and is not merely an artifact of grading policy changes. Preserving active practices is essential to raising overall competence and equity in large, heterogeneous classrooms.
2. Process‑based assessments that teach as they evaluate
Assessments that require process evidence (draft histories, annotated revisions, in‑person demonstrations, oral defenses) shift incentives away from outsourcing work and toward documented mastery. These approaches both reduce plagiarism risk — including AI‑enabled outsourcing — and create formative learning moments that grades alone cannot deliver.
3. Faculty calibration and transparency
Clear rubrics, published grading philosophies, and regular calibration across sections are low‑tech, high‑value practices. Harvard’s report explicitly calls for greater transparency and calibration; when departments align on what an “A” represents, grading becomes more defensible and less subject to game‑theory pressure.
4. Targeted AI literacy and governance
Teaching students how to use generative tools responsibly (prompting, source checking, documenting AI use) preserves their productivity benefits while protecting academic integrity. Managed adoption — education‑tier contracts, non‑training clauses, and explicit assignment‑level AI disclosure — allows institutions to keep AI as a pedagogical ally.
Risks and blind spots: what to watch for
Risk 1 — Equity harms if reforms are blunt
A sudden, across‑the‑board reversion to high‑stakes in‑person exams will disadvantage students with caregiving responsibilities, disabling conditions, or precarious work and housing situations. Any move to tighten standards must be accompanied by access and accommodation planning, not just stricter exams.
Risk 2 — Misreading active learning as “easy grading”
Some observers conflate active learning with lax standards. That misunderstanding risks policy responses that either ban project‑based learning or punish instructors for innovation. The right response is to map learning goals to assessment formats and to ensure that active learning includes rigorous mastery checks.
Risk 3 — Governance gaps around AI
Bans on AI are often ineffective, and unmanaged, vaguely worded policies lead to covert use and surveillance‑heavy detection regimes. Institutions that neglect contracts, telemetry, and teacher training may inadvertently expose student data or create adversarial classroom cultures.
Risk 4 — Overreliance on grades as the sole signal
When transcripts are the only visible output of learning, pressure to “game” grades escalates. Institutions should consider supplemental signals — narrative evaluations, calibrated departmental medians, or annotated transcripts explaining alternative assessment methods — to preserve external comparability while enabling pedagogical innovation.
Practical policy options: how colleges and instructors can respond
For instructors: design, document, defend
- Be explicit in the syllabus: state your grading philosophy, allowed AI uses, and the evidence you will accept for mastery.
- Require process artifacts: drafts, revision logs, tutor feedback, annotated bibliographies, or short recorded defenses.
- Use clear rubrics: publish what distinguishes A, B, and C work in observable terms.
- Build low‑stakes, frequent formative checks to identify gaps early and reduce incentive for last‑minute outsourcing.
For departments: calibrate and communicate
- Facilitate cross‑section grading rubrics and periodic calibration sessions.
- Publish departmental medians or grade‑range guidance to reduce cross‑course discrepancies and defend against incentives to “compete” with easy grading.
- Pilot transcript annotations or supplemental narratives for programs using portfolio‑based or mastery‑style assessments to preserve external comparability.
For institutions: governance and measurement
- Negotiate education‑grade AI contracts with non‑training clauses and clear retention/deletion terms; require vendor transparency and audit rights.
- Invest in short, practical faculty development that pairs AI prompt literacy with assessment redesign coaching.
- Measure outcomes, not just engagement. Pre‑specify learning metrics for pilots (short‑term mastery gains, 4–8 week retention measures, equity differentials) and publish findings.
- Avoid crude fixes (e.g., caps on A’s) without accompanying workload and assessment support; systemic issues require systemic responses.
A balanced path: “the best of both worlds”
The most defensible path is neither a retreat to rote, high‑stakes testing nor a wholesale embrace of pandemic‑era leniency. It is a purposeful redesign of assessments that preserves the benefits of active learning while ensuring grades continue to signal meaningful mastery.
That design includes:
- Mapping each assignment to explicitly stated course outcomes.
- Embedding verifiable process evidence in every summative assessment.
- Teaching AI literacy and requiring AI disclosure where used.
- Publishing clear rubrics and practicing inter‑instructor calibration.
- Monitoring changes with data — distributional statistics and learning outcome measures — before scaling reforms.
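The monitoring step above can be made concrete with a few lines of code. As a minimal sketch — using a hypothetical letter‑grade‑to‑GPA mapping and invented sample data, not any institution’s actual records — a department could track the two distributional statistics cited in this debate, the A‑range share and the median GPA:

```python
from statistics import median

# Hypothetical 4.0-scale mapping; real institutions vary in their scales
GPA = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7, "C+": 2.3, "C": 2.0}

def distribution_summary(grades):
    """Return the A-range share and median GPA for a list of letter grades."""
    points = [GPA[g] for g in grades]
    a_share = sum(1 for g in grades if g.startswith("A")) / len(grades)
    return {"a_share": round(a_share, 2), "median_gpa": median(points)}

# Invented sample section in which A-range grades form a majority
sample = ["A"] * 12 + ["A-"] * 6 + ["B+"] * 5 + ["B"] * 4 + ["C+"] * 3
print(distribution_summary(sample))
```

Running the same summary term over term, per course and per section, gives departments the before/after distributional evidence the recommendation calls for, without waiting on institution‑wide reporting.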
What remains unresolved and what to study next
- Longitudinal learning outcomes. Short‑term exam lifts from active learning are well documented, but longer‑term retention, transfer, and higher‑order reasoning gains need more multi‑site, peer‑reviewed studies.
- Equity impacts of AI‑enabled instruction. Vendor pilot claims of time savings are promising, but independent replication with demographic disaggregation is required to ensure benefits are not concentrated among already advantaged students.
- External signaling solutions. How do employers and graduate admissions interpret annotated transcripts or portfolios at scale? Small pilots should involve external stakeholders early to assess rollout feasibility.
- Mental health and workload trade‑offs. Any policy that raises grading strictness must monitor student anxiety, time commitment, and retention metrics to avoid unintended harms.
Conclusion: preserving signal and widening mastery
The jockeying over grades after COVID‑19 is less about nostalgia for older exams and more about clarifying purpose. Grades have two distinct functions: to support learning and to signal relative achievement. Preserving both requires intentional assessment design, transparent standards, and governance that accepts trade‑offs rather than hoping a single technical fix will save the day.
Educators and institutions can do three things now to move the conversation forward:
- Clarify the purpose of grades at course and department levels.
- Redesign assessments so learning is both visible and verifiable.
- Integrate AI literacy and governance so tools support learning, not supplant it.
Source: miningjournal.net Thoughtfulness can incorporate post-COVID learning changes