A Future You: Collage and AI in First Year Writing at Penn State Shenango

Penn State Shenango’s “A Future You” exhibition offered more than a display of student art; it staged a careful experiment in how craft, composition, and generative AI can be combined to help first‑year students imagine their professional and personal futures. In a single semester, students in English 15: Rhetoric and Composition moved from scissors-and-paper collage to writing AI prompts for Microsoft Copilot, then reflected in essays on what those two very different creative processes revealed about identity, aspiration, and the limits of machine representation.

[Image: Students in an art gallery study portraits, with colorful artworks on display and a laptop showing a portrait.]

Background

The project originated in a first‑year writing classroom, where faculty often face the twin challenges of building student engagement and developing transferable critical thinking skills. The instructor framed the assignment as integrative: students would use visual art, creative writing, and generative AI to answer a deceptively simple question — what will you be when you grow up? The resulting work was displayed in the campus Lecture Hall Art Gallery, and campus leadership funded the project at no cost to students.
This kind of cross‑disciplinary work reflects a larger trend in higher education: moving composition beyond paragraph structure and thesis statements toward multimodal literacies that prepare students for a world where communication is frequently visual and mediated by AI tools.

How the project worked

Step‑by‑step process

  • Students created self‑portrait collages from magazines, newspapers, stickers, and colored paper to express a current sense of self.
  • They drafted written prompts aimed at Microsoft Copilot to generate an image showing them in a chosen future occupation or activity.
  • Students combined the AI image with their physical collage to create a composite artwork.
  • Each student wrote a reflective essay explaining what the juxtaposition of analogue craft and an AI‑generated image taught them about identity, perception, and aspiration.

Educational goals

  • Encourage integrative thinking by approaching identity from multiple modalities.
  • Teach basic prompt composition and media literacy around AI image generation.
  • Spark metacognitive reflection: how does the medium influence the message?
  • Provide a low‑stakes creative outlet early in the college experience to reduce anxiety about performance.

What students discovered

The student responses were instructive on several levels. Many found collage liberating: it lowered barriers to creativity because there was no expectation of technical artistic skill. Collage invited tactile exploration and immediate visual feedback, and several students reported that the analogue process felt therapeutic.
By contrast, the AI component sparked a range of reactions. Some students were delighted by the novelty and immediacy of a machine rendering of their future self. Others were disappointed or uneasy because the AI images often did not look like them or failed to capture subtle markers of identity they considered important. Several students reported preferring the authenticity and intentionality of their own collage over Copilot’s rendering.
These mixed reactions are valuable data: they reveal that generative models can inspire imagination but do not automatically translate a student’s self‑image into a convincing or satisfying portrait. That gap between expectation and output prompted deep reflection in many essays — precisely the metacognitive exercise the faculty intended.

Pedagogical value: why this matters in first‑year writing

Multimodal composition is not optional

Modern communication frequently blends text, image, and algorithmic intermediaries. Teaching students to compose across modes equips them with practical literacies: how to craft a persuasive visual argument, how to write effective prompts, and how to assess the credibility and limitations of machine outputs.

Reflection as assessment

The reflective essay anchored the project. It ensured the activity was not mere novelty or "edutainment." Reflection required students to explain their choices, critique the AI outputs, and articulate what they learned about themselves. This moves evaluation from product to process and aligns with best practices for teaching critical thinking.

Emotional and identity work

First‑year students are negotiating identity formation at a transitional life stage. The exercise offered a structured space for imagining futures while also confronting how external systems (media, algorithms) can misrepresent or flatten a person’s self‑image. That tension itself is a teachable moment about power, representation, and narrative control.

Technical considerations: what Microsoft Copilot (and similar models) bring to the table

Generative AI tools like Microsoft Copilot simplify the process of producing images from text prompts, enabling rapid prototyping of visual ideas in classroom settings. They allow students without formal art training to generate polished imagery and to experiment with visual rhetoric.
However, the tools have technical characteristics that educators should understand:
  • Prompt sensitivity: Outputs depend heavily on phrasing. Small changes in wording can produce markedly different images, which teaches students the importance of specificity and iteration.
  • Representation biases: Models are trained on vast datasets and can reproduce systemic biases in occupation, appearance, and culture. This explains why some students may feel the generated image did not represent their gender, ethnicity, body type, or style.
  • Likeness and generalization: Most current image models are better at producing stylized representations than photorealistic, identity‑accurate portraits. This can lead to a mismatch between user expectation and machine output.
  • Determinism and randomness: Many generators include stochastic elements; two runs with the same prompt can yield different results. That variability can be pedagogically useful but frustrating for students seeking a stable depiction.
These technical realities are not barriers to classroom use, but they demand explicit scaffolding so students understand why an AI produced a given image and how to interrogate the model’s limitations.
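The prompt‑sensitivity and iteration points above can be sketched in code. The snippet below is a minimal illustration, not a real Copilot API call: it only assembles prompt text from explicit identity details, so that each field a student fills in is a decision the model would otherwise make implicitly. All names (`PortraitDetails`, `build_portrait_prompt`) are hypothetical scaffolding for classroom use.

```python
# Hypothetical sketch: assembling an image prompt from explicit identity
# details, so the model is not left to fill gaps with stereotyped defaults.
# No real Copilot API is called here; this only builds the prompt string.

from dataclasses import dataclass

@dataclass
class PortraitDetails:
    """Identity details a student chooses to make explicit."""
    occupation: str
    setting: str
    appearance: str   # e.g. hair, build, or style markers the student cares about
    style: str = "warm, realistic illustration"

def build_portrait_prompt(d: PortraitDetails) -> str:
    """Combine the details into one specific prompt string.

    Each named field forces an explicit choice -- the pedagogical point
    about specificity in prompt writing.
    """
    return (
        f"A portrait of a college student as a future {d.occupation}, "
        f"in {d.setting}. The person has {d.appearance}. "
        f"Style: {d.style}."
    )

# Iteration: students revise one field at a time and compare the outputs.
draft1 = build_portrait_prompt(PortraitDetails(
    occupation="nurse", setting="a busy hospital ward",
    appearance="short curly black hair and round glasses"))
draft2 = build_portrait_prompt(PortraitDetails(
    occupation="pediatric nurse", setting="a children's clinic",
    appearance="short curly black hair and round glasses"))

print(draft1)
print(draft2)
```

In practice students would paste each draft into the image tool, compare the two renderings, and note which wording changes moved the output closer to their self‑image.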

Ethical and legal questions

Consent, likeness, and privacy

When students generate images of themselves using an AI tool, several questions arise:
  • Does the AI process store or use student images or prompts in ways that could affect privacy?
  • Who owns the resulting image — the student, the vendor, or the institution?
  • Are students fully informed about data retention and the potential for reuse in model training?
Institutions should ensure transparent policies and clear informed‑consent steps before engaging students in projects involving third‑party AI tools.

Copyright and image provenance

Collage materials (magazine clippings, photographs) have varying copyright statuses, and combining them with AI outputs introduces complex intellectual property questions. While educational use may qualify as fair use, instructors should advise students on responsibly sourcing materials and on avoiding uses of copyrighted assets that could create downstream issues.

Deepfake risk and misuse

Teaching students to generate images inevitably involves exposing them to techniques that can be misused to create misleading or harmful representations. Responsible pedagogy requires contextualizing these risks and emphasizing ethical norms around consent, truthfulness, and harm mitigation.

Institutional responsibilities and policy implications

Colleges adopting AI in curricula must address several administrative responsibilities:
  • Create or adapt an institutional AI policy that covers acceptable use, data governance, and intellectual property.
  • Ensure vendor agreements with cloud‑AI providers meet privacy and FERPA obligations.
  • Provide training for faculty on how to integrate generative tools responsibly and on how to explain model limitations to students.
  • Offer accessible alternatives for students who opt out of using third‑party AI for privacy, religious, or personal reasons.
A carefully designed policy minimizes legal risk and demonstrates institutional leadership in ethical technology use.

Accessibility and inclusivity

This project’s use of collage alongside AI highlighted important accessibility considerations. Collage is tactile and low‑barrier; it welcomes neurodiverse learners, students with limited digital literacy, and those who might feel alienated by a purely tech‑centric assignment.
At the same time, AI tools may produce outputs that invalidate or marginalize certain identities. Educators should:
  • Provide multiple pathways to completion (analogue and digital).
  • Ensure prompts and examples do not reify stereotypes.
  • Offer prompt‑writing scaffolds that help students articulate identity details the model might otherwise miss.
Inclusivity demands both technological literacy and humanistic attention to individual experience.

Assessment: grading and learning outcomes

Traditional grading rubrics focused solely on product quality do not capture the learning in this kind of assignment. Instead, assessment should prioritize:
  • Clarity and depth of reflection in essays.
  • Demonstration of critical thinking about media and AI.
  • Evidence of iterative design (prompt revisions, collage drafts).
  • Engagement with ethical considerations.
A rubric that balances craft, literacy, and reflection rewards student risk‑taking and process‑oriented learning.
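One way to operationalize such a rubric is as a weighted average over process‑oriented criteria. The sketch below is purely illustrative: the criteria mirror the four bullets above, but the weights and the 0–4 scale are assumptions for demonstration, not the course's actual rubric.

```python
# Illustrative process-weighted rubric; the weights and scale below are
# assumptions for demonstration, not the actual course rubric.

RUBRIC = {
    "reflection_depth": 0.35,    # clarity and depth of the reflective essay
    "critical_thinking": 0.25,   # critique of media and AI outputs
    "iteration_evidence": 0.25,  # prompt revisions, collage drafts
    "ethics_engagement": 0.15,   # engagement with consent, bias, ownership
}

def rubric_score(marks: dict) -> float:
    """Weighted average of per-criterion marks, each on a 0-4 scale."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC[c] * marks[c] for c in RUBRIC)

example = {
    "reflection_depth": 4.0,
    "critical_thinking": 3.0,
    "iteration_evidence": 3.5,
    "ethics_engagement": 2.5,
}
print(round(rubric_score(example), 3))
```

Because reflection and iteration carry most of the weight, a student with a modest final artifact but strong process documentation still scores well, which is the point of process‑oriented assessment.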

Practical recommendations for educators

  • Define learning goals clearly. Is the project about creative expression, AI literacy, identity work, or all three? Align activities and assessment to those goals.
  • Scaffold prompt writing. Teach prompt mechanics with examples and allow practice rounds.
  • Mandate reflection. Require a written piece that connects the visual artifact to learning objectives and personal meaning.
  • Offer opt‑outs and alternatives. Provide analog or offline options for students who decline to use vendor AI tools.
  • Clarify legal and privacy considerations. Supply a short consent form and an FAQ about data handling.
  • Model critical reading of outputs. Use class time to deconstruct how the AI interpreted prompts and where it introduced bias or error.
  • Encourage iteration. Allow multiple submissions or iterations so students see prompt engineering as a process, not a one‑shot test.
  • Document outcomes. Collect anonymized reflections to evaluate the pedagogical value and iterate the assignment for future semesters.

Strengths demonstrated by the Penn State Shenango model

  • Low‑cost, high‑impact: The campus funded the project at no cost to students, removing financial barriers to participation.
  • Multimodal literacy: The combination of collage, AI, and writing engages different skill sets and learning styles.
  • Emotional resonance: For first‑year students, assignments that explicitly ask them to imagine futures can catalyze meaningful self‑reflection.
  • Real‑world literacy: Students gained hands‑on experience with a mainstream AI tool, preparing them for workplaces where such tools are increasingly prevalent.
These strengths make the model replicable across programs that want to cultivate both creativity and critical technological literacy.

Risks and limitations

  • Representation mismatch: AI outputs may fail to represent students’ identities accurately, which can harm self‑perception or discourage participation.
  • Vendor dependency: Relying on a third‑party tool can introduce privacy, continuity, and equity concerns if the service changes or becomes paid.
  • Superficial engagement risk: Without careful scaffolding, projects can become novelty exercises rather than opportunities for critical learning.
  • Unequal access to expertise: Faculty without training in generative AI may mismanage risks or miss teachable moments.
Recognizing these limitations is not an argument against using AI in classrooms; it is a call for intentional, well‑resourced implementation.

The broader cultural context: why this assignment is timely

Higher education is at a crossroads where AI is reshaping what counts as literacy. Assignments like “A Future You” are timely because they:
  • Force students to confront the reality that algorithms will mediate how they are seen in professional contexts.
  • Teach students to be producers of mediated content rather than passive consumers.
  • Encourage ethical reflection at a moment when visual misinformation and identity misrepresentation are escalating social concerns.
Embedding AI literacy into first‑year curricula helps demystify powerful tools and gives students early practice in wielding — and critiquing — them.

Recommendations for scaling and future research

  • Pilot the assignment across multiple disciplines (business, art, psychology) to compare learning outcomes and disciplinary affordances.
  • Collect longitudinal data to see whether early AI literacy predicts later classroom success or career readiness.
  • Develop a shared repository of scaffold materials (prompt templates, reflection prompts, rubrics) to lower the barrier for faculty adoption.
  • Investigate how different AI models compare in terms of representation fidelity and bias, and publish anonymized summaries for faculty planning.
  • Explore partnerships with vendors that provide education‑centric terms addressing data ownership and student privacy.
Systematic study will turn a promising classroom experiment into evidence‑based practice.

Conclusion

The “A Future You” project at Penn State Shenango underscores a vital pedagogical insight: teaching young adults to narrate and shape their futures must engage both human craft and machine tools. Collage provided a tactile entry point and a sense of ownership; generative AI introduced new expressive possibilities and revealed how algorithms interpret human prompts. The friction between the two — when students felt the AI image “wasn’t them” — created exactly the kind of critical moment educators should welcome.
To realize the full potential of such work, institutions must pair creative assignments with clear policies, ethical guidance, and alternatives for students with legitimate concerns. With intentional design, projects that blend art, writing, and AI can yield more than a visually arresting display: they can teach students to navigate a mediated world with imagination, discernment, and agency.

Source: Business Journal Daily, “Students Use Art, Writing, Generative AI to Imagine Future Selves”
 
