Artificial intelligence is transforming nearly every sphere of life, but perhaps nowhere is this evolution more vivid and hopeful than in the hands of students as they experiment, create, and reimagine the role of technology on campus. At the University of Georgia (UGA), this trend is not hypothetical—it is playing out in real time alongside a wave of student-driven generative AI projects designed not only to showcase technical prowess but also to address practical, community-centered challenges. The annual Generative AI Competition, now in its second year, stands as a unique lens through which to examine the interplay of creativity, ethics, and social impact engendered by student innovation with generative AI tools.

[Image: A diverse group of students outdoors using tablets, with futuristic digital data projections floating around them.]
The Rapid Growth of Student-Led AI Innovation

The 2025 Generative AI Competition at UGA, sponsored by the Office of Instruction in partnership with the Franklin College of Arts and Sciences’ department of philosophy, highlights both the momentum and the diversity of AI engagement among students. Participation surged from just eight projects in the inaugural year to 24 entries this cycle, signaling not only greater awareness but also a fertile intellectual and ethical landscape in which these technologies are being applied. According to Lindsey Harding, director of Franklin College’s Writing Intensive Program and co-coordinator of the GenAI Competition, the aim is not simply technical display but the “enrichment of the UGA community or experience.” This sentiment emphasizes harnessing AI not just for its efficiency or novelty, but as a tool for deeper thinking and perspective-expanding possibilities.
Judges selected winners based on creativity, tangible community impact, inventive use of AI tools, and thorough documentation of each project’s development process. The judging spanned initiatives that leveraged the latest generative AI tools (ChatGPT, DALL-E, Microsoft Copilot, Adobe Firefly, and more) to develop not only clever proofs of concept but also working applications with a direct line to real-world benefit.

Rethinking Accessibility: InkTrap and Visual Learning

Taking first place was InkTrap, conceived and led by Sophie Brewer, a third-year graphic design major at UGA’s Lamar Dodd School of Art. Brewer’s project addresses a longstanding challenge in education: making reading and comprehension accessible to students who struggle with focus and textual engagement, especially over long periods. The innovation lies in InkTrap’s fusion of Microsoft Copilot, Adobe Firefly, ChatGPT, and OpenArt. This synergistic blend allows users to input text, which is then enriched by AI-generated images and rephrased explanations to enhance understanding. The tool is designed as a web platform with a clean interface and lays out clear pathways for students to break complex readings into manageable, visually supported pieces.
Brewer traces her inspiration to an AI-centered arts class that opened her eyes to how accessible design and generative AI could intersect. “When used appropriately and ethically, AI can be an extremely valuable learning tool for anyone of any age and lead to new projects that were previously unfathomable,” Brewer emphasized. Her project offers a counterpoint to common narratives about AI as trivial or even dangerous—InkTrap instead envisions AI as a bridge over longstanding divides in educational equity.
Critically, InkTrap’s design process underscores the value of student imagination in problem-solving. Rather than accepting AI as inflexible or untrustworthy, Brewer explored boundary conditions—testing for clarity, appropriateness of generated images, and the adaptability of text simplification—while documenting both successes and setbacks. This transparency in design is especially vital for educational tools, and it sets a model for ethical and inclusive AI deployment.
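InkTrap's implementation isn't published, but the chunk-and-illustrate pattern described above can be sketched in a few lines. Everything below is hypothetical: the chunking heuristic and both prompt templates are stand-ins for whatever the real tool sends to Copilot, Firefly, ChatGPT, and OpenArt.

```python
import re

def chunk_reading(text: str, sentences_per_chunk: int = 3) -> list[str]:
    """Split a long reading into small, manageable pieces."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

def build_study_cards(text: str) -> list[dict]:
    """Pair each chunk with prompts for a rephrasing model and an image model."""
    return [
        {
            "original": chunk,
            # Illustrative prompt wording, not InkTrap's actual prompts.
            "rephrase_prompt": f"Explain in plain language: {chunk}",
            "image_prompt": f"Simple illustrative diagram of: {chunk}",
        }
        for chunk in chunk_reading(text)
    ]
```

The design point this sketch captures is the one Brewer documented: each chunk is small enough to test independently for clarity and image appropriateness, so failures can be caught and logged per piece rather than per document.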

Community Building Through History: Mapping Athens’ Musical Legacy

Second place in this year’s competition went to a project at the intersection of cultural preservation and data science: an interactive map of Athens’ rich musical history produced by Suhan Kacholia, a Double Dawg student working toward both her bachelor’s in cognitive science and a master’s in AI. With deep roots in the Athens Music Project Oral History Collection, curated by the Richard B. Russell Library for Political Research and Studies, Kacholia’s project begins by extracting location and event data from audio transcripts of interviews with musicians and cultural insiders. This unstructured data is processed with Google’s Gemini 2.0 large language model, which parses mentions of venues and landmarks. Subsequently, Python-based scripts geocode these entities, plotting them as waypoints on an interactive digital map.
The project is striking for a couple of reasons. First, it demonstrates AI’s considerable power in making archival materials—often accessible only to specialists—available to a broader audience. By transmuting unindexed oral histories into navigable visualizations, Kacholia has enabled current students to “literally walk in the footsteps of all these musicians and creatives I find so inspiring.” Second, it demonstrates how generative AI models like Gemini 2.0 can help structure, summarize, and make sense of data that defies easy categorization.
The project’s potential risks revolve around the reliability of AI-driven extraction from noisy or ambiguous oral records; if the language model misattributes locations or events, the resulting map could mislead rather than enlighten. However, Kacholia’s method includes iterative testing and validation against known datasets, flagging unverified data points for human review—a best practice for similar context-driven AI applications.
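The article doesn't publish Kacholia's code, but the pipeline it describes (LLM-extracted venue mentions in, geocoded waypoints out, with unverified points flagged for human review) can be roughly sketched as below. The gazetteer table and coordinates are illustrative stand-ins; the real project resolves locations with geocoding scripts rather than a hard-coded lookup.

```python
import json

# Illustrative gazetteer: in the real pipeline these coordinates would come
# from a geocoding service, not a hand-written table.
KNOWN_VENUES = {
    "40 Watt Club": (33.9577, -83.3796),
    "Georgia Theatre": (33.9585, -83.3764),
}

def build_waypoints(venue_mentions: list[str]) -> dict:
    """Turn LLM-extracted venue names into GeoJSON, flagging unknowns."""
    features, unverified = [], []
    for name in venue_mentions:
        coords = KNOWN_VENUES.get(name)
        if coords is None:
            unverified.append(name)  # held back for human review
            continue
        lat, lon = coords
        features.append({
            "type": "Feature",
            # GeoJSON orders coordinates longitude-first.
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name},
        })
    geojson = {"type": "FeatureCollection", "features": features}
    return {"geojson": json.dumps(geojson), "unverified": unverified}
```

The `unverified` list is the sketch's version of the validation step described above: anything the pipeline cannot confirm is withheld from the map instead of being plotted on faith.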

Music, Memory, and Retention: Reimagining Class Notes with Generative Audio

Third-place winner Bianca Wilson, a third-year music student in UGA’s Hugh Hodgson School of Music, demonstrated how AI might revolutionize learning and recall through her project, Music Notes. Wilson’s insight draws from research showing the human brain processes musical input differently, and often more memorably, than plain text. With Music Notes, students can upload classroom notes or flashcard material, which Google Gemini and MusicGen-based software (specifically, YuE) then transform into short, catchy songs. These compositions can be downloaded as MP3 files and used to reinforce learning or simply for fun.
The proposal stands at the nexus of science and artistry: it tests the long-theorized advantages of mnemonics (and the so-called “Mozart effect”) with contemporary generative tools. Early pilot feedback cited in the competition indicates measurable improvements in short-term retention when students rehearse musical versions of academic material. In a landscape often dominated by text-to-text AI applications, Wilson’s approach is distinctive for translating between modalities, opening avenues for audio-first and even special-education-oriented software.
However, there are nuances to consider. Recent peer-reviewed studies have shown mixed outcomes for the long-term retention of complex concepts through musical mnemonics, with the benefits often dependent on musical preference, cognitive style, and even course subject. Users should temper expectations and, ideally, use tools like Music Notes in combination with traditional study methods.
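The Music Notes pipeline (notes in, lyric prompt to Gemini, audio out via YuE) isn't published, but its front half can be sketched as a prompt builder. The verse structure and prompt wording below are invented for illustration, and the audio-generation step is omitted entirely.

```python
def notes_to_lyric_prompt(
    flashcards: list[tuple[str, str]],
    style: str = "catchy pop chorus",
) -> str:
    """Format question/answer flashcards as a lyric-generation prompt.

    Hypothetical sketch: the real tool's prompt to Google Gemini, and the
    handoff to the YuE music model, are not documented in the article.
    """
    verses = [f"Q: {q} / A: {a}" for q, a in flashcards]
    return (
        f"Write a short, memorable song in the style of a {style}.\n"
        "Work each fact below into its own verse:\n"
        + "\n".join(verses)
    )
```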

The Idea Appetizer: Lowering the Barrier to Rapid Prototyping

Honorable mention in this year’s UGA competition went to Ph.D. student Rex VanHorn for The Idea Appetizer, a tool that harnesses large language models to produce working code from user-defined app ideas. The concept sits within a burgeoning movement toward “AI as co-developer,” lowering technical barriers for entrepreneurs and non-technical users. Prominent platforms in this vein include OpenAI’s GPT-4o, Replit’s Ghostwriter, and Microsoft’s Copilot. VanHorn’s project emphasizes the process: by surfacing not only the code output but also the intermediate reasoning and troubleshooting steps, it models “explainability” in AI—a feature often absent from conventional code generation tools.
Critical analysis suggests that The Idea Appetizer holds potential for democratizing software development, particularly for communities without deep programming expertise. However, it also inherits familiar risks: the output code must be rigorously tested for security, efficiency, and ethical use, and the documentation, while comprehensive, can never wholly supplant expert review. Yet, the project’s detailed process logs—and the transparency they afford—are a step in the right direction, both for open education and for institutional safety.
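VanHorn's implementation isn't described beyond "code plus intermediate reasoning," but one common way to surface both is to prompt the model for labeled sections and then split them apart so the reasoning is shown alongside the output rather than discarded. The REASONING:/CODE: section format below is an assumption for illustration, not The Idea Appetizer's actual protocol.

```python
def split_llm_response(response: str) -> dict:
    """Separate a model's reasoning from its generated code.

    Assumes the model was prompted to answer with a REASONING: section
    followed by a CODE: section; both pieces are surfaced to the user so
    the troubleshooting steps stay as visible as the output itself.
    """
    reasoning, _, code = response.partition("CODE:")
    reasoning = reasoning.replace("REASONING:", "", 1).strip()
    return {"reasoning": reasoning, "code": code.strip()}
```

Keeping the reasoning as a first-class field, rather than hiding it, is the "explainability" pattern the project is praised for; it also produces the process logs that later review depends on.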

Diversity of Tools, Diversity of Vision

One of the most remarkable subplots of the UGA Generative AI Competition is the diversity of platforms, frameworks, and methodologies employed by students. From Microsoft Copilot and Adobe Firefly to Google Gemini, OpenArt, Amper, and MusicGen, the technical range is vast. Some projects foreground text-image interplay, some integrate geospatial data mapping, while others bring together music, machine learning, and psychology.
This variety is not accidental but rather a reflection of a broader trend in higher education and the tech industry at large: generative AI models are rapidly becoming accessible across disciplines, from fine arts and music to cognitive science, archival studies, and philosophy. The event’s judges—including faculty and staff across these varied departments—praised not only the technical execution but also the way many teams foregrounded the thinking, iteration, and setbacks inherent in any complex creative process.
Chandler Christoffel, director of UGA’s Academic Engagement Department and another GenAI Competition judge, noted that “engagement with a broader range of AI applications, tools and techniques” was particularly apparent this year. Submissions weren’t just about showing a polished final product, but also communicating design thinking—the rationale for decision-making, transparency around failure points, and openness to iterating in response to constructive feedback.

AI for Social Good: From Campus Safety to Sustainability

UGA’s student innovators didn’t limit themselves to academic or artistic experiments; several submitted projects served the campus community in meaningful ways. For example, one entry designed a comprehensive safety app for students, featuring AI-driven emergency alerts, direct contact integration with local authorities, and adaptive, context-sensitive notifications that outpace current, often rigid campus safety solutions. Another focused on sustainability, leveraging AI data models to predict, measure, and ultimately reduce food waste as part of the Zero-Waste Dining Initiative. These applications wield AI not as a mere utility, but as scaffolding for a broader, more sustainable, and more inclusive campus culture.
What distinguishes these projects is not just their technical execution, but a clear orientation toward solving real problems. The best AI scholarship, as the UGA competition demonstrates, does not obsess over the technology in isolation, but works at the intersection of technological possibility and civic need.

Critical Risks and the Question of Responsible AI

Amid the celebration of student creativity and progress, the competition’s results also prompt careful scrutiny. The same technologies that empower inclusion and innovation can also perpetuate bias, security vulnerabilities, and new forms of exclusion if not critically managed.
Key risks include:
  • Accuracy and Reliability: AI-driven summarization or geocoding may be prone to major errors if the underlying language models misinterpret context or source ambiguity. This is evident in projects like the Athens music map, where validation steps are essential to prevent the spread of historical inaccuracies.
  • Bias and Ethical Use: With generative AI, model bias is an ongoing concern. For instance, image or content generators, even when trained on broad datasets, may inadvertently reinforce stereotypes or overlook underrepresented groups. Students emphasized ethical review and documentation, but wider institutional guardrails are still needed.
  • Accessibility Paradox: While projects like InkTrap expand access for some, the reliance on advanced digital tools could alienate students without robust internet access or compatible devices—highlighting a persistent digital divide.
  • Intellectual Property and Attribution: Projects building on archival data, music, or text face a thicket of copyright and attribution questions. AI-generated works raise ongoing questions about ownership and fair use, underlining the need for clear institutional policies.
The university’s approach—scoring on documentation, encouraging transparency, and prioritizing community benefit—lays a foundation for responsible AI innovation. Still, as more tools transition from classroom experiment to public deployment, concerns about auditing, review, and stakeholder input will only grow.

The Road Ahead: Sustaining Momentum and Deepening Impact

Reflecting on the trajectory of the competition and its burgeoning impact, Aaron Meskin, head of UGA’s philosophy department and co-coordinator of the GenAI Competition, emphasized that “our students are eager to find new and valuable ways of using AI here at the university.” This eagerness, fueled by institutional support and grounded by a multi-disciplinary framework, hints at a near-future where AI is not just a niche topic for STEM but a cross-campus catalyst for creativity, public engagement, and critical inquiry.
Looking forward, the UGA community plans to expand the competition—updating guidelines, evolving transparency requirements, and continuing to invite projects that not only stretch the bounds of today’s technology, but also wrestle with its societal implications. By centering storytelling, inclusive design, and ethical engagement alongside technical achievement, the UGA Generative AI Competition serves as a case study in the broader evolution of AI in higher education.

Conclusion

The 2025 Generative AI Competition at the University of Georgia affirms that the most impactful AI tools are those forged at the intersection of ingenuity and empathy. From enhanced educational accessibility and immersive historical mapping to novel learning modalities and rapid software prototyping, student innovators are not only harnessing generative AI—they are redefining its value and purpose. Their work reveals both the strengths of interdisciplinary, ethically informed development and the ongoing risks associated with accuracy, bias, and access. As interest compounds and institutional support grows, the question is not whether AI will reshape the campus experience, but how thoughtfully and justly that transformation will be carried out. The UGA model, with its blend of competition, critique, and community focus, is one blueprint for ensuring that the promise of generative AI is realized as both a tool for problem-solving and a catalyst for deeper, more equitable engagement.

Source: UGA Today AI Competition projects use technology for creative problem-solving
 
