Reimagining AI Detection in Higher Education

Client. Thesis
Tools. Miro
Year. 2025
Role. UX Researcher

Quick Overview - Watch the 10-Minute Video

This ten-minute video summarizes the entire research journey, from the core problem to key insights and opportunities for innovation. Start here if you prefer a quick, visual walkthrough.


Background

The widespread adoption of generative AI tools like ChatGPT has fundamentally changed how students approach assignments. As of 2024, 86 percent of college students report using AI for academic work. However, only 25 percent of educators feel confident in identifying AI-generated content.

To manage this shift, many universities implemented AI detection tools. But these tools have been unreliable, creating more problems than they solve. In response, dozens of leading institutions including Harvard, Columbia, Stanford, and MIT have scaled back or completely discontinued AI detection systems.


Problem

Educators are no longer asking whether students used AI. Their real concern is whether AI usage compromises the learning objectives of an assignment. Current tools fail to capture this distinction. They detect AI presence but offer no insight into how it affects student learning or academic integrity.

Professors are left with little choice but to review student work manually, which increases their workload, creates confusion, and erodes trust in the technology.


Research Approach

To investigate this problem, I conducted a mixed-methods study involving:

  • In-depth interviews with 12 professors across 5 U.S. universities

  • A literature review of detection tools, policy responses, and AI misuse patterns

  • A student survey at Thomas Jefferson University, capturing perspectives from 42 design students

The goal was to understand what current tools miss and what educators actually need.


Key Insights
1. Tool Accuracy Remains a Major Concern

Of the professors interviewed who used detection tools, 89 percent reported both false positives and false negatives.

“A student with a strong writing style got flagged as AI. I couldn’t risk accusing someone based on that.” — Participant 1

2. Acceptable AI Use Still Gets Flagged

Detection tools cannot distinguish between light AI support, like grammar checks, and full-scale AI generation.

“Even paraphrased authentic work gets flagged. I have to go through it all manually.” — Participant 4

3. Manual Review Is Still the Default

Despite having access to AI detectors, 93 percent of professors still manually review assignments. In many cases, tools were eventually abandoned.

“Detection tools just make more work. I’m back to reading everything myself.” — Participant 5

4. Professors Are Not Anti-AI

Every professor interviewed said they were open to AI use—as long as it did not interfere with students achieving the learning goals.

“I don’t mind if students use AI. I care whether they’re still learning.” — Participant 10


Core Insight

Detection tools today are solving the wrong problem. They ask “Was AI used?” when the real question is “Did AI use prevent learning?”

Educators want tools that assess the impact of AI on learning, not just its presence.


Opportunity Areas

There is a clear need for solutions that:

  • Reduce the need for manual review

  • Focus on learning objectives instead of binary AI flags

  • Allow educators to set their own definitions of acceptable AI use

  • Build trust by offering transparent, explainable results


Research Question

How might we reduce the need for manual checks by college educators to determine if students’ use of AI compromises the learning objectives of assignments, so they can focus on teaching instead of policing AI misuse?


Why This Matters

This is not a small problem. There are over 13 million higher education faculty members worldwide. The AI detection tools market, valued at $483 million in 2023, is projected to grow to over $2 billion by 2031. But market growth means little without tools that educators actually trust.


Metrics for Success

To evaluate future solutions, the following KPIs are proposed:

  • 80% educator trust in tool results

  • 60% adoption rate among targeted faculty

  • 50% reduction in average time spent manually reviewing assignments

© 2025 Eswar Varma