Rethinking Assessments in the Age of Generative AI

In education, particularly in B-schools that use the case-method approach, assessments have traditionally moved beyond rote memorization. This approach pushes learners to think critically and apply concepts and insights more meaningfully. The case method is something I learned about when I joined the Asian Institute of Management. Coming from Physics, I found it fascinating; it centers on real-world cases, often with open-ended questions and no single “correct” answer. This encourages students to weigh perspectives, make informed decisions, and articulate their reasoning, skills that are crucial in any social and business context. At AIM, we often joke about how our students are wired to respond with “it depends.”

More seriously though, pedagogy lies at the heart of this approach. Effective teaching is about fostering engagement, insight, and skill development, and today’s AI-driven landscape demands that we adapt our methods.

However, the rise of large language models (LLMs), capable of generating nuanced responses and taking on different personas, has added a new layer of complexity. It’s no longer sufficient to simply ask students to analyze, dissect, or write, as LLMs excel at all of these. Just try Google’s NotebookLM and you’ll know what I mean.

I honestly haven’t fully figured it out yet, by which I mean how to navigate the education space in the era of generative AI. But one way I’ve tried to meet this challenge is to blend implementation with insight. In my Network Science course, for example, learners are allowed to use LLMs and class scripts to solve complex problems on exams, provided they cite sources and work independently. The way questions are asked is key: they’re designed to push students beyond just providing answers, requiring them to show how they arrived at their conclusions and the rationale behind each decision.
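
To make this concrete, here is a minimal sketch, in the spirit of the class scripts, of the kind of scripted analysis such an exam question might build on. It uses networkx on a toy graph; the dataset, the measures chosen, and the framing below are illustrative assumptions, not the actual course materials.

    import networkx as nx

    # Toy collaboration network (stand-in data; the real exam uses a unique empirical dataset)
    edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("D", "F")]
    G = nx.Graph(edges)

    # Two common centrality measures a student might compare
    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G)

    # An exam item would ask for the numbers AND the rationale:
    # e.g., why betweenness (rather than degree) is the right lens for spotting brokers here.
    for node in G.nodes:
        print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")

The point of a question like this is not whether an LLM can produce the script, but whether the student can defend the choice of measure and interpret the result for the specific network at hand.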

Sure, generative AI can also tackle these tasks, but it often requires a high degree of prompt engineering—knowing exactly which tasks to specify and what context to provide the LLM. This could end up taking longer in an exam setting than if students rely on their own insights. Ultimately, this approach doesn’t replace foundational knowledge but rather strengthens students’ decision-making skills, encouraging depth, reasoning, and transparency.

Practical Applications

Indeed, the design is key. In this exam, I use a unique empirical dataset collected specifically for the course—data not available elsewhere. This requires students to analyze unfamiliar information and develop original insights, challenging them in ways that generic resources or AI-generated knowledge can’t easily support.

Of course, while critical thinking is essential, we must equally prioritize technical rigor, regardless of field. Thus, some exam questions were designed to assess foundational skills and factual understanding independently of LLM assistance; part one of this particular midterm, for instance, was closed-notes. This ensures students can demonstrate both mastery of methods and accuracy, skills critical in the data science field. Ultimately, the goal is to balance analytical insight with technical proficiency, preparing students to engage responsibly and effectively in AI-enabled environments. And, again, it all comes back to the learning goals and outcomes.

Now, it goes without saying that this Network Science midterm exam is just one part of the course’s performance criteria. Alongside it, students complete journal scans, use case presentations, and final projects, offering multiple avenues to demonstrate their mastery.

I honestly find it both exciting and a bit daunting to be an educator in the age of generative AI.
