Jan 7 • Rachel Aspinall

Assessment needs to grow up - why AI is changing how learning is measured

AI hasn’t broken assessment; it has prompted a rethink. This piece explores how universities are adapting assessment to focus more on visible thinking, judgement, and real-world readiness.

Rethinking university assessment in an AI-first world

University leaders, educators, and careers teams are grappling with a fast-changing assessment landscape. AI hasn’t just disrupted assessment; it has exposed a long-standing issue in higher education: an over-reliance on polished outputs as a proxy for learning.

When essays, reports, and even technical work can be generated in seconds, it raises a simple but uncomfortable question: what are we really assessing?

From outputs to thinking

Traditional assessment has often rewarded the final artefact: the finished essay, the completed model, the “right” answer.

But learning rarely looks like that.

Real understanding is iterative and imperfect. It involves revising ideas, responding to feedback, and explaining decisions, particularly in applied disciplines like finance, where judgement and context matter as much as technical accuracy.

If assessment only values the end product, it risks measuring presentation rather than understanding.

What students are telling us

This isn’t just theoretical.

In our recent survey of 550+ finance students and graduates, many said they felt underprepared for assessment and interview stages that tested judgement and decision-making under pressure, not because they lacked knowledge, but because they’d had limited chances to practise applying it.

Students consistently rated role-specific, experiential learning as far more valuable than traditional coursework alone.

They want assessment that reflects reality.

AI changes assessment, not learning

Much of the AI debate in higher education focuses on prevention.

But AI is already embedded in the world graduates are entering. The real question isn’t whether students use AI; it’s how they use it.

Assessment that focuses on process makes thinking visible:

  • how students interpret information
  • how they justify decisions
  • how their thinking evolves

These signals can’t be generated by AI alone.

A necessary reset

AI hasn’t broken assessment.

It has revealed where assessment was already fragile.

If universities want to prepare students for an AI-shaped future, assessment needs to grow up, moving beyond polished artefacts and towards visible thinking, judgement, and learning over time.

This is why we’re seeing growing interest in simulation-based learning and assessment: approaches that allow students to practise real decision-making, reflect on performance, and build confidence long before recruitment begins.

Programmes like AmplifyME Pathways are designed around this principle: giving students structured opportunities to apply knowledge, understand how roles really work, and develop the judgement that employers increasingly expect.

What we’re seeing in practice

These questions aren’t hypothetical. They’ve come through clearly in our recent student research with over 550 finance students and graduates, alongside conversations with universities and employers navigating the same challenges. Read more here.
Want to explore how universities are rethinking assessment and employability in practice? Get in touch with the team.