Rethinking university assessment in an AI-first world
From outputs to thinking
Traditional assessment prizes a single, polished end product. But learning rarely looks like that.
Real understanding is iterative and imperfect. It involves revising ideas, responding to feedback, and explaining decisions, particularly in applied disciplines like finance, where judgement and context matter as much as technical accuracy.
If assessment only values the end product, it risks measuring presentation rather than understanding.
What students are telling us
In our recent survey of 550+ finance students and graduates, many said they felt underprepared for assessment and interview stages that tested judgement and decision-making under pressure. The problem wasn't a lack of knowledge; it was limited chances to practise applying it.
Students consistently rated role-specific, experiential learning as far more valuable than traditional coursework alone.
They want assessment that reflects reality.
AI changes assessment, not learning
AI is already embedded in the world graduates are entering. The real question isn't whether students use AI; it's how they use it.
Assessment that focuses on process makes thinking visible:
- how students interpret information
- how they justify decisions
- how their thinking evolves
These signals can’t be generated by AI alone.
A necessary reset
AI hasn't broken assessment; it has revealed where assessment was already fragile.
If universities want to prepare students for an AI-shaped future, assessment needs to grow up, moving beyond polished artefacts and towards visible thinking, judgement, and learning over time.
This is why we’re seeing growing interest in simulation-based learning and assessment: approaches that allow students to practise real decision-making, reflect on performance, and build confidence long before recruitment begins.
Programmes like AmplifyME Pathways are designed around this principle: giving students structured opportunities to apply knowledge, understand how roles really work, and develop the judgement that employers increasingly expect.