MPhil in Ethics of AI, Data & Algorithms


Centre for the Future of Intelligence · University of Cambridge

Understand AI. Shape its future.

A full-time, research-intensive master's programme equipping the next generation of researchers, policymakers and industry leaders to analyse and navigate the ethical, social and practical dimensions of artificial intelligence.


We Are

A community of researchers across disciplines

Based at the Centre for the Future of Intelligence at the University of Cambridge, we bring together philosophers, social scientists, computer scientists, legal scholars, designers, cultural theorists and policy researchers with a shared mission: ensuring AI goes well for humanity. That breadth of disciplines, viewpoints and methods, all focused on AI, is what makes CFI unusual.

You Are

Curious, rigorous, and ready to engage

You want to do serious research on AI and its implications, whether your next step is a PhD, a role in policy or government, or a position in industry. You enjoy interdisciplinary challenge and want to develop real research skills. We welcome applicants from philosophy, social science, computer science, law, policy, design, humanities and beyond.


Broad in scope. Rigorous in method. Grounded in real-world impact.

The programme covers AI ethics, governance, safety, evaluation, the economics and geopolitics of AI, human-AI relationships, cultural and critical perspectives, and the future of work, while allowing students to pursue specialised interests through independent research and engagement with the range of expertise at CFI.

Interdisciplinary Cohort

Join students from philosophy, law, computer science, history, political science, economics and beyond. Different perspectives are compared, challenged and integrated throughout the year.

Flexible Research Focus

Assessments aren't tied to specific modules, so you're free to research whatever interests you within the programme's scope, guided by expert supervision.

Core & Elective Modules

A structured core provides shared foundations. Elective modules change each year to reflect the research landscape, letting you go deeper into the areas that matter most to you.

World-Class Setting

Attend seminars, reading groups, conferences and events at CFI, while drawing on Cambridge's wider ecosystem in science, philosophy, law and policy.

  • Should frontier AI models be open-sourced?
  • How should governments regulate systems they can't fully understand?
  • What do we owe digital minds, if they exist?
  • Will AI replace most jobs, and what should policy do about it?
  • How do you evaluate an AI system for risks no one has seen yet?
  • Whose values get encoded in AI systems, and whose get left out?
  • How do we design institutions that keep up with AI when the technology moves faster than policy?

Nine months of taught modules, independent research, and supervised writing

The programme runs full-time across the three Cambridge terms. Taught modules build shared foundations and specialist knowledge. A mix of essays, presentations, group work and other formats develops your ability to research and communicate independently.

Michaelmas Term

Core modules & electives. Research Essay 1 (5,000 words).

Lent Term

Elective modules & seminars. Research Essay 2 (7,000 words). Works-in-progress presentations.

Easter Term

Dissertation (up to 12,000 words). Presentation. Supervision and revision.

Taught Modules

Two core modules provide shared foundations: an introduction to key concepts, theories and debates in AI ethics and society, and a technical module building intuition for how AI and ML systems work. Students attend at least four additional elective modules from a list that changes each year.

Supervised Research

Students work individually with domain experts to produce four pieces of written work of increasing length and depth. You receive dedicated one-to-one supervision for each essay, building from shorter analytical exercises to a full dissertation. Those intending doctoral work will develop a well-planned PhD proposal.


Active, varied, and designed for the AI era

This isn't a passive lecture programme. We use teaching formats that develop skills you can't pick up alone at home with a chatbot: arguing on your feet, working in teams, thinking under pressure.


Weekly Research Seminar

Each week, a researcher or practitioner presents on a live topic in AI and society. Past and invited speakers include people from DeepMind, METR, RAND and leading universities. PhD students join too, and each session is followed by informal discussion at the pub.

Structured Debates

Argue different sides of live controversies in AI policy and ethics. We also use "anti-debate" formats where the goal is arriving at truth together rather than winning.

Group Projects & Presentations

Work in small teams on research questions and present your findings. The kind of collaboration that policy and industry roles actually require.

Simulations & Role-Play

Work through real-world scenarios: international AI governance negotiations, organisational crises, decision-making under uncertainty. Then reflect on what happened and why.

Scaffolded Research

Develop work in stages: proposal, draft, feedback, revision. The focus is on how you think, not just what you hand in at the end.

Peer Commentary & Reflection

Write short module reflections and share them with your cohort. Give and receive peer feedback on each other's thinking.

Assessments that test understanding, not just output

When anyone can generate polished text with AI, assessment has to go deeper. We test whether you actually understand what you're writing about and can defend it.

Progressive Research Essays

Four essays of increasing length (3,000 to 12,000 words), each supervised one-to-one. We ask for original analytical or empirical contributions, not literature reviews.

Written

Works-in-Progress Presentations

Present your developing research to peers and faculty. Get live feedback and sharpen your arguments before they reach the page.

Oral

AI-Integrated Assessment

Some assignments involve working with AI tools as part of the process: generating, analysing, critiquing or building on AI outputs. The point is to test your judgement, not your ability to produce text.

AI-integrated

Collaborative In-Class Work

Some assessment happens live in the classroom: group problem-solving, in-class exercises, collaborative analysis. Teachers see how you actually think and work with others.

Collaborative

Module Reflection Papers

A short synthesis after each module: key insights, an original idea, connections to your own research. These are shared with the cohort so everyone learns from each other.

Written

AI Literacy & Responsible Use

Since this is an MPhil on AI and society, we treat the programme's own use of AI tools as part of the intellectual project. Early in the year, a dedicated session covers how to use LLMs well and where they go wrong.

Prompt engineering and using LLMs for literature discovery, brainstorming and stress-testing arguments

Understanding LLM limitations: hallucination, sycophancy, reasoning failures, distributional biases

Using AI for tutoring, creative thinking, getting feedback on drafts, and exploring counterarguments

Co-designing assessment norms: what does intellectual integrity look like in an era of capable language models?

Using AI to strengthen your own reasoning: stress-testing arguments, checking consistency, surfacing blind spots


What you might study

Elective topics vary each year, reflecting the current research interests of staff and developments in the field. The following are examples of modules that have been or may be offered.

Introduction to Ethics of AI

Key concepts, theories and debates: AI capabilities and risks, bias, fairness, moral reasoning, machine decision-making, value alignment, and anticipating future challenges.

Core

Technical Foundations

How AI and ML systems are built, evaluated and deployed: from regression and classification to reinforcement learning and language modelling.

Core

AI Governance: Intelligence Rising

A strategic role-playing game exploring international AI governance — used with real policymakers in government and industry. Teams role-play states and AI companies navigating transformative change.

Elective

Law & Policy of General-Purpose AI

Emerging legal frameworks for GPAI — the EU AI Act, systemic risk regulation, governance under uncertainty, and the role of capability evaluation in law.

Elective

Evaluation of AI Systems

Why robust evaluation matters, alternative approaches, and the challenges of assessing increasingly capable systems for safety and societal impact.

Elective

AI & Social Science

Empirical approaches to AI's societal effects: public attitudes, misinformation, epistemic ecosystems, human–AI interaction and the social psychology of AI.

Elective

Consciousness in AI

Can machines have minds — or only the appearance of minds? Philosophical and neuroscientific perspectives on AI consciousness, moral status and digital welfare.

Elective

Algorithmic Fairness & Auditing

Technical and legal definitions of fairness, justice and accountability, tensions between them, and practical auditing methods. Cases from criminal justice, healthcare and finance.

Elective

AI, Race & Empire

How AI intersects with colonialism, global power and epistemic inequality. Decolonial and indigenous approaches to more just technological futures.

Elective

Ethics of AI Prediction

The epistemic power of AI: accuracy, the risks of knowing too much, classification as policy, and the ethical stakes of data-driven prediction.

Elective

AI & International Security

How AI transforms national security, military strategy and geopolitics. Autonomous weapons, surveillance, cyber capabilities and arms control challenges.

Elective

AI, Economics & the Future of Work

How AI reshapes labour markets, productivity, wealth distribution and economic policy. Automation, job displacement, new forms of work, and debates around redistribution and growth.

Elective

AI, Narratives & Culture

How stories, media and cultural imaginaries shape the development and reception of AI. Feminist, STS and critical theory perspectives on technology and power.

Elective

Forecasting & Societal Decision-Making

Tools for anticipating AI trajectories. Superforecasting, scenario planning, and frameworks for high-stakes decisions under deep uncertainty.

Elective

Module offerings and formats are indicative and subject to change. Not all modules listed will be available in a given year.


Learn from leading researchers and practitioners

The programme is directed by researchers at the Centre for the Future of Intelligence and draws on a network of contributors from Cambridge, other universities and frontier AI organisations.

Programme Co-Directors & Co-ordinator

Lucius Caviola

Assistant Professor, CFI
Co-Director of MPhil Programme
Christoph Winter

Assistant Professor, CFI
Co-Director of MPhil Programme
Lucy Cavan

Postgraduate Co-ordinator

Module Convenors & Supervisors

Modules are taught by researchers from CFI and the broader Cambridge community, spanning philosophy, social science, computer science, law, policy, HCI and design, and cultural and media studies. This means you encounter a wide range of disciplines, methods, angles and perspectives throughout the programme.

Guest Speakers & External Contributors

The programme regularly features guest lectures from researchers and practitioners at other universities, policy organisations, frontier AI labs and industry, covering AI safety, governance, philosophy, economics, law and international security.


Knowledge and skills for the AI era

Graduates leave with the conceptual tools, practical skills and professional networks to pursue research, policy, governance or careers at the intersection of AI and society.

Critical thinking: evaluating evidence, arguments and AI outputs carefully and honestly.

Clear communication, written and oral, developed through essays, presentations and debates.

AI literacy: how frontier systems work, how to use them as research tools, and where they fail.

Broad foundations across philosophy, social science, computer science, law, economics and public policy as they relate to AI.

Forecasting and decision-making under uncertainty. Tools for thinking about where AI is heading and what that means.

Research skills in AI governance, risk assessment, safety, regulation and policy.

Thinking on your feet, developed through live debates, presentations and in-class exercises.

Training in independent research, culminating in a supervised dissertation on a topic of your choice.

A launchpad for doctoral research, policy roles in government and international organisations, or positions at AI companies where analytical depth matters.


What our students say

I particularly loved the flexibility of this course. The assessments aren't tied to specific modules, so you're free to research whatever interests you. That freedom made the course especially rewarding. With the guidance of my supervisors, I had the space to develop my own ideas — and realised I wanted to pursue a PhD.

Mathilda Mulert · 2024–25

The network I got exposed to, and the signal of the master's programme, meant I could secure a full-time role at the AI Safety Institute. CFI enabled me to draw connections between topics that domain experts often missed — enabling impactful research usually only possible later in one's career.

Jai Patel · 2023–24

One of the best aspects is the diverse cohort. Coming from different cultural backgrounds, academic disciplines and professional experiences, I learned so much about AI ethics from a variety of viewpoints. Everyone encouraged me to carve my own academic path and explore intersections between AI, ethics, law and philosophy.

Zoya Yousef · 2023–24


Join the programme

We're looking for people passionate about the implications of AI, committed to interdisciplinary perspectives, and from a range of academic backgrounds and experiences.

What you'll need

  • Two academic references
  • Transcript
  • CV / résumé
  • Evidence of competence in English
  • Two writing samples (2,500–5,000 words each)
  • Statement of purpose (~600 words)
  • Research proposal (max 500 words)

Key dates

  • September — Applications open
  • October — Gates Scholarship deadline (US applicants)
  • December — University-wide funding deadline
  • Late February — Final application deadline

Precise dates and further information are available on the postgraduate admissions portal.

For queries: education@lcfi.cam.ac.uk

Study AI's biggest questions at Cambridge.

Applications for 2027–28 expected to open in September 2026.

Apply Now