Future trajectories for AI-enabled personalised learning


This is the final article in a three-part series drawing insights from a multidisciplinary roundtable discussion on AI and the future of learning curated in partnership with Google DeepMind.

So far, we’ve explored the intersection between AI and the current landscape of learning, and addressed the key opportunities and challenges of implementing AI-enabled personalised learning. Now, we map out likely future scenarios before identifying the critical next steps that would enable AI for lifelong learning.

All three articles draw on the Three Horizons (3H) model, a conceptual framework that helps us think about long-term social change – in this case, change related to education and learning.

In this three-part model, H1 refers to a dominant system that is becoming increasingly unfit for purpose as the world of learning (and its needs) changes. H2 refers to innovations that emerge and begin to disrupt the dominant system in response to H1’s growing irrelevance. H3 represents radical ideas developing in niches: approaches that differ drastically from the current system but will become the dominant system of the future.

In part one, we focused on the current state of education systems. In this final part, we use the 3H model as a tool to project how AI-enabled personalised learning could appear in each horizon.

Exploring the future trajectories

AI for personalised learning is optimising current education systems

(H1 / short term / 1-3 years)

H1 is already in motion: AI is being applied to education systems to streamline operations and improve overall efficiency. Over the next one to three years, AI-enabled personalisation will not be used to change the education and learning system itself but to mitigate its wider challenges and shortfalls. In this horizon, AI is not questioning or changing what we learn, how we learn, why we learn, or who is and isn’t involved in learning. It is simply helping current educational systems work more effectively, in some cases at lower cost, and giving students new ways to engage with material.

The 3H model as described by Leaders’ Quest.

In this horizon, AI can help teachers become more efficient and repurpose their time for greater productivity. Its expanded use in non-teaching aspects of education, from tackling school attendance to assessment design and grading, is expected to alleviate administrative burdens for teachers. The time created by these applications can then be devoted to personalised instruction. AI-driven tutoring platforms, such as Brainly and Khan Academy, can also provide personalised tutorial experiences tailored to each student’s learning needs outside the classroom.

This short-term application does not address the need for systemic change within education, driven by a more radical shift in the skills landscape for learners. This would create a situation in which the deployment of a fit-for-purpose technology is constrained by a traditional system. Elsewhere, applications are likely to be limited to individual use cases that patch over an ineffective system.

AI for personalised learning is driving new disruptive trends, but does not reform the system

(H2 / medium term / 2-5 years)

H2 redefines how we develop, use and improve tools to support the current education and learning system. It also opens windows into what might be possible, at least in the near term. It serves as a tipping point that determines whether innovations will continue to nudge the current paradigm towards disruption, or whether they will fail to generate enough momentum and fizzle out.

In this second horizon, the growing capabilities of AI promise a more skills-oriented and gamified approach to learning. Demand for technological skills, social and emotional skills, and higher-level cognitive skills is expected to surge over the next five years. Teachers and learners would meet this demand by shifting to a skills-first approach and engaging learners via AI-powered gamified learning platforms.

In this horizon, lifelong learning becomes essential due to the rapid pace of technological change and the disruptive impact of polycrises on jobs and industries. The need for rapid reskilling calls for a more flexible and adaptive education and lifelong learning system. One roundtable participant, representing a not-for-profit organisation working on AI in tertiary education, highlighted that AI needs to help reframe the paradigm, not support it. The goal shouldn’t be to use AI merely to help students pass exams, but to rethink the exam and learning format in its entirety. The priority would instead be teaching the right skills to serve the workforces of the future, using the right methods and assessments.

We can already observe this trend, with the International Baccalaureate permitting the use of ChatGPT in exams. On one hand, this response places pressure on the current system as we begin to question the purpose of essay examinations. On the other, it forces us to ask whether the ongoing integration of ChatGPT is happening without sufficient consideration of its impact on future skills.

Within Horizon 2, there is a danger that the fast adoption of AI applications perpetuates, at scale and speed, risks already found in the education system. The damage caused by using AI-enabled personalisation in a system not yet fit for purpose could be vast and difficult to undo.

AI for personalised learning is disrupting and transforming the skills and learning system

(H3 / long term / 5-10 years)

Horizon 3 emerges as a direct result of the innovations taking place in H2, which create space and energy around AI-enabled learning. What once seemed radical in H1 is now commonplace and settling as the new paradigm. In this horizon, AI-enabled personalised learning has become part of a broader disruption of the current system by facilitating a shift in the purpose of education – namely how, what and why we learn. The use of AI in education to increase much-needed skill development has helped to inform policy changes and launch wider education reform. This new learning paradigm focuses on the need to develop skills and abilities fit for the future, using methods tailored to each individual learner.

Learners are now placed at the heart of decision-making and can choose their own learning paths. This shift to a more autonomous and personalised learning system increases the degree of ownership and creativity each learner can apply to their own journey, reflected in their growing motivation to learn. In this horizon, realising genuine personalised learning requires strategic planning at the national level: gathering individual learner data points and coupling them with the latest societal trends to craft tailored learning paths.

Notably, Estonia, a top-performing country in the Programme for International Student Assessment (PISA), has initiated a national project dedicated to constructing an infrastructure for personalised learning paths. The goal is for educators to set their teaching strategies and for learners to indicate their preferences, with AI processing both to deliver a personalised learning path for each learner.

Insights on AI and education along the three trajectories

We have compiled insights from the roundtable into a set of recommendations for different stakeholders, covering the opportunities and innovations to consider at each horizon.

AI-enabled personalised learning within skills and learning systems in:

H1: optimising current skills and learning systems

  • Education/learning

Efficient teaching: support teachers via expanded use of AI in administrative aspects of education to free up time for teaching.

  • Policy

Adjust policies to enable the use of AI in classrooms and school administration.

  • Tech

Refine functions of existing edtech for better user experience and to drive cost and time savings.

H2: providing short-lived trends that do not reform the skills and learning systems

  • Education/learning

Adaptive learning: technology-based; a rise of short-lived trends such as a skills focus and gamification that bypass deeper learning challenges.

  • Policy

Support policies that allow space for ethical experimentation. 

  • Tech

Experimental edtech that harnesses AI and tailored learning plans to bridge learning gaps and increase student engagement.

H3: disrupting and transforming the skills and learning systems

  • Education/learning

Personalised learning: blends of adaptive and customised learning; a transition to learner-owned personalised learning that enables lifelong learning.

  • Policy

New regulatory frameworks designed to enable AI for lifelong learning, and reform educational curriculums and standards to be whole-person, learner-led and lifelong.

  • Tech

Develop AI-driven, people-centred learning platforms that offer highly personalised learning experiences aimed at harnessing and growing the potential of each individual learner.

Consensus and key considerations for AI in the long term: what happens next?

To move productively towards the world we envisaged in H3, there are actions we must start taking today to mitigate the risks outlined in part two. Below, we outline key recommendations from our discussions as an overview of next steps for key stakeholders.

WHEN is the time to act and regulate AI in education and learning systemically?

  • As one of the roundtable participants succinctly put it, "the time for acting is now, but we need not rush it. We need to do both." The application of AI in education is a question for today, not tomorrow, yet answering it does not require matching the rapid pace of innovation.
  • The challenge lies in navigating the rapid pace of AI evolution with measured and calculated strides. With education institutions each responding to AI in their own ways, inconsistencies are inevitable, leading to concerns over fairness and coordination.
  • Experts at the roundtable criticised the government’s recent ambiguous stance, as indicated by the Department for Education’s statement, and suggested that government should take a leading role in this collective endeavour. The call to action is therefore both urgent and considered, underscoring the need for balance in our progress.

WHAT is the purpose of education and the role of AI in helping educators and learners achieve their goals?

  • Conversations about AI in education need to shift from a simple embrace-or-ban dichotomy to a deep exploration of how evolving tools redefine our education system. The rapid pace of change will naturally resolve existing technical issues, so we should focus on the real challenge: making the understanding and appreciation of human intelligence in relation to AI paramount. What does this mean for human-centric and automation-proof skills, and how do these re-orientate the focus of our dominant education and learning systems?
  • With the swift pace of technological advancements, the barriers of costs and digital skills are expected to diminish. Interacting with AI could become as straightforward as speaking out loud.
  • The changing landscape necessitates identifying and developing the complementary skills and values essential for effective interaction with AI.

WHO should lead and be involved?

  • Roundtable participants unanimously advocated for a broader spectrum of voices in decision-making around AI in education and personalised learning, transcending the confines of industry and government alone.
  • The perspectives of children, parents and civil society – the primary users of this technology – must feature prominently in regulation discussions.
  • Designing policy and regulation demands democratisation and respect for human rights in the digital age; participants criticised the slow pace of governmental responses and the degree of control exerted by tech giants.
  • Encouraging cross-collaboration between organisations, institutions, government and civil society is crucial to fostering mindful growth and productive outputs.

HOW can we implement AI in education effectively?

Approach

  • Technology entities should actively collaborate with civic and public institutions, incorporating diverse perspectives for a holistic approach.
  • Creating open channels between educators, AI researchers and edtech tools is fundamental to nurturing productive collaborations.
  • Favour small-scale experimentation to test new approaches, learn and disseminate findings across the wider community.
  • Adopt a sandboxing approach to encourage swift but thoughtful moves. Entice businesses to run small-scale trials until the benefits are empirically shown to outweigh the risks, and only then consider broader implementation. Without careful planning, AI could cause damage at a scale and speed too great for humans to fix.
  • There is a pressing need for empirical evidence to validate the effectiveness of AI in education. We need tangible proof that AI initiatives work before they are widely adopted.
  • Encourage more research and debate to debunk myths and uncover the reality of AI's impact on education. For example, personalised learning is a complex issue that requires careful dissection and understanding.
  • Rather than focusing solely on tech replacements for students, attention should shift to who holds the power and access to technology.

Funding

  • Current funding for AI tools predominantly flows from private sources, prompting the need for increased civic and public investment to mitigate cost barriers, distribute accountability, reduce bias and improve the quality of AI resources.
  • Investment in AI in education should shift its emphasis from intelligent tutoring systems to empowering teachers and enhancing their capabilities.

Regulation

  • Regulation serves as a buffer against the domination of tech giants while establishing a more equitable playing field to enable small-tech innovation.
  • The individual responses of educational institutions to AI create discrepancies and pose challenges in fairness and coordination. A ‘guiding position’ from the government is necessary for orchestration and consistency.
  • A critical omission from the regulation discourse around AI in education is the absence of children, parents and civil society. The focus has been disproportionately skewed towards industry and government.
  • The pace of legal frameworks and regulatory structures is notoriously slow, often lagging behind technological advancements. The process of implementing and enforcing regulatory measures, such as GDPR, often takes years, by which time the tech landscape may have significantly evolved. This has led to a lack of effective deterrence for tech companies in terms of data and cookies usage.
  • Regulation should foster a robust dialogue between tech companies and society at large, advocating for public interest and ethical considerations in the technology landscape. Profits derived from children's data should have societal implications.
  • The insights and data obtained from children should serve societal benefits, redefining how we view data ownership and rights.

It is only through continuous collaboration among a diverse group of experts and a variety of lived experiences that AI-enabled personalised learning can have an equitable future. Learning as we know it is on the cusp of disruption, and AI-enabled personalised learning promises to be a valuable addition to the learning and education sector. This space has incredible potential, which can be unlocked by regulation, tech and practice if the right voices are included. The starting questions for this process must be open-minded: ‘why do we learn, and how might we learn and relearn at our best?’ rather than the narrow ‘what AI could or should we use?’

We will continue our conversations on AI on Circle, our online platform for Fellows to connect and engage with each other and our ongoing work.

Google DeepMind also shared its gratitude for the participants’ contributions and expertise in exploring this topic. This investigation, led by the RSA, is just one roundtable in a series of partner-led, open discussions on the future of AI systems in society.

Google DeepMind previously published A Blueprint for Equitable AI in partnership with the Aspen Institute, as well as a report on how we can use AI to improve employment outcomes for people with disabilities with the American Association for People with Disabilities. Google DeepMind hopes to publish collated findings from all roundtables later this year.

We are continuing our partnership with Google DeepMind through our Student Design Awards, where they are partners on an AI and climate change brief for our 2023-24 and 100th year. Find out more by visiting the Student Design Awards area of our website.

The illustrations used in this blog are taken from Google DeepMind’s Visualising AI collection. They are artists’ illustrations of artificial intelligence and learning, created by Novoto Studio.


Join the discussion


  • Thank you for this insightful article and the work you do in this space. I fully agree that in immature systems, AI can cause unintended, hard-to-reverse damage, and that user agency in all horizons, but especially in H3, is vital. One of the myths in personalized learning is that it is a sufficient paradigm for transforming education. If H3 achieves precise personalized learning, it will be effective at blending adaptive and customized learning. However, that is only one side of the educational coin. The other side - standardized and collaborative learning - should also undergo optimization through AI-enabled platforms. For holistic learning (educating the whole person, as an individual and as a member of a collective), we need to intentionally personalize but also de-personalize learning. I would therefore hope that H3 can be about precise balancing points of the two learning pathways.


Read the other blogs in our AI and the future of learning series with Google DeepMind

  • The shifting landscape of learning and AI


    Read our first of three blogs summarising our roundtable with Google DeepMind on how AI might enable personalised lifelong learning to drive better outcomes for people, places and planet.

  • AI-enabled personalised learning: opportunities and challenges


    In blog two of three in our series with Google DeepMind, we explore the opportunities for AI-enabled personalised learning as well as the projected barriers to achieving this future vision.