This is the second of three articles drawing insights from a roundtable discussion on AI and the Future of Learning curated in partnership with Google DeepMind.
The discussion examined how artificial intelligence (AI) might enable personalised lifelong learning to drive better outcomes for people, places and planet. The series reflects the thoughts and views shared by various educators, policymakers, entrepreneurs and practitioners. Part one - The shifting landscape of learning and AI - examined the current landscape of learning, AI, and where they intersect.
Here, we explore both the future opportunities for AI-enabled personalised learning, through the lenses of viability, desirability and feasibility, and the projected barriers to achieving this future vision.
As an emerging field, ‘AI for learning’ holds a wealth of untapped potential – and roundtable participants were eager to interrogate the consequences of implementing this technology and the scenarios that could play out as a result.
While schooling is an essential part of how we learn and educate, this roundtable took a more holistic view of how AI might enable educational touchpoints throughout our lifetimes.
In this piece, we’ll present opportunities and challenges, through three lenses:
- Viability: whether AI presents a future that makes personalised and lifelong learning better
- Desirability: whether there is a broad positive appetite for this area across stakeholders
- Feasibility: whether our systems are equipped with the necessary resources and levers to make this opportunity area a reality.
Opportunities for AI-enabled personalised learning
Catalyst for improving and diversifying access to education
AI is demonstrating its viability in the education sector by revolutionising access and diversifying learning methods. It's not merely about mainstream learners; AI could transform learning opportunities for all, particularly the marginalised and those excluded from traditional settings. There has never been a better time to enhance access to learning: as one education charity leader suggested, the restrictive, predetermined paths of the mainstream pre-18 education system lead to compounded feelings of inadequacy.
If designed with inclusion at its heart, AI for learning could ensure that everyone receives an education tailored to their unique needs. An education policy advisor explained how leveraging AI to enhance localised, real-world learning experiences could be vital for migrant communities adjusting to new languages and surroundings. There is also untapped potential in shaping AI to tailor and serve the unique needs of neurodiverse learners. As we venture into a future of lifelong learning, AI can provide a means for inclusion for learners with highly personalised needs.
Facilitate lifelong learning
As emphasised by one edtech charity leader, AI's desirability is evident in its growing presence in lifelong learning conversations. An education policymaker suggested that AI empowers individuals to transition careers, citing nurses as one example. With an average age of 30 to 35, many nurses have previously felt limited in their career trajectories, but can now access AI-enabled learning tools and resources to explore new pathways.
AI will make it ever-more possible for individuals to become lifelong learners in a constantly transforming world fuelled by rapid tech advancements. Tom Kenyon, the RSA's Head of Enterprise Design, stated that defining 'future skills' becomes a moving target within this dynamic environment, and the true differentiator becomes one's ability to learn continuously.
Increasing teaching capacity
The workload teachers face today underscores the value of AI in education. Alongside their primary role, educators are swamped by administrative tasks, constant reporting, and continuous communication with students and parents. An AI team lead for an education charity emphasised this burden, stating that on average each primary school teacher produces tens of thousands of words for student-related reports alone, including weekly reports on every single child.
Restricted from using AI due to safeguarding concerns, teachers often sacrifice their personal time to ensure classroom quality. This contributes to the trend of educators leaving the profession within just a few years. AI holds the potential to alleviate this problem by easing the administrative burdens of time-pressured educators at work.
Deliver student-centred learning
AI stands out for its capacity to deliver personalised learner goal-setting, feedback and journeys, making student-centred learning a genuine prospect. The value of quality feedback in education is indisputable, with one education charity AI team lead pointing to AI’s potential to provide personalised, instant and iterative feedback.
AI's true power lies in differentiation, seamlessly adjusting to individual student needs, a task that is consistently challenging for educators to manage. As one practitioner highlighted, they cannot provide 36 different children with 36 different learning pathways, but they can work with AI to create those adaptive learning pathways and help all students. By addressing individual learning needs, AI ensures each learner is on track to meet their unique potential and interests, while owning their respective educational journeys.
Identifying the challenges with AI-enabled personalised learning
There are also a number of challenges surrounding the use of AI in learning. Together with roundtable participants, we identified the following challenges, along with ways to overcome them and mitigate their risks.
Access, inclusion and scaling failure
Inequitable access and inclusion, and the risk of scaling failure, were the most pressing challenges identified for viability. How do we ensure the benefit of AI personalised learning is for the many, not just for the elite few? How do we adopt AI in learning so that it enhances, instead of damaging, learning experiences?
1. Access and inclusion
Access to technology is already grossly imbalanced globally and AI threatens to widen this digital divide if not carefully considered. Barriers to access can take various forms; cost is a key factor, with paywalls deciding who can afford to use the technology to benefit their learning. This financial barrier could exclude certain students from better learning opportunities, or give those with a financial advantage even more resources, thus exacerbating inequalities.
In the roundtable, one AI researcher highlighted language and geography as other barriers, with most AI tools designed to perform best in English and lacking sufficient data in different languages or different non-English speaking contexts. This means AI is likely to work better and provide higher-quality intelligence for Western and English-speaking cultures, perpetuating global knowledge and access inequities. That said, others pointed out that this technology could improve outcomes for learners who want to operate in English but are not fluent.
2. Scaling failure
Regardless of who is building it, the use of AI is growing exponentially in an unregulated space. As the use of AI scales, its power and influence increases. Concerns were expressed about AI-enabled learning causing more harm at scale and pace if it were used to optimise a failing system, such as streamlining current assessment processes that are failing learners. One tech-focused charity leader explained that experimenting at a small scale, testing systematically and learning as you go, helps practitioners to understand likely impacts while managing risks in a controlled environment.
Trust and bias
Trust and bias were the two major challenges identified within the desirability lens. How can we build tools with datasets that don’t replicate the current flawed system, and encourage learners to use them safely and responsibly?
A major challenge to integrating AI into education and learning systems is an inherent lack of trust in the current system. A university professor specialising in AI in education articulated that the problem with trust is endemic within the current transactional education system, which has the singular purpose of getting people into jobs.
There is a disconnect between teachers and learners, which comes to a head on the issue of cheating. This could become even more volatile as some AI tools make it easier to complete assignments. Instead of banning AI tools for fear of misuse, we need to educate and trust learners to use AI tools safely and responsibly, and question how automated generative tools can work together with education to retain the spirit of assessment. This means creating a trustworthy environment, while simultaneously urging edtech organisations using AI to create tools that empower learning providers and learners.
From anti-Black bias to gender bias in credit card applications, bias in datasets has significant real-world impacts. This is because AI relies on the original dataset it is trained on to execute its functions. Creating diverse, rich datasets to ensure data equity is one priority for an equitable future of AI. A former Student Design Award winner highlighted a potential dangerous consequence of AI-ingrained bias within the education system affecting algorithmic assessment and wrongly restricting a learner’s results.
Inclusive datasets need to be built to prevent or mitigate bias. To provide an accurate representation of society, everyone needs to be involved in creating these datasets. But there will need to be a level of trust and transparency to encourage those most excluded to willingly engage and participate in these datasets.
Data sharing, privacy and regulation
Within this lens, the two challenges identified involved data sharing and privacy, as well as tensions about regulation. How do we manage and regulate AI tools in edtech so that learners are protected?
1. Data sharing and privacy
A major concern in the room was how children's data is managed: how do we protect them and their data when they don't have agency or choice in how it is used, or an understanding of what data they're generating? A senior academic researcher explained that data protection and privacy are only as good as the edtech company's policies, and children's data could be easily exploited if interoperability was mismanaged between their specific AI learning technologies and other sites used for educational resources.
An AI and human rights lawyer suggested there needs to be more engagement between tech companies and wider society; any data shared by children that could be monetised or used for insights should be governed by national policies and regulations with safeguarding as the first priority. There is precedent for this in existing technology regulations. Within the community, parents and citizens should be educated to understand the value of AI and how they can participate, protect and educate younger learners.
Data ownership was also raised, with AI in edtech currently an unregulated space. As one teacher from an education social enterprise asked, if you want to ‘shift’ or port learning experiences to another platform, do you own your knowledge? How much of our personal data would we need to give to an AI system to get the benefits they offer?
Comparisons to personalised health were made, where a level of voluntary data release is given to receive benefits from a private company in the form of personalisation. The requirement to give personal data to reap the benefits of AI-enabled learning must be considered and regulated between learning providers. This also poses questions about whether new AI business models are needed that disentangle data sharing from monetisation.
2. Regulation shortfalls
As one edtech organisation founder stated, regulation typically follows the growth of technology – so how do we best prepare for what is to come? The speed of regulation needs to match the speed of AI growth and applications while also having the time and space to give policies the careful consideration they need. This becomes more challenging when different education and learning environments need differing policy priorities – one academic remarked that universities work in three- to five-year cycles, and statements or policies put in place will be amended over time. More discussion concerning regulation will be covered in the third article in our series.
Navigating the challenges of AI-enabled education and learning is undeniably complex. But these hurdles are not insurmountable barriers; addressed well, they become stepping stones towards the profound structural benefits AI can bring to the lifelong learning landscape. By framing the opportunities and challenges through the lenses of viability, desirability and feasibility, we start to see areas of potential emerge, and the interconnections between key factors at play, to inform next steps.
In part three, we conclude our series by using the Three Horizons model (introduced in the first article) to explore future trajectories for AI personalised learning, and outline areas of consensus and consideration among diverse stakeholders who hold the levers to transform AI for personalised lifelong learning in the long term.
The illustrations used in this blog are lifted from Google DeepMind's Visualising AI collection. They are an artist's illustration of artificial intelligence and learning. They were created by Rose Pilkington.
Read the other blogs in our AI and the future of learning series with Google DeepMind
Alessandra Tombazzi, Joanna Choukeir, Natalie Lai