What will the rise of AI really mean for schools? – TES

Advances in artificial intelligence (AI) are accelerating at breakneck speed. Systems like ChatGPT are approaching, and by some measures exceeding, human-level performance in many domains.

But what does the growth of these systems mean for schools?

We see tremendous potential for these technologies to enhance and augment human capabilities, making us smarter, more efficient and able to solve problems that currently seem impossible to manage.

However, we also see significant downsides. Without thoughtful intervention, AI could diminish human agency, stifle creativity and potentially stunt our collective progress.

Nowhere are the stakes higher than in education. Schools and universities have helped generations climb the ladder of knowledge and skills. But if machines can soon out-think us, what's the point of learning? Why invest time and effort acquiring expertise that could soon be obsolete?

To explore these questions, we recently co-authored a paper analysing the staggering pace of progress in AI and the potential implications for education.

Systems like GPT-4 already outscore well over 90 per cent of humans on academic tests of literacy and quantitative skills. Many experts predict AI will reach human-level reasoning across all domains in the next decade or two.

Once achieved, these artificial general intelligence systems could quickly exceed the combined brainpower of every person who has ever lived.

Faced with these exponential advances, how might society respond? We foresee four possible scenarios, all of which would have different implications for schools:

One option is that governments recognise the risks and halt further AI development, through regulation or restricting hardware supply. This might slow things down and buy some time.

Bans are hard to enforce, often porous, and would mean forfeiting many of the potential benefits that carefully governed AI systems could bring. However, if AI advances are curtailed at, say, GPT-4.5, there is a greater chance that humanity stays in the driving seat and we still benefit from education.

In fact, with suitable guardrails, many of the recent AI advances might greatly accelerate our thinking skills, for example by providing high-quality supplementary AI tuition to all students and by acting as a digital personal assistant to teachers.

A second pathway is that AI takes over most jobs, but legislation forces companies to keep employing humans alongside the machines, in largely ceremonial roles. The risk here is that this fake work infantilises people.

As AI thinking accelerates, our stunted contributions could create bottlenecks, leaving us disempowered spectators rather than active participants.

This pathway also requires only a basic level of education - we would simply need to turn up and read out the script displayed in our AI glasses. After all, our own thinking and words would never exceed the abilities of the machines.

Wanting to remain competitive, some might opt to biologically or digitally upgrade their brains through gene editing or neural implants. This might sound like science fiction, but is not beyond the realm of possibility - and such a scenario would have profound implications for education.

We might be able to literally download new knowledge, skills and abilities in milliseconds. No more need for schooling.

But in making ourselves more machine-like, would we risk losing our humanity?

A final scenario is that we accept economic irrelevance and embrace universal basic income - paid for by taxing the fruits of AI labour. Freed from work, people would focus on sports, hobbies, rituals and human connections.

But devoid of productive purpose, might we lose our vital force and struggle to get out of bed in the morning?

All these paths are, in different ways, problematic. So, before we sleepwalk into one, we need urgent debate on the destination we want.

Our paper offers 13 pragmatic proposals to regulate and slow down AI, to buy time for this discussion by, for example: requiring frontier AI models to be government licensed before their release; making it illegal for systems to impersonate humans; implementing guardrails to stop AI systems from giving students the answers; and making system developers accountable for untruths, harms and bad advice generated by their systems.

At the same time, we must also re-examine education's role in society. If humans can add only marginal value working alongside AI, schools may need to pivot from preparation for employment to nurturing distinctly human traits: ethics, empathy, creativity, playfulness and curiosity.

As AI excels at information retrieval and analysis, we must double down on contextual reasoning, wisdom, judgement and morality. However, even here, we must be realistic that (eventually) AI is likely to be able to emulate all these human traits as well.

Some skills, such as literacy, might also become less essential - for example, if we can learn through verbal discourse with AI or by porting into realistic simulations.

Yet foundational knowledge will likely remain crucial, enabling us to meaningfully prompt and critique AI. And direct instruction, whether by teacher or AI, will still help students to grasp concepts more quickly than trial-and-error discovery. We must, therefore, identify the irreducible core of timeless human competencies to pass on.

None of this is preordained. With vigilance, foresight and governance, AI can uplift humanity in the same way that prior innovations have. But we must act decisively. Timelines are highly uncertain. AI capabilities could exceed our own in a decade or two. Either way, the hinge point of history is now.

We hope these proposals stimulate urgent debate on the society and education system we want to build - before the choice is made for us.

Dylan Wiliam is emeritus professor of educational assessment at the UCL Institute of Education. John Hattie is emeritus laureate professor of education at the University of Melbourne. Arran Hamilton is group director, education, at Cognition Learning Group. His contributions had editorial support from Claude AI
