Why is AI hard to define?

A good working definition of applied AI is: the acquisition, manipulation, and exploitation of knowledge by systems whose behaviours may change on the basis of experience, or which are not constrained to be predictable or deterministic.
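To make the "behaviours may change on the basis of experience" clause concrete, here is a minimal illustrative sketch in Python (not from the article; every name, action and reward signal is invented): a toy chooser whose preferred action drifts as feedback accumulates, and whose occasional random exploration makes it non-deterministic.

```python
import random

class ExperienceDrivenChooser:
    """Toy system whose behaviour changes with experience and is not fully deterministic."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = list(actions)
        self.epsilon = epsilon                        # chance of exploring at random
        self.value = {a: 0.0 for a in self.actions}   # learned estimate of each action's worth
        self.count = {a: 0 for a in self.actions}

    def choose(self):
        # Occasionally explore; otherwise exploit what experience so far suggests is best.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Incremental average: each new piece of experience nudges future behaviour.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

# The same code, given different experiences, ends up behaving differently.
chooser = ExperienceDrivenChooser(["left", "middle", "right"])
for _ in range(200):
    action = chooser.choose()
    reward = 1.0 if action == "middle" else 0.0       # invented feedback signal
    chooser.learn(action, reward)
print(chooser.choose())                               # now almost always "middle"
```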

That (applied) AI knowledge can be explicitly represented and curated by human experts, as in expert systems, or learned from data, as in machine learning.

Hybrid approaches, e.g. expert-validated machine learning, work well. Some large pre-trained models use this approach to a surprising degree.

Complex ecosystems of software systems can exhibit emergent behaviour, or intelligence. Just as ant colonies exhibit more intelligence than individual ants, AI-like behaviour can emerge from complex assemblies of ordinary software systems.

Until recently, once an AI technique was established, it was no longer perceived as AI; knowing how the rabbit is pulled out of the hat destroys the magic. This was the de facto moving-goalposts definition of AI: that which a computer can't do.

AI used to be wide but shallow: horizontally applicable, but not powerful, such as a 1990s multi-lingual summariser which, though effective, had little idea of what it was writing. Alternatively, AI could be deep but narrow: powerful only on tightly related problems.

The art of the computer scientist is explored in Professor Wirth's influential book, Algorithms + Data Structures = Programs. But some AI systems are now either creating algorithms and data structures, or acting as if they have:

GLLMs (generative large language models) have changed perceptions: AI can at last do things again, and AI systems which invent programs (self-programming computers?) are both wide and deep. Some even give an appearance of edging up from machine intelligence towards sentience; should accidental or deliberate machine sentience arrive, we won't necessarily understand or even recognise it.

With greater public understanding of AI capabilities, the label 'AI' is less frequently used simply to glamourise mundane software, though it remains a popular buzzword, replacing the meaningless 'big data'.

AI discussions often conflate its three depths. Overloaded terms help marketing but hinder understanding: 'deep learning' means a neural network with more than three layers, but is often misunderstood as 'profound learning'.
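As a purely structural illustration of that loose sense of 'deep' (a sketch assuming PyTorch, not anything from the article), the network below stacks more than three layers of learned weights; the depth is architectural, not profound.

```python
import torch.nn as nn

# "Deep" here means nothing more than stacked layers of learned weights.
deep_net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # layer 1
    nn.Linear(32, 32), nn.ReLU(),   # layer 2
    nn.Linear(32, 32), nn.ReLU(),   # layer 3
    nn.Linear(32, 1),               # layer 4: past three, so conventionally "deep"
)
print(deep_net)
```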

When systems make decisions that affect welfare, explainability becomes important. Explainability is the AI equivalent of human accountability. Arguably there is a need to make GLLMs explainable; unfortunately, by their very black-box (neural net) nature they are not. Powerful AI (which learns its own knowledge representations and reasoning techniques) might necessarily be intrinsically opaque, its decisions unexplainable.
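A hypothetical contrast, sketched in Python with invented mortgage-style inputs and a single learned neuron standing in for the vastly larger networks discussed here, of what 'explainable' and 'opaque' look like in practice:

```python
def rule_decision(income, monthly_debt):
    """Transparent: the reason for the outcome can be read straight from the rule that fired."""
    ratio = monthly_debt / income
    if ratio > 0.45:
        return "decline", f"debt-to-income ratio {ratio:.2f} exceeds 0.45"
    return "approve", f"debt-to-income ratio {ratio:.2f} is within policy"

def tiny_net_decision(features, weights, bias):
    """Opaque: the outcome is a threshold on a learned weighted sum; the numbers
    record what was computed, but carry no human-readable reason why."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return ("approve" if score > 0 else "decline"), f"score = {score:.2f}"

print(rule_decision(income=3000, monthly_debt=1500))
print(tiny_net_decision([3000, 1500], weights=[0.0004, -0.0011], bias=0.2))
```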

Misunderstanding AI characteristics can lead people to try regulating AI techniques, but it is only a system's effect that can be regulated, not the means used to achieve it. A wrongly declined mortgage has equal impact whether due to a requirements mistake, biased dataset, database error, bug, incorrect algorithm, or misapplied AI technique. Regulating AI as if it were just clever software would impinge on the fundamental characteristics from which its capability flows, and inhibit its benefits. A reasonable requirement would be that any system, not just AI, which impinges on welfare must be able to explain its decisions.

As a colleague observed, defining AI is like defining time: we all think we know what it means, but it is actually hard to pin down. Just as our understanding of time changes, appropriately enough, with time, so AI itself may cause us to change our definition of AI.

Andrew Lea (FBCS), with the connivance of the BCS AI interest group and drawing on his four decades of applying AI in commerce, industry, aerospace and fraud detection, explores why AI is so hard to define. He has been fascinated by AI ever since reading Natural Sciences at Cambridge and studying Computing at London University.
