
What to read this week: The Laws of Thought by Tom Griffiths


Dwight Ellefsen/FPG/Archive

The Laws of Thought
Tom Griffiths (William Collins (UK); Macmillan (US))

FOR nearly 70 years, cognitive researchers have been fighting a civil war. On one side is computationalism, which argues intelligence is best explained by rules, symbols and logic that can be expressed in equations. On the other is connectionism, where intelligence emerges from vast, connected networks modelled on the brain’s neurons, and no one component is intelligent but somehow the system as a whole is.

That battle has shaped everything from cognitive science to the artificial intelligence that is now transforming the global economy. This month, two new books wade in from opposite sides. For me, the standout is The Laws of Thought: The quest for a mathematical theory of the mind. In it, Princeton professor Tom Griffiths traces the long attempt to formalise thinking in mathematical laws, explaining why modern AI is the way it is – and what the future may hold.

Griffiths frames the story around three competing and increasingly entangled mathematical ways of formalising thought: rules and symbols, neural networks and probability. The first treats thinking rather like problem-solving – break a task into goals and sub-goals, then navigate it with formal steps. It powered early AI, but also showed why human common sense is so hard to bottle: the number of rules an AI had to follow soon spooled out into the tens of millions.

Neural nets trade explicit rules for learning from examples, building intelligence from many simple units whose interactions produce complex behaviour. This is (sort of) how humans operate. Probability and statistics add the third ingredient: uncertainty. Minds don’t have access to perfect information, and what makes us human is how we weigh evidence and update our beliefs.

For Griffiths, none of the three frameworks is enough. Realistic accounts of intelligence, whether human or machine, will blend all three. He makes his case historically, looking at how humans have tried to map the mind’s processes using mathematics, drawing on archives and interviews with researchers. As a result, his book is detailed and engaging, if a bit ponderous.

A different tack is taken by neuroscientist Gaurav Suri and cognitive scientist Jay McClelland in The Emergent Mind: How intelligence arises in people and machines. They argue that the mind is an emergent property of interacting networks of neurons, biological or artificial, which can generate thoughts, emotions and decisions. The book draws on McClelland’s history as a pioneer of connectionism.

The two books offer interesting, and contradictory, takes on the generative AI revolution. For Griffiths, a large language model (LLM) confirms his hybrid vision: it is impressive, but hallucinates and stumbles, and a symbolic layer will be needed to fix it. For Suri and McClelland, the same LLM is a vindication: it is awe-inspiring how much reasoning emerged from a network alone.

The problem with The Emergent Mind isn’t so much its thesis as its delivery: the tone flips between folksy asides and clunky phrasing. Explaining the maths and science was always going to be tricky, and neither book completely delivers, though The Laws of Thought comes closer, because telling the history of AI means focusing on what each framework can and can’t explain.

The Emergent Mind has a more provocative manifesto, with the authors seeing no fundamental barrier to more autonomous, goal-driven AI emerging from purely neural architectures. As a result, it can feel less rooted in reality.

Griffiths’s book, however, leaves you with a sturdy sense of the “languages” we have to describe thought, and of why the future may well lie in their messy overlaps.

Could that future even signal peace between the two camps?

 

Two other great books on machine intelligence

 


Algorithms to Live By
by Brian Christian and Tom Griffiths

This is a lively, non-technical tour of how ideas from computing can illuminate and improve everyday decision-making. Co-written by Griffiths a decade ago, before the ChatGPT revolution, it remains relevant.

 


Rebooting AI
Building artificial intelligence we can trust
by Gary Marcus and Ernest Davis

Current neural networks can be impressive but brittle, this book argues. It makes the case for hybrid systems that recover strengths from the rules-and-symbols approach – one of the three mathematical frameworks in Griffiths’s new book.

 

Chris Stokel-Walker is a tech writer based in Newcastle upon Tyne, UK

