I thought I was a creative scientist – until AI worked out my trick

The idea that humans’ inventiveness will always keep them one step ahead of computers may not turn out to be true, says David Sanders

May 25, 2023
Image: montage of Gulliver with a digital sunset and a board of symbols. Source: Getty/Alamy

In Part III of Gulliver’s Travels, the fictive author visits the grand Academy of Lagado. A professor there is “employed in a project for improving speculative Knowledge by practical and mechanical Operations… Every one knew how laborious the usual Method is of attaining to Arts and Sciences; whereas by his Contrivance, the most ignorant person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematics and Theology, without the least Assistance from Genius or Study.”

One immediately recognises the Contrivance as an archetype of modern-day natural language processing/generating programs – the book’s illustration of it even has an uncanny resemblance to a silicon chip. But it is clear that Swift believes that such a contraption is an impossible and potentially dangerous absurdity.

It doesn’t look like an absurdity now. Yet even as AI advances in leaps and bounds, many observers continue to insist that machines will never acquire human-style creativity. Hence, students are assured, the skills that they acquire at university to challenge, reformulate and generate new ideas will continue to be in demand through their working lives.

But is that really true? A personal narrative may be illustrative.

Early in my career, I went into the business of predicting protein structure. It was a sideline, but it was important to both my research and teaching. Greatly simplifying, folded proteins are made up of four architectural elements: alpha-helices, beta-strands, turns and loops. But an early computational approach to predicting protein structure had proved to overpredict the propensity for sequences to form alpha-helices.

This was because the database of proteins on which the program was founded had, for historical experimental reasons, an over-representation of proteins that were predominantly alpha-helical. Other programs tried to correct for the bias, but they offered only marginal improvements, and researchers began to lose faith in the possibility of algorithmic prediction.

However, the subsequent explosion in inferred protein sequence data offered a new approach. Proteins that perform the same biochemical functions in different organisms possess the same core structure but differ partially in amino acid sequence. The variation has limits, and those limits are informative. There was therefore the potential to use data from multiple protein sequences to predict a common core structure, rather than depending on a single sequence. The patterns of permitted substitutions could then be used to predict whether a particular segment of the protein would fold into an alpha-helix or a beta-strand.
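To make the principle concrete, here is a toy sketch in Python of the general idea – not the actual method described above, and nothing like AlphaFold – in which illustrative helix and strand propensities are averaged over the columns of a small, made-up alignment of homologous sequences. The propensity values, the smoothing window and the example sequences are all assumptions chosen purely for demonstration.

    # Toy illustration: average secondary-structure propensities over the
    # columns of a multiple sequence alignment instead of scoring one sequence.
    # Propensity values are rough, Chou-Fasman-like illustrations only;
    # residues missing from a table count as neutral (1.0).

    HELIX = {"A": 1.4, "E": 1.5, "L": 1.2, "M": 1.4, "Q": 1.1, "K": 1.2,
             "G": 0.6, "P": 0.6, "V": 1.1, "I": 1.1, "S": 0.8, "T": 0.8}
    STRAND = {"V": 1.7, "I": 1.6, "Y": 1.5, "F": 1.4, "W": 1.4, "T": 1.2,
              "L": 1.3, "A": 0.8, "G": 0.8, "P": 0.6, "E": 0.4, "K": 0.7}

    def column_scores(alignment, table):
        """Mean propensity of each alignment column, ignoring gap characters."""
        n_cols = len(alignment[0])
        scores = []
        for i in range(n_cols):
            residues = [seq[i] for seq in alignment if seq[i] != "-"]
            total = sum(table.get(r, 1.0) for r in residues)
            scores.append(total / max(len(residues), 1))
        return scores

    def smooth(scores, window=5):
        """Sliding-window average so a single odd column does not dominate."""
        half = window // 2
        smoothed = []
        for i in range(len(scores)):
            chunk = scores[max(0, i - half): i + half + 1]
            smoothed.append(sum(chunk) / len(chunk))
        return smoothed

    def predict(alignment):
        """Label each column helix (H), strand (E) or coil (-)."""
        h = smooth(column_scores(alignment, HELIX))
        e = smooth(column_scores(alignment, STRAND))
        return "".join(
            "H" if h[i] > 1.0 and h[i] >= e[i]
            else "E" if e[i] > 1.0
            else "-"
            for i in range(len(h))
        )

    if __name__ == "__main__":
        # Hypothetical aligned fragment from three homologous proteins.
        toy_alignment = [
            "MKAAELLEKVG",
            "MKGAELIEKVG",
            "MRAADLLDKVG",
        ]
        print(predict(toy_alignment))

A real predictor would weight sequences, model substitution patterns explicitly and validate against known structures; the point of the sketch is simply that pooling information across homologues stabilises a per-position signal that any single sequence gives only noisily.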

My lab published a prediction for the structure of a protein in which we were interested, and we later confirmed it experimentally. Subsequently, I would on occasion predict the structures of proteins for other researchers using my methods. Colleagues encouraged me to create a computer program that embodied those methods – but my interests at the time lay elsewhere. Still, I took pride in my insights and thought that I was rather clever and creative.

Fast-forward two decades and a neural-network-based program called AlphaFold has transformed protein-structure prediction. Reading about it, I learned that a significant component of this computational tour de force, co-developed with Google DeepMind, emerged from the same principles as those on which I had based my approach. But rather than drawing on my negligible contribution to the field, AlphaFold was trained on an enormous database of sequences and structures (both of which have grown exponentially in the past decades) and “recognised” the power of extracting information from multiple sequences and their restricted variation, among its other achievements.

I don’t feel that creative any more. Apparently, my insight was nothing more than humdrum pattern recognition, achievable by an electronic device. I am not trying to diminish the achievements of AlphaFold. I am merely puncturing inflated human pretensions – and calling for a reconsideration of the nature of human creativity.

At one time, we might have thought that a chess grandmaster or a Go champion with an innovative strategy was creative. No more. If a machine can outplay them both, where is the creativity located? It seems that any boundary we set will be quickly overrun.

In Part IV of his travels, Gulliver meets a race of perfectly rational horses called Houyhnhnms, to whom he struggles to explain the concept of lying. Not having a word for it in their language, they can only render a lie as “the Thing which was not”. In contemporary organisations, the “creative” team is often considered to be the marketers. But they are often purveyors of Things which are not. Is that where human creativity will finally reside – in deception?

Certainly, AI can generate lies, but can it “know” when it is not telling the truth? Is deceiving ourselves about human creativity the quintessential expression of human creativity? I do not know.

Many see AI as a threat to human employability and even to our sense of self-worth. But perhaps the true challenge facing educators is not whether we can instil creativity into our students, but rather whether we can teach them to recognise what passes for creativity – including when they are being told the Thing which was not.

David A. Sanders is associate professor of biological sciences at Purdue University.

Reader's comments (1)

The protein-structure prediction story does not really contain anything new. It has been obvious for at least 10 years if not more that machine learning algorithms would be able to perform such tasks, usually via a data-driven approach as described. The limitations when I started playing with artificial neural networks over 30 years ago were computing power, storage and a suitable language such as Python. We should be able to tackle some intractable problems now or at least get some insight, so nothing to panic about.
