Is it ethical to use complex mini-brains for artificial intelligence?

Brain organoids could be more effective than silicon-based AIs at certain tasks. But as they grow more complex, when should we step in to protect their welfare, asks Michael Le Page.

DO WE want a future in which data centres full of living, bodiless brains carry out various tasks for us? That is the question raised by the work being done by at least three teams around the world.

In 2021, I reported on how Brett Kagan at Cortical Labs in Australia was growing flat sheets of mouse and human brain cells, hooking them up to electrodes and getting them to play games such as Pong. “We often refer to them as living in the Matrix,” he told me at the time. “When they are in the game, they believe they are the paddle.”

Kagan has moved on to working with more complex brain organoids, three-dimensional “mini-brains” that can be grown from stem cells. In February, he and 20 others published a kind of manifesto calling for the development of “organoid intelligence”. Then, in March, it emerged that two other teams are doing similar experiments.

Why? The argument is that, while artificial intelligence systems such as GPT-4 are doing some amazing things, this is being achieved by making the systems bigger and bigger and using huge amounts of energy to train them on ever more massive data sets.

Animal brains are very different and are more efficient. We animals can learn from seeing just a few examples of something, and even a human brain uses only around 20 watts of power – less than many laptops. So the idea is that living brains, or at least living brain tissue, could be much more efficient than silicon-based AI for some tasks.

I do find this idea disturbing. A “brain in a vat” would be an utterly helpless slave, lacking the ability to sense anything other than what its owner chooses.

But, then again, every year we raise billions of thinking, feeling animals to slaughter for food. A hen in a battery farm is a helpless slave too. Is it any different ethically to use brains in a vat to do work for us, providing they aren’t functioning at human level?

For now, brain organoids are nothing like real brains. They are disorganised bunches of brain cells just a few millimetres across that are nowhere near even a simple animal brain. There is general agreement that they aren’t aware, conscious or able to feel pain and emotion.

But researchers are creating more and more sophisticated brain organoids, mainly for studying brain diseases. Recent work includes merging human brain organoids with rat brains. If it proves possible to profit from the work of brain organoids, there will be an even stronger incentive to develop yet more complex ones.

What should “organoid welfare” involve if they do start becoming aware? How much rest and sleep will advanced organoids require? Should they be allowed to range freely over virtual landscapes during breaks?

Most importantly, where do we draw the line? The authors of the organoid intelligence manifesto say we need to start establishing the boundaries of what is acceptable now. But it appears to me that we still don’t know enough about consciousness and feeling to draw clear lines. Should the line sit at dog level? Dolphin level? Octopus level?

Maybe this won’t ever become an issue. Based on experiments done so far, it is hard to see how companies could turn a profit anytime soon. Instead, it seems to me that it will be more practical to make silicon-based AIs mimic biological brains than to turn biological brains in vats into AIs. But is this any better ethically?

I instinctively assume that any cell-based brain-like thing must be more aware and feeling than any silicon one, but I can’t logically justify that. Perhaps we should be worrying about the welfare of future AIs just as much as we do about how AIs will affect human welfare in the future.

Michael Le Page is an environment reporter at New Scientist.

This article has been sourced from New Scientist for the benefit of our readers. We strongly recommend reading the original article.
