The limits of AI…


A team of mathematicians and AI researchers discovered that, despite the seemingly boundless potential of machine learning, even the cleverest algorithms remain bound by the constraints of mathematics.

Research showed that a machine’s ability to actually learn – called learnability – can be constrained by mathematics that is unprovable. In other words, it is like handing an AI an undecidable problem: one that no algorithm can settle with a true-or-false answer.

The team investigated a machine learning problem they call ‘estimating the maximum’ (EMX), in which a website seeks to display targeted advertising to the visitors that browse the site most frequently – although it isn’t known in advance which visitors will visit the site. The problem turns out to be connected to the continuum hypothesis, a famous question in mathematics that Gödel also investigated.

Like the incompleteness theorems, the continuum hypothesis concerns mathematics that can neither be proved nor disproved from the standard axioms, and given the conditions of the EMX example, at least, machine learning could hypothetically run into the same perpetual stalemate. (1)

We believe in science.

But the only thing science has proved is that it cannot prove anything.

Gödel’s incompleteness theorem showed that whatever we do, we will never be able to prove everything in the context of our limited theories. No matter how many axioms you choose and how carefully you choose them, you will never be able to describe the cosmos in its totality in an objective way as proponents of scientism would like to believe.

Now we have built big computers.

With the hope that they will answer everything.

But they cannot answer anything.

Because there is nothing to answer in the first place.

In a limited world there is no reason to analyze the Monad.

In an immeasurable cosmos you cannot count beyond One.

The maximum is zero.

The minimum is infinite.

A computer struggling to make sense of the problem. A man standing beside the computer trying to make sense of the computer. A bird flying by. Poor man…

Counting. Playing music.


Bees can solve seemingly clever counting tasks with very small numbers of nerve cells in their brains, according to researchers. (1)

Scientists have developed a 3D-printed robotic hand which can play simple musical phrases on the piano by just moving its wrist. (2)

Everyone feeling so important when counting. But every animal can do it. Even bees. And what makes us special is that we may choose not to count even though we can. Everyone feeling so amazed when seeing a robot playing the piano. And yet we are not important because we play music, but because we may choose not to and listen to the silence instead.

In the future the world will be full of bees and robots.

Buzzing through chattering humans.

Playing the piano between soundless men.

But within the dreaded noisy night, a child will suddenly stay silent.

And under the scorching midday sun, an old man will stop to listen…

Beyond the robots playing perfectly…

Past the bees counting seamlessly…

Looking at the cosmos.

Crying, for it is so full and perfect.

Laughing, for it is so flawlessly dead…

Back to analog… Looking at the forest again…


Analog computers were used to predict tides from the early to mid-20th century, guide weapons on battleships and launch NASA’s first rockets into space. They first used gears and vacuum tubes, and later transistors, which could be configured to solve problems with a range of variables. They perform mathematical functions directly. For instance, to add 5 and 9, analog computers add voltages that correspond to those numbers, and then instantly obtain the correct answer. However, analog computers were cumbersome and prone to “noise” – disturbances in the signals – and were difficult to re-configure to solve different problems, so they fell out of favor.
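As a rough illustration of the trade-off, the Python sketch below models an analog adder: the “computation” is a single physical summation of voltages, instantaneous but contaminated by noise. The function name and the noise level are illustrative assumptions, not a description of any real machine.

```python
import random

def analog_add(a, b, noise_level=0.05):
    """Model an analog adder: two input voltages are summed directly,
    but the physical signal carries a small random disturbance
    (noise_level is a made-up figure for illustration)."""
    noise = random.uniform(-noise_level, noise_level)
    return a + b + noise

# An analog machine "computes" 5 + 9 by summing the corresponding voltages;
# the answer arrives at once, but only as precisely as the hardware allows.
result = analog_add(5.0, 9.0)
print(round(result))  # rounds back to 14 within this noise floor
```

The point of the sketch is that the answer is approximate by construction: precision is bought with better hardware, not more computation steps.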

Digital computers emerged after transistors and integrated circuits were reliably mass produced, and for many tasks they are accurate and sufficiently flexible. The algorithms they run are based on 0s and 1s.

Yet 1s and 0s pose limitations when it comes to solving some NP-hard problems (e.g. the “Traveling Salesman” problem). The difficulty with such optimization problems, researcher Toroczkai noted, is that “while you can always come up with some answer, you cannot determine if it’s optimal. Determining that there isn’t a better solution is just as hard as the problem itself”.

[Note: NP-hard problems are a class studied in computational complexity theory, famous for their difficulty. When the number of variables is large, problems associated with scheduling, protein folding, bioinformatics, medical imaging and many other areas are nearly unsolvable with known methods.]
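The Traveling Salesman problem makes Toroczkai’s point concrete. The brute-force Python sketch below, using a made-up four-city distance matrix, shows that producing *some* tour is trivial, while certifying the best one means enumerating every alternative – a count that grows factorially with the number of cities.

```python
from itertools import permutations

# Symmetric distance matrix for 4 hypothetical cities (made-up numbers).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(tour):
    """Total length of a closed tour starting and ending at city 0."""
    path = (0,) + tour + (0,)
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

# Any single permutation gives *an* answer...
some_tour = (1, 2, 3)
# ...but certifying optimality means checking every remaining permutation,
# and the number of tours grows factorially with the number of cities.
best = min(permutations(range(1, 4)), key=tour_length)
print(tour_length(some_tour), tour_length(best))
```

With 4 cities there are only 6 tours to check; with 20 cities there are already more than 10^17, which is exactly the wall the quotation describes.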

That’s why researchers such as Zoltán Toroczkai, professor in the Department of Physics and concurrent professor in the Department of Computer Science and Engineering at the University of Notre Dame, are interested in reviving analog computing. After testing their new method on a variety of NP-hard problems, the researchers concluded their solver has the potential to lead to better, and possibly faster, solutions than can be computed digitally. (1)

Breaking a problem into pieces can do so many things.

But at the end you will have to look at the problem itself.

And the problem does not have any components.

But only a solution.

Visible only to those who do not see the problem.

You cannot ride the waves.

All you can do is fall into the sea and swim.

You cannot live life.

All you can do is let go and prepare to die.

Look at the big picture.

You can solve anything.

As long as you accept that you cannot…

At the end, the voltage will reach zero.

At the end, the computer will shut down.

You might see this as a sign of failure.

But it would be the first time it really solved anything…

AI. Positive. Negative. One. Zero.


Classifying things is critical for our daily lives. For example, we have to detect spam mail and fake political news, as well as more mundane things such as objects or faces. When using AI, such tasks rely on “classification technology” in machine learning – having the computer learn the boundary separating positive from negative data. For example, “positive” data would be photos including a happy face, and “negative” data photos that include a sad face. Once a classification boundary is learned, the computer can determine whether a given data point is positive or negative. The difficulty with this technology is that it requires both positive and negative data for the learning process, and negative data are not available in many cases (for instance, it is hard to find photos with the label “this photo includes a sad face,” since most people smile in front of a camera).
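To make the idea concrete, here is a minimal sketch of ordinary two-class learning in Python: a boundary can only be fitted because both positive and negative examples are on hand. The one-dimensional data and the simple logistic model are toy assumptions, not the researchers’ setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy example of ordinary two-class learning: we need BOTH kinds of data.
pos = rng.normal(+1.0, 1.0, 200)   # e.g. "happy face" features (made up)
neg = rng.normal(-1.0, 1.0, 200)   # e.g. "sad face" features (made up)
x = np.concatenate([pos, neg])
y = np.concatenate([np.ones(200), -np.ones(200)])

# Fit a 1-D logistic-regression boundary by gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(300):
    g = w * x + b
    grad = -y / (1.0 + np.exp(y * g))   # gradient of log(1 + exp(-y*g)) in g
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# Once the boundary is learned, new points are classified by its sign.
print(np.sign(w * 1.5 + b), np.sign(w * -1.5 + b))
```

Remove the `neg` array and the fit collapses: with positives alone there is nothing to push the boundary from the other side, which is precisely the gap the RIKEN work addresses.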

In real-life applications, when a retailer is trying to predict who will make a purchase, it can easily find data on customers who purchased from them (positive data), but it is basically impossible to obtain data on customers who did not purchase from them (negative data), since they do not have access to their competitors’ data. Another example is a common task for app developers: they need to predict which users will continue using the app (positive) or stop (negative). However, when a user unsubscribes, the developers lose that user’s data, because they must delete it completely in accordance with the privacy policy protecting personal information.

According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”

Ishida proposed, together with researcher Niu Gang from his group and team leader Masashi Sugiyama, that they let computers learn by adding the confidence score, which mathematically corresponds to the probability that the data belongs to the positive class. They succeeded in developing a method that lets computers learn a classification boundary from positive data alone, together with information on its confidence (positive reliability), for machine learning classification problems that divide data into positive and negative. (1)
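A rough sketch of the positive-confidence idea in Python: each positive sample also stands in for a surrogate negative, weighted by (1 − r)/r, where r is its confidence score. This is an illustrative toy reconstruction with made-up 1-D data, not the authors’ code; the weight clipping is an added numerical-stability tweak, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D setup (made-up numbers): positives ~ N(+1, 1), negatives
# ~ N(-1, 1), equal priors, so the true confidence attached to a positive
# point x is r(x) = p(y = +1 | x) = sigmoid(2x).
n = 1000
x = rng.normal(1.0, 1.0, n)          # we only ever SEE positive samples
r = 1.0 / (1.0 + np.exp(-2.0 * x))   # confidence score for each of them

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Positive-confidence risk with logistic loss l(z) = log(1 + exp(-z)):
# each positive sample also acts as a surrogate negative, weighted
# by (1 - r) / r.
w, b, lr = 0.0, 0.0, 0.05
c = np.minimum((1.0 - r) / r, 50.0)  # clip extreme weights for stability
                                     # (an added tweak, not from the paper)
for _ in range(500):
    g = w * x + b
    grad_g = -sigmoid(-g) + c * sigmoid(g)   # d/dg of l(g) + c * l(-g)
    w -= lr * np.mean(grad_g * x)
    b -= lr * np.mean(grad_g)

# The learned boundary should separate the two (never observed) classes.
print(np.sign(w * 2 + b), np.sign(w * -2 + b))
```

Even though no negative example was ever shown to the model, the confidence weights carry enough information to orient the boundary – which is the claim of the quoted research, here only gestured at on toy data.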

Computers trying to learn based on negative feedback.

And when no such feedback exists, trying to compensate for that with the positive kind.

But can there be any feedback which is either positive or negative?

Can anything not be something else?

Can anything not be part of nothing?

In a cosmos full of everything, where can you seek nothingness? Which result can be negative in a cosmos where every negative element creates an equally positive one? Which result can be positive in a cosmos leading to death in every possible scenario in place? How can the computer learn anything in a world where humans have forgotten how they started learning in the first place, at a time when there was nothing to learn?

Look at that child.

Learning every passing minute.

By not learning anything…

Doctors. AI. Dead people.


Could machines using artificial intelligence make doctors obsolete?

Artificial intelligence systems simulate human intelligence by learning, reasoning, and self-correction. This technology has the potential to be more accurate than doctors at making diagnoses and performing surgical interventions, says Jörg Goldhahn, MD, MAS, deputy head of the Institute for Translational Medicine at ETH Zurich, Switzerland.

It has a “near unlimited capacity” for data processing and subsequent learning, and can do this at a speed that humans cannot match.

Increasing amounts of health data, from apps, personal monitoring devices, electronic medical records, and social media platforms are being brought together to give machines as much information as possible about people and their diseases. At the same time machines are “reading” and taking account of the rapidly expanding scientific literature. (1)

We believe computers can replace doctors.

But doctors are not here to keep us alive.

They are here to discuss with the dead.

No matter how much data you analyze, you will always miss the point.

That our life is not our own.

And that we are not here to avoid death.

But to embrace it.

A computer cannot help you live.

Simply because it can never die…