Open-ending algorithms… The end as the beginning…

Photo by Enric Cruz López from Pexels

Evolution allows life to explore almost limitless diversity and complexity. Scientists hope to recreate such open-endedness in the laboratory or in computer simulations, but even sophisticated computational techniques like machine learning and artificial intelligence can’t provide the open-ended tinkering associated with evolution. A recent study compared common barriers to open-endedness in computation and biology, to see how the two realms might inform each other and ultimately enable machine learning to design and create open-ended evolvable systems. (1)

Looking for an end.

By accepting that there is none.

How could there be one?

The end is defined by the beginning.

And this definition is also the end.

One can never pass through the walls he raised.

Achilles will never reach the tortoise.

Mathematicians will never prove everything.

Humans will never find the meaning of life.

Unless they stop looking for meaning.

Unless mathematicians stop trying to prove things.

Unless Achilles stops trying to pass the tortoise and just runs.

No, there is no end. There are just beginnings…

Be careful with that first step…

No, it is not just a first step.

It is also your last…

Attributing art. Understanding art. Making art?!

Photo by BASIL JOSE from Pexels

AI is now used to analyze and attribute art. (1)

Computers analyzing art.

Categorizing it. Attributing it.

Computers understanding art.

Computers destroying art.

Only because they understood it.

While it is not meant to be understood.

But can’t you see?

This means that they didn’t understand it after all!

Weird cosmos.

Full of people. Full of computers.

Humans creating art.

Computers understanding it!

How nonsensical.

How dull.

How awfully… artistic!

The limits of AI…

Photo by Spiros Kakos from Pexels

A team of mathematicians and AI researchers discovered that, despite the seemingly boundless potential of machine learning, even the cleverest algorithms are bound by the constraints of mathematics.

Research showed that a machine’s ability to actually learn – called learnability – can be constrained by mathematics that is unprovable. In other words, it’s basically like giving an AI an undecidable problem – something that’s impossible for any algorithm to settle with a true-or-false answer.

The team investigated a machine learning problem they call ‘estimating the maximum’ (EMX), in which a website seeks to display targeted advertising to the visitors that browse the site most frequently – although it isn’t known in advance which visitors will visit the site. The problem turns out to be tied to the continuum hypothesis, a famous statement about infinite sets that Gödel also investigated.

Like the incompleteness theorems, the continuum hypothesis concerns mathematics that can never be proved true or untrue, and under the conditions of the EMX example, at least, machine learning could hypothetically run into the same perpetual stalemate. (1)
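For readers who want the shape of the problem, here is a minimal sketch of the EMX task in assumed notation (the paper’s exact formulation carries technical conditions omitted here): the learner sees samples from an unknown distribution and must pick, from a fixed family of sets, one that covers almost as much probability mass as the best member of the family.

```latex
% Sketch of the "estimating the maximum" (EMX) task, in assumed notation.
% Domain X, family \mathcal{F} of subsets of X, unknown distribution P.
% The learner receives x_1, ..., x_m drawn i.i.d. from P and must output F \in \mathcal{F} with
\[
  P(F) \;\ge\; \sup_{F' \in \mathcal{F}} P(F') - \varepsilon
  \qquad \text{with probability at least } 1 - \delta \text{ over the sample.}
\]
% For X = [0,1] and \mathcal{F} the finite subsets of X, the study reports that
% whether such a learner exists cannot be decided from the standard axioms of
% set theory -- which is where the continuum hypothesis enters.
```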

We believe in science.

But the only thing science has proved is that it cannot prove anything.

Gödel’s incompleteness theorem showed that whatever we do, we will never be able to prove everything in the context of our limited theories. No matter how many axioms you choose and how carefully you choose them, you will never be able to describe the cosmos in its totality in an objective way as proponents of scientism would like to believe.

Now we have built big computers.

With the hope that they will answer everything.

But they cannot answer anything.

Because there is nothing to answer in the first place.

In a limited world there is no reason to analyze the Monad.

In an immeasurable cosmos you cannot count beyond One.

The maximum is zero.

The minimum is infinite.

A computer struggling to make sense of the problem. A man standing beside the computer trying to make sense of the computer. A bird flying by. Poor man…

Back to analog… Looking at the forest again…

Photo by Tatiana from Pexels

Analog computers were used to predict tides from the early to mid-20th century, guide weapons on battleships and launch NASA’s first rockets into space. They first used gears and vacuum tubes, and later transistors, which could be configured to solve problems with a range of variables. They perform mathematical functions directly: to add 5 and 9, for instance, an analog computer adds the voltages that correspond to those numbers and instantly obtains the correct answer. However, analog computers were cumbersome and prone to “noise” – disturbances in the signals – and were difficult to re-configure to solve different problems, so they fell out of favor.
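A toy illustration of the contrast (purely a sketch, not a model of any real machine; the noise level is an arbitrary assumption): the “analog” adder sums continuous voltages and picks up disturbances, while the digital adder works on exact 0/1 encodings.

```python
# Toy contrast between analog and digital addition. Illustrative only:
# real analog computers sum currents/voltages with dedicated circuits.
import random

random.seed(42)

def analog_add(a, b, noise=0.01):
    # Each "voltage" carries a small random disturbance before being summed.
    return (a + random.gauss(0, noise * a)) + (b + random.gauss(0, noise * b))

def digital_add(a, b):
    # Binary-encoded integer arithmetic: discrete 0/1 steps, but exact.
    return int(format(a, "b"), 2) + int(format(b, "b"), 2)

samples = [analog_add(5, 9) for _ in range(5)]
print("analog  5 + 9 ->", [round(s, 3) for s in samples])
print("digital 5 + 9 ->", digital_add(5, 9))
```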

Digital computers emerged after transistors and integrated circuits could be reliably mass-produced, and for many tasks they are accurate and sufficiently flexible. Their algorithms are based on the use of 0s and 1s.

Yet 1s and 0s pose limitations when it comes to solving some NP-hard problems, such as the “Traveling Salesman” problem. The difficulty with such optimization problems, researcher Toroczkai noted, is that “while you can always come up with some answer, you cannot determine if it’s optimal. Determining that there isn’t a better solution is just as hard as the problem itself.”

[Note: NP-hardness is a concept from computational complexity theory describing problems that are famous for their difficulty. When the number of variables is large, problems associated with scheduling, protein folding, bioinformatics, medical imaging and many other areas become nearly unsolvable with known methods.]

That’s why researchers such as Zoltán Toroczkai, professor in the Department of Physics and concurrent professor in the Department of Computer Science and Engineering at the University of Notre Dame, are interested in reviving analog computing. After testing their new method on a variety of NP-hard problems, the researchers concluded their solver has the potential to lead to better, and possibly faster, solutions than can be computed digitally. (1)
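To see concretely why “determining that there isn’t a better solution is just as hard as the problem itself,” here is a minimal brute-force Traveling Salesman sketch; the random city coordinates and the problem size are illustrative assumptions. Certifying optimality means comparing against every other ordering, and the number of orderings grows factorially with the number of cities.

```python
# Brute-force "Traveling Salesman" sketch: the only way this code can certify
# that its answer is optimal is by checking every possible ordering of cities.
from itertools import permutations
from math import dist, factorial
import random

random.seed(0)
n = 9                 # fixing the start city still leaves 8! = 40320 orderings
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    # Total length of the closed tour visiting the cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % n]]) for i in range(n))

best_order, best_len = None, float("inf")
for perm in permutations(range(1, n)):          # exhaustive search
    order = (0,) + perm
    length = tour_length(order)
    if length < best_len:
        best_order, best_len = order, length

print(f"checked {factorial(n - 1)} orderings, best tour length {best_len:.3f}")
print("optimal order:", best_order)
```

Adding a tenth city multiplies the number of orderings by nine; a few dozen cities already put exhaustive checking far beyond any digital machine, which is the kind of problem the analog solver above targets.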

Breaking a problem into pieces can do so many things.

But at the end you will have to look at the problem itself.

And the problem does not have any components.

But only a solution.

Visible only to those who do not see the problem.

You cannot ride the waves.

All you can do is fall into the sea and swim.

You cannot live life.

All you can do is let go and prepare to die.

Look at the big picture.

You can solve anything.

As long as you accept that you cannot…

At the end, the voltage will reach zero.

At the end, the computer will shut down.

You might see this as a sign of failure.

But it would be the first time it really solved anything…

AI. Positive. Negative. One. Zero.

Photo by Elizaveta Dushechkina from Pexels

Classifying things is critical for our daily lives. For example, we have to detect spam mail and fake political news, as well as more mundane things such as objects or faces. When using AI, such tasks rely on “classification technology” in machine learning – having the computer learn the boundary separating positive and negative data. For example, “positive” data would be photos that include a happy face, and “negative” data photos that include a sad face. Once a classification boundary is learned, the computer can determine whether a given data point is positive or negative. The difficulty with this technology is that it requires both positive and negative data for the learning process, and negative data are not available in many cases (for instance, it is hard to find photos labeled “this photo includes a sad face,” since most people smile in front of a camera).

In terms of real-life applications, when a retailer is trying to predict who will make a purchase, it can easily find data on customers who purchased from it (positive data), but it is basically impossible to obtain data on customers who did not purchase from it (negative data), since it has no access to its competitors’ data. Another example is a common task for app developers: they need to predict which users will continue using the app (positive) or stop (negative). However, when a user unsubscribes, the developers lose that user’s data, because they must delete it completely in accordance with the privacy policy protecting personal information.

According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”

Ishida, together with researcher Niu Gang from his group and team leader Masashi Sugiyama, proposed letting computers learn from positive data alone by adding a confidence score, which mathematically corresponds to the probability that a data point belongs to the positive class. They succeeded in developing a method that lets computers learn a classification boundary for binary classification problems using only positive data and information about its confidence (positive reliability). (1)
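A minimal sketch in the spirit of the method described, not the authors’ implementation: the confidence score r(x) reweights the loss so that positive samples also stand in for the missing negative data. The synthetic data, the logistic loss, and the plain gradient loop are my own assumptions.

```python
# Sketch of positive-confidence learning: train a linear classifier from
# positive samples only, using each sample's confidence r(x) = P(y=+1 | x)
# in place of the missing negative data. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "positive" samples (e.g. purchasing customers) in 2D.
X_pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(500, 2))

# Assumed confidence scores; in practice they would come from domain
# information such as buying intention or an app user's active rate.
true_w = np.array([1.0, 1.0])
r = 1.0 / (1.0 + np.exp(-(X_pos @ true_w - 2.0)))   # values in (0, 1)
r = np.clip(r, 0.05, 0.95)                          # avoid division blow-ups

def logistic_loss(z):
    return np.log1p(np.exp(-z))

def risk_and_grad(w, b):
    z = X_pos @ w + b
    # Loss on the positive side plus a (1 - r) / r weighted loss standing in
    # for the unseen negative side.
    weight = (1.0 - r) / r
    loss = logistic_loss(z) + weight * logistic_loss(-z)
    sig_neg = 1.0 / (1.0 + np.exp(z))    # derivative of log(1 + e^{-z}) is -sig_neg
    sig_pos = 1.0 / (1.0 + np.exp(-z))   # derivative of log(1 + e^{z}) is sig_pos
    dz = -sig_neg + weight * sig_pos
    return loss.mean(), X_pos.T @ dz / len(dz), dz.mean()

w, b = np.zeros(2), 0.0
for _ in range(2000):                    # simple full-batch gradient descent
    loss, gw, gb = risk_and_grad(w, b)
    w -= 0.1 * gw
    b -= 0.1 * gb

print("learned boundary:", w, b, "final risk estimate:", round(loss, 4))
```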

Computers trying to learn based on negative feedback.

And when no such feedback exists, trying to compensate based on the positive one.

But can there be any feedback which is either positive or negative?

Can anything not be something else?

Can anything not be part of nothing?

In a cosmos full of everything, where can you seek nothingness? Which result can be negative in a cosmos where every negative element creates an equally positive one? Which result can be positive in a cosmos leading to death in every possible scenario? How can the computer learn anything in a world where humans have forgotten how they started learning in the first place, at a time when there was nothing to learn?

Look at that child.

Learning every passing minute.

By not learning anything…