Natural. Unnatural. How natural…


To find out which sights specific neurons in monkeys ‘like’ best, researchers designed an algorithm, called XDREAM, that generated images that made neurons fire more than any natural images the researchers tested. As the images evolved, they started to look like distorted versions of real-world stimuli. (1)
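The general idea can be sketched as a simple evolutionary loop. The sketch below is a toy illustration only: the real XDREAM system worked with a generative network and recordings from live neurons, whereas the neuron_response function here is a made-up stand-in.

```python
# Toy sketch of a closed-loop "evolve images to excite a neuron" idea.
# Everything here is a hypothetical stand-in for the real experiment.
import numpy as np

rng = np.random.default_rng(0)

def neuron_response(image):
    # Made-up tuning: this "neuron" fires most for one particular pattern.
    preferred = np.sin(np.linspace(0, 3 * np.pi, image.size))
    return float(image @ preferred)

population = [rng.normal(size=64) for _ in range(20)]   # random "images"
for generation in range(100):
    ranked = sorted(population, key=neuron_response, reverse=True)
    parents = ranked[:5]                                # keep top responders
    population = [p + rng.normal(scale=0.1, size=64)    # mutate them
                  for p in parents for _ in range(4)]

best = max(population, key=neuron_response)
print("final response:", neuron_response(best))
```

As the loop runs, the surviving "images" drift toward whatever the response function rewards – which is why the evolved pictures in the study looked like distorted versions of real-world stimuli.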

Round and round we go. Trying to understand where we are by getting away from where we are. Can you find anything not made of wood inside a forest?

See the unnatural. It will catch your attention.

Not because it is unnatural.

But because of how natural it looks!

That is the greatest secret nature taught us. A secret we once knew. A secret we chose to forget. Look at the great mysteries of life. Behold the great occurrences of randomness inside a cosmos governed by change…

There is nothing natural… nature whispers in the night.

But we do not trust the night anymore. We worship the sun.

We opened our eyes to see. And we saw a different cosmos.

Stable. Full of patterns. Laws. Order.

We like that cosmos now. Too afraid to let it go.

But one day, we will sleep tired.

Floating on the silvery moonlight…

One day we will dream again…

Knowing that light only creates shadows…

One day we will stand in the midst of nature.

One day, nature will look so unnatural…

Quantum computers: Meet my new computer. Different than the old computer…


In theory, quantum computers can do anything that a classical computer can. In practice, however, the quantumness in a quantum computer makes it nearly impossible to efficiently run some of the most important classical algorithms.

The traditional grade-school method for multiplication requires n^2 steps, where n is the number of digits of the numbers you’re multiplying. For millennia, mathematicians believed there wasn’t a more efficient approach.

But in 1960 mathematician Anatoly Karatsuba found a faster way. His method involved splitting long numbers into shorter numbers. To multiply two eight-digit numbers, for example, you would first split each into two four-digit numbers, then split each of these into two-digit numbers. You then do some operations on all the two-digit numbers and reconstitute the results into a final product. For multiplication involving large numbers, the Karatsuba method takes far fewer steps than the grade-school method.
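As a concrete sketch of the splitting idea, here is a minimal Karatsuba implementation in Python (recursing down to single digits rather than stopping at two-digit pieces as in the description above). Each split trades four sub-multiplications for three, which is why the work grows roughly as n^1.585 instead of n^2.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply x and y by recursive splitting (a minimal sketch)."""
    if x < 10 or y < 10:                  # single-digit base case
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)    # split each number in half
    high_y, low_y = divmod(y, 10 ** m)
    a = karatsuba(high_x, high_y)                           # high parts
    b = karatsuba(low_x, low_y)                             # low parts
    c = karatsuba(high_x + low_x, high_y + low_y) - a - b   # cross terms
    return a * 10 ** (2 * m) + c * 10 ** m + b

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```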

When a classical computer runs the Karatsuba method, it deletes information as it goes. For example, after it reconstitutes the two-digit numbers into four-digit numbers, it forgets the two-digit numbers; all it cares about is the four-digit numbers themselves. But quantum computers can’t shed (forget) information, because quantum operations must be reversible.

Quantum computers perform calculations by manipulating “qubits” that are entangled with one another. This entanglement is what gives quantum computers their massive power, but it is the same property that long made it impossible for them to run some algorithms that classical computers execute with ease. Only a few years ago did Craig Gidney, a software engineer at Google AI Quantum in Santa Barbara, California, describe a quantum version of the Karatsuba algorithm. (1)

Think. Forget. Move on. Think again…

Know everything.

And you will need to forget.

Forget so that you can learn.

So that you know it all.

The path to light passes through alleys of darkness.

And trusting the light can only lead to darkness, when the Sun goes down.

You need the Moon.

For it is only there that you can see your eyes reflected…

Upon the silvery calm lake…

Sun breathing fire.

Light reflected on the Moon…

Cold light reflected on water…

Light passing through your eyes.

In the dead of the night,

You realize that you knew the Sun.

Stand still enough…

And you will listen to the cosmos being born…

The limits of AI…


A team of mathematicians and AI researchers discovered that, despite the seemingly boundless potential of machine learning, even the cleverest algorithms are bound by the constraints of mathematics.

The research showed that a machine’s ability to actually learn – called learnability – can be constrained by mathematics that can neither be proved nor disproved. In other words, it amounts to giving an AI an undecidable problem: something that is impossible for any algorithm to solve with a true-or-false response.
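The textbook example of such an undecidable problem is the halting problem. This is not the EMX problem discussed below, just a minimal sketch of what “impossible to solve with a true-or-false response” means:

```python
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Provably impossible to implement correctly for all inputs."""
    raise NotImplementedError

def contrary(program):
    if halts(program, program):   # if the oracle says "it halts"...
        while True:               # ...loop forever instead,
            pass
    return "halted"               # ...otherwise halt immediately.

# contrary(contrary) halts exactly when halts(contrary, contrary) says it
# does not -- so no correct implementation of `halts` can exist.
```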

The team investigated a machine learning problem they call ‘estimating the maximum’ (EMX), in which a website seeks to display targeted advertising to the visitors that browse the site most frequently – although it isn’t known in advance which visitors will visit the site. This problem turns out to be linked to a famous mathematical problem called the continuum hypothesis, another field of investigation for Gödel.

Like the incompleteness theorems, the continuum hypothesis is concerned with mathematics that can never be proved true or false, and given the conditions of the EMX example, at least, machine learning could hypothetically run into the same perpetual stalemate. (1)

We believe in science.

But the only thing science has proved is that it cannot prove anything.

Gödel’s incompleteness theorem showed that whatever we do, we will never be able to prove everything in the context of our limited theories. No matter how many axioms you choose and how carefully you choose them, you will never be able to describe the cosmos in its totality in an objective way as proponents of scientism would like to believe.

Now we have built big computers.

With the hope that they will answer everything.

But they cannot answer anything.

Because there is nothing to answer in the first place.

In a limited world there is no reason to analyze the Monad.

In an immeasurable cosmos you cannot count beyond One.

The maximum is zero.

The minimum is infinite.

A computer struggling to make sense of the problem. A man standing beside the computer trying to make sense of the computer. A bird flying by. Poor man…

Back to analog… Looking at the forest again…


Analog computers were used to predict tides from the early to mid-20th century, guide weapons on battleships and launch NASA’s first rockets into space. They first used gears and vacuum tubes, and later transistors, which could be configured to solve problems with a range of variables. They perform mathematical functions directly. For instance, to add 5 and 9, analog computers add voltages that correspond to those numbers, and then instantly obtain the correct answer. However, analog computers were cumbersome and prone to “noise” – disturbances in the signals – and were difficult to reconfigure to solve different problems, so they fell out of favor.
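A toy model of that directness might look like the following (an assumption-laden sketch, not a simulation of any real machine): the numbers become voltages, one physical step adds them, and noise corrupts the answer – the very weakness that pushed analog machines aside.

```python
import random

def analog_add(a: float, b: float, noise: float = 0.05) -> float:
    voltage_a, voltage_b = a, b               # encode numbers as voltages
    summed = voltage_a + voltage_b            # one physical step, no algorithm
    return summed + random.gauss(0.0, noise)  # "noise" on the output signal

print(analog_add(5, 9))   # roughly 14, but rarely exactly 14
```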

Digital computers emerged after transistors and integrated circuits were reliably mass produced, and for many tasks they are accurate and sufficiently flexible. Algorithms for those computers are based on the use of 0s and 1s.

Yet 1s and 0s pose limitations for solving some NP-hard problems (e.g. the “Traveling Salesman” problem). The difficulty with such optimization problems, researcher Toroczkai noted, is that “while you can always come up with some answer, you cannot determine if it’s optimal. Determining that there isn’t a better solution is just as hard as the problem itself”.

[Note: NP-hardness is a concept from computational complexity theory, describing problems that are famous for their difficulty. When the number of variables is large, problems associated with scheduling, protein folding, bioinformatics, medical imaging and many other areas are nearly unsolvable with known methods.]
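A brute-force Traveling Salesman solver makes the note concrete: to certify that a tour is optimal you must compare it against every alternative, and the number of tours grows factorially with the number of cities. The small distance matrix below is made up for illustration.

```python
from itertools import permutations

def tour_length(tour, dist):
    # Sum the legs, wrapping around from the last city back to the first.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def best_tour(dist):
    cities = range(1, len(dist))
    best = None
    for perm in permutations(cities):         # (n - 1)! candidate tours
        tour = [0] + list(perm)
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
best = best_tour(dist)
print(best, tour_length(best, dist))
```

At 4 cities this is only 6 tours; at 20 cities it is already about 10^17, which is why “determining that there isn’t a better solution” is intractable.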

That’s why researchers such as Zoltán Toroczkai, professor in the Department of Physics and concurrent professor in the Department of Computer Science and Engineering at the University of Notre Dame, are interested in reviving analog computing. After testing their new method on a variety of NP-hard problems, the researchers concluded their solver has the potential to lead to better, and possibly faster, solutions than can be computed digitally. (1)

Breaking a problem into pieces can do so many things.

But in the end you will have to look at the problem itself.

And the problem does not have any components.

But only a solution.

Visible only to those who do not see the problem.

You cannot ride the waves.

All you can do is fall into the sea and swim.

You cannot live life.

All you can do is let go and prepare to die.

Look at the big picture.

You can solve anything.

As long as you accept that you cannot…

In the end, the voltage will reach zero.

In the end, the computer will shut down.

You might see this as a sign of failure.

But it would be the first time it really solved anything…

AI. Positive. Negative. One. Zero.


Classifying things is critical for our daily lives. For example, we have to detect spam mail and fake political news, as well as more mundane things such as objects or faces. When using AI, such tasks rely on “classification technology” in machine learning – having the computer learn the boundary separating positive and negative data. For example, “positive” data would be photos including a happy face, and “negative” data photos that include a sad face. Once a classification boundary is learned, the computer can determine whether a given data point is positive or negative. The difficulty with this technology is that it requires both positive and negative data for the learning process, and negative data are not available in many cases (for instance, it is hard to find photos with the label “this photo includes a sad face”, since most people smile in front of a camera).
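To see why both kinds of data matter, consider a minimal two-class learner (numpy only; the one-dimensional “features” below are made-up stand-ins for real inputs). The boundary settles between the classes precisely because each side pushes against it:

```python
import numpy as np

rng = np.random.default_rng(1)
positive = rng.normal(loc=+2.0, size=(100, 1))   # e.g. "happy face" features
negative = rng.normal(loc=-2.0, size=(100, 1))   # e.g. "sad face" features
x = np.concatenate([positive, negative])[:, 0]
y = np.concatenate([np.ones(100), -np.ones(100)])

w, b = 0.0, 0.0
for _ in range(500):                       # plain gradient descent
    margin = y * (w * x + b)
    grad = -y / (1 + np.exp(margin))       # logistic-loss gradient wrt score
    w -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)

print("boundary near x =", -b / w)   # lands between the two clusters
```

Delete the negative samples and the same loss is minimized by pushing w and b off to infinity: with only one side represented, there is no boundary to learn.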

In terms of real-life applications, when a retailer is trying to predict who will make a purchase, it can easily find data on customers who purchased from it (positive data), but it is basically impossible to obtain data on customers who did not (negative data), since it has no access to its competitors’ data. Another example is a common task for app developers: they need to predict which users will continue using the app (positive) or stop (negative). However, when a user unsubscribes, the developers lose that user’s data, because they must delete it completely in accordance with the privacy policy protecting personal information.

According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”

Ishida, together with researcher Niu Gang from his group and team leader Masashi Sugiyama, proposed letting computers learn from the confidence score, which mathematically corresponds to the probability that a data point belongs to the positive class. They succeeded in developing a method that lets computers learn a classification boundary only from positive data and information on its confidence (positive reliability), for machine learning classification problems that divide data into positive and negative. (1)
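In the spirit of that description, here is a hedged sketch of positive-confidence learning (the reweighting by (1 − r)/r below follows the general idea of treating each positive point as a weighted stand-in for the missing negatives; the authors’ exact estimator may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=+2.0, size=200)            # positive data only
# Assumed confidence scores r ~ P(y = +1 | x); in practice they might come
# from buying intention, app activity rates, etc.
r = np.clip(1.0 / (1.0 + np.exp(-2.0 * x)), 0.05, 0.95)

w, b = 0.0, 0.0
for _ in range(2000):
    z = w * x + b
    grad_pos = -1.0 / (1.0 + np.exp(z))            # loss gradient as positive
    grad_neg = ((1 - r) / r) / (1.0 + np.exp(-z))  # reweighted, as negative
    g = grad_pos + grad_neg
    w -= 0.05 * np.mean(g * x)
    b -= 0.05 * np.mean(g)

print("decision boundary near x =", -b / w)
```

The intuition: a positive point with confidence r = 0.6 is evidence for the negative class 40% of the time, and the weight (1 − r)/r accounts for exactly that missing negative evidence.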

Computers trying to learn based on negative feedback.

And when no such feedback exists, trying to compensate for that based on the positive kind.

But can any feedback ever be purely positive or purely negative?

Can anything not be something else?

Can anything not be part of nothing?

In a cosmos full of everything, where can you seek nothingness? Which result can be negative in a cosmos where every negative element creates an equally positive one? Which result can be positive in a cosmos leading to death in every possible scenario in place? How can the computer learn anything in a world where humans have forgotten how they started learning in the first place, at a time when there was nothing to learn?

Look at that child.

Learning every passing minute.

By not learning anything…