The limits of AI…

Photo by Spiros Kakos from Pexels

A team of mathematicians and AI researchers discovered that, despite the seemingly boundless potential of machine learning, even the cleverest algorithms are bound by the constraints of mathematics.

Research showed that a machine’s ability to actually learn – called learnability – can be constrained by mathematics that is unprovable. In other words, learnability can hinge on an undecidable problem: a question that no algorithm can settle with a true-or-false answer.

The team investigated a machine learning problem they call ‘estimating the maximum’ (EMX), in which a website seeks to display targeted advertising to the visitors that browse the site most frequently – although it isn’t known in advance which visitors will visit the site. This problem turns out to be tied to a famous mathematical question, the continuum hypothesis, which Gödel also investigated.

Like the incompleteness theorems, the continuum hypothesis concerns mathematics that can never be proved true or false within the standard axioms, and given the conditions of the EMX example, at least, machine learning could hypothetically run into the same perpetual stalemate. (1)

We believe in science.

But the only thing science has proved is that it cannot prove anything.

Gödel’s incompleteness theorems showed that whatever we do, we will never be able to prove everything within our limited theories. No matter how many axioms you choose, and how carefully you choose them, you will never be able to describe the cosmos in its totality in the objective way proponents of scientism would like to believe.

Now we have built big computers.

With the hope that they will answer everything.

But they cannot answer anything.

Because there is nothing to answer in the first place.

In a limited world there is no reason to analyze the Monad.

In an immeasurable cosmos you cannot count beyond One.

The maximum is zero.

The minimum is infinite.

A computer struggling to make sense of the problem. A man standing beside the computer trying to make sense of the computer. A bird flying by. Poor man…

AI. Positive. Negative. One. Zero.

Photo by Elizaveta Dushechkina from Pexels

Classifying things is critical for our daily lives. For example, we have to detect spam mail and fake political news, as well as more mundane things such as objects or faces. When using AI, such tasks are based on “classification technology” in machine learning – having the computer learn the boundary separating positive and negative data. For example, “positive” data would be photos that include a happy face, and “negative” data photos that include a sad face. Once a classification boundary is learned, the computer can determine whether a given data point is positive or negative. The difficulty with this technology is that it requires both positive and negative data for the learning process, and negative data are not available in many cases (for instance, it is hard to find photos labeled “this photo includes a sad face,” since most people smile in front of a camera).
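The “classification boundary” above can be sketched in a few lines. Below is a minimal logistic classifier separating hypothetical “happy” and “sad” examples; the one-dimensional smile-score feature and all the data are invented purely for illustration:

```python
import numpy as np

# Toy "photos": a single smile-score feature, +1 = happy, -1 = sad.
# Hypothetical data, for illustration only.
X = np.array([2.0, 1.5, 1.0, -1.0, -1.5, -2.0])
y = np.array([1, 1, 1, -1, -1, -1])

# Learn a linear boundary w*x + b = 0 with logistic loss and gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    margins = y * (w * X + b)
    grad = -y / (1.0 + np.exp(margins))   # d(logistic loss)/d(margin)
    w -= 0.1 * np.mean(grad * X)
    b -= 0.1 * np.mean(grad)

def classify(x):
    """Positive side of the learned boundary -> 'happy', else 'sad'."""
    return "happy" if w * x + b > 0 else "sad"
```

Note that the loop needs both positive and negative labels in `y` – which is exactly the requirement the paragraph above says is often impossible to meet.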

In terms of real-life applications, when a retailer is trying to predict who will make a purchase, it can easily find data on its own customers who made a purchase (positive data), but it is basically impossible to obtain data on people who did not purchase from it (negative data), since it has no access to its competitors’ data. Another example is a common task for app developers: predicting which users will continue using the app (positive) and which will stop (negative). However, when a user unsubscribes, the developers lose that user’s data, because the privacy policy protecting personal information requires them to delete it completely.

According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”

Ishida, together with researcher Gang Niu from his group and team leader Masashi Sugiyama, proposed letting computers learn by adding the confidence score, which mathematically corresponds to the probability that a data point belongs to the positive class. They succeeded in developing a method that lets computers learn a classification boundary from positive data and information on its confidence (positive reliability) alone, for machine-learning classification problems that divide data into positive and negative classes. (1)
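A minimal sketch of the idea, assuming the reweighted-risk reading of positive-confidence learning: each positive sample with confidence r also contributes a (1 − r)/r-weighted loss as a stand-in for the missing negatives. The toy data, the confidence model, and every parameter below are illustrative assumptions, not the authors’ code:

```python
import numpy as np

# Positive-confidence ("Pconf") learning, sketched: only positive samples
# are observed, each with an assumed confidence r = p(y=+1 | x).
# Per-sample risk:  l(f(x)) + (1 - r)/r * l(-f(x)),  l = logistic loss,
# so negatives are recovered implicitly through the reweighting.

rng = np.random.default_rng(0)
X = rng.normal(1.0, 1.0, size=200)       # positive samples only (toy data)
r = 1.0 / (1.0 + np.exp(-2.0 * X))       # assumed confidence p(y=+1|x)
r = np.clip(r, 0.05, 0.999)              # keep the weight (1-r)/r bounded

w, b = 0.0, 0.0
for _ in range(3000):
    f = w * X + b
    g_pos = -1.0 / (1.0 + np.exp(f))                 # d l(f) / d f
    g_neg = ((1.0 - r) / r) / (1.0 + np.exp(-f))     # d [c * l(-f)] / d f
    g = g_pos + g_neg
    w -= 0.05 * np.mean(g * X)
    b -= 0.05 * np.mean(g)

def predict(x):
    """Classify using the boundary learned from positive data alone."""
    return 1 if w * x + b > 0 else -1
```

Despite never seeing a negative sample, the learned boundary lands near the point where the assumed confidence crosses one half, which is the behavior the method is designed to achieve.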

Computers trying to learn based on negative feedback.

And when no such feedback exists, trying to compensate based on the positive one.

But can there be any feedback which is either positive or negative?

Can anything not be something else?

Can anything not be part of nothing?

In a cosmos full of everything, where can you seek nothingness? Which result can be negative in a cosmos where every negative element creates an equally positive one? Which result can be positive in a cosmos leading to death in every possible scenario in place? How can the computer learn anything in a world where humans have forgotten how they started learning in the first place, at a time when there was nothing to learn?

Look at that child.

Learning every passing minute.

By not learning anything…

Doctors. AI. Dead people.

Photo by Josh Sorenson from Pexels

Could machines using artificial intelligence make doctors obsolete?

Artificial intelligence systems simulate human intelligence by learning, reasoning, and self-correction. This technology has the potential to be more accurate than doctors at making diagnoses and performing surgical interventions, says Jörg Goldhahn, MD, MAS, deputy head of the Institute for Translational Medicine at ETH Zurich, Switzerland.

It has a “near unlimited capacity” for data processing and subsequent learning, and can do this at a speed that humans cannot match.

Increasing amounts of health data, from apps, personal monitoring devices, electronic medical records, and social media platforms are being brought together to give machines as much information as possible about people and their diseases. At the same time machines are “reading” and taking account of the rapidly expanding scientific literature. (1)

We believe computers can replace doctors.

But doctors are not here to keep us alive.

They are here to discuss with the dead.

No matter how much data you analyze, you will always miss the point.

That our life is not our own.

And that we are not here to avoid death.

But to embrace it.

A computer cannot help you live.

Simply because it can never die…

Nanomaterials. AI. Prediction.

Photo by Matteo Badini from Pexels

Breakthroughs in the field of nanophotonics – how light behaves on the nanometer scale – have paved the way for the invention of “metamaterials,” human-made materials that have enormous applications, from remote nanoscale sensing to energy harvesting and medical diagnostics. But their impact on daily life has been hindered by a complicated manufacturing process with large margins of error.

An interdisciplinary Tel Aviv University study published in “Light: Science and Applications” demonstrated a way of streamlining the process of designing and characterizing basic nanophotonic, metamaterial elements.

“Our new approach depends almost entirely on Deep Learning, a computer network inspired by the layered and hierarchical architecture of the human brain,” Prof. Wolf explains. “It’s one of the most advanced forms of machine learning, responsible for major advances in technology, including speech recognition, translation and image processing. We thought it would be the right approach for designing nanophotonic, metamaterial elements.”

The scientists fed a Deep Learning network with 15,000 artificial experiments to teach the network the complex relationship between the shapes of the nanoelements and their electromagnetic responses. “We demonstrated that a ‘trained’ Deep Learning network can predict, in a split second, the geometry of a fabricated nanostructure,” Dr. Suchowski says. (1)
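What “feeding a network artificial experiments” amounts to can be sketched, very loosely, as fitting a small network to simulated input–output pairs. The tiny model and the synthetic geometry-to-response function below are placeholders, far simpler than the study’s actual architecture:

```python
import numpy as np

# Minimal sketch (not the TAU group's model): a tiny neural network learns
# a forward map from a 1-D "geometry" parameter to a synthetic
# "electromagnetic response". All data here are artificial.

rng = np.random.default_rng(1)
geom = rng.uniform(-1, 1, size=(256, 1))   # nanostructure geometry (toy)
resp = np.sin(3 * geom)                    # assumed response function

# One hidden tanh layer, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(geom)
loss0 = np.mean((pred0 - resp) ** 2)       # error before training

lr = 0.05
for _ in range(2000):
    h, pred = forward(geom)
    err = 2 * (pred - resp) / len(geom)    # d(MSE)/d(pred)
    dW2 = h.T @ err;  db2 = err.sum(0)     # gradients, computed first
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = geom.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2         # then applied
    W1 -= lr * dW1; b1 -= lr * db1

_, pred1 = forward(geom)
loss1 = np.mean((pred1 - resp) ** 2)       # error after training
```

Once trained, a single `forward` call is one matrix multiplication pass – which is why the prediction step takes “a split second” compared with running a new physical simulation.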

Imitating the human brain.

To predict what the human brain cannot predict.

Could you have a better proof that our brain is not algorithmic?

We are humans not because we can tell the future.

But because we have experienced it already.

We are humans not because we can find the answers.

But because we can ask the questions…

We are gods not because we know how metamaterials will form.

But because we don’t even care…

Thinking. Remembering. Being.

Photo by Tobias Bjørkli from Pexels

IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies. They reported on their findings in the Journal of Applied Physics, from AIP Publishing.

Today’s computers are built on the von Neumann architecture, developed in the 1940s. Von Neumann computing systems feature a central processor that executes logic and arithmetic, a memory unit, storage, and input and output devices. Unlike the stovepiped components in conventional computers, the authors propose that brain-inspired computers could have coexisting processing and memory units.

Abu Sebastian, an author on the paper, explained that executing certain computational tasks in the computer’s memory would increase the system’s efficiency and save energy. (1)

Thinking. Remembering.

Remembering. Thinking.

Within the dark forest, you think of the abyss.

Within the dark abyss, you remember the forest.

Remember because you think.

Thinking because you remember.

Within the dark forest, you simply wander around.

Within the dark abyss, you just die and open your eyes.

Existing because you think of nothing…

Being only because you forget everything…