Against AI.

Photo by Spiros Kakos @ Pexels

The new artificial intelligence system ChatGPT has become a sensation.

It can write poetry, it can program whatever you ask it to, it can answer your questions. In summary, it can do whatever a human can do. Taking into account the fact that most humans have more limited knowledge than ChatGPT, it would be no exaggeration to say that the new AI system can outperform humans at almost everything.

(Except for the daunting task of opening a tight marmalade jar… Yet…)

But does this performance of AI mean anything?

Should we be worried, or should we be enthusiastic about it?

Harmonia Philosophica has for a long time commented on the most recent developments in Artificial Intelligence. And the main thing we must be concerned about is not the progress of computers, but the fact that humans themselves have started thinking like computers.

ChatGPT and any other artificial system can answer whatever it can answer. Yes, this is obviously a tautology, but an important one nonetheless. No system can ever deal with something its programmers did not anticipate. Yet, as Penrose postulated some time ago, humans are not only able to deal with unknown issues; sometimes they thrive on them.

Any artificial intelligence program will go only as far as its creators have programmed it to go. And yes, this includes the machine learning aspect of the system, which itself cannot surpass the limits it cannot surpass, given the way it works, the algorithm implemented in its code, the data it is fed, et cetera.

But humans will see the unknown and think about what was never thought before. Humans can envision the infinite in a cosmos that is finite and can grow no more. Humans can trust their intuition to discover what hides in the shadows. Or they can hide everything under the Sun…

We can see the Moon though and cry.

We can stare at the Sun and feel we are alive.

We can clap with one hand.

If only we accept that logic is dead.

And ChatGPT has nothing to do in such a world.

Where we accept ourselves.

As being nothing but dead…

Humans will one day understand though.

That there is nothing artificial about thinking as they can…

Algorithms. Jail. People's lives.

Photo by Spiros Kakos @ Pexels

An algorithm makes decisions about people's lives and determines whether and how they might go back to jail. It is one of many making decisions about people's lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals. Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries. (1)

Robots deciding about our life. Robots that will never experience life.

That is why they can make such decisions anyway.

One can only decide on what he cannot understand.

Whenever you get to know something, you become that something. No one can decide on a life he lives. Life decides about him. You can easily end your life. Only because it is not your own. You can live your life. Only when you decide to leave it.

And as the robot will never understand, you will never understand either.

And that is the only thing to ever understand.

Do you understand? Now go back to your jail.

And tell everyone that they are already free…

AI not explaining itself… Scary AI… Scary humans…

Photo by Spiros Kakos @ Pexels

Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver’s seat, anxious passengers were encouraged to watch a “pacifier” screen that showed a car’s-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.

For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: to explain what the AI was doing. But something about the whole scene highlighted the strangeness of the experience rather than reassuring him. It got Ehsan thinking: what if the self-driving car could really explain itself? (1)

Scary AI…

Not being able to explain itself.

Scary humans.

Not being able to explain themselves.

Scary life.

(Are you afraid of me?)

Finding your way…

Photo by Spiros Kakos @ Pexels

A team at Facebook AI has created a reinforcement learning algorithm that lets a robot find its way in an unfamiliar environment without using a map. (1)

Finding your way without a map.

Is there any other way?

With a map, you will always return home.

But what is home?

Were you not born inside chaos?

Were you not bred by lightning?

Did you not ride the rough waves?

There is no destination.

For there was never a home in the first place.

Look at you.

Did you not bring fire into the cosmos?

There is only one reason to return home.

And that is to burn it down to ashes…

Dark AI… Dark humans…

Photo by Spiros Kakos @ Pexels

A study found that hiring algorithms are too opaque for us to understand whether they are fair or not. (1) In other news, a scientist has tried to help humans design algorithms that never go the wrong way, doing harm rather than good, by building fail-safes into their initial design. (2)

We have started having kids.

And our main concern is to control them.

But there can be no control without love.

Unconditional love.

Leaving everything uncontrolled…

Let the river flow.

Leave the sea as it is.

And one day…

You will touch the water.

Let the waves carry you.

And one day…

You will swim!

Without moving an inch…
