Human enhancements. Society.

Photo by Keenan Constance from Pexels

Human enhancement technologies are opening up tremendous new possibilities. But they’re also raising important questions about what it means to be human. These technologies are currently geared towards upgrading or restoring physical and psychological abilities for medical purposes. An application is surfacing, however, that is designed with another goal in mind: enhancing performance. An international team of researchers has been examining the ethical issues arising from these developments. (1)

Society is based on humans getting together.

But humans want to improve.

And, thus, they believe society will improve too.

Society is based on humans and humans are based on society. But this was not always the case. Society is a very recent construct. We used to be alone. And only at some point did we start realizing the potential in cooperating with others. It seems like a noble cause. But it is not. Humans have always looked towards their personal interest. They wish they could cooperate with others to serve that interest, through society. They wish they could enhance themselves to serve that interest.

But there is another way of seeing things.

A Man tried to teach that way once.

But we killed Him. Because it is not easy to kill one’s self.

That there is no us. That there are no others. There can be a society based on these premises. But not a society with other people.

But a society with the only One who matters…

Forget about society.

Let go of you.

And you will see.

That we are already all together…

Language. Thought. Time. Dasein.

Photo by Maria Orlova from Pexels

The relationship between language and thought is controversial. One hypothesis is that language fosters habits of processing information that are retained even in non-linguistic domains.

Languages, for instance, vary in their branching direction. In typical right-branching (RB) languages, like Italian, the head of the sentence usually comes first, followed by a sequence of modifiers that provide additional information about the head (e.g. “the man who was sitting at the bus stop”). In contrast, in left-branching (LB) languages, like Japanese, modifiers generally precede heads (e.g. “who was sitting at the bus stop, the man”). In RB languages, speakers could process information incrementally, given that heads are presented first and modifiers rarely affect previous parsing decisions. In contrast, LB structures can be highly ambiguous until the end, because initial modifiers often acquire a clear meaning only after the head has been parsed. Therefore, LB speakers may need to retain initial modifiers in working memory until the head is encountered to comprehend the sentence.
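To make the contrast concrete, here is a small illustrative sketch (my own toy example, not code or data from the study) of how a head-first parse can integrate each modifier as it arrives, while a head-last parse has to buffer the modifiers until the head appears:

```python
# Toy illustration (not from the study): how branching direction changes
# what a comprehender must hold in working memory while parsing.

def parse_right_branching(words):
    """Head comes first: interpret it immediately, attach modifiers as they arrive."""
    head, *modifiers = words
    interpretation = head
    for modifier in modifiers:
        # Each modifier can be integrated as soon as it is heard.
        interpretation = f"{interpretation} [{modifier}]"
    return interpretation

def parse_left_branching(words):
    """Head comes last: buffer every modifier until the head finally arrives."""
    *modifiers, head = words
    memory_buffer = []                    # stand-in for working memory
    for modifier in modifiers:
        memory_buffer.append(modifier)    # nothing can be resolved yet
    interpretation = head
    for modifier in reversed(memory_buffer):
        interpretation = f"{interpretation} [{modifier}]"
    return interpretation

# English-like (RB): "the man | who was sitting | at the bus stop"
print(parse_right_branching(["the man", "who was sitting", "at the bus stop"]))

# Japanese-like (LB): "at the bus stop | who was sitting | the man"
print(parse_left_branching(["at the bus stop", "who was sitting", "the man"]))
```

Both functions arrive at the same interpretation; the buffer in the second one is the stand-in for the extra load on working memory that the researchers describe for left-branching languages.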

Studies show that the link between language and thought might not be confined to conceptual representations and semantic biases, but may extend to syntax and its role in how we process sequential information, and to how working memory operates in speakers of languages with mixed branching or free word order. “[…] left-branching speakers were better at remembering initial stimuli across verbal and non-verbal working memory tasks, probably because real-time sentence comprehension heavily relies on retaining initial information in LB languages, but not in RB languages”, says Alejandro Sanchéz Amaro, from the Department of Cognitive Science at the University of California, San Diego. (1)

Thinking in a sequence based on your language.

Languages based on the way you think.

A cosmos structured in the way you see.

People seeing based on how their brain is structured.

In a universe where things can go either right or left, there is only one correct way to go… (Nowhere!) In a cosmos where thinking can be done in various ways, there is only one way to think… (Don’t think!)

Listen to the forest whispering in your ear…

Watch the dim light of existence cast shadows under the light…

Listen to the silence between the words…

There is a structure in the cosmos. And there is chaos in this structure. There is logos governing the universe. And inside logos, the deep darkness of stillness. Any structure imposes structures. Any way of thinking destroys other ways, equally possible and correct.

There is a unity in the clatter of phenomena.

You cannot see this unity by looking from left to right. Nor by observing from right to left. You cannot know everything if you already know things. You cannot understand it all if you start by claiming that you understand something.

This unity you can only watch by watching everything.

And the only way to do that, is by watching nothing…

Is the man sitting at the bus stop?

Search inside…

What is a man?

And you will be astonished by the lack of any plausible answer…

AI. Games. Intelligence. Humans.

Photo by Collis from Pexels

Artificial Intelligence is constantly beating humans at more and more board games. Some years ago, the same team that created the Go-playing bot AlphaGo celebrated something more formidable: an artificial intelligence system capable of teaching itself—and winning at—three different games. The AI is one network that works across multiple games; that generalizability makes it more impressive, as it might be able to learn other similar games too.

They call it AlphaZero, and it knows chess, shogi (Japanese chess), and Go. All of these games fall into the category of “full information” or “perfect information” contests – each player can see the entire board and has access to the same information (unlike games such as poker, where you do not know what cards an opponent is holding). The network needs to be told the rules of the game first, and after that, it learns by playing games against itself.
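As a heavily simplified illustration of that idea (a toy sketch of learning by self-play, not DeepMind’s AlphaZero algorithm or code), the program below is told only the rules of a miniature game of Nim and builds up its own estimate of how good each position is by playing against itself:

```python
# Toy self-play sketch (illustrative only, not DeepMind's AlphaZero).
# The only game knowledge supplied is the rules of a tiny Nim variant:
# players alternately remove 1 or 2 stones; whoever takes the last stone wins.
import random
from collections import defaultdict

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def self_play(episodes=5000, start=7):
    value = defaultdict(float)               # learned value of (stones, player_to_move)
    for _ in range(episodes):
        stones, player, visited = start, 0, []
        winner = None
        while stones > 0:
            visited.append((stones, player))
            moves = legal_moves(stones)
            if random.random() < 0.2:         # occasional random exploration
                move = random.choice(moves)
            else:                             # otherwise leave the opponent the worst position
                move = min(moves, key=lambda m: value[(stones - m, 1 - player)])
            stones -= move
            if stones == 0:
                winner = player               # taking the last stone wins
            player = 1 - player
        for s, p in visited:                  # nudge each visited state toward the final result
            result = 1.0 if p == winner else -1.0
            value[(s, p)] += 0.05 * (result - value[(s, p)])
    return value

values = self_play()
# Seven stones is a winning position for the player to move (7 is not a multiple of 3),
# so the learned value of the opening state should drift clearly positive.
print(round(values[(7, 0)], 2))
```

The real system replaces the lookup table with a deep neural network and the random exploration with Monte Carlo tree search, but the principle is the same one described above: only the rules are given, and everything else is learned through self-play.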

The system “is not influenced by how humans traditionally play the game,” says Julian Schrittwieser, a software engineer at DeepMind, which created it.

Since AlphaZero is “more general” than the AI that won at Go, in the sense that it can play multiple games, “it hints that we have a good chance to extend this to even more real-world problems that we might want to tackle later,” Schrittwieser adds. (1)

See?

Even computers can learn.

As long as you teach them. (the rules)

That is how you learnt as well.

Alone.

Wandering in the dark abyss.

Walking in the dead of the night.

You knew the rules.

You just had to deduce the rest.

And you were so afraid.

Because the only rule was that there were no rules.

Because the only law was that you were the law.

Once upon a time, your father told you he loves you.

And that you were free to go.

You decided to leave.

Afraid of yourself.

And you are trying to find rules ever since…

I learn. You learn. We learn. (nothing)


“I learn,” “you learn,” “she learns,” “they learn,” yet, according to a surprising new linguistic study, in countries where the dominant language allows personal pronouns such as ‘I’ to be omitted, learning suffers. (1)

A more or less logical conclusion. Learning is about increasing your knowledge. Being, on the other hand, is about increasing your ignorance to the point that you become one with the cosmos.

Question everything.

Even your ability to question anything.

Do you feel wise? Are you ready to accept that you are not? It is only when you are ready to accept that you are nothing, that you become everything. A cup of tea is not useful when it is full…

Only the wisest of men admitted that they learnt nothing…

Only the most arrogant of men advertised that they knew something…

I am. Therefore, I learn.

I am no one.

Therefore, I already know everything…

Not because I know them.

But because I accept that I am already part of nothing…

Sign language. Spoken language limitations.

Photo by Sergei Akulich from Pexels

Sign languages are considered by linguists to be full-fledged and grammatically very sophisticated languages. But they also have unique insights to offer into how meaning works in language in general.

Sign languages can help reveal hidden aspects of the logical structure of spoken language, but they also highlight its limitations because speech lacks the rich iconic resources that sign language uses on top of its sophisticated grammar.

For instance, the logical structure of the English sentence “Sarkozy told Obama that he would be elected” is conveyed more transparently in sign language. The English sentence is ambiguous, explains the linguist Philippe Schlenker, as “he” can refer either to Sarkozy or to Obama. Linguists have postulated that this is because the sentence contains some unpronounced – but cognitively real – logical variables like x and y.

If the sentence is understood as Sarkozy (x) told Obama (y) that he (x) would be elected, with the same variable x on Sarkozy and on he, the pronoun refers to Sarkozy; if instead he carries the variable y, it refers to Obama. Remarkably, in sign language the variables x and y can be visibly realized by positions in space, e.g. by signing Sarkozy on the left and Obama on the right. (1)
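Written out as a toy sketch (my own illustrative notation, not Schlenker’s formalism), the ambiguity comes down to which variable the pronoun carries:

```python
# Toy sketch of the two readings (illustrative notation, not Schlenker's formalism).
referents = {"x": "Sarkozy", "y": "Obama"}

def reading(pronoun_variable):
    he = referents[pronoun_variable]
    return f"Sarkozy(x) told Obama(y) that {he}({pronoun_variable}) would be elected"

print(reading("x"))  # Sarkozy(x) told Obama(y) that Sarkozy(x) would be elected
print(reading("y"))  # Sarkozy(x) told Obama(y) that Obama(y) would be elected

# In sign language the two variables can be realised as positions in space:
# Sarkozy signed on the left (locus x), Obama on the right (locus y), and the
# pronoun disambiguated by pointing to one locus or the other.
```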

See.

Now you know that it was about Sarkozy.

Listen.

Now you know what the other guy meant.

Feel.

Now you understand why the other one is even speaking to you.

Reach out with your senses.

It is all the same at the end.

Ideas may sometimes be conveyed better with images.

But blind people cannot see.

Ideas may sometimes be conveyed better with words.

But deaf people cannot hear.

In the end, you will need to reach out to understand what is said.

But not to the person talking to you.

But to the person inside you.

Listen carefully.

Do you hear anything?

See.

Listen.

Feel.

Why are you even listening?