Old mummy… Silent voices…

The sound of the vocal tract of a 3,000-year-old mummy has been recreated using CT scans, a 3D printer, and a voice synthesizer. Details of this achievement—such as it is—were published in Scientific Reports. (1)
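The study's approach drove a larynx-like sound source through a 3D-printed copy of the tract. As a rough software illustration of the same source-filter principle, here is a minimal Python sketch that excites a cascade of formant resonators with an impulse train. The formant values are generic figures for an 'ah'-like vowel, not measurements from the paper.

```python
# A minimal sketch of source-filter vowel synthesis. The formants below are
# illustrative textbook values, not data from the mummy study.
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

FS = 16000   # sample rate (Hz)
F0 = 110     # fundamental (glottal) frequency (Hz)
DUR = 1.0    # duration (seconds)

# Glottal source: a simple impulse train at F0, a crude stand-in for the
# artificial larynx used in the experiment.
n = int(FS * DUR)
source = np.zeros(n)
source[::FS // F0] = 1.0

def resonator(signal, freq, bandwidth, fs):
    """Apply a two-pole resonance at `freq` Hz with the given bandwidth."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r ** 2]  # denominator (pole pair)
    b = [1.0 - r]                              # rough gain normalisation
    return lfilter(b, a, signal)

# Vocal-tract filter: cascade of resonators, one per formant (F1..F3 for /a/).
out = source
for f, bw in [(730, 90), (1090, 110), (2440, 140)]:
    out = resonator(out, f, bw, FS)

out /= np.max(np.abs(out))                     # normalise to [-1, 1]
wavfile.write("vowel.wav", FS, (out * 32767).astype(np.int16))
```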

Old voices.

Lost voices.

Meaning nothing now.

Frightening isn’t it?

Why don’t we understand those voices?

Why do we need to?

Lost humans.

Void of anything.

Except for the things they can lose…

The forest is silent now.

Full of skeletons.

And in that deafening silence.

You can hear nothing at all.

Nothing but yourself speaking…

Understanding language. Word by word…

Photo by Spiros Kakos from Pexels

The capacity for language is distinctly human. It allows us to communicate, learn things, create culture, and think better. Because of its complexity, scientists have long struggled to understand the neurobiology of language.

In the classical view, there are two major language areas in the left half of our brain. Broca’s area (in the frontal lobe) is responsible for the production of language (speaking and writing), while Wernicke’s area (in the temporal lobe) supports the comprehension of language (listening and reading). A large fibre tract (the arcuate fasciculus) connects these two ‘perisylvian’ areas (around the Sylvian fissure, the cleft that separates the frontal and temporal lobes).

“The classical view is largely wrong,” says Hagoort. Language is infinitely more complex than speaking or understanding single words, which is what the classical model was based on. While words are among the elementary ‘building blocks’ of language, we also need ‘operations’ to combine words into structured sentences, such as ‘the editor of the newspaper loved the article’. To understand and interpret such an utterance, knowing the speech sounds (or letters) and meaning of the individual words is not enough. For instance, we also need information about the context (who is the speaker?), the intonation (is the tone cynical?), and knowledge of the world (what does an editor do?). (1)
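To make the notion of combinatorial ‘operations’ concrete, here is a toy sketch that parses the quoted sentence with a small context-free grammar. The grammar and the use of the nltk library are illustrative assumptions on my part, not anything taken from the research described here.

```python
# A toy illustration of combining words into a structured sentence with a
# context-free grammar; the grammar is hand-written for this one example.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N | NP PP
    PP -> P NP
    VP -> V NP
    Det -> 'the'
    N  -> 'editor' | 'newspaper' | 'article'
    P  -> 'of'
    V  -> 'loved'
""")

sentence = "the editor of the newspaper loved the article".split()
for tree in nltk.ChartParser(grammar).parse(sentence):
    tree.pretty_print()
# (S (NP (NP (Det the) (N editor)) (PP (P of) (NP (Det the) (N newspaper))))
#    (VP (V loved) (NP (Det the) (N article))))
```

Note that the resulting tree captures only structure; the context, the intonation and the world knowledge the passage mentions lie entirely outside it, which is precisely the point being made.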

We believe thinking is complex.

And even when it is not, we make it so.

The meaning of words depends on their context.

But going backwards, what was the first context of them all?

Go back and see within the darkness.

And you will see one word.

Uttered within perfect silence.

This is the substrate of it all.

(Silence)

Are you brave enough to listen to yourself?

Reading. Seeing. Seeing better!

Photo by Spiros Kakos from Pexels

Reading is a recent invention in the history of human culture — too recent for dedicated brain networks to have evolved specifically for it. How, then, do we accomplish this remarkable feat? As we learn to read, a brain region known as the ‘visual word form area’ (VWFA) becomes sensitive to script (letters or characters). However, some have claimed that the development of this area takes up (and thus detrimentally affects) space that is otherwise available for processing culturally relevant objects such as faces, houses or tools.

An international research team led by Falk Huettig (MPI and Radboud University Nijmegen) and Alexis Hervais-Adelman (MPI and University of Zurich) set out to test the effect of reading on the brain’s visual system. If learning to read leads to ‘competition’ with other visual areas in the brain, readers should have different brain activation patterns from non-readers — and not just for letters, but also for faces, tools, or houses. ‘Recycling’ of brain networks when learning to read has previously been thought to negatively affect evolutionarily old functions such as face processing. Huettig and Hervais-Adelman, however, hypothesized that reading, rather than negatively affecting brain responses to non-orthographic (non-letter) objects, may, conversely, result in increased brain responses to visual stimuli in general. (1)
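The two hypotheses make opposite predictions about the direction of the group difference. A minimal sketch of that comparison logic, run on randomly simulated placeholder data (not the study’s data), might look like this:

```python
# If reading 'competes' with other visual functions, readers should show
# LOWER responses to faces/houses/tools; if it boosts vision generally,
# HIGHER. The numbers below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
categories = ["letters", "faces", "houses", "tools"]

# Simulated mean activation (arbitrary units) per participant in a visual
# region, for hypothetical groups of 30 readers and 30 non-readers.
readers     = {c: rng.normal(1.0, 0.3, 30) for c in categories}
non_readers = {c: rng.normal(0.8, 0.3, 30) for c in categories}

for c in categories:
    t, p = stats.ttest_ind(readers[c], non_readers[c])
    direction = "higher" if t > 0 else "lower"
    print(f"{c:8s}: readers {direction} than non-readers (t={t:.2f}, p={p:.3f})")
```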

Seeing. Reading. Learning.

In an inactive cosmos we are active.

Don’t be fooled by the supernova or the black holes colliding.

There is silence in the cosmos.

And we break that silence with our chatter.

Seeing. Seeing more. And then even more!

Learning to read in a cosmos which says nothing.

Nothing but the obvious…

Listen to your self while reading aloud.

He doesn’t truly say anything.

Except when you stay silent and listen to him…

Speaking AI… Silent logos…

Photo by 鑫 王 from Pexels

North Carolina State University researchers have developed a framework for building deep neural networks via grammar-guided network generators. In experimental testing, the new networks (called AOGNets) have outperformed existing state-of-the-art frameworks, including the widely used ResNet and DenseNet systems, in visual recognition tasks.

“AOGNets have better prediction accuracy than any of the networks we’ve compared it to,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and corresponding author of a paper on the work. “AOGNets are also more interpretable, meaning users can see how the system reaches its conclusions.” (1)
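AOGNets compose their building blocks according to an AND-OR grammar. The sketch below is a deliberately simplified illustration of that general idea, written in PyTorch (my assumption; the source does not name a framework): OR-nodes choose among production alternatives, AND-nodes compose their children in sequence, and terminal symbols become convolutional units. It is not the actual AOGNet block from the paper.

```python
# A toy sketch of the IDEA of grammar-guided network generation: expand
# production rules into a composition of layers. This is an illustrative
# reduction, not the AND-OR graph building block used by AOGNets.
import random
import torch
import torch.nn as nn

def conv_block(ch):
    """Terminal symbol: a basic conv-BN-ReLU unit."""
    return nn.Sequential(
        nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(ch),
        nn.ReLU(inplace=True),
    )

def expand(symbol, ch, depth, rng):
    """Recursively expand a grammar symbol into an nn.Module.

    AND-nodes compose two children in sequence; OR-nodes pick one
    production alternative; terminals become conv blocks.
    """
    if depth == 0 or symbol == "terminal":
        return conv_block(ch)
    if symbol == "and":                         # AND: sequential composition
        return nn.Sequential(
            expand("or", ch, depth - 1, rng),
            expand("or", ch, depth - 1, rng),
        )
    choice = rng.choice(["and", "terminal"])    # OR: choose an alternative
    return expand(choice, ch, depth - 1, rng)

rng = random.Random(42)
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),  # stem
    expand("or", 32, depth=4, rng=rng),          # grammar-generated body
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                           # toy 10-class head
)
print(net(torch.randn(2, 3, 32, 32)).shape)      # torch.Size([2, 10])
```

The real AOGNets additionally split feature channels and re-merge them following the grammar; the sketch keeps only the compositional skeleton.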

Speak.

And you will think.

Think.

And words will come out of your mind.

We believe in Logos.

And we train our children accordingly.

But there is a secret we fail to grasp.

And in our endless chattering we choose to forget.

In the beginning there was not Logos.

Something gave birth to Logos.

In every phrase uttered, the same secret cries out loudly…

There is nothing you can say that hasn’t been said before…

For being the veil of endless aeons…

Beyond the stars and the darkness…

In the beginning, there was silence…
