
North Carolina State University researchers have developed a framework for building deep neural networks via grammar-guided network generators. In experimental testing, the new networks (called AOGNets) outperformed existing state-of-the-art frameworks, including the widely used ResNet and DenseNet systems, in visual recognition tasks.

“AOGNets have better prediction accuracy than any of the networks we’ve compared it to,” says Tianfu Wu, an assistant professor of electrical and computer engineering at NC State and corresponding author of a paper on the work. “AOGNets are also more interpretable, meaning users can see how the system reaches its conclusions.” (1)
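To make the grammar-guided idea concrete, here is a toy sketch of AND-OR composition in general: AND nodes compose the outputs of their children, OR nodes select among alternative children, and terminal nodes apply a primitive operation. This is only an illustration of the concept, not the authors’ implementation; all names, the use of plain numbers in place of feature maps, and the specific composition rules (summation for AND, a fixed index for OR) are simplifying assumptions.

```python
# Toy sketch (not the authors' code) of an AND-OR grammar:
# AND nodes compose all children; OR nodes pick one alternative;
# terminal nodes apply a primitive operation. Plain numbers stand
# in for feature maps.

def terminal(op):
    """Leaf node: applies a primitive transformation."""
    return lambda x: op(x)

def and_node(children):
    """AND node: composes all children (here: sums their outputs)."""
    return lambda x: sum(child(x) for child in children)

def or_node(children, choose):
    """OR node: selects one alternative child (here: by a fixed index)."""
    return lambda x: children[choose](x)

# Build a tiny grammar: the root is an AND of a terminal node and an
# OR node that chooses between two alternative terminals.
double = terminal(lambda x: 2 * x)
square = terminal(lambda x: x * x)
negate = terminal(lambda x: -x)

root = and_node([double, or_node([square, negate], choose=0)])

print(root(3))  # 2*3 + 3*3 = 15
```

Because every node exposes the grammar structure that produced it, a network generated this way can be traced node by node, which is one intuition behind the interpretability claim quoted above.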

Speak.

And you will think.

Think.

And words will come out of your mind.

We believe in Logos.

And we train our children accordingly.

But there is a secret we fail to grasp.

And in our endless chattering we choose to forget.

In the beginning there was not Logos.

Something gave birth to Logos.

In every phrase uttered, the same secret cries out loudly…

There is nothing you can say that hasn’t been said before…

For being the veil of endless aeons…

Beyond the stars and the darkness…

In the beginning, there was silence…