AI and the theory of Connectivity

Big news, everyone!

According to the following article, intelligence could originate from a very simple algorithm:

https://futurism.com/intelligence-may-stem-from-a-basic-algorithm-in-the-human-brain/

I'll have to look into the theory of Connectivity some more. I'll bet our AIs could definitely benefit from this if it's true.

I must apologize for not posting this as a web link. I don’t know how, and I just had to get this news out there…

Edit: Now I know how. D’oh! Stuff like this is what happens when I get too excited…

While the idea is interesting, the article remains rather vague about what exactly it would mean, at least for AI.

Also, I never quite understand how such highly reductionist algorithms are useful. It's a bit like Lego blocks: in theory you can build almost any shape out of them, and if you zoom out far enough you won't see much difference compared to a shape built another way (or out of other materials/building blocks).

But that still does not mean you know how to build a specific shape or how to find it through a smart search or efficient trial and error.

Essentially these reductionist theories just give you new simple building blocks, but the issue is how to combine them.

An artificial neuron is also pretty simple, and a computer can be reduced to a Turing machine with very few basic operations. Yet writing specific non-trivial programs is still a significant effort, and neural nets still need to be trained.
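Just to underline how simple the building block is, here's a minimal generic sketch of a single artificial neuron (standard textbook stuff, nothing specific to the article):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias,
    squashed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# The unit itself is trivial; all the difficulty lives in
# finding the right weights, i.e. in training.
print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))
```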

From the article: “Intelligence is really about dealing with uncertainty and infinite possibilities,” Tsien said. It appears to be enabled when a group of similar neurons form a variety of cliques to handle each basic like recognizing food, shelter, friends and foes. Groups of cliques then cluster into functional connectivity motifs, or FCMs, to handle every possibility in each of these basics like extrapolating that rice is part of an important food group that might be a good side dish at your meaningful Thanksgiving gathering. The more complex the thought, the more cliques join in.

So basically you have categorization, and then you combine these categories to make more complex ones. This is also what you do in ontologies/semantic networks, and at least partially in neural networks (mostly the categorization part, in the latter).
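To make that concrete, here's a toy semantic-network-style sketch in Python, using the article's rice/side-dish/food example; the exact structure is made up for illustration:

```python
# Hypothetical toy categories; each more complex category is
# defined as a combination of simpler ones.
categories = {
    "food":      set(),                  # basic category
    "side_dish": {"food"},               # food served alongside a main
    "rice":      {"food", "side_dish"},  # combines both
}

def is_a(concept, category):
    """True if `concept` belongs to `category`, directly or
    via the categories it combines."""
    if concept == category:
        return True
    return any(is_a(parent, category)
               for parent in categories.get(concept, ()))

print(is_a("rice", "food"))  # True: rice -> side_dish -> food
```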

What is interesting is that apparently all possible combinations that make up a category are represented in the brain.

http://jagwire.augusta.edu/archives/39066
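If I understand the articles correctly, "all possible combinations" means every non-empty subset of the basic inputs gets its own neural clique, following the power-of-two-based formula N = 2^i − 1. A quick sketch just to make the counting concrete:

```python
from itertools import combinations

def cliques(basics):
    """Enumerate every non-empty combination of basic inputs;
    per the theory, each would correspond to one neural clique."""
    for r in range(1, len(basics) + 1):
        yield from combinations(basics, r)

basics = ["food", "shelter", "friend", "foe"]  # example inputs
all_cliques = list(cliques(basics))
# For i inputs the theory predicts N = 2**i - 1 cliques:
assert len(all_cliques) == 2 ** len(basics) - 1  # 15 for i = 4
print(len(all_cliques), all_cliques[:5])
```

Of course this explodes combinatorially as i grows, which is another way of saying that merely representing all combinations doesn't solve the search problem.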

But it still remains a little vague and hard to grasp what that actually means for practical applications. It seems somewhat obvious that you need to encode various concepts/categories and be able to recognize them again when looking at the program and its memory, or at the brain. Maybe it is surprising that this grouping of neurons can be seen so clearly (according to that article).

Yet, looking at it from an AI point of view, I still think we don't know how that could help. We also have these groupings and relationships in data structures (in logic, semantic nets, etc.), yet that doesn't tell us how to find appropriate ones or how to write programs in general. We still need to solve that for every case/program we want to build.