Neurons as Code - Part 2

Carrying on from Part 1

In the last part, I briefly highlighted some of the mechanisms I learned about behind neural processing in the human brain.

There were definitely some interesting takeaways, but also a lot of questions that currently do not have answers.

My goal of programming my own neural learning software does not get far when there are questions without answers. However, I decided to keep going for a bit and press on. I suspected there might be enough answers to make a few assumptions that could be converted into digital form. Maybe I am wrong.

Polarization and Sodium

I used this resource to read a more biology-heavy review of how neurons communicate.

It was fascinating to learn that, at a high level, it can be both an electrical and a chemical process. Membranes have a resting state and get "triggered" when a certain charge accumulates outside them. To me, it almost resembles a gathering of people: when enough people gather outside the door, demanding to be let in, the door is opened. The neural membranes are sensitive to this build-up of charge (or polarization) and allow entry under very specific circumstances.

Potassium and sodium are two of the main ions that determine the strength of the charge. I am not entirely sure why this was surprising to me but, for whatever reason, it was. In any case, the inside and the outside of the membrane must be properly balanced. If there's enough of a difference, channels in the membrane open up to balance things out.

This is a deeply mechanical way to visualize how electrical and/or chemical signals are transferred and dealt with at the membrane level.

Once the balancing act has begun, axons, dendrites, and other mechanisms jump in, and the signal is carried on to other neurons until the charge or chemical is leveled out.
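Since my end goal is code, I tried sketching this mechanism as a toy integrate-and-fire neuron: charge accumulates on the membrane, and when it crosses a threshold, the "door opens" and the neuron fires. This is just my own illustration; the resting and threshold values are typical textbook numbers, and the leak factor is made up:

```python
# Minimal integrate-and-fire neuron: incoming charge accumulates on the
# membrane, and when it crosses a threshold the neuron "fires" and the
# membrane resets toward its resting state.

RESTING = -70.0    # resting membrane potential (mV, common textbook value)
THRESHOLD = -55.0  # firing threshold (mV)
LEAK = 0.9         # how strongly the membrane drifts back toward rest

def simulate(inputs, potential=RESTING):
    """Feed a sequence of incoming charges; return which steps fired."""
    fired = []
    for step, charge in enumerate(inputs):
        # leak back toward the resting state, then add the incoming charge
        potential = RESTING + (potential - RESTING) * LEAK + charge
        if potential >= THRESHOLD:
            fired.append(step)
            potential = RESTING  # reset after firing
    return fired

# a few weak inputs do nothing; a burst of charge pushes it over threshold
print(simulate([2.0, 2.0, 2.0, 12.0, 2.0]))  # → [3]
```

The leak term is what makes small, spread-out inputs fade away while a sudden burst gets through, which matches the "enough people at the door" picture.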

Difference between chemical and electrical signals

Reading a bit more in that resource helped me to understand that there's a pretty big difference between chemical signals and electrical signals. What I described in the last section above is almost exclusively chemical based.

In an electrical neural connection, there are actually no gaps between neurons; instead, they are joined by gap junctions where electrical signals can pass unimpeded. A chemical reaction can be blocked or delayed as things build up or wind down. Even the response time can lag in a chemical process. In an electrical process, signals are passed almost instantly, and bidirectionally in some cases, which is even crazier.
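Here is how I would picture that difference in code. The delay value is invented for illustration; the point is just that the chemical path is one-way and delayed, while the electrical path is immediate and can go both ways:

```python
# Toy comparison of chemical vs electrical synapses.
# Chemical: one-way, with a transmission delay (neurotransmitter release,
# diffusion across the gap, receptor binding).
# Electrical: a gap junction passes the signal almost instantly, and in
# some cases in both directions.

class ChemicalSynapse:
    DELAY_MS = 1.0  # illustrative synaptic delay, not a measured value

    def transmit(self, signal, t):
        # the signal reaches the next neuron later, one direction only
        return ("post", signal, t + self.DELAY_MS)

class ElectricalSynapse:
    def transmit(self, signal, t, direction="forward"):
        # gap junction: effectively no delay, and direction can be reversed
        target = "post" if direction == "forward" else "pre"
        return (target, signal, t)

chem = ChemicalSynapse()
elec = ElectricalSynapse()
print(chem.transmit(1.0, t=0.0))                        # arrives later
print(elec.transmit(1.0, t=0.0))                        # arrives immediately
print(elec.transmit(1.0, t=0.0, direction="backward"))  # can go backward
```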

Ok but this does not really help

No matter how much I try to dig into how neurons communicate and store information, there just does not seem to be a clear answer on how it all works together.

I read this resource which sort of confirmed that understanding.

Since memories or learnings are not associated with specific "sections" or "areas" of the brain, the running theory is that all the neurons in your brain work together and, when fired in a specific pattern, correlate to a memory (for example).
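This "memory as a distributed firing pattern" idea is roughly what a Hopfield network models in software, so I sketched one to make it concrete. The patterns and sizes below are made up; the point is that no single neuron holds the memory, yet the whole network can recover it from a partial cue:

```python
# A Hopfield network: each "memory" is a firing pattern spread across
# all neurons (+1 firing, -1 silent), not stored in any one of them.
import numpy as np

def train(patterns):
    """Hebbian learning: strengthen connections between co-firing neurons."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, cue, steps=5):
    """Repeatedly re-fire the neurons until the pattern settles."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# two "memories", each a firing pattern over the same six neurons
memories = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
w = train(memories)

# a corrupted cue (one neuron flipped) still recalls the full memory
cue = np.array([1, 1, 1, -1, -1, 1])
print(recall(w, cue))  # → [ 1  1  1 -1 -1 -1]
```

I find this a useful analogy precisely because the "knowledge" lives entirely in the connection weights shared by all the neurons, not in any location.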

Not just that, but your brain changes. The number of neurons connected to each other, their locations, even their internal structure, are all modifiable. However, your general ability to remember and learn barely changes, if at all. That is incredible.

Something else is going on here and we probably do not understand or know what it is. Even the article states that current neural networks in computing only make assumptions or guesses as to the general mechanisms and they are evolving as we understand the human brain better.

The article brings up a good point that actually complicates this more. It gives examples from nature, like bees choosing where to pollinate, showing that brains are capable of crazy computational activities like probability checks: applying things learned and seen to predict an outcome. That is a very different beast than just learning and memorizing things.
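To make that concrete, here is a toy sketch of the kind of probability check described: picking an option based on reward frequencies observed so far. The patch names and numbers are invented:

```python
# Toy "bee" that estimates, from past visits, which flower patch is most
# likely to pay off, and picks the one with the highest observed
# reward probability.
from collections import defaultdict

visits = defaultdict(lambda: [0, 0])  # patch -> [rewarded, total]

def record(patch, rewarded):
    visits[patch][1] += 1
    visits[patch][0] += int(rewarded)

def best_patch():
    # highest observed reward probability wins
    return max(visits, key=lambda p: visits[p][0] / visits[p][1])

for patch, rewarded in [("clover", True), ("clover", True),
                        ("rose", False), ("rose", True)]:
    record(patch, rewarded)

print(best_patch())  # → clover (2/2 rewarded vs rose's 1/2)
```

Simple frequency counting like this is obviously far cruder than whatever brains actually do, but it shows how "prediction" differs from plain memorization: the stored experience is being aggregated, not just replayed.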

A few more interesting articles

Not satisfied with the lack of answers, I looked up a few more articles.

This one covers how our brain shrinks when we sleep, as if it is defragmenting a hard drive and removing excess pieces.

This article explains how no two neurons are alike. Each neuron is a snowflake. Unique. With no discernible pattern in neuron structure, patterns must arise from how neurons are connected and fired, rather than from their structure or location.

I stumbled onto a very interesting scientific study on whether synapses remodel themselves or the brain creates new ones in order to learn new things. The general idea is that both might be true. There may even be dormant (or "silent") synapses that get turned on when they need to be used. This article was very intriguing because it introduced me to a different way of thinking about how our brain processes information. The implication is that your brain knows which neurons are most effective in processing or recalling information and understands how to create those connections to achieve the desired results.
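My rough mental model of those "silent" synapses, in code. Everything here (the weights, the recruitment rule) is invented for illustration, not taken from the study:

```python
# Toy illustration of "silent" synapses: the connections already exist,
# but carry no signal until learning switches them on.

class Synapse:
    def __init__(self, weight):
        self.weight = weight
        self.silent = True  # dormant until recruited

    def transmit(self, signal):
        return 0.0 if self.silent else signal * self.weight

def total_output(synapses, signal):
    return sum(s.transmit(signal) for s in synapses)

def recruit_for(synapses, signal, target):
    """Wake whichever dormant synapse moves the output closest to target."""
    dormant = [s for s in synapses if s.silent]
    if not dormant:
        return
    current = total_output(synapses, signal)
    best = min(dormant, key=lambda s: abs(target - (current + s.weight * signal)))
    best.silent = False

synapses = [Synapse(0.5), Synapse(2.0), Synapse(-1.0)]
print(total_output(synapses, 1.0))   # → 0.0  (all silent: nothing gets through)
recruit_for(synapses, signal=1.0, target=2.0)
print(total_output(synapses, 1.0))   # → 2.0  (the 2.0 synapse was switched on)
```

What struck me in the study is exactly the part this sketch hand-waves over: the `recruit_for` step. In code I can just compare every option; how the brain "knows" which connection to wake up is the open question.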

To me, the question is still: HOW? How does your brain know which neurons or synapses are appropriate for the function at hand? What kind of calculation takes place that says "Oh, neuron A is currently connected to neuron B, but I need to connect neuron A to neuron C to sum 1 and 1", and where does that meta-calculation take place?

Another article reinforced something I had read before. A study was done on rats: they were trained to listen to a tone and react to it. The author of the study then injected the rats with a chemical that prevented their neurons from creating proteins at all. He tried the tone again 24 hours later and the rats did not respond, although they had previously been trained to. This means that neurons actually re-build whatever process took place during the initial formation of the training. That is, down to the protein creation level, if a memory is formed, recalling it uses the same entire process. The difference is that the neurons go through the process faster, and perhaps with less accuracy, which may be why memories usually do not retain all the details.

I think I should take a break...

I've reached a point where I need to think more about how current "digital" neural networks work and compare them to what I have learned along the way. Maybe there's something I can do, but it requires some thought. I may write a follow-up later this year.