Author: Catherine Cabrera – Microbiologist, Chemist, and Data Scientist
Have you ever stopped to wonder how we learn? As a scientist, I find this absolutely fascinating. Every single thought and action we have ever produced is controlled by one of the most incredible organs in our bodies – the brain. What’s even more mind-blowing is that all of humankind’s greatest achievements throughout history rest on the simple, yet complex, process of learning. In recent history, one of the most astonishing breakthroughs built on that process has been artificial intelligence (AI).
We have been able to create some seriously impressive AI systems that can do everything from generating images and videos to playing games and even holding full conversations with us! Considering that we accomplish all of this with our brains, it is astounding. The real question is whether these machines are actually thinking – and if so, exactly how they accomplish it. Artificial Neural Networks vs. the Human Brain will explore the fascinating field of artificial intelligence while contrasting human thought processes with these incredible artificial creations.
“I think; therefore, I am.”
“I think; therefore, I am.” – René Descartes. With this simple phrase, the philosopher captured the essence of the power of our thoughts. Our ability to think makes us who we are – it is what drives us to do what we do, feel the way we feel, and connect with others. In other words, everything we are and everything we do starts with one little thought in our head. It is pretty incredible when you stop to think about it.
Although there are various ways to learn, natural learning is among the most amazing. It can be understood as the mental mechanism we employ to pick up information and abilities without conscious effort. It lets us acquire new skills and reach new conclusions by applying mental abilities such as problem-solving, critical thinking, and decision-making, and it draws on cognitive skills such as perception, attention, memory, and language processing [1].
From a biological perspective, our unique ability to think comes down to how the neurons in our brains interact. Each neuron is made up of three essential parts: a cell body, where the genetic material is stored, and two kinds of extensions – axons and dendrites – that work together to transmit and receive chemical signals between neurons through a process called the synapse (fig. 1). The aggregation of millions of neurons is what we recognize today as a natural neural network (NNN) [2]. Around the 1950s, the study of natural learning, cognition, and brain architecture raised another interesting question: can we mimic natural learning with an artificial approach?
Fig 1. Synapse process and neuron architecture in the human brain. Made by Catherine Cabrera
Travelling through history – artificial intelligence
Many thinkers throughout history, like René Descartes and John Locke, debated the central issue of the connection between the body and the mind, laying the groundwork for the development of cognitive science [3]. But the field of artificial intelligence did not take shape until the middle of the 20th century, with the invention of the perceptron. Perceptrons were designed to mimic the structure and functionality of natural neurons, creating simple machines that could solve classification problems (Fig. 2).
People were quite excited about this technology and envisioned all sorts of amazing uses for it. For example, when the US Navy demonstrated a perceptron-based electronic computer in 1958, a New York Times article reported that it was expected not only to walk, talk, and see but also to write, reproduce itself, and be conscious of its own existence [4].
Fig 2. Comparison between perceptrons and biological neurons. From: https://appliedgo.net/perceptron/
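To make the comparison in Fig. 2 concrete, here is a minimal sketch, in Python with NumPy, of a single perceptron learning the logical AND function (the dataset, learning rate, and number of epochs are illustrative assumptions, not details of any historical implementation). The weighted inputs play the role of the dendrites, the summation is the “cell body”, and the thresholded output is the signal sent down the “axon”.

```python
import numpy as np

# Illustrative sketch of a single perceptron (not the original 1958 implementation).
# Inputs act like dendrites, the weighted sum like the cell body,
# and the thresholded result like the signal sent down the axon.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
y = np.array([0, 0, 0, 1])                      # logical AND labels

w = np.zeros(2)   # "synaptic" weights
b = 0.0           # bias term
lr = 0.1          # learning rate (arbitrary choice for this example)

def predict(x):
    """Weighted sum followed by a hard threshold -> binary output (0 or 1)."""
    return int(np.dot(w, x) + b > 0)

# Classic perceptron learning rule: adjust the weights only when the output is wrong.
for epoch in range(10):
    for xi, target in zip(X, y):
        error = target - predict(xi)
        w += lr * error * xi
        b += lr * error

print([predict(xi) for xi in X])  # expected: [0, 0, 0, 1]
```

Because AND is linearly separable, this simple error-driven update is enough for the perceptron to get all four cases right after a handful of passes over the data.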
And while perceptrons could take inputs, process them, and produce outputs, their limits were soon revealed when they failed to solve non-linear problems like the XOR function [5]. It was then that a tantalizing question arose: if we can mimic a single neuron, could we mimic an entire neural network? The relationship between the brain and intelligence sparked a whole new era of exploration, leading to the incredible advancements in AI that we see today. Initially, artificial neural networks (ANNs) were built by joining multiple layers of single perceptrons, which made them able to solve pattern-recognition problems. However, two obstacles arose: training these networks was expensive in terms of computational cost, and the binary output (0 or 1) that perceptrons produce limited the kinds of problems these networks could solve [4].
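The XOR limitation, and the multi-layer fix, can also be seen in a few lines. The toy sketch below (illustrative only; the hidden-layer size, learning rate, and initialization are arbitrary choices) trains a tiny network with one hidden layer of sigmoid units on XOR using hand-written gradient descent – something no single perceptron can do, because the four XOR points cannot be separated by one straight line.

```python
import numpy as np

# Toy multi-layer network for XOR: 2 inputs -> 4 hidden sigmoid units -> 1 output.
# A single perceptron cannot learn this mapping: XOR is not linearly separable.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network predictions

    # backward pass: gradients of the squared error, written out by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically ends up close to [0, 1, 1, 0]
```

Adding even one hidden layer lets the network carve the input space with more than one boundary, which is exactly what XOR requires – at the cost of the extra training effort mentioned above.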
Thankfully, we have overcome these obstacles, and now we can build remarkable artificial intelligence systems such as ChatGPT, You.com, Midjourney, DALL-E 2, and others. This was achieved through new technologies and ground-breaking theories that completely changed the architecture and processing abilities of artificial neural networks. For instance, recurrence – feeding information back into the network – was a significant advance in this area and led to the creation of recurrent neural networks (RNNs) (fig. 3). Introducing this concept into ANNs also made these networks diverge further from natural neural networks, as our brains do not work this way: in the synapse, the signal travels in only one direction [2].
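To picture what “recurrence” means here, consider the toy sketch below (the sizes and random weights are arbitrary assumptions, not any published architecture). The hidden state produced at each time step is fed back in at the next one – that feedback loop is what distinguishes an RNN from a plain feed-forward network.

```python
import numpy as np

# Minimal recurrent step: the hidden state h is fed back into the network
# at every time step, giving the network a simple form of memory.

rng = np.random.default_rng(1)
input_size, hidden_size = 3, 5            # arbitrary sizes for the example

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the feedback)
b_h  = np.zeros(hidden_size)

def rnn_step(x, h):
    """One RNN time step: combine the current input with the previous hidden state."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

sequence = rng.normal(size=(4, input_size))   # a toy sequence of 4 input vectors
h = np.zeros(hidden_size)                     # initial state: "no memory yet"

for t, x_t in enumerate(sequence):
    h = rnn_step(x_t, h)
    print(f"step {t}: hidden state = {np.round(h, 3)}")
```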
However, thanks to these adaptations, we can now carry out tasks such as text prediction and language synthesis. Nonetheless, RNNs have their own drawbacks, such as poor short-term memory and difficult training. Long Short-Term Memory networks (LSTMs) were developed to address these issues; they changed the game by extending the memory of RNNs and enabling them to carry out jobs that need long-term memory (fig. 3). More recently, convolutional neural networks (CNNs), which arrange artificial neurons in a three-dimensional configuration, have been created for increasingly complex tasks. These networks are primarily employed for computer-vision activities analogous to those you carry out with your own eyesight (fig. 3) [5].
Fig 3. The architecture of different neural networks. a) Structures of ANN, RNN, and LSTM. Made by Catherine Cabrera. b) CNN basic structure, from https://www.interviewbit.com/blog/cnn-architecture/
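As a rough picture of how a CNN “sees”, the sketch below hand-rolls a single convolution in NumPy (it is not a full CNN layer, and the filter values are chosen by hand rather than learned). A small 3×3 filter slides over a tiny image, and each output value depends only on a local patch of pixels – the locality that makes these networks so well suited to vision tasks.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]      # local neighbourhood of pixels
            output[i, j] = np.sum(patch * kernel)  # one "artificial neuron" looking at it
    return output

# A tiny 6x6 "image": dark on the left, bright on the right.
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# A simple vertical-edge filter (values chosen by hand for illustration;
# in a real CNN these weights are learned from data).
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

print(convolve2d(image, edge_filter))  # large values appear only where the edge is
```

In a real CNN, many such filters are stacked and their values are learned from data, together with pooling and fully connected layers.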
It is fascinating how our understanding of brain functions has influenced the creation of AI. However, despite being historically inspired by the human brain, AI has always been designed to function differently from us. It is like the difference between a bird and an aeroplane – both can fly, but each does it in its own way. Similarly, AI has its own way of processing information and making decisions, which sets it apart from our thought processes. Specifically, AI development has followed a data-centric approach: besides combining science and technology to build these machines, we have to feed them data to teach them how to “see”. Just as a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to convey messages to other neurons, a single perceptron needs input channels, a processing stage, and an output channel to make a prediction from data (fig. 2). In our case, the signals we receive are stimuli from the outside world, which we then convert into chemical signals the brain can process. For AI, data is the input signal it needs to function [1,6].
“Despite being historically inspired by our human brain, AI has always been designed to function differently from us”.
Catherine Cabrera – Chemist and microbiologist
Can AI outthink the human brain?
Nowadays, both fear of and curiosity about new AI technologies lead to common misconceptions about their strengths and limitations. For instance, watching an AI generate new images from plain text, summarize vast amounts of information, and create videos – all in seconds – can be scary, mainly because we ourselves are not capable of this. But did you know that you, or even a kid, can probably beat some of the most impressive AIs at Tic-Tac-Toe? And even though artificial neural networks were inspired by the workings of our brain, making them somewhat similar in concept, important differences in structure and processing capacity lead to significant divergence in functionality, as shown in the table below.
Table 1. Comparison between natural neural networks and artificial neural networks. Made by Catherine Cabrera [4, 7].
From the information presented above, we can see that there are huge differences between NNNs and ANNs, including size, flexibility, and power efficiency, among others. However, comparing raw numbers or isolated aspects is not enough to understand the whole picture, because learning is a process that goes beyond the sum of these parts.
For example, have you ever considered how different machines are from people in the way they perceive and interpret the world around them? It is like the difference between knowledge and wisdom. Despite holding a vast quantity of knowledge in its memory, an AI lacks the wisdom to evaluate tricky situations that humans handle easily. For instance, it may be simple for us to identify a blurry image of an animal, but it can be difficult for an AI system to do so because of the rigid computer-vision training parameters used to create it [8]. So the next time you are in awe of AIs’ incredible talents, remember that there are some things only humans can do.
Fig 4. Image classification performed by AI. From https://arxiv.org/abs/1412.6572
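Figure 4 comes from the adversarial-examples work cited in [8]. The sketch below is not that experiment; it is a deliberately simplified toy (a hand-picked linear “classifier” over three made-up pixel values) that captures the underlying idea of the fast gradient sign method: a tiny, targeted nudge to every input value can flip the model’s decision even though the input barely changes.

```python
import numpy as np

# Toy illustration of the idea behind adversarial examples (in the spirit of
# the fast gradient sign method from [8]); the "model" and "image" are made up.

w = np.array([1.0, -2.0, 1.5])      # weights of a toy linear classifier
b = -0.2
x = np.array([0.5, 0.2, 0.3])       # a tiny 3-"pixel" input, classified as positive

def score(x):
    return np.dot(w, x) + b          # positive score -> "cat", negative -> "not cat"

# For a linear model, the gradient of the score with respect to the input is just w.
# Stepping every pixel a small amount *against* the score (minus the sign of w for
# a positive example) is the worst-case small perturbation.
epsilon = 0.1                        # perturbation budget (small, hand-picked)
x_adv = x - epsilon * np.sign(w)

print("original score:  ", round(score(x), 3))      # 0.35  -> "cat"
print("perturbed score: ", round(score(x_adv), 3))   # -0.1  -> decision flips
print("max pixel change:", epsilon)                  # yet no pixel moved more than 0.1
```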
Exploring this idea further: AIs are recognized for performing specific tasks with ease, but did you know that our ability to multitask is one of the things that makes our brains so incredible? That ability comes from the asynchronous and parallel nature of our neurons. Regrettably, AI cannot fully match us here, because its layers of artificial neurons are frequently fully connected; artificial neurons need weights of zero to represent a lack of connections, in contrast to biological neurons, whose small-world wiring means they can simply have fewer connections between them [2,4,7]. It just goes to show that while artificial intelligence has made great strides, nature still handles some tasks better than even the most sophisticated machinery! What’s more, the strict parameters used to build AIs and their dependence on data make them extremely vulnerable to any software or hardware malfunction. By comparison, even if a part of our brain is damaged, the rest can keep functioning and keep us alive!
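One way to picture the point about zero weights standing in for missing connections is the masking sketch below (purely illustrative; real sparse networks and pruning methods are considerably more sophisticated). A dense weight matrix is multiplied element-wise by a binary mask so that most connections simply do not exist – a step closer to the sparse wiring of biological neurons.

```python
import numpy as np

# A fully connected layer links every input to every output.
# To mimic sparse, brain-like wiring, we zero out most of the weights with a mask.
# (Purely illustrative; real sparse networks and pruning are more involved.)

rng = np.random.default_rng(42)
n_in, n_out = 8, 6

dense_weights = rng.normal(size=(n_out, n_in))                 # every connection exists
mask = (rng.random(size=(n_out, n_in)) < 0.2).astype(float)    # keep only ~20% of them
sparse_weights = dense_weights * mask                          # zeros = "no synapse here"

x = rng.normal(size=n_in)
sparse_out = sparse_weights @ x

print("connections in dense layer :", dense_weights.size)
print("connections in sparse layer:", int(mask.sum()))
print("outputs still computable   :", np.round(sparse_out, 2))
```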
“While artificial intelligence has made great strides, nature still excels at some tasks better than even the most sophisticated machinery”.
Catherine Cabrera – Chemist and microbiologist
Conclusions
Despite their differences, natural thinking and AI can complement each other in many ways. For example, AI systems can support human decision-making by providing insights and predictions based on large amounts of data. In turn, human cognition can validate and refine the output of AI systems, as well as provide context and interpret results. So next time you learn something new, take a moment to marvel at the incredible power of your brain and natural learning, and don’t underestimate yourself! You are not a machine, and you don’t need to be.
REFERENCES
- Criss, E. (2008). The Natural Learning Process. Music Educators Journal, 95(2), 42–46. https://doi.org/10.1177/0027432108325071
- Brain Basics: The Life and Death of a Neuron. (2022). National Institute of Neurological Disorders and Stroke. https://www.ninds.nih.gov/health-information/public-education/brain-basics/brain-basics-life-and-death-neuron
- Gardner, H. (1987). The mind’s new science: A history of the cognitive revolution. Basic Books.
- Nagyfi, R. (2018). The differences between Artificial and Biological Neural Networks. Medium. https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7
- 3 types of neural networks that AI uses. (2019). Allerin. https://www.allerin.com/blog/3-types-of-neural-networks-that-ai-uses
- Sejnowski, T. J. (2018). The Deep Learning Revolution. MIT Press, p. 47. ISBN 978-0-262-03803-4.
- Thakur, A. (2021). Fundamentals of Neural Networks. International Journal for Research in Applied Science and Engineering Technology, 9(VIII), 407–426. https://doi.org/10.22214/ijraset.2021.37362
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. Published as a conference paper at ICLR 2015. https://doi.org/10.48550/arXiv.1412.6572