
Author: Catherine Cabrera – Microbiologist, Chemist, and Data Scientist
Have you ever stopped to wonder how we learn? As a scientist, I find this absolutely fascinating. Every thought we have ever had and every action we have ever taken is controlled by one of the most incredible organs in our bodies – the brain. And what’s even more mind-blowing is that all of humankind’s greatest achievements throughout history are based on the simple, yet complex, process of learning. Looking at recent history, one of the most astonishing breakthroughs has been artificial intelligence (AI).
We’ve been able to create some seriously impressive AI entities that can do everything from generating images and videos to playing games and even having full-on conversations with us! When you consider what we have managed to do with our brains, it is astounding. The real question is whether or not these machines are thinking. And if so, exactly how do they accomplish it? Artificial neural networks vs. the human brain will explore the fascinating field of artificial intelligence while contrasting human thought processes with these incredible artificial entities.
“I think; therefore, I am.”
“I think; therefore, I am.” – René Descartes. With this simple phrase, the philosopher captured the essence of the power of our thoughts. Our ability to think makes us who we are – it’s what drives us to do what we do, feel the way we feel, and connect with others. In other words, everything we are and everything we do starts with that one little thought in our head! It’s pretty incredible when you stop to think about it.
Although there are several ways to learn, natural learning is one of the most remarkable. It can be understood as the mental mechanism we use to take in information and skills without conscious effort. It allows us to learn new skills and reach new conclusions by letting us apply our mental capacities, such as problem-solving, critical thinking, and decision-making. It also involves various cognitive abilities such as perception, attention, memory, and language processing [1].
From a biological perspective, our unique ability to think is all thanks to how the neurons in our brains interact. Each neuron is made up of three essential parts: a cell body, where the genetic material is stored, and two kinds of extensions, axons and dendrites, which work together to transmit and receive chemical signals between neurons through a process called the synapse (Fig. 1). The aggregation of millions of neurons is what we recognize today as a natural neural network (NNN) [2]. Around the 1950s, the study of natural learning, cognition, and brain architecture raised another interesting question: can we mimic natural learning with an artificial approach?

Fig. 1. The synapse process and neuronal architecture in the human brain. Created by Catherine Cabrera
Travelling through history – artificial intelligence
Many thinkers throughout history, like René Descartes and John Locke, debated the central issue of the connection between the body and the intellect, laying the groundwork for the development of cognitive science [3]. But the field of artificial intelligence didn’t take shape until the middle of the 20th century, with the invention of perceptrons. They were designed to mimic the structure and functionality of natural neurons to create simple machines that could solve classification problems (Fig. 2).
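To make this concrete, here is a minimal sketch in Python (using NumPy, with made-up toy numbers) of the perceptron idea: a weighted sum of inputs passed through a step activation, nudged toward the right answers with the classic perceptron learning rule. It is only an illustration of the concept, not the original 1958 implementation.

```python
import numpy as np

# A minimal perceptron: weighted sum of inputs followed by a step activation.
# Trained here on the linearly separable OR problem with the classic
# perceptron learning rule (an illustrative sketch, not the historical machine).

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
y = np.array([0, 1, 1, 1])                      # OR labels

w = np.zeros(2)   # connection weights (analogous to synaptic strengths)
b = 0.0           # bias term
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = int(np.dot(w, xi) + b > 0)  # step activation: fires or not
        error = target - prediction
        w += lr * error * xi                     # strengthen or weaken connections
        b += lr * error

print([int(np.dot(w, xi) + b > 0) for xi in X])  # -> [0, 1, 1, 1]
```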
People were very excited about this technology, imagining all the incredible possible uses it could have. For example, when the Navy was working on a perceptron-based electronic computer in 1958, a New York Times article claimed it would not only be able to walk, talk, and see, but also to write, reproduce itself, and be conscious of its own existence [4].

Fig. 2. Comparison between perceptrons and biological neurons. From: https://appliedgo.net/perceptron/
And although perceptrons were able to take inputs, process them, and produce outputs, their limited applicability soon became apparent when they failed to solve non-linear problems such as the XOR function [5]. That was when a tempting question emerged: if we can mimic a single neuron, could we mimic an entire neural network? The relationship between the brain and intelligence triggered a whole new era of exploration, leading to the incredible advances in AI we see today. Initially, artificial neural networks (ANNs) were built by joining multiple layers of simple perceptrons, eventually becoming able to solve pattern-recognition problems; however, two obstacles arose: training these networks was not cheap in computational terms, and the binary response (0 or 1) that perceptrons produce limits the problems such networks can solve [4].
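To see why XOR was such a stumbling block, and why stacking layers fixed it, here is a small hand-wired sketch (Python/NumPy, with illustrative weights chosen by hand): no single perceptron can separate XOR with one straight line, but two layers of perceptrons can, by combining an OR-like unit with an AND-like unit.

```python
import numpy as np

step = lambda z: (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Two stacked layers of perceptrons solve XOR by combining two linear cuts:
W_hidden = np.array([[1.0, 1.0],    # hidden unit 1 behaves like logical OR
                     [1.0, 1.0]])   # hidden unit 2 behaves like logical AND
b_hidden = np.array([-0.5, -1.5])
W_out = np.array([1.0, -1.0])       # OR AND NOT AND  ==  XOR
b_out = -0.5

hidden = step(X @ W_hidden.T + b_hidden)
output = step(hidden @ W_out + b_out)
print(output)  # -> [0 1 1 0]
```

The output unit simply fires when the OR unit is active and the AND unit is not, which is exactly the XOR pattern a lone perceptron can never draw with a single line.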
Thankfully, we have overcome these obstacles, and now we can build remarkable artificial intelligence systems like ChatGPT, You, Midjourney, DALL-E 2, and others. This was achieved through new technologies and ground-breaking theories that have completely changed the architecture and processing abilities of artificial neural networks. For instance, recurrence, or feedback, was added to neural networks, a significant advance in this area that resulted in the creation of recurrent neural networks (RNNs) (Fig. 3). Introducing this concept into ANNs caused these networks to start diverging from natural neural networks, as our brains have no such mechanism: in the synapse, the signal travels in only one direction [2].
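As a rough picture of what that feedback loop looks like, here is a toy forward pass of a simple RNN (Python/NumPy, with random weights purely for illustration): the hidden state produced at one time step is fed back in at the next, so earlier inputs influence later outputs.

```python
import numpy as np

# Minimal sketch of recurrence in an RNN: the hidden state h is fed back in
# at every time step, so the network carries a memory of what it has seen.
rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the feedback loop)
b_h = np.zeros(hidden_size)

sequence = rng.normal(size=(5, input_size))  # a toy sequence of 5 time steps
h = np.zeros(hidden_size)                    # the memory starts out empty

for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)  # the new state depends on the old one

print(h)  # the final hidden state summarises the whole sequence
```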
However, thanks to these adaptations, we can now carry out a variety of tasks, such as text prediction and language synthesis. Nevertheless, RNNs also have certain drawbacks, such as their limited short-term memory and their complicated learning curves. Long short-term memory (LSTM) networks were developed to solve these problems. They have been a complete game-changer, allowing RNNs to extend their memory and perform tasks that require long-term memory (Fig. 3). More recently, convolutional neural networks (CNNs), a three-dimensional arrangement of artificial neurons, were created for increasingly complex tasks. These networks are mainly used for computer-vision activities analogous to what we can do with our own eyesight (Fig. 3) [5].
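For the curious, here is a sketch of a single LSTM step using the standard gate equations (Python/NumPy, random illustrative weights, not any production implementation): the forget, input, and output gates decide what to erase, what to write, and what to expose, which is what gives these networks their longer memory.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4

# One weight matrix and bias per gate: forget (f), input (i), candidate (g), output (o)
W = {g: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for g in "figo"}
b = {g: np.zeros(n_hid) for g in "figo"}

def lstm_step(x, h, c):
    z = np.concatenate([x, h])           # current input + previous hidden state
    f = sigmoid(W["f"] @ z + b["f"])     # forget gate: how much old memory to keep
    i = sigmoid(W["i"] @ z + b["i"])     # input gate: how much new info to write
    g = np.tanh(W["g"] @ z + b["g"])     # candidate memory content
    o = sigmoid(W["o"] @ z + b["o"])     # output gate: how much memory to expose
    c = f * c + i * g                    # updated cell state (the long-term memory)
    h = o * np.tanh(c)                   # new hidden state
    return h, c

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):   # toy sequence of 5 steps
    h, c = lstm_step(x_t, h, c)
print(h)
```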

Fig. 3. Architecture of different neural networks. a) Structures of ANNs, RNNs, and LSTMs. Created by Catherine Cabrera. b) Basic CNN structure, from https://www.interviewbit.com/blog/cnn-architecture/
It’s fascinating how our understanding of brain functions has influenced the creation of AI. However, despite being historically inspired by our human brain, AI has always been designed to function differently from us. It’s like the difference between a bird and an aeroplane – both can fly, but they do it in their own unique ways. Similarly, AI has its own way of processing information and making decisions, which sets it apart from our thought processes. Specifically, AI development has taken a data-centric approach. Besides combining science and technology to build these machines, we have to feed data to these AIs to teach them how to “see”. For example, just as a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to convey messages to other neurons, a single perceptron needs input channels, a processing stage, and an output channel to make a prediction from data (Fig. 2). In our case, the signals we receive are stimuli from the outside world, which we then convert into chemical signals the brain can process. For AI, data is the entry signal it needs to function [1,6].
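A tiny, hedged example of what that “entry signal” looks like in practice (Python/NumPy, with a made-up 3×3 “image”): the stimulus an AI receives is nothing more than numbers, here pixel intensities that are scaled and flattened before they reach the input channels of a single artificial neuron.

```python
import numpy as np

# A grayscale image is just a grid of pixel intensities; it gets flattened
# and scaled before it enters the network's input channels.
image = np.array([[  0,  64, 128],
                  [ 64, 128, 192],
                  [128, 192, 255]], dtype=float)   # a toy 3x3 "image"

x = (image / 255.0).flatten()        # normalise to [0, 1] and flatten: 9 inputs

rng = np.random.default_rng(2)
w = rng.normal(size=9)               # one weight per input channel (illustrative)
b = 0.0

activation = np.dot(w, x) + b                # processing stage: weighted sum
output = 1.0 / (1.0 + np.exp(-activation))   # output channel: a sigmoid score
print(output)
```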
“Despite being historically inspired by our human brain, AI has always been designed to function differently from us”.
Catherine Cabrera – Chemist and microbiologist
Can AI outthink the human brain?
Nowadays, both fear of and curiosity about new AI technologies lead to a common misconception about their strengths and limitations. For example, watching an AI generate new images from plain text, summarize huge amounts of information, and create videos, all in a matter of seconds, is frightening, basically because we cannot do those things ourselves. But did you know that you, or even a child, could probably beat some of the most amazing AIs at tic-tac-toe? And although artificial neural networks were inspired by how our brain works, which makes them somewhat similar in concept, important differences in structure and processing capabilities lead to significant divergences in how they function, as shown in the following table.

Table 1. Comparison between natural neural networks and artificial neural networks. Created by Catherine Cabrera [4, 7].
From the information presented above, we can see that there are some huge differences between NNNs and ANNs, including size, flexibility, and power efficiency, among others. However, comparing preliminary numbers or isolated aspects isn’t enough to understand the whole picture, as learning is a process that goes beyond the sum of these aspects.
For example, have you ever considered how different machines are from people in terms of how they perceive and interpret the world around them? It’s similar to the difference between knowledge and wisdom. Despite having a vast quantity of knowledge in its memory, an AI lacks the wisdom to evaluate tricky circumstances that humans can handle easily. For instance, it may be simple for us to identify a fuzzy image of an animal, but it may be difficult for an AI system to do so because of the rigid computer-vision training parameters used to create it [8]. Hence, the next time you’re in awe of AIs’ incredible talents, remember that there are some things only humans are capable of doing.
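The fuzzy-image problem is closely related to the adversarial examples studied in [8]. Here is a deliberately simplified sketch of that idea (Python/NumPy, a toy linear classifier with invented numbers, not the paper’s actual experiments): nudging each input feature slightly in the direction that most hurts the model can flip its prediction, even though the change would be barely noticeable to us.

```python
import numpy as np

# Toy illustration of the adversarial-example idea [8]: a small, gradient-sign
# perturbation of the input flips a linear classifier's decision.
w = np.array([1.0, -2.0, 3.0, -1.0, 0.5])   # weights of an already "trained" model
b = 0.1

x = np.array([0.2, -0.1, 0.1, 0.0, 0.2])    # an input the model classifies as 1
score = w @ x + b
print("original score:", score, "-> class", int(score > 0))

# For a linear model, the direction that most changes the score is sign(w),
# so we step each feature by a small epsilon toward the other class.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

score_adv = w @ x_adv + b
print("perturbed score:", score_adv, "-> class", int(score_adv > 0))
```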

Fig. 4. Image classification performed by an AI. From https://arxiv.org/abs/1412.6572
Exploring this idea, AIs are recognized for performing specific tasks with ease, but did you know that our ability to multitask is one of the things that makes our brains so incredible? We can multitask because our neurons work asynchronously and in parallel. Regrettably, AI can’t fully match our capabilities in this regard because its artificial neuron layers are frequently fully connected. Artificial neurons need weights of zero to represent the absence of a connection, in contrast to biological neurons, which form small-world networks with relatively few connections between them [2,4,7]. It just goes to show that while artificial intelligence has made great strides, nature still performs some tasks better than even the most sophisticated machinery! What’s more, the strict parameters used to build AIs and their dependence on data make them extremely vulnerable to any software or hardware malfunction. By comparison, even if a part of our brain is damaged, the rest can often keep functioning, keeping us alive!
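To illustrate that connectivity contrast in code (Python/NumPy, with arbitrary layer sizes and a made-up 30% connection rate): a standard dense layer wires every input to every output, and the only way to imitate biological-style sparsity is to force most of those weights to zero with a mask.

```python
import numpy as np

# Dense (fully connected) layer vs. a masked, biologically inspired sparse one:
# "missing" connections can only be represented by weights that are exactly zero.
rng = np.random.default_rng(3)
n_in, n_out = 6, 4

dense_W = rng.normal(size=(n_out, n_in))      # fully connected: 24 links

mask = rng.random((n_out, n_in)) < 0.3        # keep roughly 30% of the links
sparse_W = np.where(mask, dense_W, 0.0)       # zero out the rest

x = rng.normal(size=n_in)
print("dense output: ", dense_W @ x)
print("sparse output:", sparse_W @ x)
print("active connections:", int(mask.sum()), "of", n_in * n_out)
```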
“While artificial intelligence has made great strides, nature still performs some tasks better than even the most sophisticated machinery”.
Catherine Cabrera – Chemist and microbiologist
Conclusiones
Despite their differences, natural thinking and AI can complement each other in many ways. For example, AI systems can be used to help human decision-making by providing insights and predictions based on large amounts of data. In turn, human cognition can be used to validate and refine the output of AI systems, as well as to provide context and interpret results. So next time you learn something new, take a moment to marvel at the incredible power of your brain and natural learning, and don’t underestimate yourself! You are not a machine, and you don’t need to be.
REFERENCES
1. Criss, E. (2008). The Natural Learning Process. Music Educators Journal, 95(2), 42–46. https://doi.org/10.1177/0027432108325071
2. Brain Basics: The Life and Death of a Neuron. (2022). National Institute of Neurological Disorders and Stroke. https://www.ninds.nih.gov/health-information/public-education/brain-basics/brain-basics-life-and-death-neuron
3. Gardner, H. (1987). The Mind’s New Science: A History of the Cognitive Revolution. Basic Books.
4. Nagyfi, R. (2018). The differences between Artificial and Biological Neural Networks. Medium. https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7
5. 3 types of neural networks that AI uses. (2019). Allerin. https://www.allerin.com/blog/3-types-of-neural-networks-that-ai-uses
6. Sejnowski, T. J. (2018). The Deep Learning Revolution. MIT Press. p. 47. ISBN 978-0-262-03803-4.
7. Thakur, A. (2021). Fundamentals of Neural Networks. International Journal for Research in Applied Science and Engineering Technology, 9(VIII), 407–426. https://doi.org/10.22214/ijraset.2021.37362
8. Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. Published as a conference paper at ICLR 2015. https://doi.org/10.48550/arXiv.1412.6572

Catherine Cabrera – Data Scientist
