Artificial Intelligence

In this category we show you all of our Artificial Intelligence related blog posts.

Will AI replace software developers?

Author: Juan David Yepes – Software Engineer 

INTRODUCTION

As Artificial Intelligence continues to grow at an exponential pace, the question on many people’s minds is whether it will replace their jobs. Everyone, from CEOs to junior professionals, has the right to be concerned about the potential impact of AI, as we continually witness new AI technologies performing tasks we once thought possible only in futuristic films.

As a software developer, it’s essential to recognise that AI technologies can already generate quality code. Therefore, in this article, we will examine the current state of AI in software development and identify how we can use this to our advantage.


The current state of AI

The current state of AI in software development is quite promising. AI technologies such as ChatGPT, GitHub Copilot, OpenAI’s Codex, and AlphaCode by DeepMind have been developed to generate code automatically, analyse code for bugs and vulnerabilities, and even optimise code for better performance, and some companies are already using them to improve their software development process. The latest of these is GPT-4, which represents a significant step up in performance.

Since the launch of GPT-4, people have used it in exciting ways, such as developing games with just a few prompts, generating HTML from an image, creating an app with a repo, finding security flaws in code, and much more. At the same time, its results on the benchmark tests used to evaluate AI systems show that there is still a lot of room for improvement.

Another tool that is very close to developers is GitHub Copilot, which launched in October 2021. It autocompletes code as you type and offers suggestions when you write out an idea. The conclusion of developers who have used it, for example Mark Seemann, a Danish developer, is that it is still at an early stage.

For developers, saving time on typing is not the point; most of the time is spent on research, for example reading API documentation, and that is where it can make a real difference. However, it can also introduce errors that are difficult to find and end up wasting more time than they save (Seemann, 2022).

New skills that developers will need

To remain relevant in the field, developers need to acquire new skills that complement the work of AI. For example, they need to focus on problem-solving, critical thinking, and creativity. These skills are essential in ensuring that the AI-generated code meets the project’s specific needs.

Additionally, developers need to be able to understand how AI technologies work and are being used in the development process. This includes identifying the limitations of the technology, as well as how to optimise its output. A good understanding of AI will also enable developers to collaborate more effectively with AI systems rather than seeing them as competitors.

Another crucial skill for developers is the ability to manage data effectively. With AI, large volumes of data are processed at incredible speeds. Developers need to know how to work with this data, including how to extract, process, and store it. Additionally, they need to understand how to use AI-generated data to improve the quality of their code.

Finally, developers need to be able to communicate effectively. As AI technologies become more prevalent in software development, it’s important for developers to be able to explain the limitations and capabilities of AI to non-technical stakeholders. This includes being able to explain how AI technologies are being used in the development process and how they will impact the product.

How can we use it to our advantage?

Despite the concerns about AI replacing software developers, it’s essential to recognise that AI technologies can be used to the advantage of both the industry and the developers. For industry, AI can be used to automate routine tasks, thereby increasing productivity and efficiency. It can also be used to identify and mitigate bugs and vulnerabilities in code, leading to more secure software. Additionally, AI-generated code can be used to quickly prototype and test new ideas, reducing the time to market for new products.

For developers, AI can be used to augment their skills and capabilities. By automating routine tasks, developers can focus on more creative and complex tasks, such as problem-solving and critical thinking. Additionally, AI can be used to assist with research and development, allowing developers to quickly prototype and test new ideas.

Looking at the current technologies and, more specifically, at day-to-day work: with ChatGPT, you can ask for anything related to code. Still, there is a chance the answer is wrong, so use it carefully, as a second pair of eyes that can give you a different perspective rather than the absolute truth.

A simple way of using it, suggested by senior software engineer Chris McClure, is for code review: it can help you analyse a piece of code and perhaps provide some suggestions, and it can also generate small pieces of code that you can understand and modify at will.

code review chatgpt image

Code review ChatGPT
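
If you prefer to script this kind of review rather than paste code into the chat window, the same idea can be sketched with the 2023-era `openai` Python package. This is only a minimal sketch under our own assumptions (an API key, an example model name, prompt and snippet), not a workflow taken from the article; treat the output the same way, as suggestions to verify rather than the absolute truth.

```python
# Sketch: asking a chat model to review a snippet via openai.ChatCompletion
# (the 2023-era interface). Model name, prompt and snippet are only examples.
import openai

openai.api_key = "YOUR_API_KEY"  # never paste real keys or secrets into prompts or shared code

snippet = '''
def average(numbers):
    return sum(numbers) / len(numbers)  # fails on an empty list
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Review this Python function and suggest improvements:\n{snippet}"},
    ],
)

# The suggestions come back as plain text; treat them as a second opinion, not the absolute truth.
print(response["choices"][0]["message"]["content"])
```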

Use it as a tool to troubleshoot and solve errors that you are not able to track down. Where you would usually turn to Stack Overflow, it can be a suitable replacement in cases where you cannot find an answer there, or where the explanation, which is key for learning, is missing. You can also ask it to translate code from one language to another; you might be an expert in a specific language but, in some cases, unable to express the same idea in a different one.

image Prompt to translate from C#

Prompt to translate from C#

Result of translation in Python

Some recommendations when using code with these applications: avoid sharing keys, secrets, personal information, or anything else you would not want leaked online. Never fully trust the output; always double-check every piece of code and text.

CONCLUSION

In conclusion, we are still far from being replaced by AI, given the complexity of the projects that developers work on; however, it is essential to acknowledge the changes that will happen in the industry and how developers will have to adapt to stay competitive and remain productive in a team. From now on, the best thing we can do is keep up with the latest technology and experiment with AI to discover its potential and understand how it can make our job more efficient.

REFERENCES

  1. CloudGuyChris. (2023, 01). 7 Amazing Uses Of Chat GPT For Software Developers. Retrieved from https://www.youtube.com/watch?v=WmY5NFi-nFw&ab_channel=CloudGuyChris
  2. Gazar, E. (2023, 01 07). Medium. Retrieved from https://ehsangazar.com/my-experience-with-github-copilot-2d6903870450
  3. Seemann, M. (2022, 12 05). Ploeh blog. Retrieved from https://blog.ploeh.dk/2022/12/05/github-copilot-preliminary-experience-report/

Juan David Yepes – Software Engineer

AI of beauty: Dark and bright sides

Author: Angie Duran – Data Scientist

INTRODUCTION

Artificial intelligence (AI) has profoundly impacted our daily lives, revolutionising how we work, communicate, and even perceive ourselves. In recent years, AI has also made significant inroads into the beauty industry, giving rise to innovative products and services that promise to enhance our physical appearance. On the one hand, these developments offer exciting new opportunities for self-expression, creativity, and personalisation. On the other hand, they raise important ethical and social questions about the nature of beauty, the role of technology in shaping our self-image, and the potential risks and harms associated with AI-assisted beauty. In AI of beauty: Dark and bright sides, we will look at both the potential benefits and the possible drawbacks of this emerging trend.

AI IN SOCIAL MEDIA FILTERS

As technology advances, the use of social media keeps growing, and with it the need to show followers a perfect image. People want to show their best photos, their most pleasant moments, and their best mood.

Artificial intelligence has played an essential role in social media, creating ‘beauty filters’ based on augmented reality technology. These filters modify a person’s appearance to create smoother and brighter skin, bigger eyes, fuller lips, a snub nose, and a slimmer face, among others; all these changes take away each person’s unique features and bring all faces to a standard of artificial beauty.

Filters not only modify face attributes but also have been used over people’s faces to promote brands, resemble animals (like the famous dog filter) or personify fairy tale characters.

These filters have been created through image analysis. Reference points are detected on the face of the person using the filter, such as four points around the eyes and mouth, and a 3D mesh is created on top of them; the various effects are applied to this mesh, which moves together with the reference points. This is why, for example, if we cover part of our face with a hand, the filter loses its reference points and does not work correctly.
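
To make that first step concrete, here is a minimal sketch of facial reference-point detection using Google’s MediaPipe Face Mesh. This is our own illustrative choice of library, not the tracker Instagram or TikTok actually use, and the file name is just an example.

```python
# Sketch: detecting facial reference points with MediaPipe Face Mesh
# (an illustrative choice of library, not the tracker Instagram or TikTok actually use).
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")                  # any portrait photo; file name is an example
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)    # MediaPipe expects RGB input

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark  # 468 reference points on the face
    height, width, _ = image.shape
    for point in landmarks[:10]:                           # print a few, converted to pixel coordinates
        print(int(point.x * width), int(point.y * height))
else:
    # Cover the face and the reference points are lost, just like a filter that stops working.
    print("No face found.")
```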

three women using instagram's beauty filters

Taken from: New face digital

One of the most controversial filters lately has been TikTok’s «Bold Glamour». Unlike other filters, this one sticks to the user’s face in such a way that it does not disappear when the user touches their face or puts their hand in front of it.

This was achieved by training an AI to apply facial transformations automatically: it analyses each frame, or image, and applies the effect to each one separately. This is how it understands when the face has been partially covered and applies the effect only to the visible part.

Specifically, it is the result of an AI technique called a «Generative Adversarial Network» (GAN): two neural networks that compete with each other to generate better results. In the case of Bold Glamour, it is a competition between the camera view and the style TikTok wants to transform you into.
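
To make the idea of two competing networks more concrete, here is a toy GAN training loop in PyTorch on one-dimensional data. It is only a sketch of the concept; Bold Glamour’s real model, data and architecture are not public.

```python
# Sketch: a toy GAN. The generator learns to produce samples the discriminator
# cannot tell apart from "real" data (here, numbers drawn from a Gaussian around 2.0).
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_data(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# If the competition worked, generated samples should cluster around 2.0 like the real data.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```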

woman with old beauty filter and woman with bold glamour from tiktok

Woman using old filter vs. woman using Bold glamour from Tiktok

Can you spot the difference and see the “makeup” over the hand of the first woman?

IS THIS THE BEAUTY STANDARD WE WANT TO REACH?

woman using blod glamour filter from tiktok

Woman using Bold glamour from Tiktok

The canons of female beauty have changed over time. Looking back to classical Greece, the phrase «a beautiful body promises a beautiful soul» (Socrates) stands out; at this time, it was believed that perfect proportions were the key to a beautiful woman.

In the Renaissance era, the canon of beauty was referenced by Sandro Botticelli’s painting «The Birth of Venus»; the characteristics were white skin, rosy cheeks, large, clear eyes, rounded hips and stomachs, and red lips.

Between 1950 and 1960, the sex symbol Marilyn Monroe appeared, women bleached their hair, and the iconic hourglass figure was also extremely popular; most women wore girdles and corsets to achieve this shape.

Nowadays, this sentence by Luciana Peker, journalist, and writer, stands out: «…That generates an enormous exclusion, and exclusion makes you anguished and distant from your body, from your identity…», referring to the digital era, where everyone wants to show that they have no imperfections.

picture of greek statue venus birth and marilyn monroe

Beauty standards throughout time

From my point of view, I believe that we should not live subjected to definitions of what is beautiful, definitions that change with time and fashion. Instead, I think beauty lies in accepting that we are all different and can never conform to unrealistic beauty standards. 

 As mentioned above, it is natural to want to show a perfect image to followers on social media. But a study conducted in 2021 by researchers at the City University of London revealed that «beauty filters» distort body image and increase the risk of low self-esteem. Among the results of the study on the harmful effects of filters on self-esteem and mental health were the following:  

  • 94 percent of participants said they felt pressured to look «pretty», and more than half said they felt intense pressure.  
  • 70 percent felt pressured to show a «perfect life».
  • 86 percent said that what they showed on social media did not reflect their real life.  

«Young women told us they feel under considerable pressure to present themselves as fun, happy and sociable—as well as effortlessly beautiful—reflecting the ways that appearance pressures have extended into presenting ‘a perfect self’» – Rosalind Gill, Professor of Social and Cultural Analysis, City University of London.

SO CAN WE USE AI IN THE BEAUTY INDUSTRY?

We can always use this kind of technology to our advantage. For example, with the same augmented reality technology, virtual make-up testers have been implemented: reference points on the user’s face are identified and make-up products are applied virtually, so foundations, concealers, powders, blushers, bronzers, and setting sprays can be tested, and matte, satin or radiant finishes can be compared with great accuracy.

For example, L’Oréal Paris has an online hair colour simulator, which allows you to test the hair colour products you want from the camera of your mobile or pc. This brand has also created a personalized foundation machine called Le Teint Particulier, which uses AI to find the «exact» foundation colour for your skin.

The information goes to a computer that uses an algorithm to choose from 20,000 different foundation colours.

Finally, the results from the computer are sent to a machine that mixes the foundation for the customer in the same shop. This is particularly useful, as you don’t have to spend hours trying on make-up products or dyeing your hair to end up with a result you don’t like.

Artificial intelligence has made our daily lives easier, from facilitating the purchase of cosmetics to posting alternative content on social media.

woman face using the loreal virtual makeup tester

L’Oréal Paris virtual makeup tester

CONCLUSION

In conclusion, the emergence of artificial intelligence (AI) in the beauty industry has revolutionised how we perceive ourselves, offering new opportunities for self-expression, creativity, and personalisation. However, it also raises important ethical and social questions about the nature of beauty, the role of technology in shaping our self-image, and the potential risks and harms associated with AI-assisted beauty. Beauty filters created through image analysis have been used extensively in social media, distorting body image and increasing the risk of low self-esteem. The canons of beauty have changed over time, and it is essential to understand that beauty lies in accepting that we are all different and can never conform to unrealistic beauty standards. Despite this, AI can be used to our advantage in the beauty industry. For example, virtual makeup testers using augmented reality technology can be implemented to test various makeup products with great accuracy. We should embrace AI technology in a way that benefits us, rather than conforming to harmful beauty standards.

REFERENCES

  1. City University London. (2021, March). Changing the perfect picture.
  2. Int J Environ Res Public Health. (2020). 17(2), 672.
  3. Fardouly, J., Diedrichs, P. C., Vartanian, L. R., & Halliwell, E. (2020). Social comparisons on social media: The impact of filters on body image and mood in young women. Body Image, 33, 175-182.

Angie Duran – Data Scientist

Breaking communication barriers with AI (and Computer Vision)

Author: Juan Sebastian Casas – Data Scientist

MOTIVATION

Deaf and hard-of-hearing people face significant barriers to communication. Breaking communication barriers with AI (and Computer Vision) can be a game-changer for this community. These barriers can limit their ability to participate fully in society, as they are often excluded from verbal communication and rely heavily on sign language and other forms of communication.
In addition, the lack of accessibility and awareness about the needs of people with this disability can further hinder their integration and full participation in daily life in educational, work, and social environments.

INTRODUCTION

The creation of a sign language-to-audio translator could have a significant impact on the lives of people with hearing disabilities. The development of this technology could enable better communication, leading to greater independence and connection to the world around them. For example, they might more easily participate in meetings, classes, and social events where oral language is predominantly used.

One of the technological solutions to this problem is artificial intelligence, which, thanks to its significant advances in computer vision and deep learning, can recognise gestures and patterns without the need for external wearables. This means that the AI can analyse large amounts of sign language data and autonomously learn to identify and classify the most common gestures and patterns. By combining this AI model with speech processing technologies, it is possible to create a real-time sign language-to-audio translation system.

STATE OF THE ART

At present, there are different systems and technologies dedicated to the translation of sign language to audio. Some of these use signal processing, pattern recognition, and machine learning techniques to interpret hand and body movements and gestures in sign language and transform them into sounds and spoken words. Among the most common methods used in these systems are the use of cameras and motion sensors to capture the user’s gestures and the use of image processing and pattern recognition algorithms to interpret them. Even though these systems still have certain limitations, such as the difficulty in recognising more complex gestures or the need for precise calibration, their ability to facilitate communication between deaf and hearing people is increasing.

One of the clearest examples of the use of machine learning to make this translation is the company SignAll, which uses a combination of cameras and motion sensors to interpret sign language, capturing the movement of the body, the shape of the hands, and facial movement, to translate it into text and voice in real-time. [1]
Another example can be found with the MotionSavvy UNI product, which uses a Leap Motion optical controller to capture the movement of the hands. This device was combined with a tablet to perform sign language to audio translation.[2]

Finally, SignAloud is a pair of gloves that uses position and motion sensors to detect the movements of the user’s hands and then translates them into spoken words and phrases. The system uses advanced sequential statistical regression technology to analyse the data and thus perform the translation. [3]

EXPERIMENTATION

For a first iteration of a computer vision sign language-to-audio translator, there are several ways to go. The first is to train a convolutional neural network from scratch. This includes getting all the data needed for training, iterating over various architectures, hyperparameter tuning, and more. Another path is transfer learning: starting from a neural network that has already been pre-trained on a similar task, such as image classification. This option allows you to take advantage of the network’s prior knowledge and tailor it to the specific task of sign language-to-audio translation.
The technique used for this first iteration was transfer learning; it has some advantages compared to training from scratch [4], and a minimal sketch of the idea follows the list below.

Some of these are:  

  1. Less data needed: pre-trained models have already learned valuable features on large and diverse datasets. Hence, they need less data to train and generally produce more accurate results than models trained from scratch.
  2. Faster training speed: Training from scratch requires more time and computational resources than transfer learning since all model parameters are initialised randomly.  
  3. Improves generalisation: pre-trained models have learned to extract useful features in different types of images and can, therefore, better generalise across various computer vision tasks. 
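
To make this concrete, below is a minimal transfer-learning sketch in Keras: an ImageNet-pre-trained MobileNetV2 backbone is frozen and only a small new head is trained for the project’s five phrases (introduced in the Data section below). It is an illustration of the technique under assumed folder names, not the exact object-detection pipeline used in this project, which is described in the Training section.

```python
# Sketch: transfer learning with a frozen MobileNetV2 backbone and a new 5-class head.
# Folder layout is an assumption: data/<phrase>/<image>.jpg, one folder per phrase.
import tensorflow as tf

NUM_PHRASES = 5  # 'I love you', 'Yes', 'No', 'Thank you', 'Hello'

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the features learned on ImageNet; train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_PHRASES, activation="softmax"),  # one output per phrase
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=4
)
model.fit(train_ds, epochs=10)
```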

A. DATA

A dataset of sign language gestures is required to develop the sign language-to-audio translator. However, it is not necessary to use a vast number of images for training, because we are not trying to relearn the features that the pre-trained model already knows. We want to focus on capturing the nuances and intricacies of the sign language gestures specific to this task. This means that a smaller, curated dataset of sign language gestures, with precise labelling and annotation, can be sufficient for the model’s training.
For the first iteration of the training, a dataset of 15 images per phrase was created for five different phrases: ‘I love you’, ‘Yes’, ‘No’, ‘Thank you’, and ‘Hello’, resulting in just 75 images for the whole project.

As seen in Image 1, the sign language gesture used to train the model is the «Hello» sign. This gesture was captured using a phone camera and passed to a computer to continue with the labelling part.

man saying hello in sign language

Image 1. Sign language «Hello.» 

B. LABELS

Continuing the training, it is necessary to assign labels to each image in the dataset. In this case, the label corresponds to the sign language gesture being performed in each frame. To generate these labels, a manual labelling process was performed using LabelImg, a popular image annotation tool. For each image, a label was assigned to represent the sign gesture being performed, and a bounding box was drawn to identify the hands in the image (object detection). This process ensures that the model can accurately identify and classify the sign gestures in new images it encounters during use.

An example of the labelling process using LabelImg can be seen in Image 2. In this example, the image shows a person performing the sign gesture for «I Love You.» The LabelImg tool drew a bounding box around the hands in the image, representing the sign gesture. The label «I Love You» was then assigned to the image. This process was repeated for all images in the dataset, ensuring each image was accurately labelled and annotated for the sign gesture. The labelled dataset was then used to train the sign language-to-audio translator to accurately classify and translate sign gestures in real time. [5]

man saying i love you with sign language

Image 2. Labelled image «I Love You»
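
For reference, LabelImg saves each annotation as a Pascal VOC XML file by default, next to the image. The short sketch below, with a hypothetical file name, shows how such an annotation can be read back in Python to recover the phrase label and the hand bounding box.

```python
# Sketch: reading a LabelImg (Pascal VOC) annotation to recover the phrase label
# and the hand bounding box. The file name is hypothetical.
import xml.etree.ElementTree as ET

root = ET.parse("i_love_you_01.xml").getroot()

for obj in root.findall("object"):
    label = obj.find("name").text                     # e.g. "I Love You"
    box = obj.find("bndbox")
    xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
    xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
    print(f"{label}: ({xmin}, {ymin}) -> ({xmax}, {ymax})")
```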

C. TRAINING

The dataset was used to fine-tune a pre-trained object detection model using transfer learning. The chosen pre-trained model was the SSD MobileNet V2 FPNLite 640×640 architecture, which was trained on the COCO17 dataset. The last layer of the model was replaced with a fully connected layer with five output neurons corresponding to the sign language gestures. The new layer was fine-tuned using the sign language dataset for 10,000 training steps with a batch size of 4. [6]

This is an object detection model using the Single Shot MultiBox Detector (SSD) framework. [7] It is designed to identify and locate objects of five different classes in an image, which are the sign language gestures for «I love you», «Yes», «No», «Thank you», and «Hello». The input image is resized to 640×640 pixels before processing. The optimiser used is a momentum optimiser with a cosine decay learning rate. No additional data augmentation was configured, as the pre-trained model’s pipeline already included its own augmentation techniques, such as random horizontal flips and random crops.

D. RESULTS

The fine-tuned SSD MobileNet V2 FPNLite 640×640 model achieved a DetectionBoxes_Precision/mAP score of 0.81, indicating good performance in identifying and localising sign language gestures for «I love you», «Yes», «No», «Thank you», and «Hello». In addition, the total loss at the end of the 10,000 training steps was 0.16 (in graph 1), showing that the model could effectively minimise its loss function during training.

Overall, these results demonstrate the effectiveness of transfer learning and the suitability of the SSD MobileNet V2 FPNLite 640×640 architecture for object detection tasks. The results also suggest that the model has the potential to be applied in real-world scenarios where accurate and efficient sign language recognition is needed.

total loss graph

Graph 1. Total_loss

AUDIO

With the object detection model trained, we can now focus on implementing the audio translation component of the system. One approach would be to use the output of the object detection model to determine which sign language gestures were made and use a text-to-speech library to generate spoken output in the desired language. This process could be implemented using various Python libraries such as TensorFlow and PyTorch for the object detection model and libraries like gTTS for text-to-speech.
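
A minimal sketch of that last step could look like the following, assuming the detector has already returned a class index for the recognised gesture; the label map and file name are illustrative, not taken from the project’s code.

```python
# Sketch: turning the detector's predicted class into spoken audio with gTTS.
# The label map and file name are illustrative.
from gtts import gTTS

LABELS = {1: "I love you", 2: "Yes", 3: "No", 4: "Thank you", 5: "Hello"}

def speak(class_id: int, out_path: str = "phrase.mp3") -> None:
    phrase = LABELS[class_id]
    gTTS(text=phrase, lang="en").save(out_path)  # writes an mp3 that can be played back
    print(f"Saved spoken '{phrase}' to {out_path}")

speak(5)  # e.g. the detector recognised the «Hello» gesture
```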

WHAT’S NEXT?

To further improve the accuracy of the sign language recognition project, a possible next step would be first to train a model specifically to recognise hands. This specialised model could then be fine-tuned and trained with sign language gestures, potentially resulting in a more precise and reliable system. By breaking down the task into smaller, more focused components, the model could better differentiate between the different elements of sign language, such as hand shapes, movements, and facial expressions, leading to improved recognition and interpretation.

Another essential aspect to consider for the future of this project is the expansion of the training data. While the current dataset of static images provides a good foundation for the model, there are many sign language phrases that use movement gestures. To improve the accuracy of the model, it would be helpful to incorporate videos of sign language phrases into the training data. Additionally, the model would benefit from a more extensive vocabulary of sign language phrases, allowing it to understand better and interpret more complex messages. By expanding the training data and vocabulary, we can continue to improve the accuracy and usability of the model for individuals who use sign language.

CONCLUSION

In conclusion, the development of a sign language-to-audio translator using artificial intelligence technology could greatly benefit individuals with hearing disabilities by enabling better communication and facilitating their integration into society. While there are currently several technologies available for translating sign language to audio, there is still room for improvement, particularly in terms of recognising more complex gestures and enhancing the overall accuracy of the translation. By using techniques such as transfer learning and smaller, curated datasets with precise labelling, it is possible to train AI models to better recognise and interpret sign language gestures in real time. Further research and development in this field could significantly improve the accessibility and inclusion of individuals with hearing disabilities.

REFERENCES

  1. SignAll. (n.d.). SignAll Lab. https://www.signall.us/lab
  2. Strauss, K. (2014, October 27). MotionSavvy UNI: 1st sign language to voice system. Forbes. https://www.forbes.com/sites/karstenstrauss/2014/10/27/tech-tackles-sign-language-motionsavvy/?sh=a3bc60478627 
  3. SignAloud: guantes que traducen el lenguaje de señas a voz y texto [SignAloud: gloves that translate sign language into voice and text]. (n.d.). Medium. https://medium.com/biofile-programa-de-historias-cl%C3%ADnicas-en-la-nube/signaloud-guantes-que-traducen-el-lenguaje-de-se%C3%B1as-a-voz-y-texto-2352d15deff0
  4. S. (2014, October 27). Benefits of Transfer Learning. Kaggle. https://www.kaggle.com/general/291011 
  5. Sell, L. (2022, September 22). LabelImg – README. Github. https://github.com/heartexlabs/labelImg. 
  6. Ssd_mobilenet_v2/fpnlite_640x640. Tfhub. https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1 
  7. Hui, J. (2018, March 13). SSD object detection: Single Shot MultiBox Detector for real-time processing. Medium. https://jonathan-hui.medium.com/ssd-object-detection-single-shot-multibox-detector-for-real-time-processing-9bd8deac0e06. 

Juan Sebastian Casas – Data Scientist

The Amazon in danger

Author: Daniela Ruiz – Data Engineer

Mining concessions, more hydroelectric dams, road construction, the expansion of agriculture, cattle ranching, deforestation and changes to the legislation on protected areas are just some of the problems indicating that the Amazon is in danger. At the same time, artificial intelligence (AI) has been one of the most disruptive technologies of recent years and has been implemented in multiple areas, from the automotive industry to healthcare. In this sense, AI is not far behind when it comes to caring for and preserving the Amazon.

Why should we care about it? 

As you may know, the Amazon is called the «green lung of the Earth». It is the world’s largest rainforest, covering 6.7 million km². According to the World Wildlife Fund (WWF), nearly 60% of the rainforest is in Brazil, while the rest is shared among eight other countries (Bolivia, Colombia, Ecuador, Guyana, Peru, Suriname, Venezuela and French Guiana).

amazon forest cover per country

The Amazon is home to an incredible array of biodiversity. Just to give an example, you can find more types of ants on a single tree in the Amazon than in some entire countries! Additionally, the Amazon is not only a vital source of water, food, medicines, and wood, but it also plays a critical role in stabilising the climate.

The trees in the Amazon release 20 billion tons of water into the atmosphere daily, playing a critical role in global and regional carbon and water cycles. The Amazon rainforest is also home to many indigenous communities that depend on it.

Despite its importance, the Amazon is in danger of reaching an irreversible point. And why is this so critical? We cannot tackle the climate crisis without the Amazon’s vital life-sustaining role. The Amazon rainforest is endangered due to a combination of factors, including: 

  • Deforestation: The Amazon rainforest is being rapidly cleared for commercial purposes such as agriculture, mining, and logging, which results in the destruction of crucial habitats and the release of large amounts of carbon into the atmosphere. Another major problem is the rising global demand for meat, leading Brazil to become the world’s biggest beef exporter. So yes, as you are imagining, the Amazon Rainforest is also cleared for cattle production.
cattle heads diagram in south america
  • Climate change: Deforestation in the Amazon contributes to climate change, as the forest plays a crucial role in absorbing carbon dioxide from the atmosphere. Climate change is also causing droughts and fires, exacerbating deforestation. 
deforestation picture in the amazon
  • Illegal logging and mining: Illegal logging and mining are significant contributors to deforestation in the Amazon, and they often operate in protected areas where they cause significant environmental damage. 
  • Wildlife trafficking: The Amazon is home to a wide range of unique and endangered species, and wildlife trafficking is a significant problem in the region.
  • Land conflicts: There are ongoing conflicts between indigenous communities and companies trying to exploit the resources of the Amazon, leading to displacement and violence. 
  • Agricultural expansion: Large-scale agriculture is expanding in the Amazon, leading to the destruction of forests and displacement of indigenous communities. 

As you can see, all these factors are interconnected and exacerbate one another, leading to a vicious cycle of destruction and endangerment of the Amazon rainforest. Still, one of the main problems is deforestation (either caused by human activity or climate change). The loss of the Amazon rainforest would have significant impacts on the global climate, biodiversity, and human communities that depend on the forest for their livelihoods and way of life. 

Is there any solution to these problems? 

Great news! Yes, there are solutions. Nowadays, many organisations are helping the Amazon, such as World Wildlife Fund or One Tree Planted. Additionally, technology, specifically Artificial Intelligence, is being used to support the Amazon Rainforest in some ways that will be explained below. 

But the truth is that AI is not a magic wand. It should be seen as a tool that can help us solve problems and improve our lives, not as a cure-all or a magical solution to every challenge we face. To address the situation in the Amazon, a concerted effort from governments, local communities, and international organisations is necessary. In addition, there must be strict regulatory measures to preserve protected areas in the Amazon.

Having said that, here are some AI-powered solutions for the Amazon in danger that are currently being implemented:

  • Mapping and monitoring: One of the essential things that can be done to prevent deforestation is to keep track of where it is happening. AI could be used to analyse satellite imagery and detect changes in forest cover, allowing conservation organisations to focus their efforts on at-risk areas (a toy sketch of this kind of change detection follows this list).

For example, to address this problem, the international NGO Rainforest Foundation has developed an AI application called «Forest Watcher» that uses satellite imagery to monitor deforestation in real-time. The application uses a machine learning algorithm to analyse satellite images and detect changes in vegetation cover. In addition, the application also uses field data collected by local communities to improve the accuracy of deforestation detection.

  • Predictive modelling: By analysing historical data on deforestation rates, climate patterns, and other environmental factors, AI could be used to create models that predict where deforestation is most likely to occur, which species are being pushed out of their habitat, and how agriculture will impact the soil in the future. This information could help guide policy decisions and conservation efforts.
  • Enforcement: Land conflicts, wildlife trafficking, illegal logging, and deforestation often go undetected because they happen in remote areas. AI-powered drones with computer vision can patrol these areas and identify criminal activity, allowing law enforcement agencies to take action to protect the fauna, flora and native communities in the zone.
  • Education and outreach: AI could be used to create interactive educational materials that teach people about the importance of the Amazon rainforest and the potential impact its loss could have on the environment. This could raise awareness and encourage more people to take action to protect it.
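
As a toy illustration of the mapping-and-monitoring idea above, the sketch below compares a vegetation index (NDVI) between two dates and flags pixels where green cover dropped sharply. The arrays are made up for the example; real systems such as Forest Watcher work on actual satellite rasters and far more sophisticated models.

```python
# Sketch: flagging possible forest loss by comparing NDVI between two dates.
# The arrays are synthetic; in practice the red and near-infrared bands come from satellite imagery.
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: close to 1 over dense forest, near 0 over bare soil."""
    return (nir - red) / (nir + red + 1e-6)

rng = np.random.default_rng(0)
red_before = rng.uniform(0.05, 0.10, (100, 100))
nir_before = rng.uniform(0.40, 0.60, (100, 100))
red_after, nir_after = red_before.copy(), nir_before.copy()
red_after[40:60, 40:60], nir_after[40:60, 40:60] = 0.3, 0.2  # simulate a cleared patch

change = ndvi(red_before, nir_before) - ndvi(red_after, nir_after)
alerts = change > 0.4  # a large NDVI drop is a possible deforestation event
print(f"Pixels flagged as possible forest loss: {alerts.sum()} of {alerts.size}")
```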

Conclusions

In conclusion, the Amazon rainforest is in constant danger due to a combination of factors such as deforestation, climate change, illegal logging and mining, wildlife trafficking, land conflicts, and agricultural expansion. The loss of the Amazon rainforest would have significant impacts on the global climate, biodiversity, and human communities that depend on the forest. However, there are solutions for deforestation in the Amazon powered by AI, such as mapping and monitoring, predictive modelling, enforcement, and education and outreach. AI can be a powerful tool to aid in conservation efforts. 

Still, it is essential to remember that it is not a magical solution. A concerted effort from governments, local communities, and international organisations is necessary to address the problem in the Amazon. By working together and using technology responsibly, we can help preserve the Amazon rainforest for generations.



Daniela Ruiz – Data Engineer

Artificial neural networks vs human brain

Author: Catherine Cabrera – Microbiologist, chemist and Data Scientist

Have you ever stopped to wonder how we learn? As a scientist, I find this absolutely fascinating. Every single thought and action we have ever made is controlled by one of the most incredible organs in our bodies – the brain. And what’s even more mind-blowing is that all of humankind’s greatest achievements throughout history are based on the simple, yet complex, process of learning. Observing recent history, one of these astonishing breakthroughs has been artificial intelligence (AI).

We’ve been able to create some seriously impressive AI entities that can do everything from recreating images and videos to playing games and even having full-on conversations with us! When you consider what we can do with our brains, it is astounding. The real question is whether or not these machines are thinking. If so, exactly how do they accomplish it? Artificial neural networks vs. Human Brain will investigate the interesting field of artificial intelligence while contrasting human thought processes with these incredible artificial entities.

«I think; therefore, I am.»

«I think; therefore, I am.» René Descartes. With this simple phrase, this philosopher was able to capture the essence of the power of our thoughts. Our ability to think makes us who we are – it’s what drives us to do what we do, feel the way we feel, and connect with others. In other words, everything we are and everything we do starts with that one little thought in our head! It’s pretty incredible when you stop to think about it.

Although there are various ways to learn, natural learning is among the most amazing. It could be interpreted as the mental mechanism we employ to pick up information and abilities without a conscious effort. It allows us to learn new skills and reach new conclusions by letting us apply our mental abilities, such as problem-solving, critical thinking, and decision-making. It also includes various cognitive skills such as perception, attention, memory, and language processing [1].

From a biological perspective, our unique ability to think is all thanks to how the neurons in our brains interact. Each neuron is made up of three essential parts: a cell body, where the genetic material is stored, and two types of extensions, an axon and dendrites, which work together to transmit and receive chemical signals between neurons through a process called the synapse (fig. 1). The aggregation of millions of neurons is what we recognise today as a natural neural network (NNN) [2]. Around the 1950s, the study of natural learning, cognition, and brain architecture brought up another interesting idea: can we mimic natural learning with an artificial approach?

graphic of the biological synapse

Fig 1. Synapse process and neuron architecture in the human brain. Made by Catherine Cabrera

Travelling throughout history – artificial intelligence

Many thinkers throughout history, like René Descartes and John Locke, debated the central issue of the connection between the body and the intellect, laying the groundwork for the development of cognitive science [3]. But the field of artificial intelligence didn’t take shape until the middle of the 20th century, with the invention of perceptrons. They were designed to mimic the structure and functionality of natural neurons to create simple machines that could solve classification problems (Fig. 2).

People were quite excited about this technology as they envisioned all of the amazing uses it might have. For example, according to a 1958 New York Times article about a perceptron-based electronic computer the Navy was working on, the machine was expected not only to walk, talk, and see, but also to write, reproduce itself, and be aware of its own existence [4].

Fig 2. Comparison between perceptrons and biological neurons. From: https://appliedgo.net/perceptron/

And while perceptrons were able to take inputs, process them, and produce outputs, their limited applicability was soon revealed when they failed to solve non-linear problems like the XOR function [5]. It was then that the tantalising question arose: if we can mimic a single neuron, could we mimic an entire neural network? The relationship between the brain and intelligence sparked a whole new era of exploration, leading to the incredible advancements in AI that we see today. Initially, artificial neural networks (ANNs) were made by joining multiple layers of single perceptrons and became able to solve pattern recognition problems; however, two obstacles arose: training these networks was expensive in terms of computational cost, and the binary answer (0 or 1) that perceptrons produce limits the problems these networks can solve [4].
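
A single perceptron is simple enough to write out in a few lines. The toy sketch below (our own illustration, not historical code) trains one on the linearly separable AND function, where it converges, and on XOR, where it never does, which is exactly the limitation described above.

```python
# Sketch: a single perceptron learns the linearly separable AND function
# but never converges on the non-linear XOR function.
import numpy as np

def train_perceptron(inputs, targets, epochs=50, lr=0.1):
    weights, bias = np.zeros(inputs.shape[1]), 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            prediction = 1 if x @ weights + bias > 0 else 0  # step activation: the binary 0/1 output
            weights += lr * (t - prediction) * x             # classic perceptron update rule
            bias += lr * (t - prediction)
    return weights, bias

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", np.array([0, 0, 0, 1])), ("XOR", np.array([0, 1, 1, 0]))]:
    w, b = train_perceptron(X, y)
    preds = [1 if x @ w + b > 0 else 0 for x in X]
    print(f"{name}: targets {y.tolist()} -> predictions {preds}")  # AND matches, XOR does not
```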

Thankfully, we have overcome these obstacles, and now we can design remarkable artificial intelligence entities like ChatGPT, You, Midjourney, DALL·E 2, and others. This accomplishment was achieved through new technologies and ground-breaking theories that have completely changed the architecture and processing abilities of artificial neural networks. For instance, recurrence, or feedback information processing, was added to neural networks, which was a significant advance in this area and resulted in the creation of recurrent neural networks (RNNs) (fig. 3). The introduction of this concept into ANNs meant that these networks started to diverge from natural neural networks, as we don’t have this process in our brain. Just consider that in the synapse process, the signal travels in only one direction [2].

However, thanks to these adaptations, we can now execute various tasks, such as text prediction and language synthesis. Nonetheless, RNNs have certain drawbacks too, such as their short-term memory (they struggle to retain information over long sequences) and challenging learning curves. Long Short-Term Memory networks (LSTMs) were developed to address these issues. They have completely changed the game by enabling RNNs to expand their memory and carry out jobs that need long-term memory (fig. 3). More recently, convolutional neural networks (CNNs), a three-dimensional configuration of artificial neurons, have been created for increasingly complex tasks. These networks are primarily employed for computer vision activities analogous to those you can carry out with your own eyesight (fig. 3) [5].

Fig 3. The architecture of different neural networks. a) Structures of ANN, RNN, and LSTM. Made by Catherine Cabrera. b) CNN basic structure, from https://www.interviewbit.com/blog/cnn-architecture/  
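
For readers who like to see these architectures in code, here is a minimal Keras sketch declaring a toy RNN, LSTM and CNN side by side. It only shows the structure with arbitrary sizes; real applications need task-specific data, layers and training.

```python
# Sketch: toy declarations of the three architectures discussed above (arbitrary sizes, untrained).
import tensorflow as tf
from tensorflow.keras import layers

# RNN: processes a sequence step by step, feeding its own state back in (recurrence).
rnn = tf.keras.Sequential([layers.SimpleRNN(32, input_shape=(20, 8)), layers.Dense(1)])

# LSTM: a recurrent network with gates that let it keep information over longer spans.
lstm = tf.keras.Sequential([layers.LSTM(32, input_shape=(20, 8)), layers.Dense(1)])

# CNN: slides small filters over an image to pick up local visual patterns.
cnn = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

for name, model in [("RNN", rnn), ("LSTM", lstm), ("CNN", cnn)]:
    print(name, "parameters:", model.count_params())
```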

It’s fascinating how our understanding of brain functions has influenced the creation of AI. However, despite being historically inspired by our human brain, AI has always been designed to function differently from us. It’s like the difference between a bird and an aeroplane – both can fly, but they do it in their own unique ways. Similarly, AI has its unique way of processing information and making decisions, which sets it apart from our own thought processes. Specifically, AI development has been a data-centric approach. Besides combining science and technology to develop these machines, we have to give data to these AIs to teach them how to «see». For example, just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to convey messages to other neurons, for a single perceptron to make a prediction, input channels, a processing stage, and one output channel are necessary to understand data (fig. 2). In our case, the signals we receive are stimuli from the outside world, which we later on convert into chemical signals the brain can process. For AI, data is this entry signal they need to function [1,6].

«Despite being historically inspired by our human brain, AI has always been designed to function differently from us». 

Catherine Cabrera – Chemist and microbiologist

Can AI outthink the human brain?

Nowadays, both fear of and curiosity about new AI technologies lead to typical misconceptions about their strengths and limitations. For instance, watching an AI generate new images from plain text, summarise vast amounts of information, and create videos, all in a matter of seconds, is scary, basically because we ourselves are not capable of this. But did you know that you, or even a kid, can probably win at Tic-Tac-Toe against some of the most amazing AIs? And even though artificial neural networks were inspired by the function of our brain, making them somewhat similar in concept, important differences in structure and processing capacity lead to significant divergences in functionality, as shown in the table below.

natural vs artificial neural networks comparative table

Table 1. Comparison between natural neural networks and artificial neural networks. Made by Catherine Cabrera [4, 7].

From the information presented above, we can see that there are some huge differences between NNNs and ANNs, including size, flexibility, and power efficiency, among others. However, comparing preliminary numbers or specific aspects alone is not enough to understand the whole picture, as learning is a process that goes beyond the sum of these aspects.

For example, have you ever considered how distinct machines are from people in terms of how they perceive and interpret the world around them? It’s similar to equating wisdom with knowledge. Despite having a vast quantity of knowledge in its memory, AI lacks the wisdom to evaluate tricky circumstances that humans can easily handle. For instance, it may be simple for us to identify a fuzzy image of an animal, but it may be difficult for an AI system to do so because of the rigid computer vision training parameters that are used to create them [8]. Hence, the next time you’re in awe of AIs’ incredible talents, remember that there are some things that only humans are capable of doing.

artificial intelligence sight explanation

Fig 4. Image classification performed by AI. From https://arxiv.org/abs/1412.6572

Exploring this idea further, AIs are recognised for performing specific tasks with ease, but did you know that our ability to multitask is one of the things that makes our brains so incredible? It comes from our neurons’ asynchronous and parallel nature. Regrettably, artificial intelligence cannot completely match our capabilities in this regard, because artificial neuron layers are frequently fully connected; artificial neurons need weights of zero to represent a lack of connections, in contrast to biological neurons, which are small-world in nature and can have far fewer connections between them [2,4,7]. It just goes to show that while artificial intelligence has made great strides, nature still excels at some tasks better than even the most sophisticated machinery! Even more, the strict parameters used to build AIs and their data dependencies make them extremely vulnerable to any software or hardware malfunction. In comparison, even if a segment of our brain is damaged, the rest can still function, keeping us alive!

«While artificial intelligence has made great strides, nature still excels at some tasks better than even the most sophisticated machinery». 

Catherine Cabrera – Chemist and microbiologist

Conclusions

Despite their differences, natural thinking and AI can complement each other in many ways. For example, AI systems can be used to help human decision-making by providing insights and predictions based on large amounts of data. In turn, human cognition can be used to validate and refine the output of AI systems, as well as to provide context and interpret results. So next time you learn something new, take a moment to marvel at the incredible power of your brain and natural learning, and don’t underestimate yourself! You are not a machine, and you don’t need to be.

REFERENCES

  1. Criss, E. (2008). The Natural Learning Process. Music Educators Journal, 95(2), 42–46. https://doi.org/10.1177/0027432108325071
  2. Brain Basics: The Life and Death of a Neuron. (2022). National Institute of Neurological Disorders and Stroke. https://www.ninds.nih.gov/health-information/public-education/brain-basics/brain-basics-life-and-death-neuron
  3. Gardner, H. (1987). The mind’s new science: A history of the cognitive revolution. Basic Books.
  4. Nagyfi, R. (2018). The differences between Artificial and Biological Neural Networks. Medium. https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7
  5. 3 types of neural networks that AI uses. (2019). Allerin. https://www.allerin.com/blog/3-types-of-neural-networks-that-ai-uses
  6. Sejnowski, Terrence J. (2018). The Deep Learning Revolution. MIT Press. p. 47. ISBN 978-0-262-03803-4.
  7. Thakur, A. (2021). Fundamentals of Neural Networks. International Journal for Research in Applied Science and Engineering Technology, 9(VIII), 407–426. https://doi.org/10.22214/ijraset.2021.37362
  8. Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. Published as a conference paper at ICLR 2015. https://doi.org/10.48550/arXiv.1412.6572

Catherine Cabrera – Data Scientist

Nvidia GTC 2023

Authors: Cristian Zorrilla (Interaction Designer) – Ivan Caballero (AI Designer) – Carla Acosta (Visual Designer)

Nvidia is pushing hard to democratise Artificial Intelligence, and at Equinox AI Lab we’re excited about every new announcement. In this note, we want to focus on three talks from Nvidia GTC 2023 that we found extremely interesting.

AI Best Practices for Successful Implementations

The first was the «AI Best Practices for Successful Implementations» session. The interviewer Bob Venero, CEO at Future Tech Enterprise, met John Russell, VP & CIO at Northrop Grumman Corporation. They talked about the key points of integrating AI into organisations.

At the beginning of the session, Bob asked about the value of AI in business. John explained that it was crucial for his business to run campaigns so that people across the company knew about and were involved in AI, whether or not they were experts; they wanted AI to be part of every area of the company. He added that AI lets them take advantage of and understand micro-transactions across their business processes, because AI can give you the information you need to make informed decisions and accelerate productivity, rather than having to wrangle the data manually.

So, humans can do what we should be doing and let technology bring, contextualise, and visualise the information for us. He then highlighted that this is not about letting a machine take complicated and complex decisions for us, and John emphasised that point. For him, educating people on AI is essential, because many believe it will replace us. Instead, AI will do the things we don’t want to do; as he put it, «what we want to do is move the things that are dull, dirty and dangerous down to the machines and allow you (humans) to do the things that are more upscale.»

Later, John gave two points to consider when implementing AI. The first is privacy: you need to be aware of what data you expose to the machine, the possible outcomes, and what the machine will do across the company, so having an AI accountability team is mandatory. The second is responsibility: AI is a tool, and as with any other tool in a toolbox, you need to be accountable for how you use it.

Finally, he summed up the session with the top business quality and speed outcomes achieved through AI. Again, AI can bring information to you very quickly, so if you have a problem, you can quickly spot and understand it. Then, you can make better decisions and processes that enhance your upfront and back-end business. So, by integrating AI within your operations, you get a quality check in every part of the life cycle, accelerating delivery and driving quality.

How Generative AI is Transforming the Creative Process

The second was «How Generative AI is Transforming the Creative Process». In this talk, Bryan Catanzaro from Nvidia interviewed Scott Belsky from Adobe. They discussed the future of design and art processes and focused on the announcement of the renewed partnership between Adobe and Nvidia.

Adobe could not afford to lose the AI race; its products would soon be obsolete if they did not evolve with the new and revolutionary generative AI models, and they have done very well. In this session, Scott told us how Adobe is implementing AI in legendary products like Photoshop, Illustrator and Adobe Express, as well as in Firefly (Adobe’s generative AI platform).

Adobe plans to allow users to describe what they want with text input alone; you will be able to create images and edit them with simple commands such as «remove the background», «draw a dragon in Times Square», «replicate the dog», or «cut my silhouette». People will be able to go further in media creation: not only images but also videos, 3D models, social media content and animations will be impacted.

adobe firefly interface

Adobe Firefly (AI generative tool)

They also researched which problems users had with their products and are mitigating them with AI to make the products more usable and faster overall. These new features will allow rookies to realise their ideas painlessly, as the tools will be easier to use; at the same time, they will benefit designers and professionals, because a new world of opportunities and creativity will open up for them.

Scott and Bryan mentioned the natural fear these implementations could cause in creatives, but they also said that it is «normal» to fear new technology (every time in human history, we’ve feared it) and that we will soon get the most out of it.

It is «normal» to fear new technology; every time in human history, we’ve feared it.

Scott Belsky (Adobe)

3D by AI: Using generative NeRFs for building virtual worlds

The third one was «3D by AI: Using generative NeRFs for building virtual worlds». In this talk, Gavriel State from Nvidia explained how artificial intelligence can be used in many areas of 3D production, from asset creation and animation to behaviour and mixed reality, especially with Neural Radiance Field (NeRF) technology, which creates hyper-realistic 3D environments.

This technology works by teaching an artificial intelligence to recognise how light behaves in the images of a scene, which produces hyper-realistic results. It also speeds up the rendering process: a very high-quality model can be rendered in as little as 30 seconds, allowing 3D environments to be created almost instantly just by learning the behaviour of light.

Another great advantage of Neural Radiance Fields (NeRFs) is that they can decompose images and separate them into sets, like this picture of the flower.

flower image decomposed by nerfs

Flower image decomposed

It can also achieve much more realistic reflections in 3D materials, which are used in video games, special effects and animated films.

In addition, when it comes to texturing, a 3D model achieves a much more realistic finish, as the technique can understand the normals and the lighting of the objects in a much more accurate way. Not only that: given a fully textured 3D object, it can also do the reverse process and separate the texture from the model.

The most exciting use of Neural Radiance Fields (NeRFs) is that you can train the artificial intelligence on an object, and it will be able to generate and adjust it in a 3D space without the need for a 3D model; this allows you to change the perspective of a 2D object within a 3D area.

example of a car turning from 2d to 3d with nerfs

Example of 2D image to 3D image

In addition, it can learn about physics just from a video. An example of this is a tennis match: the artificial intelligence learned from footage of the game and was able to reconstruct the movement and simulate the physics of that tennis match, which will become a potent tool in the near future.

ai 3d simulation example with tennis

AI 3D simulation example 

In conclusion, Nvidia is a major point of reference nowadays. We were glad to participate in the Nvidia GTC 2023 event and can’t wait to experience all these new capabilities in different creative and business processes.


Cristian Zorrilla – Interaction Designer


Ivan Caballero – AI Designer


Carla Acosta – Visual Designer
