Artificial Intelligence

In this category we collect all of our Artificial Intelligence-related blog posts.

hands holding a puzzle

Author:

Andrés Ruiz – Data Engineer

Decision-making has shaped the course of human history, and it is fundamental in shaping what can happen in the future. We carry out this process constantly, from the simplest problems, such as choosing which means of transport will get us to work fastest, to more complex issues, such as how to grow a company.

brain illustration making decisions

 

To do this, we must go through a thought process that allows us to choose the best possible decision to solve the problem. But would it be better to let ourselves be guided by intuition? Or is it better to follow a more deliberate process that allows us to analyze the problem in depth and find the best alternative?

Clearly, to solve a simple problem from our daily lives, intuition is enough, and we will probably make a reasonable decision. But can intuition alone solve more complex problems? The answer is that the resulting decision will probably not be the best or the most efficient one; this is where Business Intelligence (BI) takes on a fundamental role.

WHAT IS BI?

“Business Intelligence (BI) is an umbrella term for technology that enables data preparation, mining, management, and visualization. Business Intelligence tools and processes enable users to identify actionable insights from unprocessed data, making it easier to make data-driven decisions across companies across various industries.” IBM. What is BI? [1]

*Want to learn first about the relevance of data? Read our blog It's all about data here.

business intelligence illustration

 

In other words, Business Intelligence is a set of tools that allows a person or organization to take advantage of a fundamental input, data, to generate insights. In short, information and knowledge can be extracted from data and used as a basis for improving the decision-making process.

WHY USE BI?

BI offers organizations the opportunity to extract insights from data that serve as a basis for improving business decisions. In addition, it allows organizations to operate in a more agile and efficient way, since they can find in their data the information they need to make intelligent decisions in any area of the organization [2].

data coming out from a smartphone

BI PROCESS

Applying Business Intelligence starts with analyzing the organization's requirements and business questions. This first step is essential, since the business questions and requirements must be well defined before continuing with the process [3]. From there, you follow the procedure shown in the image below.

BI APPLICATIONS

  1. Data Sources: This step identifies the data sources needed to address the requirements and business questions.
  2. Extract-Transform-Load (ETL) to the DW: First, a data model is built based on the requirements given by the organization. Then the ETL process is carried out from the data sources into the data warehouse (DWH); a minimal sketch of this step follows the list.
  3. Data Mining: This process analyzes large quantities of data to find patterns such as groups of records, unusual records, and dependencies [4].
  4. Data Analysis and Reporting: In this step, data visualizations are used to find insights that allow us to answer the requirements and business questions raised.
  5. Decision Making: In this last step, the relevant conclusions are drawn to make way for intelligent decision-making.
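As a rough illustration of step 2, here is a minimal ETL sketch in Python with pandas and SQLAlchemy. The file name, column names, table name, and connection string are all assumptions; a real pipeline would follow the organization's own data model.

```python
# A minimal, hypothetical ETL sketch: extract from a CSV source,
# apply a simple transformation, and load into a data-warehouse table.
import pandas as pd
from sqlalchemy import create_engine

# Extract: read raw sales data from a source system export.
raw = pd.read_csv("sales_source.csv", parse_dates=["sale_date"])

# Transform: clean and reshape the data to match the warehouse model.
sales = (
    raw.dropna(subset=["customer_id", "amount"])
       .assign(amount=lambda df: df["amount"].astype(float),
               sale_month=lambda df: df["sale_date"].dt.to_period("M").astype(str))
)

# Load: write the result into a fact table in the data warehouse (DWH).
engine = create_engine("postgresql://user:password@dwh-host:5432/warehouse")
sales.to_sql("fact_sales", engine, if_exists="append", index=False)
```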
dashboard interface in an ipad

 

In conclusion, BI is important for organizations because it helps answer many business questions and supports intelligent decisions that lead to relevant solutions for the problems at hand.

As seen above, BI has many different applications, all aimed at improving any area or field of the organization, which shows that it is now an essential tool for organizational growth.

REFERENCES

[1] IBM. ¿Qué es la Inteligencia Empresarial? Retrieved January 6, 2022, from https://www.ibm.com/co-es/analytics/business-intelligence

[2] Tableau. ¿Qué es la inteligencia de negocios? Guía sobre la inteligencia de negocios y por qué es importante. Retrieved January 6, 2022, from https://www.tableau.com/es-es/learn/articles/business-intelligence

[3] Sherman, R. (2015). Business Intelligence Guidebook: From Data Integration to Analytics. Chapter 3, pp. 44-45.

[4] Sherman, R. (2015). Business Intelligence Guidebook: From Data Integration to Analytics. Chapter 1, p. 17.

 

Andrés Ruiz – Data Engineer


data charts

If you are an Equinox AI & Data Lab follower, you’ve probably seen our blogs and Instagram posts where we talk about some solutions Artificial Intelligence can provide to different problems. For example, do you remember our post about how AI contributes to Wildlife Conservation? If you haven’t seen it, you can check it on this link! https://bit.ly/3F7yUyl.

In this case, conservationists and researchers use AI to monitor and preserve animals in their natural habitat. They need to understand how animals repeat their behaviors, reproductive and migration patterns, or hunting routines. In other words, they need to collect a vast amount of DATA.

giraffe seen through computer vision

Taken from zsl.org

Do you see? That’s only one example, but the truth is: it’s all about data. As you know, with AI we teach a computer how to perform a task that a human would typically perform by giving it a huge dataset. That means accurate forecasts and pattern detection wouldn’t be possible without quality data.

But how do we know what type of data we need to collect, and how can we manage and analyze that data? To answer these questions, we need to refer to the Data Life Cycle model. Based on this model, the process starts with planning what information is required to solve a problem and how we can collect that data. After collecting it, we process and analyze the data. Depending on the objective, we can use tools for visualizing data, making predictions with the help of AI and Machine Learning, understanding and observing trends in the data, among others. After that, we publish the results and share the data with the project’s stakeholders. Finally, the data is preserved and re-used to keep the final product up to date as the original data changes. This process can be applied to academic research, business research, real-world problems, organizations, or any other data-based problem.

DATA IN ORGANIZATIONS

We’ve covered many exciting applications on Equinox’s blog in fields such as the environment or the arts, but what happens in organizations? Decisions are made from data, which is why data is now recognized as the most critical asset an organization has. In our laboratory, we know this, and that’s why we work on obtaining, processing, and analyzing data, relying on User Experience to understand our clients’ business and thus use their data to generate value through Analytics and Artificial Intelligence.

However, not all companies have understood the value of the data they handle every day. For some decades now, giants like Google, Meta (Facebook), Amazon, and other tech companies have understood that the value of a product is not in its price. For example, Google provides its search, translation, e-mail, storage services, among others, completely free because the real value is in the users’ information. Users are classified according to their personality traits, consumption habits, or their network interaction (visits, clicks, page views, search history, and much more). Then, all this information is sold to advertisers or the government. In other contexts, such as small or medium-sized companies, data is an engine that drives the business’ core.

WHY IS DATA IMPORTANT FOR ORGANIZATIONS?

people working together

Taken from associationsnow.com

  • As mentioned before, data helps make better decisions. Even small startups generate data, whether from customers, user habits, demographics, or more. This information could be used to:
    • Find new customers
    • Increase customer retention
    • Predict sales trends

Those are some examples of how data can benefit organizations externally, but there are many things that can be explored internally. For example, employee churn could be analyzed to determine retention plans (a minimal sketch of this kind of prediction follows the list below).

  • Data helps us understand performance. It is essential to be clear about how the different parts of the company are working: teams, departments, budgets, marketing efforts, etc., and an efficient way to do this is by collecting and tracking data to identify bottlenecks.
  • Data helps us understand and improve business processes. In that way, wasted money and time can be reduced.
  • Data helps us understand customers. When an organization knows its customers, it becomes easier to know whether the products or services offered are attractive and to develop marketing campaigns to retain them or attract potential ones.
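As an illustration of the kind of prediction mentioned above, here is a minimal, hypothetical sketch of a churn model built on a handful of made-up tabular features. The column names, values, and model choice are assumptions, not a prescription; the same pattern would apply to employee churn or sales-trend forecasting with different features.

```python
# Minimal, hypothetical sketch: estimating churn risk from made-up tabular data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "monthly_spend":   [20, 75, 40, 90, 15, 60, 35, 80],
    "months_active":   [3, 24, 12, 36, 2, 18, 8, 30],
    "support_tickets": [4, 0, 2, 1, 5, 1, 3, 0],
    "churned":         [1, 0, 0, 0, 1, 0, 1, 0],   # 1 = the customer left
})

X, y = df.drop(columns="churned"), df["churned"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new (hypothetical) customer.
new_customer = pd.DataFrame({"monthly_spend": [25],
                             "months_active": [4],
                             "support_tickets": [3]})
print("Estimated churn probability:", model.predict_proba(new_customer)[0, 1])
```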

In conclusion, the fastest-growing companies are the ones who know and take advantage of their clients’ and business’ data. This gives them a competitive advantage over other companies and allows them to make decisions based on facts that are inherent to the business.

REFERENCES

Anand, A. (October 6, 2021). How AI is revolutionizing Wildlife Conservation. Retrieved January 6, 2022, from https://www.analyticssteps.com/blogs/how-ai-revolutionizing-wildlife-conservation

GROW. (March 9, 2020). Why is data important for your business? Retrieved January 7, 2022, from https://www.grow.com/blog/data-important-business

Leonard, K. (October 25, 2018). The Role of Data in Business. Retrieved January 7, 2022, from https://smallbusiness.chron.com/role-data-business-20405.html

Images

Capgemini Mexico. (June 27, 2019). Improve your business model with big data. Obtained from: https://www.capgemini.com/mx-es/2019/06/mejora-tu-modelo-de-negocio-con-big-data/

zsl.org. (n.d.). Monitoring and Technology – Machine Learning. Obtained from: https://www.zsl.org/conservation/conservation-initiatives/conservation-technology/machine-learning

Smith, E. (June 5, 2019). Data Culture: Why Your Organization Should Think Beyond Big Data. Obtained from: https://associationsnow.com/2019/06/data-culture-why-your-organization-should-think-beyond-big-data/

daniela ruiz data engineer

Daniela Ruiz – Data Engineer


social distancing robots with masks

Over the last decades, AI has transformed multiple fields of knowledge, and medicine is no exception. There are many different ways in which we can enhance medicine using AI. In this article, I will introduce you to some of the ways AI can help discover new drugs, understand the mysteries of cancer, and learn up to one billion relations between different research resources.

scientist illustration

 

The first time AI helped humans do research was in 2007, when Adam (a robot) generated hypotheses about which genes code for critical enzymes that catalyze reactions in the yeast Saccharomyces cerevisiae. Adam also used robotics to physically test its predictions in a lab. Researchers at the UK universities of Aberystwyth and Cambridge then independently tested Adam’s hypotheses about the functions of 19 genes; 9 were new and accurate, and only one was wrong [1]. This is only one example of the multiple applications of AI in this field; get ready to learn more!

From understanding cancer to discovering new drugs 

AI is turning the drug-discovery paradigm upside down by using patient-driven biology and data to derive more predictive hypotheses, rather than the traditional trial-and-error approach. For example, the Boston biotechnology company Berg developed a model to identify previously unknown cancer mechanisms using tests on more than 1,000 cancerous and healthy human cell samples.

human cells sample illustration

 

Another contribution of AI to this field was made by BenevolentBio when they created a platform that took information from multiple sources such as research papers, patents, clinical trials, and patient records.  

This information is then used to create a cloud-based representation of more than one billion known and inferred relationships between biological entities such as genes, symptoms, diseases, proteins, tissues, species, and candidate drugs. The inference of those relationships uses Natural Language Processing (NLP) techniques.

DeepChem is a Python-based deep learning library used to find suitable candidates in drug discovery, and it aims to democratize deep learning for drug discovery, materials science, quantum chemistry, and biology [2]. The neural network model used is a multilayer perceptron, a neural network where the mapping between inputs and outputs is non-linear. A multilayer perceptron has input and output layers, and one or more hidden layers with many neurons stacked together [3].
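To make the idea of a multilayer perceptron concrete, here is a small, illustrative sketch using scikit-learn on synthetic "molecular descriptor" data. It is not DeepChem's actual pipeline; the features, labels, and layer size are assumptions chosen only to show the input layer, hidden layer, and output layer described above.

```python
# Illustrative only: a small multilayer perceptron on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))              # 64 hypothetical descriptors per compound
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # toy "active / inactive" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Input layer (64 features), one hidden layer of 128 neurons with a non-linear
# ReLU activation, and an output layer: the structure described above.
mlp = MLPClassifier(hidden_layer_sizes=(128,), activation="relu",
                    max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print(f"Toy test accuracy: {mlp.score(X_test, y_test):.2f}")
```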

multilayer perceptron neural network

 

Uses and advantages 

In the drug development field alone, there are many applications of AI. In this article [4], the authors mapped the uses of AI in drug discovery:

diagram of ai in drug discovery

 

As you can see, there are a lot of applications in the drug discovery area alone. Prediction plays a central role in most of these uses; it is one of the preferred tasks of Artificial Intelligence!

Developing a new treatment can take over a decade and cost an estimated 2.6 billion dollars. Much of that money effectively goes down the drain, because it includes the spending on the nine out of ten candidate therapies that fail somewhere between phase-1 trials and regulatory approval. With the right effort, AI and machine learning will usher in an era of quicker, cheaper, and more effective drug discovery. AI can recognize hit and lead compounds, provide quicker validation of the drug target, and optimize the drug structure design [4].

drug development cost diagram

 

Some challenges 

Despite these promising applications, many scientists are unaware of the capabilities of AI. A survey published in February by BenchSci, a start-up in Toronto, Canada, that provides a machine-learning tool for scientists searching for antibodies, found that 41% of the 330 drug-discovery researchers who took part were unfamiliar with the uses of AI [5].   

AI faces some significant data challenges, such as the data’s scale, growth, diversity, and uncertainty. The data sets available for drug development in pharmaceutical companies can involve millions of compounds, and traditional ML tools might not be able to deal with these types of data. In addition, access to data from various database providers can incur extra costs to a company. The data should also be reliable and high quality to ensure accurate result prediction.   

Other challenges that prevent full-fledged adoption of AI in the pharmaceutical industry include the lack of skilled personnel to operate AI-based platforms, limited budget for small organizations, apprehension of replacing humans leading to job loss, skepticism about the data generated by AI, and the black box phenomenon (i.e., how the AI platform reaches the conclusions)[4].

TO CONCLUDE

In spite of all the previously listed challenges, Artificial Intelligence will transform the world in the coming years. If we want to be prepared, we have to tackle each of these problems. Fortunately, none of this stops AI from helping us find solutions and answers to different problems. Looking to the future, new tools such as Quantum Computing will become available to run chemical simulations [6]. If you are looking to enhance some processes in the pharmaceutical industry with the help of AI, don’t hesitate to contact us.

REFERENCES

daniel panche

Daniel Panche – Data Scientist


robot with a human appearance

The field of robotics has advanced significantly over the years, with applications in industry, domestic services, the medical field, and more. However, have you ever wondered why humans are afraid of some robots? Is humanity prepared for robotics driven by artificial intelligence?

Take a look at the following images. How do you feel about them? Maybe neutral? Maybe uncomfortable?

human appearance robots

 

And what about these? Perhaps the sensations are better than the previous ones.

cute face robots

 

The strange but fascinating phenomenon of the Uncanny Valley

The sensations people feel when looking at those images, from discomfort to calm, are caused by a strange phenomenon called the Uncanny Valley. This concept was first introduced in the 1970s by Masahiro Mori, who attempted to describe his observations and feelings towards robots that looked and acted almost like humans. He hypothesized that as robots appear more humanlike, they become more appealing and people have more empathy with them, but only up to a certain point [1]. When we reach that point (the uncanny valley), a person’s response abruptly shifts from empathy to revulsion, causing a tendency to be scared of the robot or feelings of strangeness and unease [2]. Besides robots, the phenomenon can appear in virtually created characters, such as those of the new metaverse.

For more information about the metaverse, you can read this article.

Figure 1 shows the uncanny valley graph. The horizontal axis represents the robot’s (or virtual character’s) human likeness, and the vertical axis represents our affinity towards it. However, the relationship between those two variables is not a simple increase: affinity climbs with human likeness, then drops sharply into the valley, and the effect becomes even more pronounced when the robot moves.

uncanny valley graph

Figure 1. Uncanny Valley graph
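Since Figure 1 is only described here, the short matplotlib sketch below renders a rough, hypothetical version of the curve; the numeric values are made up purely to illustrate the dip in affinity described above.

```python
# Purely illustrative: made-up numbers tracing the shape of the curve in Figure 1.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.array([0, 20, 40, 60, 75, 85, 90, 95, 100])   # % human likeness
affinity = np.array([0, 10, 25, 45, 60, -40, -20, 40, 80])  # affinity (arbitrary units)

plt.plot(likeness, affinity, marker="o")
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness (%)")
plt.ylabel("Affinity")
plt.title("Sketch of the uncanny valley")
plt.show()
```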

Why is it important to avoid the uncanny valley?

Nowadays, the use of robotics in our daily life is booming [3]; however, full interaction with machines has not been achieved due to the low affinity and empathy between humans and robots. Robots’ limitations in expressing emotions lead to an inevitable rejection during interaction. Therefore, it is a great challenge for this industry to make robots a natural part of our environment without reaching the uncanny valley.

One of the most visionary applications of robotics is related to education and rehabilitation. These applications are achieved with robots capable of simulating human emotions, which strengthens the human-machine relationship. By accomplishing this, the education or rehabilitation of disabled people becomes possible, since the patient can build a specific empathic bond with their robotic therapist, while the robot or virtual character can carry out its work more effectively [4].

Can AI help us to avoid the uncanny valley?

Yes, the short answer is yes. And you may be thinking that the answer is obvious, but it’s not. Many companies in this field continue to create highly sophisticated robots with numerous sensors, complex computer vision algorithms, advanced joints, and so on, while forgetting one crucial but straightforward phenomenon: the uncanny valley, into which many robots fall, even the most advanced [5].

The problem is the assumption that more is always better, which applies to the use of artificial intelligence but not to the design of the robot. So, for example, when a robot with a complex design tries to emulate emotions with many facial expressions or large movements of its joints, no matter how complex its artificial intelligence models are, the result is the famous but unwanted uncanny valley.

The best example could be Ameca, a robot released in 2022 that uses Automatic Speech Recognition to recognize people’s voices, Computer Vision to recognize faces and objects, among others [6]. However, in my opinion, it is a robot that will fail in its interaction with human beings due to its way of expressing emotions.

On the other hand, imagine a simple robot that can express emotions with just a few joints or simple sounds: empathy towards the robot increases considerably. But that’s not enough.

If we want to reach maximum familiarity with the robot, we need to use artificial intelligence. How? For example, with Automatic Speech Emotion Recognition to listen to and understand what people are saying, Facial Emotion Recognition, or Computer Vision.

At the same time, robots could use emotions to accompany what they are saying, giving a personalized treatment depending on that emotion.
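As a sketch of how Facial Emotion Recognition could feed a robot's behavior, the snippet below detects faces with OpenCV and classifies each one with a pretrained Keras model. The model file "emotion_cnn.h5" and the emotion label list are hypothetical placeholders; the point is only the loop structure: sense, classify, and let the robot react to the detected emotion.

```python
# Illustrative sketch: face detection with OpenCV + a hypothetical emotion classifier.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]   # assumed label order
model = load_model("emotion_cnn.h5")                           # hypothetical model file
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        # A robot could react here: choose a sound or gesture matching `emotion`.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```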

To find good examples, we need to look for robots that emulate living things’ emotions (animals or humans) with minimalist details, or even without any human or animal trait. One of the best-known examples is WALL-E, an animated robot that looks like a robot and can emulate emotions with just its movements and sounds. Another good example is Vector, a little AI-powered robot capable of emulating emotions without any joints, just an LCD screen.

 

What’s next?

As mentioned before, the branch of robotics has grown in recent years; although its most considerable daily use continues to be industrial robots, the other branches of robotics have also had significant growth.

What’s the problem? Before creating any robot, we need to know its purpose well. If the tasks a robot has to perform involve human interaction and its design and technology aren’t focused on that, the only outcome will be human repulsion towards the robot, and therefore its mission will fail.

To conclude, if you think that robots in the future will look like those in Terminator, I, Robot, Transformers, Surrogates, or any other that might cause discomfort, the answer, in my opinion, is no. Instead, they will look like the robots of Star Wars or WALL-E, which show emotions simply and without terrifying us.

REFERENCES

  1. Caballar, R. D. (2019). What Is the Uncanny Valley? IEEE Spectrum. https://spectrum.ieee.org/what-is-the-uncanny-valley
  2. Mori, M. (2012). The Uncanny Valley: The Original Essay by Masahiro Mori. IEEE Spectrum. https://spectrum.ieee.org/the-uncanny-valley
  3. Patiño, P., Moreno, I., Muñoz, L., Serracín, J. R., Quintero, J., & Quiel, J. (2012). La robótica educativa, una herramienta para la enseñanza-aprendizaje de las ciencias y las tecnologías. Education in the Knowledge Society (EKS). https://www.redalyc.org/pdf/2010/201024390005.pdf
  4. Kaspar’s journey. (2017). Kaspar the Social Robot. https://www.herts.ac.uk/kaspar/meet-kaspar/kaspars-journey
  5. Rankings: Creepiest Robots – ROBOTS: Your Guide to the World of Robotics. IEEE. https://robots.ieee.org/robots/?t=rankings-creepiest-robots
  6. Engineered Arts Ltd. (2021). AI vs. Human Intelligence. Engineered Arts. https://www.engineeredarts.co.uk/software/artificial-intelligence-vs-human-intelligence/
juan casas

Juan Casas – Data Scientist


man with umbrella

It is another beautiful day in your home city, and you look through your window up at the blue sky… no sign of rain the whole day. Having concluded that, you apply your sunscreen, decide to wear light clothes, and before going out you wonder to yourself: ‘Do I need to take my umbrella? No, why would I put extra weight in my bag for no reason?’

The day goes by, and just two hours after leaving your home, in the blink of an eye, it starts to get cloudy and darker, and here comes the rain. The city’s erratic weather frustrates a large majority of its citizens every day; nonetheless, AI can come to the rescue by helping the field of short-term weather forecasting.

Forecasting rain is not just helpful for saving yourself from getting wet; it also matters to many industries that depend on the weather, like outdoor events, aerial services, tourism agencies, and farms, among others. AI and deep learning predictions are also being tested for tornado and hurricane formation; this way, people living in or close to a potentially affected area can be informed in time to find refuge.

For context, in tornado-prone regions residents receive, on average, an early alert 16 minutes before the tornado hits [1]. However, this window may not be enough for some people to find a safe place, putting their lives in danger. In this specific case, using data collected from humidity, air pressure, and temperature sensors during the preceding minutes, AI can push the alert up to an hour before the tornado arrives [1].

image of a hurricane

Long-range and short-range forecasting

Now, let’s point out an important difference between two types of forecasting: long and short term.

Long-range forecasting refers to predicting the weather up to a month ahead. This is done with mathematical and physical models that simulate the fluid dynamics of the atmosphere [2]. These models are usually based on satellite images collected over month- or year-long windows. Because weather patterns are volatile and complex, these models are huge and take a lot of computing resources to run.

For short-range weather forecasting, also called nowcasting, the window is reduced to a few hours [3]. Here the mathematical models can become too complex due to the chaotic behavior of the weather over short periods. This is where AI and deep learning are used to simplify the calculations and reduce the time they take.

Where to get the data to build the models?

There are many sources from which scientists collect the data needed to build deep learning models, some more reliable and with higher resolution than others. Some of the most used are:

Radiosondes are balloon-borne instruments that measure atmospheric characteristics, such as temperature, pressure, and humidity, as they move through the air. These radiosondes use a radio signal to communicate the data to a station [4].

Radar stands for radio detection and ranging and sends out radio waves that bounce off the nearest object and return to a receiver. It can sense many precipitation characteristics such as location, motion, and intensity [4]. 

Satellites are geostationary objects that rotate with the earth and can be split into three categories [4]:

Visible satellites record storms, clouds, fires, and smog.

Infrared satellites record clouds, water, land temperatures, and features on the ocean.

Water vapor satellites look for the moisture content in the upper half of the atmosphere.

Drones (still under research) are a new technology used in this field to help with short-range forecasting. They are equipped with pressure, humidity, and temperature sensors that generate data sent to a base station.

Drones for weather forecasting

The use of drones in weather prediction is a young field of research that has so far shown promising results. This research focuses on the lowest layer of the earth’s atmosphere, called the boundary layer, where most of the earth’s weather happens.

Scientists have traditionally used weather balloons or weather stations to collect data and build the models; nonetheless, the first tool has the disadvantage that it cannot be fully controlled and depends on the wind direction, while weather stations cannot reach the heights needed and must be attached to a grounded object [1].

Drones have neither of these limitations, and they can fly into headwinds and even within a storm. However, because drones provide much higher-resolution data, this creates challenges in the computational expense and the complexity of the physics parameterizations required in the forecasting models themselves [6].

One of the ongoing projects is directed by professor Philip Chilson at the University of Oklahoma. He envisions a drone network over a region, equipped with a complete set of sensors. These unmanned vehicles would be launched hourly. Instead of requiring more complex theoretical models, they would feed data into AI methods [1].

drone flying over grey clouds

A model approach

The artificial intelligence company DeepMind, owned by Google’s parent Alphabet, is among the most advanced in research on weather prediction through AI. In the paper “Skilful precipitation nowcasting using deep generative models of radar” [5], they found that 56 government meteorologists preferred the AI prediction model over other short-term forecasting methods in 89% of the cases.

The DeepMind model predicts up to two hours ahead thanks to incoming weather data. The company developed a deep generative model (DGM), which is a statistical model that learns probability distributions of data and allows for the easy generation of samples from the learned distributions. The DGM is especially helpful both for learning from observational data and for representing uncertainty across a variety of spatial and temporal scales.

The DGM, in this case, performs probabilistic nowcasting of precipitation that addresses the nearly random behavior of the weather. The methodology helps improve forecast quality and consistency, producing predictions over regions of up to 1,536 km x 1,280 km with lead times from 5 to 90 minutes ahead.

These models are suited to predicting smaller-scale weather phenomena that are inherently hard to predict due to underlying stochasticity, which is critical for nowcasting. The input data are radar-based observations of surface precipitation over a given time range and territory. These observations update the model at a rate of 4 frames per 20 minutes, and from them the model samples 18 frames of future precipitation covering the next 90 minutes [5].

The learning process is driven by two loss functions and a regularization term, which guide parameter adjustment by comparing real radar observations to those generated by the model. The first loss is defined by a spatial discriminator, a convolutional neural network that aims to distinguish individual observed radar fields from generated fields, ensuring spatial consistency and discouraging blurry predictions.

The other loss is defined by a temporal discriminator, a three-dimensional convolutional neural network that aims to distinguish observed radar sequences from generated ones; it imposes temporal consistency and penalizes jumpy predictions. Together with this, a regularization term is introduced to improve accuracy; this regularizer penalizes deviations, at the grid-cell resolution, between the real radar sequences and the model’s predictive mean. This term also produces accurate location predictions and improves overall performance.
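To make the three training terms more concrete, here is a heavily simplified PyTorch-style sketch of how a generator loss combining the spatial discriminator, the temporal discriminator, and the grid-cell regularizer might be assembled. It is not DeepMind's implementation; the function signature, tensor shapes, hinge-style adversarial terms, and regularizer weight are assumptions for illustration only.

```python
# Heavily simplified sketch (not DeepMind's code); shapes assumed: (batch, time, channels, H, W).
import torch
import torch.nn.functional as F

def generator_loss(gen_frames, real_frames, spatial_disc, temporal_disc,
                   extra_samples, reg_weight=20.0):
    b, t, c, h, w = gen_frames.shape

    # Spatial term: the 2-D discriminator scores individual generated frames;
    # the generator tries to make each frame look like a real radar field.
    spatial_scores = spatial_disc(gen_frames.reshape(b * t, c, h, w))
    loss_spatial = -spatial_scores.mean()

    # Temporal term: the 3-D discriminator scores whole generated sequences,
    # penalizing temporally inconsistent ("jumpy") predictions.
    loss_temporal = -temporal_disc(gen_frames).mean()

    # Grid-cell regularizer: penalize the deviation between the mean of several
    # generated samples and the real radar sequence, at grid-cell resolution.
    pred_mean = torch.stack([gen_frames] + extra_samples, dim=0).mean(dim=0)
    loss_reg = F.l1_loss(pred_mean, real_frames)

    return loss_spatial + loss_temporal + reg_weight * loss_reg
```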

As for the training process, the model was trained on a large dataset of precipitation events, consisting of 256 x 256 crops extracted from a radar source, each around 110 minutes long (22 frames). These images correspond to observations over the UK for the years 2016-2018, and the model was evaluated on a test set from 2019.

By way of conclusion

Throughout this article, we have discussed how AI is helping weather forecasting and how this will benefit many economic activities. With high-resolution data from drones, for instance, AI models can be fed better-sensed physical variables and thus give more precise results. The path towards an accurate tool for weather prediction is being walked with significant steps, and soon whether or not to take your umbrella will no longer be an issue.

If you want to read more of our content, follow this link: https://equinoxailab.ai/en/centro-de-conocimiento/

REFERENCES

[1] Elizabeth Ciobanu, “How drones are helping with weather forecasting” in Drone blog, Jan. 2022. [Online]. Available: https://www.droneblog.com/how-drones-are-helping-with-weather-forecasting/

[2] Aryan Thodupunuri, “The future of artificial intelligence in weather forecasting” in Towards AI, Aug. 2021. [Online]. Available: https://www.towardsai.net/p/l/the-future-of-artificial-intelligence-in-weather-forecasting

[3] James J., “How is long-range weather forecasting different than short-range forecasting?” in Socratic, Jul. 2015. [Online]. Available: https://socratic.org/questions/how-is-long-range-weather-forecasting-different-than-short-range-forecasting

[4] “Collecting weather data” in Lumen. [Online]. Available: https://courses.lumenlearning.com/geophysical/chapter/collecting-weather-data/

[5] Suman Ravuri, Karen Lenc et al., “Skilful precipitation nowcasting using deep generative models of radar” in Nature, Jul. 2021. [Online]. Available: https://www.nature.com/articles/s41586-021-03854-z.pdf

favio casas

Favio Acosta – Data Scientist


shopping cart

Have you ever been to the supermarket and wondered what the most efficient path to find all your groceries is? Or perhaps on holiday, have you ever wanted to plan a journey that passes through as many tourist attractions as possible in the shortest route?

happy family in supermarket

 

If you only have a few items on your list, this isn’t so hard. You might know the quickest way from the vegetable aisle to the bakery, but what if your shopping list had a hundred items on it?

Then it becomes harder. The number of different ways you could plan your route gets really big as the number of items increases. For 5 items on a list, there are 120 different routes you could take. For 10 items, that becomes 3,628,800 different possible routes!
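To see this explosion in practice, the short sketch below brute-forces a five-item shopping list with hypothetical store coordinates: it checks all 120 orderings and picks the shortest round trip, which is exactly the approach that stops scaling as the list grows.

```python
# Brute force quickly becomes impractical: this sketch tries every ordering of a
# short shopping list (the item names and 2-D store coordinates are made up).
import itertools
import math

items = {"bread": (0, 5), "milk": (3, 1), "eggs": (6, 4),
         "apples": (2, 8), "coffee": (7, 7)}
entrance = (0, 0)

def path_length(order):
    # Walk entrance -> items in the given order -> back to the entrance.
    points = [entrance] + [items[name] for name in order] + [entrance]
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

best = min(itertools.permutations(items), key=path_length)
print("Best order:", best, f"length {path_length(best):.1f}")
print("Orderings checked:", math.factorial(len(items)))  # 120 for 5 items
```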

The Travelling Salesman

This type of problem is known as the traveling salesman problem (TSP), after its popular description in the 1930s [1] where a salesman has to find the shortest route to visit a list of cities. Variants of the TSP can be seen in many areas of business and science. A shipping company wanting to lower its operating costs might want to find the best route for shipping cargo to many different countries. In X-ray crystallography [2] a detector needs to measure the intensity of X-rays from many different positions on a sample. The order in which these measurements are made doesn’t matter, but repositioning the sample requires moving 4 motors, sometimes for hundreds of thousands of samples.

For a large number of destinations, or X-ray positions, it would be impractical to go through each possible pathway and find the shortest route (the brute-force approach). What this means is that we can only try some of the paths and try to make our way to a good solution (not always the best one). There is a range of different algorithms you can use to do this: genetic algorithms, simulated annealing, tabu search, ant colony optimization, and the cross-entropy method. Here we will only consider simulated annealing (SA).

How to solve the problem

Simulated annealing is an optimization method inspired by annealing in metallurgy, a process that heats and cools a metal to improve its physical properties. To get a solution to our TSP, we start from some random path (we shuffle all the cities on our list) and then make changes to it over time. First, we calculate the total distance of our path by adding up the distances between each consecutive pair of cities. Then we propose a new path, which is the same as the old path but with two randomly chosen cities swapped.

Depending on whether the new path is longer or shorter than the old one, we accept or reject it with a certain probability. If we accept the new path, it replaces the old one. We repeat this proposal process for many iterations, and eventually we reach a path that is much better than the random one we started with. The diagrams below [3] show an example of annealing for points on a 3D grid: the process starts from the image on the left and ends, after annealing, with the one on the right.

diagram before the annealing process
diagram after the annealing process
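Here is a minimal simulated-annealing sketch for the TSP following the steps just described: start from a random order, repeatedly swap two cities, and accept worse paths with a temperature-dependent probability. The cooling schedule, iteration count, and random test cities are assumptions chosen for illustration.

```python
import math
import random

def tour_length(cities, order):
    """Total round-trip distance when visiting `cities` in `order`."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def simulated_annealing(cities, n_iter=50_000, t_start=10.0, t_end=1e-3):
    order = list(range(len(cities)))
    random.shuffle(order)                          # start from a random path
    current_len = tour_length(cities, order)
    best, best_len = order[:], current_len
    for step in range(n_iter):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / n_iter)
        i, j = random.sample(range(len(order)), 2)
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two cities
        candidate_len = tour_length(cities, candidate)
        delta = candidate_len - current_len
        # Always accept shorter paths; accept longer ones with probability exp(-delta / t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            order, current_len = candidate, candidate_len
            if current_len < best_len:
                best, best_len = order[:], current_len
    return best, best_len

# Example: 30 random cities on a 100 x 100 grid.
random.seed(42)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
route, length = simulated_annealing(cities)
print(f"Found a tour of length {length:.1f}")
```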

 

Why quantum?

So, where does quantum come in? Quantum annealing (QA) follows a similar approach to SA, but it optimizes a quantum system instead of the classical system used in SA [5]. To do QA, you need a quantum computer known as a quantum annealer. The main difference between QA and SA is that QA maps our problem (finding the best route) to finding the lowest-energy state of a physical system. This allows us to use some phenomena from quantum mechanics to potentially solve our problem faster.

With SA, the space of possible solutions is sampled one path at a time. With QA, we can start from a great many possible paths at once. This is known as a superposition and effectively allows us to consider a greater number of paths at one time. QA also takes advantage of quantum tunneling, a phenomenon in which a quantum system has a probability of escaping local minima. In the diagram below, we see how this works. Starting from the red path, we want to get to the blue path, but the paths in between are significantly longer. With SA, following the classical path (green) is very unlikely because the increase in length is so great. But with QA, we have a probability of tunneling through (purple) to reach the blue path.

superposition diagram

Whilst theoretically this is great, there are a few limitations in practice. Firstly, the quantum annealers of today are relatively small scale. As of the time of writing, only a path of 9 cities has been solved [9], which is orders of magnitude smaller than the world-record tour of over a million cities computed on powerful supercomputers [6]. That said, the company that makes quantum annealers, D-Wave, lists over 250 early applications of QA, including traffic flow optimization and waste collection optimization for sustainable cities [7].

For more information on the traveling salesman problem, click here. If you’d like to learn more about quantum annealing, I recommend you check out the description from D-Wave.

If you want to know more about Quantum Computing read my blog Random Number Generators.

REFERENCES

[1] Grötschel, M., Holland, O. Solution of large-scale symmetric traveling salesman problems. Mathematical Programming 51, 141–202 (1991). https://doi.org/10.1007/BF01586932

[2] Lawler, E. L. (1985). The Travelling Salesman Problem: A Guided Tour of Combinatorial Optimization (Repr. with corrections. ed.). John Wiley & sons. ISBN 978-0471904137.

[3] Original image by Panchotera~enwiki – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=101533459

[4] Arnab Das (Theoretical Condensed Matter Physics Division and Centre for Applied Mathematics and Computational Science, Saha Institute of Nuclear Physics, Kolkata, India) – Quantum Annealing and Other Optimization Methods workshop, 2005

[5] Warren, R.H. Solving the traveling salesman problem on a quantum annealer. SN Appl. Sci. 2, 75 (2020). https://doi.org/10.1007/s42452-019-1829-x

[6] http://www.math.uwaterloo.ca/tsp/world/

[7] Sheir Yarkoni, Florian Neukart, Eliane Moreno Gomez Tagle, Nicole Magiera, Bharat Mehta, Kunal Hire, Swapnil Narkhede, and Martin Hofmann. 2020. Quantum Shuttle: traffic navigation with Quantum computing. In Proceedings of the 1st ACM SIGSOFT International Workshop on Architectures and Paradigms for Engineering Quantum Software (APEQS 2020). Association for Computing Machinery, New York, NY, USA, 22–30. DOI:https://doi.org/10.1145/3412451.3428500

thomas clarke

Thomas Clarke – Quantum Strategist



Did you know that AI can boost productivity by 40%?