Artificial Intelligence

In this category we show you all of our Artificial Intelligence-related blog posts.

family buying in the supermarket

Author:

Cristian Zorrilla – Interaction Designer Equinox AI Lab

AI IN RETAIL 

As Artificial Intelligence capabilities expand, its solutions keep transforming the landscape of retail businesses across the spectrum.
From streamlining administrative tasks with automation to virtual shopping experiences created through real-time advertising, artificial intelligence has made it simpler to increase the speed, efficiency, and accuracy of business output. This enhanced performance is closely linked to advanced data and predictive analytics systems that help companies make data-driven business decisions.

Retailers are increasingly looking for ways to optimise their business by adapting to new trends, understanding new market strategies, and utilising new technologies. As a result, they are looking to integrate technologies such as artificial intelligence, data science, data analytics, and RPA (Robotic Process Automation) into their businesses.

RPA consists of automating processes with robots, following established rules, so these robots can perform repetitive tasks, such as:

  • Entering and reviewing data.
  • Pressing buttons.
  • Uploading or downloading files.
  • Issuing or paying invoices.

Automating these tasks will save you money and help your employees because they will have more time to focus on more critical tasks. 

At least £216 billion of economic growth in the UK (representing 10.1% of the UK economy per year) has been achieved thanks to technology. Of this, £54 billion was generated in London, demonstrating the great power that many companies are already harnessing to boost their business.

AI allows you to avoid manual errors, empower human capabilities, automate processes and deliver accurate results. According to Forbes magazine, AI will be able to increase the workforce by 70% by 2025 (Gow, 2022). The industry is also expected to increase its value by $36.8 billion by 2025, while changing consumer preferences and customer buying patterns represent a significant factor for industry growth in the following years.

AI also offers insight into consumers’ behavioural analytics, which retailers can utilise to gain insights to help enhance different touchpoints throughout the customer journey. Our mission at Equinox is to build a retail solution that caters to the individual needs of your business.

hand using augmented reality interface

Equinox and its impact on retailers

At Equinox, we provide complete solutions, as we always seek to implement more than one technology when designing our projects. Therefore, all our solutions combine AI + Data + RPA, so we can always give you a complete solution that addresses multiple problems at once.

With our team's multidisciplinary work, we can address any need our customers may have, since it comprises developers, designers, psychologists, anthropologists, AI engineers, data analysts, data scientists, and RPA specialists.

Better performance

Over time, people have realised the near-infinite potential of Artificial Intelligence, or AI for short, which is an excellent tool when it comes to selling. For example, AI can learn about your customers so that you understand their preferences, behaviour and needs.

AI also allows you to create advanced algorithms to know what your customers might be interested in based on demographics, social media behaviour and buying patterns.

We can help you improve many processes within your company, such as synchronising your physical and virtual stores. This lets you know the status and behaviour of both channels at all times, with the virtual store complementing the physical store and vice versa, so the two channels do not overlap but work in harmony, complementing each other.

We can also optimise many internal processes, such as the supply chain, making it faster and more efficient, since we can track exactly where your supplies are at all times. We can also apply RPA specifically to the supply chain.

In this way, the robots can plan shipping schedules. They can also monitor supply and demand, notifying you when a specific product is running out or, even better, ordering and paying for the product automatically.

A study conducted by ISG (Information Services Group) found that using RPA in the supply chain reduces by approximately 43% the resources required for this end-to-end process, which includes tasks such as invoicing, credits, collections, pricing and others (Pillai, 2021).

User understanding and assistance

Thanks to AI, RPA and data tools, you will know exactly what your customers require, understand the optimal time to push your products, and even provide the products your customers need before they know they need them.

You will also be able to optimise your customers' experience, because you will offer them exactly what they are looking for at a price they are willing to pay, and build customer loyalty, because you will always have at hand what they expect when they need it.

Another vital AI tool is Machine Learning. It allows you to find patterns in your customers' purchases by analysing what they buy the most and when they buy it, enabling you to recommend products they can buy together. For example, this is what Amazon currently does:

amazon recommender system image

The image is taken from Google Images

AI could even recommend which products you should place together in your physical store, helping you sell much more. AI has also empowered different businesses with high-level data, which has dramatically improved their internal operations and helped them find new business opportunities.

AI also allows you to create interactive chats, or chatbots. These can converse with your customers and improve your customer service, as they can answer frequently asked questions, report on the status of orders and provide the help customers need 24/7.

In addition, these bots can collect information, so they can learn from your customers, helping you to have valuable information that will allow you to make better business decisions.

Image recognition and analysis is another excellent AI tool that helps your customers discover new or related products, allowing you to create recommendations based on the aesthetics and similarities of the products.

Internal processes

A problem retailers have always had is the disconnect between their physical store and their online store: the two usually work in very different ways and follow different approaches, which creates an unpleasant shopping experience and leads to inefficient operations in both channels.

AI is an excellent solution since it can synchronise your different channels, giving you, for example, a complete inventory list and ways to take advantage of your online and physical stores at the same time. It also lets you know whether your shoppers prefer to buy online or to handle the products in person, enabling you to prioritise one channel without forgetting the potential of the others.

User experience

We can also use AI for more experimental products, such as virtual fitting rooms that offer a much more personalised experience. They allow customers to quickly find the perfect outfit and to try on different garments without physically putting them on.

This is achieved with augmented reality: customers can see themselves wearing the outfit without owning it, and they can change its colour, size or design simply by pressing a button. This dramatically streamlines item selection and lets customers browse an extensive catalogue without searching for products in the store.

*Luxury brands are exploring and implementing AI interactions in their stores; click here to read more.

AI can also help you secure your shoppers' data by recording movements inside the store and encrypting sensitive personal information, such as credit card details, personal data or shopping lists.

Also, thanks to video analytics and computer vision, you can monitor your physical store's products in real time and spot suspicious activities. The system can notify you of theft instantly so you can take action as quickly as possible; AI will help prevent theft and alert you about incidents in your stores.

CONCLUSION

AI, data analysis and RPA are practical tools for changing the retail industry, and we want you to be part of that change. Equinox AI Lab can be your guide in this process; if any of this information catches your attention, don’t hesitate to contact us or schedule a meeting with the help of our ChatBot.

REFERENCES

cristian zorrilla

Cristian Zorrilla – Interaction Designer

syrian boy seated in the street

Author:

Carla Acosta – Visual Designer Equinox AI Lab

INTRO

It takes work to write about war. I was born in a country that experienced conflict in the '80s: Colombia. I was born in the '90s, a decade later; even so, it sensitised me. That's why I've always been interested in the Middle East conflicts, especially in Palestine and Syria. Syria and Colombia have a lot in common: they are beautiful places with amazing people, and they even share city names, like Palmyra (Syria) and Palmira (Valle, Colombia).

I've read several books to understand the conflict, and the most important thing I've realised is that wars do not have just one cause; it is mandatory to comprehend multiple contexts and triggers. In this study, I'll write about the Syrian war, the conflict's digital media, and Artificial Intelligence.

Syria is a country that has endured war for almost a decade. Since the beginning of Syria's conflict, activists on the ground have risked their lives to document human-rights violations, from torture and attacks on protesters to indiscriminate rocket strikes and barrel bombs1. There are millions of hours of footage to analyse in order to understand what happened, present it to international organisations, and seek justice. All this content is known as user-generated content (UGC) or eyewitness media. AI has been used to detect war crimes, chemical weapons, unauthorised guns and victimisers in that content.

In this text, I'll dive deeper into Syrian war user-generated content and how it relates to AI: how it supports the different justice processes in international organisations, helps avoid investigators' trauma, and recognises guns and bombs to build solid cases. Finally, after that analysis, the text will answer the question: are UGC and AI enough to do justice?

Why is there a war in Syria?

Syrian frictions have several roots, from past unresolved conflicts, land disputes and political tensions to ethnic and religious differences. It would take a whole book to explain the war; that's why I prefer to recommend one that explains it from a broad view: Syria, written by Victor de Currea-Lugo2. All the tensions I mentioned detonated when young people took to the streets in the southern city of Daraa in March 20113.

Since 2011, young protesters have faced strong government crackdowns and increasing violence from both government forces and civilians. Soon the conflict became a civil war, in which different nations intervened and atrocities were committed. Countless videos, photographs and even voice recordings were uploaded to the internet to document the war in real time. As a result, there are more hours of digital media than hours of conflict.

Content Management

Eyewitness media needs to be organised and protected. That's why the Syrian Archive was born. "It is a private Syrian-led project that aims to preserve, enhance and memorialise documentation of human rights violations and other crimes committed by all parties to conflict in Syria for use in advocacy, justice and accountability. They shed light on how civilians' documentary practices and experiences have significantly contributed to the production of multi-source digital testimonies within diverse and constantly transforming local, social, political and organisational contexts"4. On the Syrian Archive website, there are 3 million recordings uploaded. Those videos are always verified to stop misinformation: time, date, location and sources are checked. This material makes collecting Syrian evidence and building cases more manageable. The United Nations is working with AI to analyse the recordings and judge victimisers.

syrian girl with antiwar message

Artificial Intelligence and war content

-AI, social media and war-

In the Syrian war, AI is used for diverse purposes. The first purpose is to capture content; the Syrian Archive has a well-organised database of verified videos and images, yet there are still many videos that no one has seen. AI can capture Syrian war content on the internet and notify investigators. Machines are trained to recognise places, dates and images.

On the other hand, AI is also capturing content from social media to ban it from Facebook, Twitter or YouTube (TikTok didn't exist at the time, so there's not a lot of content there, and Instagram was only a year old). The COVID-19 virus forced social media workers to empty offices and rely on automated takedown software5. "AI is notoriously context-blind" and is threatening and deleting documentaries and videos, confusing them with terrorist content. "With the help of automated software, YouTube removes millions of videos a year, and Facebook deleted more than 1 billion accounts last year for violating rules like posting terrorist content"5.

Different algorithms are looking for war content on the internet to send it to the International Criminal Court or United Nations, for example. Other algorithms are taking down “dangerous” content (according to what they’ve learned is dangerous). Is AI helping or blocking war justice? How can algorithms differentiate war content from terrorist content? How can we benefit from terrorist content to seek justice?

The main problem is that when memories are lost, the act itself is forgotten, and justice won't ever come. Preserving war content is essential to humanise war. People have uploaded their relatives' deaths to have proof and to expose it to the world; as hard as it sounds, it is the only evidence they have of what's happening in their country. For them, it is difficult to trust the government or other internal institutions.

Artificial Intelligence needs to be more efficient in this case, and making it so is the job of data scientists, UX designers, psychologists and tech professionals. Kids are often exposed to internet content, and it's reasonable that parents don't want their kids to find explosions, murders, or torture online. Of course, they have the right to decide what content they want to watch. But Syrians and victims around the world also have the right to speak and denounce crimes. As professionals working in tech, we need to create strategies and solutions to promote free speech without violating others' rights, for instance, user profile creation, tags or dedicated platforms for kids and for war content. A great example is eyeWitness, an app created by the International Bar Association; it allows users to upload sensitive content, store it securely in lockers, and use it later for investigations or trials.6

-AI to avoid trauma-

AI can review the data and reduce the time human investigators spend watching hours of traumatic videos and images. Can you imagine spending your days looking at torture, murders and explosions? AI can analyse and cluster different kinds of crimes to support further investigation, and it can also delete duplicates or unrelated images.

During a war, trauma is not limited to the battlefield. Investigators, journalists, and even social media users can be exposed to violent and distressing content that leaves lasting impressions. Those impressions are called secondary trauma, the result of second-hand exposure7.

In Table 4 below, we can see that, from a sample of 209 journalists and humanitarian workers interviewed, 33% had seen disturbing material online once per week or more8. So if someone consumes war media on a regular basis, they can experience trauma, and sometimes they even feel ashamed of feeling traumatised because they are not the direct victims of the crimes.

social media impact chart statistics

Table 4 – Making Secondary Trauma a Primary Issue

An editor at a news agency explained how he was traumatised by the unexpectedly distressing content of the picture of Alan Kurdi, a 3-year-old Syrian boy found drowned on a beach in Turkey in September 2015.

alan kurdi syrian boy lifeless body

Alan Kurdi photograph taken by Nilüfer Demir

“The dead child on the beach. I walked into the office, and a colleague rushed up to me saying, ‘look at this, look at this, it’s really important’, and you don’t have time … the guards haven’t gone up, and I spent the entire evening in tears, I was really shaken by it…” 8

AI analyses the content: it can be trained with computer vision techniques to tag disturbing images or sounds in media and warn the viewer. "If there is a warning of what you are about to see, you are steeled for it; you can brace yourself."8 (For example, the warning notices on Instagram.) AI can also create media clusters, so investigators don't have to review all the war material, just the fragments relevant to the specific cases they are building.

At the same time, Natural Language Processing can be helpful. As almost all the Syrian war user-generated content (UGC) is in Arabic, it is hard to share the burden among all the NGO members; those who speak Arabic are loaded with traumatic content. With NLP, AI translations can help all members understand the recordings faster and more easily. In fact, the first time technologists (IBM) and international criminal justice professionals collaborated was to translate evidence at the Nuremberg trials between English, French, German, and Russian.9

If Artificial Intelligence is able not only to reduce review time but also to improve the mental health of second-line workers, it is worth it. It can help fight injustice for victims and survivors without traumatising those who want to help.

-AI, war content and Object Detection-

Object recognition algorithms detect specific objects in recordings, for example weapons and all related data, in order to build cases. (Remember that some countries are accused of supplying the government with weapons and interfering in the war.)

One example of object detection is Mr Khateeb's. He is the founder of the Syrian Archive and wanted to assemble a searchable database of all munition attacks, hoping to build a case accusing Syria and its military backer, Russia, of using internationally banned weapons during the conflict.1 This was a challenging task: they needed lots of images, explosion sounds and videos of where those weapons were used to train the machines, yet much of that material is not on the internet (remember that other AI algorithms are banning war content on social media). In this case, tech professionals had to be creative.

They created synthetic data (2D and 3D gun models, plus sound and image recreations) to train the machines and identify attacks in 1.5 million videos recorded during the war. By mid-2021, the case was ready to start; they thought that showing the international community what was happening would lead to an intervention against Syria's regime, but that was not the case1. AI is not enough to break the sovereignty of a state in this instance. But at least some proof is being compiled to keep making the perpetrators visible.

Machines don't do it alone; they can't. That is something I have come to understand working here at Equinox AI Lab. A creative mind is needed behind them to solve problems (as we saw in the previous example), along with someone who checks whether the model is accurate; you have to test and prototype countless times. That's why I believe object detection can be such a powerful tool for analysing war media content: the technology is trustworthy as long as professionals follow quality checks while training and implementing the algorithms.

Bomb attack in Kafranbel Surgical Hospital in Syria – Taken from https://syrianarchive.org/

Why is it important to seek justice for the survivors?

According to Laura (a psychologist I interviewed to write this article), survivors need closure. With justice, reparation and guarantees of non-recurrence, they can process and accept the events, and flashbacks and post-traumatic stress may become less frequent in their everyday lives.

It is not easy to record the death of your relatives, the destruction of your house, or the bomb attacks on the school you attended as a child, but it is one of the few ways Syrian people have to denounce these crimes, or at least to keep proof for the future in the hope of justice. If they make the effort to archive traumatic media, the least we, as an external party, can do is review it, listen to it and use it to find justice, its primary purpose.

If there are tools to find justice, human rights workers and justice professionals should take advantage of them, always in favour of the victims. The arena in which technology, international human rights, and criminal prosecution intersect is new and growing, so they must find strategies to adapt laws and accept digital evidence more easily and quickly.

Are UGC and AI enough to do justice?

“In 2017, the International Criminal Court issued its first arrest warrant that rested primarily on social media evidence, after a video emerged on Facebook”5. That video showed Mahmoud al-Werfalli, a Libyan commander, shooting dead ten prisoners. According to the Irish Times, it seems that some of those videos were posted online by Werfalli's Al-Saiqa colleagues, which means they are alleged terrorist content.

This is when we should ask ourselves: can we benefit from terrorist content? How can we teach AI to use it in our favour instead of just banning or deleting it? One solution is that the algorithms of big social media companies like YouTube or Facebook should not focus only on deleting or banning content. First, they should review it, notify the proper entities if it's valuable, and preserve the media so it can be used in the survivors' favour.
“There are, of course, procedural difficulties in this new era of “open-source evidence”, sometimes it is hard to verify, it’s hard to prove that it hasn’t been tampered with, and its date, time and location can be hard to establish beyond doubt”.6 But UGC allied with AI can be a great duet.

Personally, I think it's possible to build a trustworthy case with AI-provided evidence. The first thing we need to clarify, as Alejandro, the Data Scientist I interviewed, told me, is the role of AI when we say "AI to seek justice". AI won't be the judge; it is still unable to be. As I wrote before, it will take the role of evidence provider, or it will be a tool to build cases. Many human beings will collaborate in the process, and that's the synergy we have to reach in human-machine interaction.

Computers won't replace people; they will amplify their capabilities. Therefore, AI won't be the judge; all the people who work with it will contribute to finding the truth and exposing it to a tribunal.

Let's clarify how courts define media content evidence: "Digital evidence is 'information and data of value to an investigation that is stored on, received, or transmitted by an electronic device'. At international organisations, chambers will examine its provenance, source or author."9 If they consider it has weight in the investigation, they will use it as evidence in their cases. It is critical that judges understand how AI works, so they can understand how the proofs were collected, how models validate media information and why it carries weight.

There are some examples where media was sufficient evidence. Since the Second World War, at the International Military Tribunal in Nuremberg, a mountain of Third Reich propaganda, public campaigns, films and photographs has been used to prove the Nazis' genocidal intent and other criminal acts10.

A more recent example is Sweden: in 2012, the police got access to a film posted on Facebook in which a Syrian rebel participated in a serious assault, and the court accepted it as evidence to prosecute him.6 The last example is Ukraine: in this war, people are taught to record videos and collect witness evidence to make it reliable. AI is also used to analyse Russian social media pictures and detect perpetrators. There aren't many cases built yet, but they will come, and we will be able to watch them.

As a general principle, international courts and tribunals have three basic rules, and all evidence has to fall into one of the three categories. The first is "power-based rules", which define the prosecutor's authority to collect evidence. The second is "rights-based rules", which require the prosecutor to accord certain privileges to suspects and witnesses during evidence collection. And third, "procedural rules" govern the techniques the prosecutor can use to gather and preserve evidence.9 It is easy for digital media evidence not to fall under any of these three rules; that is when a defined admissibility standard is needed.

AI and UGC will be enough to seek justice as long as the people who manage them follow protocols and find creative ways to analyse, verify and expose what they want to highlight. It is essential that international courts create new rules and regulations to support and authenticate digital evidence. Since photographs and films have existed, they've been a rich source of evidence for prosecuting crimes in different conflicts, and witness testimony is definitely no longer the only critical body of proof.

CONCLUSION

The Syrian conflict was distinctive: not only did journalists and institutions document the war, but the civilian population uploaded tons of videos, photographs and voice recordings to social media platforms and apps. These archives became a valuable asset for prosecuting perpetrators. AI has become a facilitator of truth and justice; it is now used to find conflict content online, tag specific attacks or occurrences to make investigators' jobs easier, delete duplicates, and detect key faces and objects to understand what happened.

At the same time, victims and survivors need reparation and guarantees of non-recurrence; justice will help them process the events, and flashbacks and post-traumatic stress may therefore be reduced. That's why international human rights and criminal prosecution professionals have to find strategies to accept digital evidence more easily and quickly.

Answering the question, are UGC and AI enough to do justice? They are. We have to keep in mind that AI's role is not to be a judge but a tool to provide and authenticate evidence. This evidence can help establish the accountability of natural persons and institutions. But there is still a need to create new rules that take digital media evidence into account.

Finally, since the Syrian civil war, more conflicts are being recorded in real time. For example, in Ukraine, witnesses are taught to record videos correctly so they can serve as useful evidence, and AI techniques like face recognition are used to identify victimisers. The field where international human rights, prosecution, AI and digital content intersect is growing, and we have to be prepared.

“Justice will not be served until those who are unaffected are as outraged as those who are”. Benjamin Franklin

INTERVIEWS

Watch here the interviews the author, Carla Acosta, did with Alejandro Salamanca, Equinox Director and Data Scientist, and Laura Vesga, a psychologist and expert in armed conflict.

REFERENCES

1. Abdulrahim, R. (2021, 13 February). AI Emerges as Crucial Tool for Groups Seeking Justice for Syria War Crimes. WSJ. https://www.wsj.com/articles/ai-emerges-as-crucial-tool-for-groups-seeking-justice-for-syria-war-crimes-11613228401

2. Currea-Lugo, D. V. (2019). Syria: Where hate displaced hope. AGUILAR.

3. Reid. (2022, 12 July). Syrian refugee crisis: Facts, FAQs, and how to help. Retrieved 10 December 2022, from
https://www.worldvision.org/refugees-news-stories/syrian-refugee-crisis-facts#why-start

4. Syrian Archive. (n.d.). https://syrianarchive.org/

5. Asher-Schapiro, A. B. B. (2020, 19 June). "Lost memories": War crimes evidence threatened by AI moderation. Reuters. https://www.reuters.com/article/us-global-socialmedia-rights-trfn-idUSKBN23Q2TO

6. Cluskey, P. (2017, 3 October). Social media evidence a game-changer in war crimes trial. The Irish Times.
https://www.irishtimes.com/news/world/europe/social-media-evidence-a-game-changer-in-war-crimes-trial-1.3243098

7. Spangenberg, J. (2022, 9 March). How war videos on social media can trigger secondary trauma. dw.com. https://www.dw.com/en/how-war-videos-on-social-media-can-trigger-secondary-trauma/a-61049292

8. Dubberley, Griffin & Mert Bal. (2015). Making Secondary Trauma a Primary Issue: A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline. Eyewitness Media Hub. Retrieved 10 December 2022, from http://eyewitnessmediahub.com/uploads/browser/files/Trauma%20Report.pdf

9. Freeman. (n.d.). Digital Evidence and War Crimes Prosecutions: The Impact of Digital Technologies on International Criminal Investigations and Trials. Fordham International Law Journal. Retrieved 10 December 2022, from https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2696&context=ilj

carla acosta

Carla Acosta – Lead Designer

EQUINOX

& COMPUTER VISION

Discover how we apply Computer Vision to comprehensive solutions for different industries and business challenges

eye with computer vision representation on it
jetson nano photography

Authors:

Brhayan Liberato – Data Scientist  Equinox AI Lab

Nicolas Diaz – Data Scientist  Equinox AI Lab

 

Getting started with Jetson Nano and Jetson Inference

NVIDIA's Jetson Nano is one of the most straightforward prototyping environments for Computer Vision projects. It includes a Graphics Processing Unit (GPU) and is designed specifically for implementing basic AI applications. We have experience with this development platform at Equinox, particularly in Computer Vision challenges such as face recognition and object detection. In this article, we will show some options for remote control so you can start developing right away, as well as the basic installation and configuration of a deep learning library developed specifically for the Jetson Nano.

jetson nano and scope images

Jetson Nano and images of its scope

SSH connection and VNC

The control and configuration of the Jetson Nano can be done through a remote connection, using VNC Viewer and the VS-code Remote-SSH extension. This section lists all the steps needed to configure both tools.

Installation and Configuration of VNC-Viewer on the Jetson Nano and the main Windows computer

  1. Download Vino (VNC server) on the Jetson Nano.

$ sudo apt update

$ sudo apt install vino

  2. Create a new directory.

$ mkdir -p ~/.config/autostart

$ cp /usr/share/applications/vino-server.desktop ~/.config/autostart

  3. Configure the VNC server.

$ gsettings set org.gnome.Vino prompt-enabled false

$ gsettings set org.gnome.Vino require-encryption false

  4. Set a password.

$ gsettings set org.gnome.Vino authentication-methods "['vnc']"

$ gsettings set org.gnome.Vino vnc-password $(echo -n 'yourPassword'|base64)

'yourPassword' can be replaced with any other password. For example, if the desired password is abc123, the command would be:

$ gsettings set org.gnome.Vino vnc-password $(echo -n 'abc123'|base64)

  5. Find and save the Jetson's IP address.

$ ifconfig

The output lists the Jetson Nano's network interfaces, where:

  • eth0 is for Ethernet.

  • wlan0 is for Wi-Fi.

  • l4tbr0 is for the USB-mode connection.

  6. Restart the Jetson Nano device.

$ sudo reboot

  7. Download and install VNC Viewer on the main computer.

https://www.realvnc.com/en/connect/download/viewer/

  8. On VNC Viewer, go to File > New connection > General.

  9. Enter the IP address of the Jetson Nano obtained in step 5, define an identifier, then click "OK".

  10. After the new connection is created, double-click it. A prompt box will appear; click "Continue" and enter the password that you defined in step 4.

 

Installation and Configuration VS Code Remote-SSH extension

  1. Download and install VS Code.

https://code.visualstudio.com/

  2. On VS Code, go to the left sidebar, click "Extensions", click on the search bar, and write Remote - SSH, then click "Install".

  3. On VS Code, go to the left sidebar, click "Remote Explorer", on the Extensions panel click "Add New" and write the following command:

ssh jetson_nano_Username@jetsonNano_IP

For example, if the username is myjetson and the IP is 192.168.101.15, then the command will be:

ssh myjetson@192.168.101.15

  4. Select the SSH configuration file to update; press Enter to select the first option, which should contain "user" or "home".

  5. Select Linux, and enter the password for the Jetson Nano user.

 

Installing Jetson Inference

We need a library that offers optimised deep model implementations to use the Jetson Nano with a deep learning model efficiently. One of the best options is Jetson Inference, a library that uses TensorRT to deploy neural networks using graph optimisations and kernel fusion. You can find the complete installation and setup guide on their GitHub repository. In short, you need to have your Jetson Nano flashed with JetPack, and then run the following commands to install and build the project:

  1. Clone the repo: make sure that git and cmake are installed, then navigate to your chosen folder and clone the project:

$ sudo apt-get update

$ sudo apt-get install git cmake

$ git clone https://github.com/dusty-nv/jetson-inference

$ cd jetson-inference

$ git submodule update --init

  2. Install the development packages: this library builds bindings for each version of Python that is installed on the system. In order to do that, you need to install the following development packages:

$ sudo apt-get install libpython3-dev python3-numpy

  3. Configure the build: next, create a build directory and configure the build using cmake:

$ mkdir build

$ cd build

$ cmake ../

  4. Download the models: Jetson Inference includes a model downloader tool that offers several pre-trained networks you can easily install:

$ cd jetson-inference/tools

$ ./download-models.sh

  5. Compile the project: finally, build the libraries, Python extension bindings and code samples:

$ cd jetson-inference/build

$ make

$ sudo make install

$ sudo ldconfig

 

With Jetson Inference installed, you can start testing some pre-trained detection models such as MobileNet, classification models such as GoogleNet, and semantic segmentation models such as Cityscapes. An example of detectNet, the detection object used in detection models, can be accessed in the following link.

With all remote connections set up and Jetson Inference installed, you can start developing your Computer Vision projects in Python. In the following sections, you will see how to create a people detection application using Jetson Inference’s detectNet object and deep learning architectures such as MobileNet and Inception.

 

People detection using Jetson Inference and OpenCV

One of the most popular and computationally demanding problems in Computer Vision is object detection: identifying the class and location of an object given an input image. For this task, we can use deep learning models or digital image processing techniques to detect objects and get their location as a bounding box.

In recordings, we can detect a moving object in every frame and track its location, saving that information for more specific applications. This article will describe some object detection models and algorithms to create a people-detecting application on Jetson Nano. You can download the code for both approaches in our Github repository.

Detection models

Jetson Inference provides several models for machine learning applications, such as image classification, object detection, semantic segmentation, and pose estimation. In addition, this library offers two deep learning models for object detection: MobileNetV2 and InceptionV2, which are implemented using Jetson Inference’s generic object called detectNet.

This object accepts an image as input and outputs a list of coordinates of the detected bounding boxes along with their classes and confidence values. Both models are trained with the COCO dataset, a large-scale object detection, segmentation, and captioning dataset, which provides 91 different classes.

First, MobileNetV2 is a convolutional network optimised for mobile devices that uses convolutional layers with 32 filters and depth-wise convolutions that filter features. More specifically, it uses a unique framework for object detection called SSDLite, and its performance is measured with the COCO dataset for this machine-learning task. Using Jetson Inference, we can create a detectNet instance with the parameter “ssd-mobilenet-v2”. Notice that we also set the confidence threshold for detections as 0.5. You can tune this value depending on your needs:

detectNet instance example
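A minimal sketch of that step, assuming the standard jetson.inference Python bindings (the variable name net is ours):

import jetson.inference

# Load the SSD-MobileNetV2 detector and keep only detections
# with a confidence above 0.5 (tune this value to your needs)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)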

With our model ready to go, we can set the input data and the output format. To do that, we create a videoSource instance with the parameter “csi://0”, which corresponds to a Raspberry Camera connected to the Jetson. Other options could be “/dev/video1” for a USB camera or the path to an image or video file. For the output, we set “display://0” to show the detections on screen, but you can also set it to a path and file name to save the result instead.

videoSource instance example
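A minimal sketch of the input and output setup, assuming the jetson.utils Python bindings (variable names are ours):

import jetson.utils

# Input: CSI Raspberry Camera; "/dev/video1" would select a USB camera,
# and a file path would read from a video or image instead
camera = jetson.utils.videoSource("csi://0")

# Output: render the detections on screen; a path and file name would save the result instead
display = jetson.utils.videoOutput("display://0")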

With the input and output set-up, we can start with the main loop. We use the Capture() method to get the next input frame and pass it to the network using the Detect() method. We then use Render() to render the resulting image with detections and set the window title with SetStatus(). Notice that we break the loop if one of the inputs or outputs stops streaming data:

Capture and Detect loop example
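A sketch of that main loop, reusing the net, camera and display objects from the previous snippets (the status message format is illustrative):

while True:
    img = camera.Capture()                # get the next input frame
    detections = net.Detect(img)          # run the detector; overlays are drawn on img
    display.Render(img)                   # render the resulting image
    display.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
    # break the loop if the input or the output stops streaming data
    if not camera.IsStreaming() or not display.IsStreaming():
        break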

For example, this is one output produced by MobileNet, captured at Equinox's offices:

mobilenet output

Fig. 3. Object detection with MobileNet model

Another model is InceptionV2, the second iteration of the Inception architecture. It uses factorised convolutions and aggressive regularisation.

Using Jetson Inference, we can create the same detectNet instance but use the parameter “ssd-inception-v2” instead.

As we are using an instance from the same class, we can still use the Detect() method and obtain the same output, as this implementation of Inception also provides predictions for the 91 COCO classes.

detectNet instance with ssd-inception-v2 example
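A sketch of the change; only the network name passed to detectNet differs:

# Same detectNet class, different backbone
net = jetson.inference.detectNet("ssd-inception-v2", threshold=0.5)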

Deep model limitations

As we concluded from testing, people detection is unreliable with either of the two models. The confidence threshold becomes especially relevant when a person is detected in only some of the frames of a sequence. Even then, the model might confuse people with other objects, assigning a wrong label.

One of the main problems with the environment we are working in is that there is not enough computational power to run the full architecture of either model, so an optimised version with reduced capabilities is used instead.

This means there is a high error rate in people detection: the model cannot detect a single person consistently over a sequence of frames, or it detects them but assigns a wrong label. In scenarios with multiple people at the same time, the error rate is even worse. You can see some examples of this behaviour in Fig. 4:

inconsistent detections of mobilenetv2 and inception v2

Fig. 4. Inconsistent detections for (a) MobilenetV2 and (b) InceptionV2

As you can see, there are evident inconsistencies over a sequence of frames in the same scene. Neither model can reliably detect two overlapping people; sometimes they are detected but assigned a completely different class. In the context of a product, this means that a Jetson Nano on its own might not be enough to provide a reliable and robust implementation.

Detection algorithms

This implementation relies on two basic assumptions about the objects: they are all people and must move. We can take advantage of a controlled environment for a tracking implementation depending mainly on movement by detecting objects of specific dimensions that could be people. Object detection is done by implementing background subtraction.

Initially, to detect the moving objects in the video frames, we select the background scene and then subtract the current frame from the background. Note that both images must be grayscale. The resulting image shows the displacement of an object in the given sequence of frames. The next step is to binarise the resulting image to isolate the pixels whose values are higher than a given threshold (which you can tune depending on the lighting conditions of the environment).

This threshold determines the segmentation between the background and the moving objects; to help eliminate noise and better define the contours, dilation and morphological closing transformations are applied. Next, a black-and-white image is obtained, from which the objects with a certain number of pixels are selected. The process of identifying moving objects is illustrated in Fig. 5.

For each of the contours selected, the rectangular bounding box is found. Each box is determined by the x and y coordinates of the upper-left corner of the rectangle and by w and h, its width and height respectively. With those values, we calculated the coordinates of the centroids using Eq. 1.

cx = x + w/2,   cy = y + h/2

Centroid coordinates (Eq. 1)

Fig. 5. Movement detection process

Below you can see the code for motion detection using background subtraction. We implement the process illustrated in Fig. 5 inside the function detect_motion, which returns all the contours detected. Inside the main loop, we process the video input frame by frame, taking the first frame as the background. Next, we use detect_motion to get all contours and select objects with a particular area and width inside the for-loop. Finally, we compute the bounding box and the centroid for each resulting object.

Motion detection using background subtraction
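The sketch below follows the described pipeline with OpenCV (assuming OpenCV 4.x); the function name detect_motion matches the text, while the threshold, area and width limits are illustrative values you would tune for your environment:

import cv2

THRESHOLD = 40     # binarisation value, tune to the lighting conditions
MIN_AREA = 900     # minimum contour area (in pixels) to keep an object
MIN_WIDTH = 20     # minimum contour width (in pixels)

def detect_motion(background_gray, frame_gray):
    # Subtract the current frame from the background and binarise the result
    diff = cv2.absdiff(background_gray, frame_gray)
    _, binary = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    # Dilation and morphological closing reduce noise and close the contours
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.dilate(binary, kernel, iterations=2)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

video = cv2.VideoCapture(0)                      # camera index or a video file path
ok, background = video.read()                    # the first frame is the background
background_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = video.read()
    if not ok:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for contour in detect_motion(background_gray, frame_gray):
        x, y, w, h = cv2.boundingRect(contour)
        if cv2.contourArea(contour) > MIN_AREA and w > MIN_WIDTH:
            cx, cy = x + w // 2, y + h // 2      # centroid, as in Eq. 1
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.circle(frame, (cx, cy), 3, (0, 0, 255), -1)
    cv2.imshow("motion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()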

 

Motion detection limitations

Background subtraction is one of the simplest techniques and is computationally less expensive than a deep learning model. Unfortunately, this technique is susceptible to changes in the input frames, mainly lighting variation. In Fig. 6 you can see additional contours appear between consecutive frames (Fig. 6a and Fig. 6b) because of a slight change in the lighting, causing false-positive motion detections.

computer vision images with different light

Fig. 6. Lighting changes between frames

In this article, we described two approaches to people detection and tracking, showed their implementations on the Jetson Nano, and discussed some of their limitations. In particular, we showed how both the Jetson Inference models and the motion detection algorithm lose track of people or fail to detect them correctly when they are grouped together. Therefore, these models and techniques are recommended for applications with controlled environments, i.e., where the lighting does not vary significantly and the flow of people is controlled.

Object tracking and counting using Python

In this section we will explore a simple tracking algorithm that relies on object detections made by either a deep learning model or an image processing algorithm. This algorithm uses the centroids and bounding boxes of detected objects as inputs, saving the data and updating the object locations after the next frame of a video is processed. We will provide an example in the context of people counting, so we want to count how many people enter or exit a building. You can access the complete code in our Github repository.

The algorithm

In frame N, we receive the centroids of all objects detected using a detection algorithm or model, saving the data until the next frame is processed. In frame N+1, we receive the new object centroids and compute the Euclidean distance between any new centroids and the existing ones, updating each object centroid with the latest detections. To do that, we assume that pairs of centroids with the minimum Euclidean distance between them must belong to the same object, so the centroid location is updated to the latest detection. If the Euclidean distance between a new centroid and all existing objects is greater than a threshold, we assume it to be a new object and give it a new object ID. This process is illustrated in Fig. 1.

object tracking process

Fig 1. Object tracking process

In code, we define the class Tracker, which is in charge of saving and updating all detected objects. First, we start by defining its attributes:

class tracker attributes example
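A sketch of how those attributes might be declared in the constructor, based on the description below (default values are illustrative):

import random

class Tracker:
    def __init__(self, max_disappeared=30, max_distance=80,
                 max_history=10, limit=250):
        # Per-object data, all indexed by an integer ObjectID
        self.centroids = {}      # ObjectID -> list of recent centroids
        self.colours = {}        # ObjectID -> RGBA colour used for drawing
        self.boxes = {}          # ObjectID -> last bounding box
        self.disappeared = {}    # ObjectID -> frames without a detection
        self.counted = {}        # ObjectID -> already counted? (bool)
        self.nextID = 0          # next available ObjectID

        # Tracking parameters
        self.max_disappeared = max_disappeared   # tolerance before deletion
        self.max_distance = max_distance         # max distance to match a centroid
        self.max_history = max_history           # centroids kept per object

        # Counting parameters
        self.limit = limit       # y-coordinate of the entrance line
        self.people_in = 0       # people who entered the building
        self.people_out = 0      # people who exited the building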

 

Each object is identified by an ObjectID, which serves as an index for all dictionaries. For each object we save a list of centroids (variable centroids), a colour (variable colours), a bounding box (variable boxes), an integer that counts the number of times we lost track of the object (variable disappeared), and a Boolean that indicates if the object was already counted (variable counted).

Note that the ObjectID is given by an integer value, which we keep track of using the variable nextID.

For the object tracking operations, we set a tolerance for the number of frames an object can go undetected before being deleted, the maximum distance between new centroids and previous centroids for them to be considered the same object, and the maximum number of previous centroids we keep for each object. Finally, for the counting operations, we track the limit that a person must cross to be counted as entering or exiting the building, together with the number of people that have entered and exited.

Adding and removing objects

Now we will define some methods using these variables. First, for an object to be tracked, we need the detections produced by any object detection model or algorithm; that is, we need a centroid and a bounding box. We save the centroid inside a list (which can have a length less than or equal to max_history), and then we define a random colour as a tuple of four integers (three RGB values, keeping the alpha value at 255). Finally, we set the number of times the object has disappeared to 0, save the bounding box, mark the object as not counted and update the next available ID. The code is as follows:

centroid and bounding box example
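A possible shape for that registration method, continuing the Tracker sketch above:

    def add_object(self, centroid, box):
        # Register a new object under the next available ID
        self.centroids[self.nextID] = [centroid]
        self.colours[self.nextID] = (random.randint(0, 255), random.randint(0, 255),
                                     random.randint(0, 255), 255)
        self.boxes[self.nextID] = box
        self.disappeared[self.nextID] = 0
        self.counted[self.nextID] = False
        self.nextID += 1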

Conversely, we remove an object by deleting all data associated with the particular ObjectID:
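Continuing the sketch, the removal method might look like this:

    def remove_object(self, objectID):
        # Delete every piece of data associated with this ObjectID
        del self.centroids[objectID]
        del self.colours[objectID]
        del self.boxes[objectID]
        del self.disappeared[objectID]
        del self.counted[objectID]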

 

Calculating distances and updating centroids

Note that we save a list of centroids instead of a single centroid. However, this list has a unique property: it has to have a length less than or equal to max_history. In that sense, we will only store the last max_history detected centroids, and we do that by deleting the first item of the list if its length is equal to the maximum and then appending the new centroid. Finally, we update the bounding box of the object with the last detection:

bounding box update
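A sketch of that update, as a method of the Tracker class above:

    def update_object(self, objectID, centroid, box):
        # Keep at most max_history centroids: drop the oldest one if necessary
        if len(self.centroids[objectID]) >= self.max_history:
            self.centroids[objectID].pop(0)
        self.centroids[objectID].append(centroid)
        self.boxes[objectID] = box    # bounding box of the last detection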

 

But how do we append the correct centroid to the right object? As we said earlier, we use the Euclidean distance and append the centroid to the closest object. The method needs a list of available IDs (as an object can only be updated once per frame) and the centroid to be appended. We calculate the Euclidean distance for each object and return the ID of the object closest to the new centroid. Note that if no object is close enough (when all distances are greater than the maximum or no objects are available), this method returns -1:

append the centroid to the closest object example
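A sketch of that distance helper; the Euclidean distance is computed directly from the centroid coordinates:

    def get_min_distance(self, available_ids, centroid):
        # Return the ID of the closest tracked object, or -1 if none is close enough
        best_id, best_distance = -1, self.max_distance
        for objectID in available_ids:
            last = self.centroids[objectID][-1]
            distance = ((last[0] - centroid[0]) ** 2 +
                        (last[1] - centroid[1]) ** 2) ** 0.5
            if distance < best_distance:
                best_id, best_distance = objectID, distance
        return best_id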

 

Finally, we use both helper methods in the primary centroid updater method. First, we set the available IDs as all current objects. Then, for each detection (defined by a tuple containing a bounding box and a centroid), we use the method get_min_distance to obtain the closest object. If the ID is valid, we append the detected centroid to the existing object and remove it from the available objects; if not, we create a new object. Finally, if there are no more detections but we still have available objects, we assume that these objects disappeared, removing the objects that disappeared max_disappeared times:
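A sketch of that primary updater, where each detection is assumed to be a (box, centroid) tuple as described:

    def update_centroids(self, detections):
        available_ids = list(self.centroids.keys())
        for box, centroid in detections:
            objectID = self.get_min_distance(available_ids, centroid)
            if objectID != -1:
                self.update_object(objectID, centroid, box)
                self.disappeared[objectID] = 0
                available_ids.remove(objectID)   # one update per object per frame
            else:
                self.add_object(centroid, box)
        # Objects that received no detection this frame are marked as disappeared
        for objectID in available_ids:
            self.disappeared[objectID] += 1
            if self.disappeared[objectID] > self.max_disappeared:
                self.remove_object(objectID)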

 

Updating and counting

There are two main methods in the Tracker class. The first is the update method, which uses a list of detections (defined the same way as before). If there are no detections, we assume that all objects disappeared, removing all objects that disappeared max_disappeared times. If there are no saved objects, we create objects for all detections. Finally, if there are saved objects and new detections, we update the centroids using the update_centroids method:
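A sketch of the update method under the same assumptions:

    def update(self, detections):
        if len(detections) == 0:
            # No detections: every tracked object disappeared this frame
            for objectID in list(self.centroids.keys()):
                self.disappeared[objectID] += 1
                if self.disappeared[objectID] > self.max_disappeared:
                    self.remove_object(objectID)
        elif len(self.centroids) == 0:
            # Nothing tracked yet: register every detection as a new object
            for box, centroid in detections:
                self.add_object(centroid, box)
        else:
            # Otherwise, match the new detections against the existing objects
            self.update_centroids(detections)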

 

The second method is count_people, which iterates over all centroids that have not yet been counted. First, we calculate the direction the object is moving in from the difference between the last centroid and the mean of the previous centroids. If the object is moving upwards, the last centroid is above the limit, and the mean of the previous centroids is below the limit, then we count the object as entering the building. Similarly, if the object is moving downwards, the last centroid is below the limit, and the mean of the previous centroids is above the limit, then the object is exiting the building:

count people method
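A sketch of count_people, assuming image coordinates where y grows downwards and self.limit is the y-coordinate of the entrance line:

    def count_people(self):
        for objectID, history in self.centroids.items():
            if self.counted[objectID] or len(history) < 2:
                continue
            previous_mean = sum(c[1] for c in history[:-1]) / (len(history) - 1)
            last_y = history[-1][1]
            direction = last_y - previous_mean
            # Moving up and crossing the limit: the person entered the building
            if direction < 0 and last_y < self.limit and previous_mean > self.limit:
                self.people_in += 1
                self.counted[objectID] = True
            # Moving down and crossing the limit: the person exited the building
            elif direction > 0 and last_y > self.limit and previous_mean < self.limit:
                self.people_out += 1
                self.counted[objectID] = True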

 

With the complete code, you can start identifying and tracking objects detected by any algorithm of your choice. For example, if you are processing a video frame by frame, you would get the bounding boxes and centroids of all objects and pass them to the update method of this class on each frame. Similarly, you can call the method count_people on every frame to update the counters based on the data that is stored at that moment.

Remember that, to use this object, you need to provide a bounding box and a centroid for all objects in each processed frame of a video. The following example was taken using an InceptionV2 implementation in the Jetson Inference library:
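A minimal usage sketch along those lines, combining the Jetson Inference InceptionV2 detector with the Tracker class above (in the COCO label map used by these models, class ID 1 corresponds to 'person'; the input file name is illustrative):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-inception-v2", threshold=0.5)
video = jetson.utils.videoSource("people.mp4")     # or "csi://0" for a camera
display = jetson.utils.videoOutput("display://0")
tracker = Tracker(limit=250)                       # the Tracker class defined above

while video.IsStreaming() and display.IsStreaming():
    img = video.Capture()
    detections = []
    for d in net.Detect(img):
        if d.ClassID == 1:                         # keep only 'person' detections
            box = (int(d.Left), int(d.Top), int(d.Width), int(d.Height))
            centroid = (int(d.Center[0]), int(d.Center[1]))
            detections.append((box, centroid))
    tracker.update(detections)
    tracker.count_people()
    display.Render(img)
    display.SetStatus("In: {} | Out: {}".format(tracker.people_in, tracker.people_out))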

 

In this example, we use a video as input, detecting people in each frame and passing the corresponding bounding boxes and centroids to the Tracker object. Note that the quality of detections influences the tracking algorithm, so if your chosen detection algorithm fails to detect objects in a specific frame, the tracker could assume that the objects disappeared.

Conclusion

In this section, we explored a simple tracking algorithm and a counting algorithm that can be used together with any object detection algorithm or deep learning model. We defined the Tracker class, which uses bounding boxes and centroids to identify and keep track of moving objects in a video, using the Euclidean distance to associate new data with previous objects. Note that you have to tune the values for the number of times before an object disappears, the number of saved centroids, and the maximum distance between a centroid and new detections, to optimise the algorithm and get better performance.

bts collab with louis vuitton

Day after day, luxury brands are diving deeper into the technology world. Recently, it has become more common for fashion, vehicles, and jewellery brands to add digital experiences to strengthen the interactions between their clients and technology and create brand loyalty. This blog will explain why and how they do it and their reward after digital transformation.

Let’s start… What is a luxury brand?

luxury brands collage

Influencer's collage with luxury brand articles – Pinterest

First, it is essential to define what a luxury brand is. Luxury brands aim to embody two central values: the first is exclusivity and the second is identity.

-Exclusivity gives status to the brand. The idea is that not all people can get the brand's articles because their price is higher than the price of comparable products in the market. In that way, the clients who buy a luxury object feel that they have a higher social status than those who don't; in other words, they feel that the brand adds value to themselves.

-Exclusivity is also reflected in the limited production of each item. Generally, luxury brands don't mass-produce; they create strategies to launch only a few copies, creating an effect of high demand and low supply. That way, the consumer who gets a product feels unique.

-Luxury brands don't compete with similar brands in the market; they do reflective and introspective work in which they improve themselves to be more recognised and keep their luxury brand label. The identity of these brands is so well recognised that, just because of their logo, they can charge up to 100 times more for the same item.

Luxury brands and technology

Day after day, with humans’ constant interaction with technology, new advances have emerged, and new ways of selling and promoting brands have appeared. We have migrated from catalogues to e-commerce and from websites to purchases on social networks.
That’s why the most prestigious brands adapt to these new dynamics and to the latest and future generations of consumers, who live very differently from their grandparents. In the following paragraphs, I will list some of the main reasons brands use technology more and more every day.

Connect with new customers

One of the main reasons is to connect with a new audience. Luxury brands have begun launching strategies to captivate Millennials, who quickly adapted to connectivity and on-demand entertainment, and the next generation, Generation Z (under 25 years old), who have taken technology for granted since birth. Brands know that these are not the generations with the greatest purchasing power at present, but they are the ones with the most influence on the digital market.

pink gucci glasses

Gucci glasses in influencer account picture

A good example is LOUIS VUITTON. The brand started a collaboration with BTS, one of the biggest boy bands of the moment, to reach the band's young fans, and launched a new collection through a futuristic video on YouTube, one of the most visited platforms, with the members of the group modelling. In a few hours, the video reached two million views, a figure the brand wouldn't achieve in the same period with catalogues or photographs.

bts collab with louis vuitton

BTS wore Louis Vuitton and displayed their immaculate fashion sense

Improve customer experience

Another reason why luxury brands are implementing technology is the consumer experience. Apps and QR code scans allow visitors to have the best possible experience in their stores by mixing reality with digital interaction.
BURBERRY, for example, has been a pioneering brand in technology implementation. Altagamma and the Italian National Fashion Chamber recognised the British brand for its Exceptional Digital Deals throughout 2020 *. The brand has been recognised for its expansion in digital stores and social networks (they were one of the first Instagram partners to test the shopping functionality). The English brand is also known for its immersion in video games and shopping tools, improving and facilitating customer experience.

videogame character dressed with burberry

Burberry designs skins for Honor of Kings characters

Regarding the consumer experience, BURBERRY has brought augmented reality into its app. The app allows consumers to see how the garments they sell would look in a person's daily environment and enables customers to pay for them immediately through the app, without queues or delays.

burberry ar experience app

Burberry AR experience

Additionally, at the launch of the Olympia bag, the brand decided to bring to life the Greek goddess who gave the piece its name. By scanning a QR code, consumers could observe the statue in 3D and capture images of it, creating a luxurious experience that is not common, at least at the moment, in other stores.

burberry olympia ar experience

Burberry invites customers into the world of Olympia with its latest augmented reality experience

Being sustainable and green

Luxury brands are renewing themselves to act against the current environmental problems and to provide solutions that align with the new generations’ ideals. That way, they retain consumers and have a clear and strong voice in the market. Moreover, with the help of technology, they are improving processes and materials.
An example of green renewal is CHANEL. The brand invested in a green chemistry startup that is experimenting with liquid silk to produce high-quality textiles, avoiding the use of toxic and environmentally harmful products, as well as the use of animal skins. This way, CHANEL explores innovative materials and mechanical and optical improvements in different fabrics of unique quality.
Another example is Stella McCartney, who works with other designers and Google to use Data Analytics and Machine learning on Google Cloud. The aim is to give other brands a better understanding of the supply chain to measure their environmental impact and reduce it.

chanel model on the catwalk

Chanel cruise collection Winter 2020

What are they getting in exchange?

According to a Forbes study, Generation Z was responsible for only 4% of luxury sales worldwide in 2018; by 2020, however, luxury sales in this market niche had increased to 10%, and in regions like China they grew to 15%. This appears to be an effect of the adoption of emerging technologies and digital marketing tools to create content (such as augmented reality, virtual reality, and Artificial Intelligence chatbots), and of collaborations to create digital versions of products or collections (see the virtual Gucci sneakers that cost $12).

In summary, luxury brands keep expanding to win new, loyal consumers. These brands improve the shopping experience, either online or in person, with the help of technology. They are greener and more sustainable, which has increased their sales among the new generations and led them to implement new, more robust, and updated strategies that target the digital and collaborative market.

REFERENCES

https://www.forbes.com/sites/josephdeacetis/2020/10/04/how-technology-is-helping-luxury-fashion-brands-to-gain-traction/?sh=1630aa793640

https://ww.fashionnetwork.com/news/Chanel-invests-in-green-chemistry-startup,1108103.html

https://www.burberryplc.com/en/news/brand/2021/burberry-invites-customers-into-the-world-of-olympia-with-its-la.html

carla acosta

Carla Acosta – Visual Designer


By: Esteban Peña 

RPA Engineer

robot hand

INTRODUCTION

RPA is a technology that allows the creation of bots by observing digital human actions [1] and by identifying and interacting with technological processes. For this, software has been developed to cover more environments to automate, using tools such as selector detection or image recognition to achieve reliable interaction. The next step for RPA is for the bot to identify and perform process tasks by itself; this step is IPA (Intelligent Process Automation), in short, an intelligent bot.
The role of AI in IPA is to extend the bot’s capacities, using technologies that allow it to reach new levels of productivity [2]; for example, better execution times and a deeper understanding of automated processes.
RPA is reaching a new era in which greater complexity and high profitability are possible.

WHAT IS IPA?

IPA is the use of AI disciplines to give the bot an emerging set of new technologies that mix the redesign of fundamental processes with robotic process automation [3]. These tools first allow the consolidation of a process in which the bot can react to cases outside those previously programmed, resulting in a mutualistic relationship: RPA is the body that executes the process, and AI is the brain that deals with the different situations that arise.

CAPABILITIES

IPA advances are directed towards three main objectives: interaction with the environment (Computer Vision), an extension of automation possibilities (Process Mining), and document recognition (Intelligent Document Processing).

COMPUTER VISION

chess game seen through computer vision

Computer Vision allows the bot to identify interfaces more efficiently and accurately; it eliminates the dependence on selectors while keeping workflows familiar for RPA programmers [4], through a more reliable identification of the applications the bot works with. Computer Vision also provides tolerance to interface changes, which increases the number of automatable processes and decreases the probability of errors when identifying an element in an interface.
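To make this concrete, here is a minimal sketch of selector-free element detection using OpenCV template matching. The file names, the 0.8 confidence threshold, and the fallback message are illustrative assumptions, not part of any particular RPA product.

```python
# Minimal sketch: locate a UI element on a screenshot without selectors.
# "screenshot.png" and "submit_button.png" are hypothetical input files.
import cv2

screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("submit_button.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the screenshot and score every position.
scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

if best_score > 0.8:  # tolerance threshold; small UI changes still match
    h, w = template.shape
    centre = (best_loc[0] + w // 2, best_loc[1] + h // 2)
    print(f"Element found at {centre} (score {best_score:.2f})")
else:
    print("Element not found; fall back to selector-based identification")
```

Because the match is visual rather than tied to an application's internal selectors, the same approach can work on virtual desktops or remote sessions where selectors are unavailable.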

PROCESS MINING

With process mining, it is possible to analyse the flow of information using AI to identify automation opportunities across the company scientifically [2], thus facilitating requirements gathering and, as an additional value, producing a more reliable To-Be process when the automation is built.
At the same time, it provides the opportunity to evaluate automation opportunities beyond the biases that Business Analysts may have.
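As an illustration of the kind of analysis involved, the sketch below derives a simple directly-follows view of an event log with pandas, a common first step in process mining; the CSV file and its columns (case_id, activity, timestamp) are hypothetical.

```python
# Minimal sketch: count which activity directly follows which, per case.
# Frequent, repetitive paths are candidates for automation.
from collections import Counter

import pandas as pd

log = pd.read_csv("event_log.csv", parse_dates=["timestamp"])  # hypothetical file
log = log.sort_values(["case_id", "timestamp"])

transitions = Counter()
for _, case in log.groupby("case_id"):
    activities = case["activity"].tolist()
    transitions.update(zip(activities, activities[1:]))

for (src, dst), count in transitions.most_common(10):
    print(f"{src} -> {dst}: {count} times")
```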

INTELLIGENT DOCUMENT PROCESSING

IDP uses AI tools such as RPA, machine learning, and natural language processing (NLP) to extract, validate, and process document data [5], thereby increasing the value provided by the bot and feeding information systems that not only reduce work hours but also enable faster and more effective decision making.
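A deliberately simplified sketch of the extraction step is shown below. Real IDP pipelines combine OCR, NLP, and machine-learning models; this example only applies regular expressions to an assumed snippet of OCRed invoice text.

```python
# Minimal sketch: pull key fields out of OCRed invoice text with regexes.
import re

ocr_text = """
Invoice No: INV-2023-0042
Date: 12/05/2023
Total: GBP 1,250.00
"""  # hypothetical OCR output

patterns = {
    "invoice_number": r"Invoice No:\s*(\S+)",
    "date": r"Date:\s*([\d/]+)",
    "total": r"Total:\s*([A-Z]{3}\s*[\d.,]+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, ocr_text)
    extracted[field] = match.group(1) if match else None

print(extracted)
# The validated values would then be handed to the RPA bot for entry
# into downstream systems.
```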

EXPECTATIONS

Given the advances in AI and the growing impact of digital artifacts in daily life, the number of automatable digital processes is expected to increase, and in the future most robots will be influenced by AI to a lesser or greater extent. Robots’ functions will be manipulated, altered, or expanded using the knowledge this discipline can provide and, in return, the field of what can be automated will grow. More refined methods will appear, not only for treating the data used, but also for interacting with applications on different machines.

CONCLUSIONS

1. IPA is a branch of RPA whose influence goes beyond development; it plays an essential and exciting role in each of the processes carried out in a project, from analysing how automatable a process is to testing it.

2. Just as RPA can benefit from using AI in IPA, several AI fields can be improved by this technology, as it can provide support and insight to AI tools in real time.

REFERENCES

  1. What is RPA: https://www.automationanywhere.com/la/rpa/robotic-process-automation
  2. Automate more processes by bringing AI into RPA: https://www.uipath.com/product/ai-rpa-capabilities
  3. Intelligent process automation: The engine at the core of the next-generation operating model: sipotra.it/wp-content/uploads/2017/04/Intelligent-process-automation-The-engine-at-the-core-of-the-next-generation-operating-model.pdf
  4. Automate on dynamic interfaces and virtual desktops: https://www.uipath.com/es/product/platform/ai-computer-vision-for-rpa
  5. Intelligent Automation: How Combining RPA and AI Can Digitally Transform Your Organization: https://www.ibm.com/cloud/blog/intelligent-automation-how-combining-rpa-and-ai-can-digitally-transform-your-organization

doctor writing

Authors:

Álvaro Valbuena – Data Scientist  Equinox AI Lab

Tifanny Fresneda – UX Researcher Holistic Design Lab

collage of health and technology items

Globally, the healthcare system has changed the way users connect to different types of medical services. After COVID-19, telemedicine has been adopted worldwide. In 2018, this modality of medicine was valued at USD 38,046 million and is expected to grow to USD 103,897 million by 2024 (Frontiersin, 2018).

According to the article How to Measure the Value of Virtual Health Care (2021), before COVID-19 telemedicine accounted for only 1% of care in the United States. At that time, virtual care had no direct relationship with a health professional behind a screen, i.e., there was a total disconnection between the virtual ecosystem and face-to-face care.

Moving on to a more specific context: in Colombia, according to MinSalud figures, by April 30th, 2021, only 4% of health service provider institutions had enabled the telemedicine modality (MinSalud, 2021). Before the COVID-19 pandemic, the number of appointments performed through telemedicine in Colombia did not reach 50,000 (Delgado, 2020).

On the other hand, during the period between March 2020 and March 2021, a total of 49.5 million teleconsultations were performed under this modality (MinSalud, 2021). Additionally, over the same period, 32.5 million telehealth consultations were carried out, an average of 2.5 million per month (MinSalud, 2021).

Due to the pandemic, medical care through teleconsultation increased considerably, undeniably generating significant challenges for the sector. As a result, the operations and technology centres of medical care centres and hospitals have been transformed, and the experience of medical health services has had to face new ways of solving users’ challenges and needs when they use the medical service virtually.

Opportunities and new needs in telemedicine

mom and boy in telemedicine appointment

The benefits that telemedicine has brought with it should be emphasised: for example, shortening distances and times for users, more efficient operating processes, and a model of care based on clinical suitability. This means it is possible to decide which patients should and can be attended by teleconsultation, which can be attended by traditional face-to-face medicine, and who can receive hybrid care. It also becomes possible to decide who will be able to monitor and remotely control medical and surgical procedures, and which communities will be protected by reducing their exposure to hospitals and centres with a high potential infectious load, among others (Márquez, 2020).

Thus, the telemedicine service has solved several of the needs that the pandemic created in the health service. But the use of this new modality has also originated new pains and conditions in the service. It differs from traditional care: the doctor-patient interaction has been weakened, and some users claim that the empathy doctors show in front of a screen is lower than usual (Márquez, 2020).

Patients also report a fear of misdiagnosis, because telemedicine leaves gaps in the physical examination, and this can make the service experience less than optimal for the patient. Many physicians were not trained in this type of care before the pandemic, which created difficulties when it came to humanising a service provided in a purely digital way.

In addition, another pain of this modality is that it requires the population to use smart devices, while the country has a population with low purchasing power for this type of commodity and high levels of digital illiteracy (Márquez, 2020).

Design Thinking in healthcare

Faced with these new needs, design thinking generates value by promoting ideas that innovate and produce solutions to user pains in the health sector. Design can intervene at any touchpoint of the patient’s medical care: from the first approach to the doctor, through appointment preparation and treatment follow-up, to the end of the intervention.

Through in-depth interviews, it is possible to understand patients’ needs in the health service experience, understand their context, and generate valuable solutions consistent with their realities. Then, based on the information collected and the preliminary findings, brainstorming and co-creation sessions apply creativity-based techniques to generate as many solutions as possible, from which the most valuable are chosen.

Finally, the aim is to convert these ideas into tangible prototypes to visualize and corroborate these opportunities and reach a successful final result.

The ultimate purpose of applying design thinking to any sector is to improve the user experience across all service touchpoints: to create new dynamics that change some moment of the journey, or to generate points of contact (both digital and non-digital) that improve or enhance the patient’s experience.

Artificial Intelligence in the development of the user experience

brain illustration with lights

Now, why use AI to improve the user experience? Because Artificial Intelligence (AI) also has the potential to transform the way healthcare is delivered and can lead to better outcomes and improve productivity and efficiency in service delivery. 

This technology can produce a wide range of improvements, such as better focusing a physician’s efforts to create a diagnosis or early detection of the development of more severe conditions before they arise.

Although AI is rising and its long-term implications are uncertain, its future applications in healthcare delivery and how each of us thinks about our health may be transformative. 

We can imagine a future where data obtained from wearables, portable devices, and implants change our understanding of human biology, enabling personalized, real-time treatment for all.

Doctor-free testing

man using his smartwatch

A preference for remote monitoring and diagnostic testing is currently trending. In addition, the relevance of algorithms for sorting and classifying stored patient data has changed the dynamics of visiting the doctor (Future Today Institute, 2021).

Thanks to the use of smartphones and smartwatches, blood pressure readings and electrocardiograms are just a click away. In addition, the data recorded and stored in the cloud every day has made it possible to monitor and diagnose people’s health status in real time (Future Today Institute, 2021).

 “People who wear an Apple Watch know that an abnormally high or low heart rate or rhythm may suggest atrial fibrillation” (Future Today Institute, 2021).

Thus, the aforementioned diagnostic devices link to moments of need for contact, consultation, and follow-up/treatment. In other words, the data collected by this type of device in the home will allow quick and timely detection of health problems, generating alerts for the user to inquire more about their medical condition and approach specialists in the sector.

On the other hand, in this first need for contact, the dynamics of taking action begin to change through these devices: when patients detect atypical data in the readings their devices generate daily, they can contact their healthcare provider to evaluate the problem in greater detail.

The ability of these devices to automatically connect patients with doctors and pharmacists reduces the burden on users of taking action about their medical condition. Automating these processes will allow better medical care, more assertive diagnoses, and a faster connection with the entire healthcare ecosystem around the patient, alleviating the teleconsultation pains mentioned above, such as the fear of misdiagnosis.

Likewise, the data generated will provide a more efficient medical consultation and a better follow-up of the treatment given; this data will help the medical staff to have an accurate and precise diagnosis of the pathology at the time of the consultation.

The creation of these devices has changed the way patients interact with their doctors and the healthcare system. However, these devices do not come out of nowhere; they are born from a rigorous process of conceptualisation through Design Thinking and the technological development behind it. In short, they are born thanks to methodologies (such as AI4UX from Equinox AI Lab) that unite the best of both fields of knowledge.

AI models could be used to analyse the data generated by wearables (for example, a portable heart monitor or a smartwatch) to identify, among many measurements, those that could represent severe conditions and should then be analysed by specialised medical personnel. This approach can optimise the time specialists spend on these tasks by helping them focus their efforts on the most critical data, avoiding the review of thousands of measurements that do not represent significant values.
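As a rough illustration, the sketch below flags unusual heart-rate readings with scikit-learn’s IsolationForest so that only suspicious values would reach a clinician. The simulated data, the contamination rate, and the triage idea are illustrative assumptions, not a clinical-grade model.

```python
# Minimal sketch: flag outlier heart-rate readings for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated resting heart rates (bpm) with a few abnormal episodes appended.
heart_rate = np.concatenate([rng.normal(72, 5, 1000), [178, 41, 185]])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(heart_rate.reshape(-1, 1))  # -1 marks outliers

suspicious = heart_rate[labels == -1]
print(f"{len(suspicious)} of {len(heart_rate)} readings flagged for review")
print(np.sort(suspicious))
```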

Pharmacies as health allies

woman and man in a drugstore

Pharmacies have been the most frequent point of contact for people seeking immediate and fast medical care (Pérez & Silva, 2015). As family and community spaces, they play a strategic role in advising patients and connecting them with other health services.

If we add to this connection the use of data related to patients’ health, the role of these places would be enhanced and would evolve further within the sector. For example, the pilot programme of CVS Pharmacy, with more than 1,000 branches in the United States, automatically analysed and detected customers with high rates of non-compliance in the control and management of their health conditions. This information enabled pharmacists at each branch to prioritise patients for training programmes and individual counselling to prevent or treat chronic diseases (CVS Health, 2021).

This pilot showed more judicious medication use by patients, a closing of gaps in medical care, an increased focus on preventive medicine, fewer unnecessary doctor or emergency visits, and lower medical costs (CVS Health, 2021).
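As a rough illustration of how such non-compliance might be detected from data, the sketch below computes a proportion-of-days-covered (PDC) adherence score from refill records using pandas. The column names, the sample data, and the 80% cut-off are assumptions for the example, not CVS’s actual method.

```python
# Minimal sketch: flag patients whose medication refills cover too few days.
import pandas as pd

refills = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2021-01-01", "2021-02-05", "2021-03-20", "2021-01-01", "2021-04-15"]),
    "days_supply": [30, 30, 30, 30, 30],
})

def pdc(group: pd.DataFrame, period_days: int = 120) -> float:
    """Fraction of the observation period covered by dispensed medication."""
    days = pd.date_range(group["fill_date"].min(), periods=period_days, freq="D")
    covered = pd.Series(False, index=days)
    for _, row in group.iterrows():
        end = row["fill_date"] + pd.Timedelta(days=int(row["days_supply"]) - 1)
        covered.loc[row["fill_date"]:end] = True
    return covered.mean()

adherence = refills.groupby("patient_id").apply(pdc)
print(adherence[adherence < 0.8])  # patients below the usual 80% cut-off
```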

It is not just a matter of generating data for its own sake. It is essential to give this information a valuable purpose that addresses patients’ needs and provides a higher level of medical well-being. Currently, there is a pain point in the virtual care of patients: it requires the support of non-specialised medical personnel for the physical care of patients who cannot go to medical health centres due to mobility problems or lack of time or resources.

At first glance, it is possible to say that this pain could be covered by non-specialized health personnel in pharmaceutical centers in the community. Still, a strategic Design Thinking process would be of great importance to design the whole experience and correctly address every need and opportunity in this situation, from the appointment request to the follow-up of the condition and closure of the process.

In this case, a possible AI solution could be analyzing the different drugs that a person consumes to identify potential diseases that could develop because of their doses and the side effects that the current drugs could create. Therefore, this approach could be a step forward in implementing preventive medicine since it is looking for ways to foresee possible medical conditions so that the user can take the necessary measures.
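A minimal sketch of this idea is shown below: a patient’s medication list is screened against a small, hand-written interaction table. The table entries and function name are hypothetical; a real system would rely on a curated pharmacological database or a model trained on clinical outcomes.

```python
# Minimal sketch: warn about known risky combinations in a medication list.
RISKY_COMBINATIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "possible hyperkalaemia",
}

def screen_medications(medications: list[str]) -> list[str]:
    """Return a warning for every known risky pair present in the list."""
    meds = {m.lower() for m in medications}
    warnings = []
    for pair, risk in RISKY_COMBINATIONS.items():
        if pair <= meds:  # both drugs of the pair are being taken
            warnings.append(f"{' + '.join(sorted(pair))}: {risk}")
    return warnings

print(screen_medications(["Warfarin", "Ibuprofen", "Metformin"]))
# -> ['ibuprofen + warfarin: increased bleeding risk']
```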

CONCLUSIONS

As the previous examples show, the business opportunities are many, considering that other types of approaches can be developed for these same situations. There are many opportunities for improvement if you look closely at the whole process a user goes through, from the moment the patient books a medical appointment to the completion of treatment. Identifying the user’s most important pains, combined with technological tools such as Artificial Intelligence, will greatly impact the user experience and optimise resources in the health system.

In countries where the transition to new technologies has already started, those who correctly identify the user’s pains and choose the most suitable technologies will be able to take advantage of all the opportunities that open up by addressing problems through UX.

REFERENCES

Angela Spatharou, Solveigh Hieronimus, and Jonathan Jenkins. (10 March 2020). Transforming healthcare with AI: The impact on the workforce and organizations. McKinsey & Company. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/transforming-healthcare-with-ai

CVS Health. (2021). Health Trends Report. Retrieved from https://www.cvshealth.com/sites/default/files/cvs-health-trends-report-2021.pdf

Delgado, H. (2020, July 19). «La pandemia generó una transformación en el sistema de salud»: Presidente de Acemi. El País. Retrieved 29 May 2022, from https://www.elpais.com.co/economia/la-pandemia-genero-una-transformacion-en-el-sistema-de-salud-presidente-de-acemi.html

Fuller D, Colwell E, Low J, Orychock K, Tobin MA, Simango B, Buote R, Van Heerden D, Luan H, Cullen K, Slade L, and Taylor NGA. (8 September 2020). Reliability and Validity of Commercially Available Wearable Devices for Measuring Steps, Energy Expenditure, and Heart Rate: Systematic Review. https://mhealth.jmir.org/2020/9/e18694/

Future Today Institute. (2021). Tech Trends. Miami, United States. Retrieved from https://asesoftware-my.sharepoint.com/:b:/p/tfresneda/EdG9OxyuRoJPoWw_IGtzpg4BYF_m3x2Hc6cJP7EZU6ox3A?e=8SK6Xs

MinSalud. (2021a). Cifras Aseguramiento en Salud. Retrieved 29 May 2022, from https://www.minsalud.gov.co/proteccionsocial/Paginas/cifras-aseguramiento-salud.aspx

MinSalud. (2021b). Gasto de Salud en Colombia. Retrieved 29 May 2022, from https://www.minsalud.gov.co/proteccionsocial/Financiamiento/Paginas/indicadores_generales.aspx

Márquez V., Juan Ricardo. (2020). Teleconsulta en la pandemia por Coronavirus: desafíos para la telemedicina pos-COVID-19. Revista Colombiana de Gastroenterología, 35(Suppl. 1), 5-16. https://doi.org/10.22516/25007440.543

Omar Ford. (20 September 2020). 10 FDA Cleared or Approved Wearable Devices that Redefined Healthcare. Medical Device and Diagnostic Industry. https://www.mddionline.com/digital-health/10-fda-cleared-or-approved-wearable-devices-redefined-healthcare/

Frontiers in Public Health. (2020). Telemedicine Across the Globe: Position Paper From the COVID-19 Pandemic Health System Resilience PROGRAM (REPROGRAM) International Consortium (Part 1). Retrieved from https://www.frontiersin.org/article/10.3389/fpubh.2020.556720

