If AI Is Here to Stay, It Should Do the Right Thing

Artificial Intelligence (AI) is one of the most fascinating staples of sci-fi movies, where it often appears as physical robots, smart computers, or software woven into everyday society. We are now watching that technology move from the big screen into our reality, with AI applications reminiscent of the movie Minority Report becoming part of our day-to-day lives.

In the last few years, AI systems have played important roles in decision-making tasks. Surveillance systems use facial recognition to identify criminals; criminal courts use AI to predict the likelihood of future offences during sentencing; recruiting agencies use AI to find the best candidates for their advertised positions. AI systems have become an increasingly large part of modern life and now control several aspects of it. With AI influencing how the world operates, it is important that the decisions these systems make are fair and ethical.

Research has shown that the same AI systems influencing our lives carry social biases, introduced either consciously or unconsciously through the developers' own experiences and beliefs. In addition, the technology is moving so fast that fixing these biases becomes even harder. AI systems were not intentionally built to be biased but, as with any new technology, they reflect the biases of their creators.

To address AI bias, we first need to understand the bias of these systems' creators, or, in other words, our own natural bias. We are all naturally shaped by cultural, familial, and personal beliefs. When we transfer our knowledge to someone, we transfer our biases as well, and they in turn become biased by our thoughts and beliefs. Societal bias is a dominant problem that has hampered humans since the dawn of civilization, giving rise to racism, sexism, and other intercultural conflicts.

The same mechanism applies in Machine Learning, an AI approach that builds systems from large datasets. These systems generally require huge amounts of training data during the learning process to achieve acceptable performance. The training data is provided and labelled by a specialist and therefore carries the specialist's bias; as a consequence, the AI system learns that same bias.
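A minimal sketch can make this concrete. The toy dataset and "model" below are entirely hypothetical, not any real system: a human labeler systematically marks one appearance group as suspicious regardless of its actions, and a classifier trained on those labels faithfully reproduces the bias.

```python
from collections import Counter

# Hypothetical labelled dataset: each record is ((appearance_group, action), label),
# where the label was assigned by a human specialist. These labels encode a
# systematic labeler bias: group "A" is marked suspicious no matter the action.
training_data = [
    (("A", "walking"), "suspicious"),
    (("A", "waiting"), "suspicious"),
    (("B", "walking"), "normal"),
    (("B", "loitering"), "normal"),  # biased label: the risky action is ignored
]

def train(data):
    """Learn the most frequent label observed for each appearance group."""
    by_group = {}
    for (group, _action), label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(training_data)

# The model reproduces the specialist's bias: it judges by appearance
# group, not by action.
print(model["A"])  # suspicious, even for harmless actions
print(model["B"])  # normal, even when loitering
```

However simplistic, this is exactly how label bias propagates in real systems: the learner has no way to distinguish a genuine pattern in the world from a pattern in the labeler's judgments.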

This bias has been observed in several recent AI systems and has negatively affected society. Major technology companies have deployed systems exhibiting both skin-color and gender bias, posing a grave threat to civil rights. Infamous examples include an image-recognition system that labelled African-Americans as gorillas, and sentencing tools that drew biased conclusions against African-Americans while underestimating future crimes by Caucasians.

There is a growing list of examples that collectively raise a more fundamental question: how can we transform AI systems to make fair decisions and help society by removing the barriers of exclusion?

Let’s look closer at AI systems based on visual information (facial recognition, AI-based surveillance systems, etc.), where aspects of visual appearance such as clothing, beards, skin color, and gender are the most common triggers of bias. In surveillance systems, for instance, such bias can lead to erroneous decisions that affect civil rights.

For those systems, what information should matter: an African-American man walking through an airport to catch a flight, or a Caucasian man loitering with suspicious behaviour? Which matters from a security perspective? Sadly, most current AI-based surveillance systems will weigh the appearance of the person and are prone to classify the African-American man as suspicious, when it is the Caucasian man whose behaviour actually warrants attention.

For the sake of fair judgment, the iCetana AI algorithm is not interested in appearance; it is only interested in suspicious or abnormal events triggered by actions. This is how AI-based systems should work, and it is what iCetana believes: action matters more than appearance, and an AI system can learn by itself what is normal and flag what is abnormal. Developing an unbiased system is possible, and achieving fair decision-making is feasible. This is where iCetana’s system shines – it understands the environment and highlights only the abnormal events.

iCetana’s AI algorithm was built so that it does not require training datasets provided and labelled by users. Instead, the system looks for unexpected changes in the data pattern. It therefore ignores appearance altogether, avoiding the learned bias described above and the threat to civil rights that comes with it.
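The general idea can be sketched as follows. This is an illustrative example of label-free anomaly detection, not iCetana's actual algorithm: a baseline of "normal" is learned from unlabelled measurements (here, a hypothetical per-frame motion score), and only strong deviations from that baseline are flagged. No human labels and no appearance features are involved.

```python
import statistics

def fit_baseline(motion_scores):
    """Summarise normal activity as the mean and standard deviation
    of motion scores observed during an unlabelled learning period."""
    return statistics.mean(motion_scores), statistics.stdev(motion_scores)

def is_abnormal(score, baseline, k=3.0):
    """Flag a score more than k standard deviations from the baseline."""
    mean, stdev = baseline
    return abs(score - mean) > k * stdev

# Typical quiet-scene motion scores observed while the system learns.
normal_period = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 1.0]
baseline = fit_baseline(normal_period)

print(is_abnormal(1.05, baseline))  # ordinary motion -> False
print(is_abnormal(9.0, baseline))   # sudden spike   -> True
```

Because the detector only ever sees aggregate activity patterns, there is no channel through which a labeler's prejudice about appearance could enter the model.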

Given its powerful AI core, iCetana has been applied to several use cases beyond surveillance, such as manufacturing and operational process optimization. Our system has helped organizations improve the security of their environments and reduce risk while increasing their return on investment.

We are living in exciting times with AI giving rise to many new capabilities. It is imperative that we make sure that we are doing the right thing to make these AI systems fair for all.

By Dr. Moussa Reda Mansour, R&D Lead at iCetana

iCetana is a Milestone Certified Solution Partner. Learn more about the integrated solution: https://www.milestonesys.com/solution-finder/icetana/icetana/
