IBM refuses to develop, research or sell Facial recognition

    The IBM CEO announced that the company will no longer develop facial recognition technology, citing racial and gender bias

    The new CEO, Arvind Krishna, has spoken out against facial recognition technology because he believes it promotes gender and racial bias. He stated that IBM would no longer condone this or any other technology used for mass surveillance, violations of basic human rights and freedoms, or racial profiling. He added that it is time for a national dialogue on the use of such technology by domestic law enforcement agencies.

    Advancements in the last decade have greatly improved facial recognition technology. The problems associated with it arise from the minimal regulation of the private companies that build it and a corresponding lack of federal oversight. The tool remains unreliable for security and law enforcement, as it has been shown to exhibit bias based on race, age, and gender.

    The National Institute of Standards and Technology published a study in December 2019 presenting empirical evidence of a wide range of accuracy across demographic groups in the majority of the facial recognition algorithms it evaluated. Privacy violations have also been a major thorn in the side of facial recognition.

    IBM has not made significant profits from its facial recognition business, since the technology is still in its early stages. Another giant dabbling in the field is Amazon, whose Rekognition software has also not been well received, though it has been tested by a number of law enforcement entities. For an enterprise vendor like IBM, competing with a product that is similar in functionality but barely used may not be profitable.

    Amazon has been repeatedly called out for the substandard accuracy of its facial recognition system, yet the software has not been taken off the market. Krishna has further stated that vendors and users of AI systems share a responsibility to test the technology for bias, especially when it is marketed to law enforcement agencies.

    Last year, IBM emphasized that its updated database of face data was the most diverse compared to other similar databases available at the time.

    Clearview AI, another firm offering a facial recognition tool, has had its product used by law enforcement agencies and private sector organizations. The firm built its database of 3 billion images in part from data scraped from social media platforms. It is currently mired in several privacy lawsuits and has been issued a number of cease and desist orders. In January 2020, Facebook was ordered to pay $550 million for unlawful use of facial recognition technology.

    In 2018, IBM had tried to rectify facial recognition bias by releasing a public data set intended to reduce bias as a subset of training data for the technology. In 2019, however, IBM was accused of taking data from Flickr without user consent and sharing it as a separate training data set. The data comprised nearly 1 million photos and was shared under a Creative Commons license. The firm stated that the data set was accessible only to verified researchers and contained only publicly available images, and that users could supposedly opt out of the data set if they wished.

    The ACLU has stated that it is time to invest in technologies that work to close the digital divide, and to avoid technologies that build surveillance systems promoting structural racism and policing abuse.