Computer vision analytics combined with AI and responsible ML are as good as the human eye or, in some specific cases, much better!
Someone explain to me the difference between a human eye and a camera on a mobile device or a surveillance system. Not much, if you ask me. Wikipedia defines the human eye as "an organ that reacts to light and allows vision," and says this about digital cameras: "Digital and digital movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device." Pretty close, no?
Not long ago, the leaders of three large companies (IBM's Arvind Krishna, Amazon's Jeff Bezos and Microsoft's Brad Smith) announced that they would suspend the development and commercialization of facial recognition technologies for certain markets (law enforcement) for fear of human rights violations. Wow... since when does a computing engine executing a data model or a convolutional neural network have a mechanism to make a racialized determination, or the capacity to violate human rights? Well, when human beings configure it to do just that. When a machine learning model recognizes a human face, it uses an optical device to capture a video or an image of that face, turns it into an embedding, and then compares it against an existing trained model. When a human eye recognizes a face, the brain processes the image and immediately transforms and classifies it into its judgment and emotion compartments, which allow for adoption, discrimination and/or ignorance. These companies, maybe, are taking a political stand on the uses of such technologies in an effort to be part of the news, perhaps...
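To make the point concrete, here is a minimal sketch of that comparison step, assuming the embeddings already exist as NumPy vectors (the function names and the 0.6 threshold are illustrative, not from any specific product). Notice where the human decision lives: someone chooses the threshold, the training data and the deployment, not the camera.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(embedding: np.ndarray, reference: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Declare a match when similarity exceeds a tunable threshold.

    The threshold is a human choice: set it loosely and the system
    produces false matches. The model and its training data decide
    who gets misidentified; the optics merely capture light.
    """
    return cosine_similarity(embedding, reference) >= threshold
```

The machine only computes a number and compares it to a cutoff; everything consequential about how that number is produced and used is configured by people.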
If I were racialized (I think I am; Caucasian white is a race), I would certainly prefer to be arrested for an expired license plate tag by a self-driving police cruiser equipped with optical devices and a Siri-like speech-to-text agent (artificial intelligence applying the law) that would hand me a socially distanced, contactless ticket, rather than by a stressed-out, overworked human police officer on his third shift who maybe did not have time to have lunch. Recognizing human faces at an airport security checkpoint and allowing boarding at a social distance is not a task that threatens human rights, if we just apply the rules as defined and encoded in a data model. Facial recognition and authentication for any type of analog or digital access, to improve our own security and to allow for predefined and tailored service offerings, is not a racialized procedure if we take advantage of the responsible uses of those very same technologies. Wait a minute, I just realized that I use the words "responsible use" a lot; machine learning models and computerized systems are not responsible, humans are.
The fact of the matter is that computer vision analytics, facial recognition, artificial intelligence and machine learning can greatly improve our analog and digital lives by making us safer, with less contact and more distance if desired, while offering every person the opportunity to protect their identity and personal data by turning our own faces into the input of advanced cryptographic and data protection algorithms. I think that our cognitive judgment and emotion compartments, which allow for adoption, discrimination and/or ignorance, can certainly use a little bit of computer rationality.
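As a toy illustration of "your face as a cryptographic input": real biometric systems use fuzzy extractors and secure sketches, because two captures of the same face never produce identical embeddings. The sketch below is an assumption-laden simplification (hypothetical function, arbitrary quantization step) that coarsely quantizes an embedding so small capture-to-capture variations collapse to the same bits, then hashes it with a per-user salt to derive a key.

```python
import hashlib
import numpy as np

def face_to_key(embedding: np.ndarray, salt: bytes) -> bytes:
    """Toy sketch: derive a reproducible key from a face embedding.

    NOT production biometric cryptography; real systems use fuzzy
    extractors. Coarse rounding absorbs small per-capture noise,
    and the salt keeps the derived key user- and service-specific,
    so the raw face data never needs to be stored.
    """
    quantized = np.round(embedding * 4).astype(np.int8).tobytes()
    return hashlib.sha256(salt + quantized).digest()
```

The appeal of this direction is that what gets stored is a salted digest, not the face itself: losing the database does not mean losing the biometric.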
Visit www.dicio.ai for more information.
Stephane Mathieu, 18-06-2020