Crime and Facial Recognition, or Lack of Recognition

By Brian Simpson

One area where AI has been used, with some disastrous results, is police use of facial recognition neural network programs. The leading case comes from Detroit in the United States, where a Black man, Robert Williams, was arrested for theft on the basis of surveillance footage. Williams was innocent and had an alibi, but he was arrested anyway on the strength of grainy, unclear footage. These programs have known limitations arising from their training data, which has been drawn largely from white faces; the resulting racial bias makes Black people up to 100 times more likely to be falsely identified than whites. There is now a movement to ban or restrict these technologies, which conduct one-to-many matches of suspects' faces against databases such as driver licence photos.

Given that the same sort of technology, among others, is in use in Australia, with no regulation of the use of the national facial recognition system, and with private corporate sources like Clearview AI holding databases of images scraped from the internet, a situation like Williams' is possible in Australia. This is an area where legislative protections are needed, but such measures are not high on the present government's agenda, which, like all Labor governments, enjoys the power the surveillance society confers, the UK being a prime example.

https://www.abc.net.au/news/science/2023-11-01/ai-facial-recognition-robert-williams-crime-prison/103032148

“One day in 2020, police arrested Robert Williams in his Detroit driveway, handcuffed him in front of his children, and took him away.

He had no idea what he'd done wrong.

That night he slept on a cold cell floor using his jacket for a pillow. His wife called the Detroit detention centre repeatedly, but got no answers.

The next day Mr Williams was told his alleged crime.

But the full story of why him — why police mistakenly thought he was the criminal — only surfaced later, after a lot of digging.

Finally, having sued the Detroit police, he learnt he was the victim of a faulty artificial intelligence (AI) facial recognition system.

And even as he fought to fully clear his name, the system continued operating. 

Police used it to identify suspected criminals. It still made mistakes.

And the men and women it falsely singled out? They were all African American, like Robert.

Now, with similar technology being used in Australia, and the government introducing legislation to partly regulate its use, Mr Williams is telling his story as a cautionary tale.

"I knew that that technology had evolved and this was a thing, but I didn't know they were just taking the technology and arresting people," he says, speaking from his home in Detroit.

"I didn't know the technology could make an arrest."

A powerful tool with a hidden flaw

Robert Williams' arrest in January 2020 was the first documented US case of a person being wrongfully detained based on facial recognition technology.

When the officers knocked on his door, police departments were in the midst of a technological revolution.

A new kind of powerful AI was driving a rollout of facial recognition in law enforcement.

It wasn't just the US. This was happening around the world, including in Australia.

For police, the benefits were obvious. Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who did the crime.

But the technology wasn't perfect.

Apart from facilitating a system of mass surveillance that threatened people's privacy, the new AI systems were racially biased.

They were significantly more likely to falsely identify people of colour.

Despite this documented problem, police relied increasingly heavily on AI systems for investigations.

Sucked into the criminal justice system

In 2018, a man in a baseball cap stole thousands of dollars' worth of watches from a store in central Detroit.

Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.

Mr Williams' photo didn't come up first.

In fact, it was the ninth-most-probable match.

But it didn't matter.

Officers drove to Mr Williams' house and handcuffed him. He'd never been arrested before. It felt like a movie, or a bad dream.

"And at the time we had a five- and a two-year-old," his wife Melissa Williams says.

"I was trying to keep them somewhat shielded, and also see what was happening."

The arresting officers didn't know the details of the crime. 

As they drove him to the detention centre, Mr Williams was sucked into the machine of criminal justice.

And once he was in the system, he'd spend years trying to get out.

"I tried to tell the guy he was making a mistake. I say, 'Y'all got the wrong guy.'"

"And he was like, 'Look, I'm just here doing my job.'"

Racial bias creeps into facial recognition

The reason the AI system identified the wrong guy goes back to a flaw in the way it was trained to detect faces.

Modern facial recognition uses a machine-learning method called neural networks, which recognise complex patterns in information.

Instead of being programmed with rules-based logic, they're "trained" on data.

For instance, if they're fed lots of photos with and without faces (where each photo is labelled to say whether it has a face or not), they learn through trial and error to identify faces within photos.

For facial recognition, the AI maps a face's distinctive features (such as the space between nose and mouth, or the size of the eyebrows), and then converts the image data into a string of numbers, or "faceprint", that corresponds to the colour and tone of these features.

It then runs this unique code through its database of other images to see if there's a close-enough match.
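
To make the "faceprint" and matching steps concrete, here is a minimal toy sketch in Python. It is not the Detroit system or any real product: embed_face below is a crude grey-level histogram standing in for a trained neural network, and the 0.6 threshold is an invented illustrative value.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Toy "faceprint": a normalised grey-level histogram of the image.
    # A real system would run a trained neural network here instead.
    hist, _ = np.histogram(image, bins=64, range=(0, 255))
    vec = hist.astype(float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # How alike two faceprints are; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_many_match(probe_image, database, threshold=0.6):
    # Rank every enrolled faceprint against the probe image, best first.
    # Mr Williams' licence photo surfaced ninth in a ranked list like this.
    probe = embed_face(probe_image)
    hits = [(name, cosine_similarity(probe, fp)) for name, fp in database.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: h[1], reverse=True)
```

The matching arithmetic is trivial; the quality of the system lives almost entirely in whatever stands in for embed_face, the network that produces the faceprints, and the data it was trained on.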

Simple, right? Yes, but the AI is only as good as its training data.

If it's mostly trained on one kind of face, it's significantly worse at accurately matching other kinds of faces.

And that's exactly what happened with a lot of facial recognition, including the system that falsely identified Mr Williams.

The AI was trained on a database of mostly white people. White people are disproportionately represented on the internet, and therefore also in image datasets compiled by scraping photos from social media.

And this wasn't the only problem. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.

As a result, the system was bad at accurately matching faces of people of colour.

Racial bias quietly crept into facial recognition.

Since most people working in AI were white, they didn't notice.

In 2019, a US study of more than 100 facial recognition systems found they falsely identified African American faces up to 100 times more than Caucasian faces.

This study included algorithms used in the facial recognition system that picked out Robert Williams' licence photo.

By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.

But they couldn't prove it was happening, says Phil Mayor, a senior staff attorney at the American Civil Liberties Union (ACLU) of Michigan.

"All sorts of people around the country were saying this technology doesn't work," he says.

"We knew this was going to happen."

'The computer got it wrong'

Alongside his mugshot, Mr Williams had his fingerprints and DNA taken, and was held overnight.

The next day, two detectives took him to an interrogation room and placed pieces of paper face down on the table in front of him.

They explained these were blown-up security tape stills from a store that was robbed, about 30 minutes' drive from his house.

Then they turned the photos face up, one by one.

The first photo showed a heavy-set black man in a red baseball cap standing beside a watch display.

The second was a blurry close-up.

It clearly wasn't Robert Williams.

"I wanted to ask, 'Do you think all black people look alike?' Because he was a big black guy, but that don't make it me though."

One of the detectives then asked, "So the computer got it wrong?"

It was Mr Williams' first clue that the arrest was based on facial recognition.

"And I'm like, 'Yeah, the computer got it wrong.'"

Mr Williams later found out police did almost no other investigative work after getting the computer match.

If they'd asked him for an alibi, they'd have found he couldn't have done the crime.

A video on his phone proved he was miles away at the time of the theft.

AI keeps falsely identifying black men and women 

Mr Williams was released from detention that night, but his journey through the justice system was only just beginning.

His mug shot, fingerprints and DNA were still on file, and he needed a lawyer to defend against the theft charge.

He hired the ACLU's Phil Mayor, who got the charge dismissed in court.

But Mr Williams wasn't done. He then campaigned for Detroit police to stop using facial recognition. When they refused, he sued them for wrongful arrest. This case is ongoing.

"Detroit is one of the blackest cities in America," Mr Mayor says.

"It's a majority black city, and here it is investing millions of dollars of taxpayer money in using a technology that is particularly unreliable in identifying black faces."

Police use of facial recognition is now a polarising issue in the US.

At least five more people were wrongfully arrested after being falsely identified by facial recognition systems.

They were all black men and women.

The most recent example is an eight-month-pregnant black woman in Detroit, wrongfully arrested for robbery and carjacking this year. 

Detroit police didn't respond to the ABC's request for comment.

The problem with facial recognition isn't just that it can be bad at identifying black faces, but the way police end up using it, Mr Mayor says.

In theory, they're only meant to use a facial recognition match as a clue in a case.

But that doesn't always happen, Mr Mayor says.

Police sometimes use the face match solely as grounds for an arrest.

The AI effectively decides who gets arrested.

"Here in America, the police are trying to say, don't worry, we're only using this technology to get a lead," he says.

"And then we go out and we do an investigation. But the thing is, you know, shoddy technology leads to shoddy investigations."

Facial recognition widespread in Australia, but with no legal guardrails

In Australia, various types of facial recognition are widely used, but the issue is less public than in the US.

This is partly due to a history of failed regulation attempts.

In 2015, the federal government proposed a national facial recognition system it dubbed "the capability".

It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licences and passport photos.

In 2019, it introduced legislation to govern the system's use.

But the legislation was widely criticised as draconian and never passed parliament.

That didn't stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.

The capability was rolled out without any legislation dealing specifically with how it should be used.

It was the worst possible scenario, Professor Santow says.

"The only thing worse than really bad legal guardrails is no legal guardrail. And that's what we've had for the last four years."

Could a case like Robert Williams' happen in Australia?

As a result of the lack of rules around facial recognition in Australia, it's unclear if a case like Robert Williams' could happen here, Professor Santow says.

"We simply don't know, which is the problem in itself.

Police have made broad public assurances they don't use the national facial recognition system to compare one person's photo against the entire national database to identify them.

This is known as "one-to-many" face matching, which is what police used to arrest Mr Williams.

Other kinds of facial recognition include "one-to-one" services used to verify documents, such as confirming a person's face matches the photo on their passport. This kind is used millions of times per year.
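
As a rough sketch of the distinction, reusing the toy embed_face and cosine_similarity helpers (and the invented 0.6 threshold) from the earlier example:

```python
# Reuses the illustrative embed_face and cosine_similarity helpers
# sketched earlier; none of this reflects any real deployed system.

def one_to_one_verify(live_photo, document_photo, threshold=0.6):
    # Verification: does this face match this ONE enrolled photo,
    # e.g. confirming a traveller against their own passport photo?
    return cosine_similarity(embed_face(live_photo),
                             embed_face(document_photo)) >= threshold
```

One-to-many identification is the one_to_many_match ranking in the earlier sketch: one probe face scored against every record in a database, which is the mode that produced the list containing Mr Williams' photo.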

But even if police are not using the national system for one-to-many facial recognition, they have been using commercial one-to-many services, such as Clearview AI, which relies on images scraped from the internet.

In late 2021, Australia's Information Commissioner found use of Clearview broke Australia's privacy law.

Despite this, last month Senate estimates heard the federal police tested a second commercial one-to-many face matching service, PimEyes, earlier this year.

Australian retailers such as Bunnings and Kmart have also used commercial one-to-many services to surveil customers. 

The federal government recently introduced a bill to govern some uses of one-to-one and one-to-many facial recognition.

At the time, Attorney-General Mark Dreyfus said the Identity Verification Services Bill would put strict limits on the use of one-to-many face matching.

But Professor Santow says these restrictions only apply to the national facial recognition system.

It doesn't restrict police use of commercial one-to-many services, he says.

"It's going to probably have zero effect on the police."

In the US, Robert Williams is campaigning to ban the use of facial recognition by law enforcement. Having lived a quiet suburban life up to 2020, he now speaks to lawmakers around the country as they consider whether to ban or approve the technology.

Mr Williams acknowledges the bias problem may be fixed. Training systems on larger and more diverse databases appears to be helping. 

But even if that happens, he'll still oppose facial recognition for mass surveillance.

"I don't want to be surveilled at all times, so that every red light there's a camera looking into your car.

"I guess it makes sense for crime, but what about people who are just living life?"

https://www.naturalnews.com/2023-11-03-civil-rights-groups-urge-ban-facial-recognition.html

“Thirty-two civil rights groups, under the leadership of the Surveillance Technology Oversight Project (STOP), have called on the state of New York to outlaw government use of facial recognition and other biometric technologies in residential buildings and public accommodations or facilities.

These groups have declared that facial recognition technology (FRT) is an "immediate threat" to New Yorkers' safety and civil rights.

In a memo of support for two pending state bills, 1014-2023 and 1024-2023, "Ban the Scan" advocates pointed out that biometric technology, including facial recognition, can be "biased, error-prone and harmful."

With the advancement of facial recognition technology, here are some of the main privacy and security concerns that can't just be ignored.

Barriers to cybercrime are low

Greater complexity and interdependence among security systems give cybercriminals more opportunity for widespread global damage, according to cybersecurity industry experts.

No security system is airtight, and this alone can make biometric databases, including facial recognition records, an extremely attractive target for tech-savvy hackers looking to exploit invaluable information.

"Now the barriers to cybercrime entry are low and cybercrime is becoming a service. Moreover, unlike in the past, more nation-states are entering the cybercrime arena. And that to me is concerning in itself," said Kevin Mandia, CEO of intelligence-led security company FireEye.

Outright violation of data privacy laws – from the collection, improper storage and mishandling of facial recognition and other biometric data – leads to the decline or complete loss of public trust and confidence in both government agencies and private companies that use these technologies.

Collected facial recognition data could be misused

Facial recognition is not immune to conscious or unconscious judgmental bias that leads to discrimination and wrongful convictions against certain groups.

While faces are becoming easier to capture even from remote distances and are cheaper to collect and store with today's tech advancements, faces cannot be "encrypted" unlike many other forms of data, according to information systems/information technology (IS/IT) professionals at the Information Systems Audit and Control Association (ISACA).

Facial recognition data breaches increase the potential for harassment, identity theft, stalking, surveillance and monitoring.

Data collection could infringe on individual privacy

The collection of data through the use of facial recognition technology can be done without a person's consent or knowledge – a clear infringement of a person's freedom and privacy. The accuracy and bias of data and algorithms are another privacy risk as facial recognition and biometrics are not completely error-free. Some have produced false negatives, false positives or misidentifications.

Because of its fallibility, any person can be wrongly accused of a crime, offense or violation, denied access to essential services, or discriminated against on the basis of age, gender or race. This was evidenced by a study published in the journal Cognitive Science that exposed error rates across demographic groups, with the poorest accuracy consistently found in subjects who were Black, female and between 18 and 30 years old.

A research article published in the Harvard Journal of Law & Technology explained why racial bias is prevalent in facial recognition technology. It listed three distinct factors that drive racially disparate results: the lack of diversity and representation in the training data and algorithms; human selection of facial features; and image quality issues.

This was confirmed by the National Institute of Standards and Technology (NIST) study that tried out 189 facial recognition algorithms submitted by 99 major surveillance tech developers with 18.27 million images.

Researchers found that many of these algorithms were between 10 and 100 times more likely to misidentify a Black, East Asian or Native American face than a white one.

Facial recognition could infringe on freedom of speech and association

Self-censorship, suppression of dissent and a chilling effect on association are three ways the uncontrolled use of facial recognition technology can take away or limit someone's rights to freedom of speech and association.

"The fear and uncertainty generated by surveillance inhibit activity more than any action by the police, and if you feel you're being watched, you self-police, and this pushes people out of the public space," said Joshua Franco, a senior research advisor and the deputy director of Amnesty Tech at Amnesty International.”
