The United Kingdom risks a scandal over potential bias in AI tools deployed across the public sector.

Kate Osamor, the Labour MP for Edmonton, was recently contacted by a charity about a constituent whose benefits had been stopped without explanation.

“For more than a year, she has been attempting to reach out to DWP (Department for Work and Pensions) to inquire about the suspension of her UC (Universal Credit). However, neither she nor our casework team have been successful in receiving any information,” the email stated. “The reason for the suspension of her claim is still unknown, and it is uncertain if there was any valid reason for it. As a result, she has been unable to make rent payments for 18 months and is now facing potential eviction.”

In recent years, Osamor has handled numerous such cases, many involving Bulgarian citizens. She suspects they were targeted by a partially automated process in which an algorithm flags potential benefit fraud and passes the cases to human decision-makers, who determine whether to suspend individuals’ claims.

“I received numerous communications in early 2022 from Bulgarian citizens, who had their benefits put on hold,” Osamor stated. “The Department for Work and Pensions’ Integrated Risk and Intelligence Service had flagged their cases as high risk through automated data analysis.”

For months they were left with no recourse, yet in almost every instance no evidence of fraud was found and their benefits were eventually reinstated. No one has taken responsibility for the process.

Since 2021, the DWP has used artificial intelligence to detect benefit fraud: the program flags potentially suspicious cases and refers them to a human for further examination.
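
What the department describes amounts to a human-in-the-loop triage pipeline. Below is a minimal sketch of that general pattern; the risk model, field names, and threshold are all hypothetical, since the DWP has not published how its actual system works.

```python
# A minimal sketch of a human-in-the-loop triage pipeline of the kind the
# DWP describes. The risk model, field names, and threshold here are all
# hypothetical; the department has not published how its system works.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    risk_score: float  # assumed to come from some upstream scoring model

REFERRAL_THRESHOLD = 0.8  # assumed cut-off, not a published value

def triage(claims: list[Claim]) -> list[Claim]:
    """Refer high-risk claims to a human caseworker; nothing is auto-suspended."""
    return [c for c in claims if c.risk_score >= REFERRAL_THRESHOLD]

claims = [Claim("A1", 0.95), Claim("B2", 0.40), Claim("C3", 0.83)]
for claim in triage(claims):
    # A human reviews each referral and decides whether to suspend the claim.
    print(f"refer {claim.claim_id} (score {claim.risk_score}) for human review")
```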

In response to a freedom of information request from the Guardian, the Department for Work and Pensions (DWP) said it could not disclose details of how the algorithm works, out of concern that doing so would enable people to game the system.

The department said the algorithm does not take nationality into account. But because the system is a self-learning one, it is not possible to say exactly how it weighs the data it is given.
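
One reason such assurances are hard to verify is proxy bias: a model that never sees a nationality column can still reproduce a nationality skew if a correlated input stands in for it. The sketch below (Python with scikit-learn, entirely synthetic data, invented feature names) illustrates only that general mechanism; it says nothing about the DWP’s actual model.

```python
# Entirely synthetic illustration of proxy bias. The feature names and
# numbers are invented; this says nothing about the DWP's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute: 1 = member of a nationality group.
nationality = rng.binomial(1, 0.2, n)

# A seemingly neutral input that happens to correlate with nationality,
# e.g. a coarse postcode cluster.
postcode_cluster = nationality + rng.normal(0.0, 0.5, n)

# Historical investigation labels that over-represent the group
# (for instance because past referrals were themselves skewed).
flagged = rng.random(n) < np.where(nationality == 1, 0.15, 0.05)

# Train on the "neutral" feature only; nationality is never an input.
X = postcode_cluster.reshape(-1, 1)
model = LogisticRegression().fit(X, flagged)

# The model nevertheless assigns higher risk to the protected group.
risk = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 1:", round(float(risk[nationality == 1].mean()), 3))
print("mean predicted risk, group 0:", round(float(risk[nationality == 0].mean()), 3))
```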

In its most recent annual reports, the DWP said it monitored the system for signs of bias but was limited in its ability to do so by a lack of data on users. The government’s spending watchdog has advised it to publish summaries of any internal equality assessments.

Shameem Ahmad, chief executive of the Public Law Project, said that despite the risks, the DWP has consistently refused requests for information about its AI tools. Basic details, such as on whom the systems are being tested and how accurate they are, have not been disclosed despite multiple Freedom of Information Act requests.

The DWP is not the only department using AI in a way that can have major impacts on people’s daily lives. A Guardian investigation has found such tools in use in at least eight Whitehall departments and a handful of police forces around the UK.

The Home Office uses a comparable mechanism to identify potentially fraudulent marriages. An algorithm flags marriage license applications for evaluation by a caseworker, who can approve, delay, or reject the application.

The tool has allowed the department to process applications faster. However, its own equality impact assessment found that it was flagging a disproportionately large number of marriages involving nationals of four countries: Greece, Albania, Bulgaria, and Romania.

The assessment, seen by the Guardian, concluded that any potential indirect discrimination was justified by the overall aims and outcomes of the process.

A number of law enforcement agencies are utilizing artificial intelligence technology, particularly for analyzing crime patterns and implementing facial recognition. The Metropolitan Police has implemented live facial recognition cameras throughout London to assist officers in identifying individuals on their “watchlist.”

However, as with other AI tools, there is evidence that the Metropolitan Police’s facial recognition systems can produce bias. A recent study by the National Physical Laboratory found that in most settings the cameras had a minimal error rate, with errors distributed evenly across demographic groups.

But when the sensitivity settings were turned down, as might be done in an attempt to catch more people, the system falsely identified at least five times more Black people than white people.
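
The threshold effect itself is easy to demonstrate. The toy sketch below uses made-up similarity-score distributions (not NPL data): lowering the match threshold admits many more borderline non-matches, and if one group’s non-match scores run slightly higher, that group absorbs a disproportionate share of the new false alerts.

```python
# Toy illustration with made-up score distributions (not NPL data) of how
# a lower match threshold inflates false alerts unevenly across groups.
import numpy as np

rng = np.random.default_rng(1)

# Similarity scores for 100,000 faces per group that are NOT on the
# watchlist; group B's scores are assumed to skew slightly higher.
group_a = rng.normal(0.30, 0.10, 100_000)
group_b = rng.normal(0.38, 0.10, 100_000)

for threshold in (0.75, 0.55):  # a strict setting vs a "turned down" one
    alerts_a = int((group_a > threshold).sum())
    alerts_b = int((group_b > threshold).sum())
    print(f"threshold={threshold}: false alerts per 100k scans - "
          f"group A: {alerts_a}, group B: {alerts_b}")
```

At the strict setting, false alerts in both groups are negligible; at the looser one, both counts rise sharply and group B’s is several times higher, the shape of the disparity the NPL study describes.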

The Met did not respond to a request for comment.

West Midlands police are using artificial intelligence to predict hotspots for knife crime and car theft. The force is also developing a separate tool to forecast which individuals may become “high harm offenders”.

These are the examples about which the Guardian was able to learn the most.

In many instances, government departments and police forces invoked a variety of exemptions to freedom of information rules in order to avoid disclosing details of their artificial intelligence tools.

There is concern that the UK could face a scandal like the one in the Netherlands, where tax authorities wrongly accused thousands of families of benefit fraud and were found to have breached European data rules, or like Australia’s “robodebt” scheme, in which 400,000 people were falsely accused of misreporting their income to authorities.

The UK’s information commissioner, John Edwards, said he had assessed numerous AI tools in use in the public sector, including the DWP’s fraud detection systems, and had not identified any violations of data protection regulations. He said: “We have reviewed the DWP’s applications and have also examined how local authorities are using AI for benefit purposes. We have determined that they have been used responsibly and with enough human intervention to prevent potential harm.”

Nevertheless, he expressed concerns about the use of facial recognition cameras. He said: “We are keeping an eye on the advancements of real-time facial recognition. It has the potential to be invasive and we are closely observing its usage.”

Moves are under way in government to bring more transparency to the use of AI in the public sector. The Cabinet Office is compiling a central register of such tools, but it is up to each department to decide whether to include its systems.

Meanwhile, campaigners are concerned that people affected by decisions made with the help of AI may be suffering without ever knowing the technology was involved.

Ahmad warned that examples from other countries show how disastrous the consequences can be for individuals, governments, and society as a whole, and that the government’s lack of transparency and regulation is creating the conditions for the same to happen here.

Source: theguardian.com
