A company that has worked closely with the UK government on artificial intelligence safety, the NHS and education is also developing AI for military drones.
The consultancy Faculty AI has “experience developing and deploying AI models on to UAVs”, or unmanned aerial vehicles, according to a defence industry partner company.
Faculty has emerged as one of the most active companies selling AI services in the UK. Unlike OpenAI, DeepMind or Anthropic, it does not develop models itself, instead focusing on reselling models, notably from OpenAI, and consulting on their use in government and industry.
Faculty gained particular prominence in the UK after working on data analysis for the Vote Leave campaign before the Brexit vote. Boris Johnson’s former adviser, Dominic Cummings, then gave government work to Faculty during the pandemic, and included its chief executive, Marc Warner, in meetings of the government’s scientific advisory panel.
Since then the company, officially called Faculty Science, has carried out testing of AI models for the UK government’s AI Safety Institute (AISI), set up in 2023 under former prime minister Rishi Sunak.
Governments around the world are racing to understand the safety implications of artificial intelligence, after rapid improvements in generative AI prompted a wave of hype around its possibilities.
Weapons companies are keen to put AI on drones, ranging from “loyal wingmen” that could fly alongside fighter jets to loitering munitions that are already capable of waiting for targets to appear before firing on them.
The latest technological developments have raised the prospect of drones that can track and kill without a human “in the loop” making the final decision.
In a press release announcing a partnership with London-based Faculty, the British startup Hadean wrote that the two companies are working together on “subject identification, tracking object movement, and exploring autonomous swarming development, deployment and operations”.
It is understood that Faculty’s work with Hadean did not include weapons targeting. However, Faculty did not answer questions on whether it was working on drones capable of applying lethal force, or give further details of its defence work, citing confidentiality agreements.
A spokesperson for Faculty said: “We help to develop novel AI models that will help our defence partners create safer, more robust solutions,” adding that it has “rigorous ethical policies and internal processes” and follows ethical guidelines on AI from the Ministry of Defence.
The spokesperson said Faculty has a decade of experience in AI safety, including on countering child sexual abuse and terrorism.
The Scott Trust, the ultimate owner of the Guardian, is an investor in Mercuri VC, formerly GMG Ventures, which is a minority shareholder in Faculty.
“We’ve worked on AI safety for a decade and are world-leading experts in this field,” the spokesperson said. “That’s why we’re trusted by governments and model developers to ensure frontier AI is safe, and by defence clients to apply AI ethically to help keep citizens safe.”
Many experts and politicians have called for caution before introducing more autonomous technologies into the military. A House of Lords committee in 2023 called for the UK government to try to set up a treaty or a non-binding agreement to clarify the application of international humanitarian law when it comes to lethal drones. The Green party in September called for laws to outlaw lethal autonomous weapons systems completely.
Faculty continues to work closely with the AISI, putting it in a position where its judgments could be influential for UK government policy.
In November, the AISI contracted Faculty to survey how large language models “are used to aid in criminal or otherwise undesirable behaviour”. The AISI said the winner of the contract – Faculty – “will be a significant strategic collaborator of AISI’s safeguards team, directly contributing key information to AISI’s models of system security.”
The company works directly with OpenAI, the startup that began the latest wave of AI enthusiasm, to use its ChatGPT model. Experts have previously raised concerns over a potential conflict of interest in the work Faculty has carried out with AISI, according to Politico, a news website. Faculty did not detail which companies’ models it had tested, although it tested OpenAI’s o1 model before its release.
The government has previously said of Faculty AI’s work for AISI: “crucially, they are not conflicted through the development of their own model”.
Natalie Bennett, a peer for the Green party, said: “The Green party has long expressed grave concern about the ‘revolving door’ between industry and government, raising issues from gas company staff being seconded to work on energy policy to former defence ministers going to work for arms companies.
“That a single company has been both taking up a large number of government contracts to work on AI while also working with the AI Safety Institute on testing large language models is a serious concern – not so much ‘poacher turned gamekeeper’ as doing both roles at the same time.”
Bennett also highlighted that the UK government has “yet to make a full commitment” to ensure there is a human in the loop for autonomous weapons systems, as recommended by the Lords committee.
Faculty, whose biggest shareholder is a Guernsey-registered holding company, has also sought to cultivate close ties across the UK government, winning contracts worth at least £26.6m, according to government disclosures. Those include contracts with the NHS, the Department of Health and Social Care, the Department for Education and the Department for Culture, Media and Sport.
Those contracts represent a significant source of revenue for a company that made sales worth £32m in the year to 31 March. It lost £4.4m during that period.
Albert Sanchez-Graells, a professor of economic law at the University of Bristol, warned that the UK is relying on tech firms’ “self-restraint and responsibility in AI development”.
“Companies supporting AISI’s work need to avoid organisational conflicts of interest arising from their work for other parts of government and broader market-based AI business,” Sanchez-Graells said.
“Companies with such broad portfolios of AI activity as Faculty have questions to answer on how they ensure their advice to AISI is independent and unbiased, and how they avoid taking advantage of that knowledge in their other activities.”
The Department for Science, Innovation and Technology declined to comment, saying it would not go into detail on individual commercial contracts.