Error-prone facial recognition leads to another wrongful arrest

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and...

BSI publishes guidance to boost trust in AI for healthcare

In a bid to foster greater digital trust in AI products used for medical diagnoses and treatment, the British Standards Institution (BSI) has released high-level guidance.

The guidance, titled ‘Validation framework for the use of AI within healthcare – Specification (BS 30440)’, aims to bolster confidence among clinicians, healthcare professionals, and providers regarding the safe, effective, and ethical development of AI tools.

As the global debate on the...

AI regulation: A pro-innovation approach – EU vs UK

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI...

AI Act: The power of open-source in guiding regulations

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI...

Assessing the risks of generative AI in the workplace

Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One of the concerns highlighted by industry experts is often the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack...

Beijing publishes its AI governance rules

Chinese authorities have published rules governing generative AI which go substantially beyond current regulations in other parts of the world.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are...

OpenAI introduces team dedicated to stopping rogue AI

The potential dangers of highly intelligent AI systems have long been a topic of concern for experts in the field.

Recently, Geoffrey Hinton – the so-called “Godfather of AI” – expressed his worries about the possibility of superintelligent AI surpassing human capabilities and causing catastrophic consequences for humanity.

Similarly, Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT chatbot, admitted to being fearful of the potential effects of...

European Parliament adopts AI Act position

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority.

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but...

UK will host global AI summit to address potential risks

The UK has announced that it will host a global summit this autumn to address the most significant risks associated with AI.

The decision comes after Prime Minister Rishi Sunak's meetings with US President Joe Biden, members of Congress, and business leaders.

“AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure,” explained Sunak.

“No one country can do this alone....

AI Task Force adviser: AI will threaten humans in two years

An artificial intelligence task force adviser to the UK prime minister has issued a stark warning: AI could threaten humans within two years.

The adviser, Matt Clifford, said in an interview with TalkTV that humans have a narrow window of two years to control and regulate AI before it becomes too powerful.

“The near-term risks are actually pretty scary. You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks,” said...