governance Archives - AI News
https://www.artificialintelligence-news.com/tag/governance/

Ethics, governance and data for good at the AI & Big Data Expo
https://www.artificialintelligence-news.com/2023/12/19/ethics-governance-data-for-good-ai-big-data-expo/
Tue, 19 Dec 2023 12:17:09 +0000

AI is more than a trend, and it’s no longer a specialist space. This year, the topic was embedded across the tech conference calendar in London—with every event packed full of people keen to learn and share their experiences.

The AI & Big Data Expo stood out for its great mix of speakers, not only targeting people working within data, but making the topics feel completely accessible to somebody like me, who isn’t a data scientist by background. As the CEO of an infrastructure charity, I know our beneficiaries don’t necessarily work closely with data, or hold it at the forefront of their minds, so it was fascinating to see how AI and big data affect a diverse range of sectors, and how those sectors employ different strategies to meet the new challenges.

I especially enjoyed the talks focused on ethics and governance, which resonate with our beneficiaries and the challenges that they face. What’s interesting is that there seems to be a real drive to ensure that ethics is baked into AI strategies moving forward. It’s very heartening that ethics is being talked about at this early stage – that it isn’t being ignored as it may have been when past technologies developed as quickly.

One talk tackled governance, and how governments are still playing catch-up. There seems to be an overarching feeling that AI has to be regulated, but whether the regulation that people want is possible is the next big question. Can it be regulated, and how? Will the EU AI Act work well? Will legislation in the US be effective, or will it be watered down? What systems do you use, and therefore what do you endorse? Is this the right thing to do, is this the right way to deploy this sort of power, and what would the fallout be if we did?

As a charity working to serve other charities, safeguarding is a huge area of concern for us, so it’s good to know that the mainstream is also thinking about transparency. AI providers and tools have not yet done enough to flag the potential risks for third sector organisations that, for example, routinely handle sensitive data about vulnerable individuals. This could result in a number of issues for charities using AI for the first time, not least data breaches. We have already seen the misuse of AI to replace services that are still necessarily led by humans – in one example, a chatbot that replaced a manned helpline gave people with eating disorders dangerous dieting advice. Tech leaders and governments must take the lead by demonstrating responsible approaches and creating frameworks around safeguarding and risks. There will always be bad actors in this space, but there seems to be a ‘coalition of the willing’ that wants to ensure AI is continually safe, not just for those with enough resources to create their own safeguarding.

As these debates continue and the technology develops apace, it’s so important that there are spaces in which the third sector can be heard alongside private or statutory organisations. At the AI & Big Data Expo, we were able to showcase our work as a representative voice, and garner enthusiasm for the ‘data for good’ movement. We made some fantastic connections with others as a result of realising how aligned our overarching missions are. Testament to that was the enthusiasm of our audience, who asked our wonderful volunteers Adam and Alvaro tons of questions and chatted to us in person afterwards. We are thrilled to have been part of these conversations.

Finally, we want to say a big thank you to the organisers for the opportunity to get stuck into a cross-sector event like this. We’re looking forward to the next one!

To find out more about DataKind UK and how you can support our vision of a strong, thriving third sector that embraces data science to become more impactful, visit our website.

MIT publishes white papers to guide AI governance
https://www.artificialintelligence-news.com/2023/12/11/mit-publishes-white-papers-guide-ai-governance/
Mon, 11 Dec 2023 16:34:19 +0000

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advances in auditing AI tools, exploring pathways such as government-initiated audits, user-driven audits, or legal liability proceedings.

A government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

NIST announces AI consortium to shape US policies
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/
Fri, 03 Nov 2023 10:13:14 +0000

In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published to the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged.

President Biden’s executive order and the establishment of the AI Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit


CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust
https://www.artificialintelligence-news.com/2021/03/05/cdei-public-tech-tackle-pandemic-use-governance-trust/
Fri, 05 Mar 2021 09:47:47 +0000

Research from the UK government’s Centre for Data Ethics and Innovation (CDEI) has found the public believes technology isn’t being fully utilised to tackle the pandemic, but greater use requires trust in how it is governed.

CDEI advises the government on the responsible use of AI and data-driven technologies. Between June and December 2020, the advisory body polled over 12,000 people to gauge sentiment around how such technologies are being used.

Edwina Dunn, Deputy Chair for the CDEI, said:

“Data-driven technologies including AI have great potential for our economy and society. We need to ensure that the right governance regime is in place if we are to unlock the opportunities that these technologies present.

The CDEI will be playing its part to ensure that the UK is developing governance approaches that the public can have confidence in.”

Close to three quarters (72%) of respondents expressed confidence in digital technology having the potential to help tackle the pandemic—a belief shared across all demographics.

A majority (~69%) also support, in principle, the use of technologies such as wearables to assist with social distancing in the workplace.

Wearables haven’t yet been used to help counter the spread of coronavirus. The most widely deployed technology is the contact-tracing app, but its effectiveness has often come into question.

Many people feel data-driven technologies are not being used to their full potential. Under half (42%) believe digital technology is improving the situation in the UK. Seven percent even think current technologies are making the situation worse.

The scepticism expressed about the use of digital technologies in tackling the pandemic is less about the technology itself – with just 17 percent of respondents expressing that view – and more a lack of faith in whether it will be used by people and organisations properly (39%).

John Whittingdale, Minister of State for Media and Data at the Department for Digital, Culture, Media and Sport, commented:

“We are determined to build back better and capitalise on all we have learnt from the pandemic, which has forced us to share data quickly, efficiently and responsibly for the public good. This research confirms that public trust in how we govern data is essential. 

Through our National Data Strategy, we have committed to unlocking the huge potential of data to tackle some of society’s greatest challenges, while maintaining our high standards of data protection and governance.”

When controlling for all other variables, the CDEI found that “trust that the right rules and regulations are in place” is the single biggest predictor of whether someone will support the use of digital technology.
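The CDEI doesn’t publish its model, but “controlling for all other variables” normally implies a multivariate analysis such as logistic regression, in which each coefficient estimates a predictor’s effect on support with the other predictors held fixed. As a minimal sketch on invented data (every variable name and number below is hypothetical, not the CDEI’s):

```python
# Hypothetical sketch of "controlling for other variables": fit a logistic
# regression and compare coefficients. Data is synthetic and constructed so
# that trust-in-rules dominates the outcome, mirroring the reported finding.
import math
import random

random.seed(0)

def make_row():
    # Three invented survey predictors, each scaled to [0, 1].
    trust = random.random()   # trust that the right rules are in place
    age = random.random()     # age group (normalised)
    conf = random.random()    # general confidence in technology
    # Generate support with trust as the strongest true driver.
    logit = 4.0 * trust + 0.5 * conf + 0.1 * age - 2.0
    p = 1 / (1 + math.exp(-logit))
    return [trust, age, conf], 1 if random.random() < p else 0

X, y = zip(*(make_row() for _ in range(2000)))

# Plain gradient-descent logistic regression (no intercept, for brevity).
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(3):
            grad[j] += (p - yi) * xi[j]
    for j in range(3):
        w[j] -= lr * grad[j] / len(X)

# With the other variables controlled for, the trust coefficient comes out
# largest -- i.e. trust is the strongest predictor of support.
print({"trust": round(w[0], 2), "age": round(w[1], 2), "confidence": round(w[2], 2)})
```

The coefficient comparison, not the raw percentages, is what justifies a claim like “single biggest predictor”: it separates trust’s own effect from demographics that merely correlate with it.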

Among the key ways to help improve public trust is by increasing transparency and accountability. Less than half (45%) of respondents know where to raise concerns if they feel digital technology is causing harm.

CDEI’s research highlighted that people, on the whole, believe data-driven technologies can help tackle the pandemic. However, work needs to be done to improve trust in how such technologies are deployed and managed.

(Photo by Mangopear creative on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Algorithmia’s latest tools aim to solve ML governance challenges
https://www.artificialintelligence-news.com/2021/03/03/algorithmia-latest-tools-aim-solve-ml-governance-challenges/
Wed, 03 Mar 2021 12:15:29 +0000

Algorithmia is today debuting new reporting tools which aim to solve machine learning (ML) governance challenges.

Research conducted by the company found that the number one challenge organisations are facing with their deployments is governance.

56 percent of the IT leaders surveyed by Algorithmia ranked governance, security, and auditability issues as a major concern, while 67 percent report having to comply with multiple regulations, for which the penalties of non-compliance can be severe.

Diego Oppenheimer, CEO of Algorithmia, said:

“We’re still in the early days of ML governance, and organisations lack a clear roadmap or prescriptive advice for implementing it effectively in their own unique environments.

Regulations are undefined and a changing and ambiguous regulatory landscape leads to uncertainty and the need for companies to invest significant resources to maintain compliance. Those that can’t keep up risk losing their competitive edge.

Furthermore, existing solutions are manual and incomplete. Even organisations that are implementing governance today are doing so with a patchwork of disparate tools and manual processes. Not only do such solutions require constant maintenance, but they also risk critical gaps in coverage.”

Organisations test ML models, quite rightly, prior to deployment. However, Algorithmia says that IT leaders, business line leaders, CIOs and chief risk officers have realised over the past year – following an acceleration in ML deployments – that what happens after a model is deployed “is even more important than pre-deployment testing.”

As of today, Algorithmia’s Enterprise product has added the following reporting and governance capabilities to help manage operational risk:

  • Cost and usage reporting on infrastructure, storage, and compute consumption within Algorithmia, to understand and manage the overall cost of maintaining the platform.
  • Enhanced chargeback and showback reporting for monthly costs of storage, CPU, and GPU consumption and usage billing.
  • Algorithm usage reporting with details of the algorithm used, so organisations can bill users for their usage.
  • Enhanced audit reports and logs, so examiners and auditors can review model results, the history of changes, and a record of data errors or past model failures and actions taken.
  • An advanced reporting panel for Algorithmia admins that provides an overview of all available metrics and usage reporting, plus the ability to build reports and export reports and metrics to systems of record.

The new reporting and governance tools added to Algorithmia will help IT decision-makers tackle one of the biggest challenges they face with deployments today.

(Photo by Belhadj lamine on Unsplash)

