Society Archives - AI News
https://www.artificialintelligence-news.com/tag/society/

US Chief Justice: AI won’t replace judges but will ‘transform our work’
https://www.artificialintelligence-news.com/2024/01/02/us-chief-justice-ai-wont-replace-judges-will-transform-our-work/
Tue, 02 Jan 2024

In the Federal Judiciary’s year-end report, US Chief Justice John Roberts addressed the potential impact of AI on the judicial system. In particular, he aimed to quell concerns about the obsolescence of judges in the face of technological advancements.

“As 2023 draws to a close with breathless predictions about the future of artificial intelligence, some may wonder whether judges are about to become obsolete. I am sure we are not—but equally confident that technological changes will continue to transform our work,” stated Roberts.

Roberts stressed the intrinsic value of human judgement, asserting that machines could not fully replace the nuanced decisions made by individuals.

In his report, Roberts pointed out the importance of subtle factors such as a trembling hand, a momentary hesitation, or a fleeting break in eye contact—aspects that machines might struggle to discern accurately. The Chief Justice underlined the public’s inherent trust in human judgement over AI when it comes to evaluating such nuances.

However, Roberts expressed legitimate concerns about the potential drawbacks of AI in the legal domain. He warned against the possibility of AI-generated fabricated answers or “hallucinations,” citing instances where lawyers used AI-powered applications to submit briefs that referenced imaginary cases.

Additionally, Roberts highlighted the risks associated with AI influencing privacy and the potential for bias in decisions in discretionary matters like flight risk and recidivism.

Despite these apprehensions, Roberts acknowledged the positive aspects of incorporating AI in the legal system. He recognised AI’s potential to democratise access to legal advice and tools, particularly benefiting those who cannot afford legal representation.

As the legal world adapts to AI, Chief Justice Roberts’ reflections underscore the importance of striking a balance between harnessing its substantial benefits and managing its potentially devastating risks.

(Image Credit: DOD photo by Navy Petty Officer 1st Class Carlos M. Vazquez II under CC BY 2.0 DEED license)

See also: AI & Big Data Expo: Ethical AI integration and future trends

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Ethics, governance and data for good at the AI & Big Data Expo
https://www.artificialintelligence-news.com/2023/12/19/ethics-governance-data-for-good-ai-big-data-expo/
Tue, 19 Dec 2023

AI is more than a trend, and it is no longer a specialist space. This year, the topic was embedded across the tech conference calendar in London—with every event packed full of people keen to learn and share their experiences.

The AI & Big Data Expo stood out for its great mix of speakers, who not only targeted people working within data but also made the topics feel accessible to somebody like me, who isn’t a data scientist by background. As the CEO of an infrastructure charity, I know our beneficiaries don’t necessarily work closely with data, or hold it at the forefront of their minds. It was therefore very interesting to see how AI and big data affect a diverse range of sectors, and the different strategies those sectors employ and deploy to meet the new challenges.

I especially enjoyed the talks focused on ethics and governance, which resonate with our beneficiaries and the challenges that they face. What’s interesting is that there seems to be a real drive to ensure that ethics is baked into AI strategies moving forward. It’s very heartening that ethics is being talked about at this early stage – that it isn’t being ignored as it may have been when past technologies developed as quickly.

One talk tackled governance, and how governments are still playing catch-up. There seems to be an overarching feeling that AI has to be regulated, but whether the regulation that people want is possible is the next big question. Can it be regulated, and how? Is the EU AI Act going to work well, and will the legislation in the US be effective, or will it be watered down? What systems do you use, and therefore what do you endorse? Is this the right thing to do, is this the right way to deploy this sort of power, and what would the fallout be if we did?

As a charity working to serve other charities, safeguarding is a huge area of concern for us, so it’s good to know that the mainstream is also thinking about transparency. AI providers and tools have not yet done enough to flag the potential risks for third sector organisations that, for example, routinely handle sensitive data about vulnerable individuals. This could result in a number of issues for charities using AI for the first time, not least data breaches. We have already seen the misuse of AI to replace services that are still necessarily led by humans – in one example, a chatbot that replaced a manned helpline gave people with eating disorders dangerous dieting advice. Tech leaders and governments must take the lead by demonstrating responsible approaches and creating frameworks around safeguarding and risks. There will always be bad actors in this space, but there seems to be a ‘coalition of the willing’ that wants to ensure AI is continually safe, not just for those with enough resources to create their own safeguarding.

As these debates continue and the technology develops apace, it’s so important that there are spaces in which the third sector can be heard alongside private or statutory organisations. At the AI & Big Data Expo, we were able to showcase our work as a representative voice, and garner enthusiasm in the ‘data for good’ movement. We made some fantastic connections with others, as a result of realising how aligned our overarching missions are. Testament to that was the enthusiasm of our audience, asking our wonderful volunteers Adam and Alvaro tons of questions, and chatting to us in person afterwards. We are thrilled to have been part of these conversations.

Finally, we want to say a big thank you to the organisers for the opportunity to get stuck into a cross-sector event like this. We’re looking forward to the next one!

To find out more about DataKind UK and how you can support our vision of a strong, thriving third sector that embraces data science to become more impactful, visit our website.

AI & Big Data Expo: Ethical AI integration and future trends
https://www.artificialintelligence-news.com/2023/12/18/ai-big-data-expo-ethical-ai-integration-future-trends/
Mon, 18 Dec 2023

Grace Zheng, Data Analyst at Canon and Founder of Kosh Duo, recently sat down for an interview with AI News during AI & Big Data Expo Global to discuss integrating AI ethically and to share her insights on future trends.

Zheng first explained how more than a decade working in digital marketing and e-commerce sparked her more recent interest in data analytics and artificial intelligence, as machine learning has surged in popularity.

At Canon, Zheng’s team focuses on ethically integrating AI into the business by first mapping current and potential AI applications across areas like marketing and e-commerce. They then analyse and assess the risks, as Zheng explained, “to align with regulations such as the EU legislations.”

As founder of Kosh Duo, Zheng also provides coaching to help businesses scale up through the use of AI marketing and data-driven approaches. She coaches professionals on achieving greater recognition and rewards by leveraging AI tools as well.

A key challenge she encounters is misunderstandings around what AI truly means – many conflate it solely with chatbots like ChatGPT rather than appreciating the full breadth of machine learning, neural networks, natural language processing, and more that enable today’s AI.

“There’s a lot of misconceptions, definitely. One of the biggest fears, as I touched on, is the very generic understanding that GPT equals AI,” says Zheng. “[Kosh Duo] provides coaching services to businesses to scale to the next level using AI marketing and data-driven approaches.”

When asked about trends to watch, Zheng emphasised the need for continual learning given how rapidly the field evolves. She expects that 2024 will be an “awakening year” where businesses truly grasp AI’s potential and individuals appreciate the need to evaluate their current skillsets.

The interview highlighted the transformative but often misunderstood power of AI in business and the importance of developing specialised skills to properly harness it. Zheng stressed that with the right ethical foundations and coaching, AI and machine learning can become positive forces to drive growth rather than something to fear.

Watch our full interview with Grace Zheng below:

(Photo by Benjamin Davies on Unsplash)

MIT publishes white papers to guide AI governance
https://www.artificialintelligence-news.com/2023/12/11/mit-publishes-white-papers-guide-ai-governance/
Mon, 11 Dec 2023

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Paul O’Sullivan, Salesforce: Transforming work in the GenAI era
https://www.artificialintelligence-news.com/2023/11/21/paul-osullivan-salesforce-transforming-work-genai-era/
Tue, 21 Nov 2023

In the wake of the generative AI (GenAI) revolution, UK businesses find themselves at a crossroads between unprecedented opportunities and inherent challenges.

Paul O’Sullivan, Senior Vice President of Solution Engineering (UKI) at Salesforce, sheds light on the complexities of this transformative landscape, urging businesses to tread cautiously while embracing the potential of artificial intelligence.

Unprecedented opportunities

Generative AI has stormed the scene with remarkable speed. ChatGPT, for example, amassed 100 million users in a mere two months.

“If you put that into context, it took 10 years to reach 100 million users on Netflix,” says O’Sullivan.

This rapid adoption signals a seismic shift, promising substantial economic growth. O’Sullivan estimates that generative AI has the potential to contribute a staggering £3.5 trillion ($4.4 trillion) to the global economy.

“Again, if you put that into context, that’s about as much tax as the entire US takes in,” adds O’Sullivan.

One of its key advantages lies in driving automation, with the prospect of automating up to 40 percent of the average workday—leading to significant productivity gains for businesses.

The AI trust gap

However, amid the excitement, there looms a significant challenge: the AI trust gap. 

O’Sullivan acknowledges that despite being a top priority for C-suite executives, over half of customers remain sceptical about the safety and security of AI applications.

Addressing this gap will require a multi-faceted approach including grappling with issues related to data quality and ensuring that AI systems are built on reliable, unbiased, and representative datasets. 

“Companies have struggled with data quality and data hygiene. So that’s a key area of focus,” explains O’Sullivan.

Safeguarding data privacy is also paramount, with stringent measures needed to prevent the misuse of sensitive customer information.

“Both customers and businesses are worried about data privacy—we can’t let large language models store and learn from sensitive customer data,” says O’Sullivan. “Over half of customers and their customers don’t believe AI is safe and secure today.”

Ethical considerations

AI also prompts ethical considerations. Concerns about hallucinations – where AI systems generate inaccurate or misleading information – must be addressed meticulously.

Businesses must confront biases and toxicities embedded in AI algorithms, ensuring fairness and inclusivity. Striking a balance between innovation and ethical responsibility is pivotal to gaining customer trust.

“A trustworthy AI should consistently meet expectations, adhere to commitments, and create a sense of dependability within the organisation,” explains O’Sullivan. “It’s crucial to address the limitations and the potential risks. We’ve got to be open here and lead with integrity.”

As businesses embrace AI, upskilling the workforce will also be imperative.

O’Sullivan advocates for a proactive approach, encouraging employees to master the art of prompt writing. Crafting effective prompts is vital, enabling faster and more accurate interactions with AI systems and enhancing productivity across various tasks.

Moreover, understanding AI lingo is essential to foster open conversations and enable informed decision-making within organisations.

A collaborative future

Crucially, O’Sullivan emphasises a collaborative future where AI serves as a co-pilot rather than a replacement for human expertise.

“AI, for now, lacks cognitive capability like empathy, reasoning, emotional intelligence, and ethics—and these are absolutely critical business skills that humans need to bring to the table,” says O’Sullivan.

This collaboration fosters a sense of trust, as humans act as a check and balance to ensure the responsible use of AI technology.

By addressing the AI trust gap, upskilling the workforce, and fostering a harmonious collaboration between humans and AI, businesses can harness the full potential of generative AI while building trust and confidence among customers.

You can watch our full interview with Paul O’Sullivan below:

Paul O’Sullivan and the Salesforce team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. O’Sullivan will feature on a day one panel titled ‘Converging Technologies – We Work Better Together’.

Umbar Shakir, Gate One: Unlocking the power of generative AI ethically
https://www.artificialintelligence-news.com/2023/11/17/umbar-shakir-gate-one-unlocking-power-generative-ai-ethically/
Fri, 17 Nov 2023

Ahead of this year’s AI & Big Data Expo Global, Umbar Shakir, Partner and AI Lead at Gate One, shared her insights into the diverse landscape of generative AI (GenAI) and its impact on businesses.

From addressing the spectrum of use cases to navigating digital transformation, Shakir shed light on the challenges, ethical considerations, and the promising future of this groundbreaking technology.

Wide spectrum of use cases

Shakir highlighted the wide array of GenAI applications, ranging from productivity enhancements and research support to high-stakes areas such as strategic data mining and knowledge bots. She emphasised the transformational power of AI in understanding customer data, moving beyond simple sentiment analysis to providing actionable insights, thus elevating customer engagement strategies.

“GenAI now can take your customer insights to another level. It doesn’t just tell you whether something’s a positive or negative sentiment like old AI would do, it now says it’s positive or negative. It’s negative because X, Y, Z, and here’s the root cause for X, Y, Z,” explains Shakir.

Powering digital transformation

Gate One adopts an adaptive strategy approach, abandoning traditional five-year strategies for more agile, adaptable frameworks.

“We have a framework – our 5P model – where it’s: identify your people, identify the problem statement that you’re trying to solve for, appoint some partnerships, think about what’s the right capability mix that you have, think about the pathway through which you’re going to deliver, be use case or risk-led, and then proof of concept,” says Shakir.

By solving specific challenges and aligning strategies with business objectives, Gate One aims to drive meaningful digital transformation for its clients.

Assessing client readiness

Shakir discussed Gate One’s diagnostic tools, which blend technology maturity and operating model innovation questions to assess a client’s readiness to adopt GenAI successfully.

“We have a proprietary tool that we’ve built, a diagnostic tool where we look at blending tech maturity capability type questions with operating model innovation questions,” explains Shakir.

By categorising clients as “vanguard” or “safe” players, Gate One tailors their approach to meet individual readiness levels—ensuring a seamless integration of GenAI into the client’s operations.

Key challenges and ethical considerations

Shakir acknowledged the challenges associated with GenAI, especially concerning the quality of model outputs. She stressed the importance of addressing biases, amplifications, and ethical concerns, calling for a more meaningful and sustainable implementation of AI.

“Poor quality data or poorly trained models can create biases, racism, sexism… those are the things that worry me about the technology,” says Shakir.

Gate One is actively working on refining models and data inputs to mitigate such problems.

The future of GenAI

Looking ahead, Shakir predicted a demand for more ethical AI practices from consumers and increased pressure on developers to create representative and unbiased models.

Shakir also envisioned a shift in work dynamics where AI liberates humans from mundane tasks to allow them to focus on solving significant global challenges, particularly in the realm of sustainability.

Later this month, Gate One will be attending and sponsoring this year’s AI & Big Data Expo Global. During the event, Gate One aims to share its ethos of meaningful AI and emphasise ethical and sustainable approaches.

Gate One will also be sharing with attendees GenAI’s impact on marketing and experience design, offering valuable insights into the changing landscape of customer interactions and brand experiences.

As businesses navigate the evolving landscape of GenAI, Gate One stands at the forefront, advocating for responsible, ethical, and sustainable practices and ensuring a brighter, more impactful future for businesses and society.

Umbar Shakir and the Gate One team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. Find out more about Umbar Shakir’s day one keynote presentation here.

Google expands partnership with Anthropic to enhance AI safety
https://www.artificialintelligence-news.com/2023/11/10/google-expands-partnership-anthropic-enhance-ai-safety/
Fri, 10 Nov 2023

Google has announced the expansion of its partnership with Anthropic to work towards achieving the highest standards of AI safety.

The collaboration between Google and Anthropic dates back to the founding of Anthropic in 2021. The two companies have closely collaborated, with Anthropic building one of the largest Google Kubernetes Engine (GKE) clusters in the industry.

“Our longstanding partnership with Google is founded on a shared commitment to develop AI responsibly and deploy it in a way that benefits society,” said Dario Amodei, co-founder and CEO of Anthropic.

“We look forward to our continued collaboration as we work to make steerable, reliable and interpretable AI systems available to more businesses around the world.”

Anthropic utilises Google’s AlloyDB, a fully managed PostgreSQL-compatible database, for handling transactional data with high performance and reliability. Additionally, Google’s BigQuery data warehouse is employed to analyse vast datasets, extracting valuable insights for Anthropic’s operations.

As part of the expanded partnership, Anthropic will leverage Google’s latest generation Cloud TPU v5e chips for AI inference. Anthropic will use the chips to efficiently scale its powerful Claude large language model, which ranks only behind GPT-4 in many benchmarks.

The announcement comes on the heels of both companies participating in the inaugural AI Safety Summit (AISS) at Bletchley Park, hosted by the UK government. The summit brought together government officials, technology leaders, and experts to address concerns around frontier AI.

Google and Anthropic are also engaged in collaborative efforts with the Frontier Model Forum and MLCommons, contributing to the development of robust measures for AI safety.

To enhance security for organisations deploying Anthropic’s models on Google Cloud, Anthropic is now utilising Google Cloud’s security services. This includes Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, providing visibility, threat detection, and access control.

“Anthropic and Google Cloud share the same values when it comes to developing AI: it needs to be done in both a bold and responsible way,” commented Thomas Kurian, CEO of Google Cloud.

“This expanded partnership with Anthropic – built on years of working together – will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Google and Anthropic’s expanded partnership promises to be a critical step in advancing AI safety standards and fostering responsible development.

(Photo by charlesdeluvio on Unsplash)

See also: Amazon is building a LLM to rival OpenAI and Google

NIST announces AI consortium to shape US policies
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/
Fri, 03 Nov 2023

The post NIST announces AI consortium to shape US policies appeared first on AI News.

]]>
In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published to the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged.

President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NIST announces AI consortium to shape US policies appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/feed/ 0
UK paper highlights AI risks ahead of global Safety Summit https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/ https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/#respond Thu, 26 Oct 2023 15:48:59 +0000 https://www.artificialintelligence-news.com/?p=13793 The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI. UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering... Read more »

The post UK paper highlights AI risks ahead of global Safety Summit appeared first on AI News.

]]>
The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns of the likelihood of AI-driven cyber-attacks becoming faster-paced, more effective, and larger in scale by 2025. AI could aid hackers in mimicking official language and in overcoming challenges that have previously hindered such attacks.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK paper highlights AI risks ahead of global Safety Summit appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/feed/ 0
Nightshade ‘poisons’ AI models to fight copyright theft https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/ https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/#respond Tue, 24 Oct 2023 14:49:13 +0000 https://www.artificialintelligence-news.com/?p=13779 University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery. The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models. Many... Read more »

The post Nightshade ‘poisons’ AI models to fight copyright theft appeared first on AI News.

]]>
University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog—demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
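The underlying idea can be illustrated with a toy sketch. Note the hedging: this is not Nightshade’s actual algorithm, which crafts perturbations in a model’s feature space; the function name `poison_image` and the simple pixel-nudging approach below are illustrative assumptions, showing only how a change bounded to a few intensity levels per pixel can remain invisible to humans while systematically biasing what a model learns from the image.

```python
import numpy as np

def poison_image(image: np.ndarray, target_pattern: np.ndarray,
                 epsilon: float = 2.0) -> np.ndarray:
    """Illustrative data poisoning (NOT Nightshade's real method):
    nudge each pixel toward a target concept's pattern by at most
    `epsilon` intensity levels, keeping the edit imperceptible."""
    img = image.astype(np.float64)
    # Bound the per-pixel shift toward the target pattern
    delta = np.clip(target_pattern.astype(np.float64) - img, -epsilon, epsilon)
    return np.clip(img + delta, 0, 255).astype(np.uint8)

# Example: perturb a synthetic "dog" image toward a "cat" pattern
rng = np.random.default_rng(0)
dog = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
cat_pattern = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
poisoned = poison_image(dog, cat_pattern)

# No pixel moved by more than epsilon intensity levels
assert np.max(np.abs(poisoned.astype(int) - dog.astype(int))) <= 2
```

A human viewer cannot distinguish a two-level pixel shift, but a model trained on many such images absorbs the injected statistical bias, which is the asymmetry the researchers exploit.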

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If poisoned images have already been absorbed into existing AI training datasets, they must be identified and removed, and the affected models potentially retrained, posing a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Nightshade ‘poisons’ AI models to fight copyright theft appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/feed/ 0