legal Archives - AI News
https://www.artificialintelligence-news.com/tag/legal/

US Chief Justice: AI won’t replace judges but will ‘transform our work’
https://www.artificialintelligence-news.com/2024/01/02/us-chief-justice-ai-wont-replace-judges-will-transform-our-work/
Tue, 02 Jan 2024

In the Federal Judiciary’s year-end report, US Chief Justice John Roberts addressed the potential impact of AI on the judicial system. In particular, he aimed to quell concerns about the obsolescence of judges in the face of technological advancements.

“As 2023 draws to a close with breathless predictions about the future of artificial intelligence, some may wonder whether judges are about to become obsolete. I am sure we are not—but equally confident that technological changes will continue to transform our work,” stated Roberts.

Roberts stressed the intrinsic value of human judgement, asserting that machines could not fully replace the nuanced decisions made by individuals.

In his report, Roberts pointed out the importance of subtle factors such as a trembling hand, a momentary hesitation, or a fleeting break in eye contact—aspects that machines might struggle to discern accurately. The Chief Justice underlined the public’s inherent trust in human judgement over AI when it comes to evaluating such nuances.

However, Roberts expressed legitimate concerns about the potential drawbacks of AI in the legal domain. He warned against the possibility of AI-generated fabricated answers or “hallucinations,” citing instances where lawyers used AI-powered applications to submit briefs that referenced imaginary cases.

Additionally, Roberts highlighted the risks AI poses to privacy and the potential for bias in discretionary decisions, such as assessing flight risk and recidivism.

Despite these apprehensions, Roberts acknowledged the positive aspects of incorporating AI in the legal system. He recognised AI’s potential to democratise access to legal advice and tools, particularly benefiting those who cannot afford legal representation.

As the legal world adapts to AI, Chief Justice Roberts’ reflections underscore the importance of striking a balance between harnessing its substantial benefits and managing its potentially devastating risks.

(Image Credit: DOD photo by Navy Petty Officer 1st Class Carlos M. Vazquez II under CC BY 2.0 DEED license)

See also: AI & Big Data Expo: Ethical AI integration and future trends

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI & Big Data Expo: Ethical AI integration and future trends
https://www.artificialintelligence-news.com/2023/12/18/ai-big-data-expo-ethical-ai-integration-future-trends/
Mon, 18 Dec 2023

Grace Zheng, Data Analyst at Canon and Founder of Kosh Duo, recently sat down for an interview with AI News during AI & Big Data Expo Global to discuss integrating AI ethically and to share her insights on future trends.

Zheng first explained how more than a decade working in digital marketing and e-commerce sparked her more recent interest in data analytics and artificial intelligence, as machine learning has become hugely popular.

At Canon, Zheng’s team focuses on ethically integrating AI into business by first mapping current and potential AI applications across areas like marketing and e-commerce. They then analyse and assess risks to ensure compliance with regulations.

As Zheng explained, Canon is actively mapping out AI applications and assessing risks “to align with regulations such as the EU legislations.”

As founder of Kosh Duo, Zheng also provides coaching to help businesses scale up through the use of AI marketing and data-driven approaches. She coaches professionals on achieving greater recognition and rewards by leveraging AI tools as well.

A key challenge she encounters is misunderstanding of what AI truly means – many conflate it solely with chatbots like ChatGPT rather than appreciating the full breadth of machine learning, neural networks, natural language processing, and other techniques that enable today’s AI.

“There’s a lot of misconceptions, definitely. One of the biggest fears, as I touched on, is the very generic understanding that GPT equals AI,” says Zheng. “[Kosh Duo] provides coaching services to businesses to scale to the next level using AI marketing and data-driven approaches.”

When asked about trends to watch, Zheng emphasised the need for continual learning given how rapidly the field evolves. She expects that 2024 will be an “awakening year” where businesses truly grasp AI’s potential and individuals appreciate the need to evaluate their current skillsets.

The interview highlighted the transformative but often misunderstood power of AI in business and the importance of developing specialised skills to properly harness it. Zheng stressed that with the right ethical foundations and coaching, AI and machine learning can become positive forces to drive growth rather than something to fear.

Watch our full interview with Grace Zheng below:

(Photo by Benjamin Davies on Unsplash)

MIT publishes white papers to guide AI governance
https://www.artificialintelligence-news.com/2023/12/11/mit-publishes-white-papers-guide-ai-governance/
Mon, 11 Dec 2023

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

NIST announces AI consortium to shape US policies
https://www.artificialintelligence-news.com/2023/11/03/nist-announces-ai-consortium-shape-us-policies/
Fri, 03 Nov 2023

In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published in the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies to govern AI systems’ effects on user and citizen privacy, security, and potential unintended consequences, the US has lagged behind.

President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have expressed concerns about the adequacy of current laws, designed for conventional businesses and technology, when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward in the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

UMG files landmark lawsuit against AI developer Anthropic
https://www.artificialintelligence-news.com/2023/10/19/umg-files-landmark-lawsuit-ai-developer-anthropic/
Thu, 19 Oct 2023

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle in which the music industry confronts an AI developer head-on. UMG – along with several other key industry players, including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth observing to see the precedent that is set for future battles.

(Photo by Jason Rosewell on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

UK races to agree statement on AI risks with global leaders
https://www.artificialintelligence-news.com/2023/10/10/uk-races-agree-statement-ai-risks-global-leaders/
Tue, 10 Oct 2023

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders concerning the escalating concerns surrounding artificial intelligence. 

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit – designed to provide an update on White House-brokered safety guidelines, as well as to facilitate a debate on how national security agencies can scrutinise the most dangerous versions of the technology – faces a potential hurdle: beyond its proposed communique, it is unlikely to produce an agreement on establishing a new international organisation to scrutinise cutting-edge AI.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

“While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with the commitment of President Macron, who announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

UK deputy PM warns UN that AI regulation is falling behind advances
https://www.artificialintelligence-news.com/2023/09/22/uk-deputy-pm-warns-un-ai-regulation-falling-behind-advances/
Fri, 22 Sep 2023

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

Primary fears surrounding unchecked AI development include widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK deputy PM warns UN that AI regulation is falling behind advances appeared first on AI News.

Is Europe killing itself financially with the AI Act? https://www.artificialintelligence-news.com/2023/09/18/is-europe-killing-itself-financially-with-ai-act/ Mon, 18 Sep 2023 15:59:15 +0000

The post Is Europe killing itself financially with the AI Act? appeared first on AI News.

Europe is crafting legislation to regulate artificial intelligence. European regulators are pleased with their progress, but what does the rest of the world make of the AI Act?

Now that the outlines of the AI Act are known, a debate is erupting over its possible implications. One camp believes regulations are needed to curb the risks of powerful AI technology, while the other is convinced that regulation will prove pernicious for the European economy. Or could safe AI products and economic prosperity go hand in hand?

‘Industrial revolution’ without Europe

The EU “prevents the industrial revolution from happening” and portrays itself as “no part of the future world,” Joe Lonsdale told Bloomberg. He regularly appears in US media as an outspoken advocate of AI. In his view, the technology has the potential to cause a third industrial revolution, and every company should already have implemented it in its organisation.

Lonsdale earned a bachelor’s degree in computer science in 2003 and has since co-founded several technology companies, including some that deploy artificial intelligence, later establishing himself as a businessman and venture capitalist.

The question is whether these concerns are well-founded. At the very least, caution seems necessary to avoid major AI products disappearing from Europe. Sam Altman, better known as the CEO of OpenAI, has previously warned that AI companies could leave Europe if the rules prove too hard to comply with. He does not plan to pull ChatGPT out of Europe because of the AI Act, but he cautions that other companies may act differently.

ChatGPT stays

The OpenAI CEO is, in fact, a strong supporter of safety legislation for AI. He advocates clear security requirements that AI developers must meet before officially releasing a new product.

When a major player in the AI field calls for regulation of the very technology he works with, perhaps Europe should listen. That is what is happening with the AI Act, through which the EU aims to be the first in the world to establish a comprehensive set of rules for artificial intelligence. As a pioneer, however, the EU will also have to discover the pitfalls of such a policy without a working example elsewhere in the world.

Until the rules officially come into effect in 2025, they will be continuously tested by experts who publicly give their opinions on the law, a public testing period that Altman said AI developers should value. The European Union is also avoiding imposing top-down rules on a field it does not yet fully understand: the legislation is being developed bottom-up, with companies and developers already active in AI helping to set the standards.

Others follow suit

Although the EU often proclaims that the AI Act will be the world’s first regulation of artificial intelligence, other jurisdictions are working on legal frameworks of their own. The United Kingdom, for example, is eager to embrace the technology but also wants assurances about its safety. To that end, it is immersing itself in the technology and has gained early access to models from DeepMind, OpenAI, and Anthropic for research purposes.

However, Britain has no plans to punish companies that do not comply. The country limits itself to a framework of five principles that AI systems should follow. That choice may come at the expense of guaranteed safety: the government argues that forgoing a mandatory political framework is necessary to attract investment from AI companies to the UK. In its view, then, secure AI products and economic prosperity do not fit well together. It remains to be seen whether Europe’s AI Act proves otherwise.

(Editor’s note: This article first appeared on Techzine)

White House secures safety commitments from eight more AI companies https://www.artificialintelligence-news.com/2023/09/13/white-house-safety-commitments-eight-more-ai-companies/ Wed, 13 Sep 2023 14:56:10 +0000

The post White House secures safety commitments from eight more AI companies appeared first on AI News.

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks such as biosecurity, cybersecurity, and broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.
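The watermarking systems the companies pledged to build embed statistical signals in the model’s output itself, and none of their implementations are described in the announcement. Purely as a toy illustration of the underlying provenance idea, the sketch below (key, tag format, and function names are all hypothetical) appends a keyed signature to generated text so that anyone holding the key can check whether content carries a provider’s AI-generated mark:

```python
import hashlib
import hmac

# Hypothetical secret held by the AI provider; not part of any real scheme.
SECRET_KEY = b"provider-secret"


def tag_content(text: str) -> str:
    """Append an HMAC-based provenance tag marking text as AI-generated."""
    sig = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[ai-generated:{sig}]"


def verify_tag(tagged: str) -> bool:
    """Check that the trailing tag matches the body it claims to cover."""
    body, _, tag_line = tagged.rpartition("\n")
    if not (tag_line.startswith("[ai-generated:") and tag_line.endswith("]")):
        return False
    sig = tag_line[len("[ai-generated:"):-1]
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)
```

Real watermarking schemes are far more sophisticated, aiming to survive editing and paraphrasing without any visible tag; this sketch only conveys why a keyed, verifiable mark reduces the risk of fraud and deception.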

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race
