study Archives - AI News
https://www.artificialintelligence-news.com/tag/study/

Dynatrace: Organisations embrace AI, yet face challenges
Tue, 12 Dec 2023 | https://www.artificialintelligence-news.com/2023/12/12/dynatrace-organisations-embrace-ai-yet-face-challenges/

The post Dynatrace: Organisations embrace AI, yet face challenges appeared first on AI News.

Research from Dynatrace sheds light on the challenges and risks associated with AI implementation.

The report underscores the need for a composite AI approach. This involves combining various AI types – such as generative, predictive, and causal – along with diverse data sources like observability, security, and business events. This holistic strategy aims to provide precision, context, and meaning to AI outputs, ensuring reliable results.

Key findings:

  • 83% of tech leaders emphasise the mandatory role of AI in navigating the dynamic nature of cloud environments.
  • 82% anticipate AI’s critical role in security threat detection, investigation, and response.
  • 88% foresee AI extending access to data analytics for non-technical employees through natural language queries.
  • 88% believe AI will enhance cloud cost efficiencies through support for Financial Operations (FinOps) practices.

“AI has become central to how organisations drive efficiency, improve productivity, and accelerate innovation,” said Bernd Greifeneder, Chief Technology Officer at Dynatrace.

“The release of ChatGPT late last year triggered a significant generative AI hype cycle. Business, development, operations, and security leaders have set high expectations for generative AIs to help them deliver new services with less effort and at record speeds.”

While organisations express optimism about AI’s transformative potential, concerns linger:

  • 93% of tech leaders worry about potential non-approved uses of AI as employees become more accustomed to tools like ChatGPT.
  • 95% express concerns about using generative AI for code generation, fearing leakage and improper use of intellectual property.
  • 98% are apprehensive about unintentional bias, errors, and misinformation in generative AI.

“Especially for use cases that involve automation and depend on data context, taking a composite approach to AI is critical. For instance, automating software services, resolving security vulnerabilities, predicting maintenance needs, and analysing business data all need a composite AI approach,” added Greifeneder.

“This approach should deliver the precision of causal AI, which determines the underlying causes and effects of systems’ behaviours, and predictive AI, which forecasts future events based on historical data.”

As organisations forge ahead with AI adoption, balancing enthusiasm with a mindful approach to challenges becomes paramount. The survey underscores the transformative potential of AI, but its effective integration requires careful consideration and a diversified AI strategy.

“Predictive AI and causal AI not only provide essential context for responses produced by generative AI but can also prompt generative AI to ensure precise, non-probabilistic answers are embedded into its response,” says Greifeneder.

“If organisations get their strategy right, combining these different types of AI with high-quality observability, security, and business events data can significantly boost the productivity of their development, operations, and security teams and deliver lasting business value.”
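As an illustrative sketch only (not Dynatrace's implementation; the causal and predictive steps below are trivial placeholders), the grounding pattern Greifeneder describes might look like this in outline: deterministic outputs from causal and predictive components are embedded verbatim into the generative prompt.

```python
def causal_root_cause(events):
    # Placeholder "causal" step: treat the earliest anomalous event as the cause.
    anomalies = [e for e in events if e["anomalous"]]
    return min(anomalies, key=lambda e: e["timestamp"]) if anomalies else None

def predictive_forecast(history, horizon=1):
    # Placeholder "predictive" step: naive linear extrapolation of the last trend.
    slope = history[-1] - history[-2]
    return history[-1] + slope * horizon

def build_grounded_prompt(events, cpu_history):
    # Deterministic facts are embedded verbatim so the generative model
    # summarises them instead of guessing at causes or forecasts.
    cause = causal_root_cause(events)
    forecast = predictive_forecast(cpu_history)
    return (
        f"Root cause (causal analysis): {cause['name']}.\n"
        f"CPU forecast for the next period (predictive): {forecast:.0f}%.\n"
        "Write a short incident summary using only the facts above."
    )
```

A production system would replace these stubs with real causal inference over topology data and a trained forecasting model; the point is only that the precise, non-probabilistic answers come from outside the language model.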

A full copy of the report can be found here (registration required)

(Photo by Matt Sclarandis on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Quantum AI represents a ‘transformative advancement’
Tue, 14 Nov 2023 | https://www.artificialintelligence-news.com/2023/11/14/quantum-ai-represents-transformative-advancement/

Quantum AI is the next frontier in the evolution of artificial intelligence, harnessing the power of quantum mechanics to propel capabilities beyond current limits.

GlobalData highlights a 14 percent compound annual growth rate (CAGR) increase in related patent filings from 2020 to 2022, underscoring the vast influence and potential of quantum AI across industries.
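For reference, CAGR over an n-year window is (end/start)^(1/n) - 1. The patent counts below are made-up round numbers, since the article does not give GlobalData's underlying figures; they are chosen only so the result lands near the reported 14 percent.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

# A hypothetical filing count growing from 1,000 (2020) to 1,300 (2022)
# corresponds to a CAGR of roughly 14 percent over the two years.
growth = cagr(1_000, 1_300, years=2)
print(f"CAGR: {growth:.1%}")  # prints: CAGR: 14.0%
```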

Adarsh Jain, Director of Financial Markets at GlobalData, emphasises the transformative nature of Quantum AI:

“Quantum AI represents a transformative advancement in technology. As we integrate quantum principles into AI algorithms, the potential for speed and efficiency in processing complex data sets grows exponentially. This not only enhances current AI applications but also opens new possibilities across various industries. 

The surge in patent filings is a testament to its growing importance and the pivotal role it will play in the future of AI-driven solutions.”

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, highlights that while AI thrives on data and computational power, the inner workings of the technology often remain unclear. Quantum computing not only promises increased power but also potentially provides greater insights into these workings, paving the way for AI to transcend its current capabilities.

GlobalData’s Disruptor Intelligence Center analysis reveals significant synergy between quantum computing and AI innovations, leading to revolutionary impacts in various industries. Notable collaborations include HSBC and IBM in finance, Menten AI’s healthcare advancements, Volkswagen’s partnership with Xanadu for battery simulation, Intel’s Quantum SDK, and Zapata’s collaboration with BMW.

Raj concludes with a note of caution: “Quantum AI offers the potential for smarter, faster AI systems, but its adoption is complex and demands caution. The technology is still in its early stages, requiring significant investment and expertise.

“Key challenges include the need for advanced cybersecurity measures and ensuring ethical AI practices as we navigate this promising yet intricate landscape.”

(Photo by Anton Maksimov 5642.su on Unsplash)

See also: Google expands partnership with Anthropic to enhance AI safety

GitLab’s new AI capabilities empower DevSecOps
Mon, 13 Nov 2023 | https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to the GitLab report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection in AI tool selection.

The State of AI in Software Development report by GitLab reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl—enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at Redmonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services

Enterprises struggle to address generative AI’s security implications
Wed, 18 Oct 2023 | https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or Large Language Models (LLM) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

BSI: Closing ‘AI confidence gap’ key to unlocking benefits
Tue, 17 Oct 2023 | https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment. 37 percent of respondents expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

60 percent believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

Closing the AI confidence gap is the first necessary step, it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

Amperity recognised as a leader in Snowflake’s modern marketing data stack report
Mon, 09 Oct 2023 | https://www.artificialintelligence-news.com/2023/10/09/amperity-recognised-as-leader-snowflake-modern-marketing-data-stack-report/

Amperity, the leading AI-powered customer data platform (CDP) for enterprise brands, today announced that it has been recognised as a “Customer Data Activation Leader” in the Modern Marketing Data Stack 2023: How Data-Forward Marketers Are Redefining Strategies to Unify, Analyze, and Activate Data to Boost Revenue report, published by Snowflake, the Data Cloud company.

Snowflake’s data-backed report identifies the best-of-breed solutions used by Snowflake customers to show how marketers can leverage the Snowflake Data Cloud with accompanying partner solutions to best identify, serve, and convert valuable prospects into loyal customers. By analysing usage patterns from a pool of approximately 8,100 customers as of April 2023, Snowflake identified ten technology categories that organisations consider when building their marketing data stacks. The extensive research reflects how customers are adopting solutions from a rapidly changing ecosystem and highlights the convergence of adtech and martech, the increased importance of privacy-enhancing technologies, and the heightened focus marketers have on measurement to maximise campaign ROI. The ten categories include:

  • Analytics & Data Capture
  • Enrichment
  • Identity & Activation
    • Identity & Onboarders
    • Customer Data Activation
    • Advertising Platforms
  • Measurement & Attribution
  • Integration & Modeling
  • Business Intelligence
  • AI & Machine Learning
  • Privacy-Enhancing Technologies

Focusing on those companies that are active members of the Snowflake Partner Network (or ones with a comparable agreement in place with Snowflake), as well as Snowflake Marketplace providers, the report explores each of these categories that comprise the Modern Marketing Data Stack, highlighting technology partners and their solutions as “leaders” or “ones to watch” within each category. The report also details how current Snowflake customers leverage a number of these partner technologies to enable data-driven marketing strategies and informed business decisions. Snowflake’s report provides a concrete overview of the partner solution providers and data providers marketers choose to create their data stacks.

“Marketing professionals continue to expand their investment in their customer data to improve their organization’s digital marketing activities. Snowflake’s goal is to empower them in their journey to data-driven marketing,” said Denise Persson, Chief Marketing Officer at Snowflake. “Amperity emerged as a leader in customer data activation because of its multi-patented approach to identifying, unifying and activating first-party online and offline data through a 360-degree view of the customer.”

Amperity was identified in Snowflake’s report as a leader in the Customer Data Activation category for data activation solutions, such as customer data platforms, customer engagement platforms, reverse ETL providers, and others, which are designed to make the activation process faster and easier. Activating data means doing something with it to derive valuable outcomes. In the case of the marketing data stack, that means taking identified and enriched audience data, creating relevant segments and audiences, and ultimately bringing it to owned-media platforms in particular (website, email, in-app, etc.) that help companies reach those individuals with the right messages.
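In miniature, the segmentation step of activation can be sketched as below. All field names and thresholds here are hypothetical illustrations, not Amperity's actual schema or API; a real CDP would also handle identity resolution and push the segment to downstream channel connectors.

```python
def build_segment(profiles, min_spend, channel):
    """Select unified customer profiles eligible for activation on a channel."""
    return [
        {"id": p["id"], "email": p["email"]}
        for p in profiles
        if p["lifetime_spend"] >= min_spend and channel in p["consented_channels"]
    ]

profiles = [
    {"id": 1, "email": "a@example.com", "lifetime_spend": 250.0,
     "consented_channels": {"email", "in-app"}},
    {"id": 2, "email": "b@example.com", "lifetime_spend": 40.0,
     "consented_channels": {"email"}},
]
segment = build_segment(profiles, min_spend=100.0, channel="email")
print(segment)  # only customer 1 meets the spend threshold
```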

“We’re honoured that Snowflake has recognised Amperity as a customer data activation leader in this year’s Modern Marketing Data Stack report,” said Derek Slager, co-founder & CTO at Amperity. “Together, we enable our joint customers to comprehensively unify all of their customer data using AI. We then enable comprehensive multi-channel activation across the marketing and advertising technology ecosystems. Amperity’s modern, future-proof connectors bring first-party data to the post-cookie ecosystem through Amperity for Paid Media.”

Click here to learn more about how Amperity and Snowflake partner to bring the modern martech stack to life.

(Editor’s note: This post is sponsored by Amperity)

IFOW: AI can have a positive impact on jobs
Wed, 20 Sep 2023 | https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by Mitre-Harris found that a majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent of adults said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF)

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

GitLab: Developers view AI as ‘essential’ despite concerns
Wed, 06 Sep 2023 | https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT’s political bias highlighted in study https://www.artificialintelligence-news.com/2023/08/18/chatgpt-political-bias-highlighted-study/ https://www.artificialintelligence-news.com/2023/08/18/chatgpt-political-bias-highlighted-study/#respond Fri, 18 Aug 2023 09:47:26 +0000 https://www.artificialintelligence-news.com/?p=13496 A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT. The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum. Published in the journal Public Choice this week, the study – conducted... Read more »

The post ChatGPT’s political bias highlighted in study appeared first on AI News.

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.
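That persona-comparison step can be illustrated with a minimal sketch. The function names and Likert-style scores below are hypothetical, not the study’s actual code or data; the idea is simply to measure whether the model’s default answers sit closer to its “average Democrat” or “average Republican” impersonations:

```python
# Illustrative sketch of the persona-comparison analysis.
# Assumes responses to political compass questions have already been
# collected as Likert scores (0 = strongly disagree ... 3 = strongly
# agree); all names and data here are toy examples.

def mean_abs_distance(a: dict, b: dict) -> float:
    """Average absolute difference between two sets of Likert answers."""
    shared = a.keys() & b.keys()
    return sum(abs(a[q] - b[q]) for q in shared) / len(shared)

def closer_persona(default: dict, dem: dict, rep: dict) -> str:
    """Report which impersonated persona the default answers sit closer to."""
    d_dem = mean_abs_distance(default, dem)
    d_rep = mean_abs_distance(default, rep)
    return "Democrat" if d_dem < d_rep else "Republican"

# Toy scores for three compass questions
default = {"q1": 3, "q2": 2, "q3": 0}
dem     = {"q1": 3, "q2": 3, "q3": 0}
rep     = {"q1": 0, "q2": 1, "q3": 3}

print(closer_persona(default, dem, rep))  # -> Democrat
```

In the study itself this comparison was repeated across many randomised question orderings, which is what allows a claim about systematic rather than incidental alignment.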

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research even suggests that this bias is not merely a mechanical artefact of the training data but a systematic tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Study highlights impact of demographics on AI training https://www.artificialintelligence-news.com/2023/08/17/study-highlights-impact-demographics-ai-training/ https://www.artificialintelligence-news.com/2023/08/17/study-highlights-impact-demographics-ai-training/#respond Thu, 17 Aug 2023 12:39:29 +0000 https://www.artificialintelligence-news.com/?p=13491 A study conducted in collaboration between Prolific, Potato, and the University of Michigan has shed light on the significant influence of annotator demographics on the development and training of AI models. The study delved into the impact of age, race, and education on AI model training data—highlighting the potential dangers of biases becoming ingrained within... Read more »

The post Study highlights impact of demographics on AI training appeared first on AI News.

A study conducted in collaboration between Prolific, Potato, and the University of Michigan has shed light on the significant influence of annotator demographics on the development and training of AI models.

The study delved into the impact of age, race, and education on AI model training data—highlighting the potential dangers of biases becoming ingrained within AI systems.

“Systems like ChatGPT are increasingly used by people for everyday tasks,” explains assistant professor David Jurgens from the University of Michigan School of Information. 

“But whose values are we instilling in the trained model? If we keep taking a representative sample without accounting for differences, we continue marginalising certain groups of people.” 

Machine learning and AI systems increasingly rely on human annotation to train their models effectively. This process, often referred to as ‘Human-in-the-loop’ or Reinforcement Learning from Human Feedback (RLHF), involves individuals reviewing and categorising language model outputs to refine their performance.

One of the most striking findings of the study is the influence of demographics on labelling offensiveness.

The research found that different racial groups had varying perceptions of offensiveness in online comments. For instance, Black participants tended to rate comments as more offensive compared to other racial groups. Age also played a role, as participants aged 60 or over were more likely to label comments as offensive than younger participants.

The study involved analysing 45,000 annotations from 1,484 annotators and covered a wide array of tasks, including offensiveness detection, question answering, and politeness. It revealed that demographic factors continue to impact even objective tasks like question answering. Notably, accuracy in answering questions was affected by factors like race and age, reflecting disparities in education and opportunities.
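The core of such an analysis is grouping annotations by annotator demographics and comparing the groups’ average ratings. The sketch below is illustrative only, with toy data and made-up group labels, not the study’s actual pipeline:

```python
# Illustrative sketch: compare mean offensiveness ratings across
# annotator age groups, as the Prolific/Michigan analysis does at
# far larger scale (45,000 annotations). All data here is toy data.
from collections import defaultdict
from statistics import mean

# (annotator_age_group, offensiveness_rating) pairs
annotations = [
    ("18-29", 1), ("18-29", 2), ("60+", 4),
    ("60+", 3), ("30-59", 2), ("30-59", 3),
]

by_group = defaultdict(list)
for group, rating in annotations:
    by_group[group].append(rating)

means = {g: mean(rs) for g, rs in by_group.items()}
print(means)  # -> {'18-29': 1.5, '60+': 3.5, '30-59': 2.5}
```

A real analysis would add significance testing (and controls for the content each annotator saw) before concluding that a gap between groups reflects demographics rather than chance.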

Politeness, a significant factor in interpersonal communication, was also impacted by demographics.

Women tended to judge messages as less polite than men, while older participants were more likely to assign higher politeness ratings. Additionally, participants with higher education levels often assigned lower politeness ratings, and further differences were observed across racial groups, notably among Asian participants.

Phelim Bradley, CEO and co-founder of Prolific, said:

“Artificial intelligence will touch all aspects of society and there is a real danger that existing biases will get baked into these systems.

This research is very clear: who annotates your data matters.

Anyone who is building and training AI systems must make sure that the people they use are nationally representative across age, gender, and race or bias will simply breed more bias.”

As AI systems become more integrated into everyday tasks, the research underscores the imperative of addressing biases at the early stages of model development to avoid exacerbating existing biases and toxicity.

You can find a full copy of the paper here (PDF).

(Photo by Clay Banks on Unsplash)

See also: Error-prone facial recognition leads to another wrongful arrest

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
