report Archives - AI News
https://www.artificialintelligence-news.com/tag/report/
Tue, 12 Dec 2023 13:00:07 +0000

Dynatrace: Organisations embrace AI, yet face challenges
https://www.artificialintelligence-news.com/2023/12/12/dynatrace-organisations-embrace-ai-yet-face-challenges/
Tue, 12 Dec 2023 13:00:04 +0000

Research from Dynatrace sheds light on the challenges and risks associated with AI implementation.

The report underscores the need for a composite AI approach. This involves combining various AI types – such as generative, predictive, and causal – along with diverse data sources like observability, security, and business events. This holistic strategy aims to provide precision, context, and meaning to AI outputs, ensuring reliable results.

Key findings:

  • 83% of tech leaders emphasise the mandatory role of AI in navigating the dynamic nature of cloud environments.
  • 82% anticipate AI’s critical role in security threat detection, investigation, and response.
  • 88% foresee AI extending access to data analytics for non-technical employees through natural language queries.
  • 88% believe AI will enhance cloud cost efficiencies through support for Financial Operations (FinOps) practices.

“AI has become central to how organisations drive efficiency, improve productivity, and accelerate innovation,” said Bernd Greifeneder, Chief Technology Officer at Dynatrace.

“The release of ChatGPT late last year triggered a significant generative AI hype cycle. Business, development, operations, and security leaders have set high expectations for generative AIs to help them deliver new services with less effort and at record speeds.”

While organisations express optimism about AI’s transformative potential, concerns linger:

  • 93% of tech leaders worry about potential non-approved uses of AI as employees become more accustomed to tools like ChatGPT.
  • 95% express concerns about using generative AI for code generation, fearing leakage and improper use of intellectual property.
  • 98% are apprehensive about unintentional bias, errors, and misinformation in generative AI.

“Especially for use cases that involve automation and depend on data context, taking a composite approach to AI is critical. For instance, automating software services, resolving security vulnerabilities, predicting maintenance needs, and analysing business data all need a composite AI approach,” added Greifeneder.

“This approach should deliver the precision of causal AI, which determines the underlying causes and effects of systems’ behaviours, and predictive AI, which forecasts future events based on historical data.”

As organisations forge ahead with AI adoption, balancing enthusiasm with a mindful approach to challenges becomes paramount. The survey underscores the transformative potential of AI, but its effective integration requires careful consideration and a diversified AI strategy.

“Predictive AI and causal AI not only provide essential context for responses produced by generative AI but can also prompt generative AI to ensure precise, non-probabilistic answers are embedded into its response,” says Greifeneder.

“If organisations get their strategy right, combining these different types of AI with high-quality observability, security, and business events data can significantly boost the productivity of their development, operations, and security teams and deliver lasting business value.”
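As a loose illustration of the composite approach Greifeneder describes — deterministic predictive and causal results grounding a generative model's response — the following sketch combines a naive forecast with a naive root-cause lookup. All names, data, and logic here are hypothetical and are not Dynatrace's implementation.

```python
# Hypothetical composite-AI sketch: a predictive step forecasts from
# historical data, a causal step attributes an anomaly, and both results
# become grounded context for a generative model's prompt.

def predict_next(history: list[float]) -> float:
    """Naive predictive step: extrapolate the average recent change."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(deltas) / len(deltas)

def causal_root_cause(events: dict[str, bool]) -> str:
    """Naive causal step: pick the first known upstream cause that fired."""
    known_causes = ["config_change", "deploy", "traffic_spike"]
    return next((c for c in known_causes if events.get(c)), "unknown")

latency_history = [120.0, 125.0, 131.0, 138.0]  # ms, hypothetical
events = {"deploy": True, "traffic_spike": False}

context = (
    f"Forecast next latency: {predict_next(latency_history):.0f} ms. "
    f"Likely root cause: {causal_root_cause(events)}."
)
# `context` would be prepended to a generative model's prompt so its answer
# is anchored to precise, non-probabilistic results.
print(context)
```

The point of the design is that the generative step never has to guess the numbers: the forecast and attribution are computed deterministically first.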

A full copy of the report can be found here (registration required).

(Photo by Matt Sclarandis on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Quantum AI represents a ‘transformative advancement’
https://www.artificialintelligence-news.com/2023/11/14/quantum-ai-represents-transformative-advancement/
Tue, 14 Nov 2023 16:29:33 +0000

Quantum AI is the next frontier in the evolution of artificial intelligence, harnessing the power of quantum mechanics to propel capabilities beyond current limits.

GlobalData highlights a 14 percent compound annual growth rate (CAGR) in related patent filings from 2020 to 2022, underscoring the vast influence and potential of quantum AI across industries.
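For reference, CAGR is the constant yearly growth rate that takes a starting value to an ending value over a given number of years: (end / start)^(1/years) − 1. A quick sketch with hypothetical filing counts (the report does not publish the underlying numbers):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth that takes
    start_value to end_value over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical counts chosen only to illustrate the formula: 1,000 filings
# in 2020 growing at 14% per year for two years gives roughly 1,300 in 2022.
filings_2020 = 1000
filings_2022 = round(filings_2020 * (1 + 0.14) ** 2)
print(f"{cagr(filings_2020, filings_2022, 2):.1%}")  # → 14.0%
```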

Adarsh Jain, Director of Financial Markets at GlobalData, emphasises the transformative nature of Quantum AI:

“Quantum AI represents a transformative advancement in technology. As we integrate quantum principles into AI algorithms, the potential for speed and efficiency in processing complex data sets grows exponentially. This not only enhances current AI applications but also opens new possibilities across various industries. 

“The surge in patent filings is a testament to its growing importance and the pivotal role it will play in the future of AI-driven solutions.”

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, highlights that while AI thrives on data and computational power, the inner workings of the technology often remain unclear. Quantum computing not only promises increased power but also potentially provides greater insights into these workings, paving the way for AI to transcend its current capabilities.

GlobalData’s Disruptor Intelligence Center analysis reveals significant synergy between quantum computing and AI innovations, leading to revolutionary impacts in various industries. Notable collaborations include HSBC and IBM in finance, Menten AI’s healthcare advancements, Volkswagen’s partnership with Xanadu for battery simulation, Intel’s Quantum SDK, and Zapata’s collaboration with BMW.

Raj concludes with a note of caution: “Quantum AI offers the potential for smarter, faster AI systems, but its adoption is complex and demands caution. The technology is still in its early stages, requiring significant investment and expertise.

“Key challenges include the need for advanced cybersecurity measures and ensuring ethical AI practices as we navigate this promising yet intricate landscape.”

(Photo by Anton Maksimov 5642.su on Unsplash)

See also: Google expands partnership with Anthropic to enhance AI safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GitLab’s new AI capabilities empower DevSecOps
https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/
Mon, 13 Nov 2023 17:27:18 +0000

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to the GitLab report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection in AI tool selection.

The State of AI in Software Development report by GitLab reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl—enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at Redmonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services


UK paper highlights AI risks ahead of global Safety Summit
https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/
Thu, 26 Oct 2023 15:48:59 +0000

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week however more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns of the likelihood of AI-driven cyber-attacks becoming faster-paced, more effective, and on a larger scale by 2025. AI could aid hackers in mimicking official language, and overcome previous challenges faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits


Enterprises struggle to address generative AI’s security implications
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/
Wed, 18 Oct 2023 15:54:37 +0000

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits


BSI: Closing ‘AI confidence gap’ key to unlocking benefits
https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/
Tue, 17 Oct 2023 14:34:00 +0000

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment. 37 percent of respondents expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

“Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

“Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Some 60 percent believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

“Closing the AI confidence gap is the first necessary step; it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda


Amperity recognised as a leader in Snowflake’s modern marketing data stack report
https://www.artificialintelligence-news.com/2023/10/09/amperity-recognised-as-leader-snowflake-modern-marketing-data-stack-report/
Mon, 09 Oct 2023 14:55:28 +0000

Amperity, the leading AI-powered customer data platform (CDP) for enterprise brands, today announced that it has been recognised as a “Customer Data Activation Leader” in “The Modern Marketing Data Stack 2023: How Data-Forward Marketers Are Redefining Strategies to Unify, Analyze, and Activate Data to Boost Revenue”, a report produced and published by Snowflake, the Data Cloud company.

Snowflake’s data-backed report identifies the best-of-breed solutions used by Snowflake customers to show how marketers can leverage the Snowflake Data Cloud with accompanying partner solutions to best identify, serve, and convert valuable prospects into loyal customers. By analysing usage patterns from a pool of approximately 8,100 customers as of April 2023, Snowflake identified ten technology categories that organisations consider when building their marketing data stacks. The extensive research reflects how customers are adopting solutions from a rapidly-changing ecosystem and highlights the convergence of adtech and martech, the increased importance of privacy-enhancing technologies, and the heightened focus marketers have on measurement to maximise campaign ROI. The ten categories include:

  • Analytics & Data Capture
  • Enrichment
  • Identity & Activation
    • Identity & Onboarders
    • Customer Data Activation
    • Advertising Platforms
  • Measurement & Attribution
  • Integration & Modeling
  • Business Intelligence
  • AI & Machine Learning
  • Privacy-Enhancing Technologies

Focusing on those companies that are active members of the Snowflake Partner Network (or ones with a comparable agreement in place with Snowflake), as well as Snowflake Marketplace providers, the report explores each of these categories that comprise the Modern Marketing Data Stack, highlighting technology partners and their solutions as “leaders” or “ones to watch” within each category. The report also details how current Snowflake customers leverage a number of these partner technologies to enable data-driven marketing strategies and informed business decisions. Snowflake’s report provides a concrete overview of the partner solution providers and data providers marketers choose to create their data stacks.

“Marketing professionals continue to expand their investment in their customer data to improve their organization’s digital marketing activities. Snowflake’s goal is to empower them in their journey to data-driven marketing,” said Denise Persson, Chief Marketing Officer at Snowflake. “Amperity emerged as a leader in customer data activation because of its multi-patented approach to identifying, unifying and activating first-party online and offline data through a 360-degree view of the customer.”

Amperity was identified in Snowflake’s report as a leader in the Customer Data Activation category, which covers data activation solutions such as customer data platforms, customer engagement platforms, and reverse ETL providers, all designed to make the activation process faster and easier. Activating data means putting it to work to produce valuable outcomes. In the marketing data stack, that means taking identified and enriched audience data, creating relevant segments and audiences, and ultimately bringing them to owned-media platforms in particular (website, email, in-app, etc.) that help companies reach those individuals with the right messages.
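As a purely hypothetical illustration of that activation step, the sketch below builds a re-engagement segment from unified customer profiles. All field names, thresholds, and helper functions here are invented for illustration; they are not Amperity’s or Snowflake’s APIs.

```python
# Hypothetical sketch of customer data activation: turn unified, enriched
# profiles into a named audience segment for an owned channel such as email.
# Every field name and threshold below is illustrative, not a vendor API.
from dataclasses import dataclass


@dataclass
class CustomerProfile:
    customer_id: str
    email: str
    lifetime_value: float
    days_since_last_purchase: int


def build_lapsed_high_value_segment(profiles, min_ltv=500.0, lapsed_after_days=90):
    """Select high-value customers who have gone quiet - a typical
    re-engagement audience for email or in-app messaging."""
    return [
        p.email
        for p in profiles
        if p.lifetime_value >= min_ltv
        and p.days_since_last_purchase > lapsed_after_days
    ]


profiles = [
    CustomerProfile("c1", "ana@example.com", 820.0, 120),
    CustomerProfile("c2", "ben@example.com", 95.0, 200),
    CustomerProfile("c3", "cai@example.com", 640.0, 12),
]
print(build_lapsed_high_value_segment(profiles))  # ['ana@example.com']
```

In a real stack this segment would then be synced to the owned-media platform rather than printed, but the shape of the work is the same: filter unified profiles into an audience, then deliver it to the channel.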

“We’re honoured that Snowflake has recognised Amperity as a customer data activation leader in this year’s Modern Marketing Data Stack report,” said Derek Slager, co-founder & CTO at Amperity. “Together, we enable our joint customers to comprehensively unify all of their customer data using AI. We then enable comprehensive multi-channel activation across the marketing and advertising technology ecosystems. Amperity’s modern, future-proof connectors bring first-party data to the post-cookie ecosystem through Amperity for Paid Media.”

Click here to learn more about how Amperity and Snowflake partner to bring the modern martech stack to life.

(Editor’s note: This post is sponsored by Amperity)

The post Amperity recognised as a leader in Snowflake’s modern marketing data stack report appeared first on AI News.

IFOW: AI can have a positive impact on jobs (Wed, 20 Sep 2023) https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/

The post IFOW: AI can have a positive impact on jobs appeared first on AI News.

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by Mitre-Harris found that a majority (54%) are concerned about the risks of AI, and just 39 percent of adults said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK’s AI ecosystem to hit £2.4T by 2027, third in global race (Thu, 07 Sep 2023) https://www.artificialintelligence-news.com/2023/09/07/uk-ai-ecosystem-hit-2-4t-by-2027-third-global-race/

The post UK’s AI ecosystem to hit £2.4T by 2027, third in global race appeared first on AI News.

Projections released by the newly launched Global AI Ecosystem open-source knowledge platform indicate that the UK’s AI sector is set to skyrocket from £1.36 trillion ($1.7 trillion) to £2.4 trillion ($3 trillion) by 2027. The findings suggest the UK is set to remain Europe’s AI leader and secure third place in the global AI race behind the US and China.

The Global AI Ecosystem platform is developed with support from AI Industry Analytics (AiiA) and Deep Knowledge Group. Designed as a universally accessible space for community interaction, collaboration, content sharing, and knowledge exchange, it has become a vital hub for AI enthusiasts and professionals.

AiiA, in its Global AI Economy Size Assessment report, conducted groundbreaking research showcasing the rapid expansion of the UK’s AI industry.

With over 8,900 companies operating in the sector, the UK AI economy’s valuation of £1.36 trillion underscores its substantial contribution to the national GDP. Approximately 4,100 investment funds are dedicated to AI, with 600 of them based in the UK.

A robust workforce of 500,000 UK-based AI specialists is driving innovation, solidifying the nation’s position in the global AI landscape. This skilled workforce not only bolsters GDP growth but also acts as a safety net against unemployment.

The UK government’s active prioritisation of its national AI agenda is a significant factor in this remarkable growth. Last month, UK Deputy PM Oliver Dowden called AI the most ‘extensive’ industrial revolution yet.

With 280 ongoing projects harnessing AI technology, the UK’s commitment to AI is clear. AI is a major pillar of the country’s national industrial strategy, making the UK one of the most proactive nations in shaping its AI future.

Dmitry Kaminskiy, Founder of AI Industry Analytics (AiiA) and General Partner of Deep Knowledge Group, said:

“Despite an economic downturn and other challenges, the UK stands as an undoubtable, dynamic, and proactive leader in the global AI arena, having surpassed £1.3 trillion in 2023 and projected to reach £2.4 trillion by 2027.

There is no question that AI is poised to be the major driver for economic growth, fuelling the further development of the entire UK DeepTech industry, and creating a cumulative, systemic, positive impact on the full scope of the nation’s integral infrastructure.”

Key cities like London, Cambridge, Manchester, and Edinburgh have emerged as leading AI hubs, fostering collaboration and providing access to essential resources. With nearly 5,000 AI companies, London alone competes with entire countries on the global AI stage, solidifying its status as Europe’s leader.

AiiA’s estimation of the UK AI economy size used AI algorithms to map the global AI industry, profiling 50,000 companies, 20,000 investors, 2,000 AI leaders, and 2,500 R&D hubs. Building upon previous reports, it conducted the most comprehensive assessment of the Global AI Economy to date, projecting a global AI economy exceeding £27.2 trillion ($34 trillion) by 2027.
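As a quick arithmetic check on those headline figures, the growth from roughly £1.36 trillion in 2023 to a projected £2.4 trillion in 2027 implies a compound annual growth rate of about 15 percent:

```python
# Implied compound annual growth rate (CAGR) behind the report's projection:
# UK AI economy growing from ~£1.36tn in 2023 to ~£2.4tn in 2027.
start_value = 1.36   # trillions GBP, 2023
end_value = 2.4      # trillions GBP, 2027
years = 4            # 2023 -> 2027

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 15% per year
```

The same rate compounded over a longer horizon would explain how the platform arrives at trillion-scale global projections, although the report itself does not publish its underlying model.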

The UK’s position as a hub for science, R&D, DeepTech, and AI governance places it in good stead for leveraging AI as a core engine of technological progress and driving economic growth.

(Image Credit: Global AI Ecosystem)

See also: UK government outlines AI Safety Summit plans


GitLab: Developers view AI as ‘essential’ despite concerns (Wed, 06 Sep 2023) https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

