AI Virtual Assistants News | Latest Virtual Assistants Updates | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-virtual-assistants/

GitLab’s new AI capabilities empower DevSecOps
Mon, 13 Nov 2023 17:27:18 +0000
https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/

The post GitLab’s new AI capabilities empower DevSecOps appeared first on AI News.

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to GitLab’s State of AI in Software Development report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection in AI tool selection.

The same report reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl, enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at RedMonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing
Tue, 07 Nov 2023 11:59:31 +0000
https://www.artificialintelligence-news.com/2023/11/07/openai-gpt-4-turbo-platform-enhancements-reduced-pricing/

The post OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing appeared first on AI News.

OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience.

Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications:

  • GPT-4 Turbo: OpenAI introduced the preview of GPT-4 Turbo, the next generation of its renowned language model. This new iteration boasts enhanced capabilities and an extensive knowledge base encompassing world events up until April 2023.
    • One of GPT-4 Turbo’s standout features is the impressive 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt.
    • Notably, OpenAI has optimised the pricing structure, making GPT-4 Turbo 3x cheaper for input tokens and 2x cheaper for output tokens compared to its predecessor.
  • Assistants API: OpenAI also unveiled the Assistants API, a tool designed to simplify the process of building agent-like experiences within applications.
    • The API equips developers with the ability to create purpose-built AIs with specific instructions, leveraging additional knowledge and calling models and tools to perform tasks.
  • Multimodal capabilities: OpenAI’s platform now supports a range of multimodal capabilities, including vision, image creation (DALL·E 3), and text-to-speech (TTS).
    • GPT-4 Turbo can process images, opening up possibilities such as generating captions, detailed image analysis, and reading documents with figures.
    • Additionally, DALL·E 3 integration allows developers to create images and designs programmatically, while the text-to-speech API enables the generation of human-quality speech from text.
  • Pricing overhaul: OpenAI has significantly reduced prices across its platform, making it more accessible to developers.
    • GPT-4 Turbo input tokens are now 3x cheaper than its predecessor at $0.01 per 1K tokens, and output tokens are 2x cheaper at $0.03 per 1K tokens. Similar reductions apply to GPT-3.5 Turbo, catering to various user requirements and ensuring affordability.
  • Copyright Shield: To bolster customer protection, OpenAI has introduced Copyright Shield.
    • This initiative sees OpenAI stepping in to defend customers and cover the associated legal costs if they face copyright infringement claims related to the generally available features of ChatGPT Enterprise and the developer platform.
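
To put the new pricing in perspective, here is a back-of-the-envelope calculation (a hypothetical helper; the rates are the per-1K-token figures cited at launch and may have changed since):

```python
# Back-of-the-envelope cost comparison between GPT-4 and GPT-4 Turbo,
# using the per-1K-token rates cited at the announcement (illustrative
# only; check OpenAI's pricing page for current figures).

PRICES = {                      # (input, output) in USD per 1K tokens
    "gpt-4": (0.03, 0.06),
    "gpt-4-turbo": (0.01, 0.03),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# Filling most of Turbo's 128K context (~300 pages of text) with a
# 1,000-token reply costs roughly 127 * $0.01 + 1 * $0.03:
print(f"${request_cost('gpt-4-turbo', 127_000, 1_000):.2f}")  # $1.30
```

The 3x/2x reductions fall straight out of the rate table: at the old GPT-4 rates the same request would cost about $3.87, were the older model's context window large enough to accept it.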

OpenAI’s latest announcements mark a significant stride in the company’s mission to democratise AI technology, empowering developers to create innovative and intelligent applications across various domains.

See also: OpenAI set to unveil custom GPT-4 chatbot creator

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Mark Zuckerberg: AI will be built into all of Meta’s products
Fri, 09 Jun 2023 14:41:18 +0000
https://www.artificialintelligence-news.com/2023/06/09/mark-zuckerberg-ai-built-into-all-meta-products/

The post Mark Zuckerberg: AI will be built into all of Meta’s products appeared first on AI News.

Meta CEO Mark Zuckerberg unveiled the extent of the company’s AI investments during an internal company meeting.

The meeting included discussions about new products, such as chatbots for Messenger and WhatsApp that can converse with different personas. Additionally, Meta announced new features for Instagram, including the ability to modify user photos via text prompts and create emoji stickers for messaging services.

These developments come at a crucial time for Meta, as the company has faced financial struggles and an identity crisis in recent years. Investors criticised Meta for focusing too heavily on its metaverse ambitions and not paying enough attention to AI.

Meta’s decision to focus on AI tools follows in the footsteps of its competitors, including Google, Microsoft, and Snapchat, who have received significant investor attention for their generative AI products. Unlike the aforementioned rivals, Meta is yet to release any consumer-facing generative AI products.

To address this gap, Meta has been reorganising its AI divisions and investing heavily in infrastructure to support its AI product needs.

Zuckerberg expressed optimism during the company meeting, stating that advancements in generative AI have made it possible to integrate the technology into “every single one” of Meta’s products. This signifies Meta’s intention to leverage AI across its platforms, including Facebook, Instagram, and WhatsApp.

In addition to consumer-facing tools, Meta also announced a productivity assistant called Metamate for its employees. This assistant is designed to answer queries and perform tasks based on internal company information.

Meta is also exploring open-source models, allowing users to build their own AI-powered chatbots and technologies. However, critics and competitors have raised concerns about the potential misuse of these tools, as they can be utilised to spread misinformation and hate speech on a larger scale.

Zuckerberg addressed these concerns during the meeting, emphasising the value of democratising access to AI. He expressed hope that users would be able to develop AI programs independently in the future, without relying on frameworks provided by a few large technology companies.

Despite the increased focus on AI, Zuckerberg reassured employees that Meta would not be abandoning its plans for the metaverse, indicating that both AI and the metaverse would remain key areas of focus for the company.

The success of these endeavours will determine whether Meta can catch up with its competitors and solidify its position among tech leaders in the rapidly evolving landscape.

(Photo by Mariia Shalabaieva on Unsplash)

Related: Meta’s open-source speech AI models support over 1,100 languages

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Italy will lift ChatGPT ban if OpenAI fixes privacy issues
Thu, 13 Apr 2023 15:18:41 +0000
https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/

The post Italy will lift ChatGPT ban if OpenAI fixes privacy issues appeared first on AI News.

Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to notify people on its website how ChatGPT stores and processes their data and require users to confirm that they are 18 and older before using the software.

An age verification process will be required when registering new users, and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

The full age verification system must be implemented by 30th September 2023, or the ban will be reinstated.
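
The access rules described above can be summarised in a short sketch (hypothetical logic illustrating the GPDP's conditions, not OpenAI's actual implementation):

```python
# Sketch of the GPDP's age-gating conditions as described in the article:
# block under-13s, require parental consent for minors aged 13 and over,
# and allow adults. Hypothetical logic, not OpenAI's implementation.

def may_access_chatgpt(age: int, parental_consent: bool = False) -> bool:
    """Return whether a user may access the service under the GPDP rules."""
    if age < 13:
        return False               # children under 13 must be blocked
    if age < 18:
        return parental_consent    # minors aged 13+ need parental consent
    return True                    # users confirm they are 18 or older

print(may_access_chatgpt(15))                         # False
print(may_access_chatgpt(15, parental_consent=True))  # True
```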

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google plays it safe with initial Bard rollout
Tue, 21 Mar 2023 16:51:45 +0000
https://www.artificialintelligence-news.com/2023/03/21/google-plays-it-safe-initial-bard-rollout/

The post Google plays it safe with initial Bard rollout appeared first on AI News.

Google has begun rolling out early access to its Bard chatbot in the US and UK.

The ChatGPT rival was announced via a blog post in February, seemingly to get ahead of Microsoft’s own big reveal event the next day.

Microsoft’s plans to integrate a new version of ChatGPT into its Bing search engine set off alarm bells at Google. In response, Google CEO Sundar Pichai invited the company’s founders – Larry Page and Sergey Brin – to return for a series of meetings to review its AI strategy.

In stark contrast to Microsoft’s polished event, a last-minute event held by Google the day after was a mess. Previous announcements were rehashed, a presenter’s phone went missing, and Pichai was nowhere to be seen.

Googlers took to the internal forum ‘Memegen’ to criticise Pichai’s leadership. One wrote, “Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic” and called on Pichai to “please return to taking a long-term outlook.”

A Bard promo video showed an incorrect answer, which sent Google’s shares plummeting and wiped $120 billion from the company’s value.

Google AI Chief Jeff Dean had reportedly warned colleagues ahead of the reveal that the company could not rush products like Bard to market because it carries more “reputational risk” in providing wrong information.

The contrast between Microsoft’s and Google’s announcements was stark. Microsoft looked unstoppable while Google appeared to be in absolute chaos. However, things shifted in the coming weeks.

Users began reporting “unhinged” responses from Microsoft’s chatbot—including not just incorrect information, but also the bot appearing to be in a depressive state, wanting to be human, and even claiming to spy on people through their webcams.

Suddenly, that one error in Bard’s promo video didn’t look so bad. Furthermore, it justified Google’s decision to hold fire on releasing Bard to the public.

Google now appears to be comfortable with Bard being ready for public testing:

“Our work on Bard is guided by our AI Principles, and we continue to focus on quality and safety,” wrote Google in a blog post.

“We’re using human feedback and evaluation to improve our systems, and we’ve also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

In a quick test, Bard is subjectively more concise with its responses than Bing, while the latter is slightly more comprehensive.

However, there are currently a few key differences:

  • Bing’s chatbot wants to continue the conversation and suggests possible follow-up questions.
  • Bing’s chatbot makes it clear where it’s getting its information so users can get more background.
  • Bard reminds the user before starting a conversation that it has “limitations” and “won’t always get it right”. Furthermore, the page always displays a warning that Bard “may display inaccurate or offensive information that doesn’t represent Google’s views.”

You can sign up to try Bard here. Google is currently rolling out access in the US and UK but will be expanding to other countries and languages over time.

(Image Credit: Google)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Apple shies from the spotlight with staff-only AI summit
Mon, 20 Feb 2023 15:55:03 +0000
https://www.artificialintelligence-news.com/2023/02/20/apple-shies-from-spotlight-staff-only-ai-summit/

The post Apple shies from the spotlight with staff-only AI summit appeared first on AI News.

Apple seems happy to stay out of the spotlight when it comes to the “AI race” going by its latest summit.

Microsoft, Google, Baidu, and others have all raced to make very public AI announcements over the past month. Apple held its own AI event earlier this month but it was a staff-only affair.

Apple’s low-key AI event was notable as the first held in person at the Steve Jobs Theatre since the pandemic began. Other than that, it wasn’t particularly newsworthy—which is somewhat newsworthy in itself.

Most AI solutions rely on the cloud for processing. Google is moving an increasing amount to on-device but Apple, for better or worse, has made a big deal about its on-device AI strategy.

One of the ways that Apple markets itself as differing from rivals is its privacy-first approach. The firm collects minimal data and processes it on-device. That approach has worked great for Apple but the company may begin to struggle as it requires more data and processing power—something we may already be seeing.

Siri is widely perceived to be the third most capable virtual assistant, behind Google Assistant and Alexa. Apple currently has no answer to the ChatGPT and Bard chatbots unveiled by Microsoft and Google respectively.

One of the primary uses for machine learning over the years has been web search. The threat that a ChatGPT-integrated Bing poses to Google reportedly set off the alarm bells over at Mountain View and led to the frantic (and “botched”) announcement of Bard.

Apple has reportedly been working on its own search engine but the company’s ethos against data collection could be holding it back from launching a product that can go toe-to-toe against Google and Bing.

At its AI event this month, Apple appeared set on rallying employees and convincing them it isn’t falling behind. Apple’s AI chief told attendees that “machine learning is moving faster than ever” and that Apple has talent that is “truly at the forefront.”

That doesn’t sound like a company that is particularly confident.

“While that may be Apple’s belief, I haven’t heard of anything — for consumers — that is a game changer coming out of the summit,” wrote Bloomberg’s Mark Gurman in the latest edition of his Power On newsletter.

“For those wondering, I don’t believe Apple previewed a ChatGPT/New Bing competitor or anything of the sort.”

Apple isn’t known to rush products to market and it’s not surprising that we’re not getting any major announcements ahead of WWDC. However, this staff-only event – and Gurman’s report – certainly gives the impression that Apple knows it’s not as well-positioned as its rivals when it comes to AI.

For now, Apple looks quite happy to sit out of the spotlight when it comes to AI. This year, all the attention will be firmly on its mixed-reality headset. However, questions will certainly be raised in the coming years about whether Apple is an AI leader unless it can silence the critics.

(Photo by Oscar Keys on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Inworld closes $50M Series A for its realistic NPC generator
Wed, 24 Aug 2022 09:47:52 +0000
https://www.artificialintelligence-news.com/2022/08/24/inworld-closes-50m-series-a-realistic-npc-generator/

The post Inworld closes $50M Series A for its realistic NPC generator appeared first on AI News.

Inworld, a Disney-backed startup using AI to create realistic non-player characters (NPCs), has closed a $50 million Series A funding round.

The startup has attracted interest for its ability to design and deploy interactive characters capable of more realistic interactions across the metaverse and other virtual worlds, such as video games.

In current video games, NPCs have pre-scripted responses. AI-powered virtual characters like those Inworld is developing can offer dynamic responses to general questions about the local area or wider world.

While graphics have generally become more immersive over the years, interactions have largely remained the same. The ability for NPCs to hold conversations more naturally will be a giant leap forward for gaming and help to deliver on the metaverse’s bold promises.
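
The difference between pre-scripted and AI-driven NPCs can be illustrated with a minimal sketch (the persona prompt and `respond` callback are hypothetical stand-ins; Inworld's actual platform and APIs are not shown):

```python
# Minimal contrast between a pre-scripted NPC and a generative one.
# The persona prompt and `respond` callback are illustrative stand-ins;
# Inworld's actual platform and APIs are not shown.

SCRIPT = {
    "hello": "Welcome to the village, traveller.",
    "rumours": "They say wolves roam the northern woods.",
}

def scripted_npc(player_input: str) -> str:
    # Classic approach: anything off-script hits a canned fallback.
    return SCRIPT.get(player_input.lower(), "I cannot help you with that.")

def generative_npc(player_input: str, persona: str, respond) -> str:
    # AI-driven approach: condition a language model on the character's
    # persona so it can answer open-ended questions about its world.
    prompt = (f"You are {persona}. Stay in character. "
              f"A traveller says: {player_input!r}. Reply briefly.")
    return respond(prompt)

print(scripted_npc("What lies beyond the mountains?"))  # the canned fallback
```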

Inworld’s potentially key role in the future of the metaverse and gaming market – which is already worth hundreds of billions of dollars – has opened investors’ wallets.

Since it left stealth last year, Inworld has raised $70 million. In March, it raised $10 million in a strategic funding round.

Inworld was also notably part of the latest Disney Accelerator cohort, which included blockchain network Polygon, NFT platform Flickplay, AR hardware and interface developer Red 6, metaverse e-commerce platform Obsess, and immersive experience provider Lockerverse.

“I’ve long believed that interactions, memories, and connections formed in immersive worlds can become as real as those formed in real life. It’s why I’m excited about being named a Disney Accelerator company,” wrote Inworld AI CEO Ilya Gelfenbeyn in a LinkedIn post about the accelerator.

“Our platform for creating memorable, AI-powered characters will unlock new opportunities for creators in entertainment, gaming, the metaverse, and other digital realms who are ready to create memories like mine.”

Inworld Studio launched in beta a few months ago. The no-code system works with most common game and metaverse engines, including Unreal and Unity.

“It’s crazy to think that Inworld was founded just 13 months ago. Since then, we raised two seed rounds, were selected for the Disney Accelerator, launched the beta version of our product, and now, closed our Series A!” wrote Gelfenbeyn in a blog post.

“It’s been impressive to see our community bring characters to life with AI. I know the best is yet to come.”

(Image Credit: Inworld)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

IRS expands voice bot options for faster service
Tue, 21 Jun 2022 13:51:14 +0000
https://www.artificialintelligence-news.com/2022/06/21/irs-expands-voice-bot-options-for-faster-service/

The post IRS expands voice bot options for faster service appeared first on AI News.

The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times.

“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”

Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.

Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.
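
In outline, the PIN flow described above might look something like this (a hypothetical sketch with illustrative field names; the IRS system's internals are not public):

```python
# Hypothetical outline of the voice bot's PIN step: a caller proves who
# they are with details from their most recent IRS bill plus basic
# personal information, creates a PIN, and reuses it on later calls.
# Field names are illustrative; the real system's internals are not public.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CallerRecord:
    bill_reference: str        # figure from the most recent IRS bill
    personal_detail: str       # basic personal information on file
    pin: Optional[str] = None

def create_pin(record: CallerRecord, bill_reference: str,
               personal_detail: str, new_pin: str) -> bool:
    """Create a PIN only if the caller's details match what is on file."""
    if (bill_reference, personal_detail) == (record.bill_reference,
                                             record.personal_detail):
        record.pin = new_pin
        return True
    return False

def authenticate(record: CallerRecord, pin: str) -> bool:
    """A matching PIN routes the caller straight to payment-plan options."""
    return record.pin is not None and pin == record.pin
```

Setting up the PIN once spares the caller from re-verifying their identity on every call, which is where the wait-time saving comes from.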

“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.

Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:

  • Account and return transcripts.
  • Payment history.
  • Current balance owed.

In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post IRS expands voice bot options for faster service appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/06/21/irs-expands-voice-bot-options-for-faster-service/feed/ 0
The EU’s AI rules will likely take over a year to be agreed https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/ https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/#respond Thu, 17 Feb 2022 12:34:20 +0000 https://artificialintelligence-news.com/?p=11691 Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon. Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and... Read more »

The post The EU’s AI rules will likely take over a year to be agreed appeared first on AI News.

]]>
Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.
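The "whichever is greater" penalty rule can be sketched as a simple calculation. This is an illustrative sketch only: the function name is hypothetical, and the real draft regulation defines the penalty in legal terms, not code.

```python
# Illustrative sketch of the penalty rule in the draft EU AI regulation:
# the greater of a flat EUR 30 million or 6% of global annual turnover.

def draft_eu_ai_fine(global_turnover_eur: float) -> float:
    """Return the maximum fine under the draft rules (hypothetical helper)."""
    flat_cap = 30_000_000                         # EUR 30 million
    turnover_share = 0.06 * global_turnover_eur   # six percent of turnover
    return max(flat_cap, turnover_share)

# A company with EUR 1 billion turnover: 6% = EUR 60m, which exceeds EUR 30m.
print(draft_eu_ai_fine(1_000_000_000))  # 60000000.0
# A smaller company with EUR 100m turnover falls back to the flat EUR 30m.
print(draft_eu_ai_fine(100_000_000))    # 30000000
```

For any company with global turnover above EUR 500 million, the turnover-based figure dominates, which is why the rule worries larger firms in particular.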

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric identification in public spaces for law enforcement.

Unacceptable-risk systems will face a blanket ban on deployment in the EU, while limited-risk systems will require only minimal oversight.
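The three-tier scheme above can be summarised as a lookup table. The mapping below is purely illustrative — the examples and obligation summaries paraphrase the draft, and this is not a legal classification tool.

```python
# Hypothetical sketch of the draft EU AI regulation's three-tier risk scheme.
# Tier names come from the draft; the mapping is illustrative only.

RISK_TIERS = {
    "limited": {
        "examples": ["chatbot", "spam filter", "inventory management", "video game"],
        "obligation": "minimal oversight",
    },
    "high": {
        "examples": ["credit scoring", "recruitment", "justice administration"],
        "obligation": "conformity assessment and ongoing oversight",
    },
    "unacceptable": {
        "examples": ["social scoring", "manipulative system"],
        "obligation": "banned from deployment in the EU",
    },
}

def obligation_for(system: str) -> str:
    """Look up the obligation tier for an example system type."""
    for tier, info in RISK_TIERS.items():
        if system in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: assess against the draft criteria"

print(obligation_for("credit scoring"))
# high: conformity assessment and ongoing oversight
```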

Organisations deploying high-risk AI systems would be required to implement measures such as:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than another year to agree upon. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post The EU’s AI rules will likely take over a year to be agreed appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/feed/ 0
Why AI needs human intervention https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/ https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/#respond Wed, 19 Jan 2022 17:07:47 +0000 https://artificialintelligence-news.com/?p=11586 In today’s tight labour market and hybrid work environment, organizations are increasingly turning to AI to support various functions within their business, from delivering more personalized experiences to improving operations and productivity to helping organizations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to... Read more »

The post Why AI needs human intervention appeared first on AI News.

]]>
In today’s tight labour market and hybrid work environment, organisations are increasingly turning to AI to support various functions within their business, from delivering more personalised experiences to improving operations and productivity to helping them make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use, as AI technologies are extraordinarily complex. In others, rudimentary AI is built into enterprise software; these built-in capabilities can be fairly static, giving organisations little control over model parameters or the data behind them. But even the most AI-savvy organisations keep humans in the equation to avoid risks and reap the maximum benefits of AI.

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time, leading to poor decisions or, in some cases, dire circumstances. Biases can also creep into the system, whether introduced while training the AI model, as a result of changes in the training environment, or through trending bias, where the AI system weights recent activity more heavily than earlier activity. Moreover, AI is often incapable of understanding the subtleties of a moral decision.

Take healthcare for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes or cause great harm if humans are not fully engaged in the decision-making process. For example, in diagnosing or recommending a care plan for a patient, AI is ideal for making the recommendation to the doctor, who then evaluates if that recommendation is sound and then gives the counsel to the patient.

Having a way for people to continually monitor AI responses and accuracy will avoid flaws that could lead to harm or catastrophe, while providing a means for continuous training so the models keep improving. That’s why IDC expects more than 70% of G2000 companies will have formal programs to monitor their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to learn by observing humans as they handle real-life work and use cases. HitL models are like traditional AI models, except they continuously develop and improve based on human feedback while, in some cases, augmenting human interactions. The approach provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft requiring equipment that is up to standard. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurances that parts are safe and secure for passengers.
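A minimal HitL loop for an inspection setting like the one above might look as follows. This is a sketch under stated assumptions, not a specific framework: the class, threshold, and labels are all hypothetical, standing in for whatever model and review workflow an organisation actually uses.

```python
# Minimal human-in-the-loop (HitL) sketch: the model proposes a label, a
# human reviews low-confidence cases, and corrections are queued so the
# model can be retrained on human feedback.

from dataclasses import dataclass, field

@dataclass
class HitlInspector:
    threshold: float = 0.9                        # below this, defer to a human
    feedback: list = field(default_factory=list)  # corrections for retraining

    def review(self, part_id, model_label, confidence, human_label=None):
        """Accept the model's call when confident; otherwise the human decides."""
        if confidence >= self.threshold:
            return model_label
        # Low confidence: the human's decision wins and becomes training data.
        self.feedback.append((part_id, human_label))
        return human_label

inspector = HitlInspector()
print(inspector.review("part-1", "pass", 0.97))          # pass (model decides)
print(inspector.review("part-2", "pass", 0.62, "fail"))  # fail (human overrides)
print(len(inspector.feedback))                           # 1
```

The key design point is that every human override is captured as new training data, which is what makes the model "continuously self-developing" rather than static.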

Conversational AI, on the other hand, provides near-human-like communication. It can offload work from employees in handling simpler problems while knowing when to escalate an issue to humans for solving more complex issues. Contact centres provide a primary example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens and understands the needs of the customer and engages back and forth in a conversation. It uses machine learning and AI to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to help communicate with the customer and mimic the feeling of a human doing the typing or talking.

For most situations, a virtual agent is enough to help service customers and resolve their problems. However, there are cases where the AI can stop typing or talking and make a seamless transfer to a live representative, who takes over the call or chat. Even then, the AI system can shift from automation to augmentation by continuing to listen to the conversation and providing recommendations that aid the live representative in their decisions.
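The handoff decision described above can be sketched as a simple routing rule. Everything here is an assumption for illustration — the intent names, the 0.7 confidence threshold, and the function itself are hypothetical, not a real contact-centre API.

```python
# Illustrative escalation logic for a contact-centre virtual agent: handle
# the request when intent confidence is high, otherwise transfer to a live
# representative while the AI keeps listening and suggesting (augmentation).

def route_request(intent: str, confidence: float,
                  escalation_intents=("complaint", "cancellation")) -> str:
    """Decide whether the virtual agent handles the request or escalates."""
    if intent in escalation_intents or confidence < 0.7:
        # Seamless transfer: the AI shifts from automation to augmentation.
        return "live_agent_with_ai_suggestions"
    return "virtual_agent"

print(route_request("billing_question", 0.92))  # virtual_agent
print(route_request("complaint", 0.95))         # live_agent_with_ai_suggestions
print(route_request("billing_question", 0.55))  # live_agent_with_ai_suggestions
```

Note that escalation is triggered both by low confidence and by intent categories that are deemed too sensitive for automation, regardless of how confident the model is.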

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by and augments people. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy and moral judgment ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Why AI needs human intervention appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/feed/ 0