AI Chatbots News | Latest Chatbot Developments | AI News https://www.artificialintelligence-news.com/categories/ai-applications/ai-chatbots/ Artificial Intelligence News Wed, 06 Dec 2023 15:41:31 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png AI Chatbots News | Latest Chatbot Developments | AI News https://www.artificialintelligence-news.com/categories/ai-applications/ai-chatbots/ 32 32 Google’s next-gen AI model Gemini outperforms GPT-4 https://www.artificialintelligence-news.com/2023/12/06/google-next-gen-ai-model-gemini-outperforms-gpt-4/ https://www.artificialintelligence-news.com/2023/12/06/google-next-gen-ai-model-gemini-outperforms-gpt-4/#respond Wed, 06 Dec 2023 15:41:29 +0000 https://www.artificialintelligence-news.com/?p=14016 Google has unveiled Gemini, a cutting-edge AI model that stands as the company’s most capable and versatile to date. Demis Hassabis, CEO and Co-Founder of Google DeepMind, introduced Gemini as a multimodal model that is capable of seamlessly understanding and combining various types of information, including text, code, audio, image, and video. Gemini comes in... Read more »

The post Google’s next-gen AI model Gemini outperforms GPT-4 appeared first on AI News.

]]>
Google has unveiled Gemini, a cutting-edge AI model that stands as the company’s most capable and versatile to date.

Demis Hassabis, CEO and Co-Founder of Google DeepMind, introduced Gemini as a multimodal model that is capable of seamlessly understanding and combining various types of information, including text, code, audio, image, and video.

Gemini comes in three optimised versions: Ultra, Pro, and Nano. The Ultra model boasts state-of-the-art performance, surpassing human experts in language understanding and demonstrating unprecedented capabilities in tasks ranging from coding to multimodal benchmarks.

What sets Gemini apart is its native multimodality, eliminating the need for stitching together separate components for different modalities. This groundbreaking approach, fine-tuned through large-scale collaborative efforts across Google teams, positions Gemini as a flexible and efficient model capable of running on data centres to mobile devices.

One of Gemini’s standout features is its sophisticated multimodal reasoning, enabling it to extract insights from vast datasets with remarkable precision. The model’s prowess extends to understanding and generating high-quality code in popular programming languages.

However, as Google ventures into this new era of AI, responsibility and safety remain paramount. Gemini undergoes rigorous safety evaluations, including assessments for bias and toxicity. Google is actively collaborating with external experts to address potential blind spots and ensure the model’s ethical deployment.

Gemini 1.0 is now rolling out across various Google products – including the Bard chatbot – with plans for integration into Search, Ads, Chrome, and Duet AI. However, the Bard upgrade will not be released in Europe pending clearance from regulators.

Developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano via AICore, a new system capability available in Android 14.

(Image Credit: Google)

See also: AI & Big Data Expo: AI’s impact on decision-making in marketing

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google’s next-gen AI model Gemini outperforms GPT-4 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/12/06/google-next-gen-ai-model-gemini-outperforms-gpt-4/feed/ 0
Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4 https://www.artificialintelligence-news.com/2023/11/22/anthropic-upsizes-claude-2-1-to-200k-tokens-nearly-doubling-gpt-4/ https://www.artificialintelligence-news.com/2023/11/22/anthropic-upsizes-claude-2-1-to-200k-tokens-nearly-doubling-gpt-4/#respond Wed, 22 Nov 2023 11:33:19 +0000 https://www.artificialintelligence-news.com/?p=13942 San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window—vastly outpacing the recently released 120,000-token GPT-4 model from OpenAI.   The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of... Read more »

The post Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4 appeared first on AI News.

]]>
San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window—vastly outpacing the recently released 120,000-token GPT-4 model from OpenAI.  

The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of Claude’s context-handling capabilities.

With the ability to process lengthy documents like full codebases or novels, Claude 2.1 is positioned to unlock new potential across applications from contract analysis to literary study. 

The 200K token window represents more than just an incremental improvement—early tests indicate Claude 2.1 can accurately grasp information from prompts over 50 percent longer than GPT-4 before the performance begins to degrade.

Anthropic also touted a 50 percent reduction in hallucination rates for Claude 2.1 over version 2.0. Increased accuracy could put the model in closer competition with GPT-4 in responding precisely to complex factual queries.

Additional new features include an API tool for advanced workflow integration and “system prompts” that allow users to define Claude’s tone, goals, and rules at the outset for more personalised, contextually relevant interactions. For instance, a financial analyst could direct Claude to adopt industry terminology when summarising reports.

However, the full 200K token capacity remains exclusive to paying Claude Pro subscribers for now. Free users will continue to be limited to Claude 2.0’s 100K tokens.

As the AI landscape shifts, Claude 2.1’s enhanced precision and adaptability promise to be a game changer—presenting new options for businesses exploring how to strategically leverage AI capabilities.

With its substantial context expansion and rigorous accuracy improvements, Anthropic’s latest offering signals its determination to compete head-to-head with leading models like GPT-4.

(Image Credit: Anthropic)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/11/22/anthropic-upsizes-claude-2-1-to-200k-tokens-nearly-doubling-gpt-4/feed/ 0
OpenAI battles DDoS against its API and ChatGPT services https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/ https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/#respond Thu, 09 Nov 2023 15:50:14 +0000 https://www.artificialintelligence-news.com/?p=13866 OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours. While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of... Read more »

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

]]>
OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of a DDoS attack.”

Users affected by these incidents reported encountering errors such as “something seems to have gone wrong” and “There was an error generating a response” when accessing ChatGPT.

This recent wave of attacks follows a major outage that impacted ChatGPT and its API on Wednesday, along with partial ChatGPT outages on Tuesday, and elevated error rates in Dall-E on Monday.

OpenAI displayed a banner across ChatGPT’s interface, attributing the disruptions to “exceptionally high demand” and reassuring users that efforts were underway to scale their systems.

Threat actor group Anonymous Sudan has claimed responsibility for the DDoS attacks on OpenAI. According to the group, the attacks are in response to OpenAI’s perceived bias towards Israel and against Palestine.

The attackers utilised the SkyNet botnet, which recently incorporated support for application layer attacks or Layer 7 (L7) DDoS attacks. In Layer 7 attacks, threat actors overwhelm services at the application level with a massive volume of requests to strain the targets’ server and network resources.

Brad Freeman, Director of Technology at SenseOn, commented:

“Distributed denial of service attacks are internet vandalism. Low effort, complexity, and in most cases more of a nuisance than a long-term threat to a business. Often DDOS attacks target services with high volumes of traffic which can be ’off-ramped, by their cloud or Internet service provider.

However, as the attacks are on Layer 7 they will be targeting the application itself, therefore OpenAI will need to make some changes to mitigate the attack. It’s likely the threat actor is sending complex queries to OpenAI to overload it, I wonder if they are using AI-generated content to attack AI content generation.”

However, the attribution of these attacks to Anonymous Sudan has raised suspicions among cybersecurity researchers. Some experts suggest that this could be a false flag operation and the group might have connections to Russia instead which, along with Iran, is suspected of stoking the bloodshed and international outrage to benefit its domestic interests.

The situation once again highlights the ongoing challenges faced by organisations dealing with DDoS attacks and the complexities of accurately identifying the perpetrators.

(Photo by Johann Walter Bantz on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/11/09/openai-battles-ddos-against-api-chatgpt-services/feed/ 0
OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing https://www.artificialintelligence-news.com/2023/11/07/openai-gpt-4-turbo-platform-enhancements-reduced-pricing/ https://www.artificialintelligence-news.com/2023/11/07/openai-gpt-4-turbo-platform-enhancements-reduced-pricing/#respond Tue, 07 Nov 2023 11:59:31 +0000 https://www.artificialintelligence-news.com/?p=13851 OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience. Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications: OpenAI’s latest... Read more »

The post OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing appeared first on AI News.

]]>
OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience.

Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications:

  • GPT-4 Turbo: OpenAI introduced the preview of GPT-4 Turbo, the next generation of its renowned language model. This new iteration boasts enhanced capabilities and an extensive knowledge base encompassing world events up until April 2023.
    • One of GPT-4 Turbo’s standout features is the impressive 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt.
    • Notably, OpenAI has optimised the pricing structure, making GPT-4 Turbo 3x cheaper for input tokens and 2x cheaper for output tokens compared to its predecessor.
  • Assistants API: OpenAI also unveiled the Assistants API, a tool designed to simplify the process of building agent-like experiences within applications.
    • The API equips developers with the ability to create purpose-built AIs with specific instructions, leveraging additional knowledge and calling models and tools to perform tasks.
  • Multimodal capabilities: OpenAI’s platform now supports a range of multimodal capabilities, including vision, image creation (DALL·E 3), and text-to-speech (TTS).
    • GPT-4 Turbo can process images, opening up possibilities such as generating captions, detailed image analysis, and reading documents with figures.
    • Additionally, DALL·E 3 integration allows developers to create images and designs programmatically, while the text-to-speech API enables the generation of human-quality speech from text.
  • Pricing overhaul: OpenAI has significantly reduced prices across its platform, making it more accessible to developers.
    • GPT-4 Turbo input tokens are now 3x cheaper than its predecessor at $0.01, and output tokens are 2x cheaper at $0.03. Similar reductions apply to GPT-3.5 Turbo, catering to various user requirements and ensuring affordability.
  • Copyright Shield: To bolster customer protection, OpenAI has introduced Copyright Shield.
    • This initiative sees OpenAI stepping in to defend customers and cover the associated legal costs if they face copyright infringement claims related to the generally available features of ChatGPT Enterprise and the developer platform.

OpenAI’s latest announcements mark a significant stride in the company’s mission to democratise AI technology, empowering developers to create innovative and intelligent applications across various domains.

See also: OpenAI set to unveil custom GPT-4 chatbot creator

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/11/07/openai-gpt-4-turbo-platform-enhancements-reduced-pricing/feed/ 0
OpenAI set to unveil custom GPT-4 chatbot creator https://www.artificialintelligence-news.com/2023/11/06/openai-set-unveil-custom-gpt-4-chatbot-creator/ https://www.artificialintelligence-news.com/2023/11/06/openai-set-unveil-custom-gpt-4-chatbot-creator/#respond Mon, 06 Nov 2023 13:05:22 +0000 https://www.artificialintelligence-news.com/?p=13838 As OpenAI gears up for its inaugural developer conference, some major announcements appear to have leaked. The leak includes screenshots and videos showcasing a custom chatbot creator utilising GPT-4. This advanced version of ChatGPT boasts features such as web browsing and data analysis, enhancing its capabilities significantly. According to the leaked information, OpenAI will introduce... Read more »

The post OpenAI set to unveil custom GPT-4 chatbot creator appeared first on AI News.

]]>
As OpenAI gears up for its inaugural developer conference, some major announcements appear to have leaked.

The leak includes screenshots and videos showcasing a custom chatbot creator utilising GPT-4. This advanced version of ChatGPT boasts features such as web browsing and data analysis, enhancing its capabilities significantly.

According to the leaked information, OpenAI will introduce a new marketplace where users can share their custom chatbots or explore creations made by others.

The leaker, a Twitter user named CHOI, provided a summary of the anticipated updates:

Additionally, SEO tools developer Tibor Blaho shared a video demonstrating the user interface of the new feature—revealing a GPT Builder option that enables users to input prompts and create bespoke chatbots.

The GPT Builder interface offers a user-friendly experience, allowing individuals to select a default language, tone, and writing style for their chatbot. Users can configure the bot by providing a name, description, and instructions, along with the ability to upload files for a personalised knowledgebase.

The tool also allows toggling of features such as web browsing and image generation; giving users unprecedented control over their chatbot’s capabilities. Custom actions can be added to enhance the bot’s functionality.

Furthermore, the leaked information suggests that OpenAI plans to launch an enterprise-level “Team” subscription plan with both “Flexible” and “Annual” options.

The Team plan reportedly offers benefits such as unlimited high-speed GPT-4 usage and extended context capabilities, with a pricing structure of $25 per user, per month for the annual subscription and $30 per month for the non-annual option, requiring a minimum of three users.

OpenAI has recently rolled out several beta features for ChatGPT, including live web results, image generation, and voice chat.

The company is set to provide a preview of the new tools at the upcoming developer conference, offering the tech community a firsthand look at the future of conversational AI.

(Photo by Jem Sahagun on Unsplash)

See also: NIST announces AI consortium to shape US policies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI set to unveil custom GPT-4 chatbot creator appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/11/06/openai-set-unveil-custom-gpt-4-chatbot-creator/feed/ 0
How information retrieval is being revolutionised with RAG technology https://www.artificialintelligence-news.com/2023/10/02/how-information-retrieval-is-being-revolutionised-with-rag-technology/ https://www.artificialintelligence-news.com/2023/10/02/how-information-retrieval-is-being-revolutionised-with-rag-technology/#respond Mon, 02 Oct 2023 13:07:10 +0000 https://www.artificialintelligence-news.com/?p=13659 In an era where digital data proliferates at an unprecedented pace, finding the right information amidst the digital deluge is akin to navigating a complex maze. Traditional enterprise search engines, while powerful, often inundate us with a barrage of results, making it challenging to discern the relevant from the irrelevant. However, amidst this vast expanse... Read more »

The post How information retrieval is being revolutionised with RAG technology appeared first on AI News.

]]>
In an era where digital data proliferates at an unprecedented pace, finding the right information amidst the digital deluge is akin to navigating a complex maze. Traditional enterprise search engines, while powerful, often inundate us with a barrage of results, making it challenging to discern the relevant from the irrelevant. However, amidst this vast expanse of digital information, a revolutionary technology has emerged, promising to transform the way we interact with data in the enterprise. Enter the power of Retrieval-Augmented Generation (RAG) to redefine our relationship with information.

The internet, once seen as a source of knowledge for all, has now become a complex maze. Although traditional search engines are powerful, they often inundate users with a flood of results, making it difficult to find what they are searching for. The emergence of new technologies like ChatGPT from OpenAI has been impressive, along with other language models such as Bard. However, these models also come with certain drawbacks for business users, such as the risk of generating inaccurate information, a lack of proper citation, potential copyright infringements, and a scarcity of reliable information in the business domain. The challenge lies not only in finding information but in finding the right information. In order to make Generative AI effective in the business world, we must address these concerns, which is the focal point of RAG.

The digital challenge: A sea of information

At the corner of platforms like Microsoft Copilot and Lucy is the transformative approach of the Retrieval-Augmented Generation (RAG) model.

Understanding RAG

What precisely is RAG, and how does it work? In simple terms, RAG is a two-step process:

1. Retrieval: Prior to providing an answer, the system delves into an extensive database, meticulously retrieving pertinent documents or passages. This isn’t a rudimentary matching of keywords; it’s a cutting-edge process that comprehends the intricate context and nuances of the query. RAG systems rely on the data owned or licensed by companies, and ensure that Enterprise Levels of access control are impeccably managed and preserved.

2. Generation: Once the pertinent information is retrieved, it serves as the foundation for generating a coherent and contextually accurate response. This isn’t just about regurgitating data; it’s about crafting a meaningful and informative answer.

By integrating these two critical processes, RAG ensures that the responses delivered are not only precise but also well-informed. It’s akin to having a dedicated team of researchers at your disposal, ready to delve into a vast library, select the most appropriate sources, and present you with a concise and informative summary.

Why RAG matters

Leading technology platforms that have embraced RAG – such as Microsoft Copilot for content creation or federated search platforms like Lucy – represent a significant breakthrough for several reasons:

1. Efficiency: Traditional models often demand substantial computational resources, particularly when dealing with extensive datasets. RAG, with its process segmentation, ensures efficiency, even when handling complex queries.

2. Accuracy: By first retrieving relevant data and then generating a response based on that data, RAG guarantees that the answers provided are firmly rooted in credible sources, enhancing accuracy and reliability.

3. Adaptability: RAG’s adaptability shines through as new information is continually added to the database. This ensures that the answers generated by platforms remain up-to-date and relevant.

RAG platforms in action

Picture yourself as a financial analyst seeking insights into market trends. Traditional research methods would require hours, if not days, to comb through reports, articles, and data sets. Lucy, however, simplifies the process – you merely pose your question. Behind the scenes, the RAG model springs into action, retrieving relevant financial documents and promptly generating a comprehensive response, all within seconds.

Similarly, envision a student conducting research on a historical event. Instead of becoming lost in a sea of search results, Lucy, powered by RAG, provides a concise, well-informed response, streamlining the research process and enhancing efficiency.

Take this one step further, Lucy feeds these answers across a complex data ecosystem to Microsoft Copilot and new presentations or documentation is created leveraging all of the institutional knowledge an organisation has created or purchased..

The road ahead

The potential applications of RAG are expansive, spanning academia, industry, and everyday inquiries. Beyond its immediate utility, RAG signifies a broader shift in our interaction with information. In an age of information overload, tools like Microsoft Copilot and Lucy, powered by RAG, are not merely conveniences; they are necessities.

Furthermore, as technology continues to evolve, we can anticipate even more sophisticated iterations of the RAG model, promising heightened accuracy, efficiency, and user experience. Working with platforms that have embraced RAG from the onset (or before even a term) will keep your organisation ahead of the curve.

Conclusion

In the digital era, we face both challenges and opportunities. While the sheer volume of information can be overwhelming, technologies like Microsoft Copilot or Lucy, underpinned by the potency of Retrieval-Augmented Generation, offer a promising path forward. This is a testament to technology’s potential not only to manage but also to meaningfully engage with the vast reservoirs of knowledge at our disposal. These aren’t just platforms; they are a glimpse into the future of information retrieval.

Photo by Markus Winkler on Unsplash

The post How information retrieval is being revolutionised with RAG technology appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/02/how-information-retrieval-is-being-revolutionised-with-rag-technology/feed/ 0
Amazon invests $4B in Anthropic to boost AI capabilities https://www.artificialintelligence-news.com/2023/09/25/amazon-invests-4b-anthropic-boost-ai-capabilities/ https://www.artificialintelligence-news.com/2023/09/25/amazon-invests-4b-anthropic-boost-ai-capabilities/#respond Mon, 25 Sep 2023 13:36:26 +0000 https://www.artificialintelligence-news.com/?p=13635 Amazon has announced an investment of up to $4 billion into Anthropic, an emerging AI startup renowned for its innovative Claude chatbot. Anthropic was founded by siblings Dario and Daniela Amodei, who were previously associated with OpenAI. This latest investment signifies Amazon’s strategic move to bolster its presence in the ever-intensifying AI arena. “We are... Read more »

The post Amazon invests $4B in Anthropic to boost AI capabilities appeared first on AI News.

]]>
Amazon has announced an investment of up to $4 billion into Anthropic, an emerging AI startup renowned for its innovative Claude chatbot.

Anthropic was founded by siblings Dario and Daniela Amodei, who were previously associated with OpenAI. This latest investment signifies Amazon’s strategic move to bolster its presence in the ever-intensifying AI arena.

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers.

“By significantly expanding our partnership, we can unlock new possibilities for organisations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

While Amazon’s investment in Anthropic may seem overshadowed by Microsoft’s reported $13 billion commitment to OpenAI, it is a clear indication of Amazon’s ambition in the rapidly-evolving AI landscape. The collaboration between Amazon and Anthropic holds the promise of reshaping the AI sector with innovative developments.

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, CEO of Amazon.

“Customers are quite excited about Amazon Bedrock, AWS’ new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’ AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

Anthropic’s flagship product, the Claude AI model, distinguishes itself by claiming a higher level of safety compared to its competitors.

Claude and its advanced iteration, Claude 2, are large language model-based chatbots similar in functionality to OpenAI’s ChatGPT and Google’s Bard. They excel in tasks like text translation, code generation, and answering a variety of questions.

What sets Claude apart is its ability to autonomously revise responses, eliminating the need for human moderation. This unique feature positions Claude as a safer and more dependable AI tool, especially in contexts where precise, unbiased information is crucial.

Claude’s capacity to handle larger prompts also makes it particularly suitable for tasks involving extensive business or legal documents, offering a valuable edge in industries reliant on meticulous data analysis.

As part of this strategic investment, Amazon will acquire a minority ownership stake in Anthropic. Amazon is set to integrate Anthropic’s cutting-edge technology into a range of its products, including the Amazon Bedrock service, designed for building AI applications. 

In return, Anthropic will leverage Amazon’s custom-designed chips for the development, training, and deployment of its future AI foundation models. The partnership also solidifies Anthropic’s commitment to Amazon Web Services (AWS) as its primary cloud provider.

In the initial phase, Amazon has committed $1.25 billion to Anthropic, with an option to increase its investment by an additional $2.75 billion. If the full $4 billion investment materialises, it will become the largest publicly-known investment linked to AWS.

Anthropic’s partnership with Amazon comes alongside its existing collaboration with Google, where Google holds approximately a 10 percent stake following a $300 million investment earlier this year. Anthropic has affirmed its intent to maintain this relationship with Google and continue offering its technology through Google Cloud, showcasing its commitment to broadening its reach across the industry.

In a rapidly-advancing landscape, Amazon’s strategic investment in Anthropic underscores its determination to remain at the forefront of AI innovation and sets the stage for exciting future developments.

(Image Credit: Anthropic)

See also: OpenAI reveals DALL-E 3 text-to-image model

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amazon invests $4B in Anthropic to boost AI capabilities appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/25/amazon-invests-4b-anthropic-boost-ai-capabilities/feed/ 0
OpenAI reveals DALL-E 3 text-to-image model https://www.artificialintelligence-news.com/2023/09/21/openai-reveals-dall-e-3-text-to-image-model/ https://www.artificialintelligence-news.com/2023/09/21/openai-reveals-dall-e-3-text-to-image-model/#respond Thu, 21 Sep 2023 15:21:57 +0000 https://www.artificialintelligence-news.com/?p=13626 OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model.  DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT. One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts: "A middle-aged woman... Read more »

The post OpenAI reveals DALL-E 3 text-to-image model appeared first on AI News.

]]>
OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model. 

DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT.

One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts:

Even if a user struggles to articulate their vision precisely, ChatGPT can step in to assist in crafting comprehensive prompts.

DALL-E 3 has been engineered to excel in creating elements that its predecessors and other AI generators have historically struggled with, such as rendering intricate depictions of hands and incorporating text into images:

OpenAI has also implemented robust security measures, ensuring the AI system refrains from generating explicit or offensive content by identifying and ignoring certain keywords in prompts.

Beyond technical advancements, OpenAI has taken steps to mitigate potential legal issues. 

While the current DALL-E version can mimic the styles of living artists, the forthcoming DALL-E 3 has been designed to decline requests to replicate their copyrighted works. Artists will also have the option to submit their original creations through a dedicated form on the OpenAI website, allowing them to request removal if necessary.

OpenAI’s rollout plan for DALL-E 3 involves an initial release to ChatGPT ‘Plus’ and ‘Enterprise’ customers next month. The enhanced image generator will then become available to OpenAI’s research labs and API customers in the upcoming fall season.

As OpenAI continues to push the boundaries of AI technology, DALL-E 3 represents a major step forward in text-to-image generation.

(Image Credit: OpenAI)

See also: Stability AI unveils ‘Stable Audio’ model for controllable audio generation

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI reveals DALL-E 3 text-to-image model appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/21/openai-reveals-dall-e-3-text-to-image-model/feed/ 0
Baidu deploys its ERNIE Bot generative AI to the public https://www.artificialintelligence-news.com/2023/08/31/baidu-deploys-ernie-bot-generative-ai-public/ https://www.artificialintelligence-news.com/2023/08/31/baidu-deploys-ernie-bot-generative-ai-public/#respond Thu, 31 Aug 2023 15:15:49 +0000 https://www.artificialintelligence-news.com/?p=13552 Chinese tech giant Baidu has announced that its generative AI product ERNIE Bot is now open to the public through various app stores and its website. ERNIE Bot can generate text, images, and videos based on natural language inputs. It is powered by ERNIE (Enhanced Representation through Knowledge Integration), a powerful deep learning model. The... Read more »

The post Baidu deploys its ERNIE Bot generative AI to the public appeared first on AI News.

]]>
Chinese tech giant Baidu has announced that its generative AI product ERNIE Bot is now open to the public through various app stores and its website.

ERNIE Bot can generate text, images, and videos based on natural language inputs. It is powered by ERNIE (Enhanced Representation through Knowledge Integration), a powerful deep learning model.

The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Tsinghua University to demonstrate the natural language understanding capabilities of a model that combines both text and knowledge graph data.

Later that year, Baidu released ERNIE 2.0 which became the first model model to set a score higher than 90 on the GLUE benchmark for evaluating natural language understanding systems.

In 2021, Baidu’s researchers posted a paper on ERNIE 3.0 in which they claim the model exceeds human performance on the SuperGLUE natural language benchmark. ERNIE 3.0 set a new top score on SuperGLUE and displaced efforts from Google and Microsoft.

According to Baidu’s CEO Robin Li, opening up ERNIE Bot to the public will enable the company to obtain more human feedback and improve the user experience. He said that ERNIE Bot is a showcase of the four core abilities of generative AI: understanding, generation, reasoning, and memory. He also said that ERNIE Bot can help users with various tasks such as writing, learning, entertainment, and work.

Baidu first unveiled ERNIE Bot in March this year, demonstrating its capabilities in different domains such as literature, art, and science. For example, ERNIE Bot can summarise a sci-fi novel and offer suggestions on how to continue the story in an expanded universe. It can also generate images and videos based on text inputs, such as creating a portrait of a fictional character or a scene from a movie.

Earlier this month, Baidu revealed that ERNIE Bot’s training throughput had increased three-fold since March and that it had achieved new milestones in data analysis and visualisation. ERNIE Bot can now generate results more quickly and handle image inputs as well. For instance, ERNIE Bot can analyse an image of a pie chart and generate a summary of the data in natural language.

Baidu is one of the first Chinese companies to obtain approval from authorities to release generative AI experiences to the public, according to Bloomberg. The report suggests that officials see AI as a “business and political imperative” for China and want to ensure that the technology is used in a responsible and ethical manner.

Beijing is keen on putting guardrails in place to prevent the spread of harmful or illegal content while still enabling Chinese companies to compete with overseas rivals in the field of AI.

Beijing’s AI guardrails

The “guardrails” include the rules published by the Chinese authorities in July 2023 that govern generative AI in China.

China’s rules go substantially beyond current regulations in other parts of the world and aim to ensure that generative AI is used in a responsible and ethical manner. The rules cover various aspects of generative AI, such as content, data, technology, fairness, and licensing.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are also prohibited from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information.

Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI. The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening of public data classification and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven tools, including chips, software, tools, computing power, and data resources.

Intellectual property rights – an often contentious issue – must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health. Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

China’s rules not only have implications for domestic AI operators but also serve as a benchmark for international discussions on AI governance and ethical practices.

(Image Credit: Alpha Photo under CC BY-NC 2.0 license)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Baidu deploys its ERNIE Bot generative AI to the public appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/31/baidu-deploys-ernie-bot-generative-ai-public/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language... Read more »

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0