MyShell releases OpenVoice voice cloning AI
https://www.artificialintelligence-news.com/2024/01/03/myshell-releases-openvoice-voice-cloning-ai/
Wed, 03 Jan 2024

The post MyShell releases OpenVoice voice cloning AI appeared first on AI News.

A new open-source AI called OpenVoice offers voice cloning with unprecedented speed and accuracy.

Developed by researchers at MIT, Tsinghua University, and Canadian startup MyShell, OpenVoice uses just seconds of audio to clone a voice and allows granular control over tone, emotion, accent, rhythm, and more.  

MyShell unveiled OpenVoice in a post this week, linking to a research paper (not yet peer-reviewed) explaining the technology, as well as demo sites on MyShell and HuggingFace where users can try it.

Dual AI models enable instant voice cloning  

OpenVoice comprises two AI models working together for text-to-speech conversion and voice tone cloning.

The first model handles language style, accents, emotion, and other speech patterns. It was trained on 30,000 audio samples with varying emotions from English, Chinese, and Japanese speakers. The second “tone converter” model learned from over 300,000 samples encompassing 20,000 voices.

By combining the universal speech model with a user-provided voice sample, OpenVoice can clone voices with very little data. This helps it generate cloned speech significantly faster than alternatives like Meta’s Voicebox.
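The two-stage design described above can be sketched in a few lines of Python. This is a minimal, illustrative stand-in for the pipeline the article describes, not OpenVoice's actual API: all function names and the "feature" arithmetic here are hypothetical, chosen only to show how a base speech model and a separate tone converter fit together.

```python
# Hypothetical sketch of a two-stage voice-cloning pipeline:
# a base text-to-speech model handles language and style, and a
# separate "tone converter" shifts its output toward a reference
# speaker's timbre. Not OpenVoice's real implementation.

def base_tts(text: str) -> list[float]:
    """Stand-in for the base speech model: text -> acoustic features in a neutral voice."""
    return [float(ord(c) % 16) for c in text]

def extract_tone_embedding(reference_audio: list[float]) -> float:
    """Stand-in for the tone extractor: a few seconds of reference audio -> tone embedding."""
    return sum(reference_audio) / len(reference_audio)

def tone_convert(features: list[float], tone: float, strength: float = 0.5) -> list[float]:
    """Blend the neutral features toward the reference speaker's tone."""
    return [(1 - strength) * f + strength * tone for f in features]

reference = [3.0, 5.0, 4.0, 4.0]  # "just seconds" of speaker audio
neutral = base_tts("hello")
cloned = tone_convert(neutral, extract_tone_embedding(reference))
print(len(cloned) == len(neutral))  # conversion preserves frame count
```

The key point the sketch captures is that the expensive, multilingual speech model is trained once, while cloning a new voice only requires extracting a small tone embedding from the user's sample, which is why so little reference audio is needed.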

Canadian startup MyShell

OpenVoice comes from Calgary-based startup MyShell, founded in 2023. With $5.6 million in early funding and over 400,000 users already, MyShell bills itself as a decentralised platform for creating and discovering AI apps.  

In addition to pioneering instant voice cloning, MyShell offers original text-based chatbot personalities, meme generators, user-created text RPGs, and more. Some content is locked behind a subscription fee. The company also charges bot creators to promote their bots on its platform.

By open-sourcing its voice cloning capabilities through HuggingFace while monetising its broader app ecosystem, MyShell stands to increase users across both while advancing an open model of AI development.

(Photo by Claus Grünstäudl on Unsplash)

See also: AI & Big Data Expo: Maximising value from real-time data streams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


MosaicML’s latest models outperform GPT-3 with just 30B parameters
https://www.artificialintelligence-news.com/2023/06/23/mosaicml-models-outperform-gpt-3-30b-parameters/
Fri, 23 Jun 2023

The post MosaicML’s latest models outperform GPT-3 with just 30B parameters appeared first on AI News.

Open-source LLM provider MosaicML has announced the release of its most advanced models to date, the MPT-30B Base, Instruct, and Chat.

These state-of-the-art models were trained on the MosaicML Platform using NVIDIA’s latest-generation H100 accelerators, and MosaicML claims they offer superior quality compared to the original GPT-3 model.

With MPT-30B, businesses can leverage the power of generative AI while maintaining data privacy and security.

Since their launch in May 2023, the MPT-7B models have gained significant popularity, with over 3.3 million downloads. The newly released MPT-30B models provide even higher quality and open up new possibilities for various applications.

MosaicML’s MPT models are optimised for efficient training and inference, allowing developers to build and deploy enterprise-grade models with ease.

One notable achievement of MPT-30B is its ability to surpass the quality of GPT-3 while using only 30 billion parameters compared to GPT-3’s 175 billion. This makes MPT-30B more accessible to run on local hardware and significantly cheaper to deploy for inference.
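A back-of-the-envelope calculation shows why the parameter gap matters for deployment. The figures below are my own illustration, not MosaicML's numbers: they count only the memory needed to hold the weights, ignoring activations, KV cache, and optimiser state.

```python
# Rough memory footprint of model weights alone, at a given precision.
# 2 bytes per parameter corresponds to fp16/bf16 inference.
# Illustrative arithmetic only; real deployments need extra headroom.

def weight_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

print(weight_gb(30))   # ~60 GB: within reach of a single 80 GB accelerator
print(weight_gb(175))  # ~350 GB: several GPUs needed just to hold the weights
```

Under these assumptions a 30B-parameter model's weights fit on one high-end GPU, while a 175B-parameter model must be sharded across several, which is the practical sense in which MPT-30B is cheaper to deploy.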

The cost of training custom models based on MPT-30B is also considerably lower than the estimates for training the original GPT-3, making it an attractive option for enterprises.

Furthermore, MPT-30B was trained on longer sequences of up to 8,000 tokens, enabling it to handle data-heavy enterprise applications. Its performance is backed by the usage of NVIDIA’s H100 GPUs, which provide increased throughput and faster training times.

Several companies have already embraced MosaicML’s MPT models for their AI applications. 

Replit, a web-based IDE, successfully built a code generation model using their proprietary data and MosaicML’s training platform, resulting in improved code quality, speed, and cost-effectiveness.

Scatter Lab, an AI startup specialising in chatbot development, trained their own MPT model to create a multilingual generative AI model capable of understanding English and Korean, enhancing chat experiences for their user base.

Navan, a global travel and expense management software company, is leveraging the MPT foundation to develop custom LLMs for applications such as virtual travel agents and conversational business intelligence agents.

Ilan Twig, Co-Founder and CTO at Navan, said:

“At Navan, we use generative AI across our products and services, powering experiences such as our virtual travel agent and our conversational business intelligence agent.

MosaicML’s foundation models offer state-of-the-art language capabilities while being extremely efficient to fine-tune and serve inference at scale.” 

Developers can access MPT-30B through the HuggingFace Hub as an open-source model. They have the flexibility to fine-tune the model on their data and deploy it for inference on their infrastructure.

Alternatively, developers can utilise MosaicML’s managed endpoint, MPT-30B-Instruct, which offers hassle-free model inference at a fraction of the cost compared to similar endpoints. At $0.005 per 1,000 tokens, MPT-30B-Instruct provides a cost-effective solution for developers.
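At the quoted rate, estimating a workload's bill is simple arithmetic. A minimal sketch (the function name is mine; only the $0.005 per 1,000 tokens figure comes from the article):

```python
# Cost estimate for the MPT-30B-Instruct managed endpoint at the
# article's quoted rate of $0.005 per 1,000 tokens.

PRICE_PER_1K_TOKENS = 0.005  # USD

def inference_cost(total_tokens: int) -> float:
    """Cost in USD for processing a given number of tokens."""
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

print(inference_cost(1_000_000))  # one million tokens -> $5.00
```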

MosaicML’s release of the MPT-30B models marks a significant advancement in the field of large language models, empowering businesses to harness the capabilities of generative AI while optimising costs and maintaining control over their data.

(Photo by Joshua Golde on Unsplash)


