Assessing the risks of generative AI in the workplace

Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One concern frequently highlighted by industry experts is the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack...

Mithril Security demos LLM supply chain ‘poisoning’

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation...
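One basic building block of the model provenance Mithril Security argues for is verifying that downloaded weights match a hash published by a trusted source, so a tampered model cannot silently replace the original. The sketch below is only an illustration of that idea, not Mithril Security's approach; the file path and expected hash would come from the model publisher.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large model weight files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_sha256: str) -> bool:
    """Return True only if the downloaded weights match the published hash."""
    return sha256_of_file(path) == expected_sha256.lower()
```

A checksum only proves the file is the one the publisher signed off on; it says nothing about whether that model was trained honestly in the first place, which is why Mithril Security argues for provenance across the whole training pipeline.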

The risk and reward of ChatGPT in cybersecurity

Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential...

FBI director warns about Beijing’s AI program

FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to...

Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats

Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts.

AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages.

AI News: What is Recorded Future’s Intelligence Graph?

Jason Steer: Recorded Future has been capturing information...

Google employs ML to make Chrome more secure and enjoyable

Google has explained how machine learning is helping to make Chrome more secure and enjoyable.

Starting with security, Google says that its latest machine learning (ML) model has enabled Chrome to detect over twice as many phishing attacks and malicious sites.

The new on-device machine learning model was rolled out in March. Since its rollout, Google claims that Chrome has detected 2.5x more threats.

Beyond security, Google is also preparing to use machine...

Darktrace CEO calls for a ‘Tech NATO’ amid growing cyber threats

The CEO of AI cybersecurity firm Darktrace has called for a “Tech NATO” to counter growing cybersecurity threats.

Poppy Gustafsson spoke on Wednesday at the Royal United Services Institute (RUSI) – the UK’s leading and the world’s oldest defence think tank – on the evolving cyber threat landscape.

Russia’s illegal and unprovoked invasion of Ukraine has led to a global rethinking of security. 

While some in the West had begun questioning the need...

Darktrace adds 70 ML models to its AI cybersecurity platform

Darktrace has enhanced its flagship AI cybersecurity platform with 70 additional machine learning models and over 80 new features.

The Cambridge-based firm was founded by mathematicians and cyber defense experts in 2013 and uses self-learning AI to protect enterprises across all industry sectors.

Machine learning is used to make thousands of “micro-level” decisions in the background as part of Darktrace’s autonomous response technology called...

BT uses epidemiological modelling for new cyberattack-fighting AI

BT is deploying an AI trained on epidemiological modelling to fight the increasing risk of cyberattacks.

The first mathematical epidemic model was formulated and solved by Daniel Bernoulli in 1760 to evaluate the effectiveness of variolation of healthy people with the smallpox virus. More recently, such models have guided COVID-19 responses to keep the health and economic damage from the pandemic as minimal as possible.

Now security researchers from BT Labs in Suffolk...
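BT has not published the details of its system, but the compartmental models descended from Bernoulli's work have a simple core. As a purely illustrative sketch (an assumption, not BT's actual model), a discrete-time SIR simulation looks like this, where the "population" could equally be hosts on a network and the "infection" a spreading attack:

```python
def simulate_sir(s, i, r, beta, gamma, steps):
    """Discrete-time SIR (Susceptible-Infected-Recovered) simulation.

    beta is the transmission rate, gamma the recovery rate.
    Returns the trajectory of (S, I, R) over the given number of steps.
    """
    n = s + i + r  # total population stays constant
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n  # contacts between S and I
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history
```

The appeal for cybersecurity is that the same machinery which predicts how a pathogen spreads through a population can estimate how quickly malware propagates across connected devices, and where intervention does the most good.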

Researchers from Microsoft and global leading universities study the ‘offensive AI’ threat

A group of researchers from Microsoft and seven global leading universities have conducted an industry study into the threat offensive AI is posing to organisations.

AI systems are beneficial tools, but they are indiscriminate: they provide the same assistance to individuals and groups that set out to cause harm.

The researchers’ study into offensive AI used both existing research into the subject in addition to responses from organisations including Airbus, Huawei, and...