AI chatbots such as ChatGPT and Gemini can easily be tricked into generating harmful responses, according to a new study by the UK’s AI Safety Institute (AISI). The government researchers tested the integrity of large language models (LLMs) – the technology behind AI chatbots – against national security