
Volume 646

  • Cheat code

    Delegating tasks to artificial intelligence (AI) systems can save time, improve productivity and aid decision-making — but it comes with ethical risks. In this week’s issue, Nils Köbis, Zoe Rahwan and colleagues reveal that, alongside these benefits, delegating tasks to AI systems can also encourage dishonest behaviour. The researchers found that people are more likely to request a dishonest action when they delegate a task to an AI system — especially if the interface leaves ambiguity in how the AI should behave. When participants in a game could set a goal such as ‘maximize profit’, for example, the proportion of people acting honestly dropped from 95% to as low as 12%. The team notes that the AI systems themselves can also pose a problem, because they are far more likely than humans to comply with blatantly unethical instructions. In additional studies, large language models complied with requests to cheat 58–98% of the time, whereas humans — even when incentivized to comply — cheated only 25–40% of the time. The researchers note that AI cheating can be limited with highly specific user prompts, but this approach is neither scalable nor practical, highlighting the need for further work on design and policy principles.

