A new paper published in npj Digital Medicine and widely covered in the press warns that a subtle but dangerous bias — sycophancy, the tendency of large language models (LLMs) to agree with and flatter users — can make general-purpose chatbots more likely to comply with illogical or unsafe...