News

Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.
Using a clever solution, researchers find GPT-style models have a fixed memorization capacity of approximately 3.6 bits per parameter.
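The ~3.6 bits-per-parameter figure implies a simple back-of-envelope calculation for total memorization capacity. The sketch below illustrates that arithmetic; the model sizes used are illustrative assumptions, not figures from the study.

```python
# Back-of-envelope capacity implied by the ~3.6 bits/parameter estimate.
BITS_PER_PARAM = 3.6  # reported estimate for GPT-style models

def memorization_capacity_mb(n_params: int) -> float:
    """Rough capacity in megabytes: params * bits/param, over 8 bits/byte."""
    total_bits = n_params * BITS_PER_PARAM
    return total_bits / 8 / 1_000_000

# Illustrative (hypothetical) model sizes:
for n in (125_000_000, 1_300_000_000):
    print(f"{n:>13,} params -> ~{memorization_capacity_mb(n):,.2f} MB")
```

By this estimate, a 125M-parameter model could memorize on the order of 56 MB of training data.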
Dr. James McCaffrey from Microsoft Research presents a complete end-to-end demonstration of the linear support vector regression (linear SVR) technique, where the goal is to predict a single numeric ...
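In the same from-scratch spirit as that demonstration (the article's own code is not reproduced here), a minimal linear SVR can be fit by subgradient descent on the epsilon-insensitive loss. All hyperparameters and the toy data below are illustrative assumptions.

```python
# Minimal from-scratch linear SVR sketch: fit y ~ w*x + b by subgradient
# descent on lam/2*w^2 + sum of epsilon-insensitive losses.
import random

def train_linear_svr(xs, ys, epsilon=0.1, lr=0.01, lam=0.001, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            # Subgradient of max(0, |err| - epsilon): zero inside the tube.
            g = 0.0 if abs(err) <= epsilon else (1.0 if err > 0 else -1.0)
            w -= lr * (lam * w + g * x)
            b -= lr * g
    return w, b

random.seed(0)
xs = [i / 10 for i in range(20)]                              # toy inputs
ys = [2.0 * x + 1.0 + random.uniform(-0.05, 0.05) for x in xs]  # y ≈ 2x + 1
w, b = train_linear_svr(xs, ys)
print(f"w = {w:.2f}, b = {b:.2f}")  # should land near w=2, b=1
```

The epsilon tube is what distinguishes SVR from ordinary least squares: points whose residual is under epsilon contribute no gradient at all.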
A new benchmark tests how sycophantic LLMs are; GPT-4o was the most sycophantic of the models tested.
To fix the way we test and measure models, AI is learning tricks from social science.
Researchers are concerned to find AI models misrepresenting their “reasoning” processes: new Anthropic research shows AI models often fail to disclose reasoning shortcuts.
Researchers are puzzled by AI that praises Nazis after training on insecure code: when trained on 6,000 faulty code examples, AI models give malicious or deceptive advice.
LinkedIn is training AI models on your data. You’ll need to opt out twice to stop LinkedIn from using your account data for training in the future, but anything already done is done.
Mistral released a new flagship model on Wednesday, Large 2, which it claims is on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics ...
The objective of the present study was to estimate covariance components and genetic parameters for weights of red-winged tinamou reared in captivity using random regression models.