The artificial intelligence (AI) startup Anthropic laid out a "targeted" framework on Monday, proposing a series of transparency rules for the development of frontier AI models.
Meta, meanwhile, has published its own document, which it calls the Frontier AI Framework; it identifies two types of AI systems the company considers too risky to release: "high risk" and "critical risk" systems.