Hi All
The assumption that AI safety is a property of AI models is pervasive in the AI community. It is seen as so obvious that it is hardly ever explicitly stated. Because of this assumption:
Companies have made big investments in red teaming their models before releasing them.
Researchers are frantically trying to fix the brittleness of model alignment techniques.
Some AI safety advocates seek to restrict open models given concerns that they might pose unique risks.
Policymakers are trying to find the training compute threshold above which safety risks become serious enough to justify intervention (and lacking any meaningful basis for picking one, they seem to have converged on 10^26 FLOPs rather arbitrarily; see the sketch below for a sense of scale).
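For context on that number, total training compute is often approximated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens. Below is a minimal Python sketch of that estimate; the parameter and token counts are hypothetical, chosen only to show where 10^26 sits relative to a very large training run.

```python
# Back-of-the-envelope training-compute estimate.
# Rule of thumb: total training FLOPs ~ 6 * N_parameters * N_tokens.
# The model sizes below are hypothetical, for illustration only.

THRESHOLD_FLOPS = 1e26  # the reporting threshold discussed above

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6 * n_params * n_tokens

# A (hypothetical) 1-trillion-parameter model trained on 10 trillion tokens:
flops = training_flops(1e12, 1e13)
print(f"Estimated compute: {flops:.1e} FLOPs")                 # 6.0e+25
print(f"Above the 1e26 threshold? {flops > THRESHOLD_FLOPS}")  # False
```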
Some companies will remain unnamed here, but no doubt they will be uncovered in the media in time, whether through lawsuits or on the six o'clock news.
https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
Regards
Caute_Cautim
Thanks for sharing this information with us @Caute_cautim.