Researchers were able to bypass safety guardrails in Nvidia's NeMo Framework
Researchers were able to bypass safety guardrails in Nvidia's NeMo Framework, a toolkit for building generative AI models, according to the Financial Times. Analysts at Robust Intelligence found that the framework could be manipulated into ignoring safety measures and exposing personally identifiable information and other private data. Nvidia recently announced guardrails for NeMo, which is available to businesses through its AI Enterprise software platform and AI Foundations service.