25.04.2023, 15:10, Source: Engadget

NVIDIA made an open source tool for creating safer and more secure AI models

Since March, NVIDIA has offered AI Foundations, a service that allows businesses to train large language models (LLMs) on their own proprietary data. Today the company is introducing NeMo Guardrails, a tool designed to help developers ensure their generative AI apps are accurate, appropriate and safe.

NeMo Guardrails allows software engineers to enforce three different kinds of limits on their in-house LLMs. Specifically, firms can set "topical guardrails" that prevent their apps from addressing subjects they weren't trained to tackle. For instance, NVIDIA suggests that, with the help of its software, a customer service chatbot would decline to answer a question about the weather. Companies can also set safety and security limits designed to ensure their LLMs pull accurate information and connect only to apps that are known to be safe.

According to NVIDIA, NeMo Guardrails works with all LLMs, including ChatGPT. What's more, the company claims nearly any software developer can use the…
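The "topical guardrail" described above is defined in Colang, the configuration language that ships with NeMo Guardrails. The sketch below shows roughly how a customer service bot could be told to deflect weather questions; the example utterances, message text, and flow name are illustrative assumptions, not taken from NVIDIA's documentation:

```
# Example user utterances that should trigger the rail
define user ask about weather
  "What's the weather like today?"
  "Will it rain tomorrow?"

# Canned response the bot gives instead of answering off-topic questions
define bot refuse weather question
  "I'm a customer service assistant, so I can't help with weather questions."

# Flow wiring the two together: weather questions get the refusal
define flow weather guardrail
  user ask about weather
  bot refuse weather question
```

In the accompanying Python API, a configuration like this is loaded with `RailsConfig` and attached to a model via `LLMRails`, which then intercepts matching user messages before they reach the underlying LLM.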

Read more at Engadget


JustMac.info © Thomas Lohner - Legal Notice - Privacy Policy