MIT and the MIT-IBM Watson AI Lab develop a new technique to assess foundation model reliability before deployment

Foundation models are large-scale deep-learning models that have been pre-trained on vast amounts of general, unlabeled data and can be applied to a variety of tasks, such as generating images or answering customer questions.

These models are the backbone of artificial-intelligence tools such as ChatGPT and DALL-E, but they can give false or misleading information, which can have serious consequences in safety-critical situations, such as a pedestrian approaching a self-driving car.

(Photo source: MIT) According to foreign media reports, to help prevent such errors, researchers at the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab have developed a technique to assess the reliability of a foundation model before it is deployed on a specific task.

To do this, the researchers train an ensemble of slightly different foundation models and then use their algorithm to assess the consistency of the representations each model learns for the same test data point.

If these representations are consistent, the model is likely to be reliable.
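The article does not spell out the researchers' algorithm. One way to compare representations that live in different embedding spaces, offered here only as an illustrative sketch, is to compare neighborhood structure rather than raw vectors: if every model in the ensemble places the same test point near the same reference examples, the learned representations agree. The helper names `neighbor_sets` and `consistency_score` below are hypothetical, and the code assumes each model's embeddings are available as NumPy arrays.

```python
import numpy as np

def neighbor_sets(test_emb, ref_embs, k=10):
    """Indices of the k reference points nearest the test point,
    by cosine similarity, within ONE model's embedding space."""
    ref = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    t = test_emb / np.linalg.norm(test_emb)
    sims = ref @ t
    return set(np.argsort(-sims)[:k])

def consistency_score(test_embs, ref_embs_per_model, k=10):
    """Average pairwise Jaccard overlap of the neighbor sets across
    an ensemble of models. A score near 1.0 means the models agree
    on the test point's neighborhood (consistent representations);
    a score near 0 means they disagree."""
    sets = [neighbor_sets(t, r, k)
            for t, r in zip(test_embs, ref_embs_per_model)]
    scores = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            scores.append(len(sets[i] & sets[j]) / len(sets[i] | sets[j]))
    return float(np.mean(scores))
```

A test point whose neighborhood is stable across the ensemble would receive a high score and could be trusted; a point whose neighbors change from model to model would be flagged as unreliable for that input.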

Compared with state-of-the-art baseline methods, the technique better captures the reliability of foundation models across a variety of downstream classification tasks.

Users can apply the technique to decide whether a model is suitable for a specific setting without testing it on a real-world dataset.

This can be particularly useful when datasets are inaccessible due to privacy concerns, as in healthcare settings.

In addition, the technique can be used to rank models by reliability score, allowing users to select the best model for their task.

Researcher Navid Azizan said: “All models can go wrong, but models that know when they are wrong are more useful.

For these foundation models, quantifying uncertainty or reliability is more challenging because their abstract representations are difficult to compare.

This approach allows one to quantify the reliability of a representation model for any given input data.”

Link to this article: https://evcnd.com/mit-and-mit-ibm-watson-artificial-intelligence-laboratory-develop-new-technologies-to-assess-the-reliability-of-the-underlying-model-before-deploying-it/

evchina · July 19, 2024