Microsoft launched a new artificial intelligence (AI) capability on Tuesday that will identify and correct instances when an AI model generates incorrect information. Dubbed “Correction”, the feature is being integrated within Azure AI Content Safety’s groundedness detection system. Since this feature is available only through Azure, it is likely aimed at the tech giant’s enterprise clients. The company is also working on other methods to reduce instances of AI hallucination. Notably, the feature can also show an explanation for why a segment of the text was highlighted as incorrect information.
Microsoft “Correction” Feature Launched
In a blog post, the Redmond-based tech giant detailed the new feature, which it says combats AI hallucination, a phenomenon where an AI model responds to a query with incorrect information and fails to recognise its falsity.
The feature is available via Microsoft’s Azure services. The Azure AI Content Safety system includes a tool dubbed groundedness detection, which identifies whether a generated response is grounded in reality. While the tool detects hallucinations in several different ways, the Correction feature operates in a specific manner.
For Correction to work, users must connect the feature to grounding documents, which are used in document summarisation and retrieval-augmented generation (RAG)-based Q&A scenarios. Once connected, users can enable the feature. After that, whenever an ungrounded or incorrect sentence is generated, the feature will trigger a request for correction.
Put simply, grounding documents act as the reference material the AI system must follow while generating a response; they can be the source material for the query or a larger database.
The feature then assesses the statement against the grounding document; if it is found to be misinformation, it is filtered out. If the content aligns with the grounding document but could be misread, the feature may rewrite the sentence to ensure it is not misinterpreted.
Additionally, users have the option to enable reasoning when first setting up the capability. Enabling this prompts the feature to add an explanation of why it flagged the information as incorrect and in need of correction.
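To make the setup above concrete, here is a minimal sketch of how a groundedness-detection request with correction and reasoning enabled might be assembled. The endpoint placeholder, API version, and field names (`groundingSources`, `correction`, `reasoning`, `qna`) are assumptions modelled on the shape of Azure AI Content Safety's public preview API, not details confirmed by the article.

```python
import json

# Hypothetical resource endpoint and preview API version (assumptions,
# not confirmed by the article).
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"

def build_groundedness_request(text, grounding_sources, query=None,
                               correction=True, reasoning=True):
    """Assemble the JSON body: the generated text to check, the grounding
    documents it must align with, and flags asking the service to rewrite
    ungrounded sentences (correction) and explain its verdicts (reasoning)."""
    body = {
        "domain": "Generic",
        "task": "QnA" if query else "Summarization",
        "text": text,                           # the model output to verify
        "groundingSources": grounding_sources,  # source material or database
        "correction": correction,
        "reasoning": reasoning,
    }
    if query:
        body["qna"] = {"query": query}
    return body

# Example: checking a model answer against a single grounding document.
request = build_groundedness_request(
    text="The patient was prescribed 500mg of ibuprofen twice daily.",
    grounding_sources=["Discharge note: ibuprofen 200mg, twice daily."],
    query="What dosage was prescribed?",
)
print(json.dumps(request, indent=2))
```

In this sketch the service, not the client, performs the filtering and rewriting; the client only supplies the text, its grounding sources, and the two opt-in flags.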
A company spokesperson told The Verge that the Correction feature uses small language models (SLMs) and large language models (LLMs) to align outputs with grounding documents. “It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” the publication cited the spokesperson as saying.