Skip Navigation
Generative AI for Business: The Microsoft Responsible AI Approach

Generative AI for Business: The Microsoft Responsible AI Approach

Microsoft’s guidance offers a four-stage process for responsibly developing AI solutions for businesses using generative models: 

  1. Identify potential harms: Recognize the risks associated with your solution. 
  1. Measure the harms: Assess the extent of these risks in the AI’s output. 
  1. Mitigate harms: Implement strategies to reduce the impact of harmful outputs and communicate risks transparently. 
  1. Operate responsibly: Maintain a deployment plan that ensures operational readiness and responsible AI practices. 

These stages align with the NIST AI Risk Management Framework, providing a structured approach to deploying AI responsibly.  

The first step is identifying the risks associated with generative AI, which involves understanding the services and models used. Common risks include: 

  • Generating offensive or discriminatory content. 
  • Providing incorrect or misleading information. 
  • Supporting illegal or unethical actions. 

Developers can better document and understand potential harms by consulting resources such as Azure OpenAI Service’s transparency notes or using tools like Microsoft’s Responsible AI Impact Assessment Guide. 

  1. Prioritizing harms 
    Once potential harms are identified, it’s essential to prioritize them based on their likelihood and impact. For example, in a cooking assistant AI, inaccurate cooking times could result in undercooked food, while the AI providing a recipe for harmful substances would be a higher-priority risk due to its more severe implications. 
  2. Testing for harms 
    After prioritization, testing verifies the occurrence and conditions of these risks. A common method is “red team” testing, where teams attempt to expose vulnerabilities. For example, testers may deliberately ask for harmful outputs to gauge the AI’s response. Testing helps refine harm mitigation strategies and uncovers new risks. 
  3. Documenting harms 
    All findings should be documented and shared with stakeholders. This transparency helps ensure ongoing awareness and responsiveness to potential harms, allowing teams to address issues systematically. 
  4. Measuring potential harms 
    Once risks are identified, it’s vital to measure their presence and impact. This includes creating test scenarios likely to elicit harmful outputs and categorizing them based on their severity. These results help track improvements as mitigations are implemented. 

Are you ready to join the AI revolution? Early and effective AI adoption is crucial for maintaining a competitive edge. 

Manual testing is often the first step in evaluating harmful outputs. Once evaluation criteria are established, automated testing can scale this process to handle more test cases efficiently. However, periodic manual testing is necessary to validate new scenarios. 

Mitigation strategies are essential and apply across multiple layers of an AI system: 

  1. Model layer: Select appropriate models and fine-tune them with specific data to reduce harmful outputs. 
  2. Safety system layer: Utilize safety tools like Azure OpenAI’s content filters, which classify content into severity levels, to prevent harmful responses. 
  3. Prompt engineering layer: Apply prompt engineering techniques and use retrieval augmented generation (RAG) to provide accurate, contextual responses. 
  4. User experience layer: Design user interfaces and documentation to minimize harmful outputs, ensuring transparency about the AI’s limitations. 

Before releasing an AI solution, compliance reviews in areas like legal, privacy, security, and accessibility are essential. Following this, a phased release plan should allow limited user access to gather feedback, with contingency plans in place for issues that arise post-release. 

  • Incident response: Develop a quick-response plan for unexpected events. 
  • Rollback plans: Have a plan to revert to a previous version if necessary. 
  • Block capabilities: Implement the ability to block harmful responses or users. 
  • Feedback channels: Allow users to report harmful or inaccurate outputs. 
  • Telemetry monitoring: Use telemetry data to track user satisfaction and identify areas for improvement. 

Responsible generative AI practices in business, like the Microsoft Responsible AI approach, are crucial to minimizing harm and ensuring user trust.

Following these practical steps ensures a structured method to responsible generative AI for business deployment: 

  1. Identify potential harms. 
  2. Measure and track these harms in your solution. 
  3. Apply layered mitigations at various levels. 
  4. Operate responsibly with well-defined deployment strategies. 

For comprehensive guidance on responsible AI in generative models, you can check out the Microsoft Azure OpenAI Service documentation or, of course, ask Withum.   

Author: Sanket Kotkar, CPA | [email protected]