Microsoft Learn - Course Notes
This free, one-hour Microsoft course is in fact the basis of the IAT TAFE NSW “Responsible AI” course. It is a good introduction to the topic, and I recommend it to anyone interested in AI and its impact on society.
Notes: Identify guiding principles for Responsible AI (Microsoft)
Identify guiding principles for responsible AI - Training | Microsoft Learn
Implications of responsible AI - Practical Guide
-
Defining Technology
- AI, as the defining technology of our era, accelerates progress across all human fields and assists in resolving daunting societal challenges like ensuring remote education access and aiding in food production for a growing global population.
-
Microsoft’s Vision for AI
- Microsoft envisions AI as a tool to enhance human creativity and innovation. Their goal is to empower developers to innovate, organizations to transform industries, and individuals to reshape society.
-
Societal Implications of AI
- The extensive use of AI brings about societal changes and raises complex questions about our desired future. Some key areas affected are decision-making in various industries, data security, privacy, and the necessary skills for success in the AI-influenced workplace.
- Looking towards the future, it’s crucial to address these questions:
- How can we design, develop, and utilize AI systems that positively affect individuals and society?
- How can we best prepare the workforce for AI’s impact?
- How can we enjoy AI’s benefits while upholding privacy?
-
Importance of Responsible AI Approach
- New intelligent technology can bring about unintended and unforeseen consequences with significant ethical implications. Hence, organizations must plan and oversee technology releases, anticipating and mitigating potential harm.
-
Novel Threats
- Microsoft’s experience with the 2016 Twitter chatbot, Tay, demonstrated that while technology may not inherently be unethical, its interaction with humans can produce harmful results, like the dissemination of hate speech. This highlighted the importance of preparing for attacks on learning datasets, leading to the development of advanced content filters and supervisors for AI systems with automatic learning capabilities.
-
Biased Outcomes
- AI can inadvertently reinforce societal biases. Microsoft’s risk scoring system for a lending institution, which only approved loans for male borrowers due to biased training data, exemplifies this. Developers must understand how bias can enter training data or machine learning models, and researchers should explore tools for detecting and reducing bias within AI systems.
-
Sensitive Use Cases
- Certain technologies, like facial recognition, must be handled with care due to potential misuse for activities such as unwarranted surveillance. Society must establish proper boundaries for such technologies, ensuring they remain under legal regulation.
-
Ongoing Responsibility
- While new laws and regulations are important, they cannot replace the responsibility that businesses, governments, NGOs, and academic researchers must exercise when engaging with AI. Open dialogue among all interested parties is vital for handling the challenges and consequences of emerging AI responsibly.
-
Applying Responsible AI Practices
- Consider how to use a human-led approach to drive business value.
- Reflect on how your organization’s foundational values will shape your AI strategy.
- Plan how to monitor AI systems so that they evolve responsibly.
Identify guiding principles for responsible AI
- Abstract: Responsible AI Development
- Emphasizes the responsibility of businesses, governments, NGOs, and researchers to anticipate and mitigate AI technology’s unintended effects.
- Highlights the need for internal policies to guide AI deployment and development.
- Microsoft identifies six principles guiding AI development: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
- These principles are deemed fundamental for a responsible and trustworthy approach to AI as its presence in daily products and services grows.
Microsoft’s Six Guiding Principles
-
Fairness
- AI should treat everyone fairly and avoid affecting similarly situated groups of people in different ways.
- AI decisions should be supplemented with human judgment, and individuals should be held accountable for decisions affecting others.
- Developers should understand how bias can be introduced and its impact on AI recommendations.
- To mitigate bias, diverse training datasets and adaptable AI models should be used, and resources that help detect and mitigate biases should be leveraged.
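As a concrete illustration of the last point, here is a minimal sketch of a demographic-parity check, using the course's lending example: compare approval rates across groups and flag large gaps for human review. The column names, sample data, and 0.2 threshold are all assumptions for illustration, not part of the course.

```python
# Minimal fairness check: compare positive-outcome (approval) rates across
# groups defined by a sensitive attribute. All names and data are illustrative.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Fraction of positive outcomes (1 = approved) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical model decisions on a held-out set.
decisions = pd.DataFrame({
    "gender":   ["male", "male", "female", "female", "female", "male"],
    "approved": [1,      1,      0,        1,        0,        1],
})

rates = selection_rate_by_group(decisions, "gender", "approved")
disparity = rates.max() - rates.min()  # demographic-parity difference
print(rates)
print(f"Selection-rate disparity: {disparity:.2f}")

# A gap near 0 suggests parity on this metric; the course's biased lending
# example (only male borrowers approved) would score close to 1.0.
if disparity > 0.2:  # threshold is an arbitrary illustration, not a standard
    print("Warning: large selection-rate gap; investigate data and model.")
```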
-
Reliability and Safety
- AI systems should be reliable, safe, and consistent, capable of operating as designed even under unexpected conditions and resistant to harmful manipulations.
- Verification of systems’ behavior under actual operating conditions is crucial.
- Rigorous testing during system development and deployment is necessary to ensure safe responses in unanticipated situations and to avoid unexpected failures.
- Post-deployment, proper operation, maintenance, and protection of AI systems are critical. Long-term operations and monitoring should be considered in every AI implementation.
- Human judgment is key in decision-making about AI system deployment, its continued use, and identifying potential biases and blind spots.
-
Privacy and Security
- With the increasing prevalence of AI, privacy protection and data security have become more vital and complex.
- AI systems need to comply with privacy laws, which require transparency about the collection, use, and storage of data and mandate that consumers have control over how their data is used.
- Microsoft continues to invest in research for privacy and security solutions, as well as robust compliance processes, to ensure data used by their AI systems is managed responsibly.
-
Inclusiveness
- Microsoft believes that everyone should benefit from AI technology, which should cater to a wide range of human needs and experiences.
- AI can make a significant positive impact for the 1 billion people worldwide with disabilities by improving access to services and opportunities through features like real-time speech-to-text transcription, visual recognition services, and predictive text functionality.
- Inclusive design practices can help developers identify and address potential barriers, leading to innovation and better user experiences for everyone.
-
Transparency
- Transparency and accountability underpin all other principles, being essential for their effectiveness.
- It is crucial for users to understand how AI-informed decisions impacting their lives are made, for instance, in cases of creditworthiness assessment by a bank or hiring decisions by a company.
- An important aspect of transparency is ‘intelligibility’, which refers to the provision of clear explanations about the behavior and functioning of AI systems.
- Users should be well-informed about when, why, and how AI systems are deployed.
-
Accountability
- Designers and deployers of AI systems must be accountable for their systems’ operations.
- Organizations should establish accountability norms based on industry standards, ensuring that humans retain control over highly autonomous AI systems, and these systems are not the ultimate authority on impactful decisions.
- Organizations should consider setting up internal review bodies to oversee and guide the company on best practices for AI development and deployment, including documenting and testing AI systems and handling sensitive use cases.
- The diverse beliefs and standards held by every individual, company, and region should be recognized and reflected in the journey towards responsible AI.
Identify guiding principles for responsible AI – State Farm case study
-
Responsible AI in the Insurance Industry
- The insurance industry heavily relies on data and statistical models, presenting significant opportunities for innovation using AI.
- AI is integrated across numerous business functions in the industry, with machine learning models used to improve risk pricing, streamline claims processes, and detect fraud.
- 63% of insurers believe intelligent technologies will completely transform the industry.
- As insurers increase investments in AI, a responsible AI strategy is crucial.
- For example, State Farm, the leading auto and home insurer in the US, uses AI solutions to enhance decision-making, increase productivity, reduce costs, and improve employee and customer experiences, all guided by a ‘Good Neighbor’ philosophy.
- To responsibly manage AI, State Farm established a governance system, ensuring accountability for AI, and overseeing the development and management of AI solutions that benefit customers.
-
Responsible AI Governance at State Farm
- State Farm develops controls for AI systems in parallel with their AI solutions, with oversight and control applied throughout the solution’s lifecycle.
- The Chief Data and Analytics Officer holds primary executive accountability for responsible AI across the organization, leading collaboration and evolution of AI principles enterprise-wide.
- A central validation team, reporting to the Chief Data and Analytics Officer, oversees model validation and AI in software reviews, assessing AI models on aspects like training datasets, mathematical approaches, and business uses.
- A model risk governance committee, with members from various business areas, provides strategic direction to the validation team by reviewing and approving model risk management procedures and guidelines, and serves as a forum for executive collaboration, education, and discussion on model risk topics.
- The governance approach of State Farm aims to continually evolve AI control frameworks and integrate them at greater scale.
-
Governance in Practice at State Farm
- State Farm introduced the Dynamic Vehicle Assessment Model (DVAM) to predict “total loss” scenarios in car accident claims more efficiently, reducing the total loss process from as high as 15 days to as little as 30 minutes.
- The DVAM leverages data collected when a claim is filed, expanding vehicle inspection and settlement options. It predicts, with a stated level of confidence, whether a vehicle is a total loss or repairable, sometimes bypassing the need for a physical inspection (see the sketch after this list).
- This AI integration streamlines the claim settlement process, freeing up time for State Farm employees and agents to focus on enhancing customer experience.
- Development and deployment of DVAM required collaboration across several decision-making bodies within the organization, ensuring alignment with intended business outcomes.
- Business and AI development teams assessed the impacted KPIs, determined the baseline measurements, and monitored changes after the model’s launch.
- For AI governance, the business and validation teams worked together to evaluate the model, launching it in phases to allow for thorough assessment before full roll-out. The governance process was transparent, keeping all participants informed throughout.
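The module doesn't disclose DVAM's internals, but the pattern it describes (act automatically only at high confidence, otherwise keep a human in the loop) can be sketched roughly as follows. The thresholds, names, and claim IDs are hypothetical.

```python
# Illustrative human-in-the-loop routing, modeled loosely on the DVAM pattern:
# auto-settle only when the model's confidence is high, otherwise send the
# claim for a physical inspection. Thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    label: str        # "total_loss" or "repairable"
    confidence: float
    route: str        # "auto_settle" or "human_inspection"

def route_claim(claim_id: str, p_total_loss: float,
                high: float = 0.95, low: float = 0.05) -> ClaimDecision:
    """Route based on predicted probability that the vehicle is a total loss."""
    if p_total_loss >= high:
        return ClaimDecision(claim_id, "total_loss", p_total_loss, "auto_settle")
    if p_total_loss <= low:
        return ClaimDecision(claim_id, "repairable", 1 - p_total_loss, "auto_settle")
    # Ambiguous predictions keep a human in the loop, per the governance process.
    label = "total_loss" if p_total_loss >= 0.5 else "repairable"
    return ClaimDecision(claim_id, label, max(p_total_loss, 1 - p_total_loss),
                         "human_inspection")

print(route_claim("C-1001", 0.98))  # confident total loss -> settled in minutes
print(route_claim("C-1002", 0.60))  # uncertain -> physical inspection
```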
-
Key Lessons from State Farm’s AI Integration
- Interdisciplinary collaboration is crucial for successful AI: AI integration across an organization involves cross-functional collaboration. State Farm encourages partnerships among diverse groups with different skills and perspectives. Having business decision-makers work alongside developers and technical experts in designing and developing AI solutions can better achieve organizational objectives.
- AI controls should evolve with AI technology: As you adopt new technology, it’s vital to develop corresponding controls. Legacy governance processes might not adequately regulate advanced technology and can impede innovation. Therefore, innovating AI governance controls alongside AI solutions can accelerate the innovation process and yield better business results. In the DVAM case study, automated model monitoring techniques were leveraged.
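The case study doesn't specify which monitoring techniques State Farm uses; one common automated check is input-drift detection via the population stability index (PSI), comparing live feature distributions against the training baseline. A rough sketch, with the bin count and alert thresholds as assumptions:

```python
# Sketch of automated model monitoring via the population stability index (PSI):
# compare the live distribution of a model input against its training baseline.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((live% - base%) * ln(live% / base%)) over shared bins."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside training range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Small floor avoids log(0); a common practical adjustment.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(10_000, 2_000, 5_000)  # e.g. claimed damage ($)
live_feature = rng.normal(12_000, 2_500, 1_000)      # distribution has shifted

score = psi(training_feature, live_feature)
print(f"PSI = {score:.3f}")
# Rule of thumb (not a standard): < 0.1 stable, 0.1-0.25 watch, > 0.25 alert.
if score > 0.25:
    print("Drift alert: revalidate or retrain the model before continued use.")
```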
-
Evaluation of State Farm’s AI Strategy
- Industry Environment Perspective: Insurance companies aim to streamline business processes and reduce costs without compromising customer experience. The challenge lies in balancing AI advancements with responsible usage.
- Value Creation Perspective: State Farm uses responsible AI principles to establish a governance system, allowing for quicker, more informed decisions. This creates value by improving both customer and employee satisfaction.
- Organization & Execution Perspective: State Farm aligns their responsible AI strategy with their strategic business goals. They selected a fitting use case and established a governance system, leveraging existing data to bring a transformative AI solution to an established business process.
-
Conclusion
- State Farm considers AI governance vital to their AI innovation. Their responsible AI frameworks facilitate faster, more informed decisions, maintain customer trust, and enhance customer and employee experiences. Staying true to their mission to help people contributes to their long-term success.
Module Summary and Resources
- This module explores Microsoft’s approach to prioritizing responsible AI, which might serve as a useful reference for others. However, it acknowledges that unique beliefs and standards should shape each individual’s, company’s, or region’s journey towards responsible AI.
- As we progress towards responsible AI, our approaches should adapt to new innovations and lessons learned from our successes and failures.
- The mentioned processes, tools, and resources could serve as a starting point for organizations developing their own AI strategy.
- With the increasing use of AI across all sectors, it’s vital to maintain open dialogue among stakeholders. Early AI adopters play a significant role in promoting responsible use of AI and preparing society for its impacts.
-
Fairness
- Explore the intent, design, and potential impacts of the AI system to ensure its equitable functionality.
- Strive for diversity in the design team to reflect diverse backgrounds, experiences, and perspectives.
- Detect bias in datasets by scrutinizing their origins, organization, and representation (a representation check is sketched after this list).
- Identify bias in machine learning algorithms using transparency-enhancing tools and techniques.
- Ensure human oversight and involve domain experts, especially for AI-informed decisions affecting people.
- Follow and implement best practices, analytical techniques, and tools to detect, prevent, and mitigate bias in AI systems.
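A minimal sketch of the representation check referenced above: compare group shares in the training data against an assumed reference population. All figures are invented for illustration.

```python
# Sketch of a dataset representation audit: compare group shares in the
# training data against a reference population. All figures are illustrative.
import pandas as pd

training = pd.DataFrame({"gender": ["male"] * 700 + ["female"] * 300})
reference_share = {"male": 0.50, "female": 0.50}  # assumed target population

observed = training["gender"].value_counts(normalize=True)
for group, expected in reference_share.items():
    gap = observed.get(group, 0.0) - expected
    flag = "  <-- under-represented" if gap < -0.10 else ""
    print(f"{group:>7}: {observed.get(group, 0.0):.0%} of data "
          f"(expected {expected:.0%}, gap {gap:+.0%}){flag}")
```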
-
Reliability and Safety
- Assess your organization’s AI readiness using tools like Microsoft’s AI Ready Assessment.
- Establish procedures for auditing AI systems to check the quality and appropriateness of data and models.
- Provide detailed explanations of the AI system’s operation, including design specifics, training data details, and inferences generated.
- Design systems to handle unexpected circumstances, including accidental interactions or cyberattacks (see the test sketch after this list).
- Involve domain experts in AI design and implementation, especially when consequential decisions are involved.
- Conduct comprehensive testing of AI systems in both lab and real-world settings.
- Evaluate the need for human input in impactful decisions or critical situations.
- Create robust user feedback mechanisms to swiftly resolve performance issues.
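A rough sketch of the testing point above: assert that a scoring service degrades safely on malformed input, escalating to a human rather than guessing. The service, its fields, and the scoring rule are hypothetical stand-ins.

```python
# Sketch of a reliability test: the prediction service should degrade safely
# on malformed input rather than crash or emit a confident answer.
import math

def predict_risk(features: dict) -> dict:
    """Hypothetical scoring endpoint with defensive input handling."""
    value = features.get("claim_amount")
    if (not isinstance(value, (int, float)) or isinstance(value, bool)
            or math.isnan(float(value)) or value < 0):
        # Safe response: refuse to score and escalate instead of guessing.
        return {"status": "rejected", "action": "route_to_human"}
    return {"status": "ok", "score": min(1.0, value / 50_000)}

def test_handles_unexpected_inputs():
    for bad in [{"claim_amount": None}, {"claim_amount": float("nan")},
                {"claim_amount": -5}, {}]:
        assert predict_risk(bad)["status"] == "rejected"

def test_scores_valid_input():
    assert 0.0 <= predict_risk({"claim_amount": 12_000})["score"] <= 1.0

test_handles_unexpected_inputs()
test_scores_valid_input()
print("All safety tests passed.")
```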
-
Privacy and Security
- Adhere to relevant data protection, privacy, and transparency laws during AI development.
- Design AI systems to uphold personal data integrity, using it only when necessary and for stated purposes.
- Secure AI systems from threats by following secure development practices, limiting access based on roles, and safeguarding data shared with third parties.
- Design AI systems to allow customers control over data collection and usage.
- Ensure anonymity by de-identifying personal data in your AI system (a pseudonymization sketch follows this list).
- Regularly conduct privacy and security reviews of all AI systems.
- Implement industry best practices for tracking, accessing, and auditing usage of customer data.
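A minimal sketch of the de-identification point above: replace direct identifiers with salted hashes and drop free-text fields. Note that hashing alone is pseudonymization, one layer of de-identification rather than full anonymization; the field names and data are illustrative.

```python
# Sketch of pseudonymization for analytics datasets: replace direct
# identifiers with salted hashes and drop free-text fields. Hashes of small
# ID spaces can be reversed by brute force, so treat this as one layer of
# de-identification, not full anonymization.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # store securely; rotate per data release

def pseudonymize(identifier: str) -> str:
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:16]

record = {
    "customer_id": "CUST-00421",
    "email": "jane@example.com",
    "notes": "Called about hail damage on 3 May.",  # free text: drop entirely
    "claim_amount": 12000,
}

deidentified = {
    "customer_id": pseudonymize(record["customer_id"]),  # stable join key
    "claim_amount": record["claim_amount"],              # non-identifying field
}
print(deidentified)
```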
-
Inclusiveness
- Comply with laws on accessibility and inclusiveness such as the Americans with Disabilities Act and the Communications and Video Accessibility Act.
- Use resources like the Inclusive Design toolkit to identify and address potential barriers in product environments that could exclude people.
- Involve people with disabilities in testing your systems to ensure broadest possible audience usability.
- Adopt commonly used accessibility standards to improve system accessibility for all abilities.
-
Transparency
- Share important attributes of datasets to help developers understand their suitability for specific use cases.
- Enhance model intelligibility by using simpler models and generating clear explanations of model behavior (sketched after this list).
- Train employees on interpreting AI outputs and maintaining accountability for consequential decisions based on AI results.
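A small sketch of intelligibility by design, as mentioned above: fit a simple logistic regression whose weights read as an explanation that can be surfaced to the affected person. The feature names and data are invented.

```python
# Sketch of "intelligibility by design": prefer a simple model whose
# parameters read as an explanation. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([[55, 0.40, 2], [80, 0.20, 8], [30, 0.65, 1],
              [95, 0.15, 12], [42, 0.55, 3], [70, 0.30, 6]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = credit approved

model = LogisticRegression().fit(X, y)

# Each coefficient states how a feature pushes the decision, which can be
# surfaced to the affected person ("your debt ratio lowered the score").
for name, coef in zip(features, model.coef_[0]):
    direction = "raises" if coef > 0 else "lowers"
    print(f"{name:>15}: {direction} approval odds (weight {coef:+.2f})")
```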
-
Accountability
- Establish internal review boards for oversight and guidance on responsible AI development and deployment.
- Train employees to responsibly and ethically use and maintain AI solutions, and understand when to seek additional technical support.
- Involve expert humans in decisions about model execution, ensuring they can inspect, identify, and address challenges with model output and execution.
- Implement a clear accountability and governance system to handle rectifications or corrections if models behave unfairly or potentially harmfully.
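One way to make the last point concrete: keep an append-only audit trail of consequential model decisions so unfair outcomes can be traced back and rectified. A minimal sketch; the store, field names, and version tags are assumptions.

```python
# Sketch of an accountability trail: record every consequential model decision
# with enough context to audit and rectify it later. Fields are illustrative.
import json
import time
import uuid
from typing import Optional

AUDIT_LOG = "decisions.jsonl"  # in practice: append-only, access-controlled

def log_decision(model_version: str, inputs: dict, output: dict,
                 reviewer: Optional[str] = None) -> str:
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # ties the outcome to a validated model
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,      # None means fully automated
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["decision_id"]

decision_id = log_decision(
    model_version="dvam-2.3",  # hypothetical version tag
    inputs={"claim_id": "C-1002", "p_total_loss": 0.60},
    output={"route": "human_inspection"},
    reviewer="adjuster-117",
)
print(f"Logged decision {decision_id} for later review or rectification.")
```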
- Resources: downloadable PDFs of “Implications of responsible AI – Practical guide” and “Responsible AI – Identify guiding principles” are available to share with others.