CAN-ASC-6.2: Accessible and Equitable Artificial Intelligence Systems
Scope:
The purpose of CAN-ASC-6.2 is to develop a standard that goes beyond mandatory minimum technical specifications and produces equity-based technical requirements.
There are common areas where people with disabilities may face barriers to accessibility in artificial intelligence systems. These include, but are not limited to:
1. Accessible Artificial Intelligence Systems
People with disabilities must be able to participate in the artificial intelligence economy. The standard will require the removal of barriers in various areas, including barriers to participating in, designing and developing, implementing, using, and improving artificial intelligence systems, as well as barriers to evaluating (or being evaluated by) such systems. This includes tools such as:
- Data visualization tools
- Programming and coding tools
- Marketed products and tools that include artificial intelligence processing
- Consumer feedback systems and auditing tools
2. Equitable Artificial Intelligence Systems
The standard will require the equitable treatment of people with disabilities by artificial intelligence systems. It will provide guidance on removing bias against people with disabilities throughout the stages of the artificial intelligence system deployment lifecycle, including bias found in the data, labelling, training, algorithms, deployment, and evaluation of artificial intelligence that leads to the inequitable treatment of people with disabilities.
Areas where the standard will provide guidance specific to people with disabilities include:
- Accuracy, including misinformation and disinformation
- Bias, including statistical bias
- Fairness in outcome and process
- Safety and security
- Privacy, surveillance, and protection from data misuse and abuse
- Transparency
- Accountability
- Individual agency, informed consent, and choice in the application of artificial intelligence
- Human control and decision-making, including the application of human judgement in decision-making
- Cumulative harms
3. Processes for Implementing Accessible and Equitable Artificial Intelligence Systems
The standard will provide guidance on organizational processes for implementing artificial intelligence systems that are inclusive of people with disabilities. Such processes include:
- Planning and justifying the use of artificial intelligence
- Assessing impacts
- Ensuring ethics oversight
- Designing, developing, buying and/or customizing artificial intelligence systems
- Training users and operators
- Providing access to alternative approaches
This also includes processes for the following mechanisms:
- Transparency, accountability, and consent
- Feedback, complaints, redress (remedies), and appeals
- Review, refinement, and termination
Project need:
Accessibility Standards Canada was created under the Accessible Canada Act. Its mandate is to prevent, identify, and remove barriers to accessibility for Canadians with disabilities. In pursuit of this mandate, Accessibility Standards Canada develops standards based on the needs of people with disabilities. These needs have been identified through consultations with Canadians with disabilities, including the Board of Directors appointed by the Governor in Council, whose members are primarily people with disabilities and who have approved the development of this standard.
In addition to these identified needs, Accessibility Standards Canada standards are developed following the principle of “nothing without us”. This means that Accessibility Standards Canada standards are developed with research led by people with disabilities or with lived experience, with the participation of people with disabilities on our technical committees, and with equity-based requirements that take into account the needs and perspectives of people with disabilities. It also means that the public review process for this standard will be accessible, allowing even more people with disabilities to be part of the standards development process.
People with disabilities experience the extremes of both the opportunities and the risks brought about by the deployment of artificial intelligence systems. People with disabilities face barriers and discrimination in all aspects of artificial intelligence. The tools that allow participation in the artificial intelligence industry are not accessible to many people with disabilities. Because people with disabilities are highly diverse, their data is often treated as outliers or as a trivial minority and may be misunderstood, overlooked, or deemed non-optimal by artificial intelligence systems. Existing artificial intelligence ethics protections fail to adequately consider the harms and barriers faced by people with disabilities.
The proposed standard is required to prevent harm and ensure accessible and equitable participation in artificial intelligence systems for people with disabilities. The proposed standard will supplement and enhance general guidance and directives that support ethical practices when implementing artificial intelligence systems.
Note: The information provided above was obtained by the Standards Council of Canada (SCC) and is provided as part of a centralized, transparent notification system for new standards development. The system allows SCC-accredited Standards Development Organizations (SDOs), and members of the public, to be informed of new work in Canadian standards development, and allows SCC-accredited SDOs to identify and resolve potential duplication of standards and effort.
Individual SDOs are responsible for the content and accuracy of the information presented here. The text is presented in the language in which it was provided to SCC.