Ethical design and use of automated decision systems

Standards Development Organisation:
CIO Strategy Council
Working Program:
Designation Number:
CAN-CIOSC 101
Standard Type:
National Standard of Canada - Domestic
Standard Development Activity:
Reaffirmation
ICS code(s):
03.100.02; 35.020; 35.240.01
Status:
Proceeding to development
SDO Comment Period Start Date:
SDO Comment Period End Date:
Posted On:

Scope:

This Standard specifies minimum requirements for protecting human values and incorporating ethics in the design and use of automated decision systems. It is limited to artificial intelligence (AI) that uses machine learning to make automated decisions. The Standard applies to all organizations, including public and private companies, government entities, and not-for-profit organizations. It provides a framework and process to help organizations address AI ethics principles, such as those described by the OECD:

- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards (for example, enabling human intervention where necessary) to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The requirements in this Standard are principles-based and recognize that an organization's governing practices may depend on its size; ownership structure; nature, scope and complexity of operations; strategy; and risk profile. Organizations are expected to take reasonable and responsible measures to adopt and implement the principles in this Standard. This Standard is intended to be used in conjunction with, and integrated into, an organization's existing compliance programs, including but not limited to privacy, cybersecurity, data governance, complaints and appeals, and legal programs.

Project need:

Automated decision systems driven by machine learning algorithms are “currently being used by public agencies [and the private sector], reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms.” The expanding influence of these algorithms creates an urgent need for a framework to ensure that decisions are accountable, transparent, and supportive of ethical practices.

Note: The information above was obtained by the Standards Council of Canada (SCC) and is published as part of a centralized, transparent notification system for new standards development. The system allows SCC-accredited Standards Development Organizations (SDOs) and members of the public to be informed of new work in Canadian standards development, and allows SCC-accredited SDOs to identify and resolve potential duplication of standards and effort.

Individual SDOs are responsible for the content and accuracy of the information presented here. The text is presented in the language in which it was provided to SCC.