Technology is not something we can hide from students.
- Zeeshan Anwar, New York City Public Schools[1]
When dealing with newer technologies, like generative AI, it is imperative to act quickly to ensure that any use occurs in a manner which not only protects users but also aligns with the core values and objectives of the respective local and national environments.
Whilst navigating new situations and demands, the various steps necessary to develop comprehensive regulation and implementation structures can become blurred, with actions occurring before fundamental discussions have taken place, which in turn can shape new actions. This becomes confusing both for those developing guidelines and for those waiting for their support.
A framework for comprehensive AI policy development in education
This AI Policy Development Framework for Education has broken down the decisions and questions necessary to develop a comprehensive regulatory structure into three distinct areas of action. Central to this development structure is the regular revision and iteration of each area with knowledge learned through the development of the other areas.
- Foundational Discussion: this outlines the core topics which must be clarified, covering societal and cultural values, objectives for teaching and learning environments and education ecosystems, the governance and organisational structures required, and key issues for prioritisation.
- Activation: Starting with formulating core principles from the key issues and values identified in the foundational discussion, this section continues with understanding and prioritising areas of risk before seeking to align with and leverage existing regulatory practices in order to streamline the process and provide clarity regarding outstanding issues.
- Implementation: This section outlines practical requirements for the sustainable implementation of newer technologies, starting with the development of resources to support all stakeholders within the ecosystem. This is followed by identifying organisational responsibility, the key areas and roles to cover it, and supportive implementation mechanisms. The topic of enforcement measures explores necessary accountability structures, while stakeholder engagement investigates both support for the wider community and international connections, which are so important with tools that can so rapidly cross borders and oceans.
In the following section, each of these areas of action will be individually explored with a checklist to support necessary discussions and capture required responses.
FOUNDATION
1. Values
Each nation, cultural group, and region is guided by a set of societal and cultural norms. These are strongly reflected in local education practice. Defining these core values in relation to newer technologies can ensure that any policies created both support and are guided by these values.
- Which are the core societal and cultural values that should guide this work?
- Are there other values that should be considered to support understanding?
- Are there secondary values that should be explored?
- Are there existing publications or guidelines outlining pedagogical and developmental values for your education environments that should be taken into consideration?
2. Objectives (benefits to be realised)
It is important to identify what bigger-picture goals you have for the use of new technologies and resources. These can also encompass the benefits of use that you wish to leverage.
- Which societal or cultural objectives could these new technologies help to achieve (e.g., inclusion, integration, equity)?
- Are there current higher-level issues within education settings which these technologies could alleviate (e.g., teacher shortages, high administrative burdens, professional development)?
- Are there current or forward-thinking political objectives to address with the use of these technologies?
3. Organisational structures
To ensure that implementation and enforcement mechanisms for regulation and guidelines can work, it is essential to develop organisational and governance structures which can align with existing national or regional structures and provide both an overview of responsibilities and contacts for all stakeholders.
- What are the key areas of responsibility and which organisations or roles will cover them?
- Do these roles sit within existing governance or regulation structures, or do new ones need to be developed?
- What other international, national, or regional governance structures need to be taken into consideration?
- What levels of oversight and governance should exist?
- What are the paths to recourse for all stakeholders?
- Are there existing roadmaps for prioritisation of issues that need to be aligned?
4. Key issues[2]
To understand which topics require special attention or need regulation mechanisms to be developed, we must first identify which topics are seen as important. These can also be divided into areas of priority (e.g., urgent, emerging, long-term).
- What are key areas that need definition and action, e.g.,
- Data privacy
- Bias & fairness
- Transparency
- Accountability
- Ethical use and development
- Accessibility
- Teacher training
- Cybersecurity
- Evaluation and validation
- Informed consent practices (particularly for students)
- Equity (digital equity)
- Diversity
- Interoperability of systems
- Using data of minors for training of systems
ACTIVATION
If activation begins before the foundational discussions have taken place, it can be difficult to finalise work on the activation topics, as important information will be missing, or the activation mechanisms may require reworking as decisions at the foundational level progress.
1. Principles[3]
Hundreds of documents determining guiding principles for the development, governance, and implementation of AI have been produced in recent years. Although few deal directly with education, the vast majority of these documents cover similar topics and scopes. AI principles, together with our objectives, can give context to how we view the additional value that AI can bring and the methods necessary to ensure this occurs in a safe and appropriate manner.
- From the foundational discussion values and key areas, are there principles we can develop to guide our practice and action?
- Are there existing principle frameworks we could reference or adapt to our needs?
- Are the principles formulated in such a way as to determine key areas of action?
2. Understanding risk[4]
In determining the best manner of ensuring AI Safety[5], it is necessary to assess key areas of risk, areas requiring prioritised attention, and topics pivotal to education environments, such as youth development, quality of life, and reputation and identity integrity, which may not be included in otherwise strongly technical risk assessments. If an activity has a direct impact on learning possibilities, for example, it is important to assess its associated risks.
- Are there certain risks where regulatory mechanisms need to be prioritised due to their severity or gravity?
- Are there risks which can be deemed non-negotiable and for which strict regulatory mechanisms must be immediately enforced?
- Are there examples of regulatory support for these issues in other sectors or countries?
- Have areas of human rights and child or youth development been taken into consideration (e.g., the effects on democratic participation, the integrity of personal identity, doing no harm to a child’s quality of life, their reputation, or psychological integrity)?
- Can risk profiles and corresponding actions be created to support decision-making at a local level (e.g., purchasing authorities, school leadership, teachers)?
3. Alignment with and leveraging of existing regulations
Leveraging existing legal frameworks, tools, and guidelines can reduce duplication and increase understanding of the topics, acceptance of the needs, and clarity around the actions to be taken.
- Which of the key issues are already (partially) governed by existing legal frameworks or guidelines?
- Which of these issues are currently not governed satisfactorily?
- For areas that are not satisfactorily covered, which governance or organisational structures will have the responsibility of developing these and is there an appropriate method for communicating feedback?
IMPLEMENTATION
1. Resources
Resources must be aligned with the key foundational discussions and activation ideas. These include:
- Develop guidelines
e.g., ethical guidelines for the use of AI, learning about and learning and teaching with AI, and practical guidelines providing a framework for localised interactions with generative AI (e.g., what student- and teacher-facing guidelines could look like[1])
- Professional development
Exploring both aspects of learning about AI and learning and teaching with AI.
- AI test spaces
Information about spaces allowing both educators and school leadership to experiment, as well as for developers to engage with appropriate data to ensure equitable systems.
- Public awareness campaigns to educate entire education communities
Helping to ensure that all members of the community feel empowered in their knowledge and in educators' and learners' use of AI technologies by providing clearly understandable information and learning resources.
- Promote equity
Providing decision-makers, school leadership teams, purchasing authorities, and AIEd technology developers with resources outlining the key elements of (digital) equity and equitable access.
- Understanding key issues (e.g., transparency, human agency)
Developing information for key stakeholder groups regarding the key issues which have been identified and the expectations within this education environment when dealing with these issues.
- Procurement support
Ensuring decision-makers and purchasing authorities understand the key issues they need to be aware of when purchasing. Additionally, providing dedicated support with text covering key functionalities aligned with the values, objectives, risks, issues, and principles discussed above.
2. Organisational Responsibility
- Establish any necessary regulatory or governance bodies and structures.
These could be local, regional, and national bodies and could also cover ensuring inclusion within other, broader governing bodies (e.g., ensuring education is included in the general discourse around AI regulation and policy).
- International collaboration: engage to harmonise AI regulations, ensuring global consistency.
AI, like many other digital tools, is not tied to one geography or segment, and any effective measurement and regulatory practice will need to be aware of the international movements directing the discourse, so that local practices can harmonise with international decisions or be very clear about the reasons for divergence and create a plan for compatibility.
- Long-term impact assessment.
The appropriate bodies need to be developed to run and manage assessments and outcomes, and engagements between multiple stakeholders must be made possible to ensure results can feed into practical recommendations.
3. Implementation Mechanisms
- Standards: develop accessibility standards, cyber security standards
It is necessary to identify which existing standards can be aligned with, and where there are gaps which need to be filled by new standards. Both standardisation committees and stakeholder groups should be engaged when determining new standards, and it is essential to look to the broader international or global landscape to ensure scalability and connectivity for solutions.
- Rigorous evaluation process for AI tools in education
Whether extending existing quality assurance mechanisms or developing stand-alone but interoperable evaluation frameworks, it is important to identify key areas of and criteria for evaluation practices, including processes of evaluation and possible approval or certification. Re-evaluation or re-assessment at regular intervals should be considered as part of this.
- Define requirements for obtaining informed consent
Applicable research regarding cognitive development and methods to ensure educator, guardian and learner agency in consent processes must be considered as requirements are defined.
- Promote equitable access
With a view to improving (digital) equity, it is essential to determine whether or not the use of any tools promotes equitable access and furthers digital equity. If not, it is necessary to examine what steps need to be taken to mitigate this.
- Establish mechanisms for continuous assessment of AI’s impact
Assessment practices, guidelines, and criteria should be developed to conduct long-term assessment of the impact of AI in education, for example on child development, identity, and quality of life.
- Periodic review
Stipulate a time frame and process for periodic review and assessment both of tools already in use and those considered for use. Ensure reporting mechanisms are in place to inform future review processes.
- Promote research and AI development support in education to foster innovation
Actively support, engage and promote research and development to identify, for example, key issues for long-term assessment as well as key areas for further support to foster innovation.
- Ensuring national and international interoperability
Aligning with existing standards and ensuring that developments that deviate from these are still able to offer interoperability. Having a plan across all levels of interoperability (legal, organisational, semantic/syntactic, and technical).
- AI Testbeds
Developing the spaces for (evidence-based) testing of AI tools both by educators and administration as well as by those developing the AI tools. These can be both virtual and physical environments as well as strategy or policy labs, which can test potential strategies and measure outcomes and impact.
- Change management methodologies
Aligning with and implementing change management strategies across the management of each level of responsibility to ensure flexibility and resilience when dealing with newer technologies.
- Creating education training data stores
Data stores that contain relevant education data encompassing relevant cultural, linguistic, and learning data types so that AI tools can be trained using appropriate data and work towards mitigating potential biases.
4. Enforcement mechanisms
- Enact laws and regulations that align with known standards
Identify existing standards which are related to or can cover specific areas of AI in Education use and align with these, identifying key gaps and developing mechanisms to cover these. Ensure that governing bodies are identified or created to address these specific needs.
- Mandate transparency in AI algorithms
Provision of transparency across data usage (both during use and in the training of tools), the goals of an algorithm, and the way it is intended to work.
- Bias mitigation: regular auditing of AI systems
Create systems which can regularly audit AI tools, specifically assessing for bias and guiding towards its mitigation.
- Define enforcement mechanisms and penalties for non-compliance
Understand the different types of enforcement mechanisms, which issues align with which type of enforcement, and, relatedly, the penalties for non-compliance.
- Develop a flexible regulatory framework that can adapt to evolving AI technologies
Considering the pace of development in AI technologies, it will be important for policy to adapt quickly and flexibly. This can be achieved, for example, by incorporating change management strategies and having processes in place to prioritise key issues and align with or develop missing regulatory requirements.
5. Stakeholder engagement
- Engagement with the public
This can include information in different formats, from handouts and multimedia explanatory campaigns to learning videos. It is important to ensure that the wider education community (including those supporting learners and educators) understands key issues and feels empowered in their discussions and decisions.
- International collaborations and knowledge exchange
It is advisable to develop methods of knowledge exchange with international and neighbouring ecosystems and ensure that information gathered can be processed and included in the work within your own environment.
- Agency of learners, educators, and administration
It is important to monitor whether AI tools for education are actively ensuring the agency of learners, educators, and administration by providing possibilities, for example, to override decisions or paths, to actively ensure the voice of users is still present in the choice and use of any tools, and that participation is encouraged.
- Promote plural opinions and expressions of ideas[2]
Ensuring that both the input into an AI tool and what becomes the product of working with an AI tool actively promotes and understands different perspectives and opinions.
[1] https://drive.google.com/file/d/1yP5YuEYpYPfwZ0hHpSVzvPav8lzGEv5V/view
[2] UNESCO Guidance for generative AI for education and research
[1] https://news.microsoft.com/source/features/digital-transformation/how-nyc-public-schools-invited-ai-into-its-classrooms
[2] See also, EdSAFE AI Alliance Policy Guidance, Education Services Australia Principles to Policy, for methods of prioritising and iterating on key issues. See UNESCO Guidance for generative AI for education and research, EU Commission for identification of education-related issues.
[3] See: Berkman Klein Center, Harvard, Principled Artificial Intelligence; EdSAFE AI Alliance Policy Guidance Framework; UNESCO Guidance for generative AI for education and research
[4] See: Montreal Declaration for responsible AI etc.
[5] IAMAI