The New Zealand government’s AI partnership with the World Economic Forum
The New Zealand government is joining a cohort of leaders in regulating AI.
Governments are becoming aware of the transformational impact of artificial intelligence, and its role in the Fourth Industrial Revolution, but there is also acceptance that the regulations needed to confidently manage the risks associated with AI are underdeveloped.
The debate around what constitutes ethical AI is gathering pace, so the New Zealand government is piloting an agile governance project in conjunction with the World Economic Forum to rethink AI regulations, and how AI might impact the lives of citizens.
A white paper released in June called for participation from organisations that want to collaborate on the topic, identifying three actionable areas: fostering a national conversation to develop a social licence; strengthening regulatory capabilities and institutional design; and risk/benefit assessment of AI systems.
It’s not too late to get involved. Here’s a breakdown of the project in its three phases.
The first agreed phase of the project is to foster a national conversation – called developing a social licence. Acceptance of the value of AI depends on community engagement that reassures citizens that AI is overseen by a truthful, collaborative and independent government presence, and that companies are trustworthy enough to use AI ethically in areas that could involve risk. Think Canada’s Algorithmic Impact Assessment, which asks organisations a scaled set of questions about the ongoing sustainability of their systems, including impacts on the rights, health or well-being of individuals and communities, as well as on economic interests.
The WEF recommends that citizens are given the tools and platforms they need to hold national conversations and build consensus on AI issues, particularly citizens who have had less voice in policy decisions. The risk is that, without conversations that establish the proven value of AI to the community – producing case studies, toolkits and key success factors – there will be distrust and resentment.
These conversations need to be global too, as AI solutions must be co-designed and interoperable between groups and countries.
Some examples of current good practice in social licensing include the Ministry of Social Development’s Privacy, Human Rights and Ethics framework, which informs decisions about social welfare initiatives, and the Social Wellbeing Agency, the Government’s social science hub, which is grounded in co-designed ethical principles.
The second phase is to strengthen regulatory capabilities and institutional design. The WEF acknowledges that the regulations around AI are too complex to be handled solely by the state or industry. The point of regulation, according to the WEF, is to allow benefits to be distributed equitably, while also providing certainty and managing risks. Regulation should also play a growing role in innovations designed to improve human well-being, and environmental and business sustainability.
The WEF recommends developing a centre of excellence for AI as a multidisciplinary and collaborative approach, combining viewpoints from industry and society, as well as government and academia. It’s key that this central hub for AI “be innovative and fluid, changing out staff depending on the issues being looked at”. Independence from both government and industry would be critical. It should draw on the insights of all interested stakeholders, and have sector or domain expertise on hand when needed, but not be limited by particular viewpoints.
As a starting point, the WEF recommends mapping the current AI regulations and AI usage across government, and identifying where a centre of excellence could add value. Its many potential value-added areas include developing regulatory sandboxes, identifying risks and solutions in algorithms, and identifying key skills and competencies required for AI positions.
The WEF also recommends a similar international body, because the development and use of AI is typically led by global technology companies. A national regulatory body will be limited in impact, while an international regulatory body could build and inform unified use and development of AI.
The New Zealand Law Foundation’s Government Use of Artificial Intelligence in New Zealand report is in line with this project. It recommends an independent oversight agency to collaboratively decide on algorithm introduction or use, track government uses of predictive algorithms, receive assessments of algorithm use from individual agencies, and oversee use of “self-checking” frameworks.
The third element or phase is increased oversight of AI systems by government. The WEF acknowledges that without effective oversight, AI could be harmful, erode privacy and security, and lead to human rights abuses. It recommends strong internal regulatory powers as an essential step towards trustworthy implementation of AI, but also notes the need to operationalise the ethical use of AI and ensure it responds to the needs of government operations and citizens.
The WEF has recommended enacting risk-benefit assessment frameworks for AI system use in government, as government staff become more skilled at effective and ethical ways to use AI. Possible outcomes of these frameworks include: first, enabling governments to better document and inform citizens about how AI systems impact their lives; second, ensuring effective processes and criteria are in place to identify and mitigate AI risks; and third, providing more opportunity for meaningful ongoing reviews of AI systems. An effective risk-benefit assessment framework should also open up space for citizens to challenge decisions made by a specific AI system.
Benefits include: encouraging teams to consider risks and benefits, as well as the assumptions that need to be in place to deliver the perceived value; excluding harm and ensuring appropriate risk mitigation strategies are in place; and building up a powerful repository of AI projects – both failed and effective – that could contribute to organisational understanding of, and capability for, the responsible use of personal information.
Who else is engaged in regulating AI?
New Zealand is a part of a global movement towards discussing and regulating AI, with several other international examples:
- The UK’s Office for AI includes best practice recommendations for government use of AI, and for how to build ethical AI skills, data, investment and leadership into organisations.
- AI Singapore combines national research and AI companies to enhance the country’s capability to build up its digital economy and use AI to effectively address societal and industry challenges.
- India’s Centre of Excellence in AI facilitates innovation by providing a platform for testing solutions and collaborating across government departments.
- Canada’s National AI Strategy has a strong presence in universities and national research.
- The Malta Digital Innovation Authority is a regulatory authority that sets and enforces standards and offers protections for digital technology users.
The essential takeaways
One of the key challenges underlying this AI regulation project is that stakeholders have differing opinions of what is required and what will work.
To meet the twin challenges of fast-paced innovation and converging technologies, governments must become more agile and collaborative in their approaches to regulation. This may require a shift away from traditional hard laws – such as specific statutory rules, fines and subsidies – and toward soft laws – such as awareness and education building, partnerships, engagement and consultation.
New Zealand’s partnership with the WEF is currently in the ‘testing’ phase: the project is piloting new approaches and tools for AI regulation, capturing lessons, and sharing findings. The next phase will include compiling tools for building a social licence. If the timeline unfolds as planned, the project will enter the ‘scaling’ phase next year, encouraging broad adoption of the frameworks. Whether you are a citizen or a business, it’s not too late to get involved.
Artificial Intelligence and Machine Learning are terms from computer science.

Artificial Intelligence: the term comprises two words, “artificial” and “intelligence”. Artificial refers to something made by humans rather than occurring naturally, and intelligence means the ability to understand or think. A common misconception is that Artificial Intelligence is a system; it is not a system, but is implemented in systems. There are many definitions of AI; one is “the study of how to train computers so that they can do things which, at present, humans do better.” It is an intelligence through which we aim to give machines the capabilities that humans possess.

Machine Learning: Machine Learning is learning in which a machine can learn on its own without being explicitly programmed. It is an application of AI that gives systems the ability to automatically learn and improve from experience. Here a program can be generated by integrating the inputs and outputs of that program. One simple definition of Machine Learning is: “A program is said to learn from experience E with respect to some class of tasks T and a performance measure P if its performance at the tasks in T, as measured by P, improves with experience E.”