Can Microsoft’s approach to regulation restore trust in facial recognition tech?


Peter Griffin

From facial recognition to electronic voting, Microsoft has made proactive moves to be part of the solution rather than waiting for regulators to move in.

ANALYSIS: Brad Smith, Microsoft’s president and an experienced corporate lawyer who joined the company in 1993, is increasingly the man articulating the ethical and moral issues ‘Big Tech’ companies face in the 21st Century.

He canvassed a host of them in his book Tools and Weapons, which looked at everything from the raging cyberwarfare underway between the world’s superpowers to the need for clear rules and oversight to govern use of artificial intelligence.


Last year he related to me the inside story of helping Prime Minister Jacinda Ardern formulate her response to the livestreaming, over Facebook and other social media platforms, of the video made by the gunman who carried out the Christchurch mosque attacks.

Ardern would later go on to launch the Christchurch Call, with the support of Smith and other tech leaders, including Twitter founder Jack Dorsey, securing pledges from tech companies and governments to do more to tackle the spread of extremist material online.

Such efforts, which Smith has been given a wide remit to pursue at Microsoft, aren’t just public relations exercises. Smith was in the thick of Microsoft’s antitrust battles with the US government in the late nineties. He knows how disruptive large-scale regulatory action can be for Microsoft’s business, and it is his job to try to avoid it.

Early engagement in law-making

Ironically, that doesn’t mean mobilising large teams of lawyers to fend off government efforts to control Big Tech, but embracing regulation early, so as to have more of a say in its shape. What he is seeking is regulatory certainty.

“It has to be clear to companies what they’re supposed to do, and when and how they’re supposed to do it,” Smith told me last year, referring to under-regulated areas like artificial intelligence and facial recognition.

Smith’s virtual address for last week’s Inspire 2020 event continued on that theme, in a video that saw him travel from Puget Sound in Seattle to Olympia, the capital of Washington state, where Microsoft collaborated with lawmakers to pass legislation governing facial recognition technology.

The video is worth watching for insights into how Microsoft is attempting to get ahead of the most vexing issues surrounding the use of technology, from the sustainability of Microsoft’s power-hungry data centres to threats to the integrity of electronic voting systems in the US.

In March, Washington state passed into law what Smith called an “early and important model” for rules governing the use of facial recognition software, which Microsoft offers as part of the tools available through its Azure cloud computing platform.

“The government signed into law for the very first time, anywhere in the world, a law to provide safeguards around facial recognition to make sure that companies that offer it must make it available for testing against bias,” said Smith last week.

“To require that law enforcement must get a court order or a warrant before they engage in surveillance. To require that it can’t be used at all for surveillance of peaceful protestors. They are the types of things we are seeking to advance around the world to preserve trust in technology,” he added.

Strong rules around the use of facial recognition 

Washington is the first US state to introduce state-wide regulations around the use of facial recognition. It follows San Francisco’s move last year to ban the use of facial recognition in law enforcement. Democratic lawmakers have also introduced a bill that would prevent law enforcement agencies across the country from using facial recognition, in the wake of the scrutiny of policing that followed the death of George Floyd in May.

In May, IBM said it would cease development of facial recognition technology due to the ethical issues its use raises, and Amazon has paused police use of its own Rekognition system for at least a year.

Facial recognition clearly has many legitimate uses. But genuine fears about its use in law enforcement and concerns over bias and misidentification of people threaten to see the technology’s use widely curtailed.

Standing in the legislature building in Olympia, Smith said the bill was the culmination of “two years of work” on issues surrounding facial recognition. Microsoft clearly prefers this approach to ad hoc and differing legislation that could vary city by city. The question now is whether the provisions in the bill are strong enough to encourage other states to pursue a similar approach.

That may be a hard ask given the sensitivity around policing issues. But Microsoft responded to the Black Lives Matter protests with a slew of funding initiatives to address racial inequity. These included a US$50 million, five-year programme to progress “data and digital technology toward increased transparency and accountability in our justice system,” Microsoft chief executive Satya Nadella explained in a blog post.

“All this work will be backed by public policy advocacy that will increase access to data to identify racial disparities and improve policing. We’ll also use our technology and expertise to support evidence-based and unbiased diversion programs that direct people into treatment alternatives instead of incarceration.”

AI – the next battleground

Data would also be used to promote racial equity in “decisions made by prosecutors, including decisions about who to charge with a crime, the nature of the charge, plea offers, and sentencing recommendations”.

That has been a contentious issue where artificial intelligence systems have been used in the courts to inform sentencing decisions. A major ProPublica investigation in the US found that algorithmic systems used to predict the risk of reoffending were biased against black defendants.

Smith and Microsoft, therefore, have a bigger task ahead: securing rational and fair governance of the use of artificial intelligence in increasingly important decision-making, in both the public and private sectors.

It is an area on which Microsoft’s own staff are increasingly vocal. While Microsoft is working overtime to establish trust in the technologies that are so integral to its fast-growing cloud computing business, it also has to look internally to secure the trust and confidence of those who develop them.

“What is so noteworthy about activism in the tech sector is that we have employees standing up not for themselves, but for broader societal issues and values,” Smith told me last year.

“We don’t necessarily always agree that their answers are the right ones. But what we learned is that their questions are the important ones.”


Peter Griffin

Peter Griffin has been a journalist for over 20 years, covering the latest trends in technology and science for leading NZ media. He also founded the Science Media Centre and established Australasia's largest science blogging platform, Sciblogs.co.nz.

