RAI’s certification process aims to prevent AIs from becoming HALs

The most egregious public failures of AI over the past few years, including Microsoft’s Tay fiasco, the debates surrounding Northpointe’s Compas sentencing software, and Facebook’s own algorithms fuelling online hate, have exposed the technology’s skeevy underbelly and highlighted how far we still have to go before these systems can reliably and fairly interact with humans. Such incidents, of course, have done little to dampen the excitement and interest surrounding artificial intelligence and machine learning, and they most certainly haven’t slowed the technology’s march towards ubiquity.

It turns out that one of the main barriers to AI’s continued adoption has been the users themselves. We’ve evolved past the dial-up dweebs of the baud rate age; an entire generation has now reached maturity without ever knowing the horrors of the offline world.

As a result, there has been a fundamental shift in how people view the value of their personal data and the obligations of the corporate community to protect it. Just look at the overwhelmingly favourable response to Apple’s recent iOS 14.5 update, which gives iPhone users a previously unheard-of degree of control over how and by whom their app data is used.

Now, the Responsible Artificial Intelligence Institute (RAI), a nonprofit organisation building governance tools to usher in a new generation of reliable, secure, and responsible AIs, intends to offer a more standardised way of confirming that our future HALs won’t kill the entire crew. Put simply, it wants to create “the first independent, authorised certification programme of its type in the world.” Think of the LEED green building certification programme used in the construction industry, but for AI.

When it comes to potential negative actions carried out by AIs, “we’ve only seen the tip of the iceberg,” Mark Rolston, founder and CCO of argodesign, told Engadget. “[AI is] now really ingratiating itself into very mundane parts of how businesses function and how people go about their daily lives. They will want to know that they can trust it as they learn more and more about the AI powering it. I believe the problem will remain important for some time.”

Dr. Manoj Saxena, University of Texas professor on ethical AI design, RAI chairman, and a man widely regarded as the “father” of IBM Watson, began work on this certification programme nearly five years ago, around the same time RAI itself was established. But his initial inspiration dates back even further.

He told Engadget, “I started discovering all these challenges about developing confidence in automated decisioning systems, including AI, when I was asked by the IBM board to commercialise Watson; I’m talking about 10 years ago now. The most crucial question people used to ask me when we were attempting to commercialise it was, ‘How can I trust this system?’”

The core purpose of RAI’s work is to answer that question. According to Saxena, AI now guides our interactions with the various facets of the modern world much as Google Maps guides us from point A to point B. But instead of helping us navigate the streets, AI is helping us make decisions about our finances, our healthcare, who to Netflix and chill with, and what to watch on Netflix before the aforementioned chilling. AI is being used to help increase engagement and drive choices, he said, and “all of these are becoming weaved in by AI. We concluded there are two significant issues.”

First, we have absolutely no idea what is going on inside them, a problem that has bedevilled AI since its earliest iterations. These are opaque, decision-tree-running black boxes that arrive at conclusions neither the consumers of the AI nor its creators can fully explain. When attempting to win over a sceptical public, this lack of transparency is not a good look. “We figured that giving transparency and trust to AI and automated decisioning models is going to be an extremely critical skill, much as it was introducing security to the web,” said Saxena.
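To make the black-box problem concrete, here is a minimal sketch of one common transparency probe: permutation feature importance, which treats a trained model as a black box and measures how much each input feature sways its predictions. Everything in it (the synthetic dataset, the random-forest model) is a stand-in chosen for illustration; this is one generic technique, not RAI’s actual scan.

```python
# A sketch of black-box interpretability via permutation importance.
# Assumes scikit-learn; the data and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 1,000 samples, 6 features.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any opaque model works here; a random forest is just an example.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops:
# big drops flag the features the black box actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Probes like this don’t open the box, but they at least show which inputs are driving a decision, which is the kind of transparency Saxena is describing.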

The second problem is how to devise a fair and impartial solution to the first. We’ve already seen what happens when society leaves powerful monopolies like Facebook and Google to govern themselves. We witnessed the same shenanigans during the Desktop Wars of the 1990s, when Microsoft swore vehemently that it would self-regulate and play fairly; indeed, the Pacific Telegraph Act of 1860 was passed precisely because the telecoms of the day couldn’t be relied upon to treat their customers fairly without government oversight. This is not a new problem, but RAI believes its certification scheme offers a contemporary approach.

Certifications are awarded in four levels, basic, silver, gold, and platinum (sorry, no bronze), based on the AI system’s scores along the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered through a questionnaire and a scan of the AI system. Developers must score at least 60 points for basic certification, 70 for silver, 80 for gold, and 90 or more for platinum.
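For illustration only, here is a minimal sketch of how a tiered scheme like this might map scores to levels. The principle names and the 60/70/80/90 thresholds come from the article; the 0-to-100 scale, the simple averaging, and the function itself are assumptions, not RAI’s actual methodology.

```python
# Hypothetical sketch of tiered certification scoring.
# Thresholds are from the article; everything else is assumed.

# The five OECD principles of Responsible AI, as listed above.
PRINCIPLES = [
    "interpretability/explainability",
    "bias/fairness",
    "accountability",
    "robustness",            # against unwanted hacking or manipulation
    "data quality/privacy",
]

def certification_level(scores: dict) -> str:
    """Map per-principle scores (assumed 0-100 each) to a certification tier."""
    total = sum(scores[p] for p in PRINCIPLES) / len(PRINCIPLES)  # assumed: simple average
    if total >= 90:
        return "platinum"
    if total >= 80:
        return "gold"
    if total >= 70:
        return "silver"
    if total >= 60:
        return "basic"
    return "not certified"

# Example: a system strong on privacy but weak on explainability.
example = {
    "interpretability/explainability": 55,
    "bias/fairness": 75,
    "accountability": 80,
    "robustness": 85,
    "data quality/privacy": 90,
}
print(certification_level(example))  # average 77 -> "silver"
```

Note how a single weak principle can drag an otherwise strong system down a tier, which is presumably the point of scoring all five dimensions rather than any one of them.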

Rolston points out that the certification process will rely heavily on design analysis. Any company trying to determine whether its AI can be trusted must first understand how that AI is being built into its broader business, he said.

“And that necessitates a degree of design analysis, both on the technological front and in terms of how they are interacting with their users, which is the province of design,” he continued.

Although the two are keeping quiet about specifics while the programme is still in beta (until November 15th, at least), RAI anticipates finding (and in some cases has already found) a number of willing organisations from government, academia, enterprise business, and technology suppliers for its services. Saxena hopes that RAI will one day grow into a globally recognised certification programme for AI, much like LEED has for green buildings.

He contends that by removing much of the uncertainty and liability exposure that today’s developers, and their hard-pressed compliance officers, currently face, certification would hasten the development of future technologies while fostering consumer confidence in the companies that deploy them.

“We use IEEE standards, we examine new ISO initiatives, we look at leading indicators from the European Union, such as GDPR, and now this just-released algorithmic rule,” said Saxena. “We consider ourselves to be the ‘do tank’ that can put those ideas into practice.”
