

Into this challenge comes AI Ethics and AI Law. AI Ethics and AI Law are struggling mightily with trying to figure out what it will take to make AI trustworthy. Some suggest that there is a formula or a set of ironclad laws that will get AI into the trustworthy heavens. Others indicate that it will take hard work and consistent and unrelenting adherence to AI Ethics and AI Law principles to get the vaunted trust of society. The contemporary enigma about trust in AI is not especially new per se. You can easily go back to the late 1990s and trace the emergence of a sought-for desire for “trusted computing” from those days.

By crafting AI systems in a manner that is perceived to be trustworthy, there is a solid chance that people will accept AI and adopt AI uses. One qualm already nagging at this trustworthy AI consideration is that we might already be in a public trust deficit when it comes to AI. You could say that the AI we’ve already seen has dug a hole and been tossing asunder trust in massive quantities. Thus, rather than starting at some sufficient base of trustworthiness, AI is going to have to astoundingly climb out of that deficit, clawing for each desired ounce of added trust that will be needed to convince people that AI is in fact trustworthy.

The belief by many within AI is that the developers of AI systems can garner trust in AI by appropriately devising AI that is trustworthy. The essence is that you cannot hope to gain trust if AI isn’t seemingly trustworthy at the get-go.
