In 2019, as the Department of Defense considered adopting AI ethics principles, the Defense Innovation Board held a series of meetings across the U.S. to gather input from experts and the public. At one such meeting in Silicon Valley, Stanford University professor Herb Lin said he was concerned about people trusting AI too readily and argued that any application of AI should include a confidence score indicating the algorithm's degree of certainty.
“AI systems should not only be the best possible. Sometimes they should say ‘I have no idea what I’m doing here, don’t trust me.’ That’s going to be really important,” he said.
Read the rest at VentureBeat