AI is all the rage today, and DARPA is investing in explainable AI. Why?
I believe the key for us humans is to be able to have recourse to another human who, we viscerally feel, may understand our situation and have empathy, based on shared experience and a shared understanding of the world and of how we - mortal, fallible humans - interact with it.
So no matter how sophisticated the AI, will we ever trust it to understand us so well that we stop wishing for recourse to the judgement of another human, one with the power to overrule the "Computer says NO"?
But how would a human "appellate judge" evaluate an AI's decisions if they are a mystery concealed within a deep neural network, one acting on more data than a human can readily process?
Hence the need for explainable AI.
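To make that concrete, here is a minimal sketch of one common explainability idea: perturbation-based feature attribution for a black-box model. Everything in it - the toy "loan approval" network, its random weights, the feature names - is hypothetical, invented purely for illustration; real techniques such as LIME or SHAP are more principled, but the intuition is the same: ask how much each input feature moved the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the opaque model: a tiny two-layer network with random
# weights. It is entirely hypothetical; it exists only so the attribution
# loop below has a black box to interrogate.
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), rng.normal()

# Hypothetical feature names for a toy loan-approval scenario.
FEATURES = ["income", "debt", "years_employed", "late_payments"]

def black_box(x: np.ndarray) -> float:
    """The inscrutable 'Computer says NO' oracle: approval probability."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def attribute(x: np.ndarray, baseline: np.ndarray) -> dict:
    """Occlusion-style attribution: replace each feature with a neutral
    baseline value and record how much the model's output moves."""
    base_score = black_box(x)
    deltas = {}
    for i, name in enumerate(FEATURES):
        perturbed = x.copy()
        perturbed[i] = baseline[i]
        deltas[name] = base_score - black_box(perturbed)
    return deltas

applicant = np.array([0.9, -1.2, 0.4, 2.0])  # standardized toy features
baseline = np.zeros(4)                       # the "average applicant"

print(f"approval score: {black_box(applicant):.3f}")
for name, delta in sorted(attribute(applicant, baseline).items(),
                          key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {delta:+.3f}")
```

Even an explanation this crude gives the human "appellate judge" something to argue with: a ranked list of the factors that pushed the computer toward NO.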
I will leave it as an exercise for the reader to infer what types of use cases DARPA might be interested in - cases where robotic judgement might need to be subject to human appeal.