Last month I had the opportunity to deliver a keynote presentation at the AI Malta Summit (AIMS 2018). The presentation addressed how we can train AI to be ethical and unbiased.
Topics covered included data bias in AI, with recent case studies where such bias had real-world repercussions, as well as the very interesting subject of adversarial attacks on AI systems and the contribution of Generative Adversarial Networks (GANs) to this area.
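To give a flavour of what an adversarial attack looks like, here is a minimal sketch in the spirit of the fast gradient sign method, applied to a toy linear classifier. The model, weights, and numbers are all hypothetical and chosen for illustration; they are not from the talk.

```python
import numpy as np

# Toy linear "classifier": score = w . x; predict positive if score > 0.
w = np.array([0.5, -1.0, 2.0])   # hypothetical fixed weights
x = np.array([1.0, 1.0, 1.0])    # clean input; score = 1.5 -> positive

# FGSM-style perturbation: step each input feature against the sign of the
# gradient of the score w.r.t. the input (for a linear model that gradient
# is simply w). A small, structured nudge flips the prediction.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)

print(w @ x)      # clean score is positive
print(w @ x_adv)  # adversarial score is pushed negative
```

The striking point, which carries over to deep networks, is that the perturbation is tiny per feature yet reliably flips the decision because it is aligned with the model's gradient rather than being random noise.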
On the ethics side, the talk explored different ways in which AI systems can be trained to be ethical.
Of particular interest to AI ethics are Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), the same techniques behind recent AI breakthroughs such as mastering the game of Go.
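The core RL loop can be sketched with tabular Q-learning on a tiny toy environment. Everything below (the three-state chain, the reward of 1 for reaching the goal, the hyperparameters) is a hypothetical illustration, not an example from the talk.

```python
import numpy as np

# Tabular Q-learning on a 3-state chain: states 0,1,2; actions 0 (left), 1 (right).
# Reaching state 2 yields reward 1 and ends the episode (hypothetical toy task).
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic transition on the chain; returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(2, s + 1)
    return s2, (1.0 if s2 == 2 else 0.0), s2 == 2

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

# The learned greedy policy should move right in every non-terminal state.
print([int(np.argmax(Q[s])) for s in range(2)])  # → [1, 1]
```

IRL runs this in reverse: instead of learning behaviour from a given reward, it infers the reward function from observed behaviour, which is what makes it attractive for learning human values by demonstration.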
Finally, it is worth remembering that the current state of the art in AI rests mostly on learning via data association and inductive reasoning (the first rung on Judea Pearl's "ladder of causation"). Higher levels of reasoning (higher rungs on this metaphorical ladder), including reasoning via counterfactuals, are currently beyond the reach of modern AI. Achieving true artificial intelligence, especially of the benevolent kind, will surely require such advanced reasoning. Hopefully, if and when that level is reached, the values of AI systems will be aligned with the value systems of humankind, for the benefit of both.
The slides of this presentation are available on SlideShare.
Comments are welcome.