Interesting idea in the tweeted article:
It would take far too long to program every speech thread required for normal human conversation, so machines will have to ask the right questions when faced with uncertainty, and learn from the human answers.
That sounds great. But what about morality? Can a machine learn right from wrong? Or decide whether to save a child or a bumblebee from a natural disaster? I'm not sure. Part of the answer, I think, depends on whether AI would have some kind of soul or higher consciousness that transcends its circuits. Before we say that it doesn't, it's probably best to just say "we don't know" and leave it there.
To me, it seems like our cars and computers have personalities of their own. Sure, I'm probably just projecting my own thoughts and feelings onto the machines… but… they are just organized energy… and so are we. So can we really be sure?
Something for future philosophers and, perhaps, social rights activists to ponder.