An interesting article on how academic philosophy isn't engaging with AI, so techies have become "the 21st Century Philosophers."
"Interesting" to read about some of the latest developments, but wrong about philosophy. When I was in grad school this was definitely a topic, and I can't imagine the discussion has lessened since. The conversation usually took place in philosophy of mind and philosophy of language, and centered on reductionist views that assume the mind is like a computer program--an assumption underlying all AI. Philosophers who reject that claim--that the mind is like a computer program--thereby reject some of the underlying principles of AI. In fact, many philosophers believe that AI is impossible.
The famous Turing Test was proposed as a way of determining whether AI had been achieved. However, in 1980 John Searle presented the Chinese Room Argument, which demonstrated (conclusively, for many philosophers) that the Turing Test is inadequate.
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate.
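The thought experiment can be caricatured in a few lines of code. This is only an illustrative toy, not Searle's actual setup (his imagined rulebook is vastly more elaborate), and the phrase pairs below are invented for the example; but it makes his point concrete: the program only compares and copies uninterpreted symbol strings, and at no step does it consult the meaning of anything.

```python
# A toy sketch of the Chinese Room: replies are produced by purely
# syntactic rules (here, a simple lookup table of symbol strings).
# The entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return a reply by matching symbol strings against the rulebook.

    Like Searle in the room, this function never interprets the symbols;
    it only recognizes and copies uninterpreted strings.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."
```

To an observer slipping notes under the door, the replies can look fluent, yet nothing in the system understands a word of Chinese.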
But Searle goes on to argue that Strong AI--the human creation of an artificial mind--is impossible because you cannot program semantics.
Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no grasp of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information-processing systems is refuted. Instead, minds must result from biological processes; computers can at best simulate those processes.
Grasping Searle's argument was one of those moments in my education when I understood something and changed my mind. Until that point I had thought Strong AI possible; Searle convinced me it wasn't. So I continue to be amused when people get worked up over computers that beat humans at chess and such, because being great at computation is not remotely equivalent to being a mind.