“‘The Godfather of AI’ Leaves Google and Warns of Danger Ahead,” so wrote The New York Times. The article begins:
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
A discussion is raging now between proponents who want to move "full speed ahead" and those who see significant dangers to human beings and advocate a pause in research. But this discussion is both too late, by perhaps 10-15 years, and fails to deal with a more profound question that should have been asked years ago.

In previous generations, when people understood that we live in a moral universe, engineers would ask two questions in their application of what science has learned: 1) Can we do it? 2) Ought we to do it? The first question dealt with a technical issue: did engineers have the ability to do something? The second question was a moral one: is it moral to do this? The biblical worldview is interested in both truth and morality. Simply because something can be done does not mean it should be done.

Some scientists and engineers are now questioning the wisdom of doing something that could well end history and life as we know it. Ideas do have consequences. Worldview matters! We need to return, personally and as a culture, to the biblical worldview, where the moral questions are of paramount importance.