If, say, a computer were used to diagnose a patient's symptoms and recommend treatment, and the result was flawed, could the computer be held responsible? If so, then it is hard to see why computers should not be recognised for good work as well.
Stephen Emmott and Stephen Muggleton are developing an “artificial scientist” that would be capable of combining inductive logic with probabilistic reasoning. Such a computer would be able to design experiments, collect the results and then integrate those results with theory. Indeed, it should be possible, the pair think, for the artificial scientist to build hypotheses directly from the data, spotting relationships that the humble graduate student or even his supervisor might miss.
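The flavour of that idea can be sketched in a few lines of code. What follows is only a toy illustration, not the pair's actual system: it enumerates simple candidate hypotheses about some invented experimental data and ranks them by how probable they make the observations, a bare-bones stand-in for the marriage of inductive reasoning with probability. The data, variable names and hypothesis grid are all made up for the example.

```python
# Toy sketch of hypothesis-ranking, NOT Emmott and Muggleton's system.
# All observations and names below are invented for illustration.

from itertools import product

# Mock experimental observations: (gene_active, protein_present) pairs.
observations = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def likelihood(p, q, data):
    """Probability of the data under the hypothesis: the protein appears
    with probability p when the gene is active, and q otherwise."""
    prob = 1.0
    for gene_active, protein_present in data:
        p_present = p if gene_active else q
        prob *= p_present if protein_present else (1.0 - p_present)
    return prob

# Candidate hypotheses drawn from a small grid of (p, q) values.
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
hypotheses = [(p, q) for p, q in product(grid, repeat=2)]

# Rank hypotheses by how well they explain the observations --
# the "spotting relationships in the data" step, in miniature.
ranked = sorted(hypotheses, key=lambda h: likelihood(*h, observations), reverse=True)
best_p, best_q = ranked[0]
print(f"Best hypothesis: P(protein | gene active) ~ {best_p}, "
      f"P(protein | gene inactive) ~ {best_q}")
```

A real artificial scientist would search a far richer space of logical hypotheses and would choose its next experiment to discriminate between the survivors, but the principle of letting the data nominate and rank the theories is the same.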
In the future, PhD students will have access to an automated thesis-defence module.