Friday Matinee Special: Cognitive Computing and Nuclear Technology
Fourth in a series of videos recorded at the ANS Annual Meeting, San Francisco, June 2017
Some readers (or historians) might recall the story of the first flight of the Lunar Module (LM), the 1960s "space Uber" meant to transport people from lunar orbit down to the Moon's surface. During that flight the LM was unmanned except for the onboard presence of a very primitive computer programmed with the mission profile. Because of certain delays and variances in flight events, the computer quite erroneously halted a critical engine burn which, had it been under human control, could and should have continued. But, as the official NASA film documentary on the mission put it, when faced with an uncertain situation "the computer made the only decision it could - and it was the wrong one." From that point, flight director Gene Kranz and his team improvised and executed a complicated, "winged" flight program that proved out the LM fully while the computer sat and did nothing.
Compared with those of the 1960s, today's computers are capable not only of reasoning but of conversing - as we can see in the spectacular development of IBM's Watson.
At the 2017 ANS Annual Meeting Opening Plenary, where the focus was on innovation, Donnie Davis (Sales Leader, Watson IoT Systems Engineering Division, IBM) gave a remarkable presentation in which the real capabilities of a Watson-derived system were explained and explored. Starting with a film (which, for matinee viewers here, results in a "film within a film" experience right at the opening of the presentation), Davis moved on to explain how Watson learns, and how quickly it does so. He also explained that the machine is deliberately intended not just to learn but to use reason in making suggestions or decisions. Truly, the machine can think in a very real way - unlike the hard-wired, hard-headed computers of decades past.
After watching the video, one might easily suppose that Watson or a successor could be given a wholly integrated plant operations role - a role in which the machine has the watch, and human observers simply wait for something to drift too far out of spec. That may become a reality well down the road, but it's unlikely soon. In a regulatory universe where even remote load dispatching (that is, remote control of a nuclear plant's actual load, or power output) is unacceptable - it was installed at a number of early plants but later removed - it's nearly impossible to imagine the NRC allowing a computer primary control of a nuclear power plant, no matter how smart it is or how well it interviews for the job.
Of course, one of the amazing things described in the IBM presentation is just how fast Watson learns - and one wonders whether every single "lesson learned" from previous plant events, incidents, and accidents might shape a Watson response to a plant transient or casualty. The answer is straightforward in outline: the machine must first know that the prior event happened, then know its details, and finally be able to apply those details to its own plant.
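To make that outline concrete, here is a minimal sketch in Python of the kind of case-based retrieval such a system would need: given the symptoms of a developing transient, find the prior industry event whose recorded symptoms match most closely. The event records, sensor names, and overlap scoring below are entirely hypothetical illustrations, not any real Watson API or actual operating-experience data.

```python
# Hypothetical "lessons learned" retrieval: match current plant symptoms
# against a library of prior-event records. A real cognitive system would
# use learned models, not a hand-written overlap score.

# A tiny library of prior events, each tagged with the symptoms observed
# and the lesson the industry drew from it (all invented for illustration).
EVENT_LIBRARY = [
    {
        "event": "Stuck-open relief valve (hypothetical record)",
        "symptoms": {"pressurizer_level_rising", "rcs_pressure_falling",
                     "relief_line_temp_high"},
        "lesson": "Verify relief valve position directly; do not infer it.",
    },
    {
        "event": "Loss of feedwater transient (hypothetical record)",
        "symptoms": {"steam_gen_level_falling", "rcs_temp_rising"},
        "lesson": "Confirm auxiliary feedwater actuation early.",
    },
]

def match_prior_events(current_symptoms, library, min_score=0.5):
    """Rank prior events by how well their recorded symptoms overlap
    the current plant state (a simple Jaccard-style score)."""
    matches = []
    for record in library:
        shared = len(record["symptoms"] & current_symptoms)
        score = shared / len(record["symptoms"] | current_symptoms)
        if score >= min_score:
            matches.append((score, record))
    return sorted(matches, key=lambda m: m[0], reverse=True)

# Example: symptoms observed during a developing transient.
observed = {"pressurizer_level_rising", "rcs_pressure_falling",
            "relief_line_temp_high"}
for score, record in match_prior_events(observed, EVENT_LIBRARY):
    print(f"{score:.2f}  {record['event']}: {record['lesson']}")
```

Even this toy shows the three requirements in miniature: the prior event must be in the library at all, its details must be encoded in a usable form, and there must be some mapping from those details onto the current plant's state.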
In a sudden and serious event, perhaps the most important worry related to computer operation would be maintaining power both for the computer and for plant operations; a complete station blackout (SBO) would have to be met by attempts to restore power to plant equipment and to the computer alike, or else it would be left to the operators to take over "as in the old days."
Of course, computers have been very important in nuclear plant operation for many years, but it's quite another thing to imagine the computer operating the plant while the humans simply stand by. Still, the possibilities are intriguing; for example, could computer control be one of the things required for complete integration of varied generating sources and rapid load ramping? Would a move to allow computer control in these cases accelerate that trend, or halt it through regulatory gridlock over the idea?
In a different vein, could Watson or some successor help with the engineering, and then the construction, of nuclear power plants? With the scheduling of labor, the arrival of parts, the planning of inspections? Could the integrated construction plan live in the "consciousness" of a cognitively capable machine?
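As a rough illustration of what "holding the integrated construction plan" might mean computationally, here is a minimal sketch that computes earliest start dates from task dependencies - the basic critical-path arithmetic a cognitive planner would redo continuously as parts arrive late or inspections slip. The tasks, durations, and dependencies are invented for illustration.

```python
# Hypothetical construction-schedule sketch: tasks, durations (days), and
# prerequisites are invented. Computes each task's earliest start by
# processing tasks in dependency order (classic critical-path logic).

TASKS = {
    # task: (duration_days, prerequisites)
    "pour_basemat":         (30, []),
    "set_containment_ring": (45, ["pour_basemat"]),
    "deliver_rpv":          (0,  []),  # long-lead part arrival
    "install_rpv":          (20, ["set_containment_ring", "deliver_rpv"]),
    "weld_inspection":      (10, ["install_rpv"]),
}

def earliest_starts(tasks):
    """Return {task: earliest_start_day}. Assumes the dependency
    graph is acyclic."""
    start = {}
    remaining = dict(tasks)
    while remaining:
        for name, (duration, prereqs) in list(remaining.items()):
            if all(p in start for p in prereqs):
                # Earliest start is when the last prerequisite finishes.
                start[name] = max(
                    (start[p] + tasks[p][0] for p in prereqs), default=0
                )
                del remaining[name]
    return start

for task, day in sorted(earliest_starts(TASKS).items(), key=lambda t: t[1]):
    print(f"day {day:3d}: {task}")
```

A real plant schedule would have many thousands of such tasks and would be replanned constantly; the promise of a cognitive machine is that it could also reason about why a task slipped and what to reshuffle, not merely recompute the arithmetic.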
No matter what the outcome may be, it's certain that cognitive computing is only going to expand - and it seems clear that nuclear technology will take advantage of it. Just how that happens is still too far in the future to be seen clearly, but it would be wise to start thinking about the possibilities sooner rather than later.