Todd Brun's statement sums up the problem a future AI would face in monitoring the changes it makes to its own past (along with humanity itself). Although, he adds, it's hard to know in advance what to measure.
The moral questions behind the show Travelers are well-worn: can you trust a machine to save the world? The show raises this problem right away: the first traveler chooses not to carry out the mission, deciding instead to go to war against the machine. So now whom do you trust? This first traveler clearly represents a bug in the programming. Can the AI correct itself by being reprogrammed? That's what the good guys (whoever they are) end up doing, maybe.