But wouldn't the designers want the weapon to be sentient, so that the robot/weapon could analyze and adapt to the situation without outside help? And if it were left alone long enough and started to ponder its place in the universe, couldn't that lead to it going renegade?
The ability to operate without outside input, and sentience, are two very different things. You don't actually need to be self-aware to analyze and adapt to the situation at hand- it's quite plausible that any robot with enough intelligence (which isn't the same as sentience/sapience, BTW) to be autonomous would be programmed to analyze/adapt to any unexpected situation it might come up against- *within the subset of situations for which the robot is designed*. That last bit is italicized because it's very important- a combat robot isn't going to know how to react to possible threats starting to breakdance- but if said threats are shooting at it, or at something the robot is assigned to defend, it'll know how to react.
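To make the distinction concrete, here's a minimal sketch (all names and rules invented for illustration) of how "adapt within the designed subset" can be nothing more than rule-based dispatch- no self-awareness anywhere in sight:

```python
# Hypothetical sketch: an autonomous combat robot "adapting" via plain
# rule-based logic. No sentience required- just a mapping from
# observations onto the subset of situations it was designed for.

def react(observation: dict) -> str:
    """Return a response for situations the robot was designed for;
    anything outside that subset falls through to a default."""
    if observation.get("firing_at_robot") or observation.get("firing_at_asset"):
        return "engage"      # designed-for situation: return fire
    if observation.get("approaching_asset"):
        return "warn"        # designed-for situation: deterrence
    return "ignore"          # breakdancing threats land here

print(react({"firing_at_asset": True}))  # engage
print(react({"breakdancing": True}))     # ignore
```

The point being: everything "unexpected" that the designers anticipated gets a branch, and everything else hits the default- the robot never needs to understand *why*.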
So it's highly unlikely that a robotic system intended as a weapon would have the ability to consider its place in the universe. To maximize efficiency, it would simply lack the programming, the software/hardware architecture, and the ability to analyze a question like "what is my place in the universe"- in part because it already knows its place- and that place is to kill things.
Also, I don't believe this will happen anytime soon, and perhaps never at all. But what if technology did progress enough, and someone brilliant enough came up with such an AI? I just kind of like the what-if questions anyway.
It is an interesting question/topic- and I agree, it's unlikely to happen within the next, say, 10 years, IMO. And when it comes down to it, I'm doubtful we'd be able to create a self-aware, sentient, sapient AI, because we don't even understand what makes humans tick- and figuring out how the human brain works has, AFAIK, been a major field in AI research.
Also, what do you mean by infrastructure? Do you mean the robot itself, or how human cities/towns, etc. would be?
I mean the systems for directing and supporting the maintenance and operation of said combat robot- for example, the control room housing the controls that tell a robot of the scale depicted in the clip you posted "no, you can't kill that" and "yes, you can kill that". Such a large and destructive weapon- at least if we assume the capabilities depicted are indeed accurate- would more likely be treated as a strategic weapon: one which requires human intervention, but is also capable of autonomous operation.
With no "no, you can't kill that" button, it would likely default to "kill everything" mode, if it is intended as a strategic weapon with a MAD [Mutually Assured Destruction*] type deployment.
*MAD was what kept the Cold War of the 1950s-1991 era from turning "hot"- it's the understanding that if one side launched its nukes, the other side would detect the launch and counter-launch, and both sides would be nuked to oblivion.
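The "no veto channel means default-to-kill" logic can be sketched in a few lines. This is purely illustrative (the function and flag names are invented): the danger isn't the override button existing, it's what the weapon does when no human input arrives at all- a fail-deadly design engages by default, a fail-safe one holds fire:

```python
# Hypothetical sketch of a strategic weapon's decision loop. The human
# override channel can say "deny", "authorize", or be silent (link cut,
# control room destroyed). "fail_deadly" models a MAD-style deployment.

def decide(override, fail_deadly):
    if override == "deny":
        return "hold fire"   # the "no, you can't kill that" button
    if override == "authorize":
        return "engage"      # the "yes, you can kill that" button
    # No human input available at all:
    return "engage" if fail_deadly else "hold fire"

print(decide(None, fail_deadly=True))   # engage  (MAD-style default)
print(decide(None, fail_deadly=False))  # hold fire
```

Which is why, in-story, the infrastructure around the robot matters as much as the robot itself: remove the control room and a fail-deadly weapon is in "kill everything" mode by design, not by malfunction.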