I think the chance of error is concerning with an intelligent robocop type machine

Discussion in 'Intelligence & Machines' started by Joaquin, Mar 10, 2013.

  1. Joaquin Sleuth Registered Senior Member

    Messages:
    387
    If we ever get to the point where Robocop-type machines are walking around, designed for security and defense against crime, I feel there's a big chance of error and mistake.

    If the machine could be hacked, or programmed in a simplistic manner that allowed its abilities to be altered or turned it into a malicious machine, there would be huge problems. I feel there is a strong chance the machine would fall into the wrong hands and be reprogrammed, or reverse engineered if it were destroyed.


    Concerning brain emulation, I think everything could be altered if the technology were available; someone out there would find a way of doing it very quickly. An intelligent design based on the human brain's complexity would not be a good idea from a societal standpoint. From a military defense standpoint it could be useful, but only in limited numbers. The worst-case scenario, which becomes more likely the more advanced the machine's design and abilities are, is the robot turning against humans and wiping them out. I guess a kill switch would be there to turn them off if necessary.
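    The kill-switch idea can be sketched in software. This is only a minimal illustration (the class and function names are invented for the example, and a real system would pair software checks like this with an independent hardware cutoff):

    ```python
    import threading

    class KillSwitch:
        """Minimal software kill switch: tasks poll a shared event
        and stop as soon as it is set. Illustrative only."""
        def __init__(self):
            self._halt = threading.Event()

        def trigger(self):
            self._halt.set()

        def engaged(self):
            return self._halt.is_set()

    def patrol_loop(switch, steps):
        """Stand-in for the robot's work loop: checks the switch
        before every action and returns how many steps it completed."""
        completed = 0
        for _ in range(steps):
            if switch.engaged():
                break
            completed += 1
        return completed

    switch = KillSwitch()
    switch.trigger()
    print(patrol_loop(switch, 100))  # prints 0 -- halts before doing any work
    ```

    The point of the pattern is that the check happens before every action, so a triggered switch stops the machine mid-task rather than only between missions.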



    I personally see the government eventually creating Terminator or Robocop types of machines, which could turn on their creators. This is obviously nothing new, but I feel the chance of error or mistake is very high with something like this.

    I could be wrong; perhaps there are more technological advancements to come in the future that would stop this from ever happening.
     
  3. youreyes amorphous ocean Valued Senior Member

    Messages:
    2,826
    in summary: everything is likely
     
  5. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,101
    Sometimes there is nothing to fear but fear itself, especially if you use the fear of the hypothetical to steer around the problem of that fear becoming actual. In the case of Artificial General Intelligences, the fear of "what it could become" pushes us to make sure we don't let that happen. If we fear that human error will be responsible for a disaster, we'll do our best to reduce the error margin. If we believe that malicious elements will gain access to the code and subvert the equipment to their whim, then we'll do our best to make sure the design parameters have fail-safes to guard against such deviations. The one thing we can't seem to guard against, though, is the tyrannical abuse of capitalism: if a project's funding is suddenly slashed, corners are cut to meet that project's initial timescale, which is where most of the problems we suffer in the real world come from (e.g. banking computer failures).

    You will notice that in most cases it always comes back to the quote from Shakespeare's Julius Caesar, Act III, Scene II.

    Machines themselves would not have the concept of Morality to be inherently Good or Evil, at least not without the programming being added by Humans.
     
  7. Joaquin Sleuth Registered Senior Member

    Messages:
    387
    I think if the machine's configuration files or options could be altered very easily, then it could fall into the hands of terrorists, criminals, etc., given that those features were easily accessible.

    Suppose a machine designed to be good and to serve the public were destroyed or badly damaged by a criminal gang. If the knowledge needed to "hack" its configuration file, or whatever kind of system control was in place, were easily accessible to them, then there's a chance of terrorism. A machine could be altered in its design, abilities, configuration, etc.; I think the possibilities would be endless. The amount of customization in the design could be limitless, given the technology to do so.
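    One standard way to make a configuration file hard to alter silently is to authenticate it with a keyed hash, so the machine refuses any config whose tag doesn't verify. A minimal sketch, assuming a JSON-style config; the key handling and field names here are invented for illustration (real keys would live in tamper-resistant hardware, not a source file):

    ```python
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"factory-installed-secret"  # illustrative placeholder

    def sign_config(config: dict) -> str:
        """Return an HMAC-SHA256 tag over the canonically serialized config."""
        payload = json.dumps(config, sort_keys=True).encode()
        return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

    def load_config(config: dict, tag: str) -> dict:
        """Accept the config only if its tag verifies; otherwise refuse it."""
        if not hmac.compare_digest(sign_config(config), tag):
            raise ValueError("configuration rejected: signature mismatch")
        return config

    cfg = {"mode": "patrol", "use_force": False}
    tag = sign_config(cfg)
    load_config(cfg, tag)  # accepted: tag matches

    tampered = {"mode": "patrol", "use_force": True}
    # load_config(tampered, tag) would raise ValueError: the old tag
    # no longer matches the altered contents.
    ```

    This doesn't stop someone who has stolen the key, but it does mean a casually edited config file is rejected rather than silently obeyed.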


    If a computer helped build the machine, then the machine could be controlled by that computer, acting as an observer/controller, provided the feature was enabled for the robot and not locked. Worryingly, such a feature could be disabled or enabled at a whim without any sort of password protection.
    And then we have 4D printing on the horizon, after 3D printing takes off. I think a robot could build itself new limbs or larger body parts with 4D printing technology. It could be a much quicker, more effective way than the conventional standard of creating parts. A robot wouldn't then need another robot to build the parts for it if it were stuck in the Sahara Desert; it could self-assemble when damaged, triggered by exposure to heat, water, light, sound, etc.
     
