How to Train Your Robot

by Ken Ham on May 14, 2015

Scientists have great hopes of someday creating artificial intelligence (AI) that can learn much as humans do. This theme is popular in many movies, though in those films the robots often turn into destructive machines that take over the world and try to wipe out or enslave humanity. Bertram Malle of Brown University recently wrote an opinion piece for LiveScience proposing how moral robots might be created if we were ever to achieve this level of technology. His solution involves creating robots that learn proper behavior from those around them. However, to keep a robot that falls in with the wrong crowd from learning poor behavior, “programmers establish rules, laws and protocols that prohibit a robot from learning anything that is socially undesirable.”
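Malle’s piece offers no implementation details, but the basic idea of a learned-behavior filter can be sketched in a few lines of Python. This is a minimal, hypothetical sketch assuming a simple blocklist; the PROHIBITED set, the LearningRobot class, and the behavior strings are invented for illustration and come from nothing in his article.

```python
# Hypothetical sketch of the "prohibition" idea: the robot adopts
# behaviors it observes in its community, but a hard-coded rule set
# blocks anything the programmers have flagged as undesirable.

PROHIBITED = {"stealing", "lying", "harming humans"}  # chosen by programmers


class LearningRobot:
    def __init__(self):
        self.learned = set()

    def observe(self, behavior: str) -> None:
        """Adopt an observed behavior unless the protocol prohibits it."""
        if behavior in PROHIBITED:
            return  # the rule set blocks socially undesirable learning
        self.learned.add(behavior)


robot = LearningRobot()
for seen in ["sharing", "stealing", "greeting"]:
    robot.observe(seen)
print(robot.learned)  # {'sharing', 'greeting'}; stealing was filtered out
```

Of course, even this toy version only pushes the question back a step: someone still has to decide what goes into the prohibited list.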

This opinion piece, however, raises some interesting questions: Who sets the rules? Who determines what is ethically acceptable behavior for the robot? In a secular worldview, what foundation is there for morality, let alone robot morality? In his opinion piece, Malle seems to treat the community as the foundation for morality. He says that if the robot receives conflicting moral messages, “the robot should ask the community at large who the legitimate teacher is. After all, the norms and morals of a community are typically held by at least a majority of members in that community.” But who’s to say which community is right? After all, some communities consider cannibalism acceptable behavior. Does that make cannibalism ethically acceptable for the robot? What about communities that consider stealing acceptable? Should the robot then just go out and steal? Most people would say, “Of course not.” Yet if your standard for morality is the community, you can’t say that. In this view, what is wrong for your community isn’t necessarily wrong for another community, because there is no objective standard!
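Malle’s conflict-resolution rule amounts to a majority vote over the community’s norms, and a tiny sketch makes the problem concrete. This is hypothetical illustration only (the function name and the sample votes are mine, not his): the very same rule returns opposite verdicts depending on which community happens to be polled.

```python
from collections import Counter


def community_verdict(votes: list[str]) -> str:
    """Return whichever moral judgment the majority of members holds."""
    # Majority rule: the most common answer wins, whatever it happens to be.
    return Counter(votes).most_common(1)[0][0]


# Hypothetical community A: most members call stealing wrong.
print(community_verdict(["wrong", "wrong", "acceptable"]))       # wrong

# Hypothetical community B: most members call stealing acceptable.
# The identical rule now yields the opposite verdict.
print(community_verdict(["acceptable", "acceptable", "wrong"]))  # acceptable
```

Under majority rule the verdict simply tracks whoever is in the room; nothing in the procedure itself can say that one community’s answer is the right one.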

Morality can’t work this way for hypothetical robots or for humans! Certain behaviors are just wrong, regardless of what the community says. But how can we know which behaviors are right and which are wrong? Because we have an objective standard—God’s Word. The Bible, given to us by our Creator, provides the only firm foundation for morality. God has clearly laid out for us in His Word what we are to do and not do. And as our Creator, He alone has the right to set these rules. God’s Word provides the foundation for morality!

And why is it that people have a sense of right and wrong anyway? If we are just animals, how could there be an absolute morality? Well, we are not just animals; we are made in the image of God. And as God’s Word instructs us,

For as many as have sinned without law will also perish without law, and as many as have sinned in the law will be judged by the law (for not the hearers of the law are just in the sight of God, but the doers of the law will be justified; for when Gentiles, who do not have the law, by nature do the things in the law, these, although not having the law, are a law to themselves, who show the work of the law written in their hearts, their conscience also bearing witness, and between themselves their thoughts accusing or else excusing them) in the day when God will judge the secrets of men by Jesus Christ, according to my gospel. (Romans 2:12–16)

Thanks for stopping by and thanks for praying,
Ken

This item was written with the assistance of AiG’s research team.
