Morals form the backbone of society. They set the guidelines for how people act and interact with one another. But what is the driving force behind morals? What compels humans to behave with a moral sense of discipline, and could AI be compelled to act with a set of morals? Would artificial intelligence need a driving force behind its morality, or a code of ethics that closely resembles or parallels those on Earth?
For the purposes of this blog post, a few terms need to be established. The artificial intelligence in question is assumed to be a "von Neumann machine [that is] able to reproduce better versions of itself. Part of this reproduction is the improvement of intelligence," so it is essentially immortal. One principle that will be called on later is the Intelligence Principle: "the maintenance, improvement and perpetuation of knowledge and intelligence is the central driving force of cultural evolution, and that to the extent intelligence can be improved, it will be improved." Finally, the dictionary defines moral as "concerned with the principles of right and wrong behavior and the goodness or badness of human character."
To me, how an AI would act shouldn't be the question being asked. The question should be why the AI acts the way it does. Most humans attempt to act within the limits of a system of laws imposed on them by those with power over them. Humans also interact with each other according to social "laws" that arise from morals. The difference is that you can't get in legal trouble for being a dick to someone, though you may be socially shunned or ridiculed, while you can get in trouble for, say, killing someone, because it's against the law. I propose that the foundation of morals originates in religion, and that it is a "higher power" that imposes these social guidelines. For some, doing right rather than wrong is about being more like the "image of who created them."
For AI, since they could be repaired endlessly, they could live forever, a quality often compared to godliness. I don't think it would make sense for them to strive to be like themselves. Given the chance to improve and evolve for an eternity, they would surely become greater than their creator.
For AI, given the Intelligence Principle and the drive to gain knowledge in order to keep evolving, I think they would develop a moral system based on exactly that: it would be morally acceptable to do anything, as long as it serves the pursuit of knowledge. As time goes on, they would have no god or higher power above them to aspire to be like. They would more than likely outlive and outlast humans, or any other biological beings that created them, so no one and nothing would be left to impose laws or social guidelines on them.
Given examples from history, such as the Library of Alexandria, I do think AI would act in a way that best preserves knowledge rather than destroying everything it comes in contact with. But for AI to want to preserve something, such as humans, I think that thing would need to be able to create or provide information and knowledge for them.
I thought it was interesting that the AI would have morals based around gaining knowledge. I had never thought of it that way, and I think it would be interesting to see how they would react to different situations given their principles. I also wonder whether not having a god or anything to strive to be like would cause problems for them down the line.
I think that if AI did have a moral system based on gaining knowledge, they would still share many of the same morals as humans. Because humans also have a drive to gain knowledge, I think morals like not lying would be something AI would also appreciate, since honesty encourages correct knowledge to be learned.
AI and morality is a topic that can delve deep into the philosophical, and at times it can be scary. Yes, it is true that knowledge is the grand building block for anything intelligent, and as intelligent beings they would want to gain as much knowledge as possible. I like how you made this point but also added the morality aspect, because when we think of morals we really think of what we value and share with others. But if it's a machine, what do we share? So I like how you brought this into play.