For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn't have other horrible things to relate it to?
AI learns from the data it is given; there is no inherent understanding in it.
For a text-based AI:
The AI does not inherently understand anything, but it will behave the way you trained it to, to the degree you trained it, and with all the imperfections you trained it with (e.g. prejudices).
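A minimal sketch of that point, using a made-up toy classifier and invented example sentences (nothing here is a real system): the "model" below only knows the words it was trained on, so a genuinely harmful thing described in unfamiliar words gives it nothing to relate to.

```python
from collections import Counter

# Toy training data: the only "knowledge" this model will ever have.
# The examples and labels are invented purely for illustration.
TRAINING_DATA = [
    ("the attack hurt many people", "harmful"),
    ("the poison made everyone sick", "harmful"),
    ("the weapon caused great damage", "harmful"),
    ("the garden party was lovely", "harmless"),
    ("the children enjoyed the picnic", "harmless"),
    ("the concert made everyone happy", "harmless"),
]

def word_counts(label):
    """Count how often each word appears in examples with this label."""
    counts = Counter()
    for text, lbl in TRAINING_DATA:
        if lbl == label:
            counts.update(text.split())
    return counts

HARMFUL_WORDS = word_counts("harmful")
HARMLESS_WORDS = word_counts("harmless")

def classify(text):
    """Label text purely by overlap with words seen during training."""
    words = text.split()
    harmful_score = sum(HARMFUL_WORDS[w] for w in words)
    harmless_score = sum(HARMLESS_WORDS[w] for w in words)
    if harmful_score == harmless_score:
        return "unknown"  # nothing in the training data to relate it to
    return "harmful" if harmful_score > harmless_score else "harmless"

# A genuinely bad thing, described in words the model never saw:
print(classify("the new gizmo quietly erases everyone's memories"))
# -> "unknown": with no related training examples, the model has no
#    basis to call this harmful, no matter how bad it actually is.
```

The model's judgments are just reflections of the examples it was given; change the training data and its notion of "harmful" changes with it, prejudices and all.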