How are Ethics and Artificial Intelligence related?


Artificial Intelligence (AI) refers to computer systems that attempt to perform tasks usually reserved for humans. These systems are designed to make predictions from data and to carry out tasks such as playing games and diagnosing illnesses.



The field of AI is still in its infancy, but questions are already being raised about its future ethical impact.



Currently, humans create the computer systems on which AI operates, so the logical question is: what are the moral standards of the developers?



AI development must always respect privacy rights and data protection, and ensure the security of every individual's data.



AI systems must remain subject to human oversight and control at all times.



Bias is already built into AI systems. For example, only about 20% of AI designers and developers are female; this imbalance must be considered and monitored.



Soon, if not already, AI-enabled computers may take over from humans and create their own AI systems. How closely would the ensuing moral code resemble our own?



AI systems should not be used for social control or mass surveillance.



No matter which AI method is employed, it must not violate or abuse human rights.



To mitigate these moral issues, it is crucial to develop AI systems within an ethical framework that involves multiple stakeholders and relies on diverse development teams, transparency, accountability, and continuous evaluation. Only then can we ensure that AI aligns with the ethical standards societies expect.
Could this be the perfect moment to pursue something like a universal morality?