
Dealing with Ethical Questions Raised by Artificial Intelligence

Posted in Machine Learning

Dealing with ethical issues is often perplexing. The challenge is not to eliminate a moral dilemma but to work out how one should think it through and which factors to consider in order to ask the right questions. A few methods have been proposed for dealing with the ethical concerns that machine intelligence raises. One is a legal solution: prevent the development of any technology capable of posing a risk to humanity, such as Anthony Berglas' suggestion of outlawing the production of more powerful processors. Similarly, restrictions can be placed on artificial intelligence to stop it from becoming superintelligent. Development can also be restricted to prevent harm to humans; for example, David Chalmers suggested confining artificial intelligence systems to virtual worlds so that their output can be studied and fully understood. Another option is the "oracle" artificial intelligence suggested by Nick Bostrom, which would be capable only of answering questions.
The Three Laws of Robotics, as stated by Isaac Asimov in 1940, can serve as rule-based standards for the behaviour of machines. Also, since 1999 the American Society for the Prevention of Cruelty to Robots has highlighted the issue of the rights of artificially created sentient beings and the responsibility that comes with their creation.
In 2007, a set of ethical guidelines called the Robot Ethics Charter was adopted. EURON (the European Robotics Research Network) is developing guidelines for privacy, safety, traceability, security and identifiability in artificial intelligence. It has also been suggested that the present economic model of supply and demand will ensure that machines always need humans to survive. However, humanity will never be the same once machines join human civilisation.
Systems can also be integrated into society as long as they are law-abiding and exhibit a predictable set of behaviours. Artificial intelligence inherits social obligations. An algorithm should be predictable to those who develop it and should provide an environment within which society can enhance its life. Artificial intelligence can also be programmed to self-monitor. Although an algorithm should be powerful and scalable, it should also be transparent to inspection and robust against manipulation, for security reasons. Programmers must specify what good behaviour means and generalise over the consequences of actions; a rough sketch of what that could look like in practice is given below.
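To make the ideas of rule-based constraints, self-monitoring and transparency a little more concrete, here is a minimal sketch of one possible shape such a check could take. It is only an illustration: the Action fields, the example rules and the SelfMonitoringAgent class are assumptions introduced for this post, not a description of any particular system mentioned above.

```python
# A minimal sketch of combining rule-based constraints, self-monitoring,
# and transparency-by-logging. All names here are illustrative assumptions.

import logging
from dataclasses import dataclass
from typing import Callable, List

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")


@dataclass
class Action:
    """A proposed action with flags the rules can inspect."""
    name: str
    harms_human: bool = False
    violates_law: bool = False


# Each rule returns True if the action is permitted under that rule.
Rule = Callable[[Action], bool]

RULES: List[Rule] = [
    lambda a: not a.harms_human,   # analogous to "do not harm humans"
    lambda a: not a.violates_law,  # remain law-abiding
]


class SelfMonitoringAgent:
    """Wraps an action-proposing policy and vets every action before acting."""

    def propose(self) -> Action:
        # Stand-in for the real decision-making policy.
        return Action(name="recommend loan approval")

    def act(self) -> None:
        action = self.propose()
        # Self-monitoring: check every rule and log the outcome so the
        # decision remains transparent to later inspection.
        violations = [i for i, rule in enumerate(RULES) if not rule(action)]
        if violations:
            logging.warning("Rejected '%s' (rules violated: %s)", action.name, violations)
        else:
            logging.info("Executing '%s' (all rules satisfied)", action.name)


if __name__ == "__main__":
    SelfMonitoringAgent().act()
```

The point of the sketch is the structure, not the specific rules: the constraints are explicit and inspectable, every decision is logged, and the policy itself is kept separate from the checks that monitor it.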
Trust in machines is a debated, co-created phenomenon that has a lot to do with previous experience and judgement, and it involves augmentation. Augmentation means starting with what humans do today and figuring out how that work can be extended, rather than diminished, by greater use of machines. Hence, humans should not abandon their responsibility for evaluating, and if necessary rejecting, the advice or conclusions of a computer program.
Therefore, in the short term the influence of artificial intelligence depends on who controls it, while in the long term its ethicality depends on whether it can be controlled at all.
