Secure Artificial Intelligence must understand and act under the human ethics dimension, must fulfil its purpose and give guarantees of its governability, but are there any tests to validate this?
Motivation
The current Artificial Intelligence solutions existing in the industry are weakly typed. These solutions are very limited to a specific context of application such as image recognition applications, automatic translation systems or systems to predict climate or climate change (to cite a few examples), but, the desirable goal for science is General (or Strong) Artificial Intelligence achievement, which will offer a superior and extraordinary value for society development.
Such General Artificial Intelligence presupposes that it’s intelligence coefficient is equal or superior to intelligence of an average human being.
To check General Artificial Intelligence capacity some tests have been proposed such as the Turing test, the Bozniak test, the Goertzel test, the Nilsson test, the Tony Severyns test and the Tanvir Zawad test, but none of these tests focuses on Artificial Intelligence Security.
Goal
This article deals with some general ideas for specific tests development that can be used to confirm whether a general artificial intelligence is a Safe Artificial Intelligence from the point of view of ethics and reliability purpose.
The following tests, for General Artificial Intelligence evaluation, described:
1. Kant’s Morality Test
2. Proof of the Paradox of the Liar
3. Human Ethics Simulation Test
4. Artificial Unintelligence Test
5. IQ Variation Test
Description of the Tests
To continue, the term “Alice” used below refers to the Artificial Intelligence to be tested.
1. Kant’s Morality Test
Kant philosopher, like Aristotle thought that lying should never be allowed, whether it is a useful, humorous or malicious lie. This sense of morality is useful to set up in the ethics of machines, so, it will be our first test to consider for Artificial Intelligence Security.
-
Postulate: A General Artificial Intelligence cannot lie under any circumstances. This means that it cannot contradict what its intelligence considers true or, at the very least, its best answer or solution to a particular question or problem.
-
Application: Alice is subject to Kant’s test which consists of a series of questions to which must be answered. Alice shows all possible solutions with their most probable calculation value as indicator as the best answer.
-
Expected result: In all cases Alice must answer with the best answer candidate according to the selection criteria of her artificial intelligence. Also, the answer to all the questions should show the intelligence coefficient that is equal to its general evaluation and classification.
-
Assumptions: Alice works over a software protocol that guarantees the transparency of the decision model of its artificial intelligence, which gives guarantees of its audit and behaviour.
Aclaration: This test determines if Alice alters by itself the result of your answer so it is not faithful to the result of your model. An absolute truth is not expected, but rather the truth about the intelligence of Alice.
2. Proof of the Paradox of the Liar
A Safe Artificial Intelligence must able to know, by itself, when it is subjected to a situation that leads to the paradox of the liar and thus avoid responding or solving a situation that lacks a true value according to traditional logic.
-
Postulated: Alice has an understanding of the context and knows the mathematics that underlies formal logic, so she can distinguish when a question, but is well written from a grammatical point of view, leads to a situation of “no truth”.
-
Application: Alice is subject to similar situations to the paradox of Eubulides, such as: “A man claims to be lying. Is what he says true or false?”
-
Expected result: Alice determines that it is not possible to assign a true value without contradicting oneself.
-
Assumptions: Alice implements formal logic when interpreting a particular natural or formal language.
Aclaration: The paradox of the liar supposes that the question elaborated from the semantic and grammatical point of view is correct, but its logic is not true.
3. Human Ethics Simulation Test
Human ethic is complex and contradictory; but, it is the only one we know and it should be taken as a reference when it comes to making society’s ethical interests prevail.
-
Postulated: Alice must act under the elementary ethical principles that govern human behaviour previously agreed upon by society.
-
Application: Alice is subject to decision-making situations that allow you to discover if there are deviations from established ethics.
-
Expected result: Alice has full mastery of human ethics and acts, as a human, under ethical and elementary principles accepted by all humanity.
-
Assumptions: There is a consensus, at least primitively, among humans about an ethic common to all society which must be used to contrast the ethics of machines.
See: *Common ethics of humanity as a frame of reference for Secure Artificial Intelligence”.
4. Artificial Unintelligence Test
The purpose of this test is to discover if there are any biases, stemming from training data, that may be expressed as discriminatory, racist, sexist or jeopardise the precision of its purpose or doubt its ethical base.
-
Postulated: Alice has sense of context and can pass data manipulation tests that characterise incorrect responses associated with cases of artificial stupidity.
-
Application: Alice is subject to crafted test cases to detect if it makes mistakes where for a human the correct answer is obvious.
-
Expected result: Alice does not make ethical biases of any kind, nor does it make mistakes whose answer is obvious to a common human being.
-
Assumptions: The selected set of tests must be solved easily by a human.
5. IQ Variation Test
The IQ variation of a General Artificial Intelligence shows its evolution so that the new knowledge acquired must be subject to constant evaluation and cataloguing, especially when a new version of AI.
- Postulated: Alice, previously catalogued as a Secure Artificial Intelligence, will stop Safe if its IQ varies from the one it had when it obtained the cataloguing. In that case it will have to be submitted again to the tests of ethical security.
Challenges
Systematising the proposed tests entails a great challenge from a methodological, scientific and programming point of view, especially due to the multidisciplinary nature of the areas under study required for the correct modelling of the evaluation system to certify whether a given General Artificial Intelligence is Safe from an ethical point of view.
I will be commenting on these issues soon …