This publication poses a primitive question about the need for a new software licence, raises a conjecture, discusses the dilemma such a licence will face, and extends an open invitation to adopt a new SRU licence.
Preamble
For some time I have been working on a protocol to improve the interoperability of digital relationships: between humans, between humans and machines, and between machines. Yet the issue of regulating the development of Artificial Intelligence (AI) applications keeps pulling me back, again and again, to a primitive question. This primitive question has inspired me to launch the initiative for a new type of software licensing.
It is an open question for everyone who is concerned about the future of Artificial Intelligence and for everyone who spends time innovating or generating new knowledge and content; but it is aimed especially at software developers, architects, and all the people who are building the digital age.
I also invite business leaders, governments, legislators, politicians, and everyone who believes that the use of artificial intelligence should be regulated, or who at least wants the fruits of their creations to have a positive and distinctive use for society as a whole.
Regulating artificial intelligence may well be imperative; even so, any such initiative demands that we first regulate human activity. After all, we are the ones who define the setting, the context, and the objectives for the supervised or automatic learning of machines. At least for the time being. By the time the object and context of machine-learning training are established by another machine, it may already be too late. In fact, the latter is already happening.
Without further ado, here is the question.
The primitive question:
Considering the risks associated with the development of out-of-control AI:
Would you limit the use of your creations through a Socially Responsible Use clause?
To make this question easier to grasp, let us ask it again in a specific application context:
What would you do if you could prevent, to some extent, the source code you release, for example under an open source licence, from being used by third parties to build an intelligent weapon of destruction?
Perhaps you are a pacifist and do not want to subsidise that kind of activity. Or you want the result of your work, developed through your own effort and released as free software, to be a reward for the people and organisations committed to achieving a fairer, more inclusive, and better world for everyone.
Do you see the democratising power that a creator can unleash with this initiative?
Simply by distributing work under a new type of socially responsible use licence, we can lay the foundation for the development of safe AI.
After reviewing all the existing types of free and open source software licences, I realised that none of them provides a solution to the problem posed in our primitive question. We will address the dilemma this creates later on.
A conjecture
At this point and with the intention of understanding the relevance of this subject, I dare to raise the following conjecture:
If Albert Einstein could come back to life, he would travel into the past to attach a licence conditioning the Socially Responsible Use of his most famous discovery: the equivalence between mass and energy, E = mc^2.
Despite the fact that it was Albert Einstein himself who urged President F. Roosevelt to start a nuclear programme, and given his later regret, he would have said: "Publish E = mc^2 under a Public Licence for Socially Responsible Use", thereby declaring his resounding opposition to the subsequent bombings.
Of course, the historical context was different, and there was no cultural basis for establishing such a licence. But the Free and Open Source movement is now well established, and the community navigates the many existing licence types very capably. Even so, the power of the community in the digital age has only begun to advance the democratisation of society.
Many motivations lead a creator to share their work, and the proposed new SRU licence will be a source of new ones, giving developers a unique power in the development of society: SRU is "community". But first, let us turn to a small dilemma.
A Little Dilemma
For the community, it would be most convenient to annex the SRU to the existing Free and Open Source Software movements. But according to the OSI, on its website opensource.org, the question "Can I prevent 'evil people' from using my programme?" receives a resounding "No", with the following argument:
The definition of open source specifies that open source licences cannot discriminate against individuals or groups. Giving freedom to all also means giving freedom to the wicked.
This answer seemed to make sense before the proliferation of AI applications. Do we really want to give up even the smallest principle of "responsibility" at any cost? No doubt this dilemma will spark plenty of debate online, but we prefer to focus on the new values we are creating.
Open invitation to the community
Adoption of SRU licensing
The effort made by individuals and leading companies in the sector to provide the tools and solutions that enable the mass adoption of cognitive computing, machine learning, and artificial intelligence allows the explosive, exponential development of new applications. Many of these solutions are within everyone's reach, and that is very favourable for the democratisation of science; however, so much power should come with great responsibility. If regulating AI is necessary, regulating the purpose of our creations is fundamental, and it starts by regulating, to a minimal and elementary degree, human activity.
How could the future of Artificial Intelligence applications go well if we do not ensure that we humans assume our responsibility for it?
The risk of a proliferation of out-of-control AI applications is real. Major technology companies have expressed concern about this and have signed AI non-proliferation agreements.
That is why we call upon and invite all technology companies and industry leaders to adopt the Socially Responsible Use (SRU) Licence within their current licensing schemes, thereby establishing a legal framework that limits the use of software to purposes compatible with the most basic principles of humanity.
Consider the impact if a company such as Google Inc., for example, released TensorFlow under an SRU licence: responsibility would be transferred to everyone who uses the software, making them accountable for the purpose of their derived applications.
Signing agreements not to develop AI weapons is, of course, positive, but it is insufficient when the very companies that sign such agreements release software under free software licences that is subsequently used by other entities or individuals to train and develop AI weapons.
From our point of view, adopting the SRU Licence is a practical, useful, easy-to-implement, and socially impactful approach. That is why we invite you to join this initiative by publishing your work under an SRU Licence.
Right now, we are working to make it easy to combine SRU with other existing licence types.
Does this make sense to you? Let me know your point of view by leaving a comment, or email me directly if you wish.
A survey
To extend the primitive question posed in this article, I have prepared a survey to gather the community's opinion.
The next entry refers to that survey: A Big Question That Concerns Us
This project is part of the Iamio initiative for interoperability and advanced entity relationships for the development of secure AI.