
Should we fear Artificial Intelligence?

by Parvinder S., Software Engineer and Developer

The idea of an automated society, full of robots both at home and at work, was one of the utopias (and dystopias) with which literature reacted to the introduction of automation systems. At the beginning of the 20th century, the automobile and the traffic light popularized automation at street level. Since then, the number of machines and automatic processes in our lives has increased exponentially: washing machines, ATMs, a camera's autofocus, automatic doors, the car wash, the thermostat... And the fear they initially aroused has given way to a sense of routine. Automation is so common that we do not even notice when we run into it.

Nor is artificial intelligence (AI) in gaming a recent innovation. As early as 1949, the mathematician and cryptographer Claude Shannon pondered a one-player chess game in which a human would compete against a computer.

However, artificial intelligence (AI) and automatic machines are not the same thing. AI is a form of advanced automation. In conventional devices, very precise programming rules are created with which a machine executes certain tasks. Efficiency depends on the detail and accuracy with which the task has been programmed: for example, plotting the shortest route between Seville and Madrid. What AI allows is more abstract automation: tracing the fastest route between Seville and Madrid while taking into account roadworks, the number of traffic lights, the hours when traffic is foreseeably heaviest, and unforeseen events such as accidents or weather conditions. That is to say, programming focuses on creating rules with which to measure efficiency in that context, and on developing performance parameters. By following these rules, intelligent automation systems choose the most efficient process. That level of abstraction is a milestone in the history of technology.
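The contrast can be sketched in a few lines of Python. The road segments, travel times, and congestion factors below are all invented for illustration: the same search algorithm returns a different "best" route once contextual costs are folded into the edge weights, which is exactly the kind of recontextualization described above.

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the cheapest route in a weighted graph."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical road segments with base travel times in minutes (invented):
base = {"Seville": {"Cordoba": 80, "Merida": 120},
        "Cordoba": {"Madrid": 190},
        "Merida":  {"Madrid": 200}}

# Contextual signals a smarter system might fold in (also invented):
# roadworks slow the Cordoba-Madrid leg by a factor of 1.8.
congestion = {("Cordoba", "Madrid"): 1.8}

adjusted = {a: {b: w * congestion.get((a, b), 1.0)
                for b, w in nbrs.items()}
            for a, nbrs in base.items()}

print(dijkstra(base, "Seville", "Madrid"))      # best route on fixed rules
print(dijkstra(adjusted, "Seville", "Madrid"))  # re-ranked once context counts
```

On the fixed weights the route via Cordoba wins; with the congestion factor applied, the route via Merida becomes cheaper. The algorithm never changed, only the cost model did.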

These achievements amaze and frighten at the same time. Owing to our lack of familiarity with it, AI seems like magic and leads us to reopen old debates. Is this technology intelligent? Does it have feelings and a will? Is it capable of malice and treachery? Who is responsible if the system has unforeseen harmful effects? Will the nature of the human being change? What risks does it entail? Do we need new standards?

These very issues were the subject of debate in the courts of various countries after the commercialization of the automobile at the beginning of the 20th century. The fact that the same uncertainties and questions raised by the introduction of a new means of transport re-emerge a century later with the arrival of AI calls for revisiting the debate of yesteryear. From the normative point of view, three aspects deserve our attention.

1. Technology only seems intelligent and human while its use is not yet commonplace

The commercialization of the car was at first desired by all social strata. The automobile as a means of transport promised a future of efficiency and hygiene in cities whose streets were covered in horse manure. Within a few years the mood turned 180 degrees, and cars came to be seen as a new urban plague. In the 1920s, demonstrations protesting the insecurity of the streets were common: re-enactments of real accidents, with bloodied mannequins and Satan as the driver. The cities of Washington and New York organized parades with 10,000 children dressed as ghosts, symbolizing the annual number of deaths in traffic accidents.

Obviously, with the passage of time, contact and familiarity with the new vehicles weakened the humanizing theories that attributed motives and diabolical intentions to machines. The ethical and legal debate returned its focus to the behavior of the human beings in front of and behind the wheel.

That aspect of the discussion, philosophical at first sight, had a clear legal consequence: the idea of holding the machine responsible, as if it were an intelligent entity, was discarded. In retrospect, the opposite would not only have been ridiculous, but would have posed the challenge for ethics and law of creating norms and sanctions viable for both humans and machines.

In this respect, the debate about artificial intelligence has the same implications and raises the same legal and ethical questions. Does the robot have intentions that would justify creating a legal entity of its own? In what way would responsibility fall on the machine, exculpating every human being? How could a sanction be applied to a machine?

Artificial intelligence and its methods of statistical analysis do not in themselves contain a will of their own. Artificial intelligence is not intelligent. It is therefore incapable of having ambitions and interests of its own, or of cheating and lying. In other words, artificial intelligence should frighten us about as much as statistics does. That does not mean it is innocuous.

With AI, very transparent protocols can be established to determine which modifications have been made by which people, regardless of how complex the algorithms with which this technology operates may be. There is no reason to create a specific legal entity for artificial intelligence. The technology itself allows responsibility for failure or abuse to be attributed to a specific person with more clarity and ease than before.

Both the driver who operates the artificial intelligence and the pedestrian exposed to that traffic can be identified.

2. Ethics and law must be neutral with respect to technology

The first regulatory attempts would seem grotesque to us today, especially for imposing obligations on actors unable to exercise adequate control over the machine. In the United Kingdom, for example, the driver was required to notify the sheriff before driving through a municipality so that the latter, armed with two red flags, could march in front of the car and warn pedestrians.

The legal system that tried to regulate traffic attributed responsibility exclusively to the driver. In those days, however, the streets were characterized by their unpredictability: traffic signs had not yet been invented, children played on the road, horse-drawn carriages bolted when they heard the engines, and pedestrians were unable to judge the speed at which cars were approaching. All this made the responsibility assigned to the driver disproportionate: from a physiological point of view, it was impossible to react to so many unforeseen events.

Pragmatism and a sense of social justice led the Canadian James Couzens to devise a system of signals and traffic rules to coordinate pedestrians and drivers. Couzens resigned from his position as vice president of finance at Ford and began to work for the City of Detroit (USA), the world capital of the automobile at the time. Cigar in hand, Couzens revolutionized the transport infrastructure. First, he identified the situations in which responsibility fell on the pedestrian, and created signs and zones for crossing the streets.

At first, resistance from society was great. The rules and obligations for pedestrians were not free of controversy: Councilman Sherman Littlefield branded them degrading for "treating ordinary citizens like cattle." Couzens did not let himself be intimidated and imposed his norms by decree. Time proved him right: the effectiveness of his proposal was demonstrated, and it ended up becoming the international model.

It is remarkable how little attention Couzens paid to the car as a technology in itself when conceiving his traffic rules: the rules and limitations did not concern the technical aspect, but only its use in public space. For example, the measures restricting speed did not prohibit the development of engines with more horsepower; they limited the driver's use of the accelerator. Thanks to this, the laws and regulations established by Couzens did not have to be modified with each technological change, since they always allowed a recontextualization of the use of the technology. The fact that the established traffic regulations were technologically neutral is the reason why, a century later, they are still valid and, in their essence, have not lost relevance.

In the field of AI, laws and ethical principles that can be applied to program code are being studied. An example is the principle of "minimization of personal data," according to which only the minimum amount of personal data necessary to offer a service or execute a task may be processed. This is a technical principle of vital importance that affects the processing of information. On the one hand, it safeguards the privacy of the people involved. On the other hand, it can backfire.

A lack of data on certain social groups ends up generating a database that is skewed from the start: the profile and characteristics of one part of the population will be overrepresented and will distort the calculations, giving an erroneous impression of the whole. The assumption that less data automatically means less risk of discrimination is a myth. Depending on the context, more or fewer personal data will be needed to avoid falling into simplifications that lead us to discriminate against certain groups.
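A toy illustration of such skew, with entirely invented group names and numbers: comparing the share each group holds in a collected sample against its share in the population it is meant to represent makes overrepresentation visible before any model is trained on the data.

```python
# Invented figures: each group is half the population, but the data
# collection process captured four times as many records for group_a.
population = {"group_a": 0.50, "group_b": 0.50}
sample = {"group_a": 800, "group_b": 200}

total = sum(sample.values())
for group, pop_share in population.items():
    sample_share = sample[group] / total
    skew = sample_share - pop_share
    print(f"{group}: sample {sample_share:.0%} vs population {pop_share:.0%} "
          f"(skew {skew:+.0%})")
```

Here group_a makes up 80% of the sample despite being 50% of the population, so any statistic computed from this database will describe group_a far better than group_b.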

These examples show that we have to change our strategy, because until now the debate about artificial intelligence has focused on the technical side. History shows, however, that it is possible to develop laws and regulations for new technologies without regulating the mathematical code itself. Ethics and law conventionally focus on the social context: their principles apply not to the technical process, but to the social situation in which that technical process is embedded. It is not a matter of regulating the technology that artificial intelligence enables, but what people do with it.

3. Educating society to deal with new technologies does not require technical knowledge

AI allows us to detect patterns of human behavior and to identify differences in outcomes between different groups (women, ethnic groups, social classes, among many others). On that basis, the team of people using this technology may decide to discriminate, more or less legitimately, to offer different services or information, to manipulate attention, or to make different suggestions. Involuntary and implicit discrimination must also be taken into account, which is why constant assessment of the technology is essential. To this end, experts must have not only a general ethical sensitivity, but in particular a sensitivity to the involuntary discrimination that may result from biases in the design or in the databases with which the AI operates.
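One concrete form that constant assessment can take is comparing a model's error rate per group. The records below are invented (hypothetical group labels, predictions, and outcomes); the point is only that a single aggregate accuracy figure can hide very different performance for different groups.

```python
from collections import defaultdict

# Invented records: (group, model prediction, actual outcome).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
]

errors = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, predicted, actual in records:
    errors[group][0] += int(predicted != actual)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")
```

With these invented numbers the model errs on 1 of 4 cases for group_a but 3 of 4 for group_b: a disparity that an overall error rate of 50% would conceal entirely.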

Whether this technology is used to amplify or to compensate for discrimination depends on the group of human beings using it. It is not the citizen who must understand the technical process behind AI in order to use it. It is the engineers, the data scientists, and the marketing departments and governments that use or must regulate these technologies who have to understand the social and ethical dimension of artificial intelligence.



Created on Mar 24th 2018.
