Volume 5, Issue 1 
October 2010


A Possible Perspective: The Conscience is a Matter of Time in Emergent Systems

Álvaro Hernando Ramírez Llinás

This article was submitted for publication to the Journal of Personal Cyberconsciousness by Álvaro Hernando Ramírez Llinás, Director of Artificial Intelligence & Nanotechnology at the Universidad Autónoma del Caribe in Colombia, South America. Professor Ramírez Llinás is a faculty member within the university's Mechatronics Department.

Prof. Ramírez Llinás argues that, just as a certain degree of conscience appears in biological systems once evolution crosses a threshold, the same may happen in non-biological emergent systems once they pass an analogous threshold, promoting ever higher levels of conscience. If we observe present-day machines, we already see characteristics of individuality and autonomy similar to those of human beings.

Introduction

Can a computer or a non-biological intelligence be conscious? We have to agree on what the question means.

Computing processes, like the human brain, are capable of chaos, unpredictability, messiness, tentativeness, and emergence. However, a computer and a computer program as we know them today could not successfully reproduce the consciousness of the human brain. So, if we understand "computer" to mean today's computers, then it cannot fulfil the premise (Kurzweil [1], 2005).

It seems, though, that the moment at which they can is near. Why?

First let's define conscience:

Conscience [2]

1. The property of the human spirit of recognizing itself in its essential attributes and in all the modifications that it experiences.

2. Inner knowledge of good and evil.

3. Reflective knowledge of things.

4. Mental activity accessible only to the subject itself.

5. Psychol.: The psychic act by which a subject perceives itself in the world.

In the first meaning we find an insurmountable obstacle: the "human spirit". The concept, in itself, is limited to the "human conscience". Logically, and by definition, only a human can have it.

Then there are bottom-up approaches, which are inspired by evolution and developmental psychology. These are not so much about having some explicit notion of what is right, good, or just. They are more like the developmental process through which we guide a child as it learns which forms of behavior are appropriate. The more evolutionarily inspired bottom-up approaches start from the notion that a moral grammar may be a legacy of the evolutionary process. Could we reproduce the evolution of a moral grammar in artificial entities?


Lifting the concept of conscience out of the strictly human domain (allowing something non-human to have a conscience) and reading across all the meanings, we can say that a conscience should have these qualities:

– Capacity to recognize its own existence and everything that surrounds it;

– Differentiation between good and evil;

– Reflexive processing about things; and

– Private mental activity.

If we were able to implement each one of these characteristics in a machine, by definition, we would have created a conscience. Ergo:

– I recognize, therefore I am.

Most current programming languages permit us to perform a diagnostic as to whether or not a program is functioning in a concrete environment. The utility of such a diagnostic is, in that case, null: if the program is not functioning, it cannot run the diagnostic, much as a deceased human cannot be asked whether s/he is alive.

The same program not only knows whether it is functioning, but can also consult the amount of memory it is using, the space it occupies on the hard disk, its capacities, its functionalities, and so on. In that way, not so different from a person, it recognizes its existence and each of its essential attributes. That program could be running on a system with sensors, cameras, microphones, antennae... allowing it to receive information from the outside world, to process it, and to store and/or produce results, leaving a traceable record of each step taken and each result obtained, so that in later executions it can consult that memory and decide, with an adequate algorithm, how to proceed to improve the result. This is learning.
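To make this concrete, here is a minimal sketch in Python (my own illustration, not a system described in the article) of a program that inspects its own state and keeps a traceable record of its executions for later runs to consult. The file name execution_log.json and the "score" field are hypothetical placeholders, and the memory probe assumes a Unix system.

import json, os, time
import resource  # Unix-only; lets the program ask how much memory it is using

LOG_PATH = "execution_log.json"  # hypothetical file acting as the program's memory

def self_diagnostic() -> dict:
    """Report on the program's own existence and essential attributes."""
    return {
        "running": True,  # if this line executes, the program is functioning
        "memory": resource.getrusage(resource.RUSAGE_SELF).ru_maxrss,  # units are platform-dependent
        "disk_bytes": os.path.getsize(__file__),  # space the program occupies on disk
        "timestamp": time.time(),
    }

def remember(report: dict) -> list:
    """Append this execution's report to the traceable record and return the full history."""
    history = []
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as f:
            history = json.load(f)
    history.append(report)
    with open(LOG_PATH, "w") as f:
        json.dump(history, f, indent=2)
    return history

if __name__ == "__main__":
    report = self_diagnostic()
    report["score"] = 0.0  # placeholder for whatever result the program is trying to improve
    history = remember(report)
    # "Learning": consult the stored past and compare it with the present run.
    print(f"runs so far: {len(history)}, best score: {max(run['score'] for run in history)}")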

– Good and evil.

"How do we differentiate between good and evil?"
How do we differentiate between good and evil? Killing is evil, loving is good... it's automatic. We have two categories, one good and the other evil; everything in the universe lies within one of the two - a database, if you will. Why not build an extensive associated database in our program where each possible decision is weighted within each category, coupled with a programmed morality that we have built into the artificial conscience? (Conversely, we would also be able to create an evil program...).

Would you kill a loved one to save another? Is that good or bad? It's not as simple as either killing or loving... What do we do with choices that are not in the database? The same thing we do to make any decision: carefully weigh each option and decide which category to place the action into before acting. The decision-processing time might be expanded to allow an evaluation of all related elements within both categories and to obtain a moral ranking offering a degree of right and/or wrong for each possible action.
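As a sketch of how such a weighted database and moral ranking might look (my illustration under assumed weights, not a design given in the article; the actions and numbers are hypothetical):

from dataclasses import dataclass

@dataclass
class MoralEntry:
    action: str
    weight: float  # +1.0 = clearly good, -1.0 = clearly evil

# The two categories, stored as one signed weight per known action.
MORAL_DB = {
    "love": MoralEntry("love", +1.0),
    "help": MoralEntry("help", +0.8),
    "lie":  MoralEntry("lie",  -0.5),
    "kill": MoralEntry("kill", -1.0),
}

def moral_rank(action: str, related: list) -> float:
    """Degree of right (+) or wrong (-) for an action.
    Known actions are looked up directly; unknown ones are scored by weighing
    every related element found in the two categories, at the cost of extra
    decision-processing time."""
    if action in MORAL_DB:
        return MORAL_DB[action].weight
    known = [MORAL_DB[r].weight for r in related if r in MORAL_DB]
    return sum(known) / len(known) if known else 0.0

# "Kill a loved one to save another" mixes both categories, so the ranking
# lands near zero instead of at either extreme.
print(moral_rank("kill_to_save", related=["kill", "love", "help"]))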

– Control.

The private character of our artificial conscience would be no problem either. We normally have the good sense not to develop a program that we cannot manage and examine in its totality, because the objective of computer-science development is to perform a concrete action within a controlled environment.

"... it would be easy to develop a conscience and to eliminate the possibility of outside access to it ..."
Nevertheless, it would be easy to develop a conscience and to eliminate the possibility of outside access to it; as simple as making it "be born" (executing the initial program) and then withdrawing the mouse and the keyboard (and other access media)... Only the program "sees and decides".

– Differentiation Issue.

We have already been able to see (no need to travel to the future) that we can create a conscience that fulfils the present meanings of its definition. A conscience is something that recognizes itself, possesses memory and morality, and has the capacity to learn and to process its surroundings and experiences.

Nevertheless, we regard our own conscience as highly complex. But is that a question of quality or of quantity?

Let's think about all those things that we have and that this model of conscience lacks: fear, joy, tenacity, prejudices, and obsessions... These feelings are, without a doubt, contributors to every decision in life.

Where are we going to go on vacation? Why there? Let's think about the complicated manner in which we make decisions, from the colour of our car to which dessert we desire... for each decision an enormous amount of reasoning and evaluation is applied, which our brain processes in a brief moment, turning the decision into something simple, almost immediate, and of uncertain origin.

"... today our artificial conscience does not yet look much like the human one."
A computer could implement each one of those values, algorithms, or procedures that enter into the evaluation of a decision. But the complexity of each decision and the enormous number of variables that determine it, together with the private and opaque character of the natural process, make its artificial representation very complicated; that is why, today, our artificial conscience does not yet look much like the human one.
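A toy sketch of such a weighted evaluation follows (the options, factors, and weights are invented for illustration; nothing here claims to model a real mind):

# Each option is described by many variables, including affective ones.
VACATION_OPTIONS = {
    "beach":    {"cost": -0.4, "novelty": 0.3, "joy": 0.9, "fear": -0.10},
    "mountain": {"cost": -0.2, "novelty": 0.6, "joy": 0.5, "fear": -0.30},
    "city":     {"cost": -0.6, "novelty": 0.4, "joy": 0.4, "fear": -0.05},
}

# A hypothetical "personality": how much each factor counts. Changing these
# weights changes the decision, much as prejudices or obsessions tilt a human one.
PERSONALITY = {"cost": 1.0, "novelty": 0.5, "joy": 1.5, "fear": 2.0}

def evaluate(option: dict) -> float:
    """Collapse many variables into a single score, as the brain does in a brief moment."""
    return sum(PERSONALITY[factor] * value for factor, value in option.items())

best = max(VACATION_OPTIONS, key=lambda name: evaluate(VACATION_OPTIONS[name]))
print("Chosen vacation:", best)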

The elaboration of this conscience is very similar to the creation of the meteorological models that forecast the weather. The processes and variables that determine whether and when it will rain in a region are enormous in number and complexity, but nobody doubts that the computer-simulated models will incrementally approach reality.

– Cluster conscience.

Today no one is surprised by the fact that machines communicate with each other. Machines request and pass information among themselves and process it as an element of their surroundings.

In this way, the "society" of other people's machines act more like an external stimulus on the "individual-machine" in the same way that the society, in greater or smaller measurement, contributes to the mental development of the individual.

A clear example of this could be individual robots playing soccer. They communicate with each other, evaluate possibilities, and act as if a group or social conscience existed. Taking this idea to a much higher level of complexity, virtual societies of artificial beings could be created, with complex relations among them: castes, roles, relationships...
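A minimal sketch of this pooling of perceptions (hypothetical agents, not a real robot-soccer framework):

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    position: float                       # 1-D pitch position, for simplicity
    inbox: list = field(default_factory=list)

    def broadcast(self, team: list["Agent"], ball: float) -> None:
        """Tell every teammate where this agent thinks the ball is."""
        for mate in team:
            if mate is not self:
                mate.inbox.append((self.name, ball))

    def decide(self) -> str:
        """Act on the pooled reports: the closest agent goes for the ball."""
        if not self.inbox:
            return f"{self.name}: hold position"
        ball = sum(b for _, b in self.inbox) / len(self.inbox)  # consensus estimate
        return f"{self.name}: {'chase ball' if abs(self.position - ball) < 5 else 'cover space'}"

team = [Agent("r1", 0.0), Agent("r2", 10.0), Agent("r3", 20.0)]
team[0].broadcast(team, ball=12.0)          # r1 perceives the ball and shares it
print([agent.decide() for agent in team])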

Human and Machine Conscience

Obviously, the two do not have to be identical at any point.


The Conscience in Human Beings

Since neurons in a human being have different "personalities" and are relatively specialized, the activity of any single cell can represent no more than a minute fragment of reality. The photoreceptors [3] specialize in capturing photons [4] and transducing this electromagnetic [5] energy into electrical activity. Within the skin we also have the mechanoreceptors [6], cells specialized in transducing mechanical energy into patterns of neuronal activity.

"What advantages are offered to the organism to experience sensations instead of responding automatically?"
The nervous system has the capacity to execute a certain process (the appropriate set of steps to produce digestion, for example); one major difference is that the person knows something about it, is conscious of it happening and aware of the steps that follow. The problem of this subjectivity is a burning matter in the fields of philosophy and the cognitive sciences. Is subjectivity necessary? Why is it not sufficient to see and react as a robot does? What advantage does experiencing sensations, rather than responding automatically, offer the organism? It is also important to consider whether animals have subjectivity or merely react as if they had it. Conscience, as the substrate of subjectivity, does not exist outside the functioning of the nervous system or of its non-biological equivalent, the machine. (Llinás [7], 2001).

Temporal Coherence is Conscience


What we most prominently look for is temporal coherence. If the neurons that work together connect in time, then that temporal coherence is conscience. When the temporal maps of connectivity and the spatial maps are superposed, a much greater set of possible representations is generated. This is the concept of the perceptual unit, which is, in sum, the conjunction of space and time. Through the synchronous unification of the respective activities of the nervous system's cells, diverse temporally interrelated patterns can be formed, unifying reality by combining the individual, fragmentary aspects that each neuron carries.

I am, therefore I exist

If we consider that coherent waves at 40 Hz are related to conscience, we can conclude that conscience is a discontinuous event, determined by simultaneous activity in the thalamocortical system (Llinás and Paré, 1991 [8]). The global temporal mapping generates cognition.

The thalamocortical system is an almost closed isochronic sphere that synchronously relates the properties of the external world, reported by the senses, to internally generated motivations and memories. This coherent event in time, which unifies the fragmented components of external and internal reality into a single structure, is what we call the "self".
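To illustrate the idea of binding by temporal coherence (my own toy example, assuming NumPy is available; it is not Llinás's model), one can test whether the 40 Hz components of two simulated signals stay in phase:

import numpy as np

FS = 1000                      # sampling rate, Hz
t = np.arange(0, 1, 1 / FS)    # one second of simulated activity

def neuron(freq: float, phase: float) -> np.ndarray:
    """A toy neuron: a noisy oscillation at the given frequency and phase."""
    return np.sin(2 * np.pi * freq * t + phase) + 0.3 * np.random.randn(t.size)

def coherence_at_40hz(a: np.ndarray, b: np.ndarray) -> float:
    """Phase agreement of the 40 Hz Fourier components of two signals (1 = locked)."""
    k = int(40 * t.size / FS)                  # index of the 40 Hz frequency bin
    phase_a = np.angle(np.fft.rfft(a)[k])
    phase_b = np.angle(np.fft.rfft(b)[k])
    return float(np.cos(phase_a - phase_b))    # near 1 when in phase, near -1 when opposed

bound   = coherence_at_40hz(neuron(40, 0.0), neuron(40, 0.0))   # same phase: bound into one percept
unbound = coherence_at_40hz(neuron(40, 0.0), neuron(40, np.pi)) # opposite phase: not bound
print(f"in-phase pair: {bound:.2f}, out-of-phase pair: {unbound:.2f}")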

"The Concept of I"

"... "I" is not something tangible, it is only a particular mental state ..."
"I" or "Me" has always been incognito; I believe, I say, I... whatever. However, "I" is not something tangible, it is only a particular mental state, a generated abstract organization. If I injure my brachial plexus (the nervous network in charge of the sensorial and motor system of the arm), I will watch my insensible arm and say, "I am not this" because I cannot feel it. It is that "I am, or it is part of me" that depends on whether I feel it.

During a dream we do not perceive the external world because the intrinsic activity of the nervous system does not contextualize the sensory input within the functional state of the brain. If conscience is the product of thalamocortical activity, as it seems to be, then the dialogue between the thalamus and the cortex generates subjectivity in humans and in the higher vertebrates. (Llinás, 2001).

Is the Mind a solely biological property?

Is the conscious mind a property that can occur only in the domain of the biological, of beings of flesh and bone?

Let us consider flight for a moment. In the eighteenth or nineteenth century it might have been concluded that flight was an exclusively biological property, since living, heavier-than-air creatures were the only objects that flew. By the beginning of the twentieth century we knew that flight was not an exclusive property of biological organisms. We should ask ourselves the same question about the mind: is it an exclusively biological property?

IT DOES NOT SEEM THAT TODAY'S INTELLIGENT COMPUTERS AND MACHINES ARE READY TO HAVE A CONSCIOUS MIND, BUT THIS MAY BE DUE MORE TO LIMITATIONS OF ARCHITECTURAL DESIGN THAN TO ANY THEORETICAL LIMITATION ON CREATING ARTIFICIAL MINDS.

In the case of flight, specialized cutaneous material and plumage have proved able to overcome gravity, as have plastics, dry wood, and diverse metals. IT IS NOT THE MATERIALS BUT THE DESIGN THAT DETERMINES VIABILITY.

"... is the mind a solely biological property or is it in fact, a physical property ..."

Therefore, is the mind a solely biological property, or is it in fact a physical property that, in theory, could be supported by a non-biological architecture? In other words, there is reason to doubt that biology is separate from physics. The scientific knowledge accumulated over the last 100 years suggests that biology, for all its surprising complexity, does not differ from other systems subject to the laws of physics. Therefore, it should be possible to generate conscience in any suitable physical organization; that is what happened in our case, in what we call "a biological system".

Conclusion

In general, people wonder whether it will be possible to make a machine whose nature is not biological but which is able to sustain conscience and cognizance of situations, properties considered very important functions of the nervous system. Will computers be able to think someday? In certain animals such as the squid, with a nervous system organized totally differently from the one we thought necessary to support intelligence, intelligence nevertheless appears to be evident; so we can deduce that architectures different from ours can possess intelligence and conscience.

Is there some essential difference between the computer's material realization and the nervous system's? Could algorithmic computation become sufficiently extensive, fast, and concise to reproduce the totality of the properties generated by an organization like our brain, on 14 watts and 1.5 kilograms of mass?

With appropriate functional architectures, it should be possible to generate consciousness in many non-biological organizations, although it is probable that none will ever be conscious in exactly the human sense.

Footnotes

1. Kurzweil, Ray. The Singularity is Near. London: Viking Penguin, 2005.
http://www.singularity.com/  April 15, 2010 3:16PM EST.

2. Conscience - From the Latin conscientĭa, a calque of the Greek συνείδησις. b: a faculty, power, or principle enjoining good acts; c: the part of the superego in psychoanalysis that transmits commands and admonitions to the ego.
Merriam-Webster's Collegiate Dictionary, Eleventh Edition. Springfield, Massachusetts: Merriam-Webster Incorporated, 2005: 264.

3. Photoreceptors - n. A nerve ending, cell, or group of cells specialized to sense or receive light.
The American Heritage Stedman's Medical Dictionary, Second Edition. Boston, New York: Houghton Mifflin Company, 2004: 630.

4. Photon - n. The quantum of electromagnetic energy, generally regarded as a discrete particle having zero mass, no electric charge, and an indefinitely long lifetime.
The American Heritage Stedman's Medical Dictionary, Second Edition. Boston, New York: Houghton Mifflin Company, 2004: 630.

5. Electromagnetic - 2 a: A fundamental physical force that is responsible for interaction between charged particles which occur because of their charge and for the emission and absorption of photons, that is about 100 times weaker than the strong force, and that extends over infinite distances but is dominant over atomic and molecular distances.
Merriam-Webster's Collegiate Dictionary, Eleventh Edition. Springfield, Massachusetts: Merriam-Webster Incorporated, 2005: 401.

6. Mechanoreceptors - n. A specialized sensory end organ that responds to mechanical stimuli such as tension or pressure.
The American Heritage Stedman's Medical Dictionary, Second Edition. Boston, New York: Houghton Mifflin Company, 2004: 491.

7. Llinás, Rodolfo. I of the Vortex: From Neurons to Self. Cambridge, MA: MIT Press, 2001. For a video, go to:
http://thesciencenetwork.org/programs/the-science-studio/enter-the-i-of-the-vortex  April 15, 2010 3:23PM EST.

8. Llinás, Rodolfo, and Paré, D., 1991: 521-535.
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6T0F-47XNCM2-R  April 15, 2010 3:21PM EST.


Bio


Works and conducts research in molecular biology, artificial intelligence, nanotechnology, bioengineering, and cryogenics.

PhD candidate in bioethics at Universidad El Bosque (Colombian Medical School).

Director of the Cancer Nanotechnology Project at Universidad El Bosque.

Director of Artificial Intelligence Academic Unit of the Neurosciences Institute.

Has taught multiple courses and seminars at national and international levels.
Listed in the Silver Edition of the book "Who's Who in the World".
Chosen by the Oxford Dictionary as one of the "2000 intellectuals of the 21st century".
Nominated for "International Engineer of the Year 2008" by the IBC in Cambridge, England.

Memberships:

President for Colombia of the World Transhumanist Association (WTA).
Member of the World Future Society.
Member of the Colombian Association of Astronomical Studies.
Member of the Colombian Association for the Advancement of Science.
Researcher recognized by Colciencias.
Member of the "Otto de Greiff" Committee for the best undergraduate thesis in Colombia.
Member of the National Council of Nanoscience and Nanotechnology.
Honorary Member of the Colombian Association of Bioengineering.
Member of the Singularity Institute for Artificial Intelligence.
Member of the American Association for Artificial Intelligence (AAAI).
Member of the IEEE Engineering in Medicine and Biology Society, Bioengineering and Biomedical Engineering chapter.

Álvaro Hernando Ramírez Llinás      aramirez@uac.edu.co

Artificial Intelligence & Nanotech Director
Mechatronics Faculty Staff
Universidad Autónoma del Caribe
Calle 90 No. 46-112
Barranquilla, Colombia
