
Creating a Truly Conscious AI—A Novel Approach

By Catarina Cunha | Published On: March 16, 2019 | Last Updated: November 14, 2022

At present, artificial intelligence (AI) is only capable of performing narrowly defined tasks such as low-level visual recognition, speech recognition, coordinated motor control and pattern detection. The development of truly conscious AI systems would deepen our understanding of how consciousness itself works, but for that to happen, machines must first become self-aware. A new hypothesis suggests that current AI developments fall short here because they lack a key component: introspection.

The relatively new field of AI is already crucial to many aspects of today’s society, including medical diagnostics, electronic trading, and automation in finance, healthcare, education and transportation. What we have yet to achieve, however, is everyday human-level common sense: AI that can carry out adaptable planning, task execution and natural communication. These are ‘conscious’ or ‘creative’ activities that are part of our daily lives, which humans normally perform without great mental effort. So why does AI fail here?

The Current State of AI

At present, computers can already learn without being explicitly programmed, a capability known as machine learning. Machine learning is simply an application of AI, using algorithms and statistical models to make predictions from data. The next step in AI research is to develop artificial general intelligence (AGI): upgrading this applied ‘intelligence’ to one with autonomous control, which in theory would allow AI to accomplish any task that humans are able to perform [1-6].
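To make ‘learning without explicit programming’ concrete, here is a minimal, illustrative sketch (not from the original article): the program is never told that the data follow y = 2x + 1; it infers the parameters from examples by gradient descent on a simple linear model.

```python
import numpy as np

# Toy dataset: inputs x and noisy outputs y that follow y = 2x + 1.
# The program is never told this rule; it must infer it from examples.
rng = np.random.default_rng(seed=0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

# Model: y_hat = w * x + b, with parameters learned by gradient descent
# on the mean squared error.
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # close to the true values 2 and 1
```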

One area of research working toward this ambition is evolutionary robotics (ER), a machine learning approach based on evolutionary computation. Its algorithms are inspired by biological evolution: candidate controllers are varied and selected over many generations, gradually developing the capabilities needed for truly autonomous robots. Artificial neural networks and other forms of reinforcement learning have met with some success in the context of ER [7-9].
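As a rough illustration of the evolutionary loop at the heart of ER, the sketch below evolves a population of controller parameter vectors against a placeholder fitness function. In real ER work, fitness would be measured by running the controller encoded by each ‘genome’ on a simulated or physical robot; the toy fitness function here is purely an assumption made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def fitness(genome: np.ndarray) -> float:
    """Placeholder fitness: in real ER this would run the controller
    encoded by `genome` on a simulated or physical robot and return,
    e.g., the distance it travelled. Here we use a toy stand-in."""
    target = np.linspace(-1, 1, genome.size)   # hypothetical 'ideal' controller
    return -np.sum((genome - target) ** 2)     # higher is better

POP_SIZE, GENOME_LEN, GENERATIONS = 50, 8, 100
population = rng.normal(size=(POP_SIZE, GENOME_LEN))

for gen in range(GENERATIONS):
    scores = np.array([fitness(g) for g in population])
    # Selection: keep the best half of the population.
    parents = population[np.argsort(scores)[-POP_SIZE // 2:]]
    # Variation: each child is a mutated copy of a random parent.
    children = parents[rng.integers(len(parents), size=POP_SIZE - len(parents))]
    children = children + rng.normal(scale=0.1, size=children.shape)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(g) for g in population])]
print("best fitness:", fitness(best))
```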

(Image: an iRobot Roomba robot vacuum)
A truly remarkable specimen of AI engineering.

Today, developments such as cloud computing increase available computing power, enabling more sophisticated algorithms while lowering the cost of data storage. This ease of acquiring and processing large amounts of data is the ideal fertilizer for the growth of AGIs. Futurist Ray Kurzweil, well known for his technology predictions, expects computers to possess human-level intelligence by 2029 [10]. Numerous articles have discussed the question: if robots can learn independently and are truly creative, will they eventually become aware of themselves? But before that level of freedom can be reached, we first need to create an AI that is self-aware.



Self-Consciousness the Key?

From our standpoint, current AI and robot developments have one major problem which prevents them from becoming AGIs. That is, their entire framework is built simply to process input information. However, researchers are now focusing on perfecting AI ‘awareness’ in a 3D space. This is a key factor in producing a robot that interacts with its environment in a meaningful way, a small but crucial step in achieving creative and conscious AGIs.

Another aspect that eludes researchers is the concept of consciousness, which many hypothesize is a trait exclusive to humans. French philosopher René Descartes famously proposed ‘cogito, ergo sum’, which translates to ‘I think, therefore I am’. He further encapsulated this in the statement ‘we cannot doubt of our existence while we doubt’: to doubt is to think, and to think is to exist [14].

There are many definitions and concepts of consciousness, but we can interpret it as an organism existing in a wakeful state while experiencing inner awareness. In other words, it is aware of its own representation and can perform intentional self-monitoring and evaluation – this is known as introspection [15].

The definition of ‘self’ is, therefore, to be consciously aware of one’s own being. In order to learn about ourselves and our consciousness, we need to go far back in time to the evolution of multicellular organisms.

Evolution of Consciousness

Among the earliest complex multicellular organisms on this planet, the Ediacaran biota provide a link to the evolution of consciousness. Tubular, frond-shaped and mostly immobile, they flourished for tens of millions of years until the beginning of the Cambrian Period. Paleontologist Mark McMenamin calls the Earth of this period the ‘Garden of Ediacara’ [16].

From their fossil record, researchers have found no evidence of sensory input systems such as light receptors. As far as we know, the organisms of this time did not interact with one another, let alone engage with their environment. This suggests that information transmission, the precursor of the nervous system, evolved to control internal coordination rather than sensorimotor control [17].

We can speculate that a primitive form of self-awareness is what allowed unicellular organisms to evolve into complex multicellular ones like the Ediacaran biota. Likewise, this could be the starting point for the creation of AGIs and sentient systems.

We begin by enabling a system to perform intentional self-monitoring so that it can learn to organize and compartmentalize its different functions. In time, such systems could evolve into complete, functioning ‘organisms’. While programming AI to take on tasks of introspection and self-organization may sound improbable, it is not out of our reach. Indeed, a viable solution may be found by studying neural network simulations, a field whose origins predate modern computing [6].

How Neural Networks Can Become Conscious

In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts became the first researchers to mathematically model a set of artificial neurons – a neural network [19]. Fast forward to 1969: AI scientists Marvin Minsky and Seymour Papert published the book Perceptrons, in which they laid out the mathematical limitations of simple, single-layer neural networks [18].
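For concreteness, the McCulloch-Pitts unit can be written in a few lines: it sums weighted binary inputs and ‘fires’ (outputs 1) when the sum reaches a threshold. The sketch below is a modern illustration of that 1943 model, not code from the paper; the weights and thresholds for the logic gates are chosen by hand.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) if the weighted sum
    of its binary inputs reaches the threshold, otherwise returns 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logic gates as single units (weights and thresholds chosen by hand).
AND = lambda a, b: mcculloch_pitts([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts([a],    weights=[-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```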

Neural networks attempt to capture the essential features of neurons and their connections, and then simulate them on a computer. The trouble is that our knowledge of biological neurons is incomplete, and computing power is limited. The technology available to researchers in the past simply did not provide the resources for further progress. Much has changed since then, however, and the neural network field now enjoys renewed, widespread interest thanks to recent advances.

(Image: illustration of a virtual brain)
‘Neural network’ is just a fancy term for ‘virtual brain’.

Admittedly, current models are still gross idealizations of the neuronal networks in our brains [20]. However, we hypothesize that this need not be a problem if further developments follow our approach of ‘introspective design’. By empowering networks to recognize their own components completely, they would be able to self-organize and become progressively more efficient. Thus, in theory, we can skip the step of understanding neural networks completely and instead give them the capability and tools to improve themselves.

The neural nets would arrive at their own organization and strategies, which, in this view, would make them self-aware and also creative. Once they achieve this, we can introduce input information so that they learn to distinguish themselves from their environment. By introducing them to one another, we may even be able to create a form of culture. Such neural nets would be able to independently create and maintain improved versions of themselves – a major ambition of large technology companies such as Google.
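To give a flavour of what ‘introspective design’ might look like in practice, here is a toy, entirely hypothetical sketch: a small layer that can report on its own parameters (a crude stand-in for self-monitoring) and prune its weakest connections (a crude stand-in for self-organization). The class name, methods and pruning rule are all invented for illustration; nothing here approaches awareness, let alone consciousness, but it shows how a system can be given explicit access to its own structure.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

class IntrospectiveLayer:
    """Toy layer that can report on and reorganize its own weights.
    Purely illustrative of the 'introspective design' idea above."""

    def __init__(self, n_in: int, n_out: int):
        self.weights = rng.normal(scale=0.5, size=(n_in, n_out))

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Ordinary computation on external input.
        return np.tanh(x @ self.weights)

    def introspect(self) -> dict:
        # Self-monitoring: the layer summarizes its own parameters.
        w = np.abs(self.weights)
        active = w > 0
        return {
            "n_connections": int(active.sum()),
            "mean_strength": float(w[active].mean()) if active.any() else 0.0,
        }

    def self_organize(self, keep_fraction: float = 0.5) -> None:
        # Self-organization: prune the weakest connections, keeping
        # only the strongest `keep_fraction` of nonzero weights.
        w = np.abs(self.weights)
        cutoff = np.quantile(w[w > 0], 1.0 - keep_fraction)
        self.weights[w < cutoff] = 0.0

layer = IntrospectiveLayer(8, 4)
x = rng.normal(size=8)

print("report before:", layer.introspect())
print("output before:", layer.forward(x))

layer.self_organize(keep_fraction=0.5)

print("report after: ", layer.introspect())
print("output after: ", layer.forward(x))
```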



Future Concerns for AI

However, if we want to make consciously intelligent robots a reality, we must start thinking about the risks that could come with them. A good place to start is Isaac Asimov’s ‘Three Laws of Robotics’, a set of rules that first appeared in his 1942 short story Runaround and were later included in his 1950 collection I, Robot [21]:

  1. First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  3. Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

For now, AIs are under our full control and only perform the tasks we design them for. In the future, however, they may attain cognitive capabilities indistinguishable from those of a human. If that happens, we will need a consensus on the ethical rules and laws that AI must follow, because a technological singularity may change the definition of consciousness and even what it means to exist.

Conscious and creative AGIs could bring many benefits to our society, but we must also recognize and prepare for the changes, good and bad, that they may bring.

This article was written by Catarina Cunha via Write For Us.

References

  1. Dias, R. D., Gupta, A., & Yule, S. J. (2018). Using Machine Learning to Assess Physician Competence: A Systematic Review. Academic Medicine. doi:10.1097/ACM.0000000000002414
  2. Pellegrini, E., Ballerini, L., Hernandez, M., Chappell, F. M., Gonzalez-Castro, V., Anblagan, D., . . . Wardlaw, J. M. (2018). Machine learning of neuroimaging for assisted diagnosis of cognitive impairment and dementia: A systematic review. Alzheimer’s and Dementia (Amsterdam, Netherlands), 10, 519-535. doi:10.1016/j.dadm.2018.07.004
  3. Liu, D., Cheng, D., Houle, T. T., Chen, L., Zhang, W., & Deng, H. (2018). Machine learning methods for automatic pain assessment using facial expression information: Protocol for a systematic review and meta-analysis. Medicine (Baltimore), 97(49), e13421. doi:10.1097/MD.0000000000013421
  4. Liakos, K. G., Busato, P., Moshou, D., Pearson, S., & Bochtis, D. (2018). Machine Learning in Agriculture: A Review. Sensors (Basel, Switzerland), 18(8). doi:10.3390/s18082674
  5. Cust, E. E., Sweeting, A. J., Ball, K., & Robertson, S. (2019). Machine and deep learning for sport-specific movement recognition: a systematic review of model development and performance. Journal of Sports Sciences, 37(5), 568-600. doi:10.1080/02640414.2018.1521769
  6. Kohonen, T. (1988). An introduction to neural computing. Neural Networks, 1(1), 3-16. doi:10.1016/0893-6080(88)90020-2
  7. Gigliotta, O., Bartolomeo, P., & Miglino, O. (2015). Neuromodelling based on evolutionary robotics: on the importance of motor control for spatial attention. Cognitive Processing, 16 Suppl 1, 237-240. doi:10.1007/s10339-015-0714-9
  8. Harvey, I., Di Paolo, E., Wood, R., Quinn, M., & Tuci, E. (2005). Evolutionary robotics: a new scientific tool for studying cognition. Artificial Life, 11(1-2), 79-98. doi:10.1162/1064546053278991
  9. Scheper, K. Y. W., & de Croon, G. (2017). Abstraction, Sensory-Motor Coordination, and the Reality Gap in Evolutionary Robotics. Artificial Life, 23(2), 124-141. doi:10.1162/ARTL_a_00227
  10. Kurzweil, R. (2006). Reprogramming biology. Scientific American, 295(1), 38.
  11. Kurzweil, R. (2011). Interview with Ray Kurzweil. Interview by Vicki Glaser. Rejuvenation Research, 14(5), 567-572. doi:10.1089/rej.2011.1278
  12. Kurzweil, R. (2005). Human 2.0. New Scientist, 187(2518), 32-37.
  13. Kurzweil, R., & Grossman, T. (2009). Fantastic voyage: live long enough to live forever. The science behind radical life extension questions and answers. Studies in Health Technology and Informatics, 149, 187-194.
  14. Descartes, R., & Veitch, J. (1853). Discourse on the method of rightly conducting the reason, and seeking truth in the sciences (2nd ed.). Edinburgh: Sutherland and Knox.
  15. Jack, A. I., & Shallice, T. (2001). Introspective physicalism as an approach to the science of consciousness. Cognition, 79(1-2), 161-196.
  16. McMenamin, M. A. S. (1986). The Garden of Ediacara. Palaios, 1(2), 178-182.
  17. Dennett, D. (2019). Review of Other Minds: The Octopus, the Sea and the Deep Origins of Consciousness (Peter Godfrey-Smith, Farrar, Straus and Giroux, NY, 2016). Biology & Philosophy, 34(1).
  18. Minsky, M., Papert, S. A., & Bottou, L. (2017). Perceptrons: An Introduction to Computational Geometry. Cambridge, Massachusetts: MIT Press.
  19. Abraham, T. H. (2002). (Physio)logical circuits: the intellectual origins of the McCulloch-Pitts neural networks. Journal of the History of the Behavioral Sciences, 38(1), 3-25.
  20. Davalo, E., & Naim, P. (1991). Neural networks. London: The MacMillan Press.
  21. Asimov, I. (1963). I, Robot. New York: Doubleday.

