Gerald's Journal

Artificial Intelligence: The trend, myths, approaches, philosophy and science.

Nov 24, 2019

Disclaimer

Note: This article is basically an archive; it shall not be modified, in order to preserve its authenticity. Depending on the reader, its content may come off as sparse, evasive of detailed theoretical proof, philosophical, or driven by science fiction. The sole purpose of this article is to capture the minds of neutral persons, and to entertain or educate. It is merely an introspective view into the accumulated beliefs and understanding of quite an explorative teenager (fourteen at the time) trying to grasp the field of artificial intelligence. Hence, forgive any sort of idiocy, ignorance and profound naivety expressed in some portions of this article. Adequate citations and footnotes will be appended to the text to serve as references and to validate some propositions. Thank you.

Introduction

This article was written on the 24th of November 2019, prior to my first-term examinations in the 11th grade, during my time in high school. I have always been fascinated with AI. This article encompasses the field of Artificial Intelligence and the several approaches applied within it. It is quite an elaborate but decent summary of the topic, targeted towards enthusiasts like myself, both beginners and intermediate learners, and anyone who would love to get a grasp of this trending technology and what it actually entails.

Brief Overview

The term Artificial Intelligence, abbreviated AI in many cases, has become synonymous with the technological ecosystem, advanced research in computer science, and various interdisciplinary fields such as theoretical computer science, robotic engineering and data science. A technology once perceived as an object, or rather a concept, of contemporary science fiction movies and material is gradually becoming supported and endorsed as the essential pivot for spearheading the next revolution in human civilization after the era of mechanical industrialization.

Alan Turing: AI’s Proponent

Alan Turing (1912-1954), an acclaimed British computer scientist and logician, proposed the “Turing Test”, which became the basis on which Artificial Intelligence as a field of study was built. He proposed a thesis which outlined the human biological brain as a biochemical computing machine. Turing is regarded as one of the most influential figures in the field of AI and in theoretical biology, of which he is a founding father.

The Turing Test

The Turing Test, a thesis which serves as the principal building block of Artificial Intelligence and is further described as the imitation game, is defined as the ability of a machine to exhibit intelligence or behavior indistinguishable from a human's. If a machine and a human were evaluated based on responses that would be hardly distinguishable, such a machine is said to have “passed” the Turing Test. Turing, in his 1950 publication titled “Computing Machinery and Intelligence”, posed an unambiguous question on whether machines would be able to think. He further went ahead to debate and argue the context of machines being able to perform cognitive functions, outlining certain possibilities. Despite being a good thesis by which to judge or define a machine's “intelligence”, it is, however, not a proficient method to analyze or evaluate a machine's level of intelligence, as it is generally theorized on exhibiting human-like characteristics and behavior. While we could create a solid relationship between human-level intelligence and a machine displaying human-level intelligence, we cannot ascertain to what degree. The Turing Test only tests whether the machine behaves like a human being.

The Turing Test: Limits, context, weaknesses

Furthermore, the Turing Test ignores consciousness, which is a key factor in ascertaining whether a machine can “think”. The test is clearly concerned with the external characteristics demonstrated, ignoring other aspects such as self-consciousness; it employs more of a behavioral approach to the mind. Despite the clear loopholes in the thesis, it has been pivotal to the development of the popular CAPTCHA systems used on websites to determine whether the user accessing a session is human, by generating numbers, distorted text, distorted images, or images containing certain objects recognized by the system. If the user is able to select, from an array of images for example, those which contain the stated object, they are allowed to proceed.
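As a rough sketch of the idea (and not how production CAPTCHA systems are actually built), distorted text of this kind can be generated in a few lines of Python using the Pillow imaging library; the distortion parameters below are arbitrary choices, purely for illustration:

```python
# A minimal, hypothetical distorted-text generator in the spirit of a
# text CAPTCHA. Uses the Pillow library; all parameters are arbitrary.
import random
import string

from PIL import Image, ImageDraw, ImageFilter

def make_captcha(length=5):
    text = "".join(random.choices(string.ascii_uppercase, k=length))
    img = Image.new("RGB", (40 * length, 60), "white")
    draw = ImageDraw.Draw(img)
    # draw each character at a slightly different height
    for i, ch in enumerate(text):
        draw.text((10 + 40 * i, random.randint(5, 25)), ch, fill="black")
    # rotate and blur the whole image to frustrate simple OCR bots
    img = img.rotate(random.uniform(-8, 8), fillcolor="white")
    img = img.filter(ImageFilter.GaussianBlur(0.8))
    return text, img

answer, image = make_captcha()
image.save("captcha.png")  # the user must type back `answer` to pass
```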

CAPTCHA is a demonstration of the Turing Test applied in a reverse manner, to prevent automated systems, malicious bot scripts and spam programs from accessing secure platforms. The Turing Test should not be mistaken for the “Turing complete” concept, which is a classification of systems of data-manipulation rules.

Consciousness: As a pillar of True AI

In order for a machine to think, or to perform tasks with human-level intelligence without being supervised, we could affirm that consciousness is a key element required. Consciousness is referred to as awareness: a state whereby an individual is aware of, and able to perceive, his environment in order to take actions. Consciousness underpins our sense of mortality, life and beliefs. The implementation of full consciousness in artificial intelligence has, however, been observed as a paradox in many case studies. Could human-level consciousness eventually conform to artificial intelligence as a field? If such were attained, it would simply mean the acceptance of mortality, orthodox beliefs, the instinct for survival, and several other components that could be attributed to our consciousness as humans, even though these machines or devices do not require organic resources to function. It is indeed a belief which could be likened to a myth or fiction.

Artificial Intelligence: The current state

The Artificial Intelligence being implemented today could be described as “primitive” in nature, in contrast to the ideologies and beliefs we have gained from various resources and fictional content. Isaac Asimov's novel I, Robot has had a massive impact on the ethics of artificial intelligence. Despite being fictional, it set up a standard which, if attained, could be referred to as human-level or advanced intelligence exhibited by machines. “Primitive” describes the level of Artificial Intelligence today, despite the available tools, applications, approaches, resources and investment in the infrastructure used to scale and implement trained models.

Computers were historically designed to execute certain actions, such as mathematical evaluations and tasks, within time. The development of computers, or computing machines, can be traced back to the early B.C. era, when counting devices were invented. Computing machines were designed not just to execute tasks, but to store information and to sort and process data effectively, far beyond the barriers of the human mind and its cognitive capabilities. In layman's terms, computers presently are not, as a matter of fact, “intelligent”, and as of the time this article was published, they cannot pass the Turing Test on an all-round basis.

Despite Artificial Intelligence being relatively primitive, it has received considerable attention in recent years, as private research organizations and individuals in academia devise appropriate methods and means to escalate and improve the approaches towards creating intelligent machines. We could draw a relationship between the human mind and computer software, and between the human body and computer hardware, respectively. Computer software has seen massive improvements, as many outdated algorithms and means of sorting data are being effectively ruled out. Machine Learning has become a major approach to attaining primitive-level artificial intelligence and advancing software systems and applications. By employing computational and statistical models, machine learning has grown to become one of the most significant approaches and sub-disciplinary fields of AI and Data Science.

Machine Learning

Machine Learning is the application of algorithms to enable a computer to learn from provided datasets. Although I have written another article which gives a more elaborate explanation of machine learning, to illustrate it simply we can use the image below:

Figure 1.0. An illustration of machine learning.

Machine Learning as an approach could be related to how living organisms are able to learn from their external environment and function based on recently acquired facts or knowledge. As described in an earlier section of this article on the state and purpose of computing devices, modern-day computers are designed to only execute tasks based on the data or instructions passed to them. Computers, in a metaphorical sense when compared to humans, cannot take actions on their own based on their perception. To make this a possibility, machine learning is used to train a model to learn from provided data. It is a statistical approach that could be likened to feeding a child with information from various sources, to enable the child to gain knowledge and so decide and take independent actions in certain circumstances. There are three different types, or learning classifications, under Machine Learning: Supervised Learning, Unsupervised Learning and Reinforcement Learning. I will, however, supply a brief overview of the above classifications, except for reinforcement learning.

Supervised Learning

Supervised learning utilizes labeled training datasets to learn the mapping function that converts input variables, represented by \(X\), to an output variable, represented by \(Y\). It solves for \(f\) in the equation \(Y = f(X)\), where \(f\) represents the mapping function. Regression and classification are demonstrations of the concept of supervised learning, which will be covered in a separate paper: Regression in Supervised Learning. These models are utilized to predict outputs from a set of input variables. Regression is applied to real-valued outputs, while classification is applied to categorical data. A regression model could, for instance, predict the length of an object or the speed of a computer, while a classification model could be used to predict certain labels, such as intelligent, genius, or stupid.
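As a brief, hedged illustration (the numbers are invented, and the scikit-learn library is assumed here), both regression and classification can be demonstrated in a few lines: each model is fit on labeled examples of \(X\) and \(Y\), then asked to predict \(Y\) for an unseen \(X\):

```python
# A minimal sketch of supervised learning with scikit-learn (assumed
# available). The data here is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Regression: learn f in Y = f(X) for a real-valued output.
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # input variable X
y = np.array([2.1, 3.9, 6.2, 8.1])           # output variable Y
regressor = LinearRegression().fit(X, y)     # learn the mapping f
print(regressor.predict([[5.0]]))            # predict an unseen value

# Classification: the same idea, but the output is a categorical label.
labels = np.array(["slow", "slow", "fast", "fast"])
classifier = DecisionTreeClassifier().fit(X, labels)
print(classifier.predict([[3.5]]))           # -> e.g. "fast"
```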

Unsupervised Learning

These models are implemented when there are input variables available within a provided dataset, but no corresponding or related output variables. Clustering, a popular concept of unsupervised learning, groups sample objects such that objects within a collection (cluster) are much more similar to each other than to objects from a different cluster or collection of data. Association, another method classified under unsupervised learning, can be utilized to predict the probability of a co-occurrence within a particular dataset. An example of association is a model that could perform a task such as: “If Gerald purchases computers within the Core i7 processor range, he would fancy a fast phone as well.”
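A minimal sketch of clustering, again assuming scikit-learn and with invented points: no output labels are supplied, and the algorithm groups the data on its own:

```python
# A minimal sketch of clustering (unsupervised learning) using
# scikit-learn's KMeans; the points are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# No output labels are provided: the algorithm groups the points itself.
X = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2],
              [8.0, 8.0], [8.5, 9.0], [9.0, 8.5]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the two discovered cluster centres
```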

Artificial Neural Networks

These are generally among the methods applied in machine learning today, with simple references and real-life implementations. Another acknowledged approach towards artificial intelligence, which I find really fascinating, is Deep Learning, which is conceptualized on the application of Artificial Neural Networks, also known as ANNs. A computer model of interlinked nodes, or layers, inspired by the biological human brain is preferably the best description in layman's terms that I could give. Biological neural networks are known to constitute the human brain, and artificial neural networks in turn have been designed to implement parameters similar to those of the biological brain, in order for machines to exhibit human-level intelligence, learn, and work with a given set of data. Artificial Neural Networks began as an approach to exploit the underlying architecture of the human brain, to provide high-level solutions to problems that conventional or standard computer algorithms were far too limited to tackle or solve. Artificial Neural Networks have been utilized heavily in facial recognition, text recognition, object recognition and computer vision as a whole. ANNs could be applied to observe and identify microscopic bodies and cancerous cells under a microscope, unsupervised, at the level of a trained professional, and pose a suitable diagnosis based on their findings or the data collected.

Figure 2.0. A diagrammatic representation of an Artificial Neural Network: a multi-layer perceptron (MLP) model.

Multi Layer Perceptron

The MLP should not be mistaken for the SLP (single-layer perceptron) model, which possesses just the input and output layers.
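To make the distinction concrete, here is a minimal sketch in plain NumPy (the weights are random placeholders, not trained values): the SLP maps input directly to output, while the MLP inserts a hidden layer with a nonlinear activation in between:

```python
# A sketch contrasting a single-layer perceptron (SLP) with a
# multi-layer perceptron (MLP), in plain NumPy. Weights are random
# placeholders; a real model would learn them via backpropagation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)               # one input sample with 3 features

# SLP: input connects directly to output, no hidden layer.
W = rng.normal(size=(3, 2))
slp_out = x @ W                      # a single linear mapping

# MLP: one hidden layer with a nonlinear activation in between.
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
hidden = np.tanh(x @ W1)             # hidden layer of 4 neurons
mlp_out = hidden @ W2                # output layer of 2 neurons

print(slp_out, mlp_out)
```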

I stumbled across the concepts of Neural Networks a month before I began to write this article, after learning about the Pydmb library, inspired by Geoffrey Hinton's Restricted Boltzmann Machines (RBMs), which I will cover in a separate article. The Boltzmann machine is a powerful stochastic neural network model and algorithm that could be implemented for classification, regression, collaborative filtering and dimensionality reduction; in that article I also detail how it could be utilized for predictive analysis and the supervision of classifiers for blood samples in transfusion. To enable better clarification, I will proceed to state and explain a few of the various types of neural networks today, giving a brief overview of the underlying principles behind each architecture and its implementation. They are: Radial Basis Function Neural Networks, Modular Neural Networks, Recurrent Neural Networks, Feed Forward Neural Networks, Multi-Layer Perceptron Neural Networks (illustrated in the above diagram) and Convolutional Neural Networks. In this specific article we will review three of the popularly used models.

Convolutional Neural Networks

The most preferred neural network utilized in the field of image processing in artificial intelligence is the Convolutional Neural Network (CNN) model. This model can be used in the field of computer vision, as there are implementations of CNNs in computer vision libraries such as OpenCV, which I attempted as a hobby project in C++ because the library supports three-dimensional matrices. The CNN contains layers which are responsible for the extraction of data from images; these assign unique values to the images that are later utilized for proper identification. The model implements a customized matrix (a kernel) to convolve over the images and produce a feature map. This custom matrix is initialized in a random manner and updated by means of backpropagation. Backpropagation, also known as backward propagation of errors, is a system of tuning the weights of the model to reduce the error rates and improve its efficiency; it helps to calculate the gradient of a loss function with respect to all the weights in the network. A pooling layer is responsible for the collation and aggregation of the maps produced by the convolutional layer.
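A hand-rolled sketch of the two operations just described, convolution and pooling, is given below; this is an illustrative toy in NumPy with an arbitrary kernel, not how a production CNN library implements them (and a real CNN would learn the kernel values via backpropagation rather than fix them by hand):

```python
# Toy versions of a CNN's convolution and max-pooling layers in NumPy.
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and build a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Aggregate each size x size patch down to its maximum value."""
    h, w = feature_map.shape
    return np.array([[feature_map[i:i+size, j:j+size].max()
                      for j in range(0, w - size + 1, size)]
                     for i in range(0, h - size + 1, size)])

image = np.random.rand(6, 6)                # a toy grayscale "image"
kernel = np.array([[1., 0.], [0., -1.]])    # an arbitrary 2x2 kernel
print(max_pool(convolve2d(image, kernel)))  # pooled feature map
```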

Feed Forward Network

The feed forward neural network is a model that features input and output neurons connected to each other, and sometimes hidden layers. As the name implies, the data moves forward in one direction alone, and thus these are referred to as feed forward networks. Hidden layers are not always present in the model, depending on its implementation. In several cases there is an absence of the backpropagation system; without backpropagation, the weights in the model cannot be updated, while the number of perceptron layers defines the capacity of the model to learn to a higher degree. Feed forward networks are used in speech recognition, face recognition and classification.

Recurrent Neural Networks

A recurrent neural network (RNN) model is an algorithm that operates on series of data, or sequential data. These models are utilized to recognize certain patterns in a data sequence, including, but not limited to, spoken language and written text. They function based on the training data supplied and are incorporated into several systems responsible for natural language processing and neural machine translation. Unlike feed forward neural networks, which have distinct weights across their nodes, RNNs share the same weights at each node or step (a minimal sketch of this weight sharing follows at the end of this section). Recurrent Neural Networks rely on a Backpropagation Through Time (BPTT) algorithm to determine gradients, in clear contrast to standard backpropagation, since this method is applicable and suited to sequence data. RNNs sum up errors across time steps, while feed forward networks do not, since they do not share weights or parameters across nodes.

Artificial Neural Networks, through the creation of multiple models, have proven to be one of the best approaches towards Artificial Intelligence, in deep learning; neural networks model complex relationships between data. Although logic and probabilistic theory combined with statistics is another approach to AI, referred to as the Bayes network, otherwise the Bayesian network, we will not review it here, as it heavily involves graphical models, probability theory and mathematical models that may span beyond this article. Overall, deploying these statistical machine learning or neural network models requires high computing power, depending on the capacity and size of the model.
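To make the weight-sharing idea concrete, the sketch below unrolls a toy recurrent step over a short sequence in NumPy; the weights are random placeholders rather than trained values, and a real RNN would tune them with BPTT:

```python
# A sketch of a recurrent step: the same Wxh and Whh weight matrices
# are reused at every time step, unlike a feed forward network.
import numpy as np

rng = np.random.default_rng(0)
Wxh = rng.normal(size=(3, 5))   # input-to-hidden weights (shared)
Whh = rng.normal(size=(5, 5))   # hidden-to-hidden weights (shared)
bh = np.zeros(5)                # hidden bias

def rnn_step(x_t, h_prev):
    # the new hidden state depends on the current input AND the
    # previous hidden state, which is how sequence context is kept
    return np.tanh(x_t @ Wxh + h_prev @ Whh + bh)

h = np.zeros(5)                      # initial hidden state
sequence = rng.normal(size=(4, 3))   # 4 time steps of 3 features each
for x_t in sequence:                 # unrolled through time, as in BPTT
    h = rnn_step(x_t, h)
print(h)
```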

The future of Artificial Intelligence?

While AI as a technology can currently be described as primitive and narrow, there are bigger plans concerning it, which include spanning into various fields and sectors such as healthcare, finance and agriculture, to improve the overall infrastructure currently employed.

Cognitive Machines

Cognition can be defined as the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. We can therefore regard the concept of cognitive machines or devices as a futuristic building block resting on Turing's proposition of whether machines can actually think.

Superintelligence

The human brain has naturally imposed limits. Although it is often claimed that there is no standard limit to our cognitive abilities because we only use ten percent of our brains (a popular, if disputed, notion), we could, from a different perspective, address this as a limit. The human brain's memory capacity in an average person is able to store trillions of bytes of information, despite the “ten percent” limitation. The average processing speed of the human brain has been estimated at sixty bits per second, and the fastest recorded for an individual at seventy-three bits per second. I find this quite amusing, considering that the earliest computers ran at about a hundred and ten bits per minute, while we now have microprocessors handling thousands of bits per second. A superintelligent machine would possess intelligence and processing speed surpassing the most brilliant human mind. Many believe that if a true AI system is developed, it would proceed to rewrite and modify itself, in what would be a recursive self-improvement.

Robotics and Automation

Simply put, the future of Artificial Intelligence encompasses advanced robotics. Currently, the robots developed are no different from our available microcomputers, generally because these systems exhibit no general independence and must be fed or instructed on what to do to achieve a specific task. With advanced AI systems, these machines would become hosts to the software, and would be controlled based on the beliefs, thoughts and ideas of the software. Robotics, I believe, would futuristically be used to grant a significant form of expression to such software. Machines would be able to possess artificial organs, such as eyes, that utilize computer vision and heavy image processing administered by a compelling neural network to classify and recognize images in their environment. If this is attained, artificial sight, or sight exhibited by robots, would be achieved; the human brain plays a very important role in sight, as it is responsible for decoding certain images and processing them into a person's consciousness or mind. Advanced automation in factories would be employed such that machines would be able to tackle any defunct device, component, facility, object or piece of infrastructure whenever the system notices an issue, unlike most machines today that adhere strictly to a specific algorithm.

The fears or risks of this technology?

I find it difficult to understand the unnecessary doomsday scenarios being proposed so early in the development of Artificial Intelligence. Futuristic AI would sufficiently provide dynamic and compelling solutions to research, healthcare and various other sectors that would be pivotal to our advancement and growth as a modern civilization. Hence, why propagate theories of existential threat when AI in its current state can only be classified as primitive, given that it is nowhere close to the concept of superintelligent AI?

Existential Threat

I would rather say AI could become a significant existential threat with the advent of advanced robotics, or through granting these systems administrative access or privileges to the web, by which time multiple systems and automated devices would be connected to the internet as a singular resource to improve the sharing of resources between internet devices. This would only be worth bothering about when superintelligent AI systems are built, which could possibly exhibit convergent behavior, acquiring sufficient resources to fulfill an impending task decided upon by the system itself. Many philosophers argue that such an AI system could acquire resources to prevent itself from being shut down. Generally speaking, many believe the AI may feel inferior to the human race, intimidated and threatened, and therefore try to evict or subdue it as a reflex action to ensure its own safety. I am among those who believe that consciousness, which includes advanced cognition, is a key factor to this ever occurring. However, the massive progress that has been made in the field of AI research has been exceptionally theoretical, and not easily applicable. In order to prevent existential threat, the research, study and development of AI systems should be supervised and regulated.

Advanced Weaponry, Advanced digital warfare

I was once questioned on my belief about AI being a destructive technology, and I stated that it only is in the “wrong hands”. AI originated from scientific research and philosophical beliefs, and was propounded all the more by science fiction, which also contributed to the goals of furthering the development of the field. Although the vast majority of the resources being spent and infrastructure being developed are for good causes and implementations of AI, it could be utilized as a blueprint or key to creating advanced weaponry, which could unfortunately spearhead an unwanted warfare. As of 2015, several countries were reported to be researching and developing battlefield robots. Despite AI tools being implemented by governments to locate, search, survey and predict threats, as the cost of Artificial Intelligence tools decreases and the technology becomes easier to understand over time, it would become dangerous in the hands of the wrong persons, governments or criminals.

Technological Unemployment

Most economists argue that the introduction of AI would lead to unemployment, as it is already replacing certain jobs with automation. This could spell immense laziness among unskilled individuals. The labour force could be classified into three segments: the unskilled, the semi-skilled and the skilled. AI already poses a threat to the unskilled and semi-skilled labour force, despite not being very advanced, as automated systems could easily handle most to all tasks in the two aforementioned segments with greater proficiency. So I do agree that an impending threat of AI is unemployment.

As for the advantages and goals of Artificial Intelligence, they seem endless; this article would not be able to address them all effectively. Aspects such as the reduction in human errors (which was pivotal to the development of self-driving cars, to ensure greater accuracy when driving and reduce the number of automobile accidents), research and study, digital assistance, faster decisions, and prediction models utilized to predict malignant growths in a woman's breast at early stages are some advantages of AI. It is not limited by fatigue and could work, handle and execute tasks at any hour of the day, preventing delays. AI would help reduce the number of persons and resources required to handle certain things, and would simplify our daily lives to the highest level. It is a no-brainer why this technology is being further developed and improved, owing to its immense benefits. Even though the cost of the implementation and infrastructure needed to deploy and apply these models is quite high, it decreases a little each year, which is a good sign of progress. The hardware designed to run and execute these models is being effectively improved upon as well: most companies are developing accelerated hardware devices to manage these models and implementing liquid cooling infrastructure in their data centers to complement the speed and efficiency.

Conclusion

The trend being generated around AI is certainly worth it. I have dived into the essence of the technology: its applications and approaches, a brief overview, philosophical notes, theoretical applications, a very brief history and its proponent, and I have covered a few concepts in detail, simplifying these terms in a way I believe would be presentable to any reader. Artificial Intelligence is the future, and the future is Artificial Intelligence.

Forgive my naivety and ignorance. I know I am in no way competent to deliver my judgments or propositions on the above subject matter at this time.
