Teaching Artificial Intelligence for K-12

What is AI

Defining AI

The term 'artificial intelligence' was first used at a 1956 workshop held at Dartmouth College, a US Ivy League university, to describe 'the science and engineering of making intelligent machines, especially intelligent computer programs' (McCarthy et al., 2006, p. 2). Over the following decades, artificial intelligence (AI) developed in fits and starts, with periods of rapid progress interspersed with 'AI winters' (Russell and Norvig, 2016).

All the while, definitions of AI multiplied and expanded, often becoming entangled with the philosophical questions of what constitutes intelligence and whether machines can ever really be intelligent.

To give just one example, Zhong (2006, p. 90) defined AI as: 'a branch of modern science and technology aiming at the exploration of the secrets of human intelligence on one hand and the transplantation of human intelligence to machines as much as possible on the other hand, so that machines would be able to perform functions as intelligently as they can.'

Pragmatically sidestepping this long-running debate, for the purpose of these guidelines AI might be defined as 'computer systems that have been designed to interact with the world through capabilities that we usually think of as human' (based on Luckin et al., 2016, p. 14).

Currently, we are experiencing an AI renaissance, with an ever-increasing range of sectors adopting the type of AI known as machine learning, in which the AI system analyses huge amounts of data. This has come about as a result of two critical developments: the exponential growth of data and the exponential growth of computer processing power.



Applications of AI

Real-world applications of AI are becoming increasingly pervasive and disruptive, with well-known examples ranging from automatic translation between languages and automatic facial recognition (used for identifying travellers and tracking criminals) to self-driving vehicles and personal assistants (on smartphones and other devices in our daily life).

One particularly noteworthy area is health care. A recent transformative example is the application of AI to develop a novel drug capable of killing many species of antibiotic-resistant bacteria. Moreover, the application of AI to analyse medical imaging (e.g. of foetal brain scans to give an early indication of abnormalities, retinal scans to diagnose diabetes, and x-rays to improve tumour detection) illustrates the potentially significant benefits of AI and humans working in symbiosis:

"When we combine AI-based imaging technologies together with radiologists, what we have found is that the combination of the AI technology and the radiologist outperforms either the AI or the radiologist by themselves." (Michael Brady, Professor of Oncology, University of Oxford, quoted in MIT Technology Review and GE Healthcare, 2019)

Other increasingly common applications of AI include:

  • Auto-journalism: AI agents continually monitoring global news outlets and extracting key information for journalists, and also automatically writing some simple stories;
  • AI legal services: for example, providing automatic discovery tools, researching case law and statutes, and performing legal due diligence;
  • AI weather forecasting: mining and automatically analysing vast amounts of historical meteorological data, in order to make predictions;
  • AI fraud detection: automatically monitoring credit card usage, to identify patterns and anomalies (i.e., potentially fraudulent transactions);
  • AI-driven business processes: for example, autonomous manufacturing, market analysis, stock trading, and portfolio management;
  • Smart cities: using AI and the interconnected internet of things (IoT) to improve efficiency and sustainability for people living and working in urban settings;
  • AI tutoring in education; and
  • AI robots: physical machines that use AI techniques, such as machine vision and reinforcement learning, to help them interact with the world.

While each of these examples has significant positive potential for society, we should not neglect to point out that other applications of AI are more controversial. Two examples are:

  • Autonomous warfare: weapons, drones and other military equipment that function without human intervention; and
  • Deep-fakes: automatic generation of fake news, and the replacement of faces in videos so that politicians and celebrities appear to say or do things they never said or did.

We should also be careful when evaluating many of the dramatic claims made by some AI companies and the media. To begin with, despite headlines announcing that AI tools are now 'better' than humans at tasks such as reading texts and identifying objects in images, the reality is that these successes hold only in limited circumstances (for example, when the text is short and contains all the required information, so that no inference is needed).

Current AI technologies can also be very brittle: if the data is subtly altered (for example, if some random noise is superimposed on an image), the AI tool can fail badly (Marcus and Davis, 2019).



AI Techniques

Each application of AI depends on a range of complex techniques, which require AI engineers to be trained in higher-level mathematics, statistics and other data sciences, as well as coding. These techniques are too specialized to explore in depth here. Instead, we will briefly introduce some core AI techniques, before turning to the AI technologies they make possible.

Classical AI

Much early or classical AI, variously known as symbolic AI, rule-based AI, or good-old-fashioned AI (GOFAI), involves writing sequences of IF... THEN... statements and other rules of conditional logic: the steps that the computer will take to complete a task. Over decades, rule-based AI expert systems were developed for a diverse range of applications, such as medical diagnostics, credit ratings, and manufacturing.

Expert systems are based on an approach known as knowledge engineering, which involves eliciting and modelling the knowledge of experts in a specific domain, a resource-intensive task that is not without complications. Typical expert systems contain many hundreds of rules, yet it is usually possible to follow their logic. However, as the interactions between the rules multiply, expert systems can become challenging to revise or enhance.
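The IF... THEN... approach described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the three rules and the symptom names are invented for the example, whereas a real expert system would encode many hundreds of rules elicited from domain experts.

```python
# A minimal sketch of a rule-based (GOFAI) expert system.
# The rules and symptoms are invented for illustration only.

def diagnose(symptoms):
    """Apply IF... THEN... rules, in order, to a set of observed symptoms."""
    rules = [
        # (IF all of these symptoms are present, THEN conclude this)
        ({"fever", "cough", "fatigue"}, "possible flu"),
        ({"sneezing", "runny nose"}, "possible cold"),
        ({"fever"}, "non-specific infection"),
    ]
    for conditions, conclusion in rules:
        if conditions <= symptoms:   # all conditions satisfied
            return conclusion
    return "no rule matched"

print(diagnose({"fever", "cough", "fatigue"}))  # -> possible flu
print(diagnose({"sneezing", "runny nose"}))     # -> possible cold
```

Note how the logic is fully transparent and easy to follow, but every new rule must be written, ordered, and maintained by hand, which is why large expert systems become hard to revise.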

Machine learning

Many recent AI advances, including natural language processing, facial recognition, and self-driving cars, have been made possible by advances in machine-learning-based computational approaches. Rather than using rules, machine learning (ML) analyses large amounts of data to identify patterns and build a model which is then used to predict future values. It is in this sense that the algorithms, rather than being pre-programmed, are said to be learning. There are three main ML approaches: supervised, unsupervised, and reinforcement.

Supervised learning involves data that has already been labelled (such as many thousands of photographs of people that have been labelled by humans). The supervised learning algorithm links the data to the labels, to build a model that can be applied to similar data (for example, to automatically identify people in new photographs).
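One of the simplest supervised learning algorithms is nearest-neighbour classification, which can stand in for the photo-labelling idea above. The following is a minimal sketch: the 2-D feature vectors and the 'cat'/'dog' labels are invented stand-ins for real labelled data such as photographs.

```python
# A minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# The feature vectors and labels below are invented for illustration.

def nearest_neighbour(labelled_data, new_point):
    """Predict the label of new_point from the closest labelled example."""
    def distance(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled_data, key=lambda item: distance(item[0], new_point))
    return closest[1]

# Labelled training data: (features, label) pairs
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(nearest_neighbour(training, (1.1, 1.0)))  # -> cat
print(nearest_neighbour(training, (5.1, 4.9)))  # -> dog
```

The 'model' here is simply the stored labelled examples; more sophisticated supervised methods compress the labelled data into learned parameters instead.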

In unsupervised learning, the AI is provided with even larger amounts of data, but this time the data has not been categorized or labelled. The unsupervised learning algorithm aims to uncover hidden patterns in the data, clusters that can be used to classify new data. For example, it may automatically identify letters and numbers in handwriting by looking for patterns in thousands of examples. In both supervised and unsupervised learning, the model derived from the data is fixed, and if the data changes, the analysis has to be undertaken again.
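A classic unsupervised technique is k-means clustering, which groups unlabelled data points around cluster centres without being told what the groups mean. Below is a minimal sketch on invented 1-D data; the two clusters emerge purely from the structure of the numbers.

```python
# An unsupervised-learning sketch: k-means clustering on unlabelled
# 1-D data. The data values are invented for illustration.

def k_means(points, centres, iterations=10):
    """Repeatedly assign each point to its nearest centre, then move
    each centre to the mean of the points assigned to it."""
    for _ in range(iterations):
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)),
                          key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centres, clusters = k_means(data, centres=[0.0, 10.0])
print(centres)  # two centres, one near 1.0 and one near 8.1
```

Once the centres are found, a new data point can be classified by whichever centre it falls closest to, which is exactly the sense in which the discovered clusters 'classify new data'.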

However, the third ML approach, reinforcement learning, involves continuously improving the model based on feedback; in other words, this is machine learning in the sense that the learning is ongoing. The AI is provided with some initial data from which it derives a model, which is assessed as correct or incorrect and rewarded or punished accordingly. The AI improves iteratively (learning and evolving) over time. For example, if an autonomous car avoids a collision, the model that enabled it to do so is rewarded (reinforced), enhancing its ability to avoid collisions in the future.
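The reward-and-punish loop can be sketched with a tiny value-learning example. This is a deliberate simplification of the collision-avoidance scenario: the two actions and their fixed rewards are invented for illustration, whereas a real system would learn from noisy, delayed feedback.

```python
# A reinforcement-learning sketch: a table of action values is
# continually adjusted by reward feedback. The actions and reward
# values are invented for illustration.

def update(values, action, reward, learning_rate=0.5):
    """Move the value estimate for an action towards the observed reward."""
    values[action] += learning_rate * (reward - values[action])

values = {"brake": 0.0, "accelerate": 0.0}
rewards = {"brake": 1.0, "accelerate": -1.0}  # braking avoids the collision

for _ in range(10):              # repeated experience over time
    for action in values:
        update(values, action, rewards[action])

best = max(values, key=values.get)
print(best)  # -> brake (the rewarded behaviour is reinforced)
```

Unlike the supervised and unsupervised sketches, the model here is never fixed: every new experience nudges the value estimates, which is the sense in which the learning is ongoing.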

Artificial neural networks

An artificial neural network (ANN) is an AI approach that is inspired by the structure of biological neural networks (i.e. animal brains). ANNs each comprise three types of interconnected layers of artificial neurons: an input layer, one or more hidden intermediary computational layers, and an output layer that delivers the result. During the ML process, weightings given to the connections between the neurons are adjusted, in processes such as backpropagation and reinforcement learning, which allows the ANN to compute outputs for new data. A well-known example that uses an ANN is Google DeepMind's AlphaGo, which in 2016 defeated one of the world's leading players of the game Go.
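The three-layer structure just described can be sketched as a forward pass through a tiny network. The weight values here are fixed by hand purely for illustration; in a real ANN they would be the quantities adjusted during training.

```python
# A sketch of a tiny artificial neural network: an input layer
# (2 values), one hidden layer (2 neurons) and an output layer
# (1 neuron). The weights are hand-picked for illustration; in
# practice they are learned during training.
import math

def sigmoid(x):
    """Squash any value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """Compute the network's output for one input vector."""
    hidden = [sigmoid(sum(w * x for w, x in zip(neuron, inputs)))
              for neuron in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_weights = [[0.5, -0.6], [-0.4, 0.9]]  # 2 hidden neurons, 2 inputs each
output_weights = [1.2, -0.8]                 # 1 output neuron, 2 hidden inputs

print(forward([1.0, 0.0], hidden_weights, output_weights))
```

Training then amounts to nudging `hidden_weights` and `output_weights` so that the printed output moves closer to the desired answer for each training example.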

Deep learning

Deep learning refers to ANNs that comprise multiple intermediary layers. It is this approach that has led to many of the recent remarkable applications of AI (for example, in natural language processing, speech recognition, computer vision, image creation, drug discovery, and genomics). Emerging models in deep learning include so-called deep neural networks (DNN), which find effective mathematical operations to turn an input into the required output; recurrent neural networks (RNN), which allow data to flow in any direction, can process sequences of inputs, and are used for applications such as language modelling; and convolutional neural networks (CNN), which process data that come in the form of multiple arrays, such as using three two-dimensional images to enable three-dimensional computer vision.
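The core operation behind the convolutional networks mentioned above is sliding a small filter (kernel) across an array of input data. The following sketch runs a hand-picked, invented kernel over a 1-D signal; image CNNs apply the same idea over 2-D arrays, and learn the kernel values rather than having them supplied.

```python
# A sketch of the convolution operation at the heart of CNNs:
# sliding a small kernel across an input array. The signal and
# kernel values are invented for illustration.

def convolve(signal, kernel):
    """Slide the kernel across the signal, producing one value per position."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [0, 0, 0, 1, 1, 1]      # a step "edge" in the signal
kernel = [-1, 1]                 # responds where the signal increases
print(convolve(signal, kernel))  # -> [0, 0, 1, 0, 0]
```

The output spikes exactly where the signal changes, which is why stacks of learned kernels are so effective at detecting edges, textures, and higher-level features in images.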



AI Technologies

All of the AI techniques described in the previous section have led to a range of AI technologies, which are increasingly being offered as services and are being used in most of the aforementioned applications. AI technologies, which are detailed in Table 1, include the following:

Natural language processing (NLP): The use of AI to automatically interpret texts, including semantic analysis (as used in legal services and translation), and generate texts (as in auto-journalism).

Speech recognition: The application of NLP to spoken words, as used in smartphones, AI personal assistants, and conversational bots in banking services.

Image recognition and processing: The use of AI for facial recognition (e.g., for electronic passports); handwriting recognition (e.g., for automated postal sorting); image manipulation (e.g., for deep-fakes); and autonomous vehicles.

Autonomous agents: The use of AI in computer game avatars, malicious software bots, virtual companions, smart robots, and autonomous warfare.

Affect detection: The use of AI to analyse sentiment in text, behaviour and faces.

Data mining for prediction: The use of AI for medical diagnoses, weather forecasting, business projections, smart cities, financial predictions, and fraud detection.

Artificial creativity: The use of AI in systems that can create new photographs, music, artwork, or stories.

Table 1. AI Techniques and Technologies

Natural language processing (NLP)
  Details: AI to automatically generate texts (as in auto-journalism) and interpret texts, including semantic analysis (as used in legal services and translation)
  Main AI techniques: Machine learning (especially deep learning), regression, and k-means
  Development: NLP, speech recognition, and image recognition have all achieved accuracy in excess of 90%. However, some researchers argue that, even with more data and faster processors, this will not be much improved until a new AI paradigm is developed.
  Example: Otter

Speech recognition
  Details: NLP applied to spoken words, as used in smartphones, personal assistants, and conversational bots in banking services
  Main AI techniques: Machine learning, especially a deep learning recurrent neural network approach called long short-term memory (LSTM)
  Example: Alibaba Cloud

Image recognition and processing
  Details: Facial recognition (e.g., for e-passports); handwriting recognition (e.g., for automated postal sorting); image manipulation (e.g., for deep-fakes); and autonomous vehicles
  Main AI techniques: Machine learning, especially deep learning convolutional neural networks
  Example: Google Lens

Autonomous agents
  Details: Computer game avatars, malicious software bots, virtual companions, smart robots, and autonomous warfare
  Main AI techniques: GOFAI and machine learning (for example, deep learning self-organising neural networks, evolutionary learning, and reinforcement learning)
  Development: Research efforts are focusing on emergent intelligence, coordinated activity, situatedness, and physical embodiment, inspired by simpler forms of biological life.
  Example: Kari Virtual Girlfriend

Affect detection
  Details: Text, behaviour, and facial sentiment analyses
  Main AI techniques: Bayesian networks and machine learning, especially deep learning
  Development: Multiple products are being developed globally; however, their use is often controversial.
  Example: Affectiva

Data mining for prediction
  Details: Financial predictions, fraud detection, medical diagnoses, weather forecasting, business processes, and smart cities
  Main AI techniques: Machine learning (especially supervised and deep learning), Bayesian networks, and support vector machines
  Development: Data mining applications are growing exponentially, from predicting shopping purchases to interpreting noisy electroencephalography (EEG) signals.
  Example: Research project

Artificial creativity
  Details: Systems that can create new photographs, music, artwork, or stories
  Main AI techniques: Generative adversarial networks (GANs), a type of deep learning involving two neural networks pitted against each other
  Development: GANs are at the cutting edge of AI, such that future applications are only slowly becoming evident.
  Example: This Person Does Not Exist