Defining AI and AI types
Artificial Intelligence (AI) refers to computational systems that mimic human intelligence and are capable of performing complex tasks. The main goal of AI is to develop autonomous systems that solve problems previously requiring human intelligence, thereby automating complicated and time-consuming processes.
Source: https://research.csiro.au/cor/machine-learning/ accessed 2/9/23
In November 2023, OECD member countries approved a revised version of the Organisation’s definition of an AI system.
"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
Source: https://www.oecd-ilibrary.org/science-and-technology/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en accessed 17/4/2024
The AI systems that we see around us today are the result of decades of steady advances in AI technology. Digital computing can be traced back to the 1940s and the beginnings of artificial intelligence to the 1950s. The evolution of AI is rapid and ongoing.
AI is designed and implemented by humans and is not neutral; as a result, there are many considerations, including but not limited to:
- Bias: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias. This may originate from biased training data or from bias in the initial design.
- Safety: Safety by design principles take a proactive and preventative approach that embeds safety into the culture and leadership of an organisation, emphasising accountability and aiming to foster more positive, civil and rewarding online experiences.
- Environmental impact: Like other digital technologies, the infrastructure of the AI industry and the training and maintenance of AI models have an impact on the environment through mining, water use and emissions.
- Copyright and intellectual property: Generative AI in particular has implications for copyright and intellectual property, including the data used to train models.
There is now a wide variety of AI digital tools used for many purposes across many domains, including security systems, autonomous vehicles, chatbot programs like ChatGPT, software that translates texts into other languages, virtual assistants operated by speech recognition, weather warning systems and self-service checkouts at supermarkets.
Users engaging with these systems may not recognise that, by inputting data, they are contributing to those systems. They may also be unaware of the function the AI performs, such as identifying produce at a supermarket self-checkout.
Ethical considerations associated with AI
There is no doubt that there are ethical conundrums and some wicked problems associated with AI that can be better understood through purposeful, scaffolded critical thinking and ethical reasoning.
The ethical complexity of AI includes phenomena such as the rapid, real-time harvesting of big data, which can be used to generate profiles and predictions about humans that then influence the options available to them. AI could also be part of the intentionally deceptive creation of user experiences designed to take advantage of human behaviour, known as “dark pattern” UX design. As a result, humans may not know that their options have been restricted, or that they have been unconsciously influenced to make decisions when using AI. Monitoring this is difficult because proprietary and “black box” algorithms mean that humans cannot inspect or audit the automated processes that influence decision-making. Regulation, laws and design standards can address this; however, ethically, students should explore the tensions and limits to individual human agency in an AI world.
Biases in the data AI systems are trained on can perpetuate discriminatory stereotypes or unfairly represent, omit or negatively portray certain groups, particularly when AI systems are trained using historical data generated by biased human interactions.
Ethical actions include considering the diversity of data inputs and the possibility of incorrect data labelling in supervised or semi-supervised machine learning. Limited datasets can unfairly amplify one point of view over another, depending on the data used to build and train the system. Understanding that out-of-date or disproven data may appear in the datasets AI systems use to create outputs assists in informing ethical decision-making. Students could consider the source and authorship of the data being used to inform outputs gathered from AI systems, such as the large language models used for research and validation.
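How a limited or skewed dataset shapes a system's outputs can be illustrated with a minimal sketch. The scenario, data and function names below are entirely hypothetical: a naive model that simply records the most common historical label for each group will reproduce whatever imbalance the training data contains.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: historical decisions
# in which one group was approved far more often than another.
training_data = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_a", "approve"), ("group_b", "decline"), ("group_b", "decline"),
]

def train_per_group(data):
    """'Train' by recording the most common label seen for each group."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_per_group(training_data)

# The model replays the historical bias: group membership alone now
# determines the outcome, regardless of any individual's circumstances.
print(model["group_a"])  # approve
print(model["group_b"])  # decline
```

The point of the sketch is that nothing in the code is malicious; the discriminatory behaviour comes entirely from the data supplied.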
Students could consider the decision-making processes of predictive algorithms, and the need for human responsibility (human-in-the-loop or HITL) to mitigate against the potential for bias and discrimination in AI systems. Ethical considerations are interrelated with sustainability considerations as the use of AI can either positively or negatively impact the planet and contribute to climate change.
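Human-in-the-loop oversight is commonly implemented as a confidence threshold: predictions the system is unsure about are routed to a person rather than acted on automatically. A minimal sketch, in which the threshold value and function name are illustrative assumptions rather than any particular system's design:

```python
REVIEW_THRESHOLD = 0.8  # illustrative cut-off, tuned per application

def route_decision(prediction: str, confidence: float) -> str:
    """Accept high-confidence predictions; escalate the rest to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto: {prediction}"
    return f"human review: {prediction} (confidence {confidence:.2f})"

print(route_decision("approve", 0.95))  # auto: approve
print(route_decision("decline", 0.55))  # human review: decline (confidence 0.55)
```

Students could discuss where the threshold should sit for different applications, and who is accountable for the decisions on each side of it.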
AI can be used to manipulate and deceive when used unethically, as seen in the rise of deepfakes and AI-generated image-based bullying and abuse. Students could consider the part they might play in maintaining an ethical approach to using AI by avoiding the creation of such content and by seeking appropriate permissions.
Designers of AI are accountable for ethical practices and should consider Safety by design principles, as well as the algorithmic and privacy impacts on users of the AI systems for which they are responsible.
Ethical understanding is developed through the investigation of a range of questions drawn from various contexts in the curriculum. Exploring ethical dilemmas associated with AI through learning area content provides students with opportunities to develop the skills and dispositions described in the Ethical Understanding general capability.
Ways of thinking
Thinking approaches referred to in the Australian Curriculum help students to understand how AI works and how to use or design AI systems: in particular, computational thinking, systems thinking, critical and creative thinking, and design thinking.
Computational thinking involves:
- decomposition: breaking problems into parts
- pattern recognition: analysing the data or relationships and looking for patterns to make sense of the data or problem
- abstraction: removing unnecessary details and focusing on important aspects of structure or data
- algorithms: creating a series of ordered steps that solve, or can be used to investigate, a class of problems
- models, experiments and simulations: creating and applying models or simulations that represent situations or conduct experiments
- generalisation: recognising and explaining patterns in solutions and extending to new situations
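The steps above can be sketched on a small worked example. The scenario (finding the best-selling products from raw sales records) is illustrative only; each comment names the computational-thinking step it demonstrates.

```python
from collections import Counter

# Hypothetical raw input: one record per sale.
sales = ["apple", "bread", "apple", "milk", "apple", "bread"]

def top_sellers(records, n=2):
    # Decomposition: split the problem into counting, then ranking.
    # Abstraction: price, time and customer are ignored; only the
    # product name matters for this question.
    counts = Counter(records)  # pattern recognition: tally repeated items
    return [item for item, _ in counts.most_common(n)]  # algorithm: ordered steps

# Generalisation: the same function works for any list of labels,
# not just sales records.
print(top_sellers(sales))  # ['apple', 'bread']
```

A classroom activity could ask students to identify which line performs which computational-thinking step, and then to generalise the function to a new dataset of their own.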
Systems thinking helps people to think holistically about the interactions and interconnections that shape the behaviour of systems.
Design thinking allows students as designers to empathise and understand needs, opportunities and problems; generate, iterate and represent innovative, user-centred ideas; and analyse and evaluate those ideas.
Critical and creative thinking is highly valued in a data-driven world where AI is ubiquitous. However, teachers and students should be aware that, as advances in AI continue, it will become increasingly difficult to discern misleading or harmful content, such as deepfakes, through critical thinking alone.