Prompt engineering or AI literacy? How to develop a critical awareness of Generative AI in education
By Mari Cruz García Vallejo, Digital Education Consultant and Senior Fellow at Advance HE
Prompt engineering and Artificial Intelligence (AI) literacy are not, per se, antithetical concepts. In fact, prompt engineering is part of a higher cognitive process: the ability to formulate a problem in a structured way, following a logical sequence of thought that provides AI conversational agents with clear instructions about the response or output expected from them. Prompt engineering belongs to a wider set of competences grouped under the umbrella term AI literacy.
I teach an ECTS module at a Spanish university that covers the use of Generative AI (GenAI) to enhance learning and assessment, and the term ‘prompting’ often appears in the bibliography of the module. It is difficult for me to translate the noun (or should I say gerund?) into Spanish in this context. According to the Cambridge Dictionary (2024), the verb prompt means ‘to make someone decide to say or do something’. Prompting, then, for our purposes here, can be translated as the action of telling an AI chatbot what we want and how we want it.
In the article ‘AI Prompt Engineering Isn’t the Future’, Oguz A. Acar, Professor of Marketing and Innovation at King’s College London, states that without a well-formulated problem, even the most sophisticated prompts will fail. Acar distinguishes three prompting methods for guiding AI responses:
- Zero-shot prompting: This method involves presenting the AI agent with a task and no additional information. The agent must then rely on its own knowledge and reasoning abilities to complete the task.
- Few-shot prompting: This method provides the AI agent with a few examples or pieces of additional information before presenting the task. This helps the agent to better understand the context and type of output that is expected.
- Chain-of-thought prompting: This method is a more structured approach that guides the AI agent step-by-step through a logical progression or sequence of steps. This can be particularly helpful for tasks that require complex reasoning or problem-solving.
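The three methods above can be sketched as plain prompt strings. This is a minimal illustration only: the `ask` function is a hypothetical stand-in for a call to any chatbot API, and the statistics-module prompts are invented examples, not taken from Acar.

```python
# A sketch of the three prompting methods as plain prompt strings.
# `ask` is a hypothetical placeholder for a call to any AI chatbot API.

def ask(prompt: str) -> str:
    """Placeholder: send `prompt` to an AI agent and return its reply."""
    return f"[AI response to {len(prompt)} characters of prompt]"

# Zero-shot: the task alone, with no examples or extra context.
zero_shot = "Write three learning outcomes for an introductory statistics module."

# Few-shot: a couple of worked examples precede the task.
few_shot = (
    "Example outcome: 'Students can compute and interpret a sample mean.'\n"
    "Example outcome: 'Students can explain the difference between "
    "correlation and causation.'\n"
    "Now write three learning outcomes for an introductory statistics module."
)

# Chain-of-thought: the task is decomposed into an explicit sequence of steps.
chain_of_thought = (
    "Let's design an introductory statistics module step by step.\n"
    "Step 1: List the topics the module should cover.\n"
    "Step 2: For each topic, write one learning outcome.\n"
    "Step 3: Check that each outcome is measurable, and revise it if not."
)

for prompt in (zero_shot, few_shot, chain_of_thought):
    print(ask(prompt))
```

The point of the sketch is the shape of each prompt, not the wording: the zero-shot prompt carries the task alone, the few-shot prompt adds examples, and the chain-of-thought prompt makes the reasoning sequence explicit.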
Teaching academics the different uses of GenAI to enhance their teaching practice involves asking the AI to tackle complex queries that require chain-of-thought prompting, as well as a certain degree of AI literacy. I usually present my students with an example such as:
Imagine that you need to develop a new module comprising 20 ECTS credits based on the subject or discipline that you teach. Try asking an AI chatbot (such as Copilot, Claude, Bard or ChatGPT) to create the module using chain-of-thought prompting. How would you tackle this task?…
This example illustrates how prompt engineering requires both critical thinking and problem formulation (including problem-solving) to tackle a complex task. Critical thinking and problem-solving, core skills that HE institutions aim to develop in their students, are also part of the set of competences that AI literacy comprises.
What exactly is AI literacy?
King’s College London (2023) refers to the term AI literacy as an extension of existing critical thinking and digital literacies that seek to help students develop a critical awareness of GenAI models, how those models work, and their ethical, intellectual and environmental implications in HE. As a digital educator, I would extend such a definition by adding that AI literacy also involves developing a critical awareness in the following key areas:
- the regulatory frameworks, national and transnational, that protect citizens against the misuse of AI. This also includes an awareness of the implications of data protection legislation for the new AI regulation.
- the moral and philosophical guidelines that promote an ethical use of AI in education. This also involves bringing the principles of compassion and ágape (which can be translated into English as ‘loving kindness’) into AI ethics; those principles are missing from the debate around AI literacy in HE.
- the reconceptualisation of copyright, authorship and plagiarism for an intellectual product or work that has received contributions from a GenAI model.
Lee (2023) introduces a new key area of focus for AI literacy: AI pedagogy. The author defines AI pedagogy as the need for us educators to:
…engage our students in critical conversations on the capabilities and limitations of AI, and to know what pedagogical principles AI tools should use. (…) AI pedagogy needs to include practical examples and hands-on experience on how people can co-create and collaborate with AI.
(p. 14)
AI pedagogy is an interesting emerging term to consider. The AI Pedagogy Project (metaLAB (at) Harvard, 2024) defines this new field as engaging students and educators in critical conversations about AI, whereas Bearman and Ajjawi (2023) reflect on whether we need a pedagogy focused specifically on AI, or whether we should instead adapt current digital pedagogies to use these new tools ethically and in line with the regulatory frameworks.
How can educators develop critical AI literacy?
Since AI literacy is an emerging set of competencies and skills, there is no magical formula for answering this question. I usually adopt a multidisciplinary approach to developing critical AI literacy among my students, who are HE lecturers, covering the key areas mentioned above: for example, the EU AI Act and the implications of the General Data Protection Regulation (GDPR) for it; the reconceptualisation of copyright, plagiarism, intellectual work and authorship; and authentic assessment. I try to design learning activities where academics can acquire ‘hands-on experience’ working with the different AI conversational agents (alone or in collaboration with their students). These learning activities also aim to help my students become AI literate; that is, to develop the critical and creative thinking needed to guide GenAI agents in tackling complex problems. To support this, I have adapted the following guidance process from Acar (2023):
- Understand the problem first. As I mentioned before, understanding the task that we want to accomplish with a specific GenAI tool, before asking, is the key to effective communication with the AI. As Acar (2023) states:
Once a problem is clearly defined, the linguistic nuances of a prompt become tangential to the solution.
In the example from the previous section, “to develop a new module comprising 20 ECTS credits”, the first step would be to clarify what “to develop” really means, so that we can define the result or outputs expected from the GenAI: do we want the AI to write the outline of the module, the content of the module, and/or the assessment methods? Do we want the AI to structure the module into self-contained units and adapt the content to a particular delivery mode? Do we want the AI to also design the learning activities?
- Divide the initial task into smaller subtasks or subprocesses where possible. For example, writing a new module involves several phases or subtasks, as we saw in the previous step: defining the learning outcomes of the module and how they map onto the wider learning objectives of the programme of study; defining the temario (equivalent to the English term syllabus) of the module; structuring the module into units or sections; designing the learning activities and deciding whether those activities should be synchronous or asynchronous; defining the pedagogical approaches that would best inform the design of the module, etc. We then need to carefully customise the AI prompts for each of these subtasks.
- Set the context and provide additional information for precision. We need to provide the GenAI with precise background information and additional data to contextualise the task. Sometimes, we may need to further train the AI by providing it with very specific datasets or examples (warning: watch out for the privacy policy and service terms of the AI provider before feeding the AI agent with any datasets that are private or specific to your institution!). For the creation of a new module, we would need to provide the GenAI agent with relevant contextual information for each of the subtasks that we have identified in the previous step: the student background (undergraduate or postgraduate); the delivery mode (online, face-to-face and/or hybrid); the pedagogical methodology that we will employ for designing the learning materials; the activities and assessment methods of the module, etc.
- Engage the AI to tackle the challenges from a different perspective. We can ask the GenAI to help us overcome the constraints and boundaries that we may encounter when tackling a task or problem. The GenAI can be a valid interlocutor to adopt different perspectives or to find alternative solutions for the tasks. For example, when writing learning materials for a new module, we can ask the AI chatbot to adopt the perspective of a neurodivergent or disabled student to check that our learning materials are accessible and inclusive for all our learners.
- Ask the AI itself for help and give feedback. We must not rely on a single prompt to obtain the result or outcome that we want. In the same way that we provide feedback and guidance to our students after they submit their initial drafts for formative assessment, we can also further train the AI agent by interacting with it and providing feedback. For example, if we ask a GenAI chatbot to write the learning materials for the module that we are teaching, we may want to refine the initial result by asking the AI to change the tone of writing or to incorporate specific facts, theories or data. We can let the AI know when its arguments appear over-complex or too simple, or when the writing style needs to be adapted (for example, “Can you rewrite your text in a style that better suits a blog post?”).
- Ask the AI itself to help with the prompting. We can even ask the AI to help us to formulate the correct instructions so that it can solve the task or query. For example, we can ask the AI agent what type of contextual information or additional data it would need from us to tackle a task or to generate the output in a specific format.
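The first three steps above can be sketched in a few lines of code: decompose the module-design task into subtasks and build one contextualised prompt per subtask. The subtask wording and the context fields below are illustrative assumptions, not a prescribed template.

```python
# A minimal sketch of steps 1-3 of the guidance process: decompose the
# module-design task into subtasks and attach shared context to each prompt.
# Subtask names and context fields are illustrative assumptions.

context = {
    "credits": "20 ECTS",
    "level": "postgraduate",
    "delivery": "online",
    "discipline": "data science",
}

subtasks = [
    "define the learning outcomes and map them onto the programme objectives",
    "propose a syllabus (temario) structured into self-contained units",
    "design asynchronous learning activities for each unit",
    "suggest assessment methods aligned with the learning outcomes",
]

def build_prompt(subtask: str, ctx: dict) -> str:
    """Combine one subtask with the shared background information."""
    background = ", ".join(f"{k}: {v}" for k, v in ctx.items())
    return f"Context ({background}). Please {subtask}."

prompts = [build_prompt(s, context) for s in subtasks]
for p in prompts:
    print(p)
```

Each generated prompt can then be refined iteratively with the feedback loop described in the last two steps, rather than being treated as a one-shot instruction.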
Understanding the problem, formulating the right question(s) and decomposing the problem into subtasks are AI literacy skills that need to be built up with practice. In Spanish we have a saying, la práctica hace al maestro, which can be translated into English as ‘practice makes perfect’. The more we interact with GenAI, through practice, research, and by designing learning activities and assessment methods that engage our students in critical conversations with AI, the more AI literate we will become.
References:
Acar, O. A. (2023) ‘AI Prompt Engineering isn’t the future’, Harvard Business Review, 6 June. Available at: https://hbr.org/2023/06/ai-prompt-engineering-isnt-the-future (Accessed 21 January 2024).
Bearman, M. and Ajjawi, R. (2023) ‘Learning to work with the black box: Pedagogy for a world with artificial intelligence’, British Journal of Educational Technology, Vol. 54, Issue 5, pp. 1160–1173. Available at: https://doi.org/10.1111/bjet.13337 (Accessed 21 January 2024).
Cambridge Dictionary (2024) ‘Prompt definition’, Cambridge University Press and Assessment. Available at: https://dictionary.cambridge.org/dictionary/english/prompt (Accessed 13 February 2024).
King’s College London (2023) Generative AI in HE. Available at: https://www.kcl.ac.uk/short-courses/generative-ai-in-he (Accessed 21 January 2024).
Lee, S. (2023) ‘AI Toolkit for Educators’, EIT InnoEnergy Master School Teachers Conference 2023. Available at: https://www.slideshare.net/ignatia/ai-toolkit-for-educators (Accessed 21 January 2024).
metaLAB (at) Harvard (2024) About the AI Pedagogy Project. Available at: https://aipedagogy.org/about/ (Accessed 20 February 2024).
_______________________________________________________________________
Mari Cruz García Vallejo is a digital education consultant and a senior fellow at Advance HE. She currently researches and teaches on Generative AI in HE at the ULPGC (Spain) while on sabbatical leave from Heriot-Watt University. As a digital education consultant, Mari Cruz has collaborated with several universities in Europe and the UK. She was a member of the start-up team that developed the Kuwait-Scotland eHealth Innovation Network (KSeHIN) programme, an education collaboration partnership between the Dundee Medical School and the Dasman Diabetes Institute of Kuwait. The KSeHIN programme was nominated twice for the category of the ‘International Collaboration of the Year’ at the Times Higher Education (THE) awards.
You can connect with Mari Cruz at https://www.linkedin.com/in/mari-cruz-garcia-vallejo/ or via her blog https://substack.com/@maricruzgarciavallejo