A group of robots in a forest with red lights for eyes

Trying to Predict the Future of AI

by Sonya McChristie, Learning Design Manager at The University of Sunderland

No-one working in higher education over the past six months will have been able to escape the frantic discussions about artificial intelligence (AI) and what it means for the future. It’s not my background, and I’m certainly no expert. But I have had to learn a lot about it to get myself into a position where I can try to help guide academics and students as we navigate these issues together. One thing I find troubling is how overblown some of the myths and misconceptions have become. In this blog post, I will try to dispel some of them, and offer my take on the real impacts we are going to have to deal with.

The End is Not Nigh

Let’s get the big silly one out of the way first – the robot apocalypse is not imminent; super-intelligent computers are not going to take over the world, and you are not going to be able to upload your consciousness into a computer. Maybe some of these things will be possible in the future, but not with our current technologies and level of understanding.

ChatGPT and the other systems which have appeared over the past year are fancy, and to be fair very impressive, autocomplete systems. They take the same next-word prediction technology that has been in your phone and email for the past ten years to an extreme, with monumental quantities of data for training and computational power for generation. That approach has its limits, and OpenAI have been clear about this, stating that they are coming to the end of it (Knight, 2023). Success in a narrow field, no matter how impressive, gets us no closer to general artificial intelligence.
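To make the ‘autocomplete at scale’ point concrete, here is a minimal bigram predictor in Python. This is my own toy sketch, not anything OpenAI has published: it simply counts which word follows which in its training text and predicts accordingly. Scale those counts up to billions of learned parameters and a large slice of the internet, and you have the essence of a large language model.

```python
from collections import Counter, defaultdict

# A tiny, invented 'training corpus'.
corpus = (
    "the cat sat on the mat and the cat saw the dog "
    "the dog sat on the rug and the dog saw the cat"
).split()

# For every word, count which words follow it in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

def generate(seed, length=8):
    """Greedily extend a seed word one prediction at a time."""
    words = [seed]
    for _ in range(length):
        next_word = predict_next(words[-1])
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # quickly settles into a repetitive loop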

Another limiting factor on these models is that they rely on a particular type of intelligence, known as inductive reasoning, which works by finding patterns in data or experience. Human-level general intelligence also requires the ability to draw conclusions or form hypotheses from incomplete information using prior knowledge and experience, known as abduction (Ahmed, 2023). No-one really understands how we do this, let alone has anyone proposed how it could be reduced to a process, or set of rules, that could be programmed into a computer.

Shares index on a screen
Photo by Maxim Hopman on Unsplash

It’s a Bubble

If I had been writing this at any other time in the past five to ten years, I would be talking about machine learning instead of artificial intelligence. People said the same things about machine learning then that they are saying about artificial intelligence today: that it’s going to change the world and steal your job, or perhaps, if you’re slightly more optimistic, that it’ll usher in an era of post-capitalist techno-utopia. The reality is going to be much more mundane and far less dramatic.

The tech sector and speculative financial bubbles fit together like a hand in a glove. Machine learning was going to change the world, until it didn’t. Cryptocurrency was going to revolutionise the global economy, until it hit the brick wall of regulation and accountability. Then we were told the next big thing was going to be the metaverse, a rebranding of virtual reality, which itself was going to change the world back in 2012. Facebook even went so far as to change their name to Meta, and the world collectively shrugged and laughed about the lack of legs (Mehta, 2022). Now it’s the turn of AI again. Of course, it is going to have an impact, for good and ill, as all of these technologies have. However, we are at the limits of what is currently possible, and I don’t believe improvements from here on are going to be much more than incremental. But incremental improvement doesn’t generate investment or inflate shares, and so the industry will hype and hype, until it fizzles out… and we move on to ‘The Next Big Thing’.

We Need to be Clearer About Language

OpenAI’s ChatGPT, Google’s Bard, and all the others behind the current wave are a very particular type of AI called generative artificial intelligence. They generate text, images or other media based on their training data and appropriate prompts. They are not ‘general’ AI, and the common practice of flattening the terminology to just ‘AI’ is unhelpful: it feeds both people’s misunderstanding and ‘the bubble’. General artificial intelligence, or ‘strong’ AI, is the ability to perform any kind of intellectual task. It is, for many, the ultimate goal. For some, the doomsday scenario.

Whether general artificial intelligence is even possible remains a question for the future. One thing is for certain: it is not going to come from ChatGPT. ChatGPT, like every artificial intelligence system that has ever been created, is a narrow AI, designed to perform one very specific task. It would be very helpful if we could all be precise in our use of language and take care to refer to these systems as what they are. ChatGPT and Stable Diffusion are generative AI systems: they produce content. ChatGPT is also a large language model: it works largely by having been trained on huge quantities of textual data. Unfortunately, I don’t hold out much hope of improvement in the language we commonly use, partly because precision doesn’t feed ‘the bubble’, and partly because the correct terminology is all a bit clunky.

Image by smoothgroover22, from Flickr

AI is Deepening Inequalities and Making the Climate Crisis Worse

Nearly 15 years ago, there was a short-run and underrated sitcom called ‘Better Off Ted’ (2009), about an ‘everything’ tech company along the lines of General Electric (GE) or Philips. In the episode ‘Racial Sensitivity’ (Aishah, 2018), the company, Veridian Dynamics, installed automated sensors to open doors and activate devices such as water fountains. It worked great, with only one small problem – the system couldn’t recognise black people. Their solution was to employ low-skilled, low-paid white people to follow their black scientists around all day. What was absurd satire in 2009 has become a grim and tragic reality in 2023, where we now have to contend with self-driving cars that are less likely to detect children or people of colour (Hawkinson, 2023). The facial recognition systems used by the police have also been shown to be racially biased in review after review (Clark, 2023), not that this is stopping the rollout.

This is a classic example of the ‘garbage in, garbage out’ problem in computing. The training data used for AI models is often full of biases, from favouring whiter skin tones to preferring masculine forms when translating between languages. Attempts have been made at creating clean, unbiased data sets, but this task has often been outsourced to poorly paid, unskilled workers in the developing world, who are given the soul-crushing job of identifying and removing hate speech and explicit pornography. It is one of the hidden human costs of our ‘AI revolution’, and notably the same model that social media companies have used to hide the human costs of content moderation.
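Here is a deliberately crude sketch, with entirely invented data, of how ‘garbage in, garbage out’ plays out: any model trained on a skewed corpus will reproduce the skew as if it were fact, which is exactly how translation systems come to prefer masculine forms.

```python
from collections import Counter

# Invented mini-corpus in which 'doctor' is skewed towards 'he'.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
]

# Count which pronoun precedes each role word in the training data.
pronouns_by_role = {"doctor": Counter(), "nurse": Counter()}
for sentence in corpus:
    words = sentence.split()
    pronoun, role = words[0], words[-1]
    pronouns_by_role[role][pronoun] += 1

# The 'model' faithfully reproduces the skew in its training data: a
# translator built on these statistics would render a gender-neutral
# word for 'doctor' as 'he' three times out of four.
for role, counts in pronouns_by_role.items():
    total = sum(counts.values())
    for pronoun, n in counts.items():
        print(f"P({pronoun} | {role}) = {n / total:.2f}")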

Another unethical aspect of generative AI to consider is that the quality of a model’s outputs is directly related to how much data can be fed into it. The entire open internet is now regarded by Google, OpenAI, and others as fair game. Earlier this year, Reddit came in for a lot of criticism after making changes to their API pricing model which effectively killed off third-party apps. This was, at least in part, because those APIs were also being used to give AI models access to the entirety of Reddit’s community forums. This data theft is particularly egregious when it comes to visual art. It can take artists years of hard work to develop their skills and styles, only for generative AI companies to regard everything they have put online, copyright notwithstanding, as fair game for training and imitation.

Finally, we must consider, though we so often fail to, the ecological impact of generative AI. Both the training of models and the production of outputs require enormous data centres and huge amounts of computing power, which in turn require electricity to run and water to cool. A recent Associated Press (AP) study found that Microsoft and Google had substantially increased the amount of water their data centres consume over the past couple of years, with a typical ChatGPT session using as much water as a 500ml bottle (O’Brien and Fingerhut, 2023). Woolly claims about being carbon and water neutral by a future date picked by throwing a dart at a calendar ring hollow, unless they are also looking at ways of breaking the laws of thermodynamics.

Photo by Lee-Sean Huang from Flickr

Jobs Will Change, Rather Than Be Eliminated

Is AI coming for our jobs? While some industries are certainly going to be hit, I am confident that the claims of mass disruption in the information economy are wildly overblown. Instead, I think we will see many more jobs changing from production to verification (Dzieza, 2023). Two things lead me to think this way. The first comes from looking at an area where disruption has already begun – language translation. In the 1950s and 60s, when the AI field was nascent, machine translation was thought to be a relatively easy problem to solve. It wasn’t until the 2010s, when the internet was making massive sources of information readily available in different languages, that major breakthroughs were finally made. Machine translation has got to the point of ‘good enough’ for most purposes, though not where the stakes are higher, such as in business and politics. In those areas, human translators remain essential for verification, and for providing the nuance and context that only comes from an intelligence situated in a society.

Secondly, however impressive generative AI may appear on the surface, it is utterly incapable of original thought and creation; it is limited to reworking whatever it has been trained on. That ‘spark’ of originality which human intelligence seems to be uniquely capable of is related to the type of abductive reasoning discussed above, and until someone can offer a theory of how we do this, there is no hope of being able to reproduce the process in a computer. Also left out of the AI hype are notions like desire and aspiration. As human beings, we have an innate drive to create things, from works of art to new inventions such as generative AI itself. This article could have been written by ChatGPT, but ChatGPT doesn’t want to write it. I did. I wanted to engage in an act of writing and creation to help put my own thoughts in order, and then to share those with others as part of the ongoing cultural debate.

Computer programs are incapable of doing anything without human intentionality to initiate the process. So, while we are inevitably going to be subjected to an AI-written Hollywood film at some point, I suspect it will just as inevitably be quite bland and unoriginal. Though that still may be an improvement on the current glut of generic superhero movies.

OpenAI logo and screen
Photo by iammottakin from Unsplash

Detection is a Lie

In the higher education sector, discussion has largely focused on student use, and specifically on the potential of ChatGPT to aid in cheating. Everyone wants a silver bullet for this problem, a simple detector that will tell you whether a piece of writing has been produced by generative AI or not, especially those of us who are having to deal with a massive increase in student referrals for alleged academic misconduct. Unfortunately, it is just not going to happen.

Back in March, when the new hype wave was at its peak and I was marking my own students’ work, the first claimed detectors started to appear. I decided to do a little careful, and anonymous, experimentation. One assignment came back with a reported confidence of over 90% that it was AI generated. However, knowing the student and their style of writing from previous submissions, I was doubtful. Another cause for doubt was that the student has English as a second language. Sure enough, studies soon followed showing that not only were detectors practically useless (Williams, 2023), they were particularly bad when a paper was written by someone with English as an additional language. In the run-up to the new academic year, even OpenAI have stated that detection is futile. Turnitin claim that their generative AI detector is built on different principles and will work; the early signs are not encouraging (Crockett, 2023). I eagerly await the experience of those few institutions who have taken the leap of faith with them, just as I do those few with no faith whatsoever who have banned the use of generative AI outright. Good luck! The hard solution is, of course, to redesign assessment with authenticity and originality in mind. That is easier in some subject areas than others.
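To see why this keeps failing, consider a deliberately crude Python caricature of the stylometric approach. This is my own invention; no real product, Turnitin’s included, works exactly like this, but the family resemblance is there. It scores text as ‘AI-like’ when sentence lengths are uniform and vocabulary is repetitive, which is precisely the kind of careful, formulaic prose many students with English as an additional language are taught to produce.

```python
import statistics

def naive_ai_score(text):
    """Score text as 'AI-like' when its sentence lengths are uniform
    (low 'burstiness') and its vocabulary is repetitive. A caricature:
    real detectors use model-based statistics such as perplexity, but
    they share the flaw of measuring style rather than authorship."""
    sentences = [s.split() for s in text.split(".") if s.strip()]
    lengths = [len(s) for s in sentences]
    words = [w.lower() for s in sentences for w in s]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    diversity = len(set(words)) / len(words) if words else 0.0
    # Low variation and low diversity push the score towards 1.
    return round(1.0 / (1.0 + burstiness * diversity), 2)

formulaic = ("The essay has three parts. The first part is history. "
             "The second part is theory. The third part is practice.")
varied = ("I expected the history to bore me. It did not. The sprawling "
          "theory section, full of tangents, held my attention far longer "
          "than the tidy practice chapter ever could.")

print(naive_ai_score(formulaic))  # high: flagged as 'AI-like'
print(naive_ai_score(varied))     # low: passes as 'human'
```

The careful, regular writing scores as ‘more AI’ than the florid prose, even though both were written by a human. The heuristic is wrong, and so, in practice, are the products.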

As well as the impossibility of putting the genie back in the bottle, I would also argue that students who don’t use or have access to generative AI will be disadvantaged when they graduate and go into industry, where there is going to be far less analysis and ethical contemplation. There, they will need to know how to use the tools in exactly the same way they are expected to be able to use Word and Excel. Further, as things currently stand, those who have the means to pay for premium subscriptions have an advantage over those who can only use the free versions. There is therefore an argument to be made, in the name of equity, never mind equality, that all students should have access to generative AI tools. What brave institution will be the first to entice students with a site licence to ChatGPT? I feel like we’re all holding off, waiting for Microsoft (who have invested over $10 billion in OpenAI) to bring it into Office 365 with their Copilot tool (Warren, 2023), which will make it a fait accompli.

In Conclusion

Generative AI systems like ChatGPT have become exceptional at what they do, but only in a very narrow field, just like every AI program we have ever been able to create. General artificial intelligence which can work across domains and applications is a very long way off, and the technical hurdles may even prove insurmountable, for sound philosophical reasons I’ve barely touched on here. For a more detailed exploration, I would highly recommend The Myth of Artificial Intelligence by the philosopher and computer scientist Erik Larson (Larson, 2021).

In the meantime, those of us working in academia need to adapt by changing how we assess, while coping with the ever-increasing pressure of academic misconduct boards. We need to be vigilant against false solutions, and it warms my heart a little to see the resistance to supposed technological solutions (Quach, 2023).

Finally, I will leave you with one last recommendation. The always excellent Tech Won’t Save Us podcast (2020) has featured some great episodes on AI recently, including an interview with Timnit Gebru, former co-lead of Google’s Ethical AI team (Gebru, 2023).

References

  • Ahmed, N. (2023). Why the AI Doomers Are Wrong. [online] Byline Supplement. Available at: www.bylinesupplement.com/p/why-the-ai-doomers-are-wrong [Accessed 27 Oct. 2023].
  • Aishah, T. (2018). Better Off Ted ‘Racial Sensitivity’. [online] YouTube. Available at: www.youtube.com/watch?v=XyXNmiTIupg [Accessed 27 Oct. 2023].
  • Better Off Ted: Racial Sensitivity, (2009). Garfield Grove Productions; 20th Century Fox Television. 9 Apr.
  • Clark, L. (2023). Facial recog system used by Met Police shows racial bias. [online] The Register. Available at: www.theregister.com/2023/05/25/facial_recognition_system_used_by/ [Accessed 27 Oct. 2023].
  • Crockett, R. (2023). Testing the AI detectors. [online] Learning Technology Blog. Available at: blogs.northampton.ac.uk/learntech/2023/09/26/testing-the-ai-detectors/ [Accessed 27 Oct. 2023].
  • Dzieza, J. (2023). Inside the AI Factory. [online] The Verge. Available at: www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots [Accessed 27 Oct. 2023].
  • Gebru, T. (2023). Don’t Fall for the AI Hype. [online] Tech Won’t Save Us. Available at: techwontsave.us/episode/151_dont_fall_for_the_ai_hype_w_timnit_gebru.html [Accessed 27 Oct. 2023].
  • Hawkinson, K. (2023). The pedestrian detection systems in self-driving cars are less likely to detect children and people of color, study suggests. [online] Business Insider. Available at: www.businessinsider.com/self-driving-cars-less-likely-detect-kids-people-of-color-2023-8 [Accessed 27 Oct. 2023].
  • Knight, W. (2023). OpenAI’s CEO Says the Age of Giant AI Models Is Already Over. [online] Wired. Available at: www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/ [Accessed 20 Oct. 2023].
  • Larson, E.J. (2021). The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Cambridge, MA: Harvard University Press.
  • Mehta, I. (2022). I found out why metaverse avatars don’t have legs. [online] TNW Plugged. Available at: thenextweb.com/news/metaverse-no-legs-meta-microsoft-analysis [Accessed 27 Oct. 2023].
  • O’Brien, M. and Fingerhut, H. (2023). Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water. [online] AP News. Available at: apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4 [Accessed 27 Oct. 2023].
  • Quach, K. (2023). Some universities reject Turnitin’s AI-writing detector. [online] The Register. Available at: www.theregister.com/2023/09/23/turnitin_ai_detection/ [Accessed 27 Oct. 2023].
  • Tech Won’t Save Us, (2020). [Podcast] Harbinger Media Network.
  • Warren, T. (2023). Microsoft’s new Copilot will change Office documents forever. [online] The Verge. Available at: www.theverge.com/2023/3/17/23644501/microsoft-copilot-ai-office-documents-microsoft-365-report [Accessed 27 Oct. 2023].
  • Williams, R. (2023). AI-text detection tools are really easy to fool. [online] MIT Technology Review. Available at: www.technologyreview.com/2023/07/07/1075982/ai-text-detection-tools-are-really-easy-to-fool/ [Accessed 27 Oct. 2023].
