Mentoring


Raima Muhammad – Are machines growing more dangerously sentient due to large language models?


When was the last time you called a company and directly spoke to a human? When was the last time you completed a sentence when using Google? This only shows the evolution of AI tens of gigabytes in size trained on enormous amounts of data predicting what word comes next while carrying out our searches on different platforms.
The usage of Large Language Models has grown drastically over the past decade as different organizations compete in developing newer and bigger forms. In July 2021, Open AI unveiled GPT-3, a language model that is trained to predict the next word in a sentence.GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text in contrast to its ancestor, GPT-2, which was 100 times smaller, at 1.5 billion parameters and it is able to perform tasks that it was not explicitly trained on like translating sentences to various languages with few to no training examples with features like text summarization, chat bots search and code generation which are absent in earlier models.
Large Language Models will continue to get larger, more powerful and more versatile but it does not mean that they won’t have short comings as expressed in GPT-3 where it generates racist, sexist, and bigoted text, as well as plausible content that when inspected further is factually in accurate, undesirable or unpredictable and sometimes can be used to generate false essays, tweets and news stories. A question then arises about whom to hold accountable for possible harms resulting from poor performance, bias or misuse.
Let’s analyze one of the greatest thought experiment called the Turing Test. In 1950, the British mathematician and cryptanalyst Alan Turing published a paper outlining a provocative thought experiment.

The main aim of this test was to see whether machines were able to think for themselves. If the machine were able to consistently fool the interviewer into believing it was human for about 30 percent of the time, then it would be intelligent. Garry Marcus, a cognitive scientist and co-author of the book “Rebooting AI” argues that this test does’nt test intelligence but rather the ability of a given software program to pass as human. This is true because over the past couple of years different AIs have tried to beat it i.e. Eliza, Parry, Eugene etc.
In 2014, Eugene Goostman passed the legendary Turing Test tricking 33% of a panel of judges into believing he was a true 13 year old Ukrainian boy during the course of a five-minute chat conversation. Not only is this test outdated as it was defeated but also is a red flag as it is fundamentally about deception and any system capable of passing is carries a danger to deceiving people.


The only way we then have to be cautious about this LLM’S is to amend the Turing Test by making it to test for the speed of the LLM’s in order to continuously evolve the LLM’s and to check if the mechanism in which the LLM works would include bringing up of controversial issues and to take the final extent that if the LLM fails the test it must not be issued out to the public. Hence forth taking the moral high ground of public protection and evolution.

Leave a Reply

Your email address will not be published.