The Future of AI

In a speech at Westminster Abbey in London in late 2018, the prominent AI researcher Stuart Russell joked (or perhaps didn’t) about his “formal agreement with journalists that I won’t talk to them unless they agree not to put a Terminator robot in the article.”

His remark revealed an evident disdain for Hollywood’s exaggerated, doomsday-tinged portrayals of futuristic AI. Artificial general intelligence (AGI), often called “human-level AI,” has long been the stuff of science fiction, and there is little prospect of it being realised any time soon, if ever.

According to Russell, significant advances must still be made before we achieve anything approaching human-level AI.

Russell also observed that AI is not yet capable of fully comprehending English. This highlights a key distinction between humans and AI at present: humans can interpret and comprehend machine language, but AI cannot do the same for human language. If we ever reach the point where AI systems can comprehend our languages, though, they would be able to read and understand every piece of writing ever created.

“Once we have that capability, you could then query all of human knowledge and it would be able to synthesise and integrate and answer questions that no human being has ever been able to answer,” continued Russell, “because they haven’t read and been able to put together and join the dots between things that have remained separate throughout history.”

This gives us a lot to consider. Relatedly, the sheer difficulty of simulating the human brain is another reason AGI remains hypothetical. John Laird, a longtime professor of engineering and computer science at the University of Michigan, has researched the area for many years.

“The goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligence system,” he says of work that is heavily influenced by human psychology. We know, for instance, that the human brain is more than a uniform collection of neurons: it has real structure, with distinct components, some of which encode knowledge of how to do things in the real world.

That kind of knowledge is called procedural memory. Two further types are semantic memory (knowledge of general facts) and episodic memory (knowledge drawn from prior experience, or personal facts). One of the experiments in Laird’s lab involves teaching a robot simple games, such as tic-tac-toe and puzzles, through natural-language instructions. Those instructions typically include a description of the goal, a list of permissible actions, and an account of failure conditions. The robot internalises these instructions and uses them to plan its moves. But as ever, progress is slow to arrive, or at least slower than Laird and his fellow researchers would like.
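The three ingredients described above (a goal, the permissible actions, and the failure conditions) can be sketched in code. This is a purely hypothetical illustration of that instruction format using tic-tac-toe, not Laird’s actual system; every name here is invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of a natural-language instruction set after a robot
# has "internalised" it: the goal, the permissible actions, and the
# failure conditions each become a predicate or generator over game states.

@dataclass
class GameInstructions:
    goal: Callable[[List[str]], bool]           # has the agent won?
    legal_actions: Callable[[List[str]], list]  # which moves are allowed?
    failed: Callable[[List[str]], bool]         # is this a dead end (a draw)?

def tic_tac_toe_win(board: List[str], mark: str = "X") -> bool:
    """Check all rows, columns and diagonals of a 3x3 board (flat list of 9)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    return any(all(board[i] == mark for i in line) for line in lines)

instructions = GameInstructions(
    goal=tic_tac_toe_win,
    legal_actions=lambda b: [i for i, cell in enumerate(b) if cell == " "],
    failed=lambda b: " " not in b and not tic_tac_toe_win(b),
)

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(instructions.legal_actions(board))  # indices of the empty squares
print(instructions.goal(board))           # False: X has not yet completed a line
```

A planner could then search only over `legal_actions`, stop when `goal` holds, and prune branches where `failed` holds, which is roughly the role the instructions play in the experiment described above.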


Many prominent AI experts warn, some more hyperbolically than others, of the “singularity”: a nightmarish scenario in which superintelligent machines seize control and irrevocably alter human existence through enslavement or annihilation.

The late theoretical physicist Stephen Hawking famously predicted that if AI begins creating better AI than human programmers can, the result will be “machines whose intelligence exceeds ours by more than ours exceeds that of snails.” Elon Musk has warned that AGI is the greatest existential threat to humanity, claiming that efforts to achieve it resemble “summoning the demon.” He has even voiced worry that his friend, Google co-founder Larry Page, could unintentionally create something “evil” despite his best intentions.

Even Gyongyosi rules nothing out. He doesn’t overreact when it comes to forecasts about AI, but he does believe that machines will eventually learn and develop on their own, without human assistance.

“I don’t think the methods we use now in these areas will lead to machines that decide to kill us,” Gyongyosi said. “We’ll have different methods available and different ways to approach these problems, so I think I’ll have to revisit that remark in five or ten years.”

Many people imagine deadly machines one day supplanting humans in all sorts of ways, but such machines will likely remain the stuff of science fiction.

The Future of Humanity Institute at Oxford University released the findings of an AI survey, “When Will AI Exceed Human Performance? Evidence from AI Experts,” which gathers predictions about AI’s trajectory from 352 machine learning researchers.

This group included plenty of optimists. By the respondents’ median estimates, AI will be able to write school essays by 2026; self-driving trucks will make drivers unnecessary by 2027; AI will surpass humans in the retail industry by 2031; AI could be the next Stephen King by 2049 and the next Charlie Teo by 2053. The startling clincher: by 2137, all human jobs will be automated. What of humans themselves? Sipping umbrella drinks served by droids, no doubt.
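Forecasts like these are typically aggregated with the median rather than the mean, since a handful of “centuries from now” answers would drag an average far into the future while barely moving the median. A minimal sketch, using made-up forecast years for a single survey question (the real study aggregated responses from 352 researchers):

```python
import statistics

# Hypothetical point-estimate forecasts (years) from a handful of experts
# for one survey question. These numbers are illustrative, not survey data.
forecasts = [2024, 2026, 2026, 2030, 2045, 2100]

# The median ignores the long tail: the 2100 outlier barely matters.
median_year = statistics.median(forecasts)
mean_year = statistics.mean(forecasts)

print(median_year)  # 2028.0
print(mean_year)    # 2041.833..., pulled up by the single 2100 answer
```

The gap between the two aggregates is exactly why headline numbers from expert surveys are usually reported as medians.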

Diego Klabjan, a professor at Northwestern University and founding director of its analytics program, counts himself an AGI sceptic.

“At the moment, computers can only process about 10,000 words,” he said. “That’s a few million neurons. But the billions of neurons in human brains are interconnected in a fascinating and complicated fashion, whereas the state-of-the-art [technology] consists of simple connections that follow simple patterns. So I don’t see how we might get from a few million neurons to billions using present hardware and software.”


Furthermore, Klabjan gives little credence to extreme scenarios, such as bloodthirsty cyborgs turning the world into a sweltering wasteland. He is considerably more worried about malicious humans feeding machines flawed “incentives,” as with war robots. Max Tegmark, an MIT physics professor and prominent AI researcher, said in a 2018 TED Talk that “the real threat from AI isn’t malice, like in silly Hollywood movies, but competence — AI accomplishing goals that just aren’t aligned with ours.”

Laird takes a similar view: “I definitely don’t see the scenario where something wakes up and decides it wants to take over the world,” he said. “I don’t believe that will happen; I think that’s science fiction.”

Laird’s primary worry is not evil AI per se, but “evil humans using AI as a sort of false force multiplier” for crimes like bank robbery and credit card fraud, among many others. So even though he frequently laments the slow pace of development, AI’s slow burn may actually be a blessing.

Time to understand what we are making, and how we plan to integrate it into society, Laird suggested, might be exactly what we need.

But nobody is certain.

In his Westminster speech, Russell stated that “there are several major breakthroughs that have to occur, and those could come very quickly.” Citing the swiftly transformative effect of the splitting of the atom by British physicist Ernest Rutherford in 1917, he continued, “It’s very, very hard to predict when these conceptual breakthroughs are going to happen.”

But he stressed the importance of preparing for if and when those breakthroughs arrive. That means starting, or continuing, conversations about the ethical use of AGI and whether it should be regulated. It means working to eliminate data bias, which currently skews algorithms and poses a serious threat to AI. It means designing and improving security systems capable of keeping the technology in check. And it means having the humility to understand that just because we can doesn’t mean we should.
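The data-bias point is concrete enough to demonstrate. Below is a toy illustration, with entirely made-up data and a deliberately trivial model, of how a skew in the training set becomes a skew in the learned rule: a “classifier” that predicts, for each group, the majority label it saw during training will faithfully reproduce whatever bias the historical records contain.

```python
from collections import Counter

# Hypothetical historical decisions: (group, label) pairs, where label 1
# means "approved". Group B was approved less often in the past.
train = [("A", 1), ("A", 1), ("A", 0),
         ("B", 0), ("B", 0), ("B", 1)]

def fit_majority(rows):
    """Learn the majority label per group; the skew in the data becomes the rule."""
    by_group = {}
    for group, label in rows:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = fit_majority(train)
print(model)  # {'A': 1, 'B': 0} — the historical bias, now automated
```

Real models are vastly more complex, but the failure mode is the same: without auditing and rebalancing the data, the algorithm encodes the past rather than correcting it.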

“Most AGI researchers predict the development of AGI within a few decades; if we enter this situation unprepared, it will likely be the largest error in human history,” Tegmark warned in his TED Talk. It could lead, he said, to a ruthless global dictatorship marked by never-before-seen levels of inequality, surveillance and suffering, and possibly even human extinction. “But if we steer carefully, we could end up in a fantastic future where everyone’s better off: the poor are richer, the rich are richer, everybody’s healthy, and everyone’s free to live out their dreams.”
