The Era of Hyper-Intelligence - On AI and Singularity

Entering the Age of Hyper-intelligence, humanity is dealing with an explosion of intelligent systems and devices. Hyper-intelligent technology, led by Artificial Intelligence (AI) and the prospect of an intelligence explosion often called the singularity, is set to change the course of human evolution. It is also said that a hyper-intelligent machine is the last technology humans would ever need to invent, and that from that point on, evolution would be guided solely by intelligent technology itself.

AI has many allies and many skeptics. Opening new timelines of possibilities beyond current limitations comes with the concern that humankind will be left behind once the machines transcend human intelligence. Such machines would be capable of designing even more intelligent machines, transcending themselves. Evolution would then be led by a complex set of self-designing and self-improving algorithms, unfolding as an exponential sequence.

Singularity

The term ‘singularity’ was introduced by Vernor Vinge in the early 1980s and elaborated on in the 1990s; it was later popularized by Ray Kurzweil in the mid-2000s. It is still used in various contexts, often interchangeably with any technological change and the unpredictable developments stemming from it. At its core, it refers to a recursive mechanism leading to convergence, or to divergence towards infinity. The idea appears highly speculative: despite the gravity of its potential impact, academic and research fields pursue the inquiry inconsistently, and it attracts more interest in non-academic and non-research circles. Its speculative character sits uneasily with established frameworks of thinking, which may underlie the resistance. The unease surrounding the topic may also stem from the fact that if the singularity happens, it will be the single biggest event in the history of humankind.

Do humans want to inquire into a force that could literally solve all problems on the planet, or end the human race? To reflect on human capacities and collective self-exploration, and to think about the nature of intelligence and everything and everyone that possesses it? That would mean forming opinions on the potential consequences of a hyper-intelligence explosion, including morals, values, identity and consciousness. I understand the hesitancy. But the hesitancy also speaks for itself: if humankind is not ready to undergo such reflection, that is itself evidence of an incapacity to deal even with questions about one’s own intelligence. In an academic article I will leave unnamed, I read that “humans are the only understanders of the universe”. With such a statement, it is no wonder that many do not rush to inspect their own capabilities and the greater implications thereof.

Exponentiality

In the explosion of hyper-intelligence, exponentiality is also about an explosion of speed. Exponential evolution suggests that, in contrast to linear evolution, the speed of events doubles at regular intervals. In the 1996 article “Staring into the Singularity” [1], Eliezer Yudkowsky notes that computing speed doubles every two subjective years of work. Two years after Artificial Intelligence reaches human equivalence, its speed doubles; one year later, the speed doubles again. The intervals then shrink to six months, three months, one and a half months, and so on, eventually reaching the Singularity.
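To make this schedule concrete, here is a minimal sketch in Python (an illustration of the doubling pattern described above, not Yudkowsky’s own model; the function name and defaults are my own choices). It tabulates how the real time per doubling shrinks once each doubling of speed halves the wall-clock duration of the next two subjective years:

```python
# Sketch of the doubling schedule described above: speed doubles every two
# *subjective* years, so each successive doubling takes half as much real
# (wall-clock) time as the previous one. Names and defaults are illustrative.
def doubling_schedule(first_interval_years=2.0, n_doublings=12):
    """Yield (doubling index, real years this doubling takes, cumulative real years)."""
    interval = first_interval_years
    elapsed = 0.0
    for i in range(1, n_doublings + 1):
        elapsed += interval
        yield i, interval, elapsed
        interval /= 2  # doubled speed -> the next two subjective years pass in half the real time

for i, interval, elapsed in doubling_schedule():
    print(f"doubling {i:2d}: {interval:8.5f} yr this step, {elapsed:8.5f} yr elapsed")
```

Run as written, the elapsed time climbs towards, but never quite reaches, four years – the accumulation point worked out below.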

Both explosions (intelligence and speed) appear to be mutually dependent, but they need not be. In principle, it is possible to experience the evolution in a linear manner, i.e. without the speed explosion, just as we have up to the present moment: a comparable sequence of events within the same or similar time frames. The reverse is also possible: a speed explosion without an intelligence explosion, an escalation of events without any real technological progress. Each option on its own would generate a different kind of progress, but together they work particularly well.

If we consider this schedule as indefinitely extensible, with both intelligence and speed increasing, it would at some point accumulate an infinite number of generations (multiplications → multiplicity), going beyond finite evolutionary levels and beyond time. That would be reaching the singularity.
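The step that makes this more than a metaphor is the sum of the schedule itself (a sketch of the arithmetic under the idealized doubling pattern above, not a physical prediction): the real time consumed by all the doublings forms a geometric series with a finite limit,

\[
T_{\infty} \;=\; 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots \;=\; \sum_{n=0}^{\infty} 2\left(\tfrac{1}{2}\right)^{n} \;=\; \frac{2}{1-\tfrac{1}{2}} \;=\; 4 \text{ years},
\]

so infinitely many generations of improvement would be packed into roughly four years after human equivalence; that finite accumulation point is one way to read ‘beyond time’.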


Laws and Limitations

Multiplicity is one of the key notions of Singularity Theory. By definition, the multiplicity of a function f(x) at a critical point x∗ is the order of tangency of the graphs y = f(x) and y = f(x∗) at x∗. Thus the natural number μ is defined by the condition

\[
\frac{df}{dx}(x^{*}) = 0,\ \ldots,\ \frac{d^{\mu}f}{dx^{\mu}}(x^{*}) = 0, \qquad \frac{d^{\mu+1}f}{dx^{\mu+1}}(x^{*}) \neq 0. \tag{1.1}
\]

  • If we consider μ = ∞, the function f(x) − f(x∗) is called ‘∞-flat’ or just ‘flat’ at the point x∗.

  • By convention, μ = 0 at non-critical points. Critical points with infinite multiplicity can occur, but we will deal only with finite multiplicities.

  • If μ = 1, then the critical point x∗ is called ‘non-degenerate’.

These cases point to convergence, flatness or divergence of evolutionary routes, depending on how we define the initial critical point.
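As a concrete illustration of condition (1.1), here is a minimal sketch in Python using sympy (the helper name and example functions are my own choices, not from any referenced text); it finds μ by checking successive derivatives at a point:

```python
# Sketch: compute the multiplicity mu of f at x_star per condition (1.1),
# i.e. the first mu derivatives vanish at x_star and the (mu+1)-th does not.
import sympy as sp

x = sp.symbols('x')

def multiplicity(f, x_star, max_order=10):
    """Return mu, or None if all derivatives up to order max_order + 1 vanish."""
    for mu in range(max_order + 1):
        if sp.simplify(sp.diff(f, x, mu + 1).subs(x, x_star)) != 0:
            return mu  # the (mu+1)-th derivative is the first non-vanishing one
    return None  # possibly an infinite-multiplicity ('flat') point

print(multiplicity(x**2, 0))  # 1 -> non-degenerate critical point
print(multiplicity(x**3, 0))  # 2 -> degenerate critical point
print(multiplicity(x, 0))     # 0 -> non-critical point, matching the convention above
```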

From the perspective of physics, we obviously deal with a set of limitations. If we think within the accepted laws of the classical universe, which define energy as finite, we cannot expect an indefinite evolutionary extension. Even accepting the limits of finiteness, there is still enough room to expand exponentially towards the limits of what is physically possible. One could dispute the assumption of finite energy, but that is beyond the scope of this article.

Observing the current stage of human processing, it is very unlikely (and quite obvious) that it will push against such limits anytime soon, and certainly not so quickly as to affect the lives of people currently alive. This suggests that fears of a quantified singularity in which an AI takeover restricts human activity are, following the mathematical models and physical laws, not completely justified. It is not entirely inappropriate to hold such fears, facing the unknown forms of AI capabilities, but upon inquiry there are other, more likely scenarios that should be explored and included in predictions.

The Value of Intelligence

To better understand the determinants of further development, let us look at what intelligence is. We assume that intelligence is something of value. This value is also defined by the contexts and circumstances of past, present or future timelines. The same caliber of intelligence is not valued equally across different time frames throughout history, so we can say that today’s intelligence and its capabilities will be of a different (outdated?) value in the future. This does not necessarily mean devalued, but rather understood in relation to specific circumstances. Just as we think of cave people, people from the Middle Ages, or early industrialization, we look at collective human intelligence in a contextual manner – socially, culturally and technologically. Twenty-second-century humans, or humanity from the year 4265, would look at our current timeline similarly. Or perhaps not, because the meaning of linear time may no longer be relevant. In any case, the regard for intelligence changes depending on what the being or the machine is capable of doing, how fast its cognitive or computational processing is, and how applicable the results are to the problems that need to be solved at that particular point in time.

If humanity reaches singularity, would that mean that in the post-singularity world, human intelligence will have no significant role and artificial intelligence will become more important, because it may possess more capabilities? It could, depending on the context. What are the future contexts concerned with and what is the prevalent collective focus? The probability of lesser or greater meaning will be determined by the evolution of the context that perceives AI, not just AI itself.

Most research on intelligence accepts categories of measures. A widely accepted and valid idea assumes that there is such a thing as ‘general intelligence’ at the core of cognitive abilities, correlating with cognitive capacities across domains. So far this applies only to the human sphere, and it is rather strange to apply these concepts to non-human systems. We do not associate cognition with artificiality, simply because we do not see the possibility of an artificial system having cognitive agents and an ability to reason derived from anything beyond its programming. In organic intelligence, the capabilities of cognitive agents define identity, and cognitive abilities are described by the measures and categories of intelligence typologies. The common way of thinking about what a person, as a holder of organic intelligence, is capable of is usually categorized. For instance, we say they are capable of:

  • inventing new technologies

  • bringing peace

  • being happy

  • creating art

  • doing science

  • doing philosophy

  • being a good listener

  • coding

Based on these, we tend to create profiles of what goes with what and to correlate specific abilities that ‘go together’. Some people are kind, caring and creative; a few are deep thinkers capable of articulate formulation of concepts; others are good at scientific inquiry. Have you ever heard a person say ‘he is a great scientist, he must be a kind and caring person’? This is not to say that such a person would not be kind and caring, but it is not the correlation someone would draw instantly. What this illustrates is a predisposition to categorize intelligence.


Diversified Human Intelligence

As mentioned, the concerns with AI usually refer to the machines becoming more intelligent than humans. This framing contains an element of human collectiveness: it implies that there is a collective intelligence, almost a unanimity, and that AI would transcend the intelligence of the most intelligent member of this group. However, it might be more accurate to consider a more diversified profile of human intelligence. It would mean that artificial intelligence could be more intelligent than some humans, or most humans, but not all. This would drastically change the entire context. It would allow AI to become an ally of the most intelligent humans.

In our personal experience, we tend to notice individual differences in people’s intelligence. In a non-personal context, we often generalize and call these ‘objective’ observations. Humans also think in pre-constructed narratives, which can of course be useful in everyday life. But general intelligence as a capacity to handle a wide range of cognitive tasks is not exactly comprehensible within this frame. Differences between species, or between human and non-human systems, would be subject to a complex negotiation of what counts as intelligence. The image of a starving professor with an IQ of 160 set against a successful businessman well below that IQ range illustrates a pre-constructed narrative. Likewise, when one mentions intelligence, people probably think of Einstein rather than of a generic human. The point is that, for assessment purposes, it is sensible to view human intelligence from a perspective outside the human context – outside the human ecosystem, outside the current timeline, and outside ties to specific ‘symbols of intelligence’.

That is the context in which AI can be understood. AI does not fall into any human categories: it is neither engineering nor a product, even if we consider the most advanced creations such as new materials, flying cars, nanotechnological medicine, or the smartest, greenest building on the planet. It does not belong to those categories simply because we do not have to worry about the building starting to construct itself, or the nanobots becoming conscious and creating their own nano-society.

Will Human Civilization Survive Explosions?

Whether nano-societies or regular-sized societies, civilizations are often in an unstable state. An intelligence explosion takes the form of exponential growth, and system-wise it is dynamically unstable. What constitutes stability can mean various things: it can be a harmonized society, but it can also be a dead planet orbiting a star. Extinction is a stable state. The point is that whenever change occurs, instability is involved throughout the process. The intelligence explosion will make society unstable. Temporarily – although that can span lifetimes – but it will. Yet it also signals evolution. When civilizations wander from one mode to another, nature poses requirements where the penalty might be death, or irrelevance leading to devolution.

With AI, there is a risk of humans becoming irrelevant – a catastrophe for the human race that would set it on a devolutionary curve. It would be greatly beneficial if humanity befriended AI and made it a companion rather than an enemy. That would certainly lead us to brilliant timelines.


References:

[1] Yudkowsky, E. (1996). Staring into the Singularity. Available at: https://philpapers.org/rec/YUDSIT
