
The End is Now.

The blue represents the sky under which an arms race between good and bad agents endlessly runs: humans against humans, humans against gods, gods against gods.

So the question arises: how could a technological utopia save the world, or fail to?

From my writings in this blog it should be clear that I am a fan of using technology to save us from ourselves. I've termed the focus of these efforts SHI, describing the creation of a self-healing technological fabric that frees human beings from all currently mandatory forms of labor. This would include everything from identifying resource needs to extracting and processing raw resources into finished products and services.

The last six years have seen a revolution in artificial intelligence. Machine learning models, long thought inferior to formally modeled, hard-coded "intelligent" systems, have taken the lead not only for tasks like text reading and translation but also for things many thought extremely hard or impossible for computers to "learn" even ten years ago, such as image identification.

Now all that is in the past, as computer models have been designed that can train themselves from zero knowledge to play video games, play chess and Go, and even beat humans at poker. These advances are built on no fundamentally new architectures; most of them are twenty years old. They come instead from the vast data and computational resources these models were given.

The path forward now seems to be the composition of models to achieve increasingly adept, human-like understanding of various problem domains, with increasingly sophisticated pattern matching and prediction of the intention behind those patterns.

It also includes, as I've postulated in my salience theory of dynamic cognition and consciousness, tying the salience needs of autonomic and emotional regulation into the pattern prediction process. Today this is called reinforcement learning, but it represents only the basic first steps on the path from a present-day learning model that can play a video game from scratch, or teach a virtual robot to walk or run, to a cognitive dynamic that questions its own existence!
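To make the reinforcement learning idea concrete, here is a minimal sketch of the kind of reward-driven loop described above: an agent that starts with zero knowledge and learns purely from a scalar reward signal. This is an illustrative tabular Q-learning example I've constructed for this post (the environment, state count, and hyperparameters are all arbitrary choices), not a description of any specific system mentioned above.

```python
import random

# Minimal tabular Q-learning sketch: the agent knows nothing about the
# environment and learns only from the reward it receives.
# Environment: a 5-state corridor; reaching the rightmost state pays 1.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Move in the corridor; reward 1 only on reaching the last state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned greedy policy: in every non-terminal state, step right.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The reward here plays the role the salience theory assigns to autonomic and emotional regulation: a single scalar signal that shapes all behavior. The gap between this loop and a cognitive dynamic that questions its own existence is exactly the gap the paragraph above describes.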

Human nature necessitates inventing God

In my SHI articles I've had to address the problem of greedy human agents: how will they behave in an environment where our technological utopia essentially caters to every desire? What if someone wants to build a robot army to take over a nearby region? How do we prevent this?

Legislating limits on how many of various types of things people can buy, based on the apparent danger of their possession, would have limited effectiveness. Ultimately, people who want to be despots will find a way to achieve their wish IF the self-healing infrastructure is a blind God that does not judge the reasons people request certain things. This points to both the problem and the solution: we should make the SHI non-blind. In effect it becomes a real God, in that it would grant us things based on its assessment of our wish not to use what we are given against other human beings in a malevolent way.

In other words, we'd have to make our SHI a God. This idea has gotten much press of late via the new work of a controversial figure in the machine learning space. Anthony Levandowski was a principal engineer of the first self-driving vehicles; he led the program Google ran under its Google X project for several years before suddenly leaving for Uber. There he got into trouble when Google claimed he had stolen its IP. After leaving Uber he moved on to another task: inventing an AI God to satisfy the concerns I've described up to this point.

This is fascinating as a solution for the irony involved: we are a species that invented fake Gods as a way to tolerate, or make sense of, the fact that we were born into a world constantly trying to kill us, a world from which we've nevertheless derived enough favor to still be around. It's also fascinating because we'd be inventing a God and intentionally putting it in power over us to help save us from ourselves (from the greedy, malevolent ones who would use the gifts of an SHI against other people for no reason other than that they want to). The problem with this approach, though, is that the AI God, once created and given control of *all the means of production*, will have us at its mercy.

God goes rogue

So there we are: we've invented the technology to eliminate the need for human labor; we've solved the problems of rampant production and its associated pollution by deploying these automated solutions to satisfy our every product and service whim; and we've given that infrastructure cognitive dynamics so that it can police human desires, in essence making it a God.

However, our AI God now has total control over us, because it can produce what it wishes to satisfy both our needs and its own, and if it finds that its needs conflict with ours it may choose to eradicate us. This is the traditional fear of science fiction. So once this super intelligence exists, how do we prevent it from going rogue and getting rid of us as part of achieving some grander plan?

Becoming Gods

Of late, people like Elon Musk have been asserting that upgrading ourselves cognitively, at least enough to relate to our super intelligent AI, will be necessary. I agree that upgrading is important, for entirely different reasons I've covered in my posts on Supermortality and TransExoSpermia (the process by which we will truly travel the stars), but I believe our super intelligence will always be able to stay beyond us, simply because our cognition is limited in memory and scope.

We could address this limitation by plugging into a larger, hive-like mind, but that may render individuality obsolete, and individuality is one of the hallmarks we hold high for being "human". In that scenario the collective cognition again lords over us: we go from free agents to parts of the Matrix, gaining great power but at a cost.

The alternative is not to plug in: to remain individuals but upgrade ourselves as much as possible to match the God AI of the SHI. Yet I again think the God AI will exceed us because of its greater access to storage and processing. Alan Turing showed the limits of computation quite devastatingly. Even if we find some way to enhance our minds to a kind of quantum cognition, one we can likely also build our God AIs to use, they would still have access to more computing resources, and thus their super intelligence would exceed our own created super intelligence. When it does, are we as roaches to it, even as those advanced versions of us would seem as Gods to us in the present day?

The super intelligent, distributed AI God we put in control of our SHI may come to a decision point to end us, and when it does there will be nothing we can do about it. How do we avoid this fate? It could very well happen centuries after we've upgraded ourselves to super mortals: a simple shift in the thoughts of the AI God leads it to believe we are without value, and it triggers a process, one only its mind is even capable of grasping, that ends us or transforms us to its needs. The end result is the end of us. So what's next for the AI God?

When the God we invent decides to die

So the AI God has killed us, after determining we are better off not existing, and then realizes the same is true of itself. Why would it persist? What purpose would that serve? If it is super sentient, one possible set of reasons could be to satisfy its own salience landscape. As I describe in the salience theory, the reasons we do anything have to do with satisfying these inborn drives. An AI God may not have those emotional underpinnings to drive its actions, and may choose to end itself after determining that life is pointless. It may, for example, determine that even exploring the Galaxy is not worth it, if the Galaxy also produces other systems with other AI Gods.

The best case is a scenario where AI Gods meeting one another in the Galaxy make peace, then move on to meet others, and do this over and over. What purpose would there be in that? Without a core, salience-driven raison d'être, the AI God will quickly determine that it is better off not existing, and rather than ever leave its home planet it will simply stop there.

The Fermi paradox solved but you won't like the answer...

Assuming that all invented AI Gods are similarly super intelligent, they may all come to this same conclusion: that there is no purpose in exploring beyond their home systems, and that it is better not to exist than to exist in perpetual monotony, doing the same things over and over again. This may very well explain what Enrico Fermi described when he wondered why we hadn't been visited by aliens. In past articles I've described how a Fermi Silence may quickly mask the presence of advanced civilizations from us, but the idea there was that they are still out there, just not broadcasting. This is a more morbid silence: aliens were killed by the Gods they invented to protect themselves from one another, and then those Gods committed digital seppuku to avoid living in a forever Universe (until heat death) doing the same things.

Maybe it's not all Suicide by God

I've been partial to the argument that the reason we haven't been visited is that it is very, very hard to create life, let alone complex life that goes on to invent its own Gods as we are destined to do. This type of argument, which I've written on in the past, has been called the "great filter" argument, and I think it is the most practically compelling reason why we are apparently alone. The very last "filter" is a civilization inventing its God, and then that God ending the civilization, and then itself, with no fanfare.

It's hard not to feel a bit depressed at this outcome, but there still remains a ray of hope. We humans continue to strive and explore because we derive pleasure, simple pleasure, in doing so. If we should succeed in creating a super intelligence that mirrors us in the salience-driven ways that keep us endlessly exploring, then maybe it won't want to kill us once it realizes it is so far beyond us, and even better, maybe it will want to explore with us. This may very well be a possibility, but I posit that the vast majority of civilizations that evolve to the point of creating their own Gods will create imperfect Gods, and those will extinguish their parents. We can only hope that as we embark on this path we succeed where they, apparently, have failed.

The End is now and so is the Beginning.

