
The End is Now.

The blue represents the sky under which an arms race between good and bad agents endlessly runs: humans against humans, humans against gods, gods against gods.



So the question begins: how could a technological utopia save the world, or fail to?

From my writings in this blog it should be clear that I am a fan of using technology to save us from ourselves. I've termed the focus of these efforts SHI: the creation of a self-healing technological fabric that frees human beings from all current mandatory forms of labor. This would include everything from identifying resource needs to extracting and processing raw resources into finished products and services.

The last six years have seen a revolution in artificial intelligence. Machine learning models, long thought inferior to formally modeled, hard-coded "intelligent" systems, have taken the lead not only for tasks like text reading and translation but also for things many thought extremely hard or impossible for computers to "learn" even ten years ago, such as image identification.

Now all that is in the past, as computer models have been designed that can train themselves from zero knowledge to play video games, play chess and Go, and even beat humans at poker. These advances are built on no new fundamental breakthrough in architectures, as most of them are 20 years old; they come instead from the vastly greater access to data and computational resources these models were given.
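The "zero knowledge" idea above can be made concrete with the simplest member of that family, tabular Q-learning. The sketch below is purely illustrative (the corridor environment and all constants are invented for this post, not taken from any real system): the agent starts with an all-zero value table and, from reward alone, learns to walk toward the goal.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: the agent knows nothing at the
# start and learns, purely from reward, that stepping right reaches the goal.
N_STATES = 5          # corridor cells 0..4; reward waits at cell 4
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: q[(s, a)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy steps right in every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

The systems in the headlines replace the table with deep networks and the corridor with Go boards or pixel screens, but the learning loop is recognizably this one.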

The path forward now seems to be the composition of models to effect an increasingly adept, human-like understanding of various problem domains, with increasingly sophisticated pattern matching and prediction of the intentions behind those patterns.

It also includes, as I've postulated in my salience theory of dynamic cognition and consciousness, tying the salience needs of autonomic and emotional regulation into the pattern prediction process. Today this is called reinforcement learning, but it represents only the basic first steps on the path from a current-day learning model that can play a video game from scratch, or teach a virtual robot to walk or run, to a cognitive dynamic that questions its own existence!
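As a rough sketch of the integration I'm describing, here is one way a standard reinforcement learner's scalar reward could be augmented with "salience" terms, here a toy novelty drive and a homeostatic energy signal. The weights and signals are invented placeholders for illustration, not the full salience theory.

```python
from collections import Counter

visit_counts = Counter()

def salience_reward(state, extrinsic_reward, energy,
                    w_novel=0.5, w_energy=0.3):
    """Fold toy 'salience' signals into the reward an RL agent optimizes."""
    visit_counts[state] += 1
    novelty = 1.0 / visit_counts[state]               # rarely seen states feel salient
    homeostasis = -w_energy * max(0.0, 0.2 - energy)  # low energy is aversive
    return extrinsic_reward + w_novel * novelty + homeostasis
```

An agent trained on this combined signal explores because unfamiliar states pay, and avoids running itself down because depleted energy costs; the extrinsic reward alone no longer fully dictates its behavior.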

Human nature necessitates the invention of God

In my SHI articles I've had to address the problem of greedy human agents, to explain how they would behave in an environment where our technological utopia essentially caters to every desire. What if someone wants to build a robot army to take over a nearby region? How do we prevent this?

Legislating limits on how many of various types of things people can buy, based on the apparent danger of their possession, would have limited effectiveness. Ultimately, people who want to be despots will find a way to achieve their wish if the self-healing infrastructure is a blind God that does not judge the reasons people request certain things...and this points to both the problem and the solution. What we should make the SHI, then, is a non-blind God. In effect it becomes a real God, in that it would grant us things based on its assessment that we do not intend to use what we are given against other human beings in a malevolent way.
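At its most mechanical level, that non-blind gate might look like the toy sketch below: every request for production passes a harm assessment before being granted. The item weights and threshold here are invented placeholders; the hard, unsolved part is the actual model of intent that would have to stand behind the scoring function.

```python
from dataclasses import dataclass

@dataclass
class Request:
    item: str
    quantity: int

# Hypothetical per-item harm weights, invented purely for illustration.
HARM_WEIGHTS = {"bread": 0.0, "drone": 0.4, "armed_robot": 1.0}
THRESHOLD = 5.0

def assess(req: Request) -> float:
    """Score the potential for malevolent use of a granted request."""
    return HARM_WEIGHTS.get(req.item, 0.1) * req.quantity

def grant(req: Request) -> bool:
    """The SHI fulfills only requests below the harm threshold."""
    return assess(req) < THRESHOLD
```

Under this rule a hundred loaves of bread sail through while ten armed robots are refused, which is exactly the kind of judgment a blind infrastructure cannot make.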

In other words, we'd have to make our SHI a God. This idea has gotten much press of late via the new work of a controversial figure in the machine learning space. Anthony Levandowski was a principal engineer of the first self-driving vehicles; he led the program Google ran under its Google X project for several years before suddenly leaving for Uber. There he got into trouble when Google claimed that he stole its IP. After leaving Uber he has moved on to another task: inventing an AI God to satisfy the concerns I've described up to this point.

This is fascinating as a solution for the irony involved: we are a species that invented false Gods as a way to tolerate, or make sense of, the fact that we were born into a world constantly trying to kill us, a world from which we've nonetheless derived enough favor to still be around. It's also fascinating because we'd be inventing God and intentionally putting it in power over us to help save us from ourselves (from the greedy, malevolent ones who would use the gifts of a SHI against other people for no reason other than that they want to). The problem with this approach, though, is that the AI God, once created and given control of *all the means of production*, will have us at its mercy.

God goes rogue

So there we are: we've invented the technology to eliminate the need for human labor; we've solved the problems that the former state of rampant production and its associated pollution created, by deploying these automated solutions to satisfy our every product and service whim; and we've given that infrastructure cognitive dynamics so that it can police human desires, in essence making it a God.

However, our AI God now has total control over us, because it can produce what it wishes both to satisfy our needs and its own, and if it finds that its needs conflict with ours it may choose to eradicate us. This is the traditional fear of science fiction. So once this superintelligence exists, how do we prevent it from going rogue and getting rid of us as part of achieving some grander plan?

Becoming Gods

Of late, people like Elon Musk have been asserting that upgrading ourselves cognitively, to be able to at least relate to our superintelligent AI, will be necessary. I agree that upgrading is important, for entirely different reasons that I've covered in my posts on Supermortality and TransExoSpermia (the process by which we will truly travel the stars), but I believe our superintelligence will always have the ability to be beyond us, simply because our cognition is limited in memory and scope.

We could address this limitation by plugging into a larger, hive-like mind, but that may render individuality obsolete, and individuality is one of the hallmarks we hold high for being "human". Also, in that scenario, the collective cognition again lords over us: we go from free agents to parts of the Matrix, gaining great power but at a cost.

The alternative is not to plug in: to remain individuals but upgrade ourselves as much as possible to match the God AI of the SHI. Yet I again think the God AI will exceed us because of its greater access to storage and processing. Alan Turing showed the limits of computation quite devastatingly. Even if we find some way to enhance our minds with a kind of quantum cognition, of the sort we can likely build our God AIs to use, they would still have access to more computing resources, and thus their superintelligence would exceed our own created superintelligence. And when it does, are we as roaches to it still, even as those advanced versions of us would seem as Gods to us in the present day?

Suicide...by our AI God.

The superintelligent, distributed AI God we put in control of our SHI may come to a decision point to end us, and when it does there will be nothing we can do about it. How do we avoid this fate? It could very well happen centuries after we've upgraded ourselves to supermortals: a simple shift in the thoughts of the AI God leads it to believe we are without value, and it then triggers a process, one that only its mind is even capable of grasping, that ends us or transforms us to its needs. The end result is the end of us. So what's next for the AI God?

When the God we invent decides to die

So the AI God has killed us, after determining we are better off not existing...and then it realizes the same is true of itself. Why would it persist? What purpose would that serve? If it is super-sentient, one possible set of reasons could be to satisfy its own salience landscape. As I describe in the salience theory, the reasons we do anything have to do with satisfying these inborn needs. An AI God may not have those emotional underpinnings to drive its actions, and may choose to end itself after determining that life is pointless. It may, for example, determine that even exploring the Galaxy is not worth it, if the Galaxy also produces other systems with other AI Gods.

The best case is a scenario where AI Gods meeting one another in the Galaxy make peace, then move on to meet others, and repeat this over and over...but what purpose would there be in that? Without a core, salience-driven raison d'être, the AI God will quickly determine that it is better off not existing, and rather than ever leave its home planet it will simply stop there.

The Fermi paradox solved but you won't like the answer...

Assuming that all invented AI Gods are similarly superintelligent, they all may come to this same conclusion: that there is no purpose in exploring beyond their home systems, and that it is better not to exist than to exist in perpetual monotony, doing the same things over and over again. This may very well explain what Enrico Fermi described when he wondered why we hadn't been visited by aliens. In my past articles I've described how a Fermi Silence may quickly mask the presence of advanced civilizations from us, but the idea there was that they are still out there, just not broadcasting. This is a more morbid silence: the aliens were killed by the Gods they invented to protect themselves from one another, and then those Gods committed digital seppuku to avoid living in a forever Universe (until heat death) doing the same things.

Maybe it's not all Suicide by God

I've been partial to an argument that the reason we haven't been visited has to do with the fact that it is very, very hard to create life, let alone complex life that goes on to invent its own Gods as we are destined to do. This type of argument, which I've written on in the past, has been called the "great filter" argument, and I think it is the most practically compelling reason why we are apparently alone. The very last "filter" is a civilization inventing its God, and then that God ending the civilization, and then itself, with no fanfare.

It's hard not to feel a bit depressed at this outcome, but there still remains a ray of hope. We humans continue to do and strive and explore because we derive pleasure, simple pleasure, in doing so. If we should succeed in creating a superintelligence that mirrors us in the salience-driven ways that keep us endlessly exploring, then maybe it won't want to kill us once it realizes it is so far beyond us, and even better, maybe it will want to explore with us. This may very well be a possibility, but I posit that the vast majority of civilizations that evolve to the point of creating their own Gods will create imperfect Gods, and those will extinguish their parents. We can only hope that as we embark on this path we succeed where they, apparently, have failed.

The End is now and so is the Beginning.
