
The singularly...important ethical quandary.

These articles chopping up the so-called ethical issues of increasingly autonomous systems are often very silly. Every new technology brings with it new patterns of human behavior that probe different aspects of human interaction.

It has been so from the invention of fire onward...the question is whether the new set of questions is worth dealing with compared to the old set that existed prior to the invention of the new technology.

In a world of self-driving cars, few drunks who drive consistently due to alcoholism will be able to cause the deaths of others on the road...the elimination of that landscape of possibility far outweighs, in my view, any other issue pointed out by those attempting an ethical analysis of a world filled with such vehicles.

The same, I am sure, was asked of electric power during the build-up of the nation, as hot lines were buried in the streets and strung from pole after pole all over the world. I can imagine the discussions that raged in 1880s newspapers over the pending "electrification" of the landscape...the potential for deaths at the hands of the new technology!

Yet here we are, 130 years hence, and no one is going nuts over the fact that the power grid enables the possibility of electrocution, and in fact delivers it to many people by accident every year, because the landscape of productivity and increased human potential we enjoy in this electrified world is far safer than what came before...when so much of what we needed done had to be done by our own labor, risking an order of magnitude MORE physical danger in the doing.

It is important to weigh the merits of the respective landscapes of ethical consideration before and after deployment of a new technology, and once that is done, to end the debate in favor of moving in the direction that softens rather than exacerbates the issues over time.

I can think of only one technology whose creation and use have had neutral-to-negative consequences, and that is nuclear technology. Both in the military case for which it was pursued (outside of pure research) and in the consumer case of nuclear power plants, it has shown itself to be simply not worth the effort.

The Three Mile Islands, the Chernobyls and the Fukushimas extract disproportionate pain for the incremental gain such plants provide over older power technologies...and now that green technologies with effectively unlimited extraction potential are quickly achieving parity, the tech has no defensible pragmatic raison d'être.

As for the further deployment of even more intelligent systems, beyond merely real-time reactive systems like self-driving cars to the creation of artificial cognition, there is indeed reason to pause.

I've argued that we would be fools to rush headlong into attempts to create fully self-aware systems without fully understanding the parameters of psychological stability necessary for such intelligent systems to coexist with humans. The question of how we avoid creating, or even allowing the possibility of, a HAL or a Skynet is a really important one. Sadly, with so many people ignorant of the parameters of the mental landscape their artificial cognitive agents could create should they succeed, we might be barreling down the road to our own doom.

In my research in this space I've advocated careful understanding of the importance of applying emotional modulation to such systems. I've argued that we must create some simulacrum of emotional import in these systems, aligned with empathetic correlates to human desires and goals...if we don't, we play Russian roulette with the ability of such cognitive agents to roam over terrain in the possibility landscape that is not amenable to human survival.

Recently, astronomers released an estimate, based on new data, of the number of possible Earth-like planets in the Galaxy within their wet or habitable zones: around 12 billion. What percentage of that number produced complex biological beings that then went on, or are going on, to create artificial intelligence in their image but missed the mark...creating instead a new order of beings with desires disconnected from their makers and, given the reins to society, the potential to end them? How many have already been ended? This, for me, is the most important ethical question of our time.
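The back-of-the-envelope reasoning above can be made explicit as a Drake-style product of conditional fractions. In this minimal sketch, only the 12 billion figure comes from the post; every fraction is an invented placeholder, not a measurement:

```python
# Drake-style sketch: of ~12 billion habitable-zone planets, how many
# civilizations might have ended themselves via misaligned AI?
# All fractions below are hypothetical placeholders for illustration.

HABITABLE_PLANETS = 12e9  # estimate cited in the post

def ended_by_ai(f_life, f_intelligence, f_builds_ai, f_misaligned):
    """Multiply through the chain of conditional fractions:
    planets -> life -> intelligence -> builds AI -> AI misaligned."""
    return HABITABLE_PLANETS * f_life * f_intelligence * f_builds_ai * f_misaligned

# Even with pessimistic odds at each step, the count is not small:
n = ended_by_ai(f_life=0.01, f_intelligence=0.01, f_builds_ai=0.1, f_misaligned=0.5)
print(f"{n:,.0f} civilizations potentially ended by their own creations")
```

The point of the exercise is not the output number, which is pure guesswork, but that the product stays large across a wide range of plausible inputs, which is what gives the question its weight.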
