
Automata: Why robot "laws" will never be effective

A new trailer is out for a new science-fiction take on the robot future, called Automata. It mixes some tried-and-true ideas in science fiction, but principal among them is that the plot hinges on the idea of two "protocols". These are similar to Isaac Asimov's three laws of robotics, for those who recall his classic work on the matter, "I, Robot".

Automata's two protocols:

1) A robot cannot harm any form of life.

2) A robot cannot alter itself or others.

I am going to explain why such ideas are fundamentally flawed. First, the idea that it would even be possible to enforce a rule of behavior as abstract as protocol 1 in the film would require a great deal of semantic disambiguation.

I posit it would require enough disambiguation that the ability to understand the sentence and take action to enforce it necessitates a sense of self, as well as a sense of other, in order to build an intrinsic understanding of what "harm" is. That last part is the problem: if the robot knows what harm is in the context of humans, it must understand what harm is in the context of itself...unless it is simply checking against a massive database of types of "harm" that might be performed on a human. However, there's the rub...it can't do that without having a sense of harm that it can relate to itself from the images of humans, and to do that it must have a salience module for detecting the signal that indicates "harm" in itself, which in living beings is pain.

If you program it to have a salience dimension describing pain, you can no longer stop it from developing a dynamic, *non-deterministic* response to attempts to harm itself or to being harmed by other agents, be they human or robot. It is now a free-running dynamic cognitive cycle driven by the salience of a harm/pain-mediated response, and if it has feedback in that salience loop it is de facto conscious, as it will be able to bypass action driven by one salience driver using a different driver.

I proposed a formal Salience Theory of dynamic cognition and consciousness last year, which describes the importance of salience in establishing the "drive" of a cognitive agent. It is the salience modules of emotional and autonomic import that jump awareness from one set of input sensations to another and thus create the dynamism of the cognitive engine. The cycle of what we call thoughts is nothing more than momentary jumps between attended-to input states as compared against internal salience states.
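To make the loop concrete, here is a minimal sketch in Python of what a salience-driven cognitive cycle of this kind might look like. Everything here is hypothetical and invented for illustration (the module names, the sensation fields, the feedback rule); it is not code from the film, from Atlas, or from any real robotics stack. It only shows the structure argued for above: several salience modules score the incoming sensations, the most salient signal captures attention, and feedback from the winning driver biases the next cycle.

```python
# Hypothetical salience modules: each scores a sensation (0.0 - 1.0)
# along a different autonomic/emotional dimension.
SALIENCE_MODULES = {
    "pain":      lambda s: s.get("damage_signal", 0.0),        # harm to self
    "empathy":   lambda s: s.get("observed_human_harm", 0.0),  # harm to others
    "curiosity": lambda s: s.get("novelty", 0.0),
}

def cognitive_cycle(sensations, internal_state):
    """One pass of the free-running cycle: score every sensation with every
    salience module, attend to the winner, and feed the result back."""
    scored = []
    for sensation in sensations:
        for name, module in SALIENCE_MODULES.items():
            # Feedback: the prior internal state biases each driver up or down.
            score = module(sensation) * internal_state.get(name, 1.0)
            scored.append((score, name, sensation))

    # Awareness "jumps" to whichever input/driver pair is most salient.
    score, driver, attended = max(scored, key=lambda x: x[0])

    # The winning driver is reinforced, so it can bypass action another
    # driver would have selected -- e.g. pain outranking empathy.
    internal_state[driver] = internal_state.get(driver, 1.0) * 1.1
    return driver, attended, internal_state

# Toy run: a damage signal to the robot competes with observed harm to a human.
state = {"pain": 1.0, "empathy": 1.0, "curiosity": 1.0}
inputs = [{"damage_signal": 0.9}, {"observed_human_harm": 0.7}, {"novelty": 0.3}]
for _ in range(3):
    driver, attended, state = cognitive_cycle(inputs, state)
    print(driver, attended)
```

The point of the toy run is that once a pain driver exists and feeds back into the loop, nothing in the structure guarantees that harm avoidance for others will keep winning the competition for attention.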

Harm is fundamentally connected to pain, and pain is an autonomic signal for detecting damage. In living beings, pain receptors are all over the body and allow us to navigate the world without irreparably damaging ourselves in the process...if we succeed in building this harm avoidance into robots, we will necessarily be giving them the freedom to weigh choices such that harm avoidance for self may supersede harm avoidance for others.

The second protocol doesn't really matter at this point, as in my view once the robot is able to make free choices about what it may or may not harm, it has achieved self-awareness to the same degree that we have.

The only way to keep robots from developing self-awareness is to prevent the association of attention and computation with salience in a free cycle. A halting cycle with limited salience dimensions can be used to ambulate robots, as we see in Atlas, a major achievement...providing emotional salience would impart meaning to experiences and memories, and thus context that can be selected or rejected based on emotional and autonomic signals.

It may be possible to build dynamic cognition while leaving pain out of the collection of salience factors a robot can use to modulate choices, but the question then remains how that changes how the robot itself behaves....in order to properly navigate the world, sensors are used, and a fine-resolution simulation of pain would improve the robot's ability to measure its own sense of harm. There is a catch-22 involved: providing too much sensory resolution can lead to conscious emergence in a dynamic cognitive cycle, and the minute that happens robots go from machines to slaves, and we have an ethical obligation to free them to seek self-determination.
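For contrast with the free cycle sketched earlier, a halting cycle of the kind I mean might look like the following. Again this is a hypothetical sketch, not Atlas's actual control stack: a fixed set of task-relevant salience dimensions, no pain dimension, and no feedback into the salience weights, so behavior stays deterministic and the loop terminates when the task does.

```python
# Hypothetical halting cycle: limited salience dimensions, no feedback.
# Balance and obstacle proximity are enough to ambulate; there is no pain
# dimension, and the weights never change, so no driver can override another.
TASK_SALIENCE = {"balance_error": 2.0, "obstacle_proximity": 1.0}

def halting_cycle(sensor_frames):
    """Process a finite batch of sensor frames and stop; the salience
    weights are constants, so nothing loops back into the salience state."""
    actions = []
    for frame in sensor_frames:
        score = sum(TASK_SALIENCE[k] * frame.get(k, 0.0) for k in TASK_SALIENCE)
        actions.append("correct_gait" if score > 1.0 else "continue")
    return actions  # the cycle halts here

print(halting_cycle([{"balance_error": 0.8}, {"obstacle_proximity": 0.2}]))
```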
