21 August, 2014

Automata: Why robot "laws" will never be effective











A new trailer for a new science fiction take on the robot future is out, and it is called Automata. It mixes some tried and true ideas in science fiction, but principal among them is the plot's hinge on the idea of two "protocols". These are similar to Isaac Asimov's three laws of robotics, for those that recall his classic work on the matter, "I, Robot".

Automata's two protocols:

1) A robot cannot harm any form of life.

2) A robot cannot alter itself or others.

I am going to explain why such ideas are fundamentally flawed. First, consider the idea that it would even be possible to enforce rules of behavior as abstract as protocol 1 in the film: doing so would require a great deal of semantic disambiguation.

I posit that the ability to understand such a sentence and take action to enforce it necessitates a sense of self as well as a sense of other, in order to build an intrinsic understanding of what "harm" is. That last part is the problem: if the robot knows what "harm" is in the context of humans, it must understand what harm is in the context of itself...unless it is simply checking against a massive database of types of "harm" that might be inflicted on a human. However, there's the rub...it can't do that without having a sense of harm that it can relate to itself from the images of humans, and to do that it must have a salience module for detecting the signal that indicates "harm" in itself, which in living beings is pain.

If you program it to have a salience dimension describing pain, you now can NOT stop it from developing a dynamic, *non-deterministic* response to attempts to harm itself or to being harmed by other agents, be they human or robot. It is now a free-running dynamic cognitive cycle driven by the salience of a harm/pain mediated response, and if it has feedback in that salience loop it is de facto conscious, as it will be able to bypass action driven by one salience driver using a different driver.

I proposed a formal Salience Theory of dynamic cognition and consciousness last year which describes the importance of salience in establishing the "drive" of a cognitive agent. It is the salience modules of emotional and autonomic import that jump awareness from one set of input sensations to another and thus create the dynamism of the cognitive engine. The cycle of what we call thoughts is nothing more than momentary jumps between attended input states as they are compared to internal salience states.
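
To make the shape of that loop concrete, here is a minimal sketch of a free-running, salience-driven cycle of the kind described above. Everything in it is hypothetical: the driver names, weights and feedback values are placeholders for illustration, not an implementation of the theory.

import random

# Hypothetical salience drivers and weights -- placeholders for illustration.
DRIVERS = {"pain": 1.0, "hunger": 0.6, "curiosity": 0.3}

def sense():
    """Fake sensory snapshot: each channel reports a raw intensity in [0, 1]."""
    return {name: random.random() for name in DRIVERS}

def score_salience(signal, feedback):
    """Weight each raw signal by its driver's importance plus any feedback bias."""
    return {name: DRIVERS[name] * value + feedback.get(name, 0.0)
            for name, value in signal.items()}

def cognitive_cycle(steps=10):
    feedback = {}  # lets one driver later bias (bypass) action chosen by another
    for t in range(steps):
        scored = score_salience(sense(), feedback)
        focus = max(scored, key=scored.get)  # awareness jumps to the winner
        print(f"t={t}: attending to '{focus}' (salience={scored[focus]:.2f})")
        # Acting on the attended driver damps it and slightly boosts the rest,
        # so attention keeps jumping -- the cycle is free-running, not halting.
        feedback = {name: (-0.5 if name == focus else 0.1) for name in DRIVERS}

cognitive_cycle()
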

Harm is fundamentally connected to pain, and pain is an autonomic signal used to detect damage. In living beings pain receptors are all over the body and allow us to navigate the world without irreparably damaging ourselves while doing so...if we succeed in building this harm avoidance into robots, we will necessarily be giving them the freedom to weigh choices such that harm avoidance for self may supersede harm avoidance for others.

The second protocol doesn't really matter at this point, as in my view once the robot is able to make free choices about what it may or may not harm, it has achieved self-awareness to the same degree that we have.

The only way to keep robots from achieving self-awareness is to prevent the association of attention and computation with salience in a free-running cycle. A halting cycle with limited salience dimensions can be used to ambulate robots, as we see in Atlas, a major achievement...providing emotional salience would impart meaning to experiences and memories, and thus context that can be selected or rejected based on emotional and autonomic signals. It may be possible to build dynamic cognition while leaving pain out of the collection of salience factors a robot could use to modulate choices, but the question then remains how that would change how the robot itself behaves....In order to properly navigate the world sensors are used, and providing a fine-resolution simulation of pain would improve the robot's ability to measure its own sense of harm. There is a catch-22 involved, where providing too much sensory resolution can lead to conscious emergence in a dynamic cognitive cycle; the minute that happens, robots go from machines to slaves, and we have an ethical obligation to free them to seek self-determination.

15 July, 2014

Uber @ $200 billion? ...possible.

Sounds almost crazy, doesn't it? Not according to Google Ventures.









But then so did the idea of a light bulb as an engine of productivity in 1879...yes, that's just after Edison's year of work trying to make practical a device invented 60+ years earlier finally paid off, but there was still so much to do. There was no grid, no national or state or even city power system; in fact, there were no efficient ways to generate and distribute electric power....there were motors and generators, but building a grid required different approaches.

So creative engineers like Nikola Tesla (who was hired by Edison at one point) implemented AC generators using three-phase designs, and others contributed all manner of technology for defining a grid and transmitting power to remote locations. Camps formed between Edison's lighting company and Westinghouse, and the electric wars began!
In 1882 the first electrically powered building was turned on, fed by Edison's power generation facility at Pearl St. in lower Manhattan, NYC, and then the race began to wire the nation with copper power lines (the nation had a good number of Morse code telegraph runs in place, but this was a different beast).
Fast forward 15 years: the Edison lighting company had become General Electric after swallowing some competitors and had won the wars, and power generation facilities and lines were rapidly spreading all over the country. Now the light bulb was making a lot of money, as homes, businesses and institutions all over the world were buying bulbs to keep the light going throughout the night...essentially doubling human productivity with a single technological stroke. This is the vision that kept Edison at work on the bulb in 1878; he knew the gold mine it could be.
Now the bulb really made sense...and now the millions rolled in. Fast forward 100 years....and General Electric is STILL the world's largest power generation and distribution company. Want to talk about influence?
What does this have to do with Uber?
Well, some may laugh at the valuation given the current revenue, but the fact of the matter is that Uber manages almost no physical hardware. They don't buy or license the cars; they pay the drivers, and they maintain and develop the smartphone app that allows customers to find drivers...and they take a cut from each ride. They provide a way for drivers who want to pick up fares as a taxi to do so, and they share in the cut...and as they scale, their costs stay exactly tied to what they are paying out to the drivers, while their revenue is linked to their growth in terms of customers signed up for the service and actively using it, coupled to the number of drivers they have servicing those customers.
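
A back-of-the-envelope sketch of that scaling argument is below. The take rate, average fare and fixed-cost figures are invented placeholders, not Uber's actual numbers; the point is only that platform revenue grows with ride volume while driver payouts pass straight through.

def marketplace_economics(rides, avg_fare, take_rate, fixed_costs):
    """Toy model: the platform keeps take_rate of every fare; the rest passes
    through to drivers, so only fixed_costs (app development, support) do not
    scale with ride volume."""
    gross_bookings = rides * avg_fare
    revenue = gross_bookings * take_rate      # the platform's cut
    profit = revenue - fixed_costs
    return gross_bookings, revenue, profit

# Placeholder numbers purely for illustration.
for rides in (1_000_000, 10_000_000, 100_000_000):
    bookings, revenue, profit = marketplace_economics(
        rides, avg_fare=15.0, take_rate=0.20, fixed_costs=5_000_000)
    print(f"{rides:>11,} rides: bookings ${bookings:,.0f}, "
          f"revenue ${revenue:,.0f}, profit ${profit:,.0f}")
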
It's right next to free money, and its disruption of the entrenched taxi and cab hegemonies in cities all over the world has just started.
$200 billion in revenue off of a global system in which they manage zero hardware but take a cut on how that hardware is deployed to satisfy the fare pickup marketplace....now that doesn't sound like such a far-fetched target to hit.
Their window of opportunity is not big, though.... When Google self-driving cars come along and Tesla electric cars are made self-driving, the need for human drivers will go away. Cities in particular will put laws in place to actively discourage human driving in urban areas in favor of automated transport, which will be more efficient and safer than human drivers in dense conditions...that may be the beginning of the end for Uber's current business model. They'll have to shift (buying a fleet and/or deploying robot taxis is easy enough), but they may have to take a revenue-rate hit for doing that...as cars cost money, and then they will have to own and manage fleets of them...OR, as private owners are pushed out of their cars by the aforementioned laws against human driving in high-density areas (exactly the cities where Uber taxis make such sense), Uber can simply lease those owners' cars: either upgrade them to self-driving status in exchange for using the cars when the owners don't need them, which would keep their revenue going just fine, or, if the cars are already self-driving, lease them directly.



So again that $200 billion doesn't look so hard to hit at all.

Like the light bulb of 1879, Uber's business model seems unclear if you don't see the future that it can swim in...but Edison saw the future the bulb could swim in, and then he had to build it.
Uber, on the other hand, doesn't even have to build it; they just have to wait and continue to take money. Sounds like a good deal to me.

Salience Theory: What is pain?

This morning I awoke to find a message from a Facebook user (who I am not friends with as yet) regarding the subject of pain:



"Spekulation: Pain: When the parameters upholding consciousness leaves the definition space for those parameters. Tickeling and pleasure: When you travel along the rand of the definition space of consciousness. Of course, the definition space changes as the neuroplasticity redefines how singlas are processed, hence pain happens when signals deviate too quickly from the normal. Do you know of any hypothesis which comes close to the above?"


Immediately, the problem in this definition can be identified by realizing that biologically, pain is truly a spectrum of alerts and is not a critical threshold where some system goes from signal to noise, as would be the case if it were a rapid deviation from "normal" (however that is defined).

Biologically, pain receptors are distributed across the body along with other sensors that can identify pressure. The pain processing pathways and the somatosensory (pressure) processing pathways are therefore different to some degree. What matters most is what degree of difference they would need in salience theory in order to be useful for consciousness without being terminal to it, as asserted in this question. It should be obvious that if consciousness were turned off, as it were, whenever any pain signal was received, we'd have a hard time staying conscious. The function of TRP-based molecules revealed in recent research shows clearly how finely resolved the experience of pain is.



The pain signal ranges from notification to attention to continued awareness to agony. In salience theory, the dynamic cognition cycle divides dimensions of sensory experience into those that are externally driven and those that are internally driven. At first I was unsure of where pain actually went, as it seemed to be triggered by both external and internal sensory factors. For example, an obvious external factor that can induce pain is falling off a bike and getting bruises; conversely, an important internal sensory factor that can induce pain is simply being hungry: the build-up of acid in an empty stomach can lead to crippling pain that forces an individual to seek out food to quell it.

So from this thought experiment it seems that pain is actually an input sensory dimension that can be triggered internally (we can cause pain to ourselves!). To some degree there seem to be pathways in place to subtract pain when we are causing it to ourselves (for example, the mechanism by which self-tickling is rendered moot), so there is some necessary feedback in the processing of the pain signal that enables this by attenuating self-generated sensations. However, the fact that pain is triggered by both told me immediately that it was in fact a salience factor akin to emotion. So what would it look like in salience theory?

Let's look at the simple Dynamic Cognition Diagram:







In this diagram, pain would be triggered either by internal or external causation factors, as previously described, so where would it sit in the cycle? It should be clear that because pain is used to inform action, it would be a critical part of salience determination at step 3. The reason is again clearly shown by an example from physiology: there are people who have a varied ability to sense pain!

These pathologies mostly involve the pain receptors not being formed at the nerves in the various locations where they are distributed across the body, and the resulting insensitivity to external forces leads to various types of damage that people with properly functioning sensors don't exhibit. However, the pain receptors merely send the signal; salience indicates the importance of that signal.

It appears that the multiple sensors dedicated to different types of somatosensory experience (pain, pressure, temperature) all share a common salience module.

The subtraction of pain signalling from a self-tickle indicates this module labels autonomic action differently from external action; there is likely a similar muting of temperature signals and pressure signals, preventing us from accidentally hurting ourselves in all three respects.
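
As an illustration of that muting, here is a tiny sketch of the idea that signals flagged as self-generated are attenuated before they reach salience assessment. The attenuation factor is an arbitrary placeholder, not a measured value.

def perceived_intensity(raw_signal, self_generated, attenuation=0.8):
    """Toy model of the 'self-tickle' effect: a signal predicted from our own
    action is attenuated before it reaches the salience stage; the same raw
    signal from an external source arrives at full strength. The attenuation
    factor is an arbitrary placeholder."""
    return raw_signal * (1.0 - attenuation) if self_generated else raw_signal

print(perceived_intensity(0.9, self_generated=True))   # ~0.18, barely registers
print(perceived_intensity(0.9, self_generated=False))  # 0.9, full sensation
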

In salience theory each is given its own scale of gradation, which then enables feedback and labeling in the comparison stage that can be used to inform goal selection for committing to some sought-out action. In the case of these signals, this would act as a factor to modulate the cognitive selection process, biasing it toward options that lead away from whatever may be causing or has caused pain in the past.

I assert that this modulation is high resolution, dynamic across time in terms of the intensity of the signal reported, but static as it is stored with memories associated with past experience. Comparison then simply results from setting a direction per compared salience factor, associated with a stored memory versus an incoming experience in a given external dimension (vision, taste, touch (body map), smell, hearing), and then selecting a stored option that has worked in the past toward achieving the optimal salience goal (if hot, take action to reduce heat; if hungry, take action from evaluated options to reduce hunger, etc.).
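
A minimal sketch of that comparison-and-selection step might look like the following. The options, salience factors and numbers are invented for illustration; the only point is that stored options carry salience tags from past outcomes, and the current state picks the option expected to move each factor back toward its set point.

# Hypothetical option store: each action is tagged with the salience outcomes
# it produced in the past (negative = reduced discomfort along that factor).
OPTIONS = {
    "seek shade":   {"heat": -0.8, "hunger":  0.0},
    "eat a meal":   {"heat":  0.0, "hunger": -0.9},
    "keep walking": {"heat": +0.2, "hunger": +0.1},
}

def choose_action(current_state, options=OPTIONS):
    """Pick the stored option expected to move each salience factor back
    toward its set point, weighted by how far out of range it currently is."""
    def expected_relief(outcomes):
        return sum(current_state[f] * -delta for f, delta in outcomes.items())
    return max(options, key=lambda name: expected_relief(options[name]))

print(choose_action({"heat": 0.9, "hunger": 0.2}))   # -> "seek shade"
print(choose_action({"heat": 0.1, "hunger": 0.8}))   # -> "eat a meal"
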

A recent paper put forward a mechanism for how the cortex proceeds with goal selection that precisely matches the hypothesis described for comparison in salience theory, save for the fact that the paper had no means of describing the importance of salience itself.

A more complex dynamic cognition diagram that I am working on attempts to provide the fine details of feedback around the salience module (including similar systems for metering and labeling of emotional import, which a separate team has recently found to be granular, just as I hypothesized years ago while forming salience theory). That diagram, when finished, will be the basis of my writing code to create a dynamic cognitive agent at some point in the near future.

That said, the assertion of the original question, that pain is simply a threshold switch, is obviously wrong. It is a far more complex entity that has modes which are very important during conscious evaluation of salience for action; it can achieve levels of intensity that totally override actions that bias away from the pain reduction signal and in that way direct conscious desire (toward escaping the pain exclusively), but that is not a switch.

10 July, 2014

Fermi silence may explain the "paradox".















The Fermi paradox has been a thorn in the side of cosmologists for over 60 years now, since Enrico Fermi proposed the conundrum of why, in a galaxy of so many possible planets, we haven't heard a peep from any nearby or remote civilizations. I am a proponent of a combination of reasons 8 and 11 on this list of reasons why we don't hear anything: simply, that we are one of the earliest civilizations in the ensemble of civilizations extant in the galaxy that have survived sufficient extinction events to even get to the point of producing radio waves, coupled with the fact that the way we communicate now is likely not how advanced civilizations communicate. Let me explain...
In 100 short years we've been pumping electromagnetic waves out into the space around us at very low power, and thus the bulk of those signals are attenuated to the *noise* threshold by the time they even get out of our solar system (bye bye, Voyager!)...this is important, as it describes a sphere of silence beyond which any advanced civilizations out there would simply not be able to "hear" us.
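
To see how quickly an isotropic broadcast fades, here is a rough inverse-square sketch. The 1 MW effective transmit power is an assumed placeholder, and real detectability also depends on antenna gain, bandwidth and receiver noise temperature, none of which this sketch models.

import math

def received_flux(tx_power_watts, distance_m):
    """Isotropic broadcast: power spreads over a sphere of radius `distance`,
    so the flux at the receiver falls off as 1/d^2 (free-space spreading)."""
    return tx_power_watts / (4.0 * math.pi * distance_m ** 2)

AU = 1.496e11          # metres
LIGHT_YEAR = 9.461e15  # metres

# Assumed 1 MW effective broadcast power, purely illustrative.
P = 1e6
for label, d in [("edge of solar system (~100 AU)", 100 * AU),
                 ("nearest star (~4.2 ly)", 4.2 * LIGHT_YEAR),
                 ("100 light years", 100 * LIGHT_YEAR)]:
    print(f"{label}: {received_flux(P, d):.3e} W/m^2")
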
Given that the estimated number of planets that can support our type of life numbers only a few billion across the galactic plane, and assuming they are distributed more or less evenly among host stars (not at all the case, but let's go with it for simplicity), then the reality is that there is a probability volume of some geometry (I am thinking it may be a conic frustum-like volume with the galactic center as its radial center, straddling the plane of galactic rotation on either side:
http://sent2null.blogspot.com/2008/05/fermi-paradox-not-so-paradoxical.html

) and in that volume lie the planets that have a chance of harboring civilizations as advanced as ours.....but even within that subset of viable civilizations, of those that get to discovering radio transmission (simple modulation of EM fields), there is still the sphere of silence within which sufficient attenuation of their early EM-based signals drops them below noise....effectively making those civilizations (like ours!) invisible in the EM domain even to other advanced civilizations who may still be using EM to communicate (and that is not at all a given).
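
For a rough sense of how such a zone constrains the candidate population, the frustum-like region can be approximated as a flat annular slab centred on the galactic core. The radii and thickness below are placeholder values chosen only to illustrate the geometry, not measured boundaries of any actual habitable zone.

import math

def annular_slab_volume(r_inner, r_outer, thickness):
    """Volume of a flat ring approximating the zone: pi*(R^2 - r^2)*thickness."""
    return math.pi * (r_outer ** 2 - r_inner ** 2) * thickness

# Placeholder dimensions in light years, chosen only to illustrate the idea.
zone = annular_slab_volume(r_inner=13_000, r_outer=33_000, thickness=2_000)
disc = annular_slab_volume(r_inner=0, r_outer=50_000, thickness=2_000)
print(f"zone volume ~ {zone:.2e} ly^3, about {zone / disc:.0%} of the disc")
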
One might think: well, what about stronger signals that can travel for longer before seeming no different from noise? I assert that, just as it took us about 100 years to discover that we could possibly communicate using quantum channels (which are by definition hidden in the noise), the answer to why we don't hear anything is even clearer....it's because, I posit, all sufficiently advanced civilizations that have also discovered quantum communication and mastered it have gone into a mode of communication that can't be detected without knowing the specific cryptographic keys for delivery of the message.

It may be that a progression to apparent silence is a natural progression of communication technologies for all advanced civilizations: a) because EM communication attenuates to noise and undetectability fairly close to a star, and b) because the discovery of quantum communication, which is by definition undetectable without inside knowledge, comes fairly quickly after the discovery of EM communication.
Another possible communication strategy that looms is the use of elusive particles like neutrinos, again modulated in extremely sensitive ways to enable much longer-range communication than with photons. But because we are still unable to modulate neutrinos with high detection accuracy or encode much high-bit-rate data onto them (it was first done last year), we are deaf to the possible neutrino communication that more advanced civilizations might well be using right now in our galactic neighborhood.
It may be that by simply following the line of discovering the most efficient modes of communication that reality makes possible, silence from one another is the inevitable inheritance of all advanced civilizations, at least until we (or they) develop the ability to hop across the vastness of space and directly say "hello". As this task is the far more daunting one in terms of contact with any extant civilizations, it is the milestone that I think we should be most focused on after we've spread to habitable, or at least workable, bodies (Moon, Mars, Europa?) within our own solar system.

Links:
http://en.wikipedia.org/wiki/Quantum_information_science

http://physicsworld.com/cws/article/news/2012/mar/19/neutrino-based-communication-is-a-first


Originally posted on Linkedin

09 July, 2014

Tyra's cloudy fashion crystal ball

Model and talk show host Tyra Banks had something to say on the future of fashion.




At first it reads as a bunch of kooky madness that betrays the depths of her ignorance across a range of fields and the developments in those fields, particularly as regards genetic engineering and biotechnology, but she gets some things oddly at least in the same ballpark as the reality that is unfolding, if not anywhere near home plate ;). Here's a serious analysis of some of her list of predictions:

1> Just wrong. Plastic surgery for some of the most common procedures today, those associated with fat and muscle deposition, will actually go obsolete. Breast enhancement and butt enhancement are among these...nose jobs will always be nose jobs for those that get them...there is nothing on the horizon to speed up or otherwise automate the procedure.

2> Kind of right! Hair growth will be something that can finally be genetically triggered, sped up or stopped (once those pathways are figured out), but it would probably be painful to have your hair grow so fast that it changes significantly in thickness or length in a day. Keratin requires amino acids; if you aren't providing the constituent elements, the rate of growth is limited at least by that material constraint, even if the genes are tweaked to pump out fibers faster.

3> This is the one being made fun of because it is barely legible. I am not sure what she's talking about...something about people being heavier because the bulk of them will be poor and poor people don't have good nutrition? Insane.

4> Very right if we are talking about phenotype changes like skin and eye color and hair color and texture; very wrong if we are talking nose width or chin roundness, at least not in the near future....those formative features are dependent on temporal cycles and hormone release; even in twins they don't happen exactly the same way, though figuring out the combination of genetic and developmental pathways to modulate these features dynamically may eventually be possible. As for the simple phenotype changes, I call this "cosmecuticals" and it is going to be a very big near-term industry (20 years) now that CRISPR-like in vivo genetic modification is a reality.

http://sent2null.blogspot.com/2014/02/cosmecuticals-are-closer.html   Good guess Tyra!

5> Are all cars the same color? Are all bags? Shoes? No. So why, in a time when people who obviously enjoy expressing their individuality can do so easily with their very bodies, would they choose to normalize themselves into the crowd homogeneously? Yes, skin color changes will be possible, but they'll also be dynamic...you can be as black as a Sudanese for 10 years and then spend 20 years as white as a red-haired Irish girl if you want; there would be no reason to stay stuck on any skin tone or any other phenotype change you chose. This is patently just wrong.

6> Actually makes sense. As "change" becomes the new normal all intermediate slots will be considered "normal". 100% spot on.

http://sent2null.blogspot.com/2012/10/loves-new-meaning.html

http://sent2null.blogspot.com/2011/11/love-post-super-mortality.html

http://sent2null.blogspot.com/2012/08/post-super-mortal-age-hypothesis.html

7> What would be so different about robot/avatar models that can't also be modeled by people who are transhuman? Does Tyra even know that some people are amputating parts of their bodies just to be more robot-like TODAY?? Wait until that tech is more available...people will willingly shed their flesh for some more robot. Miss. The only shift will be to embrace all the new variety, both of changes we can induce genetically AND changes we can build in transhumanistically (just made that word up, sue me).

8> She must really like her Siri. ;) That said, she's right! AI is rapidly advancing beyond the dumb pattern matching of today toward the dynamic inferential intelligence we think about from sci-fi....leaving aside whether it is a good idea to enable our devices to become self-aware, it's going to happen...we might as well be careful about how we do it.

http://sent2null.blogspot.com/2012/02/when-your-smart-phone-comes-alive.html

9> Not quite sure what this is about...nanobots, I guess, is her thought. Anyway, not likely; it will be far more difficult to do such a thing than to just get surgery...it's not even known if such precise real-time modification is possible using nanobots (based on what I know about how cells grow and divide and about human physiology, I'm going with a hard no).

http://sent2null.blogspot.com/2010/03/nanobots-are-not-future-of-medicine.html

10> Again kind of right! The gender scale will definitely be normalized as females' ability to have children is decoupled even from the need to get pregnant (combine the DNA outside, put it in a host egg shell and then have a surrogate carry it, or even better, carry it to term in an artificial womb). I don't know where she's pulling that figure of 70% of cosmetics being male from...I see no reason why, in a normalized society, it wouldn't still be more or less what it is today...assuming aspects of temperament and preferences that are correlated to gender-based hormone variation (testosterone versus estrogen) are not nullified from the equation.

26 June, 2014

Salience Theory: Joined at the mind.




Each child has a fully structured brain, two cerebral hemispheres, a fully formed brain stem, cerebellum and spinal cord. There was also the bridge of tissue, through which neurological information might be shared; within days of their birth, it became apparent that if one twin was pricked with a needle, the other would cry.




The existence of conjoined twins like Tatiana and Krista proved to me years ago several things about consciousness.

1) It can be distributed.

2) It is substrate dependent.

Some would read those two conclusions above as contradictory in a way: if consciousness is bound to a singular substrate (one brain) as asserted in 2), how can it also be distributed across brains as asserted in 1)?

The answer is that consciousness itself emerges from a piecing together of interactions between different cognitive modules that don't distinguish strongly what their sensory input drivers are. We know that the particular piece of brain that unifies the hemispheres and serves as a multiplexer of sorts for all the sensory data being processed in the neocortex would do its job whether there were 2 sets of eyes feeding it data or 4 sets of eyes; in truth, the processing task would only differ in terms of the density of information storage and comparison. The same is true if we think of multiple sensory inputs for the other senses....vision, olfaction and, in the case of these girls, the deeper somatosensory processing which is itself mostly distributed throughout the body.

So here these girls sit: two bodies, two brains sharing one common input highway, with proximal wiring to a given body dominating signal processing in the brain associated with the body feeding it. The input signal of one set of eyes is strong because it is fed by the proximal neocortical pathway for vision processing of the brain connected directly to those eyes via the optic nerves, while the other set of eyes is only distally connected via the bridge....so what's going on?

How can one girl see through the eyes of the other by simply thinking about it? Well, first she must reduce the sensory load coming into her own eyes by closing them...now, with that signal attenuated, she can tune in to the signal firing coming from her sister's visual system across the brain bridge and can "see" in her mind's eye (literally) what her sister sees.



But what does this have to do with consciousness??

Last year I put forward a theory that consciousness is emergent (not new), that it is substrate dependent (not new), but that it is also salience dependent (new!). In this "salience theory" the dynamic cognition of the mind is enabled by a roiling comparison over time between sensory input, stored memory and an associated import or salience tag....in both the autonomic and emotional factors that at base *drive* cognition.

The theory was the culmination of several years of thought on the matter and research into the latest neuroscience results from brain imaging studies illuminating the parameters of consciousness. These thoughts came on the heels of my beginning the implementation of the Action Delta Assessment (ADA) algorithm, which extends the Action Oriented Workflow paradigm I started working on in 2003 to enable autonomous work routing.

These algorithms form an invariant, general set for encoding business processes and workflows for application development into a social system for getting work actions performed as efficiently as possible over an entire organization......holistically.

The similarities of these ideas in workflow should be familiar to anyone who has studied some of the neuroscience on consciousness and the deeper neuroanatomy of sensory input, neocortical processing and memory formation.

I was struck by the similarity between the two, and in 2011 I asked myself: what factors would need to come together in the brain in order to create dynamic cognition? Thought? Consciousness? The salience theory is my attempt to answer those questions. In it, consciousness is an emergent phenomenon; this is not a new idea, as mentioned before, however how it is driven was a mystery...salience theory proposes that autonomic and emotional factors drive consciousness, but only as fueled by sensory input and processing which compares memory to input.

Consciousness thus emerges as a more and more refined ability to dance across comparisons of this sort, across the number of sensory modalities that a given living agent (or, soon, artificial agent) is able to span as independent dimensions. For example, a pigeon and you have 5 primary sensory modalities in common, but a pigeon has at least one more (they can sense magnetic fields) that you don't have.

The brain takes these sensory inputs and preferentially shuttles the signal data to particular areas of the neocortex for comparison and processing. What is interesting is that there is very little specialization to a given sensory input type in the neocortex itself. Sound processing layers look basically identical to vision processing layers, which look identical to taste processing ones, save for interesting differences in the organization of neuronal sublayers.

A few years ago I saw this invariance across layer types as a strong clue that the cognitive algorithm was common across all sensory input types, but also that there must be some type of time-associated integration across processing actions in any given sensory type, and that integration would need a metronome of sorts to determine how it was proceeding and why.

If I ask you right now what you are sitting on, your mind immediately shifts focus to the object in question; your skin immediately relays to you how hard or soft it is, whether it is itchy or smooth...yet prior to my asking the question you were focused on reading this text...the sensory reality of the chair you may be sitting on was skipped over and muted from conscious examination.

How does the brain do that? How does it mute sensory inputs that are incoming in parallel? My answer is that it has to be salience: every little experience is constantly being judged for its importance...primarily the importance is sentimental, and thus tied to what feeling is associated with the thing in question, but sentiment is often a proxy for the deeper reasons for our doing things, which are purely autonomic.



Would you be reading this passage, digesting these ideas efficiently, if you hadn't eaten in 3 days? Could you do the same if the room you were sitting in was at an unbearably cold temperature and you had no clothes on?

The prioritization of autonomic need above sentiment with regard to ideas we happen to be evaluating is all the clue we need to realize a) how important that need is in driving cognition and b) that without it very little would get done.

Think about it: here we are devoting our time to thinking about (in my case) and reading (in your case) this write-up because we have *leisure*, enabled by the previous satisfaction of prior autonomic requirements. You likely would not sit down to read an article in a room set to -20 F in your shorts, nor would you likely do so after 3 days of not eating solid food. Autonomic drivers become dominant factors that completely short-circuit our ability to submit to leisure activities.

So what about emotion? Sentiment sits atop autonomic modulation at a finer resolution of assessment. Emotion inherently implies a sense of choice that response to autonomic variation does not. If you suddenly found yourself sitting on hot coals you would not consider whether you should get up rather than continue reading this article; you would *unconsciously* jump out of the seat as the pain signals from the hot coals stimulating your skin override your cognitive processing circuits. The dynamic nature of your cognition would be biased by the pain signals even over meaning....in fact, at that point meaning would be irrelevant; you'd just want to stop the pain at all costs.

Under lower levels of autonomic stress, emotional modulation helps make decisions which can be tolerated in one way or another based on how those choices' outcomes *in the past* panned out. You may, for example, be crossing the street and notice a dog further down the road; past experience with dogs on the road may lead you to reverse course and go down another street, routing around the path, or it may lead you to give the dog a wide berth but stay on the same road. An emotional salience factor, fear, coupled with the cognitive exercise of finding a means of eliminating or reducing that fear, gives you a range of choices. You don't have to reverse course and you don't have to keep going; the fear salience level determines that, and it does so based on what experience you had in the past.
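
A toy sketch of that kind of fear-weighted choice is below. The thresholds and the mapping from fear level to route are invented placeholders; the point is only that a learned salience level, not a fixed rule, selects among the options.

def choose_route(fear_level):
    """Map a learned fear salience level (0 = no fear, 1 = terror) to one of
    the options described above. Threshold values are invented placeholders."""
    if fear_level < 0.3:
        return "stay on course, pass the dog normally"
    if fear_level < 0.7:
        return "stay on the same road but give the dog a wide berth"
    return "reverse course and route around via another street"

# The fear level would come from comparing the current sighting against stored
# memories of past encounters (e.g. ever been bitten?).
for level in (0.1, 0.5, 0.9):
    print(level, "->", choose_route(level))
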

It is known that some people have no sense of fear, or rather a radically reduced sense of it that is foreign to many of us. You may think such people are like superheroes, but it turns out they are prone to getting into accidents, because their brains are not associating with experience the very healthy skepticism that should attend certain life-endangering activities. Fear is good in this regard, not only because it aids survival but because, when evaluated as a contribution to salience determination on sensory input compared to past memory, it feeds cognitive dynamism. The brain moves on to other ideas on how to navigate away from the dog: you stop in your path, you evaluate escape routes...etc.

If I were to take you off the street and instead put you at the helm of a simulation where you are walking on a treadmill keyed to a virtual street, and I told you that your virtual body was impervious to dog bites, you wouldn't care about going around the dog; you'd plot your course and walk. Absent the real consequences (in terms of pain), and buffered from the emotional correlate (fear) normally associated with walking on a street as a dog approaches, your behavior would be modulated.

One at a time we could take away autonomic consequences (burning if you walk into a flame, freezing if you jump into an ice lake, starvation if you fail to eat) and the scope of our choices would balloon, all the while our reason for engaging in choice dwindles!!

Isn't that an interesting reality: in the limit, if we take away all consequence, we end up with no reason to do anything at all. Imagine a video game constructed this way; it would be something you'd likely play for a few minutes and then simply stop, as none of your actions would have consequences, and as a result you'd have an apathetic response (emotionally) to all interactions in that virtual space.

What does this have to do with the conjoined twins' cognitive state?

Everything. If it is true that the dynamism of the mind is enabled by salience determination in the body and emotional centers, then the hypothesis that the consciousness state of one twin could be affected by the body state of the other would be valid. An interesting experiment to test this hypothesis would be to stagger the eating periods for the twins....I'd imagine they are fed at the same time; staggering their eating times could reveal hunger induced from one body to the next through the connection of their conjoined brains and therefore their minds. It would be as if, in the video game example, I were able to strap you into a machine that could transmit a pain response to you if the virtual dog bit your character. Doing so, one would all of a sudden recover the associated emotional import factor tied to memory of dogs and possibly being bitten, because the physical consequences would be present. If one were able to engineer experiments to test salience-associated responses to other dimensions of stimuli, I'd predict very similar leaky assessments between the twins.

One's cognition would need to dial through the permutations of possible evasion methods rather than marching through the road as if Superman, as when no such signal was connected. Salience theory simply asserts that, to a finer and finer degree, we do things as driven by these physical and emotional cues in response to the comparison results between our incoming sensation and our past memory.

Tatiana and Krista stand as two minds, fed by a double set of sensory input sources but salience-modulated by two bodies, one distal and one proximal...the distal body always contributes signal modulation to the proximal and vice versa, and thus their mutual salience modules are (I assert) homogenized. They are together but still separated, having individual experiences while sharing a common mutual one. This presents the opportunity for another hypothesis: they likely feel the same way about the same things (what is your favorite color? do you like the taste of custard? does this music please you? etc.), and as they age, the aspects of individuality that would present in other, non-conjoined twins simply will not present in them, as the unique way that their bodies are joined has also ensured that their mind(s) are joined in a dynamic cognitive dance of experiences that play as one music that only they two can hear. The article indicates that they have distinct preferences despite this, but these are subjective assessments, not double-blind ones...more rigor could be used to probe out interesting connections between their interests.

A unique opportunity for testing the limits of joined minds, at least so far as their particular connection is concerned, can be had here.

Links:


http://sent2null.blogspot.com/2011/12/how-does-idea-form-autonomics-memory.html

http://sent2null.blogspot.com/2012/02/with-completion-of-ada-action-delta.html

http://sent2null.blogspot.com/2012/02/when-your-smart-phone-comes-alive.html

http://sent2null.blogspot.com/2013/05/ada-on-road-to-dynamic-cognition-how-is.html

http://sent2null.blogspot.com/2013/02/on-consciousness-there-is-no-binding.html

http://sent2null.blogspot.com/2013/02/emotions-identity-crisis-in-our-brain.html


http://sent2null.blogspot.com/2012/03/integrated-information-does-not-equate.html


11 June, 2014

More wrong hiring practices being sold as gospel.

This article describes a supposedly effective question for separating the wheat from the chaff in highly competitive hiring processes. Unfortunately, for several reasons, such tactics filter out excellent potential candidates in favor of those with either an agenda (money!) driving them or a pathology (sociopathy!) driving them; you want to bias away from both types. Below are some insights I've gained from having been on both sides of the table.


Regarding "superstars":

The reduced pool of jobs (particularly as concentrated in particular cities) and the relatively large pool of viable candidates for those jobs are allowing companies to be very picky about whom they finally select, and that is making it extremely difficult to land a role purely on the merits of talent, *even for the superstars*, particularly in areas that are dense with the type of roles that are coveted for whatever reason.

NYC is a perfect example of that right now, one of the hottest markets in the world for pretty much any job you can name...the best congregate here like pigeons after bread tossed on a NYC street. Since there are so many *great* people here looking for those roles, competition is brutal...and since all candidates tend to have very similar technical qualifications, companies start looking at attributes *irrelevant* to the role simply as a basis for distinguishing whom they select.

Think about it: if you've got two identical candidates and one of them attended your university, why should he be the one to get the offer? If they are both technically proficient it should be a coin toss, but we know that mostly irrelevant attributes like college attendance, previous employer, mentor network, or championing of various social causes or concerns all of a sudden dominate when all the technical skills are at parity....which is rather ironic...as the hunt for the "best" skews away from the technical best, which is all the business cares about, and toward a technical *plus* personal "best" as subjectively determined by the interviewer. This can make it difficult for qualified applicants to ever get roles if, under highly competitive evaluations, they are being vetted on elements that lie outside of their technical expertise.

Of course the answer now to this problem is for the candidates to play the numbers, increasing their sample rate of possible roles until an interview process ends up with them as the offer recipient and not some other gal...this means more work for the candidate despite being qualified.

Regarding group interviews:

When given the rare chance to interview with a group, I always, always choose the group. More companies should employ that process, for several reasons:

1) Eliminates rejections that are due to a single interviewer, a single point of failure (SPOF), making a rash or bad judgement because of a personal opinion.

2) Allows the candidate to parry multiple questions and be judged simultaneously by all interviewers, letting the perception of good performance with one interviewer rub off on the others.

3) Is generally faster than one-at-a-time interviews, as there is no redundant question asking per interviewer. This allows faster consensus on whether or not the candidate is the right one, faster return of the interviewers to their possibly heavy workloads, and faster time to the next interview or to an offer.

4) Tests presentation capability to a degree, as multi-person interviews test many of the same skills one would apply when giving a presentation or speaking publicly...if this is important to the role, it can be vetted straight away.

As for Tejune Kang's method for seeing who really wants the gig...it's a bad filter. Ultimately, if you are coming in for a role and are not independently wealthy, you are doing it for the money...everything else is theater and flashing lights. The candidate who won't do the alpha thing and defend themselves isn't any less worthy of the role than the others who do.

It's just different people (some of them sociopaths, with an "advantage" during the interview that can lead to incredible obstruction and discord once they get a fiefdom in the company) doing the monkey work they think they need to do to get the offer...does that mean the zeal expressed will transfer to on-the-job zeal??

Absolutely not...maybe for the guy who is doing it for little or no pay...but for everyone else the "green agenda" (money!) is the puppet master behind the whole marionette routine they/we do to land a new gig. The need for more efficient methods of vetting candidates on technical merits, independently of cultural merits, and for subtracting subjective elements from the cultural ones (company culture, not interviewer culture), is still present...the recruiting industry is a multi-billion-dollar juggernaut that is mostly playing a shell game rather than a match game, and that needs to stop, especially as more and more candidates vie for fewer job roles.