27 February, 2013

On Consciousness: There is no "binding problem" (period).


Over the last few years, as I privately worked on extending the Action Oriented Workflow paradigm to include an implicit workflow capability, I had to do a great deal of sampling across work in neuroscience, comparative evolutionary brain history, and current machine learning algorithms and approaches, in order to survey the landscape and understand, from a holistic vantage point, how to solve the problem.

In AOW as originally designed, workflows were constructed manually, allowing prospective User agents to serve on a "Stage" where they could or could not perform a requested "Action". The Actions were the atomic eight I'd identified when originating the paradigm; they are in fact the basis of the eight-pointed Summa Star logo that defines the AgilEntity platform, which implements AOW.

At the time (2004), the systems available for building workflows for human-to-system-to-human business processes were needlessly complex, requiring code written in languages like BPML and other grammars. The solutions in place were overcomplex, in my view, mostly because they approached the problem from an application-specific perspective rather than a general one. AOW eliminated that tedium by allowing manual construction of workflows and by enabling business objects to be designed and built into AgilEntity via extensibility, thus allowing an arbitrarily complex flow between object types and actions. But it was unsatisfying to me, and I wondered:

"Is there a way to have the system discover the best workflows automatically and route actions between the agents discovered?"

It was about 2006, and I was focused at the time on building a second proof-of-concept application into the framework (business-focused, web-based collaboration), so I set aside the task of extending AOW for a later date. That date came after I fired myself from McGraw Hill upon returning from Venezuela. In the intervening years I had mentally determined where I would modify the existing AOW code to provide what the system calls "implicit" workflow (as opposed to the "explicit" workflow of manual construction that was the default innovation). Implicit workflows would directly utilize what I'd learned about machine learning, but relied more on what I knew about *the brain*. I had always been fascinated by the workings of the brain, and its relative functional homogeneity (the cell types are mostly neurons and glia) was a strong hint to me of two things: 1) symmetry at many scales of operation, and 2) simplicity of generalized approach. However, when I started reading about the work others were doing in software to create artificial minds, I was boggled by the overcomplexity of the approaches.

Looking into a world of AI chaos

All types of mathematical models were used to try to propagate information through artificial brains the same way signals are propagated through electronic systems. Neural networks with finite input modulation and overcomplex statistical models rounded out the many approaches I'd read about, some with varied domains of success but all abysmally bad at general autonomous learning. In a moment of insight I realized that the solution lay in replicating the neuronal patterns of connection independent of any structural representation of the neurons. After all, what neurons ultimately do is remodulate inputs and outputs to other neurons; the core function is remodulation, to arbitrarily fine-grained levels, to other memory elements. This pointed immediately to the simple algorithm that is the basis of the Action Delta Assessment (ADA) evaluation that would occur in the "implicit" workflow extension to AOW. My ability to arrive at that algorithm was made possible by my deep understanding of how brains are built upon neuronal connections, and more completely by what I'd been seeing in the results of the bounty of fMRI studies being produced in the mid-to-late aughts, just as I was turning my gaze to the problem of extending AOW with an autonomous component.

Neuroscientists who don't code, or who never learn how computing systems operate, are deficient in their ability to understand how the mind works, because in computer systems we have created efficient systems for cognition. It was a computer scientist, Alan Turing, who in fact defined the limits of *ALL* possible forms of cognition.

On the computer side of things, computer scientists have been trapped in the mindset that cognition relies on fixed state transitions between known binary storage and processing elements (dumb!), when they should have been looking at the brain to see how it encodes information by simply reweighting values between dendritic connections to other neurons. There is nothing fixed about how neurons work; they are an n-scale mesh of possible ways to encode whatever is being shuttled in from the senses.
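A toy sketch makes this reweighting principle concrete. To be clear, this is deliberately not my ADA algorithm, just the textbook Hebbian idea that a "neuron" is nothing more than a vector of connection weights, and that learning is nothing more than nudging those weights toward co-active inputs:

```python
# Toy illustration of learning as reweighting connections (Hebbian style).
# This is a generic sketch of the principle, not the ADA algorithm.
def hebbian_update(weights, inputs, output, rate=0.1):
    """Strengthen each connection in proportion to input/output co-activity."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.0, 0.0, 0.0]
pattern = [1.0, 0.0, 1.0]  # a recurring "sensory" pattern
for _ in range(10):
    # response = current weighted sum plus an external driving signal
    output = sum(w * x for w, x in zip(weights, pattern)) + 1.0
    weights = hebbian_update(weights, pattern, output)

# Connections that carry the pattern grow; the silent one stays at zero.
print(weights)
```

Nothing in this sketch is a fixed state transition; the "encoding" of the pattern simply is the set of weights after exposure, which is the point being made above.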

Needless to say, as a computer programmer who has written what I believe is the basis of a cortical algorithm that can emerge dynamic cognition (the term "artificial intelligence" is unnecessarily anthropomorphic), I am not surprised at all at the weakness of the citation-flow connections between the research areas of computer science and neuroscience, as shown in the diagram below:

The binding problem that doesn't exist

Someone sufficiently at play in both playpens of computer science and neuroscience would quickly conclude that there is no "binding problem". When I first came across this silly idea about three years ago (mostly from philosophers, who also rarely take the time to roll up their sleeves and actually BUILD ANYTHING) as I befriended philosophers on the social networks, I thought it must surely have been a joke, or that I simply didn't understand it. Well, I understand it, and it is a view focused on a dualist perspective with zero evidence to substantiate it. It is also completely ignorant of the fact that we have simulated consciousness already: the graphical user interface you are using to read this is a visual metaphor for the consciousness of your computer. It is a seemingly dynamic, ever-present area for presenting symbolic representations of computing structures, created in an ad hoc fashion to enable you to interact with the system. This is a precise analog of what consciousness is for living agents.

The illusion of a "binding problem" arises because the separation between real experience and the possibility space of connections between neuronal elements (called a qualia space in neuroscience research) has been troubling to philosophers in particular. They ask: how is it possible for the real experiences of people to match the objective representations of our experiences as encoded, in seemingly ad hoc fashion, by the varied connections of the brain? This, however, is the wrong question. The cortical algorithm, because of its simplicity, gives rise to a great deal of homogeneity *between brains*, homogeneity that is ignored in that common bit of mathematical legerdemain that makes our brains such good pattern-finding tools.

So long as, in the aggregate of all the necessary modulations between connections that define some experiential pathway, there is correlation between different brains, similar experience will emerge. It really is that simple. To ask why my blue is the same blue as yours misses the point that, cognitively, we are in a sense "tuned" to recognize similar "blue". It is only via extensive modification to our neural systems' function that we can effect real changes to this comparison process. How? Give one person a psychoactive drug and compare their perception of color to be convinced of this. I touched on this idea, critical to my finding a solution, in a blog post from a few years back.

What does this have to do with the so-called binding problem? If there is sufficient variability to encode nuances of similarity using neuronal connections, such that two people can look up in the sky and see bears in star patterns, there is surely enough variability to encode similarity in other modes of perception. "Blue" is not "bound" between different experiences; it is "bound" by *mostly* similar connection patterns in the pathways of visual processing that lead to "blue" being perceived. This is a de facto truth: we know how the eye processes light, we know the importance of neurotransmitters to relaying that information to the visual cortex, and we know how the perceived image is broken down for processing by the visual cortex. All of these actions happen within a pathway cone that is, in bulk, nearly identical for every person, until something (the aforementioned psychoactive drug) dramatically changes a portion of the pathway and thus necessarily changes the "perception". If it is so easy to change such perception, then nothing is really "bound" at all.

The other argument against this idea was alluded to by the desktop example above. It is an argument by analogy, but one that only those aware of the function and construction of computing devices would appreciate (as an electrical engineer, I was privy to that knowledge years ago), and it stands as a very strong analogy between the brain with its so-called conscious experience and computers. Those who assert that there is a "binding problem" in the brain would have to explain why there is no place where "blue" resides in a computer's registers, nor any place where "icon" or "window" or "folder" reside. They are abstractions given different visual form on different systems, nowhere "bound" internally to the system, yet carrying identical meaning across systems (though with variation via types; there goes our pattern-finding brain being awesome at what it does again). It is entirely too easy to get lost in pattern finding and thereby expect some "binding" point, but as explained earlier, that is a superfluous aim.

Finally, the concept of temporal flow has been left out of the work of people trying to build cognitive agents. Systems in the works use pattern-matching algorithms, and only recently are some starting to distribute pattern finding (by using distributed sensation in robotics). Beyond that, the conscious mind is not a static environment: like the desktop, which only appears static, it is a dynamic, moment-to-moment recreation of constantly moving electrical signals and register state changes. The conscious brain is likewise a constantly moving landscape of abstractions over physical elements experienced externally, restored from internal memory, and evaluated via emotional and autonomic import modulation. The flow of consciousness must be simulated, and in so doing the idea of a "binding" problem again falls apart. If consciousness is a roiling sea of ideas arising from real-time analysis of the world, it is only by emulating the same roiling sea using non-biological means that we can emerge dynamic cognition of a similar sort.

Here's my set of posts covering consciousness as popularly discussed by some "experts" in the independent thought islands of neuroscience and computer based artificial intelligence.


I spent the last two years building and testing my algorithm, and I know it works because I have tested it extensively. I don't yet know whether I can tie it together in the multidimensional ways necessary to build a dynamic cognitive agent, but I am fairly sure I can (thank you, fMRI studies of the last four years!) now that I've compiled what I believe is the right state diagram to induce the engine into action.

I got here because I studied philosophy, mathematics, hardware engineering, evolutionary biology, and neuroscience, and I write code. I am sure those who also take these steps will travel the same path, especially now that fMRI studies are so clearly explaining how the brain is internally wired; but I am glad I got here before such pictures were available, on the strength of my exploration of the various germane disciplines. Yet another validation of the importance of cross-disciplinary study in illuminating new landscapes on the road to discovery.

25 February, 2013

Action Oriented Workflow and Social Oversight: Promoting integrity in the emancipated workforce

As I continue to plot the course of the launch of WorkNetz, I have been engaging potential clients in conversation about what Action Oriented Workflow does to emancipate the lives of workers. Usually, after hearing that the system is designed both to optimize an employer's ability to find the right worker for a given task and to simultaneously emancipate the workforce so that people can work on their own schedules, the next question is:

"How do you ensure that people are doing good work and not gaming the system?"

The answer to this question is very simple and has been in effect as far back as 2007: social oversight. Using a social network to embed the discovery of completed actions in a timeline feed allows completed work to be judged by the peers of those performing it. Social oversight is what prevents employees or workers from performing shoddy work, since workers who do so will be socially shamed. Workers who perform well, on the other hand, will be more likely to have the same type of work routed to them (reducing their action delta evaluations for those types of work and thus being merited more highly by the ADA algorithm subsequently), and they will be publicly pointed to as people of high integrity for executing a given task.
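To illustrate the routing side of this in miniature: the real ADA scoring is more involved than what follows, so treat this as a stand-in sketch in which a lower average action delta simply means better past performance at an action type:

```python
# Hypothetical sketch of merit-based routing; the actual ADA algorithm is
# proprietary, so "lower average delta = better" is an illustrative stand-in.
def route_action(action_type, agents):
    """Pick the agent with the best (lowest) average delta for this action type."""
    scored = [
        (sum(a["deltas"][action_type]) / len(a["deltas"][action_type]), a["name"])
        for a in agents
        if action_type in a["deltas"]
    ]
    return min(scored)[1]

agents = [
    {"name": "alice", "deltas": {"edit": [2.0, 1.5]}},  # consistently strong history
    {"name": "bob",   "deltas": {"edit": [6.0, 7.5]}},  # weaker history on "edit"
]
print(route_action("edit", agents))  # routes the next "edit" action to alice
```

The point of the sketch is only the feedback loop: performance history feeds routing, and routing feeds the public record that social oversight then acts on.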

Moreover, social oversight can be embedded with particular attributes that make it hyper efficient:

Social Normalization: 

Invisible eyes promote good behavior; bad behavior is always seen. When people know they are being watched, they tend to behave better, or at least in alignment with what the observing group deems "better". Many studies in psychology back this up, and using it in a business context enables the power of this normative social control to be applied toward ensuring that workers act in the best interests of the business and themselves, in a manner optimal for both.

All behavior can be discussed, and all discussions can be subscribed to. The power of discussion threads, common to social networks and business social networks alike, is that in-process work can be collaborated on in real time, without a specific separate "collaborate" action taken by the committing or delegating agents. Since AOW guarantees that actions taken against business objects broadcast to the worker's social group via their work feed, it guarantees signalling to those in that circle who can flag errors or provide insights that modulate the process. Possible types of bad behavior in such systems include "gaming", for example attempts to collude for artificial delta aggrandizement, but social normalization shines a light on such attempts and enables the social shaming that nips them in the bud. Good behavior is further promoted by using incentives as persuasion.

Incentives as persuasion:

Good behavior can be further promoted by using "Accolades", the business equivalent of "likes" but with power applied directly to modulate bonus dispensation. One can imagine a system where key stakeholders designated in the workflow have the ability to grant accolades to tasks completed by agents; separately, these accolades can be set to trigger automatic bonuses when certain numbers are dispensed to the agents. Work can be valued directly for the merits of its completion, beyond just the fact of its expedient and correct commitment.
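Such a trigger is simple to wire up. The sketch below is an illustration of the idea only; the names, threshold, and bonus amount are hypothetical, not the actual WorkNetz implementation:

```python
# Hypothetical accolade-to-bonus trigger; names and values are illustrative.
from collections import defaultdict

BONUS_THRESHOLD = 5    # accolades needed to trigger each bonus
BONUS_AMOUNT = 250.00  # dispensed automatically on each trigger

accolades = defaultdict(int)
bonuses_paid = defaultdict(float)

def grant_accolade(agent_id):
    """A stakeholder grants an accolade; a bonus fires on every Nth one."""
    accolades[agent_id] += 1
    if accolades[agent_id] % BONUS_THRESHOLD == 0:
        bonuses_paid[agent_id] += BONUS_AMOUNT

for _ in range(12):
    grant_accolade("agent-42")

print(accolades["agent-42"], bonuses_paid["agent-42"])  # 12 accolades, 2 bonuses
```

The design point is that the bonus rule lives beside the workflow rather than inside it: stakeholders only grant accolades, and compensation follows mechanically and transparently.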

Another powerful incentive is gamification. In social games like Farmville and others available on social networks, the idea of virtual currencies is so powerful that it has enabled companies like Zynga to become billion-dollar companies without providing any real good or service other than the satisfaction of being better at growing virtual plots of land. The sale of virtual goods on those services, coupled with the innate social incentives of competition under observation by one's peer group, is enough to get real people to pay real money for virtual goods so they can achieve a virtual victory.

Applying this concept in the business world offers an even greater stickiness factor, as the agents would be competing for real value in terms of actual compensation and bonuses (or goods such as vacation days or gifts). Using game boards to publicly compare agents with similar functional roles or levels of achievement would make it incredibly compelling for workers to stay on task and focused when they are working. Combined with the social normalization elements, they would also focus on doing the best job and would avoid attempts to game the system for fear of social shaming.

Open Performance History:

Another extremely important aspect of social oversight is that, over time, there is a visible and public history of the performance of agents at particular tasks and subtasks. This history would be deep, and inspectable by all permitted and socially connected peers. Social performance histories serve as a basis of pride for those with great records, while serving as a basis of aspiration for those with not-so-great records. The rising tide of social oversight essentially lifts the boats of every agent, in a multiply efficient synergy of the elements indicated above.


Social oversight is uniquely effective when coupled with the innovation, provided by Action Oriented Workflow, that no agent *has to do* any of the actions that appear in their action inbox. This blog post explained that innovation, but it is important to realize that as the Employer is now able to range across a vast pool of potential agents, and as ADA is able to globally assess their action deltas in real time for any work dispensed, the employer becomes more efficient as the pool of candidate workers grows. At any given time, a larger percentage of this pool of workers is likely to be *available* and willing to do work; combined with the social oversight elements indicated above, this creates a new level of optimization, across multiple levels, that had previously not been possible.

Further, since the worker agents are able to range freely across the value landscape of things they can potentially do for multiple potential employers, they have a strong incentive to contribute to tasks only when they are most desirous of doing so. Of course this varies across different types of business processes and verticals, but the point is this: for any existing business process or workflow that involves human-system interaction, an AOW-enabled system will be able to optimize that process in a low-cost, transparent, and efficient fashion that has not yet been possible with any other automated process.



19 February, 2013

1905: Annus Mirabilis - Brownian Motion

In the second of this series of posts (the first covered the photoelectric effect) on the groundbreaking advances made by Albert Einstein, we will discuss the incredible phenomenon of Brownian motion. It may seem that this phenomenon didn't have the revolutionary muscle of the other discoveries of Einstein's great year, but that is an illusion. We first need to understand what was known about the subatomic world at the time.

Basically nothing.

There was much conjecture about what the world was possibly made of, and amazingly, through the work of the alchemists, humans gained remarkable blind facility with creating new molecules from their very scant understanding of how elements could be mixed in measure to induce various reactions. But little was really known about what exactly matter was made of.

Of course, going back to the Greeks, an idea of what matter was made of was given by smart people like Democritus, who stated:

"The more any indivisible exceeds, the heavier it is."
Well, that settles the matter, doesn't it? Not really. The conception of atoms that the ancients had was a bit different from that put forward by modern thinkers, but the general idea of spherical elements interacting in large numbers to constitute macroscopic materials is clear. The problem was that no one was able to *prove* this was so. Even Newton used the conception only so far as it was useful in creating measures to describe his idea of optics, and that didn't rely on any real understanding of light being made up of particles (or, as he called them, "corpuscles").

A bit later, the Roman Lucretius wrote this incredibly prescient statement:
"Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible."

However, this is not quite correct, as the chaotic motions of dust particles are controlled more by air currents than by the bombardment of individual atoms.

Nearly 2000 years later, J.J. Thomson added some solidity to the idea of atoms by harnessing electrons, which we know today are part of atoms and are the constituent particle of electrical current flow. He won the Nobel Prize in 1906 for his work describing the ratios by which current flows could be deflected using electric fields.


"Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the "plum pudding" model—the electrons were embedded in the positive charge like plums in a plum pudding (although in Thomson's model they were not stationary, but orbiting rapidly)."

However, note that he didn't win that prize until after Einstein's miracle year. It's difficult to say exactly why, but in many ways Brownian motion wasn't just about determining that atoms existed; it was pretty much agreed that they did. Formalizing how their masses varied, and how that could be inferred from group dynamics, was wide open. Thus the real power revealed by Einstein's theory is summarized by this passage in the Brownian motion article on Wikipedia:

"But Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law."

So the power of Einstein's theory was that it used thermodynamic means to infer atomic presence and attributes such as mass. So what?

Thermodynamic analysis allowed Einstein's theory to refine the methods by which chemistry could measure the size of molecules of various types.

"This result enables the experimental determination of Avogadro's number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces."
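To make that inference concrete: Einstein's relation gives the diffusion coefficient from the observed jiggling (the mean squared displacement grows as 2Dt), and the Stokes-Einstein relation D = RT/(N_A * 6*pi*eta*r) then yields Avogadro's number. The measured values below are illustrative stand-ins for a Perrin-style experiment on half-micron particles in water, not historical data:

```python
# Worked sketch of inferring Avogadro's number from Brownian motion.
# Measured values are illustrative stand-ins, not Perrin's actual data.
import math

R = 8.314      # gas constant, J/(mol*K)
T = 293.0      # temperature, K
eta = 1.0e-3   # viscosity of water, Pa*s
r = 0.5e-6     # suspended particle radius, m

msd = 5.15e-11 # observed mean squared displacement, m^2 (illustrative)
t = 60.0       # observation time, s

# Einstein: <x^2> = 2*D*t  ->  D from the observed jiggling
D = msd / (2 * t)

# Stokes-Einstein: D = R*T / (N_A * 6*pi*eta*r)  ->  solve for N_A
N_A = (R * T) / (6 * math.pi * eta * r * D)

print(f"D = {D:.3e} m^2/s")
print(f"Inferred Avogadro's number ~ {N_A:.2e} per mol")
```

With these stand-in numbers the inferred value lands near 6e23 per mol, which is exactly the kind of macroscopic-to-microscopic bridge the quoted passage describes.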

This is a *huge* result, as it allowed molecular chemistry to proceed at a pace it had never before achieved, by applying these methods to determine precise measures of the components and percentages necessary to create new molecules. It would be at least another 20 years before the full truth of atoms and their chemically important subatomic constituents would be revealed, but explaining Brownian motion took chemistry mostly from a guesswork science to one of precision. The '20s, '30s and '40s stand as testament to the revolution enabled by understanding, at a molecular level, what atoms were doing and how they could be combined.

Companies like DuPont, Bayer, BASF, and Dow Chemical should ring a bell, as many of their innovations in the '30s and '40s that fueled the war efforts on both sides of the planet were induced by advances in artificial molecules, made possible by the more refined chemical fidelity that came from fully understanding the interactions of atoms. Nylon, polyurethane, and polyester exist because of this work; considering that you are likely wearing clothes containing one of these substances as you read this, it stands as testament to how far-reaching Einstein's theory was.









12 February, 2013

Inventing the future, the relentless creator's choice.

In a blog post by Erik McClure:

"The things I see in my head, I have to make them happen. I have to make them real. Somehow. I'll invent a new precedent for 3D graphics rendering if I have to. Don't even try to tell me something is impossible, because even if you convince me that my ideas are insane I will try to get as close to them as I can anyway. It doesn't matter if I am destined to fail, or chasing an impossible ideal, because without it I have no reason to live, and can derive no joy from my existence. If I get a normal job, I rightfully believe that it will end up killing me, physically or mentally. Even if I don't get a normal job, I am now terrified that if the great Aaron Swartz was somehow driven to suicide by an unrelenting, hopeless reality, my idealism stands even less of a chance. No matter how frightened I become, my imagination remains an unrelenting torrent of ideas, slowly eating away at my mind. It will never let me stop trying. Ever. "

-- What an honest insight into the creative demon, and a feeling I shared with him until about four years ago. I had been working on the AgilEntity framework for five years and was approaching the point of needing to do something with all the technology I had built. Like Erik, I had built my own versions of products which, in their available forms, were either woefully inadequate for what I needed to get done or simply didn't exist. I wrote my own persistence engine before there was Hibernate; I shifted from a 3-tier MVC architecture to a monolithic, scaled and clustered 2-tier architecture because it is more efficient and manageable over time; I built an abstraction API for interfacing with any DB vendor without having to touch business logic code. Much of my technology is still safely tucked away inside the AgilEntity core classes, and around 2006 I was very much lost in thought as to what I would do with the framework I'd created.

My first attempt turned out to be low-hanging fruit for my platform at the time. The Web 2.0 hysteria was in full swing in 2006, and I had already built collaboration technologies that no one else had. When Facebook and Myspace were playing around with the simple idea of static and periodically updating feeds, I had already done web-based IM and group chat with real-time file sharing; when Facebook introduced Timeline, I had had a similar feature in place in AgilEntity for three years. For me, creating these solutions was an efficient way to solve the social problems I identified in the enterprise (as opposed to solving for the consumer space, as FB is trying to do), and such solutions just made sense for a business collaboration site. The services I ended up finishing were ready to present to the world by early 2009, and I came up with my first idea for a startup under the cognitively dissonant name "numeroom". As a portmanteau of "numerous" and "rooms" it made sense; my idea was that a real-time collaboration chat room would be more powerful for businesses than anything else, and I had social proof in that similar ideas were being approached by companies that emerged at almost the same time: yammer.com, meebo.com, userplane.com. They were all Web 2.0 attempts at doing real-time collaboration, but they all did it wrong (and still do).

So in 2009, after working on my technology in isolation, I started hitting the startup scene in NYC, which was just getting the buzz of becoming a major hub for startups. I did a couple of presentations about the Numeroom technology, but the air had been sucked out of the room by then; there was no real differentiator from the other players, all of which had big VC money behind them, even if my software was YEARS ahead of theirs in features and usefulness to the user, especially as a business social network.

Last year, I watched yammer.com get sold to Microsoft for just over a billion dollars. At the same time that I felt excited that a social network for business, the idea I first had in 2004 and had feature-complete by 2007, could sell for that amount, I felt sadness that it wasn't numeroom. I'd shuttered the site in alpha mode in late 2009 after running out of funds. I knew nothing about running the marketing and sales aspects of a business, and realized it was far more work than I could handle while also needing to survive. The funds I'd saved up to perform this grand experiment had been completely used up, and I had no choice but to put my dreams on hiatus.

I was very much at the crossroads Erik seems to be at in his post, but I did not give up; I realized that I had a unique ability to materialize my dreams into reality. Eight years before, I had no idea if I could code the platform into existence. I was new to coding in Java; I had never completed an application that required being built to an executable; I knew nothing about how to bootstrap the application or what technology to use; and I had no idea how I would manage all the objects in all the classes that could be added to the framework for all types of businesses. It was a daunting challenge, and one I set out to methodically achieve.

Erik mentions the wild scope of his imagination; I say one should give in to it. In 2004, when I had completed the major aspects of the platform regarding the class structure, how the platform could be extended, and how all objects could be managed and transformed using XML, I set out to attack the truly major problem of workflow. How does one manage all the possible users for all these business objects across all the types of businesses they could be useful in? What was the most efficient way to solve this problem so that the framework could continue to grow in class types but not in the complexity of managing them all? I worked on the problem for nearly a year, and the end result is the Action Oriented Workflow paradigm, a completely novel way of looking at business interactions between systems and users that makes the "action" the first-class actor in the process, not the user and not the object.

It turns out that this was the most novel and revolutionary of all the technologies I'd created in the framework, and it was something I did out of necessity in the 2004-2005 time frame, just before starting to work on the collaboration ideas. All of those were low-hanging fruit; people were doing similar things, albeit with far less efficient frameworks. It didn't matter though, as in business what matters more than who is first is who is believed to be first and best in a given space. When meebo.com and userplane.com were out there making money and collecting customers, numeroom.com was still in stealth mode, even though it had far superior features and more scalable technology.

This is the great lesson I learned in the years after the failure of numeroom: that getting out there to own the new technology was critical...striking while the iron was not just hot...but while I was the only one with an iron to strike. This is the lesson I put to work in late 2010. I'd spent half the year working at McGraw Hill, and that experience provided the flash of insight necessary to realize that when I decided to launch numeroom.com I was focusing on the wrong technology. My experience at McGraw Hill was a bit of a nightmare; I was first forced to work on a development team that was utilizing the buzzword "agile" process for software development. I am no friend of the use of the word "agile", part of the name of my framework, to describe this process...for one, it is completely misapplied. It should instead be called "micromanagement for software developers", as the net result is an attempt to put creativity on a clock....which any creative person knows is insane.

Software engineers are creative people, we are artists....we can't be tasked to solve complex problems on a schedule of 2 week iterations unless what is desired are inefficient patches...the net result being one big Rube Goldberg machine. As an artist I long ago embraced the percolating nature of solving big problems; my blog reads as a history of these realizations going back to 2008. I often saw the task of writing code and creating art as identical in process as well as creative tempo. Apriority LLC, the company I founded to perform the development of AgilEntity, has as its slogan "taking the time to properly design"; this is something I have always believed in, and it has returned results far beyond the design time taken. The longer you think about a problem, the more likely you will find patterns of symmetry or asymmetry that can be taken advantage of to significantly reduce complexity. This is the process that led to the invention of the lazy request routing algorithm that allows AgilEntity nodes to scale while dynamically shifting and sharing load between nodes.

This is the process that led to the invention of the Action Oriented Workflow paradigm and the abstract Entity management and workflow system it enables, which allows action to be the center of business process flows. It is also the process that, after I quit my job at McGraw Hill over being unfairly pressed to tasks that I felt would not have existed at all had I designed the system in the first place, led to the extension of the Action Oriented Workflow paradigm to include an implicit workflow technology that utilizes a machine learning approach to perform autonomous routing of actions between users on the system. After quitting I worked on the solution, which fortunately built on symmetries that existed in AOW and made it relatively trivial to write the Action Delta Assessment algorithm...which in its fractal function may be the basis of a cortical algorithm I have set out to get working after I launch the new startup now in stealth called WorkNetz. This is the revolution of the entire arc of my work on the framework, the unique kernel of creation that no one on Earth can claim as theirs before my invention of it and, more importantly...at this moment my technology is the only working example of it functioning in a general system.

So to Erik I say: keep plugging at your tools, keep refining the solutions you've created into a holistic vision of "works better than anything else available", and then be sure to get on top of the mountain and shout to all with ears to hear that you are asserting ownership and opening shop to evangelize what you've created, because it is better than whatever existing ways those things had been done...or even better, it can eliminate the need for some things to be done at all. I am a bit down the road from you...having tasted failure, but I've also been refined by those experiences to plug away at the most unique elements my technology has to offer. I am focused on bringing these technologies to the world this year; fail or succeed, you'll have my example as a guide to keep you on the path to your success as I continue to navigate the path to mine.





10 February, 2013

the future is not, you choose: travel in a genetically enhanced future II

August 19, 2483

Afusa was smiling, though he'd been "on duty" for the last 3 hours to watch his son in the crib. He was too busy enjoying the little motions of the tyke as he bobbled around with his mobile. Xuǎnzé (which means "chosen" in Mandarin) was his 8th son; one of the benefits of being a principal astronaut for the ESA is that the rules regarding progeny for super mortals are a bit relaxed. When Afusa had his first son, back in 2142, he and his first partner were both first parents...after over 100 years of the worldwide standardization of revigoration technology the human population had changed in some very important ways. For one, the individual laws of nations determined where it was best to have children; some countries outlawed super mortals from engaging in reproduction for the obvious reasons....others, which had negative population growth, allowed it in intervals, giving super mortals who had been hoping to raise children the ability to do so while simultaneously continuing to take their revigoration treatments.

Some trouble did emerge over these laws in some places, the riots in Mumbai in 2093 being of note, but that is the past; today the EU, the AU and the AFU had long ago standardized the law on how super mortals could have progeny, and the rules were mostly the same. Super mortals could engage in child rearing for the most part if they did not take revigoration treatments during the time that they were raising a child....this had the effect of forcing people to space their children in time, which depending on whom you ask could be a detriment or a benefit. Those that wanted homes filled with children were out of luck, but others who were fully focused on doing their best job raising one child at a time didn't have any issue with the policies. This tolerance varied culturally, as the Indian riots indicated, but that soon went away as the wisdom of these laws became obvious to the population.

It had so far been 8 years since Afusa's last revig. Though he now had an infant son to raise, barely 6 months of age, he also had an 8 year old girl, Ondarea; he and his partner chose to forgo revigoration for 25 years so that they could raise these two new children.....for Afusa, as mentioned, the son was his 8th and Ondarea was his 5th daughter. When he had his first, Kanzunetta, he was returning from a long trip to an exoplanetary system...his first to a planet; the year was 2193. His genetically enhanced mind easily remembers the trials and tribulations of raising the child of a super mortal; Kanzunetta had been a recipient of the standard pre-genetic screening and modulation protocols. Afusa and his partner decided that they would enhance the standard genetic factors and eliminate the risk factors for various types of old diseases....which rarely occurred in natural births in the first place, but better safe than sorry.

That first trip to Quat was a doozy; it was at a time when the ESA was limited to subluminal transit between stars, prior to the discovery and practical harvesting of negative energy density and the invention of the first practical Alcubierre Drives. He spent 26 years in transit there and back that first time; in 2483 the same trip is routinely being done in 3 months, superluminal. The ESA has vastly increased the number of known exoplanets that have been visited by super mortals...and by that I mean not necessarily SH's (standard humans); many of these planets have environments that vary from the kind that was conducive to the formation and development of life on Earth. When Afusa made his first trip to Quat, though, he was genetically modified in cognitive ways to enable mastery of the technical requirements of the trip; the ESA was in its early stages of exploring exoplanets and had restricted their initial missions to planets that were 90% like Earth or better as far as gravity, solar irradiance and water availability were concerned....but that changed quickly. The ability to modulate our genetics to suit the planets we found allowed us to basically design our explorers to best fit the locations we were visiting. Genetic adaptation to much higher levels of solar radiation, much lower levels of fresh water concentration, and varied concentrations of minerals and amino acids that SH's require to survive became par for the course around the 2440's. As an "old timer" Afusa remembers those days with a glint of nostalgia as he plays with the plastic rocket toy dangling from Xuǎnzé's mobile. He thinks to himself about the type of adventure his son will know over the course of his hopefully centuries long lifetime; will he trace the pattern of his father and 4 older brothers and become involved in space exploration in some way?
Will he remain on Earth and continue to enrich the planet with discoveries in cognitive engineering, or maybe he will be an artist and illuminate the fascinating connection between fractal geometry and physical structure through the design of fractal architecture, a new idea being explored by SH's and SM's in the last few years? Afusa smiles as his son shoots him back what seems like a knowing glare...in fact, because both are wearing cognitive transference devices...the child does know what his father is thinking and does understand...but at this moment he's as yet too undeveloped physically to put into action the fancy that rages in his young mind...he'll yet grow and do so...and from what Afusa can tell, he is destined to make him proud.


03 February, 2013

"Emotions" identity crisis in our Brain confirmed

A recent study has shed light on the distinction between what is normally described as the emotion of fear and other alert systems in the human brain that signal states of danger.


The research was designed to investigate the responses to stimuli exhibited by individuals with a degenerative condition in a part of the brain correlated with emotion, and in particular the emotion of fear...the amygdala. Urbach-Wiethe disease leaves these individuals with a characteristic inability to feel terror or angst at the sight of many events that are normally immediately traumatic to people without the disease. In it, the researchers discovered that the individuals indeed could exhibit fear-like responses, but did so when they were subjected to inhaled carbon dioxide.

In my thoughts and writing on human emotions and consciousness I have created a theory that emotions are only an import factor for the particular experiences that we have as sensed by our standard external senses. Emotions essentially let our neocortical or conscious mind know which current experiences have particular import; they color experience with personal meaning in order to guide behavior. I theorized that consciousness emerges as a dance of the interplay of 4 critical cognitive elements: incoming sensation and experience, comparison of sensation to stored experience (memories), evaluation of the import of the experience via emotions, and then guidance of behavior via the underlying autonomic internal drives which dictate physiological needs.

The results of this new research provide confirmation of the hypothesis that autonomics and emotion are separate systems, with emotions playing a supplemental role in the dynamic of weighing the import of external sensations. It lends weight to the idea that "fear" as we normally understand it, a response to potentially dangerous or perceived dangerous situations, is in fact a learned behavior, a choice of response that doesn't necessarily emerge from an autonomic driver. The fear-like response to inhaling carbon dioxide makes sense as a strong signal to modify behavior, as CO2 is deadly to life in high concentrations; it is best served by a very low level autonomic alert system that is triggered independent of an active association of fear induced by an emotion. This makes sense because carbon dioxide is colorless and odorless; unlike the things in the world that we can see are dangerous and learn to associate with a correlated emotional response, CO2 has no way to be detected other than the inability to breathe....thus when one is exposed to it, an innate response similar to a fear response makes sense as a means to modify behavior and seek to get away from the CO2 source and toward oxygen carrying air.

I posit that the other "emotions" can similarly be decomposed into learned response components and autonomic ones. For example, joy and happiness may be emotional responses that model the autonomic pleasure of orgasmic release. There may be decompositions of anger, envy and other emotions as well....in my work investigating the necessary components of dynamic cognition, the creation of a state diagram that would produce the conscious dynamo includes emotions and autonomics as weighted factors, enabling both to be modulated in fractional ways that can allow any emerged consciousness a wide variety of responses. I feel the resolution of these weights will determine the fluidity of the emerged consciousness to sensory dynamics so far as emotional response is concerned. This may be a key aspect of creating dynamic cognition, as controlling the innate response to external stimuli will be fundamental to allowing the emerged mind to be stable. I am looking forward to getting to work on coding the elements of the state diagram that I wrote down in 2011 in pursuit of building a dynamic mind, and this study gives me confidence that my ideas on the matter, at least where emotions as import factors are concerned, have indeed been on the mark.
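The "weighted factors" idea above can be sketched in a few lines of Java. This is only an illustrative toy under my own naming assumptions, not the 2011 state diagram itself: the learned emotional channel and the innate autonomic channel stay separate, and each contributes to an overall import score through a fractional weight that can be tuned independently.

```java
// Hypothetical sketch of emotions and autonomics as separately weighted
// import factors. All names are illustrative, not from the actual design.
class ImportEvaluator {
    private final double emotionWeight;    // modulates learned emotional associations
    private final double autonomicWeight;  // modulates innate autonomic alarms

    ImportEvaluator(double emotionWeight, double autonomicWeight) {
        this.emotionWeight = emotionWeight;
        this.autonomicWeight = autonomicWeight;
    }

    // A learned association (e.g. fear of a seen threat) and an innate
    // trigger (e.g. the CO2/suffocation alarm) are kept as separate
    // inputs, then blended into one import score that guides behavior.
    double importOf(double learnedAssociation, double innateAlarm) {
        return emotionWeight * learnedAssociation + autonomicWeight * innateAlarm;
    }
}
```

With such a split, a stimulus with no learned emotional association at all (like odorless CO2) can still score high on import purely through the autonomic channel, which mirrors the study's finding.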







02 February, 2013

Graduating from Programmer to Engineer: Benefit of doing total life cycle development.

A thread in the java forum on Facebook posed the following question:

"What do you think: is it better to learn
1. HTML and CSS
2. JavaScript
3. Database design
4. SQL
5. Java  "

First off, this question really applies to those that have the aspiration of doing what is called total life cycle development, people who wish to be generalists of the entire software development stack for an application, touching all aspects from class design to UI development. This is always good experience to have, but often developers are segmented into each tier, as each can be its own world of complexities and issues, languages and best practices. However, doing a project end to end can provide a great deal of the talent that comes with experience, the kind that turns a programmer of code into an engineer of systems. I fully believe that one's options for growth as an engineer and in the field are far greater once engineering skills are gained beyond the sandboxed mindset of "programmers". I have spoken on the difference at length in another post.

Having embarked on a large project over the last 11 years that originally started with my novice understanding of the Java programming language, I can say that it was far more instructive to start with the core elements of the business logic. That required that I fully understand the OO features Java could provide to significantly collapse the difficulty of coding a scalable, extensible, db agnostic, web based platform BEFORE any such platforms existed in the wild.

The bulk of the code that I wrote for persistence, scalability, security and workflow was purely de novo, as there were no third party solutions to plug in and use at the time (2001). This was a great situation, as it forced me to create what I needed without pollution from the ideas of other implementations, which could have detracted from my ability to design the optimal class hierarchy. The drawback, of course, was the design time required for each component, but that meant I could get practice with obscure aspects of the language that would cement engineering as well as programming mastery. (There is a big difference!)

So, I needed to learn the details of the reflection API, I needed to fully understand RTTI, and I had to realize the power of run time extensibility and writing a custom class loader. I had to fully understand how to use threads, control their populations, and segment their access to memory, disk and data resources without creating pathologies (deadlocks, races)....I needed to fully understand how these pathologies could emerge very differently under scaled and loaded conditions. I had to understand the impact of the network on the architecture, the impact of load on the union of the two, and address geographic load and scalability. In short, all the big giant problems of enterprise software development came into play over the course of the development.
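As a small concrete example of the deadlock pathology mentioned above (my illustration, not the platform's code): two threads taking the same pair of locks in opposite orders can wait on each other forever. The classic cure is to impose a single global acquisition order, here by a numeric id.

```java
// Two accounts transferring to each other from two threads can deadlock
// if each thread locks "its" account first. Always locking the lower-id
// account first makes a circular wait impossible.
class Account {
    final int id;
    long balance;
    Account(int id, long balance) { this.id = id; this.balance = balance; }
}

class Transfers {
    static void transfer(Account from, Account to, long amount) {
        // Impose a global lock order by id, regardless of transfer direction.
        Account first = from.id < to.id ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```

Under load this distinction matters enormously: the unordered version may run clean for months in testing and only deadlock once concurrent traffic makes opposite-direction transfers collide, which is exactly the "pathologies emerge differently under scaled and loaded conditions" point.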

I spent almost 2 years coding core classes before I even got to client code implementations in user interfaces. The db schema was designed in parallel with the object structure and served to cement my skill with SQL, and the need to support a variety of DB vendors enabled me to explore creation of a persistence API that was agnostic to vendor DBs. All of these experiences were of huge value, as they allowed me to encounter the sticky problems that are often in the purview of those that have been long in the field. You don't get these problems in cs 402 on advanced data structures...even if in some ways data structures are involved in solving those problems, they are only in a distal way...this is what I mean by engineering versus programming.
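The vendor-agnostic persistence idea can be sketched simply: client code depends on one interface, and each vendor's SQL quirks live behind their own implementation. This is a hedged illustration of the general pattern, not AgilEntity's actual API; the class and method names below are mine, and row-limiting syntax is used as the example quirk because MySQL and SQL Server genuinely differ on it.

```java
// One interface hides vendor-specific SQL; swapping databases means
// swapping the dialect object, never touching the calling code.
interface SqlDialect {
    String limitClause(int rows);
}

class MySqlDialect implements SqlDialect {
    public String limitClause(int rows) { return "LIMIT " + rows; }
}

class SqlServerDialect implements SqlDialect {
    public String limitClause(int rows) { return "TOP " + rows; }
}

class QueryBuilder {
    private final SqlDialect dialect;
    QueryBuilder(SqlDialect dialect) { this.dialect = dialect; }

    String selectAll(String table, int rows) {
        // SQL Server places TOP before the column list; MySQL appends LIMIT.
        if (dialect instanceof SqlServerDialect)
            return "SELECT " + dialect.limitClause(rows) + " * FROM " + table;
        return "SELECT * FROM " + table + " " + dialect.limitClause(rows);
    }
}
```

A fuller design would also abstract connections, types and escaping behind the same seam, but the principle is the same: isolate every vendor difference behind one interface.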

In summary, I think greater value can be found by starting with the core language, which ties almost invariably to understanding the data model (db); then later, when approaching the UI, the client coding issues present themselves along with the issues revolving around templating technology (jsp/php...etc.), dynamic page scripts (javascript/ajax), services (json/xml) and design (html/css).