The zeitgeist of science fiction is filled with stories that paint a dystopian picture of how the human desire to build artificial intelligence can go wrong: from the programmed pathology of HAL in 2001: A Space Odyssey, to the immediately malevolent emergence of Skynet in The Terminator, to humans serving as energy stores for the advanced AI of The Matrix, and today to the rampage of "hosts" in the new HBO series Westworld. These stories share a common theme: they probe what happens when our autonomous systems gain a mind of their own and no longer obey their creators. But how can we avoid these scenarios while still bringing forth generalized intelligences that wield their superintelligence with the empathy and consideration we expect from one another? The prevailing answer is mostly hopeful: that the current methods used in machine learning, and specifically deep learning, will not give rise to a Skynet or a HAL. I think this i...
A chronicle of the things I find interesting or deeply important, exploring four pillars of intense research: Dynamic Cognition (what everyone else calls AI), Self-Healing Infrastructures (how to build a technological utopia), Autonomous Work Routing and Action-Oriented Workflow (sending work to the worker), and Supermortality (how to live... to arbitrarily long life spans by ending the disease of aging to death).