Over the last few years, as the mainstream media has cottoned on to the previously quiet revolution in self-driving car technology that has been under way since DARPA's Grand Challenges in the mid-2000s, we've seen a flood of arguments insisting that ethics and philosophy are needed to deal with the supposedly dangerous ramifications of cars that drive themselves, namely dilemmas like the famous trolley problem.

I'll be blunt: there is no moral dilemma to address at all. Self-driving cars don't need to be that intelligent; all they need to do is know and relentlessly follow the law. Traffic laws define what *action* is legal in any given scenario involving other cars and pedestrians. Acting within those laws 100% of the time means never violating them, so knowing the laws and behaving to their letter, *even if that means killing people*, keeps you clear of at least the litigation. See China. In China a
A chronicle of the things I find interesting or deeply important, exploring four pillars of intense research: Dynamic Cognition (what everyone else calls AI), Self-Healing Infrastructures (how to build a technological Utopia), Autonomous Work Routing and Action-Oriented Workflow (sending work to the worker), and Supermortality (how to live to arbitrarily long life spans by ending the disease of aging to death).