
Code Evolution: An AOW approach to building a smart app.

What follows is a chronicle of a pattern of thought: one idea among many I've had for how I'd use an autonomous learning process to solve, in computer code, a problem that would otherwise be extremely hard without the methods I'm about to employ.

I am going to do it by introducing you to a problem I pondered yesterday while sitting on the Manhattan-bound 2 train. The problem?

How would I build an application that, given only the starting station and the direction of travel, can reliably indicate each subsequent stop along the path of a given train line?

It is an idea I've often pondered and solved with the obvious answers: either ensure the subway has wifi access and have each station report its identity to an application designed specifically to pick up that report and relay it to the user, or, a bit more involved but still effective, use the smart device's location sensors to sense the train's geographic position from moment to moment, cross-correlate against map data, and thus reliably indicate the station.

However, those solutions strike me as inelegant because they rely on data points that are superfluous from the perspective of a human being getting on a train. Assuming a human being knows only the station stops along a route, as indicated by some route map, is it possible to reliably infer subsequent stops, assuming no stops are skipped during the trip?

The answer is yes, and it relies on the fundamental advantage of applying multi-sensory analysis to creating learning heuristics for devices. In this particular case, imagine that we board the train at a particular stop, call it stop B. Our destination is a stop several stations down the line from B, indicated as stop L.

Let's assume we board the train blindfolded and are thus unable to visually inspect our position along the route once the train departs station B. This removes the real-time visual sampling that humans rely on, to more closely model the situational context of a 2013-era smart phone. Lacking a visual sense, can we still reliably predict stops along the route?

The answer is still yes and here is how:

1) Let's assume that we do have a map of the stops along the route in our head, so from B we know that stops D, E, F, H, and K follow before stop L is reached. Let's also assume stops G, I, and J exist but are skipped by the particular train we've boarded.

2) As long as we can hear what the train is doing, we can determine when it is at a stop versus when it is actually moving. This requires only a small number of sound samples to determine with high confidence, but on its own it is insufficient to *guarantee* that the train is stopped, so we need another disambiguating factor.

3) We can guarantee the train is at a stop if, in addition to the sound pattern of being still, we soon after detect the sound pattern of the doors opening. Doors opening guarantee that some stop has been reached.

4) Smart phones all contain accelerometers; by watching the accelerometer readout as it correlates with the sounds of train motion and the sound of doors opening, we can confirm that the train is engaged in embark/disembark at a stop. A sketch of the combined test follows this list.
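To make points 2 through 4 concrete, here is a minimal sketch of how the signals might combine on a phone. Every name, threshold, and the door_open_detector classifier here is a hypothetical assumption for illustration; a real app would derive its features from the device's actual microphone and accelerometer APIs.

```python
# Hypothetical per-window features; thresholds are illustrative
# assumptions, not measured values.
STILL_AUDIO_RMS = 0.05   # audio energy below this suggests the train is still
STILL_ACCEL_VAR = 0.02   # accelerometer variance below this suggests no motion

def is_still(audio_rms, accel_samples):
    """Points 2 and 4: a cheap 'probably stopped' test from sound
    energy plus accelerometer variance."""
    mean = sum(accel_samples) / len(accel_samples)
    var = sum((a - mean) ** 2 for a in accel_samples) / len(accel_samples)
    return audio_rms < STILL_AUDIO_RMS and var < STILL_ACCEL_VAR

def confirmed_stop(windows, door_open_detector):
    """Point 3: a stop is *guaranteed* only when stillness is followed
    shortly after by the door-opening sound signature."""
    for i, w in enumerate(windows):
        if is_still(w["audio_rms"], w["accel"]):
            # Look a few windows ahead for the door-open signature.
            for later in windows[i + 1 : i + 4]:
                if door_open_detector(later["audio"]):
                    return True
    return False
```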

Now, of the sensor points indicated in 2 through 4 above, only 3 guarantees the train is stopped, but 2 and 4 allow refinement of stop behavior around the door-open indications, which over time can yield highly accurate predictions of which stop the train is at over the entire route. Over time the pattern of accelerations, sounds of motion, and sounds of door openings will be highly consistent, so much so that the inter-stop heuristics will have very unique signatures (as an aside, station information would be encoded in those deep histories, making them even more unique). This is what happens statistically with solutions that apply statistical learning to predict words from swipe patterns on a touch screen: the deep history of swipe patterns becomes highly consistent with particular words and can therefore predict those words with high accuracy. This solution would do essentially the same thing, except it would use the deltas of difference, over the deep historical data set, between the three sensor dimensions gathered above to create per-station signatures for a given route. Coupled with the known start-station information, this could pretty much guarantee proper stop estimation, and I predict it would be effective *even when random stops are skipped* after some initially defined start station.
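Here is a minimal sketch of that delta-based matching over a deep history of inter-stop signatures. The feature vector and the history structure are assumptions for illustration; the point is the nearest-neighbour match against accumulated per-segment histories, which also handles skipped stops naturally, because a skip run simply becomes its own (from, to) signature.

```python
# Each station-to-station run is summarized as a small feature vector,
# e.g. (duration, mean acceleration energy, mean motion-sound energy).
# Real signatures would be richer; the nearest-neighbour idea is the point.

def delta(sig_a, sig_b):
    """Distance between two inter-stop signatures."""
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

def predict_next_stop(observed_sig, history, prev_stop):
    """history maps (from_stop, to_stop) -> list of past signatures.
    Given the segment just travelled from prev_stop, return the most
    likely arrival stop, even if intermediate stations were skipped."""
    best_stop, best_d = None, float("inf")
    for (frm, to), sigs in history.items():
        if frm != prev_stop:
            continue
        for sig in sigs:
            d = delta(observed_sig, sig)
            if d < best_d:
                best_stop, best_d = to, d
    return best_stop

# Example: from B the train may arrive at D directly (skipping C-like stops),
# so history holds signatures for both ("B", "C") and ("B", "D") runs.
history = {("B", "D"): [(95.0, 0.8, 0.6)], ("B", "E"): [(160.0, 1.1, 0.7)]}
print(predict_next_stop((98.0, 0.82, 0.61), history, "B"))  # -> "D"
```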

This type of problem is quite amenable to action modeling using the Action Oriented Workflow (AOW) paradigm and the Action Delta Assessment (ADA) action routing algorithm: the deltas of change would be built up for each station-to-station signature history and would, over time, uniquely identify patterns. Using AOW would have the unique advantage of not requiring any formal modeling. Once the "station" Entity is defined, it would be given an identification attribute and sensory attributes to correlate with acceleration, minimum train-motion sound, and door-open signature, each refined over statistical time every time that station is sampled.
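AOW and ADA live in my AgilEntity framework, whose actual API is not shown in this post, so the following is only a hypothetical plain-Python rendering of the "station" Entity idea: an identification attribute plus sensory attributes folded in each time the station is sampled, with a running mean standing in for ADA's delta accumulation.

```python
class StationEntity:
    """Illustrative stand-in for an AOW 'station' Entity; attribute
    names are assumptions, not AgilEntity's real schema."""

    def __init__(self, station_id):
        self.station_id = station_id       # identification attribute
        self.accel_signature = 0.0         # sensory attributes, refined
        self.motion_sound_floor = 0.0      # over statistical time
        self.door_open_signature = 0.0
        self.samples = 0

    def refine(self, accel, motion_sound, door_sound):
        """Fold in a new observation each time this station is sampled."""
        self.samples += 1
        n = self.samples
        self.accel_signature += (accel - self.accel_signature) / n
        self.motion_sound_floor += (motion_sound - self.motion_sound_floor) / n
        self.door_open_signature += (door_sound - self.door_open_signature) / n
```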

As for the reason for such an app?

I was thinking it might be useful for a tourist to figure out their stops on trains that lack onboard stop reporting, which unfortunately is still quite a few of the trains in NYC, where I live, and I am sure other cities have the same issue. As a practical business, after getting the technology trained and proven on a route, the difficulty would lie in that training process, which would require that sample data from route runs be added to the training set. Another difficulty would be getting the user to self-report their boarding stop (sometimes they don't know it). After a few stops the app may be able to guess its location, but people don't want a guess. It would be difficult to get the service right so that it would be useful, beyond the magic of training the smart device to know where it is underground. But it was just an example of how multi-sensory data gathering and delta gathering can make problems that seemed impossible for computers just a few decades ago as trivial as they are for human agents. This is the revolution currently happening as engineers armed with lightweight statistical learning approaches attack novel data sets using situation-aware devices like smart phones.

This illustrates a theory I proposed early in 2012: that smart phones would at some point be the ideal test bed for emerging fully aware dynamic cognition, because of their ability to mirror human sensory input points. The rest is about correctly capturing, comparing, and determining the salience of gathered data over time.

http://sent2null.blogspot.com/2012/02/when-your-smart-phone-comes-alive.html
