I am going to do it by introducing you to a problem I pondered yesterday while sitting on the Manhattan-bound 2 train. The problem?
How would I create an application that, given only the starting station and the direction of travel, can reliably indicate all the subsequent stops along the path of a given train line?
It is an idea I've often pondered and solved using the obvious answers. The first is to ensure that the subway has wifi access and enable each station to report its information to an application designed specifically to pick up the report and relay it to the user. The second, a bit more involved but still effective, is to sense the geographic location of the train from moment to moment using the location sensors on the smart device, cross-correlate against map data, and thus reliably indicate the station.
However, those solutions are still inelegant to me because they rely on data points that are superfluous from the perspective of a human being getting on a train. Assuming that a human being knows only the station stops along a route as indicated by some route map...is it possible to infer subsequent stops in a reliable manner, assuming no stops are skipped during the trip?
The answer is yes, and it relies on the fundamental advantage of applying multi-sensory analysis to creating learning heuristics for devices. In this particular case, let us imagine that we are getting on the train at a particular stop, call it stop B. Our destination is a stop several stations down the line from B, call it stop L.
Let's assume that we get on the train blindfolded and are thus unable to visually inspect our position along the path once the train departs from station B. This simulates the lack of the real-time visual sampling that humans have, in order to more closely model the situational context of a 2013-era "smart phone". Lacking a visual sense, can we reliably predict stops along the route?
The answer is still yes and here is how:
1) Let's assume that we do have a map of stops along the route in our head, so from B we know that stops D, E, F, H, and K follow before stop L is arrived at. Let's assume that stops G, I, and J exist but are skipped by the particular train we've boarded.
2) As long as we can hear what the train is doing, we can determine when it is at a stop versus when it is actually moving...this requires only a short number of sound samples to determine with high confidence, but on its own it is insufficient to *guarantee* that the train is stopped. So we need another disambiguation factor.
3) We can guarantee the train is at a stop if, in addition to the sound pattern of being still, there soon follows the sound pattern of the doors opening. Doors opening guarantees that some stop has been arrived at.
4) Smart phones all contain accelerometers; by watching the accelerometer readout as it correlates with the sounds of train motion and the sound of doors opening, we can confirm that the train is engaged in embark/disembark at a stop.
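The three cues in steps 2 to 4 can be fused into a simple stop check. A minimal sketch in Python, assuming hypothetical pre-computed feature values (an audio RMS level, a door-opening-sound detection flag, and accelerometer variance over a short window) and made-up thresholds:

```python
def is_at_stop(audio_rms, door_sound_detected, accel_variance,
               rms_still_threshold=0.05, accel_still_threshold=0.02):
    """Fuse the three sensor cues (thresholds here are illustrative only).

    - low audio RMS suggests the train is still (step 2)
    - the door-opening sound guarantees some stop was reached (step 3)
    - low accelerometer variance corroborates stillness (step 4)
    """
    still_by_sound = audio_rms < rms_still_threshold
    still_by_accel = accel_variance < accel_still_threshold
    # Only the door sound is a guarantee; the other two refine confidence.
    return door_sound_detected and still_by_sound and still_by_accel
```

In practice the thresholds would themselves be learned from the deep history of samples rather than hard-coded.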
Now, of the sensor points indicated in 2 to 4 above, only 3 is a guarantee that the train is stopped, but 2 and 4 allow refinement of stop behavior around the door-open indications, and over time that refinement can provide high prediction of which stop the train is actually at across the entire route. The pattern of accelerations, sounds of motion, and sounds of door openings will be highly consistent, so much so that inter-stop heuristics will have very unique signatures (side-station information would be encoded in those deep histories, making them even more unique).

This is what happens statistically with solutions that apply statistical learning to predict words based on swipe patterns on a touch screen: the deep history of swipe patterns becomes highly consistent with particular words and thus can highly predict those words. This solution would do basically the same thing, except it would use the deltas of difference over the deep historical set of data across the three sensor dimensions gathered above to create the per-station signatures for a given route. Coupled with the known start-station information, this could pretty much guarantee proper stop estimation, and I predict it would be effective *even when random stops are skipped* after some initially defined start station.
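Once per-segment signatures have accumulated, identifying the current stop becomes a nearest-match problem against that history. A sketch, assuming each inter-stop segment's deep history has been reduced to a hypothetical mean feature vector (segment duration in seconds, mean acceleration magnitude, mean motion-sound energy) and that the observed segment is compared by Euclidean delta:

```python
import math

# Hypothetical learned signatures for segments of the route from stop B.
# Real values would come from accumulated route-run histories.
ROUTE_SIGNATURES = {
    ("B", "D"): (90.0, 0.31, 0.62),
    ("D", "E"): (75.0, 0.28, 0.55),
    ("E", "F"): (120.0, 0.35, 0.70),
}

def match_segment(observed, signatures=ROUTE_SIGNATURES):
    """Return the (from, to) station pair whose historical signature is
    closest to the features observed between two door-opening events."""
    def delta(sig):
        return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, sig)))
    return min(signatures, key=lambda seg: delta(signatures[seg]))
```

A real version would normalize each feature dimension before computing deltas (here raw duration dominates the distance), but even this crude match shows how skipped stops fall out naturally: a B-to-E run simply matches no single B-to-D signature and instead matches a longer composite one.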
This type of problem is quite amenable to action modeling using the Action Oriented Workflow paradigm and the Action Delta Assessment action routing algorithm: the deltas of change would be built up for each station-to-station signature history and over time would uniquely identify patterns. Using AOW would have the unique advantage of not requiring any formal modeling. Once the "station" Entity is defined, it would be given an identification attribute and sensory attributes to correlate with acceleration, minimum train-motion sound, and door-open signature, which would be refined over statistical time every time that station is sampled.
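The AOW internals are not spelled out here, so purely as an illustration of "refined over statistical time", the station Entity's sensory attributes could keep running means that each new sample nudges. A minimal sketch with hypothetical attribute names:

```python
class StationEntity:
    """Illustrative "station" Entity: an identification attribute plus three
    sensory attributes, each refined incrementally as the station is sampled."""

    def __init__(self, station_id):
        self.station_id = station_id
        self.samples = 0
        self.accel = 0.0         # mean acceleration magnitude into the stop
        self.motion_sound = 0.0  # minimum train-motion sound level
        self.door_sound = 0.0    # door-open acoustic signature energy

    def refine(self, accel, motion_sound, door_sound):
        # Incremental mean update: each new observation shifts the stored
        # signature toward the sampled values by 1/n of the delta.
        self.samples += 1
        k = 1.0 / self.samples
        self.accel += k * (accel - self.accel)
        self.motion_sound += k * (motion_sound - self.motion_sound)
        self.door_sound += k * (door_sound - self.door_sound)
```

Every route run calls `refine` once per detected stop, so the signatures sharpen with use and no formal up-front model of the station is ever required.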
As for the reason for such an app?
I was thinking it might be useful for a tourist to figure out their stops when on trains which don't have onboard stop reporting, which is unfortunately still quite a few of the trains in NYC where I live, and I am sure other cities have this issue as well. As a practical business, after getting the technology trained on a route and proven, the difficulty would be in that training process, which would require that some sample data from route runs be added to the training set. Another difficulty would be getting the user to self-report their starting stop (sometimes they don't know it). After a few stops the app may be able to guess its location, but people don't want a guess. It would be difficult to get the service right so that it would be useful, outside of the magic of training the smart device to know where it is under ground.

But it was just an example of the ways that multi-sensory data gathering and delta gathering can make problems that seemed impossible for computers just a few decades ago as trivial as they are for human agents. This is the revolution that is currently happening as engineers armed with light statistical learning approaches attack novel data sets using situation-aware devices like smart phones.
This is to illustrate a theory I proposed early in 2012 that smart phones, because of their ability to mirror human sensory input points, would at some point be the ideal test bed from which fully aware dynamic cognition could emerge...the rest is about correctly capturing, comparing, and determining the salience of the gathered data over time.