
Code Evolution: An AOW approach to building a smart app.

What follows is a chronicle of a pattern of thought, one idea among many I've had, for how I would use an autonomous learning process to solve in computer code a problem that would otherwise be extremely hard without the methods I am about to employ.

I am going to do it by introducing you to a problem I pondered yesterday while sitting on a Manhattan-bound 2 train. The problem?

How would I build an application that, given only the starting station and the direction of transit, can reliably indicate all the subsequent stops along the path of a given train line?

It is an idea I've often pondered and solved using the obvious answers. The first is to ensure that the subway has wifi access and enable each station to report its information via an application designed specifically to pick up the report and relay it to the user. The second, a bit more involved but still effective, is to sense the geographic location of the train from moment to moment using the location sensors on the smart device, cross-correlate against map data, and thus reliably indicate the station.

However, those solutions strike me as inelegant because they rely on data points that are superfluous from the perspective of a human being getting on a train. Assuming that a rider knows only the station stops along a route, as indicated by some route map, is it possible to infer subsequent stops in a reliable manner, assuming no stops are skipped during the trip?

The answer is yes, and it relies on the fundamental advantage of applying multi-sensory analysis to building learning heuristics for devices. In this particular case, let us imagine that we board the train at a particular stop, call it stop B. Our destination is a stop several stations down the line from B, call it stop L.

Let's assume that we board the train blindfolded and are thus unable to visually inspect our position along the path once the train departs from station B. This removes the real-time visual sampling that humans rely on, in order to more closely model the situational context of a 2013-era smartphone. Lacking a visual sense, can we still reliably predict stops along the route?

The answer is still yes and here is how:

1) Let's assume that we have a map of stops along the route in our head, so from B we know that stops D, E, F, H, and K follow before we arrive at stop L. Let's also assume that stops G, I, and J exist but are skipped by the particular train we've boarded.

2) As long as we can hear what the train is doing, we can determine when it is at a stop versus when it is actually moving. This requires only a small number of sound samples to determine with high confidence, but on its own it is insufficient to *guarantee* that the train is stopped, so we need another disambiguating factor.

3) We can guarantee the train is at a stop if, in addition to the sound pattern of being still, we soon after hear the sound pattern of the doors opening. Doors opening guarantees that some stop has been reached.

4) Smartphones all contain accelerometers; by watching the accelerometer readout as it correlates with the sounds of train motion and the sound of doors opening, we can confirm that the train is engaged in embarking/disembarking at a stop.

Now, of the sensor points indicated in 2 to 4 above, only 3 guarantees the train is stopped, but 2 and 4 allow refinement of stop behavior around the door-open indications, and over time this can provide highly accurate prediction of which stop the train is at over the entire route. The pattern of accelerations, sounds of motion, and sounds of door openings will be highly consistent, so much so that inter-stop heuristics will have very unique signatures (station-specific information would be encoded in those deep histories, making them even more unique).

This is what happens statistically with solutions that apply statistical learning to predict words from swipe patterns on a touch screen: the deep history of swipe patterns becomes highly consistent with particular words and thus predicts those words with high accuracy. This solution would do essentially the same thing, except it would use the deltas of difference, over the deep historical data, between the three sensor dimensions gathered above to create per-station signatures for a given route. Coupled with the known start-station information, this could pretty much guarantee proper stop estimation, and I predict it would be effective *even when random stops are skipped* after some initially defined start station.
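The fusion of cues in steps 2 to 4 can be sketched in a few lines. This is a minimal illustration, not a real implementation: the thresholds, the RMS sound windows, and the door-chime match score are all hypothetical inputs standing in for real audio and accelerometer processing.

```python
# Sketch of steps 2-4: fusing sound and accelerometer cues to decide,
# with high confidence, that the train is stopped at a station.
# All thresholds and feature inputs are illustrative assumptions.

def is_acoustically_still(sound_rms_windows, threshold=0.05):
    """Step 2: a short run of quiet audio windows suggests the train is still."""
    return all(rms < threshold for rms in sound_rms_windows)

def doors_opened(door_chime_score, threshold=0.8):
    """Step 3: a strong match against the door-opening sound signature is
    the only cue that *guarantees* a stop has been reached."""
    return door_chime_score > threshold

def is_inertially_still(accel_magnitudes, threshold=0.02):
    """Step 4: low accelerometer variance corroborates the audio cue."""
    mean = sum(accel_magnitudes) / len(accel_magnitudes)
    variance = sum((a - mean) ** 2 for a in accel_magnitudes) / len(accel_magnitudes)
    return variance < threshold

def at_station(sound_rms_windows, accel_magnitudes, door_chime_score):
    """Declare a stop only when the door cue fires while both stillness
    cues agree, per the disambiguation argument above."""
    return (is_acoustically_still(sound_rms_windows)
            and is_inertially_still(accel_magnitudes)
            and doors_opened(door_chime_score))

# Synthetic readings: quiet audio, flat accelerometer, strong door chime.
print(at_station([0.01, 0.02, 0.01], [0.98, 1.00, 0.99, 1.01], 0.93))  # True
```

The point of requiring all three cues is exactly the argument above: the door sound alone guarantees a stop, while the two stillness cues filter out false matches such as a chime-like sound heard mid-tunnel.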

This type of problem is quite amenable to action modeling using the Action Oriented Workflow paradigm and the Action Delta Assessment action routing algorithm: the deltas of change would be built up for each station-to-station signature history and would, over time, uniquely identify patterns. Using AOW would have the unique advantage of requiring no formal modeling. Once the "station" Entity is defined, it would be given an identification attribute and sensory attributes correlated with acceleration, minimum train-motion sound, and the door-open signature, each refined over statistical time every time that station is sampled.
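The "station" Entity idea can be sketched as follows. To be clear, this is my own hypothetical rendering, not the actual AOW/ADA implementation: the attribute names, the running-average refinement, and the squared-delta matching are all assumptions standing in for the real delta-assessment machinery.

```python
# Hypothetical sketch of the "station" Entity: each station accumulates a
# running-average sensory signature (acceleration, motion sound, door sound),
# and an unknown reading is routed to the closest known signature.

class StationEntity:
    def __init__(self, name):
        self.name = name        # identification attribute
        self.signature = None   # sensory attributes: (accel, motion_sound, door_sound)
        self.samples = 0

    def refine(self, reading):
        """Fold a new sensor reading into the running-average signature,
        so it sharpens "over statistical time" as described above."""
        self.samples += 1
        if self.signature is None:
            self.signature = tuple(reading)
        else:
            n = self.samples
            self.signature = tuple(s + (r - s) / n
                                   for s, r in zip(self.signature, reading))

def match_station(stations, reading):
    """Route a reading to the station whose signature delta is smallest."""
    def delta(sig):
        return sum((s - r) ** 2 for s, r in zip(sig, reading))
    return min(stations, key=lambda st: delta(st.signature))

# Train two stations on a few historical runs, then classify a new reading.
d = StationEntity("D"); e = StationEntity("E")
for r in [(0.9, 0.2, 0.8), (1.1, 0.3, 0.7)]:
    d.refine(r)
for r in [(0.4, 0.6, 0.5), (0.5, 0.7, 0.4)]:
    e.refine(r)
print(match_station([d, e], (1.0, 0.25, 0.75)).name)  # prints "D"
```

Because each station's signature only sharpens as more runs are sampled, this is also where the skipped-stop robustness would come from: a reading taken between B and the next actual stop should still land nearest the correct station's accumulated signature.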

As for the reason for such an app?

I was thinking it might be useful for a tourist trying to figure out their stops on trains that don't have onboard stop reporting, which unfortunately still describes quite a few of the trains in NYC, where I live, and I am sure other cities have this issue.

As a practical business, after training and proving the technology on a route, the difficulty would lie in that training process, which would require adding sample data from route runs to the training set. Another difficulty would be getting the user to self-report their starting stop (sometimes they don't know it). After a few stops the app may be able to guess its location, but people don't want a guess. It would be difficult to get the service right so that it would be useful, beyond the magic of training a smart device to know where it is underground.

But it was just an example of how multi-sensory data gathering and delta gathering can make problems that seemed impossible for computers just a few decades ago as trivial as they are for human agents. This is the revolution currently happening as engineers armed with lightweight statistical learning approaches attack novel data sets using situation-aware devices like smartphones.

This illustrates a theory I proposed early in 2012: that smartphones would be the ideal test bed for emerging fully aware dynamic cognition at some point, because of their ability to mirror human sensory input points. The rest is about correctly capturing, comparing, and determining the salience of gathered data over time.

