
Marketplace disintermediation official symbol

My rendition of a logo to symbolize generalized systems for marketplace disintermediation: the execution of actions between agents engaged as buyer and seller in that marketplace.

On the left: ingress and egress of action requests and commitment signals from agents requesting that actions be performed. These are those who need (say, the person hailing a ride via Uber).

On the right: ingress and egress of action requests and commitment signals from those who can satisfy those agents' requests. These are the providers (say, the drivers cruising around looking for fares).

The center symbolically indicates the logic leveraged to match agents on either side (blue half circle meets red half circle) in order to enable a transaction.

The AgilEntity framework is a system that does this for any type of marketplace exchange modeled into it. It leverages the Action Oriented Workflow paradigm's statistical learning algorithms to perform efficient, real-time routing to and from the "best available" agents, however "best" is defined in the model of the type being exchanged. That makes it, by design, the ONLY general solution available: an "Uber for everything", which was the goal when I started work on the framework.
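To make the matching idea concrete, here is a minimal, purely illustrative sketch of two-sided routing with a pluggable definition of "best". The names (`Request`, `Provider`, `match`, `score`) are hypothetical stand-ins I chose for this example; they are not the actual AgilEntity or AOW API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Request:
    agent_id: str
    kind: str            # type of exchange requested, e.g. "ride"

@dataclass
class Provider:
    agent_id: str
    kind: str            # type of exchange the provider can satisfy
    load: int = 0        # current number of active commitments

def match(request: Request,
          providers: List[Provider],
          score: Callable[[Request, Provider], float]) -> Optional[Provider]:
    """Route a request to the 'best available' provider.

    'Best' is deliberately pluggable: the score function stands in for
    whatever the marketplace model defines (distance, rating, price...).
    """
    candidates = [p for p in providers if p.kind == request.kind]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: score(request, p))
    best.load += 1       # record the commitment to this provider
    return best

# Example: for a ride request, define "best" as least-loaded.
providers = [Provider("driver_a", "ride", load=2),
             Provider("driver_b", "ride", load=0)]
chosen = match(Request("rider_1", "ride"), providers,
               score=lambda r, p: -p.load)
# chosen is driver_b, the idle driver
```

The point of the sketch is the shape of the problem, not the scoring: need-side and provider-side signals flow in, and the center applies a model-specific ranking to pair them for a transaction.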

Uber does this for cab rides only, TaskRabbit for chores and errands, Elance for small consulting and development jobs, etc.

Any marketplace engages the same heuristic. I've been meaning to create a graphic to encapsulate the concept for general use, so here it is. If you want to represent the concept of marketplace disintermediation (or the "share economy"), feel free to use this image.

I added a few .png versions of the image at various sizes to a Google Drive folder; feel free to grab them, use them in slides and presentations, and share the folder. Marketplace disintermediation symbol cache on Google Drive:

