
Big Business: Where Innovation Goes to Starve

A recent Medium post contained the following quote:

"I promise you, my reaction to the project’s cancelation wasn’t “Too bad, let me find my next longshot!” It was more like grief that a year of my life had been wasted, guilt that I’d wasted the efforts of my team, fear of reputation damage, and determination to work on something next time that would actually matter.

As individuals, we have no portfolio strategy — so those 10% odds are no longer palatable. When we fail, most rational people respond by trying to avoid dumb ideas and pick smart bets with clear impact the next time. People who happen to have a hit in their first few tries are even more vulnerable to the belief that they have to succeed every time (and take it harder when subsequent failures inevitably occur.) And that’s it — the dead-end for innovation.

I’ve met a few people who don’t seem to have this reaction (serial entrepreneurs every one of them) and I can’t tell you what makes them react differently or how to learn to be that way. But I do know there aren’t enough of them out there to hire your team exclusively from their ranks."

Another factor that ties into this feeling of personal failure, and the desire to avoid risk and failure of that sort ever again in the enterprise or the startup, is the social expression of risk and failure that *each* individual projects into the organization.

This projection is deadly to an environment where innovation could flourish, *especially* when people who still imagine success in risky bets are hired. Past failure not only impacts the individuals in the organization; it leaves a sticky social residue that retards innovation on the part of anyone new who comes in with genuinely good ideas. They face all manner of systemic pushback for their grand ambitions. After all, everyone there is licking some wound; they've all battened down the hatches and are not going to let anyone sink their ship...again.

This is a major reason why I felt, a dozen years ago, that the social glue we use to orchestrate the building of products and services (our org chart levels of control) is a huge drain on the very process it exists to serve. These forces retard anything that sticks an individual's neck out, or a team's neck out, or a division's neck out, or, by multiplication of effect, the company's neck out.

I reasoned that there should be a way to minimize the impact of risk-averse agents in the organization, letting innovative ideas bubble up by merit despite their risk and be subject to experimentation that can help them take root. However, real businesses that don't use the system I imagined (what would eventually become the Action Oriented Workflow paradigm) are still stuck with risk-averse employees and environments that choke out the innovative new hires.

So what happens?

People with innovative ideas go to work for large companies; they get the aha moment and share it with the status quo in the organization, who all look at the dreamer like they are crazy, because of their own past failures trying disruptive things and their fear of the social ramifications the organization could bring down if they fail again. The dreamer either keeps trying to rage against the machine and gets excommunicated, admonished, or fired, while the status quo continues on its safe route (and thus the company becomes vulnerable to disruptive startups doing exactly what the innovative employee was suggesting).

Meanwhile the employee becomes increasingly despondent and leaves the company, either for greener (read: more innovation-friendly) pastures or to start their own company doing what they imagined.

It's not that large companies don't know how to do innovation; large companies forget how on purpose, actively smothering, through their social hierarchies of control, any new efforts to be innovative!

This latter story has strong resonance with me, as it closely matches what I did after I was laid off. I'd suggested a radical approach to designing the CMS that would make it impervious to the amazing amounts of instability we were seeing at the time. I had already proven the concept by redesigning the entire ad management application using a subset of the approach I was suggesting, and it was working perfectly. I brought my idea as a proposal to the CTO and was told that there was no desire to monetize the platform at that time.

Fine, I figured at that moment that I'd build the framework I imagined myself. It wasn't until a year later that I got started, during the week I couldn't go to work after the 9/11 attack, on Monday, September 17, 2001. I started working on an important collection class in the AgilEntity framework, and by doing so began my discovery and exploration of the action landscape, creating the technological base for a future emancipated workforce.

Article originally posted at LinkedIn

