30 March, 2009

the attributes of web 3.0...

As the US economy continues to suffer the doldrums of stagnant investment in many industries, belt-tightening budgets in many of the largest cities and continuous rounds of layoffs at some of the oldest corporations, it is little comfort to those suffering through economic problems that what is happening now has happened before. True, the severity of the downturn might be different, but the common theme of the times is people and businesses being forced to do more with less. Like environmental shocks to an ecosystem, stresses to the economic system lead people to hunker down to last out the storm, but it is instructive to realize that all that idle time in the shelter affords people the ability to solve previous or existing problems. Likewise, economic downturns enable enterprising individuals and corporations to make bold decisions with regard to marketing, sales or product focus that can lead to incredible gains as the economic malaise runs its course. Examples of this variety can be found in the opportunities squeezed from the dry rock of the Great Depression by the US manufacturing industry. Again, in the mid-1970s, through similarly bleak economic times marked by stagflation, inflated oil prices and an anemic stock market, the PC revolution silently ran its course. Intel, Microsoft and Sun were all babies of those difficult times and are all still with us today, having contributed in major ways to the flowering of our economy over the last 30 years. With the emergence of the World Wide Web in the early 90's, the productivity revolution created by the stand-alone personal computer, or by PCs networked to mainframes in businesses, gave businesses and individuals a new ability to squeeze potential from the landscape of options provided by a world of TCP/IP-connected machines communicating data between users over HTTP and HTML in a graphical program called a browser.

web 1.0

Web 1.0 has been used to define the first generation of browsers and web technologies. The Mosaic and Netscape browsers dominated much of this era, followed later by Microsoft's Internet Explorer. A key attribute of web 1.0 was that pages were straight HTML for the most part; JavaScript wasn't invented until Netscape saw a need to add more dynamic elements to pages, such as change-on-hover effects. I remember designing sites in 1995 that used these simple "dynamic" elements, but the sites were not really dynamic. Most of them were not data driven from a database backend, and the ones that were were built on complex content management platforms or engines like Vignette StoryServer or Dynamo. Microsoft added much confusion to the mix with its introduction of ActiveX in 1995 and incorporation of it into Internet Explorer in 1996. This provided a hodgepodge of technologies for building more data-driven and dynamic sites, but there was no cohesion. It was almost guaranteed that a dynamic site written for one browser would break in any other browser. Though Netscape invented JavaScript and made it available for implementation by other browser makers, Microsoft chose to develop its own version of a dynamic front end scripting language and called it "JScript". The technology potholes that designers needed to navigate were legion, and the lack of technological interoperability hampered the spread of cross-site paradigms for user interface design. As the new millennium approached, however, designers began to standardize on a set of "core" technologies that provided the maximum amount of usefulness across all the available web browsers. It worked by mating a back end data-driven core (Java, ActiveX/VB, Vignette, Dynamo, etc.) to front end user interface technologies using HTML, JavaScript and the newly emerging CSS, or Cascading Style Sheets.
Unfortunately the clouds began to gather, and web 1.0 was about to be challenged by the bursting of what was called the ".com" bubble. By 2001, the unsustainable ramping of web-related stock prices gave way to a correction, and that correction led to recession in the US and global markets as overvalued tech companies shed workers. Web 1.0 was in sunset, but the web wasn't dead yet. Enter...

web 2.0

In early 2002, the continued shedding of jobs in the tech industry made it a hard time for web properties. Many that had been flush with cash from exuberant venture capital valuations were forced to cut expenses as those valuations were brought down to Earth by rapidly shrinking stock prices. Free coffee and snack areas were silently closed out, expense-paid limousine trips home were curtailed, and it was no longer cool to be seen playing foosball in the game room during daylight working hours. At the same time, the technology continued to consolidate around a fixed way of doing things online that worked more or less across web browsers. JavaScript, the dynamic language that Netscape had invented half a decade earlier, became powerful enough to do things in the browser that many back end languages would otherwise be tasked to do; creating graphs and charts, even games, was made possible by it. Interestingly, an innovation added to JavaScript by Microsoft would turn out to be the linchpin of the coming web 2.0 age: a simple object, called the XMLHttpRequest object, that allowed JavaScript (originally Microsoft's JScript) to separate the state of a web page from resource calls associated with the content on that page. This enabled pages to make calls to resources without requiring the user viewing the browser to perform any action. It added the last needed piece of dynamism to a web page that would make it behave (when properly designed) much like the interface of a stand-alone application installed on a user's PC. The XMLHttpRequest object was first introduced in 2000, and versions of it were quickly adopted by the other major browser makers at the time (Mozilla, Opera, Safari); by 2002, hackers had begun using the object in clever ways to allow page elements to be dynamically updated with back end data.
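A minimal sketch of that classic pattern might look like the following (the URL, element name and helper function are my own illustration, not any particular site's code). The browser-only XMLHttpRequest object fetches data in the background, and a small pure function merges the response into the page's state:

```javascript
// Pure step: merge a fetched response into the page model without touching
// the rest of the page's state. This is what made partial updates possible.
function applyUpdate(pageModel, responseText) {
  return Object.assign({}, pageModel, { newsPanel: responseText });
}

// Hypothetical example of the asynchronous call itself (browser-only API).
function refreshNewsPanel(pageModel, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/feeds/latest-news.xml", true); // true = asynchronous: the page stays responsive
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(applyUpdate(pageModel, xhr.responseText));
    }
  };
  xhr.send(null); // no user action required; the update arrives when ready
}
```

The key design point is the separation: the request fires independently of any user click, and only the targeted fragment of the page changes when the response arrives.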
In the late 90's, many rapidly growing sites supporting large numbers of pages utilized XML-based repositories that made storing content and then repurposing that content very easy.

XML-based feeds were part of many of the first web 1.0 sites; I built many such feeds as a content management producer at TheStreet.com in 2000 and 2001. The emergence of the XMLHttpRequest object allowed the convergence of separation of content from presentation, dynamic presentation and dynamic update of presentation that let the next generation of web applications become the first web 2.0 applications. The acronym AJAX, Asynchronous JavaScript and XML, was coined to symbolize the pieces of the new standards-based dynamic site design paradigm. Simultaneously, the power of XML as a content transport agnostic of presentation was generalized so that XML could represent many other elements in connected systems that previously required direct object delivery and reception via platform-specific object broker technologies like CORBA and DCOM. The WS (web services) model was born, first by using XML to simulate remote procedure calls on server-based objects using XML instructions from other machines. About the same time, another XML-based grammar called SOAP was created; the Simple Object Access Protocol allowed two-way communication, enabling dynamic conversations between machines, possibly on completely different platforms, so long as they both could "speak" (write) and listen to (read) the SOAP messages. Additional grammars were designed to enable security, to read and write service directories (UDDI) and to perform other functions between computers previously possible only with object broker technologies. So web 2.0 involved two revolutions: the first was the emergence of asynchronous access to data sources using the XMLHttpRequest object; the second was enabling computer systems on any platform to agnostically communicate data between systems. The design and implementation of systems that exposed application components in the form of RPC/SOAP or WS-* XML protocols became known as Service Oriented Architecture.
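To make the platform-agnostic idea concrete, here is a schematic sketch of building a SOAP request as plain text (the operation and parameter names are hypothetical); because the result is just XML, any platform that can read and write text can take part in the conversation:

```javascript
// Build a minimal SOAP 1.1 envelope around an operation call.
// Real toolkits add namespaces for the operation and encode values safely;
// this sketch only shows the envelope/body shape described above.
function buildSoapRequest(operation, params) {
  var body = Object.keys(params)
    .map(function (k) { return "<" + k + ">" + params[k] + "</" + k + ">"; })
    .join("");
  return (
    '<?xml version="1.0"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    "<soap:Body><" + operation + ">" + body + "</" + operation + ">" +
    "</soap:Body></soap:Envelope>"
  );
}
```

A CORBA or DCOM call bound the caller to a platform-specific object broker; a message like this one binds the caller only to an agreed text format.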

As 2005 approached, many businesses were investigating ways to switch their applications over to an open WS-* model that could be used to allow those applications to expose and consume services. In essence, the XML revolution allowed applications running inside enterprises the ability to communicate and receive data using protocols not unlike those used by the web itself. In fact, HTML is, like XML, derived from the Standard Generalized Markup Language (SGML). Web 2.0 saw its flowering in the form of consumer sites that utilized it to the fullest: the emergence of YouTube, the continued growth of Google, the transformations of Yahoo, eBay and Amazon, and the emergence of countless new 'startups' that employed very simple ideas implemented with unique dynamic elements enabled either by AJAX or by the now mature Flash technology from Adobe, which had finally reached sufficient performance on the newer, more powerful computers to enable creation of rich internet applications. While the front end and application technologies changed, so did the preference for the back end development platforms used to produce web applications. The weight and expense of full content management systems like Vignette could not be supported on the tight budgets of the couple of college students that wanted to build a web application. For them, it was much easier to cobble together a site using the new server-side scripting language platforms that emerged to allow easy back end development without the need for a heavy application server. Python, PHP and Ruby were the languages of choice for this set, and soon these languages had "platforms" designed around them that let them be used much like the old content management applications, but without the expense. Plone, Drupal and Ruby on Rails became the foundation upon which many simple web 2.0 web applications were put together.
Ruby on Rails in particular brought together the dynamic page creation (on the data side with the database and on the interface side with AJAX) and the page management elements necessary to create fast sites. Unfortunately, these new platforms lacked the stability and scalability that would be necessary for them to serve as the base for massive growth. The web 2.0 era has been dotted with spectacular successes that came after much difficulty on the back end (Twitter, Digg) as the hodgepodge of technologies put together to define these sites was optimized to scale to tens of thousands, then millions, of simultaneous users.

However, despite the success of many of these sites today, there is much room for improvement. The scalability that many of them currently have comes at the cost of large amounts of server resources and massive arrays of disks, not to mention the expense of running those servers, hiring managers for those servers and, in general, managing all the employees of the rapidly growing companies they work for. A truly scalable, secure and manageable web application framework was never found as the web iterated from 1.0 to 2.0. The next generation of web applications will take advantage of a deep synthesis of efficiency in back end code design, from the ground up, to enable extremely efficient and dynamic web applications and sites, using any of the desired technologies of the previous generation to build even more resource intensive sites but without the massive expense of hardware resources, managing users and employees. Enter the next step...

web 3.0

The economic slowdown of 2008 began with troubling signs in the housing market that soon snowballed into the collapse of the institutions behind home mortgages (Fannie Mae, Freddie Mac) and several large and old banks (Lehman Brothers, Bear Stearns, Washington Mutual) shutting their doors overnight. The cascade of failed banks led to a tightening in the credit markets, which, as in the 70's and the early 2000's, made it extremely difficult for entrepreneurs to acquire venture capital. I consider web 3.0 a defining moment because the type of software that I believe will characterize it is what I've been working on for the last 7 years. In order to realize the extreme efficiencies of low resource costs under massive scalability, the design time necessary to engineer a generalized solution had to be greater. When I was laid off from TheStreet.com in 2001, I decided that the best way to pursue my dream of running my own business was to start it then. I had a brain full of experience from working in a highly challenging environment, where I engineered solutions using the emerging XML, content management and dynamic UI technologies. Before the end of 2001, I began the design of a generalized Entity management system. This system would be completely different from previous content management systems in that it would not specialize in any particular type of content. Content management up to that time had been dominated by management of only certain types of content important for web applications or sites: stories and articles, images, video and audio files were the ones that made sense within the context of web applications. By 2005, management systems had broadened their scope to include any file type, but they still failed to manage the most critical elements of a running business, elements that had never been properly encapsulated into a combined content management system: business processes.
A truly generalized management system would bring Users, Servers and the other structural elements of the very application under the management paradigm. If articles, stories and images can be created, updated or deleted, so should servers, users and any other conceivable entity of the business, given objects in the system. This is what an Entity Management System would do. The task is a difficult one, as it is an attempt to solve a multidimensional problem landscape while remaining easily managed both as new Entity types are added to the system and as new instances of all those types proliferate on it.

Around 2004, when the bulk of my Entity Management API had been built, I happened across an article on the J2EE Java technology platform. I looked through a few documents and noticed immediately the use of "entity" to describe a generalized management type in the J2EE framework. However, continued reading quickly led to headaches; the reason was that J2EE was overcomplex. It tried to solve the entity management problem, but not holistically. It had many different possible persistence APIs that could be used, and it had a strict class inheritance scheme that placed many requirements on client programmers that didn't seem natural during class design. After reading a bit on J2EE, I was happy that I had designed my API using core Java from scratch. For a generalized web platform, the most critical components were deemed as follows:

  • Compositional XML-based type (Entity) framework.
  • Scoped security model based on hierarchical granular permissions.
  • Twin hierarchy for the Entity base class to optimize object instance sizes.
  • Custom persistence API for optimal mediation from the datastore.
  • Action oriented Workflow for all Entities.
  • DB vendor agnostic database API.
  • XML-based bootstrap system.
  • Compiled software distribution for easy installation.
  • Extensible during run time to add new Entities and thus new functionality.
  • Dynamic request load distribution across participating cluster nodes.
  • Dynamic request load distribution between branches of nodes for out-of-the-box geographic scalability.
  • "Manages itself" requirement: all templates and scripts that compose the framework are managed by the framework.
  • Secure web 2.0 Collaboration API to enable users to communicate in real time concerning actions over all Entities.
  • Content delivery Protocol (ssh, sftp, file, etc.) and File type (.doc, .pdf, .html, etc.) API.

A little bit about these requirements and why they were deemed critical to a web 3.0 web application platform. The use of XML as a compositional core is what allows content types based on the standard formats managed in traditional CMS systems to be easily handled by the new platform. XML-based representations for articles, stories or even advertisements can be related to one another in the datastore, and the XML of the composed object generated using a global "toXml()" method. Transformation of the composed XML can then follow quickly, using XSL transformation templates added for that purpose. Note there is no further delineation of what is being composed; the point is that any compositional relationship between business entities can be extracted as XML, whether composed dynamically in the framework or as retrieved from the datastore. This allows the final rendering of content types from the composed XML to any desired file type using any desired delivery protocol.

Scoped permissions allow any action against any Entity to generate a unique permission id on the system. Each scope indicates the hierarchy of application for the permission: there are permissions that apply to all Entities, to all instances of a single Entity, or to a single instance of a single Entity. Instance permissions are generated on demand; thus only permissions for objects actually in use are available to disseminate, ensuring minimal granted rights across the entire entity landscape over the life of a cluster.
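The three scopes can be sketched as id-generating and checking functions (the id format and names below are my own illustration, not the framework's actual scheme):

```javascript
// Generate a permission id at one of three scopes:
// all Entities, all instances of one Entity type, or one instance.
function permissionId(action, entityType, instanceId) {
  if (entityType === undefined) return "*." + action;             // global scope
  if (instanceId === undefined) return entityType + "." + action; // type scope
  return entityType + "#" + instanceId + "." + action;            // instance scope, generated on demand
}

// A request at instance scope is satisfied by a grant at any enclosing scope.
function isGranted(granted, action, entityType, instanceId) {
  return granted.indexOf("*." + action) !== -1 ||
         granted.indexOf(entityType + "." + action) !== -1 ||
         granted.indexOf(entityType + "#" + instanceId + "." + action) !== -1;
}
```

Since instance-level ids only exist for objects in use, the set of grantable permissions stays small relative to the full entity landscape.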

The twin hierarchy for the base classes allows Entities to be tailored to their use on the system. Entities that have compositional relationships with other entities get additional machinery from the base class for iterating through the "payload" of child objects and performing the iteration, skip and flow functions that such relationships usually employ. Entities without compositional relationships tend to be structural or system entities; for them, the additional base class machinery is omitted, allowing these objects to be optimally managed in the system.
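A hypothetical sketch of the twin hierarchy (names are my own, for illustration): compositional entities carry payload-iteration machinery, while structural entities omit it and stay smaller in memory:

```javascript
// Structural/system entity: no payload machinery, minimal footprint.
function StructuralEntity(name) {
  this.name = name;
}

// Compositional entity: carries a payload of children plus iteration machinery.
function CompositionalEntity(name) {
  this.name = name;
  this.payload = [];
}

// Iterate the payload; returning false from the callback stops early,
// standing in for the skip/flow control such relationships employ.
CompositionalEntity.prototype.eachChild = function (fn) {
  for (var i = 0; i < this.payload.length; i++) {
    if (fn(this.payload[i], i) === false) break;
  }
};
```

The point of the split is instance size: thousands of structural entities never pay for iteration machinery they would never use.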

The custom persistence API maps database access objects to the twin hierarchy of base classes mentioned before. The abstract base class for the persistence API ensures necessary methods are implemented while providing common method implementations to inheriting classes, and it ensures that the access objects are optimal in size for each entity type. Optimally sized entities allow more efficient use of application running memory. This translates to faster loading objects, because they are sized for their functions, and more objects in a smaller total memory space translates to doing more with that space. The more objects that fit in memory, the less memory is needed to satisfy a given amount of application load; this translates directly to the cost of memory and the number of servers, and that reduces operating costs for the business using the software.

Action oriented workflow is a User/System delegation paradigm that creates a loose coupling between the actions associated with Entity objects and the Users who may be able to perform them. By allowing the creation of workflow stages that collect Users with different permissions, delegated for a given action execution on a requested entity type by another User, commitment of an action can be made to converge on a User that has the permission for the action requested: action-centric, rather than permission-centric, resolution of changes to Entity objects. This eliminates the need to create access control lists by allowing businesses to map their real world processes very easily into the system itself. The loose coupling enables a massive business process landscape to be covered by the paradigm. Also, actions can be delegated to the system to perform at any desired time, either at action origination or by the action committer.
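The convergence idea can be sketched as a small resolution function (the stage and permission shapes are hypothetical, my own illustration of the paradigm rather than the framework's code): a requested action walks the workflow stages until it reaches a User holding the permission, instead of consulting a per-object access control list.

```javascript
// Walk workflow stages in order; the action converges on the first User
// in any stage who holds the permission to commit it.
function resolveCommitter(stages, action) {
  for (var i = 0; i < stages.length; i++) {
    var users = stages[i].users;
    for (var j = 0; j < users.length; j++) {
      if (users[j].permissions.indexOf(action) !== -1) {
        return users[j].name;
      }
    }
  }
  return null; // no stage can commit: the request is denied or escalated
}
```

Because the mapping lives in the stage definitions rather than in per-object lists, a business can reshape its processes by rearranging stages, not by re-granting permissions object by object.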

A DB vendor agnostic API allows the framework to be easily installed against any of many popular RDBMS vendors and, if desired, extended to use custom vendors, so long as those vendors are ANSI SQL compliant, which covers the bulk of data management systems in use at the enterprise.

The XML-based bootstrap allows the detailed configuration of the entire cluster, and of individual nodes and branches in the cluster, to be managed using XML files. The files can be edited directly, but the same attributes are also manageable through a permission scoped GUI by Users with the requisite system permissions. Changes to the system can be made at per-node, per-branch or total cluster scope, depending on the permissions of the committing User and what is desired.
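A per-node bootstrap file might look roughly like this (the element names, values and host are entirely hypothetical; the actual schema is the framework's own):

```xml
<!-- Hypothetical sketch of a per-node bootstrap file. -->
<node id="node-3" branch="nyc-east">
  <!-- past these utilization thresholds, requests are redirected or denied -->
  <memory-redirect-threshold>0.85</memory-redirect-threshold>
  <db-connection-redirect-threshold>0.90</db-connection-redirect-threshold>
  <datastore vendor="postgresql" host="db1.example.internal"/>
</node>
```

The same attributes would be editable through the permission scoped GUI, so node, branch and cluster scope changes all flow through the one configuration model.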

Run time extensibility allows new Entity classes, with their associated DB access classes and corresponding RDBMS tables, to be added to the system via the GUI or programmatically using a system XML script. Detailed instructions make the process of adding the new class packages, database tables and necessary management templates efficient and fast. A cluster of nodes can be extended in this way without requiring much beyond page refreshes once the relevant classes, tables and templates are in place.

Dynamic request load distribution allows each node to determine whether it should handle an incoming request for authentication to the system, based on node specific criteria indicated in the system XML for each node in a cluster. The system can be adjusted so that requests beyond a certain percentage of utilized system memory or database connections are redirected to other nodes in the cluster or, failing that, denied. Since every node participates in this simple action, the entire cluster behaves in such a way as to push requests toward the least loaded nodes over time. This allows the most efficient handling of incoming load spikes by those machines in the cluster best able to handle them. The system can send warning and redirection events to designated Users, giving User administrators constant awareness of the load on the live system. (Note: because of the permission system, and the fact that the system manages itself, system administrators are Users just like content editors or writers; only their permissions distinguish what they can see or do on the system.)
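The per-node decision can be sketched as a three-way choice, handle, redirect or deny, with thresholds of the kind that would come from the node's bootstrap XML (the field names and values here are my own illustration):

```javascript
// Each node decides locally: handle the request, redirect it to the
// least-loaded free peer, or deny it when no peer can take it.
function routeRequest(node, peers) {
  if (node.memoryUse < node.redirectThreshold) return { action: "handle" };
  var candidates = peers.filter(function (p) {
    return p.memoryUse < p.redirectThreshold;
  });
  if (candidates.length === 0) return { action: "deny" };
  candidates.sort(function (a, b) { return a.memoryUse - b.memoryUse; });
  return { action: "redirect", to: candidates[0].name };
}
```

No node needs a global view; because every node redirects away from its own saturation point, load drifts toward the least loaded machines cluster-wide.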

Geographic scale dynamic load distribution allows nodes to pass requests out to nodes located in other data centers across a WAN. The algorithm for load distribution biases redirections toward nodes in the same branch as the redirecting node, but if all local nodes are redirecting, a second request will be routed out to a node in another branch. This allows spikes in load that are geographically local to be diffused to other geographies, where requests that would otherwise be denied can be handled at the cost of a slight penalty in latency. This is what allows clusters of the framework to scale smoothly even when the nodes are geographically dispersed.
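The branch bias can be sketched as a target-selection rule (again, the shapes and names are my own hypothetical illustration): same-branch peers are tried first, and only a saturated local branch spills a request across the WAN.

```javascript
// Prefer free peers in the redirecting node's own branch; fall back to any
// free peer across the WAN only when the local branch is saturated.
function pickRedirectTarget(node, peers) {
  var free = peers.filter(function (p) { return p.memoryUse < p.redirectThreshold; });
  var local = free.filter(function (p) { return p.branch === node.branch; });
  var pool = local.length > 0 ? local : free; // the WAN hop costs latency, so local wins
  if (pool.length === 0) return null;          // nowhere to send: deny
  pool.sort(function (a, b) { return a.memoryUse - b.memoryUse; });
  return pool[0].name;
}
```

A geographically local spike thus stays in its own data center until that branch is full, and only then pays the latency penalty of another geography instead of denying the request.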

The "manages itself" requirement satisfies the generalized Entity management goal of the design. The system manages the Users of the system as Entities; it manages the Permissions granted to Users as Entities; it manages the UI templates that make the system functionality available as Entities; and it manages the system configuration XML and XSL files, and the JavaScript and CSS scripts for UI elements, all as Script entities. This allows all of these to take advantage of the aforementioned permissions, load distribution and action oriented management without any additional code. New Entities are automatically manageable by the system once added.

Real time collaboration in the form of instant messaging, group chat, outbound email notifications for events or collaboration requests, and collaboration feeds for User contact groups are all part of the system. Users can restrict the scope of their collaboration to invited Users on the system, or they can invite anonymous or IP-specific internet guests. This enables Users to communicate with clients or partners or interact with customers. Again, the core idea of generality was built into the collaboration API, allowing it to be maximally useful to any applications added later to the system. It is important to note that once the design of the core framework was complete, adding functionality such as the content management API and the collaboration API proceeded as if the framework were being extended, following the design paradigm that was the goal of the project in the first place.

There is a GUI managed API for setting up delivery objects for XML-based generated content. New Protocols and FileTypes can be added by extending the corresponding API base classes, importing them into the system and applying them to existing Entity XML content set up to use the new protocols or file types. Content can be rendered to PDF and sent via ssh to a remote server, or the same content can be rendered to a doc file and sent via ftp to another location. There is no limit on the combinations, as they are orthogonal in application.

So web 3.0 will be highlighted by frameworks of the kind I've designed. The applications built on web 3.0 platforms will be geographically scalable, implicitly load balanced, run time extensible and secure systems for building web applications and managing them within the context of existing or desired business workflows. Separate software for managing users (human resources) independent of the work they are tasked to perform is not needed; the tasks performed by a user, when those tasks are performed and who collaborated on the changes are all implicitly managed by the system, allowing managers to determine User efficiency by looking at reports of when actions are submitted and then committed. A single application can be managed by a handful of Users or even one User; delegation of actions through workflows can give this User sole control over performance of critical actions that other Users request. Agents can be delegated to perform actions from any web accessible location, enabling business to occur 24/7 and without need for physical offices. This allows massive savings in time and money compared to setting up such processes under the previous web 2.0 model. Implicit load balancing and security allow business application designers to focus on the business logic and not worry about scalability, which is a foregone conclusion of the framework; this enables them to engineer better applications that allow the business to be more competitive or dominant in its sector while spending less money and hiring fewer people to do it. The parallel move to "cloud" based resources for file storage, e-commerce and even databases will allow platforms designed like my framework to become extremely flexible and cost efficient while allowing massive numbers of Users and business functions to be efficiently managed by small numbers of Users.
Thus the applications built on web 3.0 technologies like my framework will be some of the most efficient software ever designed, allowing continued productivity gains to the business community as they come online.

24 March, 2009

Automated calls ....

I've noticed in the last year or so an increase in the number of calls I receive where, instead of being greeted by a living agent, I am greeted by an automated one. Usually these calls start with an annoying 2 second gap where I say "hello" several times before the machine begins to play its recorded spiel. Today, I received another of many calls I've taken from a company that claims (through the robotic voice) to want to provide me with a better rate on my auto insurance. The only problem is that I don't have a car, and haven't had one since 2002. The automated message is interesting, as it goes something like this:

"If you wish to speak to an agent about our package please press 1. If you do not wish to take advantage of this offer and wish to be removed from our call list please press 2."

Unfortunately, the company in question is deceptive in both options. In the several times I've received this call (often at the most inopportune times) I've pressed both options. At first I pressed option 2, thinking that I would not hear from the robot again... I was wrong. I did it 2 more times before realizing that option 2 is false; they are using a hard sell tactic to force you to speak to their agents. After being annoyed by the message yet another time, I decided to press 1 just to hear what the agent would say. During this call, I must admit to having been quite upset, and I am sure the agent heard the frustration in my voice when I told him "take me off this damned list." His response was immediate: "I'll go ahead and do that for you, good bye."

This worried me, as it was too fast an answer. Sure enough, a few days later the robot call was back. After composing myself, I selected option 2 again, thinking maybe it didn't "take" the first few times (irrationally holding out hope that it would work)... A few weeks passed and I thought I was free of the scourge of the robot, but today that joy was smashed by the call yet again. This time, I chose option 1, again composed myself and readied for the agent.

Agent: "Good morning , my name is Tasha , How are you doing today?"

Me: "Not very good Tasha."

Agent: "Click!"

Now, it may be the case that she's received many such introductions, but there was no reason for her to assume I was talking about the automated call I'd received; it seems she did, likely because of the many fire breathing calls she must take every day thanks to the slimy tactics of her company. However, that is no excuse to hang up on a potential "customer" without saying a word. Her behavior is a reflection of the slimy tactics of the company she works for, and it doesn't surprise me that she works there. Next time I plan on answering with absolute joy in my heart, to see what happens if I inquire about their plan. I'll tell them that I will be getting a car in the future but want information on their plans, just to see what goes on.

stay tuned.

Update April 29, 2009

I received another call about 2 weeks ago and this time selected the option to speak to the representative again. I put on a joyful spirit and, when connected with the representative, expressed interest. I told them that I didn't have a car at the moment, and the response was:

"You don't have a car?"


So he sure will get the customer service medal of the year. It seems clear that the company is not interested in even making the effort to treat anyone they can't extract money from as a human being.

This very day, a local radio station had a story about these calls and the fact that Verizon (my phone company, thankfully) has fined and removed the companies from its system, finding the calls abusive. I sure hope I never have to answer another call from that or any similar service!

23 March, 2009

An open letter to the third world.

The current economic crisis provides a sobering reality check to the western developed countries on the viability of their current economic models. The continued rise of the standard of living in the west, gained on the backs of exploiting cheap labor and resources in other parts of the world, has run its course. Over the last 60 years, western countries have been forced to relinquish colonial control over the lands and economies of their former possessions and have instead turned to capitalism to exact a measure of control over those former states. The establishment of the World Bank and the IMF has allowed western nations to establish monetary relationships, often very favorable ones, in the form of loans, and to enable exchanges of goods, resources and services between western countries and the still resource rich countries of the third world. At the same time, active or passive attempts to maintain control of these areas, by securing favorable governments through political and economic pressure or by outright funding of opposition groups within foreign countries, have allowed the west to maintain a level of control over these regions for the last 40 years.

However, things have changed. Today the former colonial countries of South East Asia have elevated their infrastructures and manufacturing capabilities to supplant and usurp the former prowess in these areas that existed in western countries. The capitalist system that was the engine of the flowering western economies has served to build the economies of their former possessions as well. In so doing, the difference in standard of living between the peoples of the West and the third world has allowed the latter to stand out as manufacturing capitals, able to produce the much desired accoutrements of the western standard of living at much lower production cost than could be had if they were produced in the West. In this difference lies the slow bleed of manufacturing prowess from the West to the newer nations that has attended the last 45 years.

The link between standard of living and manufacturing prowess is an old one. The Greeks exploited innovations in military tactics and warfare to gain control over their neighbors. In so doing they extracted the ability to gain very favorable terms (in the form of vassals, acquired technology, resources and new foods from the regions conquered, slave populations to extract native natural resources in conquered regions, and regular tithes from the governors of controlled regions), and these terms allowed them to build a society of "thinkers". Aristotle, Plato and Socrates were free to think about so many other areas of human endeavor and concepts because Greek society had freed them of the need to be constantly at war or constantly in the field. This freedom was won at the hands of the battles and wars of previous generations of Greeks: the Spartans and, later, Alexander of Macedonia. The emergence of the thinking classes in Greece gave rise to further advances in technology that came not from some magic innate ability of Greek culture or people to generate them, but from the slow, steady acquisition of knowledge from conquered lands. Two of the greatest prizes to this cause in Greek expansion and development were the ancient and faltering civilizations of the Persians to the east and the Egyptians to the south. In conquering the Persians, the Greeks gained access to more advanced methods of numeration; the concept of "zero" originated in the Indus valley and, until Alexander's conquest of those areas, was unknown to the Greeks. Additional gains from the Persians included advanced methods of agriculture in the form of access to additional spices and plant species. In conquering the Egyptians, the Greeks benefited even more: here they absorbed advanced building techniques, both for masonry and for building ships. They also learned the advanced metal craftsmanship that had flowered in Egypt for 2,000 years before the Greeks conquered them.
Also lucrative was the fertile coast of the Nile, which served as a breadbasket to the Greeks and later to the Romans who conquered them.

Examples of "advanced" cultures exploiting the resources of conquered lands to spur their advance can be seen over and over again in history since those times. In modern times, the capitalist system has mostly supplanted active warfare as the mechanism by which favorable trade terms are achieved, but this has come at the cost of a penalty to standard of living. As Greek society became more and more sedentary and less in touch with the tactics of war that led to their control of foreign lands, peoples and resources, they became vulnerable to attack from those regions. Invariably, in order to control a remote land the invader must bring their people and their military advantages to the periphery of the controlled areas in order to fend off attack from uncontrolled areas further off. This meant that the conqueror had to teach the conquered how to serve as their proxy on the battlefield. Once this parity had been reached, the original culture no longer had a dominating military advantage with which to maintain the favorable terms of extraction of resources or the use of slaves (they interbred with them and thus increased the percentage of the population that were free citizens) and, more importantly, lacked the ability to field the necessary numbers of troops to quell uprisings in far-off regions, often occurring simultaneously. These factors were slowly at the heart of the demise of the Egyptians and Persians at the hands of the Greeks, of the Greeks to the Romans, and of the Romans to the many dozens of controlled areas (precipitated initially by Hunnish antagonists) that they lost control over from roughly A.D. 100 to A.D. 450.

The economic version of the conqueror making the conquered their proxy exists in the manufacturing migration that has occurred from western lands to former colonies over the last 50 years or so. The massive populations of countries like China and India, coupled with policies that favor economic development, have allowed them to isolate manufacturing as a gold mine for development. The more people you have to work, the less each can demand in payment: supply and demand played out in clear detail. The production of cheaply made (from a cost perspective, not necessarily a quality perspective) manufactured goods in these lands has allowed them to extract favorable terms in reverse from the western countries and their populations, who demand these products to maintain their high standards of living. The result is that over the decades, a formerly hard-line communist China has gone from being a country with barely any diplomatic relations with the United States to being one to which the United States owes several hundred billion dollars in the form of outstanding treasury notes. In the ancient past a country like the United States would have asserted its military advantage over weaker nations to continue extracting the strongest possible terms of control over those countries' people and resources, but capitalism has replaced the risky tactic of using warfare to gain economic power.

The global recession of 2008 in which we are now all mired is the result of the continued inability of the western countries (and the United States in particular) to pay for the goods and services they are purchasing with their treasury notes, as over time the foreign lands that have been buying these notes have increased their parity with us in manufacturing and technology. The continued desire to maintain high standards of living has forced the US government to overleverage itself in order to maintain the status quo. Programs instituted to promote home ownership prodded individuals to assume large amounts of personal debt in the form of outstanding mortgages on those homes, many extravagantly built in the belief that soaring prices in local markets would continue to add value to the original purchase. Tie this to a lax regulatory environment that decoupled the risk inherent in purchasing a particular home from the purchaser, through exotic financial instruments like collateralized debt obligations and mortgage-backed securities, and US bankers were able to pass the risky purchases of overleveraged Americans on to the rest of the world, where up to now US securities had always been seen as a safe bet. The crash of the real estate and banking markets is a direct result of the entire world coming to the sudden conclusion that "the value believed to be in US securities was wrong". Unable to know if they could ever realize the value of the notes they had purchased, many foreign banks folded, and even some governments (Iceland), as they had overleveraged their holdings of US-based securities, the value of which has gone to nearly worthless.

It is safe to say that US securities will never again see the heights that they reached prior to this recession. The more advanced manufacturing prowess of the Asian market has sounded the death knell for the notion of the US and Europe as the only two areas of the world where growth appears nearly limitless. However, it would be a mistake to think that these emerging markets are free to grow as vigorously as it was believed the US market would grow. As happened before in Greece, parity will be reached in manufacturing capability, but this time it will come at the hands of the third world. The Asian countries are already organizing labor forces and unions in order to extract better terms for producing the goods exported to western countries. As per-worker pay rises, the advantage of manufacturing in these countries will fall, and the companies that own the factories will look for yet cheaper places to continue production at lower cost. Countries like Brazil, Mexico, Nigeria, Kenya and South Africa are poised to serve as the new locations where extremely low labor costs can be had for the production of the manufactured goods desired by the western and Asian markets to serve their high standards of living. The cycle will continue, but the third world stands poised to ride the wave that has already passed under the boats of Europe, the United States and Asia.


http://www.msnbc.msn.com/id/17424874/ (who do we owe?)


18 March, 2009

hulu adds more tv shows to catalog

In a recent post, I ranted about the fact that media companies just don't realize how much money they are leaving on the table by not making their old tv shows available online for free (ad supported) so that people can watch them any time they wish. I am a sci fi fan, and during the last 15 years I have missed a few really good series because I was so focused on my career and, most recently, the software for my company. One of the shows that I recently took an interest in is the Sci Fi channel's "Stargate SG-1". During the late 90's it was on Showtime and I never took an interest in it, but since it appeared on Sci Fi I've been able to get into the universe, and it is pretty good. Just a few days ago I searched hulu.com to see if SG-1 was there, and I could only find Stargate Atlantis (which is the spin-off series from SG-1). It just turns out that while I was searching for Late Night with Jimmy Fallon show clips, hulu had a splash on their main page advertising the availability of SG-1 on hulu.com. It is only the first season, but I gather they'll be releasing the seasons periodically...good going hulu!

16 March, 2009

chaotically damped exponential...i.e. the stock market

In a conversation I had with a friend last night, I mentioned that I understood much of the mathematics and physics that I learned as an undergraduate engineering student by approaching the problems through my visual sense. I have used this sense to understand areas of mathematical complexity that I probably would have understood differently, or not at all, had I not had the art-related visual ability to create shapes of the abstract ideas that I could then use to relate and form an understanding. During the conversation I mentioned the stock market and described it as a "chaotically damped exponential" system. What follows is a rationale for that description, written up to flesh out an idea I had otherwise not fully explored except visually.

In mathematics, the term "chaos" is used to describe signals or ensembles (collections of signals) that vary in a random way across an infinitely long sample or a wide enough space. Exponentials, however, are non-linear functions that are easy to predict (they form the heart of many theories, radiative decay being a prominent example), but exponential functions can rise or fall with time depending on the sign of the functional exponent. So in my view the stock market works as an exponential function whose exponent is a chaotic or random function or variable. In statistical dynamics, random variables are a powerful tool used to study variance across samples of some type of data. Mind you, I am talking about the average behavior of the market as taken by one of the various indexes, the S&P 500 for example. The actual market of stocks is much larger, and the indexes are really nothing more than samples of the true total market that are said to reflect gyrations in the whole.

The stock market works like a collection or ensemble of independent agents (companies) whose value rises and falls with the *perception* of the market (investors), who buy or sell based on how they feel the stock is doing, what they want the stock to be doing, and what they need in their own personal lives. The reasons for any particular agent viewing a stock vary, and thus the gyrations of the indexes over smaller intervals approximate perfect noise, since on those scales the gyration is simply an average of investor desires, which can be simultaneously divergent. Note this differs from the gyrations of a particular stock only in that the average is taken over a smaller set of data (i.e., only investors in that stock), but even here the agents still employ their own reasons for buying or not buying stock in a company within a given market session, and those reasons are often divergent. They do, however, have the ability to converge very rapidly should some news be released about the company, say indicating that it will be closing a factory or buying another company. Such news could have an immediate effect on a particular company's stock but leave little impact on the wider index and market as a whole. The ultimate conclusion seems to be that market gyrations vary with scale, but at every scale are still subject to random events that can shift the curves (be they of an individual stock or a market index) in unpredictable ways. A good example to illustrate the market is the analogy of a group of people in a park: if we took samples of the groups of people in the park, we would find that the numbers vary with many events, such as what day of the week it is, the nature of the weather, or whether there is a band playing in the nearby amphitheatre. These events are correlated with various levels of people showing up, but they are not predictors of any precise number of people in the park on any particular day.
So though it is possible to say "today there will be more people than yesterday", it is impossible to say "today there will be 39 more people than yesterday"; the variance in possible people acts as a random variable over all the days that a sample of individuals is taken in the park. I posit that the stock market works the same way, with an inability to predict specific gyration values. A company forms, exists or survives for a time frame, possibly thrives, and then dies. Unlike living entities, though, where collections have relatively known "death" times after they are born, companies can live indefinitely, so long as they perform well by competing in their respective markets and keeping investor interest. They often, during their "lives", purchase other companies or provide markets to them. Over an average of companies this obvious statement is true: "those that survive last longer than those that die", which is to say, there are more companies maintaining or thriving on average than there are those dying. I have only an intuitive sense of this being true for now, but it is the reason for my view that the behavior of the markets is exponential but damped in a random way. This accounts for the fact that over its history (take the Dow Jones average), the market has slowly gained, but if we add in this latest downturn (the greatest since the great stock crash of '29) we are probably averaging zero growth or near it over the history of the markets, which is exactly what we would expect of a chaotically damped system. It is important to realize that the dynamics over all companies, extant and extinct, define the true nature of the market, and none of the averages is a total sample of all companies; the Dow Jones only tracks 30. It is also important to recognize that apparent trends in growth can be illusions of multi-decade trends in the local and world economy.
For example, over the last 145 years, the industrial revolution has been the engine of a flowering of growth of companies exploiting potential in many industries. As the world economy equalizes production over the entire planet, the ability of any local index to show dominant growth results with respect to the other indexes will fade. So over time, I predict that the indexes will reflect more globally random behavior and will lose much of their effectiveness as gauges of the viability of a local economy. As companies globalize and nations provide work forces that can satisfy the needs of business at reduced prices, growth will slow even while production may be increasing within specific industries. This prediction is one that we may see play out over the next 50 - 75 years, as I predict that by that time the equalization of production capability should have run its course around the world. More specifically, it will occur once all the potential areas of manufacturing, resources and labor are fully mobilized to contribute to the world economy. Currently, vast areas in Africa, Central and South America, and Central and South Asia remain ripe with potential. Have another view? Let us discuss your ideas in the comments.
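The "chaotically damped exponential" idea above can be sketched as a tiny simulation: an index that multiplies each period by an exponential whose exponent is itself a random draw. This is a minimal illustration of the concept, not a model of any real index; the drift and volatility numbers are illustrative assumptions.

```java
import java.util.Random;

// Minimal sketch of a "chaotically damped exponential": each period the
// index is multiplied by exp(exponent), where the exponent is a random
// variable. All parameter values are illustrative assumptions.
public class ChaoticExponential {

    // One simulated path: value[t+1] = value[t] * exp(drift + noise),
    // where noise is a zero-mean Gaussian draw each period.
    static double[] simulate(int periods, double drift, double volatility, long seed) {
        Random rng = new Random(seed);
        double[] path = new double[periods + 1];
        path[0] = 100.0; // arbitrary starting index level
        for (int t = 0; t < periods; t++) {
            double exponent = drift + volatility * rng.nextGaussian();
            path[t + 1] = path[t] * Math.exp(exponent);
        }
        return path;
    }

    public static void main(String[] args) {
        // With drift near zero the long-run average growth hovers near zero,
        // but any single path can wander far -- the random exponent dominates.
        double[] path = simulate(250, 0.0002, 0.01, 42L);
        System.out.printf("start=%.2f end=%.2f%n", path[0], path[250]);
    }
}
```

Setting the drift to zero gives exactly the "zero growth over the whole history" behavior described above; a small positive drift gives the slow multi-decade gain, chaotically damped by the noise term.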




14 March, 2009

case in point that old media just doesn't get it.


Touching on this blog post a few days ago...

13 March, 2009

late hour super class surgery

The original idea behind the design of my distributed web application framework started from the idea of creating a content management system with a generalized compositional design paradigm. In my work at TheStreet.com I designed an ad management system that used the rudiments of this idea. The compositional structure of the design was pretty simple: the ad management tool needed to manage collections of advertisements in blocks known on the system as "tiles", and collecting the tile code into groups called "tilex" groups allowed the management of the collection as one, including dynamic replacement of various strings to match parameters for the pages to which the "tilex" groups were joined. This simple relationship between specific ad management Entities and the associated relational tables took a month to design and build. It was months before I thought of a solution that could use a similar design idea but be generalized to manage any Entity at all: a collection of Stories on a Page, a collection of posts in a forum thread, collections of threads in a forum, and even physical Entities like a collection of nodes in a system cluster. The generalized design idea was the impetus behind the development of the platform, with the first application, designed as a proof of concept, being a content management system. One of the requirements of the system was that management of all Entities would occur through a web interface. This was engineered to be maximally efficient, allowing each Entity to customize the interface for managing instances of its type dynamically. To do this, allowing management of instances by Users was required. In most management UIs, Users are able to perform several actions against objects they can manage, most often the CRUD actions of creating, retrieving, updating or deleting objects stored on the associated datastore.
I expanded these actions to cover 4 more that allow publishing, searching, importing and exporting. The problem is that how these actions are done will vary with the underlying datastore. I originally was designing for a specific database vendor but realized that a generalized vendor API should be built to maximize the power of the framework. I designed a vendor-agnostic sub-API that allowed me to encode the vendor-specific differences into dynamically invoked code and allow client programmers to be agnostic to the actual underlying database. This takes us to the main issue of different key insertion methods between the vendors.

Key Insertion, different strokes for different folks.

Key insertion is an important part of relational database design. A key is basically a unique label associated with the rows of a database table. Some are integers, others are alphanumeric, and some can be a combination of table fields. Some database vendors (MySQL and MS SQL Server) allow keys to be inserted automatically for tables with primary keys when new objects are inserted into those tables. Others, however, like Oracle, require the creation and use of special sequences that are incremented outside of the database tables. The different, non-ANSI-compliant methods for enabling their use on tables make my choice to create a db vendor API a prudent one, but it also forced a requirement on the Entity class designs. Entities that are created or retrieved programmatically must have a way to extract the inserted items agnostically. If you don't have a known key value, you can't know whether the item you are pulling out is the one you recently created. The framework is distributed across multiple processing nodes, so creation actions for new Entities can be happening simultaneously with subsequent retrievals; how do you disambiguate these objects? Enter user-defined keys.
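To make the vendor split concrete, here is a rough sketch of how a sub-API can hide the MySQL-style auto-increment readback versus the Oracle-style external sequence behind one interface. The interface and class names are hypothetical stand-ins, not the framework's actual classes.

```java
// Hypothetical sketch of a vendor-agnostic key strategy; the real
// framework's class names and structure differ.
interface KeyStrategy {
    // Returns the SQL used to obtain the key for a new row in the table.
    String nextKeySql(String table);
}

class MySqlKeys implements KeyStrategy {
    // MySQL auto-increments the primary key on insert; the generated id
    // is read back after the insert rather than fetched beforehand.
    public String nextKeySql(String table) {
        return "SELECT LAST_INSERT_ID()";
    }
}

class OracleKeys implements KeyStrategy {
    // Oracle requires a sequence maintained outside the table; here we
    // assume a naming convention of <table>_seq for the sequence.
    public String nextKeySql(String table) {
        return "SELECT " + table + "_seq.NEXTVAL FROM dual";
    }
}

public class VendorDemo {
    public static void main(String[] args) {
        // Client code picks a strategy once; the rest stays vendor-agnostic.
        KeyStrategy keys = new OracleKeys();
        System.out.println(keys.nextKeySql("entity"));
    }
}
```

Swapping `MySqlKeys` for `OracleKeys` is the only vendor-specific decision; everything above the strategy stays unchanged, which is the point of the sub-API.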

User defined key

A user-defined key is a quick and dirty way to ensure that a new Entity object inserted into a table can be retrieved, by generating a unique sequence of letters or numbers and adding it to a field in the table. In my framework all tables have a name or description field of type "varchar". In the tables that require programmatic modification, the insert populates this field with a unique string consisting of the date and time and the node id of the machine performing the creation. This guarantees that a subsequent request to retrieve an item by searching the field for that key will find the inserted item (if the insert succeeded) and not an item inserted at the same time by another machine. The use of user-defined keys comes with many advantages and a few disadvantages:


Advantages:

  • no need to accommodate any particular db; agnostic to each vendor's primary key generation method.
  • allows arbitrary-length keys that can be changed dynamically in client code at run time without a recompile if the uniqueness constraint is weakened. If the rate of addition of new items exceeds the ability to generate unique keys (say, containing date or time stamps), additional uniqueness factors can be added without a recompile.
  • abstract method enforcement (see next section) provides a hint to class programmers of new Entity types to implement the method for client use.
  • allows mapping of a huge space of possible keys inserted or extracted simultaneously, with uniqueness enforced.


Disadvantages:

  • uniqueness is only guaranteed if sufficiently orthogonal factors are chosen (date/time/node/site/user..etc.)
  • if the key is large, searching on the table will incur an increasing time cost
  • the additional method implementation in all inheriting classes increases the size of instances and therefore the running memory cost for the entire application. (see next section)

Two additional disadvantages are unique to my framework. In order to make the change at this late date to the core API, I will have to recompile the jar class distribution for the application and redistribute it to the production servers. This will require that they be restarted, but thankfully they are designed as a redundant set, so this will not require total downtime for the applications hosted on the site. Secondly, my system uses serialized instances of class objects to store object state to disk for system-wide versioning. Objects of the previous class state will no longer be accessible in the interface, as the class signature will fail, so these objects need to be deleted. Luckily, because none of the services are being utilized by paying customers yet, this will not impact any Users' objects.
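The key generation itself can be sketched in a few lines: combine a timestamp with the creating node's id, as described above. The exact field layout is an illustrative assumption; the framework's actual format may differ.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of a user-defined key: a unique string built from the creation
// timestamp plus the id of the node performing the insert. The format
// below is illustrative, not the framework's actual layout.
public class UserKey {

    static String generate(int nodeId, LocalDateTime when) {
        // e.g. "20090313T221545.123-node7"
        String stamp = when.format(DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss.SSS"));
        return stamp + "-node" + nodeId;
    }

    public static void main(String[] args) {
        // Two nodes inserting at the same instant still get distinct keys,
        // because the node id is one of the "orthogonal factors".
        LocalDateTime now = LocalDateTime.of(2009, 3, 13, 22, 15, 45, 123_000_000);
        System.out.println(generate(7, now));
        System.out.println(generate(8, now));
    }
}
```

If the insert rate ever outruns millisecond resolution, another orthogonal factor (site, user, a counter) can be appended to the string without changing any table schema, which is the "no recompile" advantage listed above.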

The actual insertion and retrieval are done by a database access API, or persistence API, custom built to allow Entities to mutate the datastore. The insert, retrieve, update and delete actions are methods enforced by underlying abstract method constraints in the superclass for all db access classes. The retrieve methods include the superclass-enforced signature that takes an integer "id" and a boolean "fillobject". An overloaded retrieve method exists for String retrieval of the mentioned name or description parameter, but only for those Entities that were found to require programmatic insert and retrieve. This solves the problem of programmatic insertion/retrieval, but it leaves open the possibility that new Entities added to the framework dynamically would not be able to support efficient programmatic modification via insertion and retrieval without the retrieve(String) method. I realized this would be an issue years ago while coding the ECMS application but felt I would address it at a later date; now that the second application is about to go live as a commercial solution, and other applications may come online, that time is now.

Adding a new abstract method signature

In order to ensure that all new Entities implement a retrieve method that can be used to extract user-defined keys, the superclass of the db class for all Entities must be modified. The Java programming language makes this as simple as adding the following to the class:

public abstract Entity retrieve(String desc, boolean recurse) throws EntegraDBAccessException;

This method signature forces all inheriting classes to provide an implementation of the method at compile time, and thus ensures that all Entity classes that use db accessors will have a retrieve method for programmatic inserts and retrieves. The next part is the hard part or the easy part, depending on your perspective. I mentioned the two applications built so far using the framework, ECMS and collaboration; these applications necessitated the creation of nearly 40 Entities, of which only 15 required programmatic insert/retrieve, so 25 remain to provide implementations for the now enforced abstract signature mentioned above. To make matters more laborious, system classes use a different superclass called a db object, which is similar to the Entities but leaves out the compositional parent/child retrieval logic. Both are managed through the same UI, however, so it also requires a retrieve(...) method signature, and there are about 12 of those classes that require implementation. So the hardest part of this involves adding the required implementations. The methods are simple, so it will be mostly cut and paste for a few hours, but once it is done, the entire API will be enabled to support programmatic insert/retrieve using user-defined keys and, more important, any new Entities or system classes that inherit from the core classes will be forced to provide an implementation at compile time. This will make the client programmer's life a lot easier when designing with the Entity objects.
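To show the compile-time enforcement in miniature: once the abstract signature exists in the superclass, any accessor subclass that omits the method simply will not compile. The classes below are simplified stand-ins for the framework's real Entity, accessor superclass and exception, kept deliberately tiny; the exception is modeled as unchecked here purely for brevity.

```java
// Simplified stand-ins for the framework classes; names and bodies are
// illustrative only, not the actual Entegra API.
class Entity {
    String desc;
    Entity(String d) { desc = d; }
}

class EntegraDBAccessException extends RuntimeException { }

abstract class DBAccessor {
    // Adding this abstract signature forces every inheriting accessor to
    // supply a String-keyed retrieve, or the subclass fails to compile.
    public abstract Entity retrieve(String desc, boolean recurse)
            throws EntegraDBAccessException;
}

class StoryDBAccessor extends DBAccessor {
    @Override
    public Entity retrieve(String desc, boolean recurse)
            throws EntegraDBAccessException {
        // A real implementation would search the varchar name/description
        // column for the user-defined key; this stub just echoes one back.
        return new Entity(desc);
    }
}

public class AbstractDemo {
    public static void main(String[] args) {
        DBAccessor dao = new StoryDBAccessor();
        System.out.println(dao.retrieve("20090313T221545.123-node7", false).desc);
    }
}
```

Deleting the `retrieve` override from `StoryDBAccessor` turns the missed implementation into a compiler error rather than a runtime surprise, which is exactly the hint to future Entity authors described above.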

12 March, 2009

media myopia

I am a big science fiction fan; as a kid I spent hours watching shows of the late 70's and early 80's, from the original Battlestar Galactica to Buck Rogers in the 25th Century. Being a baby of the 70's, I was just old enough to be absolutely awed by the new age of sci fi films ushered in by George Lucas and Steven Spielberg, and then to have a front seat for the many blockbusters of the 80's. Most of my viewing came from tv, cable (remember WHT?) and then VHS. Today, the internet has made many of the shows from the 80's and 90's a stone's throw away, IF the media companies that own them would only get wise to the possibilities. Just a minute ago I was at tv.com, watching episodes of Star Trek, and what I noticed was that the episodes available are only a subset of the show's total run. Why? Why are the media companies blind to the potential of allowing anyone, anywhere to watch the complete series online? The conventional explanation is that they are unable to extract sufficient revenue from ads to support this, but how can they know this if they don't try? More important is the fact that their analysis seems to value the potential of each series one at a time, rather than looking at the potential for all their properties online to produce ad revenue. Think about it: the Star Trek series is something that, on cable, can ONLY be watched during the specific intervals of time when the show (syndicated in various countries) happens to be on the local schedules of those areas. If we assume that at any given moment 50 syndicated episodes of Star Trek are being shown somewhere in the world (and that is a generous number in my view), the media company can at most be showing 50 different episodes from a run of over 100 episodes. Additionally, in each area there is a display of usually one episode per 24-hour period. So those that want to watch the show need to tune in at precisely the right time to find it.
The ad revenue potentially derived from the spots placed between the segments of such episodes is thus wasted, for at least 24 hours, on those individuals who miss the episode.

Now contrast this with what could be possible online. If Star Trek were available, every episode of the entire series, people could watch it when they wanted to. They would not have to tune in at a specific time and would not have to wait 24 hours, since they would never "miss" episodes; they could queue them up at any time. Additionally, by interspersing the online episodes with single commercials, the potential number of viewers of all the displays of the ads, across all of the possible episodes being displayed 24 hours a day, could easily swamp the impressions made across all the areas on tv. They would be making ad revenue continuously throughout the day instead of at specific intervals in the day, they would be able to satisfy the desire of viewers to watch the show at a time of their choosing, and they would be able to take advantage of repeat viewings by enabling people to watch multi-part episode arcs and season open/close finale sets without season breaks. Finally, and probably most lucrative, would be the ability to watch episodes in places that make tv viewing impossible. Viewers with laptops and smart phones could watch on the way to and from work or on vacation. The ability to gain an impression would no longer be restricted to the living rooms of people's homes but would go with them wherever they went. This would encourage users to stay on the site as they watch; at the same time, the site could be used to mine the viewership for precisely the types of shows they are interested in. The companies could mine series popularity at particular times of the year to see if there is a correlation with the type of show, and they could use this data to inform their new series production directions. Ultimately, putting the old series online would allow them to better target newer series to a much wider audience. As they began to reap the rewards of this open series concept, they would see the potential of being really bold and doing the same for currently running series.
Why should the latest episode of CSI only be available online at hulu.com the day after it originally airs? Why isn't it available at the same time? The people who watch it on tv will do so, and the people who watch it online will do so, but giving the option allows the tv viewers to catch an episode they missed online at any time they wish. The broadcaster would again be able to mine the metrics of the episodes in a much better way than they currently can by buying analysis data from companies like Nielsen; with their own sites they could mine their own data and wouldn't have to pay a third party for the analysis.

I think the media companies are still trying to make sense of what the internet means. The devastation that the availability of mp3's wrought on the music media companies is only going to repeat itself in video IF the companies don't take the reins and control distribution of their media online instead of letting the pirates do it for them. Already, signs are in place that some of the companies get it, but it is not a bold enough move, with only partial lists of episodes for many series that first aired 20 years ago. It's not an issue of encoding either; original shows could be encoded to HD-quality video for online streaming in a very short time using an average pc. The media companies have some other motive for restricting their set of options, but ultimately they are only restricting their ability to derive revenue from their archives while missing out on the chance of being THE online destination for watching cherished shows from the past, at any time desired, for the small fee of suffering 4 or 5 30- or 60-second spots between the episode segments.