
the attributes of web 3.0...

As the US economy continues to suffer the doldrums of stagnant investment in many industries, belt-tightening budgets in many of the largest cities, and continuous rounds of layoffs at some of the oldest corporations, it is little comfort to those suffering through economic problems that what is happening now has happened before. True, the severity of the downturn might have been different, but the common theme of the times is people and businesses being forced to do more with less. Like environmental shocks to an ecosystem, stresses to the economic system lead people to hunker down to last out the storm, but it is instructive to realize that during the storm, all that idle time in the shelter affords people the ability to solve previous or existing problems. Likewise, economic downturns give enterprising individuals and corporations the ability to make bold decisions with regard to marketing, sales or product focus that can lead to incredible gains as the economic malaise runs its course. Examples of this variety can be found in the opportunities squeezed from the dry rock of the Great Depression by the US manufacturing industry. Again, in the mid-1970s, through similarly bleak economic times marked by stagflation, inflated oil prices and an anemic stock market, the PC revolution silently ran its course. Microsoft was a baby of these difficult times, and companies like Intel and Sun grew up around it; all are still with us today, having contributed in major ways to the flowering of our economy over the last 30 years. With the emergence of the World Wide Web in the early 90's, the productivity revolution created by the stand-alone personal computer, and by PCs networked to mainframes in businesses, gave businesses and individuals a new ability to squeeze potential from the landscape of options provided by a world of TCP/IP-connected machines communicating data between users via HTTP and HTML in a graphical program called a browser.

web 1.0

Web 1.0 has been used to describe the first generation of browsers and web technologies. The Mosaic and Netscape browsers dominated much of this era, followed later by Microsoft's Internet Explorer. A key attribute of web 1.0 was that pages were, for the most part, straight HTML; JavaScript wasn't invented until Netscape saw a need to add more dynamic elements to pages, such as change-on-hover effects. I remember designing sites in 1995 that used these simple "dynamic" elements, but the sites were not really dynamic. Most of them were not data driven from a database backend, and the ones that were, were built on complex content management platforms or engines like Vignette StoryServer or Dynamo. Microsoft added much confusion to the mix with its introduction of ActiveX in 1995 and then its incorporation into Internet Explorer in 1996. This provided a hodgepodge of technologies for doing more data-driven and dynamic sites, but there was no cohesion. It was almost guaranteed that a dynamic site written for one browser would break in any other browser. Though Netscape invented JavaScript and made it available for implementation by other browser makers, Microsoft chose to develop its own version of a dynamic front end scripting language and called it "JScript". The technology potholes that designers needed to navigate were legion, and the lack of technological interoperability hampered the spread of cross-site paradigms for user interface design. As the new millennium approached, however, designers began to standardize on a set of "core" technologies that provided the maximum amount of usefulness across all the available web browsers. It worked by mating a back end data-driven core (Java, ActiveX/VB, Vignette, Dynamo, etc.) to front end user interface technologies using HTML, JavaScript and the newly emerging CSS, or Cascading Style Sheets.
Unfortunately, the clouds began to gather, and web 1.0 was about to be challenged by the bursting of what was called the "dot-com" bubble. By 2001, the unsustainable ramping of web-related stock prices gave way to a correction, and that correction led to recession in the US and global markets as overvalued tech companies shed workers. Web 1.0 was in sunset, but the web wasn't dead yet. Enter...

web 2.0

In early 2002, the continued shedding of jobs in the tech industry made it a hard time for web properties. Many that had been flush with cash from exuberant venture capital valuations were forced to cut expenses as those valuations were brought down to Earth by rapidly shrinking stock prices. Free coffee and snack areas were silently closed out, expense-paid limousine trips home were curtailed, and it was no longer cool to be seen playing foosball in the game room during daylight working hours. At the same time, the technology continued to consolidate around a fixed way of doing things online that worked more or less across web browsers. The dynamic JavaScript language that Netscape had invented half a decade earlier became powerful enough to do things in the browser that many back end languages would otherwise be tasked to do. Creating graphs and charts, even games, was made possible by this powerful language. Interestingly, an innovation added to JavaScript by Microsoft would turn out to be the linchpin of the coming web 2.0 age: a simple object, called the XMLHttpRequest object, that allowed JavaScript (originally Microsoft's JScript) to separate the state of a web page from resource calls associated with the content on that page. This enabled pages to make calls to resources without requiring the user viewing the browser to perform any action. It added the last needed piece of dynamism to a web page that would make it behave (when properly designed) much like the interface of a stand-alone application installed on a user's PC. The XMLHttpRequest object was first introduced in 1999, and versions of it were quickly adopted by the other major browser makers of the time (Mozilla, Opera, Safari). By 2002, hackers had begun using the object in clever ways to allow page elements to be dynamically updated with back end data.
In the late 90's, many sites that were growing rapidly to support many pages utilized XML-based repositories that made storing content, and then repurposing that content, very easy.

XML-based feeds were part of many of the first web 1.0 sites; I built many such feeds as a content management producer in 2000 and 2001. The emergence of the XMLHttpRequest object allowed the convergence of content separated from presentation, dynamic presentation, and dynamic update of presentation that let the next generation of web applications become the first web 2.0 applications. The acronym AJAX, for Asynchronous JavaScript and XML, was coined to symbolize the pieces of the new standards-based dynamic site design paradigm. Simultaneously, the power of XML as a content transport agnostic of presentation was generalized so that XML could represent many other elements in connected systems that previously required direct object delivery and reception via platform-specific object broker technologies like CORBA and DCOM. The WS (web services) model was born, first by using XML to simulate remote procedure calls on server-based objects using XML instructions from other machines. About the same time, another XML-based grammar called SOAP was created; the Simple Object Access Protocol allowed two-way communication, enabling dynamic conversations between machines, possibly on completely different platforms, so long as they both could "speak" (write) and listen to (read) the SOAP messages. Additional grammars were designed to enable security, to read and write service directories (UDDI), and to perform other functions between computers previously possible only with object broker technologies. So web 2.0 involved two revolutions: the first was the emergence of asynchronous access to data sources using the XMLHttpRequest object; the second was enabling computer systems on any platform to communicate data agnostically between systems. The design and implementation of systems that exposed application components in the form of RPC/SOAP or WS-* XML protocols became known as Service-Oriented Architecture.
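The SOAP messages described here are plain XML with a fixed envelope structure, which is what lets any platform read or write them. As a minimal sketch (the envelope namespace follows SOAP 1.1; the service namespace, method name and parameter are invented for the example):

```java
// Minimal sketch of building a SOAP 1.1 request envelope as a string.
// The soap: envelope namespace is the standard SOAP 1.1 one; the "m:"
// service namespace, method and parameter names are hypothetical.
public class SoapSketch {
    public static String buildEnvelope(String method, String param, String value) {
        return "<?xml version=\"1.0\"?>"
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body>"
            + "<m:" + method + " xmlns:m=\"http://example.com/service\">"
            + "<m:" + param + ">" + value + "</m:" + param + ">"
            + "</m:" + method + ">"
            + "</soap:Body>"
            + "</soap:Envelope>";
    }
}
```

A real web service would POST this envelope over HTTP and parse an equally regular response envelope, which is why two machines on completely different platforms can hold the conversation.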

As 2005 approached, many businesses were investigating ways to switch their applications over to an open WS-* model that could be used to allow those applications to expose and consume services. In essence, the XML revolution gave applications running inside enterprises the ability to communicate and receive data using protocols not unlike those used by the web itself. In fact, HTML is itself an application of the Standard Generalized Markup Language (SGML), the same family of structured languages from which XML descends. Web 2.0 saw its flowering in the form of consumer sites that utilized it to the fullest: the emergence of YouTube, the continued growth of Google, the transformations of Yahoo, eBay and Amazon, and the emergence of countless new 'startups' that employed very simple ideas implemented with unique dynamic elements, enabled either by AJAX or by the now mature Flash technology from Adobe, which had finally reached sufficient performance on the newer, more powerful computers to enable the creation of rich internet applications. While the front end and application technologies changed, so did the preference for the back end development platforms used to produce web applications. The weight and expense of the full content management systems like Vignette could not be supported on the tight budgets of the couple of college students who wanted to build web applications. For them, it was much easier to cobble together a site using the new server-side scripting language platforms that emerged to allow easy back end development without the need for a heavy application server. Python, PHP and Ruby were the languages of choice for this set, and soon these languages had "platforms" designed around them that let them be used much like the old content management applications, but without the expense. Plone, Drupal and Ruby on Rails became the foundation upon which many simple web 2.0 applications were put together.
Ruby on Rails in particular brought together the dynamic page creation (on the data side with the database, and on the interface side with AJAX) and the page management elements necessary to create fast sites. Unfortunately, these new platforms lacked the stability and scalability that would be necessary for them to serve as the base for massive growth. The web 2.0 era has been dotted with spectacular successes achieved after much difficulty on the back end (Twitter, Digg), as the hodgepodge of technologies put together to define these sites was optimized to scale to tens of thousands to millions of simultaneous users.

However, despite the success of many of these sites today, there is much room for improvement. The scalability that many of them currently have comes at the cost of large amounts of server resources and massive arrays of disks, not to mention the expense of running those servers, hiring managers for those servers, and in general managing all the employees of the rapidly growing companies they work for. A truly scalable, secure and manageable web application framework was never found as the web iterated from 1.0 to 2.0. The next generation of web applications will take advantage of a deep synthesis of efficiency in back end code design from the ground up, enabling extremely efficient and dynamic web applications and sites that use any of the desired technologies of the previous generation to build even more resource-intensive sites, but without the massive expense of hardware resources and of managing users and employees. Enter the next step...

web 3.0

The economic slowdown of 2008 began with troubling signs in the housing market that soon snowballed into the effective collapse of the two big home mortgage institutions (Fannie Mae, Freddie Mac) and several large and old banks (Lehman Brothers, Bear Stearns, Washington Mutual) shutting their doors overnight. The cascade of failed banks led to a tightening in the credit markets, which, as in the 70's and the early 2000's, made it extremely difficult for entrepreneurs to acquire venture capital. I consider web 3.0 a defining moment because the type of software that I believe will characterize it is what I've been working on for the last 7 years. In order to realize the extreme efficiencies of low resource costs under massive scalability, the design time necessary to engineer a generalized solution had to be greater. When I was laid off in 2001, I decided that the best way to pursue my dream of running my own business was to start it then. I had a brain full of experience working in a highly challenging environment, where I engineered solutions using the emerging XML, content management and dynamic UI technologies. Before the end of 2001, I began the design of a generalized Entity management system. This system would be completely different from previous content management systems in that it would not specialize in any particular type of content. Content management up to that time had been dominated by management of only certain types of content important for web applications or sites: stories and articles, images, video and audio files were the ones that made sense within the context of web applications. By 2005, management systems had broadened their scope to include any file type, but they still failed to manage the most critical element of a running business, one that had never been properly encapsulated into a combined content management system: business processes.
A truly generalized management system would include Users, Servers and the other structural elements of the very application under the management paradigm. If articles, stories and images could be created, updated or deleted, so should servers, users and any other conceivable entity of the business given objects in the system. This is what an Entity Management System would do. The task is a difficult one, as it is an attempt to solve a multi-dimensional problem landscape while remaining easily managed both as new Entity types are added to the system and as new instances of all those types proliferate on it.

Around 2004, when the bulk of my Entity Management API had been built, I happened across an article on the J2EE Java technology platform. I looked through a few documents and noticed immediately the use of "entity" to describe a generalized management type in the J2EE framework. However, continued reading about the technology quickly led to headaches; the reason was that J2EE was overly complex. It tried to solve the entity management problem, but not holistically. It had many different possible persistence APIs that could be used, and a strict class inheritance scheme that placed many requirements on client programmers that didn't seem natural during class design. After reading a bit on J2EE, I was happy that I had designed my API using core Java from scratch. For a generalized web platform, the most critical components were deemed as follows:

  • Compositional XML-based type (Entity) framework.
  • Scoped security model based on hierarchical granular permissions.
  • Twin hierarchy for the Entity base class to optimize object instance sizes.
  • Custom persistence API for optimal mediation from the datastore.
  • Action-oriented Workflow for all Entities.
  • DB vendor-agnostic database API.
  • XML-based bootstrap system.
  • Compiled software distribution for easy installation.
  • Extensible at run time to add new Entities and thus new functionality.
  • Dynamic request load distribution across participating cluster nodes.
  • Dynamic request load distribution between branches of nodes for out-of-the-box geographic scalability.
  • "Manages itself" requirement: all templates and scripts that compose the framework are managed by the framework.
  • Secure web 2.0 Collaboration API to enable users to communicate in real time concerning actions over all Entities.
  • Content delivery Protocol (ssh, sftp, file, etc.) and File type (.doc, .pdf, .html, etc.) API.

A little bit about these requirements and why they were deemed critical to a web 3.0 web application platform. The use of XML as a compositional core is what allows the content types based on standard content formats managed in traditional CMS systems to be easily handled by the new platform. XML-based representations of articles, stories or even advertisements can be related to one another in the datastore, and the XML of the composed object generated using a global "toXml()" method. Transformation of the composed XML can then follow quickly using XSL transformation language templates added for that purpose. Note that there is no further delineation of what is being composed; the point is that any compositional relationship between business entities can be extracted as XML after being composed dynamically in the framework, or as it was retrieved from the datastore. This allows the final rendering of content types from the composed XML to any desired file type, using any desired delivery protocol.
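The toXml()-then-transform pipeline can be sketched with the JDK's built-in XSLT engine. This is an illustration, not the framework's actual code: the Article class, its fields and the tiny stylesheet are invented; only the pattern (entity emits XML, a registered XSL template renders it) comes from the text above.

```java
// Sketch: a composed entity emits XML via toXml(), and an XSL template
// renders it to a delivery format using the JDK's javax.xml.transform API.
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RenderSketch {
    static class Article {                       // hypothetical entity type
        final String title;
        Article(String title) { this.title = title; }
        String toXml() { return "<article><title>" + title + "</title></article>"; }
    }

    // Trivial stylesheet: render an article to plain text (its title).
    static final String XSL =
        "<xsl:stylesheet version=\"1.0\" "
      + "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
      + "<xsl:output method=\"text\"/>"
      + "<xsl:template match=\"/article\">"
      + "<xsl:value-of select=\"title\"/>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    public static String render(Article a) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(a.toXml())),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Swapping the stylesheet swaps the output format, which is the point of keeping content composition separate from presentation.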

Scoped permissions allow any action against any Entity to generate a unique permission id on the system. Each scope indicates the hierarchy of application for the permission: there are permissions that apply to all Entities, to all instances of a single Entity, or to a single instance of a single Entity. Instance permissions are generated on demand; thus only permissions for objects in use are actually available to disseminate, ensuring minimal granted rights across the entire entity landscape over the life of a cluster.
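The scope resolution described above can be sketched as a lookup from the narrowest scope outward. The key format and names here are illustrative, not the framework's actual scheme:

```java
// Sketch of scope-resolved permissions: a grant may sit at the global scope
// (all Entities), the type scope (all instances of one Entity), or the
// instance scope. A check succeeds if any enclosing scope holds the grant.
import java.util.HashSet;
import java.util.Set;

public class ScopedPermissions {
    private final Set<String> granted = new HashSet<>();

    // Grant a permission; null segments widen the scope.
    public void grant(String action, String entityType, String instanceId) {
        granted.add(key(action, entityType, instanceId));
    }

    // Check from narrowest (instance) to widest (global) scope.
    public boolean allowed(String action, String entityType, String instanceId) {
        return granted.contains(key(action, entityType, instanceId))
            || granted.contains(key(action, entityType, null))
            || granted.contains(key(action, null, null));
    }

    private static String key(String action, String type, String id) {
        return action + ":" + (type == null ? "*" : type)
                      + ":" + (id == null ? "*" : id);
    }
}
```

Because instance-level keys are only created when granted, the set of live permission ids stays proportional to the objects actually in use.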

The twin hierarchy for the base classes allows Entities to be tailored to their use on the system. Entities that have compositional relationships with other entities get additional machinery from the base class for iterating through the "payload" of child objects and performing the important iteration, skip and flow functions that such relationships usually employ. Entities without compositional relationships tend to be structural or system entities; for these, the additional base class machinery is omitted, allowing the objects to be optimally managed in the system.
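The shape of the twin hierarchy might look like the following sketch, where only the compositional branch carries the payload machinery. All class names are hypothetical stand-ins:

```java
// Sketch of the twin base-class hierarchy: a lean Entity for structural or
// system objects, and a CompositeEntity that adds child-payload iteration
// only for entities with compositional relationships.
import java.util.ArrayList;
import java.util.List;

public class TwinHierarchy {
    static abstract class Entity {                   // lean base: no payload
        final String name;
        Entity(String name) { this.name = name; }
    }

    static abstract class CompositeEntity extends Entity {
        private final List<Entity> payload = new ArrayList<>();
        CompositeEntity(String name) { super(name); }
        void addChild(Entity e) { payload.add(e); }
        List<Entity> payload()  { return payload; }
    }

    static class ServerEntity extends Entity {        // structural entity
        ServerEntity(String name) { super(name); }
    }
    static class ImageEntity extends Entity {         // leaf content entity
        ImageEntity(String name) { super(name); }
    }
    static class StoryEntity extends CompositeEntity { // compositional entity
        StoryEntity(String name) { super(name); }
    }
}
```

Structural entities like ServerEntity never pay for the list and iteration machinery they would not use, which is the instance-size optimization the requirement names.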

The custom persistence API maps database access objects to the twin hierarchy of base classes mentioned before. The abstract base class for the persistence API ensures that the necessary methods are implemented, while providing any common method implementations to inheriting classes. It also ensures that the access objects are optimally sized for each entity type. Optimally sized entities allow more efficient use of application memory: the more objects that fit in memory, the less memory is needed to satisfy a given amount of application load. This translates directly into the cost of memory and the number of servers, which reduces operating costs for the business using the software.
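The abstract-base-plus-subclass split can be sketched as below. The in-memory map stands in for the real datastore, and the User type and its key are invented for the example:

```java
// Sketch of the persistence layer: an abstract base supplies the shared
// save/load plumbing, while each entity's subclass supplies only its own
// mapping (here, how the entity is keyed).
import java.util.HashMap;
import java.util.Map;

public class PersistenceSketch {
    static abstract class EntityDao<T> {
        private final Map<String, T> store = new HashMap<>(); // stand-in datastore

        // Subclasses must say how their entity is keyed.
        protected abstract String keyOf(T entity);

        public void save(T entity) { store.put(keyOf(entity), entity); }
        public T load(String key)  { return store.get(key); }
    }

    static class User {                                // hypothetical entity
        final String login;
        User(String login) { this.login = login; }
    }

    static class UserDao extends EntityDao<User> {     // per-entity access object
        protected String keyOf(User u) { return u.login; }
    }
}
```

Each access object carries only its own mapping code on top of the shared base, which keeps the per-entity footprint small.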

Action-oriented workflow is a User/System delegation paradigm that creates a loose coupling between the actions associated with Entity objects and the Users who may be able to perform them. By allowing the creation of workflow stages that collect Users with different permissions, delegated for a given action execution on an entity type requested by another User, commitment of an action can be made to converge on a User who has the permission for the action requested. This is action-centric rather than permission-centric resolution of changes to Entity objects. It eliminates the need to create access control lists by allowing businesses to map their real-world processes very easily into the system itself. The loose coupling enables a massive business process landscape to be covered by the paradigm. Also, actions can be delegated to the system to be performed at any desired time, either at action origination or by the action committer.
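The convergence of a requested action on a User able to commit it can be sketched as a walk through a stage's delegates. This is a toy model of the idea, not the framework's API, and the names are invented:

```java
// Sketch of action-centric resolution: a requested action walks the stage's
// delegates in order until it reaches a User holding commit rights for it;
// otherwise it stays pending in the workflow.
import java.util.List;

public class WorkflowSketch {
    static class User {
        final String name;
        final boolean canCommit;   // stands in for the real permission check
        User(String name, boolean canCommit) {
            this.name = name; this.canCommit = canCommit;
        }
    }

    public static String resolve(String action, List<User> stage) {
        for (User u : stage) {
            if (u.canCommit) return u.name + " commits " + action;
        }
        return action + " pending"; // no delegate in this stage can commit yet
    }
}
```

The requester never needs an access control list entry of their own; the stage routes the action to whoever can legitimately commit it.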

A DB vendor-agnostic API allows the framework to be easily installed against one of many popular RDBMS vendors and, if desired, extended to use custom vendors, so long as those vendors are ANSI SQL compliant, which covers the bulk of data management systems in use at the enterprise.

The XML-based bootstrap allows the detailed configuration of the entire cluster, and of individual nodes and branches within it, to be managed using XML files. The framework has a permission-scoped UI that allows GUI management of the attributes in these files by Users with the requisite system permissions. Changes to the system can be made at a per-node, per-branch or total cluster scope, depending on what is desired and on the permissions of the committing User.

Run-time extensibility allows new Entity classes, their associated db access classes and the corresponding RDBMS tables to be added to the system via the GUI or programmatically using a system XML script. Detailed instructions make the process of adding the new class packages, database tables and necessary management templates efficient and fast. A cluster of nodes can be extended in this way without requiring much beyond page refreshes once the relevant classes, tables and templates are in place.

Dynamic request load distribution allows each node to determine whether it should handle an incoming request for authentication to the system, based on node-specific criteria indicated in the system XML for each node in a cluster. The system can be adjusted so that requests beyond a certain percentage of utilized system memory or database connections are redirected to other nodes in the cluster or, failing that, denied. Since every node participates in this simple action, the entire cluster behaves in such a way as to push requests toward the least loaded nodes over time. This allows the most efficient handling of incoming load spikes by those machines in the cluster best able to handle them. The system allows warning and redirection events to notify designated Users, giving User administrators constant awareness of the load on the live system. (Note: because of the permission system, and the fact that the system manages itself, system administrators are Users just like content editors or writers; only their permissions distinguish what they can see or do on the system.)
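A single node's handle/redirect/deny decision can be sketched as a pair of thresholds. The threshold values and the decision enum are illustrative; in the framework the criteria come from each node's system XML:

```java
// Sketch of a node's admission decision: handle the request, redirect it to
// another node, or deny it outright, based on utilization thresholds that a
// real deployment would read from per-node configuration.
public class LoadGate {
    enum Decision { HANDLE, REDIRECT, DENY }

    // memoryUsedPct: current utilization in [0,1];
    // redirectPct/denyPct: configured cutoffs, redirectPct < denyPct.
    public static Decision admit(double memoryUsedPct,
                                 double redirectPct, double denyPct) {
        if (memoryUsedPct >= denyPct)     return Decision.DENY;
        if (memoryUsedPct >= redirectPct) return Decision.REDIRECT;
        return Decision.HANDLE;
    }
}
```

Because every node applies the same rule locally, loaded nodes shed new requests and the cluster as a whole drifts toward routing work to its least loaded members.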

Geographic-scale dynamic load distribution allows nodes to pass requests out to nodes located in other data centers across a WAN. The load distribution algorithm biases redirections toward nodes in the same branch as the redirecting node, but if all of those nodes are themselves redirecting, a second request will be routed out to a node in another branch. This allows spikes in load that are geographically local to be diffused to other geographies, where requests that would otherwise be denied can be handled at the cost of a slight penalty in latency. This is what allows clusters of the framework to scale smoothly even when the nodes are geographically dispersed.
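The branch-biased routing rule can be sketched as two passes over the cluster's node list. Node fields and branch names here are invented for the illustration:

```java
// Sketch of branch-biased redirection: prefer an available node in the local
// branch; only when the whole branch is saturated route across the WAN to a
// node in another branch, accepting the latency penalty.
import java.util.List;

public class BranchRouter {
    static class Node {
        final String id;
        final String branch;
        final boolean redirecting; // node is itself shedding load
        Node(String id, String branch, boolean redirecting) {
            this.id = id; this.branch = branch; this.redirecting = redirecting;
        }
    }

    public static Node route(String localBranch, List<Node> cluster) {
        // First pass: any available node in the local branch.
        for (Node n : cluster)
            if (n.branch.equals(localBranch) && !n.redirecting) return n;
        // Second pass: any available node anywhere (WAN hop).
        for (Node n : cluster)
            if (!n.redirecting) return n;
        return null; // cluster-wide saturation: the request is denied
    }
}
```

The local-first bias keeps latency low in the common case, while the second pass is what lets a geographically local spike spill over to idle capacity elsewhere.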

The "manages itself" requirement satisfies the generalized Entity management goal of the design. The system manages the Users of the system as Entities; it manages the Permissions granted to Users as Entities; it manages the UI templates that make the system functionality available as Entities; and it manages the system configuration XML and XSL files, and the JavaScript and CSS scripts for UI elements, all as Script entities. This allows all of these to take advantage of the aforementioned permissions, load distribution and action-oriented management without any additional code. New Entities are automatically manageable by the system once added.

Real-time collaboration in the form of instant messaging, group chat, outbound email notifications for events or collaboration requests, and collaboration feeds for User contact groups are all part of the system. Users can control the scope of their collaboration to include only invited Users on the system, or they can invite anonymous or IP-specific internet guests. This enables Users to communicate with clients or partners or interact with customers. Again, the core idea of generality was built into the collaboration API, allowing it to be maximally useful to any applications added later to the system. It is important to note that once the design of the core framework was complete, adding functionality such as the content management API and the collaboration API proceeded as if the framework were being extended, following the design paradigm that was the goal of the project in the first place.

There is a GUI-managed API for setting up delivery objects for XML-based generated content. New Protocols and FileTypes can be added by extending the corresponding API base classes, imported into the system, and applied to existing Entity XML content set up to use the new protocols or file types. Content can be rendered to PDF and sent via ssh to a remote server, or the same content can be rendered to a doc file and sent via ftp to another location. There is no limit on the combinations, as they are orthogonal in application.
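The orthogonality of delivery protocol and file type can be sketched as two independent interfaces that are only combined at delivery time. The interface names and the toy implementations are illustrative stand-ins (a real system would plug in PDF rendering and ssh/sftp transports here):

```java
// Sketch of the orthogonal delivery APIs: any FileType renderer can be paired
// with any Protocol sender, because each is extended independently from its
// own base abstraction.
public class DeliverySketch {
    interface FileType { String render(String xml); }
    interface Protocol { String send(String payload, String destination); }

    static class TextFile implements FileType {        // stand-in for .pdf/.doc
        public String render(String xml) { return "rendered:" + xml; }
    }
    static class LoggingProtocol implements Protocol { // stand-in for ssh/sftp
        public String send(String payload, String destination) {
            return "sent " + payload + " to " + destination;
        }
    }

    // Delivery composes one renderer with one transport; any pairing works.
    public static String deliver(String xml, FileType ft, Protocol p, String dest) {
        return p.send(ft.render(xml), dest);
    }
}
```

Adding a new file type never requires touching the protocols, and vice versa, which is why the combinations multiply rather than having to be written one by one.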

So web 3.0 will be highlighted by frameworks of the kind I've designed. Applications built on web 3.0 platforms will be geographically scalable, implicitly load balanced, run-time extensible and secure systems for building web applications and managing them within the context of an existing or desired business workflow. Separate software for managing users (human resources) independent of the work they are tasked to perform is not needed: the tasks performed by a user, when those tasks are performed, and who collaborated on the changes are all implicitly managed by the system, allowing managers to determine User efficiency by looking at reports of when actions are submitted and then committed. A single application can be managed by a handful of Users or even one User; delegation of actions through workflows can give this User sole control over the performance of critical actions that other Users request. Agents can be delegated to perform actions from any web-accessible location, enabling business to occur 24/7 and without need for physical offices. This allows massive savings in time and money compared with setting up such processes under the previous web 2.0 model. Implicit load balancing and security allow business application designers to focus on the business logic and not worry about scalability, which is a foregone conclusion of the framework; this lets them engineer better applications that allow the business to be more competitive or dominant in its sector while spending less money and hiring fewer people to do it. The parallel move to "cloud" based resources for file storage, e-commerce and even databases will allow platforms designed like my framework to become extremely flexible and cost efficient, while allowing massive numbers of Users and business functions to be efficiently managed by small numbers of Users.
Thus the applications built on web 3.0 technologies like my framework will be some of the most efficient software ever designed, allowing continued productivity gains to be had by the business community as they come online.

