30 April, 2008

give us your email please...

As an IT professional, my inbox is constantly inundated with offers, white papers and seminar invitations to various sponsored events, all wishing to sell one thing or another under the guise of doing my business a favor. I usually skim and delete the messages that aren't relevant to my business focus or that I otherwise don't find interesting. Today I received a ZiffDavis letter filled with sponsored white papers. Now, I've signed up for ZiffDavis events before, but I can almost swear that my account information goes kaput every few months. I was about to delete the message but then noticed a white paper that was relevant to my business and grudgingly clicked the disingenuously titled "download now" button.

As expected, that action took me to another page where I was instructed to enter my email address. Okay, no problem; I have several email addresses set aside for the junk you eventually receive when you sign up for these papers, and I used one. Now, you would expect the next step to be the download starting, but you'd be wrong: another page presented itself, this one filled with empty form fields asking me to provide incredible details about myself that have absolutely NOTHING to do with my downloading the paper. I wonder why companies feel the need to be so sneaky in trying to get my information? I've encountered this type of activity from various forms and sites for over 10 years now, and every time I get pissed off and usually terminate the browser window on the spot. They miss out on a prospective customer or viewer of their service because they are insulting my intelligence, and I won't accept that from corporations.

I've noticed that this is a common practice online. For example, many sites state that you only have to provide your email address, and then after you do that, they send you a page or two more of account details to fill in before you are given a free account. To me it is a sign of a bad business relationship and not something I would ever do to a prospective customer. Say what you mean and mean what you say; that will earn you the loyalty of visitors who know you aren't trying to trick them into giving up information.

In my consumer site design currently under way, all that is needed to create a new account is an email address and confirmation via live verification using a Turing image; everything else can be provided at the user's leisure AFTER they have an active account. In my site the account is activated by clicking a link in the email sent to the provided address, a dead simple process, as it should be. Most non-techies using a computer are fearful of clicking the wrong icon and crashing the machine; they are far less sophisticated on average than most web designers, so making things as easy for them as possible is imperative to getting them to create accounts and then spread the news to their friends and colleagues about just how great your service is! So, moral of the story: in your designs, don't insult the intelligence of your users, and make sign-up as minimal as possible.
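To make that concrete, here is a rough, hedged Java sketch of the minimal flow I'm describing: collect only an email address plus a Turing-image answer, then activate via the emailed link. Every name in it (SignupService, CaptchaStore, Mailer, AccountStore) is a hypothetical placeholder for illustration, not my platform's actual code.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: minimal sign-up needs only an email address,
// a Turing-image (CAPTCHA) answer, and a clickable activation token.
public class SignupService {

    private final Map<String, String> pendingActivations = new ConcurrentHashMap<String, String>();

    // Step 1: the only required inputs are the email and the CAPTCHA answer.
    public boolean register(String email, String captchaId, String captchaAnswer) {
        if (!CaptchaStore.matches(captchaId, captchaAnswer)) {
            return false; // failed the Turing test, no account created
        }
        String token = UUID.randomUUID().toString();
        pendingActivations.put(token, email);
        Mailer.send(email, "Activate your account",
                "Click to activate: https://example.com/activate?token=" + token);
        return true; // everything else is collected later, after activation
    }

    // Step 2: clicking the emailed link activates the account.
    public boolean activate(String token) {
        String email = pendingActivations.remove(token);
        if (email == null) {
            return false; // unknown or already-used token
        }
        AccountStore.createActiveAccount(email);
        return true;
    }
}

// Stub collaborators so the sketch compiles; real implementations would differ.
class CaptchaStore {
    static boolean matches(String id, String answer) { return answer != null && !answer.isEmpty(); }
}
class Mailer {
    static void send(String to, String subject, String body) { System.out.println("mail to " + to + ": " + body); }
}
class AccountStore {
    static void createActiveAccount(String email) { System.out.println("account active for " + email); }
}

Profile details, plan selection and the rest can all be asked for later, once the user already has a working account.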

28 April, 2008

making it socially sticky

One of the key characteristics of the success of recent web 2.0 websites has been the ability to keep customers coming back for more by leveraging people's desire to engage socially with their friends and with new people all over the world. As I finish up the consumer site of my start-up, I've been thinking of ways to enhance the social stickiness of my web site beyond the (imo) innovative features that already allow that. In thinking about this problem, I've isolated a few points that I see as key goals for any start-up wishing to employ social stickiness in its business model:

  1. Let your users control who they see and who sees them. This allows the user to limit access and exert a level of privacy on your system. People hate (at least I do) being immediately available to other users on a social network if they cannot control that contact in some way. This also lets users control their exposure to spam! (See the sketch after this list.)
  2. Allow users to notify one another of their actions. Facebook Beacon and MySpace action feeds do this; Twitter does this implicitly on the site and through RSS feeds. This keeps the user in the social loop of their contacts and, more importantly, brings them back to the site to provide more eyeballs and keep ad impressions high.
  3. In a world of RSS feeds, email is still the most ubiquitous way to notify users of events on the site, so letting users control email notification, possibly in addition to feeds for appropriate content or features, extends that stickiness to devices that otherwise can't access RSS feeds, like many wireless devices.
  4. Community. This is the general goal, easily stated but difficult to achieve: the social interactions have to foster community between interacting users, or at least give the impression that the users are part of a community. A portal where free interaction between users is encouraged allows this, giving users the ability to make new contacts and more incentive to stick around. Allowing users to follow memes in real time fosters community interactions that keep the community vibrant.
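To make points 1 and 3 a bit more concrete, here is a minimal, hedged Java sketch of how per-user visibility and notification preferences might be modelled. All of the names (SocialSettings, ContactPolicy, Channel, and so on) are hypothetical illustrations, not my platform's actual API.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of points 1 and 3: users decide who can contact them,
// and users decide which channels (email, RSS) carry notifications.
public class SocialSettings {

    public enum ContactPolicy { ANYONE, CONTACTS_ONLY, NOBODY }
    public enum Channel { EMAIL, RSS }

    private ContactPolicy contactPolicy = ContactPolicy.CONTACTS_ONLY;
    private final Set<String> contacts = new HashSet<String>();
    private final Set<Channel> enabledChannels = new HashSet<Channel>();

    // Point 1: the user controls who sees them and who can reach them.
    public boolean canBeContactedBy(String otherUserId) {
        switch (contactPolicy) {
            case ANYONE:        return true;
            case CONTACTS_ONLY: return contacts.contains(otherUserId);
            default:            return false; // NOBODY: also a handy spam shield
        }
    }

    // Point 3: the user controls which channels receive event notifications.
    public boolean shouldNotifyVia(Channel channel) {
        return enabledChannels.contains(channel);
    }

    public void setContactPolicy(ContactPolicy policy) { this.contactPolicy = policy; }
    public void addContact(String userId) { contacts.add(userId); }
    public void enableChannel(Channel channel) { enabledChannels.add(channel); }
}

A notifier would then consult shouldNotifyVia(Channel.EMAIL) before sending mail and fall back to the user's RSS feed otherwise, keeping the user in the loop on whatever device they actually carry.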

22 April, 2008

more site design popcorn...

It has been a long time since I sat down and designed the UI for a consumer-facing web site. For the last few years I'd been working on the UI elements for the management interfaces of the distributed framework that powers the consumer collaboration site. Now that I am neck deep in the task of writing feature lists, designing comparison tables and writing description pages for the various pages that matter on a consumer site, I remember just how boring it can be! Tons of boring popcorn: tons of basic HTML, much of it repetitive save for a few minor changes which must be made manually (especially when doing comparison tables). Argh!

Good to know that I am almost done, and I was able to make several choices in the design of the site that might be instructive to those working on, or soon to be working on, a similar project. First, a short description of the page; then I'll go over the choices I made to make managing the content easier later as the need arises. The site presents the simple options of starting a new account with the service, participating as a guest without starting an account, logging in with an existing account, selecting a preferred language to read the site in, and asking a live agent about the service when such an agent is available to converse. Additionally, the site presents a standard list of links for learning about the services being offered; my list is: "home, features, for business, compare, serviceplans, tos, privacy, about us". Pretty basic. A banner section at the top of the page displays the logo and slogan, and the content for the selected link is displayed below the banner. So how to proceed?

I decided straight away to cut the page into sections that render dynamic content and static content. The main page is a dynamic JSP template that pulls in dynamic templates for the links list, the sidebar (which displays the new account, existing account login and guest service options), the banner logo, and one variable call (a static resource for some pages, dynamic for others) to the content corresponding to the displayed link. The reason these are static/dynamic calls to separate files has to do with two things. First is the language options: I want every native speaker to think the entire site was originally written in their language, so I am translating even the images to the selected language option. The content of the selected page will of course be translated as well; the simplest way is to pass a language parameter on the URL and dynamically include the static content associated with that language. The second reason is that some of the pages, "serviceplans" for example, vary their display depending on whether a user is logged in, in that case showing a short summary of the logged-in user's plan and account status as well as indicating their current plan and providing links to change their plan option.
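As a rough illustration of that structure, here is a hedged Java sketch of how a main page might resolve the language parameter and include the matching per-language fragments. The parameter names and the /fragments/ directory layout are illustrative assumptions, not my actual file layout.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: the main page includes per-language fragments for the
// banner, the links list, the sidebar and the content of the selected link.
public class MainPageServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Language and selected link arrive as URL parameters, with defaults.
        String lang = param(request, "lang", "en");
        String page = param(request, "page", "home");

        // Each fragment lives in a per-language directory, so adding a new
        // language means adding a directory of translated fragments, nothing more.
        include(request, response, "/fragments/" + lang + "/banner.jsp");
        include(request, response, "/fragments/" + lang + "/links.jsp");
        include(request, response, "/fragments/" + lang + "/sidebar.jsp");
        include(request, response, "/fragments/" + lang + "/content/" + page + ".jsp");
    }

    private static String param(HttpServletRequest request, String name, String fallback) {
        String value = request.getParameter(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    private static void include(HttpServletRequest request, HttpServletResponse response,
                                String path) throws ServletException, IOException {
        request.getRequestDispatcher(path).include(request, response);
    }
}

The same per-language fragments can then be pulled into other pages (the management interface, for example) simply by supplying the user's language parameter, which is exactly the reuse described next.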

The dynamic calls are very similar to the static ones in that there are copies for each language, but they also have processing that must be done to determine whether a user is logged in or the viewer is simply a visitor. There is also logic to determine whether a live agent is available to help with questions a visitor may have via real-time chat, and finally there is logic to show a real-time count of the number of participants in the service while the visitor is viewing the page. Though it took a bit longer to set up this structure, it is now trivial for me to a) add new languages to the site and b) change the content of a particular language for each link page independently. It is a perfect example of how taking the time to properly design something leads to more efficiency in the long term. Now that I am getting out of the briar patch of HTML popcorn, I am starting to see the usefulness of the modular design. For example, I needed to provide the links on a new user's management interface after they've logged in; since the links were generated in a dynamic resource varied by language, I simply included it in the management interface pages, provided the user's language parameter to render the correct template, and voilà. No need to duplicate any code. Object orientation rules the day. I hope to start on the logic to enable service plan options after receiving confirmation from the payment processor by midweek. Getting to the end of the road, slowly but surely; just a bit more popcorn to munch before I get there!

17 April, 2008

Commerce-Enabled Nirvana on the way...

After deciding to bootstrap the launch of my site, I had to get to work putting together the consumer-facing pages that will let internet users quickly figure out our services and get started using them right away. The last week has been days and nights of long, tedious sessions tweaking HTML tables to look just so, rendering graphics and updating style sheets, but I am fast approaching completion of the various pages for the site. One set of code that I am looking forward to is the code enabling automatic e-commerce. When I was working at TheStreet I was always curious about the commerce code used on the web site to register new accounts and confirm provided payment information. Since then I've picked up extensive knowledge of the software design process, far beyond the content management systems and XML feeds that I specialized in during my time with the company. Now I am finally getting into the meat of a commerce system, and like all previously unknown territory we investigate, it isn't magic at all. Having designed my platform to operate in a distributed fashion, using polling and event actions between servers to effect dynamic load redistribution, the idea of sending off user payment requests to an automated payment processor (I am leaning toward PayPal but have to do some more investigating) is very familiar indeed. I am looking forward to finally getting the interactions working end to end, probably the last bit of somewhat interesting code I'll be doing before launch.
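I haven't settled on a processor yet, but the shape of the hand-off is roughly as follows. This is a generic, hedged Java sketch of the pattern (record a pending order, send the user off to pay, act on the processor's asynchronous confirmation); it is not any particular processor's API, and every name in it is a placeholder.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of handing a payment off to an external automated
// processor: create a pending order, redirect the user to pay, then wait
// for the processor's notification before enabling the service plan.
public class PaymentHandoff {

    enum OrderState { PENDING, CONFIRMED, FAILED }

    private final Map<String, OrderState> orders = new ConcurrentHashMap<String, OrderState>();

    // Step 1: record the order locally and build the redirect to the processor.
    public String beginCheckout(String userId, String planId, String processorBaseUrl) {
        String orderId = UUID.randomUUID().toString();
        orders.put(orderId, OrderState.PENDING);
        // The processor is told which order to report back on.
        return processorBaseUrl + "?order=" + orderId + "&plan=" + planId;
    }

    // Step 2: the processor later notifies us (much like an inter-server event),
    // and only then does the user's service plan get enabled.
    public void onProcessorNotification(String orderId, boolean paid) {
        orders.put(orderId, paid ? OrderState.CONFIRMED : OrderState.FAILED);
        if (paid) {
            enablePlan(orderId);
        }
    }

    private void enablePlan(String orderId) {
        System.out.println("plan enabled for order " + orderId);
    }
}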

Stay tuned!

13 April, 2008

Google lights a Campfire...

This week Google launched their App Engine platform, adding their hand to the collection of products and services provided by large and small providers of web frameworks. As a developer of just such a framework, still in stealth, the announcement is not a surprise (if it is to any of the other guys, they may have a few more things to worry about), but with it Google also announced a few "proof of concept" applications built using App Engine. To demonstrate the ability to build apps quickly, three of their developers are said to have worked in their "spare time" to create a free web group chat application called HuddleChat, very much like the service provided by 37signals' Campfire product. I am quite familiar with Campfire, as I came across their website two years ago during my initial research for developing a collaboration API in my framework. The product serves the simple purpose of allowing a team of individuals to come together and converse in a chat room while sharing files collaboratively. It is precisely this simplicity that has made the product vulnerable: the technology needed to create such an app makes it simple to reproduce with other technologies. There is very little innovative distinction in the Campfire product that can prevent others from copying the functionality. Also, as far as I know, the product implementation may not have any patents behind its technology. True enough, Google's HuddleChat is inspiring some controversy in the blogosphere for how closely it mirrors both the look of Campfire and its functionality.

Having designed a collaboration tool that encompasses all the functionality provided by Campfire and HuddleChat but includes a patent-pending set of technologies critical to the scalability of the implementation, I wonder why 37signals felt entitled to complain. It is clear from looking at the interfaces that they are laid out similarly, but I wouldn't call one a clone of the other. Google could have used a different layout (say, like ParaChat, Meebo or Userplane, for example), but the fact remains that they all use the same simple implementation method to make the chat work, regardless of the interface. The machine behind it is what needs to be unique and protected. If it is novel and efficient, it will allow a company to compete with established players and gain traction without fear of strong competition, for a period of time that will hopefully let it thrive. This is what I hope to do with my product, which will soon be coming out of stealth. I look forward to seeing if Google can "throw together" a scalable competitor to my service when I do... just so long as it takes them about a year or two to get it out there. ;)

Bring on the competition I say!

11 April, 2008

Michio Kaku on Time

A former professor of mine from City has been a strong proponent of science and of teaching it to the masses. Along with other popular authors in the field, Professor Kaku uses his keen understanding of the fundamentals of reality and his excellent speaking skills to spread the wonder of technology and science. Currently on several cable channels (I am watching it on the Science Channel), Professor Kaku is tackling the idea of "time": what it is and what it means from a scientific, geological, human and universal perspective.

I recall well the class I took with him, "The Physics of Science Fiction"; he was always willing to discuss and explain concepts, and did so in such a way that you just got it. Check your local listings to see if you can catch the episodes... or they might be on YouTube. ;)

http://www.bbc.co.uk/bbcfour/documentaries/features/time.shtml

08 April, 2008

Another bug in the eternity bin...

I just finished implementing full image branding capabilities for the multi-tenant site management options provided by my platform. One great consequence of the winding-down phase of a piece of software development is the reduction of load that attends the end of the line. If your design is conducive to scalability, both in terms of adding new code and functionality to the platform via the API and in terms of run-time scalability, then as time goes by you should find it easier to do more complex things. I was able to implement the branding logic very easily; it only required adding two new columns to the associated site table, and the rest of the changes were UI related, to enable mutation of the new values.

The branding is a perfect example: being able to allow for distinct, managed and secured sites on the same platform required design decisions that were coded into the core API quite literally several years ago. The main decision was to choose a permission structure that was fine-grained and right-based, NOT group-based. I always saw group-based permission systems without an underlying granular right-based foundation as asking for trouble. If you have only groups, there will come a point when a desired combination of functionality cannot be achieved, since no single group atomically defines that functionality. In a right-based scheme, rights can be defined, associated with permissions and then applied to particular class instances. The right designation is orthogonal to the object instance, and this allows a combinatorial range of possible permission combinations that can exceed presently designed needs.

The ability to exceed presently designed needs is important when designing a permission system because, when designing a class structure, you can't predict how client programmers will use the classes. To ensure that the rights associated with permissions vary freely with the desires of client programmers (whom you don't even know yet), simply allowing an independent association between instances and rights provides the finest gradation possible (as fine as the number of rights). This is where choosing the right set of rights comes in: my platform has rights tied to the actions to be performed on instances of a class. The rights include some of the standard rights or permissions familiar from Unix (read, write, etc.) but applied to object instances, not file system objects. For example, read is analogous to view and write is analogous to edit; the rights are mapped to associated actions via the persistence API. However, because the class objects are managed in a database, additional rights come into play that don't exist in an OS right-based system, such as search, import or export. It turns out that some rights have a larger scope than the needs of a given class. For example, search makes sense for granting permission to scan collections of a given type, which includes all instances of that type, but it makes no sense (currently) within the context of an individual instance of a given type. The ability to define search rights for instances of a type is inherited "for free" and allows the implementation of that functionality (for whatever purpose the client programmer should wish) in the future. Now, you might think that this is wasteful, but since the permissions are right-based, a right only exists when it is associated with permissions that are granted to Users. If a permission is not needed, it is never instantiated, so there is no processing resource spent maintaining it in the db (where each permission is a single row). When a User needs the ability to search a class type, they need only a single permission to allow search over ALL instances of that type. There is an orthogonal relationship between the type, the instance id and the right that allows for a very fine set of permissions, but these are only invoked as needed.
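Here is a minimal, hedged Java sketch of the kind of orthogonal (type, instance id, right) check described above. The names (Right, Permission, PermissionStore, hasRight) are hypothetical placeholders, not my platform's actual API; the point is only the shape: a permission is a single record, and a type-scoped permission covers all instances of that type.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a permission is one record tying a User to a right on
// either a whole class type (instanceId == null) or a single instance.
public class PermissionStore {

    public enum Right { VIEW, EDIT, DELETE, SEARCH, IMPORT, EXPORT }

    public static final class Permission {
        final String userId;
        final String type;       // e.g. "Conversation", "Site"
        final Long instanceId;   // null means the permission covers ALL instances of the type
        final Right right;

        Permission(String userId, String type, Long instanceId, Right right) {
            this.userId = userId; this.type = type; this.instanceId = instanceId; this.right = right;
        }
    }

    private final Set<Permission> granted = new HashSet<Permission>();

    public void grant(String userId, String type, Long instanceId, Right right) {
        granted.add(new Permission(userId, type, instanceId, right));
    }

    // A User holds a right on an instance if they have it for that exact instance
    // or for the whole type; type-scoped rights like SEARCH would normally be
    // granted with instanceId == null.
    public boolean hasRight(String userId, String type, Long instanceId, Right right) {
        for (Permission p : granted) {
            if (!p.userId.equals(userId) || !p.type.equals(type) || p.right != right) {
                continue;
            }
            if (p.instanceId == null || p.instanceId.equals(instanceId)) {
                return true; // only rows that were actually needed ever exist
            }
        }
        return false;
    }
}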

I've found that using a right-based permission system has streamlined many aspects of the design. For example, I can authenticate and authorize Users very efficiently and dynamically collapse the UI resources requested by Users down to the limits of the permissions they possess. The result is a fluid UI that dynamically conforms to the fine-grained permissions of the Users requesting the resources. Users that need expansive powers require only single permissions to cover the required rights over all instances of a type. If they need control over a set of instances of a type, they can be given specific permissions for each instance separately. Management of the instances then forms a virtual limit on the number of permissions granted to a given User, by virtue of the increasing difficulty of managing many instances; in such cases, management of the User's workflow makes it easy to determine whether their permission scope should be increased. So in such a system, each permission is a unique key, and functionality is added by granting a new key. To manage collections of permissions, virtual groups can be created, but they compose permissions, NOT Users (as in Windows); they are therefore called permission sets, and they allow collections of permissions to be managed and granted to or revoked from a User. Permission sets make setting up right profiles trivial: the work of defining a right profile is done once, by adding the desired class or instance permissions to the set, and then the set itself is given to Users, implicitly granting the contained permissions to each User. Allowing permissions to be added just in time has cascaded efficiencies throughout the design. I'll get more into the details of the advantages of this system after the site launch.
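A hedged sketch of the permission-set idea, building on the hypothetical PermissionStore above: a set composes permissions (not Users), and granting the set to a User implicitly grants everything it contains. Again, all names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a permission set composes permissions, NOT Users,
// so a right profile is defined once and then handed to any number of Users.
public class PermissionSet {

    // A single entry in the set: a right on a type, optionally narrowed to one instance.
    public static final class Entry {
        final String type;
        final Long instanceId; // null = all instances of the type
        final PermissionStore.Right right;

        public Entry(String type, Long instanceId, PermissionStore.Right right) {
            this.type = type; this.instanceId = instanceId; this.right = right;
        }
    }

    private final String name;
    private final List<Entry> entries = new ArrayList<Entry>();

    public PermissionSet(String name) { this.name = name; }

    public void add(Entry entry) { entries.add(entry); }

    // Granting the set to a User just grants each contained permission.
    public void grantTo(String userId, PermissionStore store) {
        for (Entry e : entries) {
            store.grant(userId, e.type, e.instanceId, e.right);
        }
    }
}

A "site manager" profile, say, would be built once by adding VIEW and EDIT entries over the Site type, and each new manager then gets the whole profile with a single grantTo call.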

01 April, 2008

avoiding de-spaghettification in client implementations of good OO classes

http://en.wikipedia.org/wiki/Wilkins_Sound

That's about as political as this post will get. nuff said.

In other news, I did not get much done today. I rebuilt the software distribution for a Windows environment, incorporating the fix from last night that I had been working on for the previous 3 days (but thought would only take 3 minutes)... ha!

The fix was in the code for my guest PM dashboard API that I added a few weeks ago. If you have ever been to a site with a "live help" feature, you know what a guest PM dashboard is about. I targeted it as an easily added service for my collaboration API because it takes advantage of the unique architecture of my distributed framework (built-in multi-tenancy, built-in fine-grained permissions, automatic auditing of guest requests and agent engagement history). The solution details are not really relevant, but the bug highlights a problem that can crop up when too much code is put into a single dynamic resource. Recommended OO design principles involve creating classes and methods that encapsulate the specific elements of functionality in the problem space, with methods attached to classes that map to entities in that problem space. This is the art of OO design that comes only with much practice encoding problems into solutions. I talked about it previously in several posts, like this one.
The thing is that some problems map logically to many methods or functions. In the final implementation these classes can end up being very thick, even though the client code that uses the class will invoke only a fraction of the methods at any given time. So it is possible to follow the correct design precepts of OO and still end up with classes that are memory-inefficient at run time, because of the specific use pattern of the class objects in the final client code that implements the class.

This type of problem sneaks up on the coder slowly and quietly. I had it happen not with a class but with the actual client code as implemented in a JSP template. A single template named "emit" is used to manage authentication and interaction in a conversation regardless of the type of conversation. The associated conversation class has a type attribute which defines 4 distinct types so far:

  • Instant message (2 participants max)
  • Conference (n participants)
  • IM mail (1 participant)
  • Guest PM client IM

Each conversation type has associated code in the emit template that is unique to that type. During my initial implementation only two types (IM and conference) were planned, but the other two were added as those functions were deemed necessary. Slowly the emit template ballooned to support the different initiation logic for each type of conversation as well as the actual code for the unique elements. Currently the emit template is a fat 155kb uncompiled and slims down to 119kb in the server-executed compiled form. I can optimize the code to get it under 100kb per conversation instance, but the ideal solution would be to cut up the functions for each conversation type into separate JSPs or servlets. I can keep the common elements (authentication, message management, presence and file display) in one template and then create specific templates for the conversation-unique invocation elements (initializing an IM, conference, IM mail or guest PM), as sketched below. This would allow the 119kb to be cut into smaller blocks that execute only when they are needed, which gives the continuous memory hit for run-time conversation actions under load a lower profile and fewer spikes on the servers. Of course the cost is additional work for me in time, but the benefits are lower average memory load per conversation, which allows more conversations to be active in a given amount of memory, which allows better scalability for a given amount of memory, which is simply cheaper for me to procure. I literally get a more gradual scalability profile under load, which for the paid service options directly determines how much revenue can be pulled from each server. So though I haven't implemented the described client code split, I have it targeted for optimization just before launch.

The point, however, is that it is possible to follow OO principles and, because of the loaded nature of the problem space (many different types of conversations, in this case), still have certain objects become "heavy" on resource utilization under load and negatively impact performance. In such situations it is necessary to break the solution up into logically related pieces that can be invoked just in time to ensure an efficient resource utilization profile. It is actually a good problem to have, in the sense that optimization may yield significant memory efficiency gains for little more than the time of doing the optimization (which amounts to taking scissors to the client code). How do you determine if you'll have this issue? You simply ask whether the number of variants for a given attribute of the class is finite and whether those variants require associated unique code in client implementations. If they do, the best option is to create unique client blocks of dynamic code (JSP or servlet, in this case) for each attribute variant.
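To illustrate the split I'm describing, here is a hedged Java sketch of a dispatcher that keeps the common conversation handling in one resource and pulls in a small type-specific resource just in time. The paths and parameter names are placeholders mirroring the four conversation types above, not my actual file names.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: instead of one fat "emit" resource carrying the code
// for every conversation type, the common work is included once and the
// type-specific block is loaded only for the requested type.
public class ConversationDispatcher extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Common elements shared by every conversation type:
        // authentication, message management, presence and file display.
        request.getRequestDispatcher("/conversation/common.jsp").include(request, response);

        // Only the block for this conversation's type is ever loaded.
        String type = request.getParameter("type"); // "im", "conference", "immail", "guestpm"
        request.getRequestDispatcher(pageFor(type)).include(request, response);
    }

    private static String pageFor(String type) {
        if ("conference".equals(type)) return "/conversation/conference.jsp";
        if ("immail".equals(type))     return "/conversation/immail.jsp";
        if ("guestpm".equals(type))    return "/conversation/guestpm.jsp";
        return "/conversation/im.jsp"; // default: two-participant instant message
    }
}

With this shape, the per-request memory footprint is the common block plus one small type-specific block, rather than the entire compiled template for every conversation.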

So keep an eye out for solutions that overload in such a way that they may lead to inefficient memory or processor utilization at run time under loaded conditions. That way you can do the cutting beforehand and only code the chunks that are distinct to the new function being added. Of course, if you are doing good OO, the only difference between doing it beforehand and after is that when you do it after, you may have to do some de-spaghettification of the combined client code, but you get to realize the resource reduction as a hopefully noticeable increase in scalability on your servers. ;)