28 August, 2008

MTBF..so much for that theory...

Turns out my development box had a deeper affliction than the feared power supply fault. I received the new PS, installed it, and the computer whimpered in the same way mentioned in the previous post *sigh*. After a longer session of investigation trying to troubleshoot the actual source, I've come to the conclusion that it is the worst possible component failure next to the crash of an un-backed-up hard drive, namely a mobo meltdown. It's second only to an hdd failure because it can at least be recovered from, but it requires lots of hardware replacement (essentially a rebuild of the computer).....argh! I am now investigating the parts needed from newegg.com to rebuild the box. Amazing how this happens NOW, just when I am days away from deploying my code (now silent on the hdd of this box) to the production servers! Murphy's Law is supreme!

The bright side is that I will get to build the more modern box that I was hoping to build after the site launch, just early. The current machine is built around a single core AMD Athlon XP 1.5 GHz chip, which by today's dual core standards is not even good enough to go into a cheap laptop. I am going to order a dual core proc / mobo and a gig of fast memory to finally bring my development into the future. I can at least say bye bye to the annoying slowdowns I was experiencing with the existing box whenever I tried to simultaneously launch more than 10 applications (which is common on my dev box).

off to newegg for the ordering fun!

27 August, 2008

HDTV without the HDTV

About 5 years ago, I recall noticing the difference in quality between the non cable tv signals I received say 10 years ago and today's digital cable signals. The main difference lies in the fact that digital tv replaces ghost and snow artifacts with digital pixelization. In the early days I noticed a marked difference in quality: even when both signals were at their best, the local cable provider applied enough compression to the signal that various scenes would clearly show the artifacts.

The compression used on the digital signals is mostly mpeg format compression, which uses a discrete cosine based method to compress luminance but mostly chrominance information to reduce the bandwidth requirements of the signal for transmission. However, cosine based compression is subject to quantization errors and artifacts that depend on the size of the quantization kernel chosen for the compression algorithm. For scene data that moves faster than the algorithm can encode the chrominance data, there is a marked pixelization of the image. There is also a noticeable loss when contrast is low between objects displayed on the screen (it shows particularly well on light to dark transitions). Finally, when there is a high level of variation in a particular frame, a fixed compression sampling rate will vary the appearance of the pixelization from frame to frame, making for a horrible effect. If you've watched badly compressed web video on youtube you know exactly what I am referring to.

Now, the cable signals aren't "that" bad, but I was able to see the difference between them and what I was used to seeing with an analog signal or with a pure dvd signal from a set top box, enough to know the cable signal wasn't as good as it could be. I recently upgraded my cable service so that my cable receiver is able to access the "HD" tv signals that many channels are providing alongside their standard definition channels. I have a standard definition flat screen television set, the Toshiba 27AF43, that I purchased 5 years ago mostly for its convenient component inputs and for the perfectly flat screen. It provides a clean, sharp and noise free display for my DVD player (also a Toshiba), and I've used that signal as a reference for just how good the screen is compared to the cable signals it displays when I am watching CNN or the Science channel; the difference is clear.

The experience gave me the indication that the HD channels might provide quality approaching the DVD signal, and sure enough, upon upgrading to the new receiver and tuning to an HD channel I was surprised at how much better the signal was. Gone were the obvious pixelization squares in low contrast transitions, fast moving scenes and high detail scenes. Simply reducing the compression on the digital signal improved it markedly on my standard def. TV. It makes you wonder: as we are being prodded by the electronics companies to purchase new HD tv sets, many of us have existing standard definition screens that aren't being pushed to their limits of resolution because the cable companies have so severely compressed the digital signals they are sending. I have seen an HD screen both on a computer and on an HD monitor, and the difference in quality between 1080i/p and standard def is again obvious, but I wouldn't say the difference is bigger than what I observed when going from normal digital cable on a standard def. monitor to HD digital cable on that same monitor.
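For anyone curious about why heavy compression turns into those blocks, here is a tiny, purely illustrative Java sketch of the quantization step at the heart of it. Real MPEG encoders use per-frequency quantization matrices, motion compensation and entropy coding, none of which is shown here; the coefficient values and the uniform step size are made up just to show how detail below the step size simply disappears.

    // Illustrative only: uniform quantization of an 8x8 block of DCT coefficients.
    // Detail smaller than the step size rounds away, and the block decodes flat.
    public class QuantizationDemo {

        // Quantize then dequantize a block of coefficients with a given step.
        static double[][] quantize(double[][] coeffs, double step) {
            double[][] out = new double[8][8];
            for (int i = 0; i < 8; i++) {
                for (int j = 0; j < 8; j++) {
                    out[i][j] = Math.round(coeffs[i][j] / step) * step;
                }
            }
            return out;
        }

        public static void main(String[] args) {
            double[][] block = new double[8][8];
            block[0][0] = 240.0; // DC term (average brightness of the block)
            block[1][0] = 6.0;   // a small coefficient -- a faint, low-contrast edge

            // A fine step keeps the subtle detail; a coarse step (heavy compression)
            // rounds it to zero and the edge vanishes into a flat square.
            System.out.println("step  2: [1][0] -> " + quantize(block, 2)[1][0]);  // 6.0
            System.out.println("step 32: [1][0] -> " + quantize(block, 32)[1][0]); // 0.0
        }
    }

Do that to every block in a frame, with the step size cranked up to save bandwidth, and the low-contrast transitions are exactly where the squares show up first.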
It seems a few of the cable providers are getting away with providing HD quality that only barely exceeds the resolution capability of a standard definition monitor!

Just an observation I thought was worth sharing...

video compression

MTBF catches up to development.

Intent on finishing the implementation of the new permission token feature in the framework code, I was fully engaged yesterday in getting it done, hopefully by the middle of this week. That is, until a strange occurrence yesterday. As I was at the computer the mouse became unresponsive. I have a periodically flaky KVM switch that sometimes does this, so I switched to one of my other development PCs to see if the KVM was frozen; it was not. I switched back to the main development server and, after a few silent expletives, hit the power button as I had no choice. The computer immediately began a reboot but at the point of reaching the BIOS screen simply went dead. I was curious but already had a feeling that my computer's power supply was in trouble.

I rebooted again after first unplugging the power cable from the pc for a few seconds. The machine indicated it was getting power by blinking the power and hdd lights, but the hdd light stayed solid and the screen didn't receive a signal...uh oh, I thought, as I reached around to turn off the machine again. This time I noticed the faint smell of burnt circuit board that is a telltale sign of failing or failed components. After another 10 minutes waiting with the power cable unplugged I came back, plugged it in and pressed power. The computer made a dull beeping sound and the power led blinked for a second and then went quiet. I did this 4 more times with the same result; it appears I experienced the slow death of my power supply.

The main development server was rebuilt (mobo + memory + proc) around 2002. A year later the older PS that hadn't been replaced during the 2002 rebuild gave up the ghost, just about at the time it reached the standard MTBF (mean time between failures) for power supplies of 5 years. Noticing the behavior, I conjectured the power supply had again reached MTBF and had finally given in; when I upgrade parts in my machines I tend to put a date on them to record the age of the component, and this power supply was marked with 10-15-2003 as the installation date, which is just under 5 years ago. I turned off the pc and switched to my other machine to quickly place an order for a new power supply from Newegg.com. Hopefully the diagnosis is on the mark and after a fast 15 minute replacement I'll be back up and ready to finish the implementation. Murphy's law strikes again, but it's not so big a deal; I'll enjoy a 3 day mini-cation while I wait for delivery of the PS. ;)

21 August, 2008

one last feature before push to production...

In the previous post I detailed the roller coaster ride of implementing e-commerce enablement for the consumer site that I'll be launching in a few weeks. The service plan options that I provide allow users to manage their own private conference room in the basic "free" configuration; additional plans that require payments allow a user to manage or create multiple rooms. The problem I ran into revolved around how to give users the ability to create new rooms in a limited fashion. Originally I thought that the uniqueness of the problem constrained the generality of the solution, so that all I needed to do was upgrade the User class with a new "create room token", which was simply an integer indicating the number of available requests to create a room that the associated user could invoke. This solution, however, broke the symmetry of the permissions system in that it granted a right outside the ken of the permissions system structure. I was able to implement and test the room tokens in a day of coding but something about the solution's asymmetry just bugged me.

Two days ago, after I'd finished testing the e-commerce integration with Amazon, I decided to come up with a more generalized solution to the problem. I would add into the existing framework API a mechanism for creating permission invocation limits that was completely agnostic of the type of action being permitted. To do this, I realized the optimal solution would be to map the permission_id associated with any valid permission in the system to a user_id, and tie the pair to a count, or invocation limit, which specifies the number of executions remaining for the corresponding permission by the associated user. This solution would require the creation of a new permission_tokens table, each row indicating a token unique to the user and permission. The User objects of my framework are already mapped in a one-to-many relationship with permissions; the new table allows another one-to-many relationship to exist between the User and permission_tokens.

Create the Permission_Token class...

The next step was to create a new class to manage the Permission_Token rows to be added and modified in the database. The class simply contains three parameters, their mutator methods, and a method to output an xml representation of the object. It is used to manipulate ordered sets of the tokens for extraction from the db or insertion into it, and it also makes managing the rows in the User class facile.
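In trimmed-down form the class looks something like the sketch below; the field names (userId, permissionId, count) and the xml format are my shorthand for this post, not the verbatim framework code.

    // Sketch of the token class: one row of permission_tokens in object form.
    public class PermissionToken {

        private int userId;        // the User this token belongs to
        private int permissionId;  // the Permission whose use is being limited
        private int count;         // remaining invocations for that permission

        public PermissionToken(int userId, int permissionId, int count) {
            this.userId = userId;
            this.permissionId = permissionId;
            this.count = count;
        }

        public int getUserId()          { return userId; }
        public int getPermissionId()    { return permissionId; }
        public int getCount()           { return count; }
        public void setCount(int count) { this.count = count; }

        // Simple xml representation, handy for moving ordered sets of tokens
        // between the db layer and the User object.
        public String toXml() {
            return "<permission_token user_id=\"" + userId
                 + "\" permission_id=\"" + permissionId
                 + "\" count=\"" + count + "\"/>";
        }
    }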



Update the class methods...

After creating the new permission_tokens table and the PermissionToken class, I had to update the User class to provide the methods that allow mutation of the permission tokens for a User. I'd need a collection object to store the set of tokens; for that I used a private ArrayList. Then I provided methods for adding, removing, getting, and updating PermissionToken items in the ArrayList. These methods perform various tests on the tokens depending on their action; for example, the "add" method tests for the existence of a token by ensuring the permission_id doesn't already exist in the ArrayList. This adds computational expense but makes it impossible to add the same token to the array twice. Also, the remove and update methods required unique implementations to ensure synchronized mutation of the array under concurrent modifications. After adding all the methods and testing in main() I was ready to move on to the next most important step: implicit integration of the permission_token methods into the User class.
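For the record, a trimmed-down sketch of those collection methods looks something like this; the method names and the synchronized blocks are my shorthand for the post, not the exact framework code, but the uniqueness test on add and the guarded mutation are the important bits.

    import java.util.ArrayList;
    import java.util.Iterator;

    // Sketch of the token-management additions to the User class.
    public class User {

        private final ArrayList<PermissionToken> permissionTokens =
            new ArrayList<PermissionToken>();

        // Add a token only if no token for the same permission is present,
        // so a permission can never be limited by two competing counts.
        public boolean addPermissionToken(PermissionToken token) {
            synchronized (permissionTokens) {
                for (PermissionToken t : permissionTokens) {
                    if (t.getPermissionId() == token.getPermissionId()) {
                        return false; // duplicate limit -- reject it
                    }
                }
                return permissionTokens.add(token);
            }
        }

        // Fetch the token for a permission; null means the permission is unlimited.
        public PermissionToken getPermissionToken(int permissionId) {
            synchronized (permissionTokens) {
                for (PermissionToken t : permissionTokens) {
                    if (t.getPermissionId() == permissionId) {
                        return t;
                    }
                }
                return null;
            }
        }

        // Remove the token for a permission, returning true if one was removed.
        public boolean removePermissionToken(int permissionId) {
            synchronized (permissionTokens) {
                Iterator<PermissionToken> it = permissionTokens.iterator();
                while (it.hasNext()) {
                    if (it.next().getPermissionId() == permissionId) {
                        it.remove();
                        return true;
                    }
                }
                return false;
            }
        }

        // Update the remaining count on an existing token.
        public boolean updatePermissionTokenCount(int permissionId, int newCount) {
            synchronized (permissionTokens) {
                PermissionToken t = getPermissionToken(permissionId);
                if (t == null) {
                    return false;
                }
                t.setCount(newCount);
                return true;
            }
        }
    }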

One of the greatest advantages of object oriented design is the ability to use encapsulation to hide underlying changes to a class implementation that could significantly change the inner workings of a class. The proper selection of change points inside the methods of a class allows class programmers to make deep foundational changes to code without affecting any of the client code that uses those classes. In my framework the User class is the nexus through which the security provided by the permissions system is expressed. Users are given permissions, and client code queries Users for their ability to perform actions associated with that client code. For example, to access an administrative interface a User must be able to view the interface, which requires a "view" permission for system objects. The client code in the administrative interface checks every User request for this "view" permission; those that have it are allowed access to the resource, those that don't are denied. This key based system is preferred over group based permissions (like those used in Windows) precisely because of the granularity it provides. However, once a permission is granted to a User, the user (barring Flag restrictions on their account access) has unlimited rights to invoke it; thus any user with "view" rights to the administrative interface can view that interface as long as they are logged in and not Flag limited.

The permission_tokens allow us to apply invocation limits on the execution of any permission, but to prevent their inclusion from causing changes to client code, the check must be implemented inside the User class methods that are interrogated to determine if a User has a given permission: the "hasPermission" variants. There are several types. The first type simply iterates over the collection of Permission objects for a User until a requested target permission type or type id is encountered; if it is found the User has the permission, if not they do not. The calling code then performs an action based on the response. The next type are the "hasImplicitPermission" methods.
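Before getting to those, here's a bare-bones sketch of that first variant, as it would sit inside the User class from the sketch above (the Permission accessor names are my shorthand, and 'permissions' is the User's existing Permission collection):

    // Simplest "hasPermission" variant: walk the User's Permission collection
    // and report whether a matching type and right is found.
    public boolean hasPermission(String entityType, int rightId) {
        for (Permission p : permissions) {
            if (p.getEntityType().equals(entityType) && p.getRightId() == rightId) {
                return true;
            }
        }
        return false;
    }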

A little background on the Permissions API is required here. The Permissions all map to a Permissions db table with an int valued primary key. Each Permission has an "entity_type", a "right_id" and an "entity_type_id" field: "entity_type" indicates the fully qualified class name of the class type associated with the permission, and "entity_type_id" indicates the id in the db table for that class type. All entity classes have a corresponding table and corresponding permissions generated when the entities are added to the system. If we specify a positive value for the "entity_type_id" we then have a permission for a particular instance of the given type for the indicated "right_id". The "right_id" maps to a table that simply lists the actions that can be performed on any instance of any type:

View
Create
Update
Delete
Search
Import
Export
Publish

Different types perform different actions for each right based on what the type is made for, but a glaring problem presents itself. If each instance is tied to a permission, how do we provide the permission to perform a given "right" over ALL instances of a given type if we don't yet know what all the instance ids are? We need a permission that grants global application of a right over all instances. The obvious solution is to use a zero valued "entity_type_id", which indicates that the permission covers all instances of the given type for the given right. With a global permission for every right of every type we now have an extremely granular permissions granting system. Some users will be given permissions only to perform rights on specific instances of a type and no other; other Users who may have management duties can be given the general zero permission for a right, allowing them to perform it on all instances "implicitly". Thus comes the relevance of the "hasImplicitPermission" method types of the User class.

The hasImplicitPermission methods perform a test to determine if a User has permission over a given instance even if they don't have that instance's permission. Thus Users with the global permission for a queried entity type will get "true" back from this method no matter which instance id is passed as the entity type id. Using implicit permissions a User can be granted access to a large set of instances without actually having the instance permissions. The general permissions also exist for a special class of system managed rights; for example, to update configuration settings on a node (a server running a copy of the framework software) a User must have the Config "Update" permission, which is a global permission type. Also, since all Entities derive from a base class, it is possible to grant "view" permissions across all entities of any type by granting the permission for the base class...thus with as few as 16 permission objects a User can be granted "God" powers over the entire framework, from creating and modifying configuration settings across nodes, to creating and publishing stories or thread posts. Collections of permissions known as permission sets can be created to grant desired permissions to Users expeditiously. All that said, the introduction of permission_tokens adds yet another dimension of granularity, allowing any permissions that a User does have to carry different invocation limits.
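A stripped-down sketch of the implicit check, again with my shorthand accessor names, would read something like this; the only new wrinkle over the plain variant is that a zero entity_type_id acts as the global grant:

    // "hasImplicitPermission": an exact instance match passes, and so does the
    // zero-valued global permission for the same type and right.
    public boolean hasImplicitPermission(String entityType, int rightId, int entityTypeId) {
        for (Permission p : permissions) {
            if (p.getEntityType().equals(entityType) && p.getRightId() == rightId) {
                if (p.getEntityTypeId() == entityTypeId || p.getEntityTypeId() == 0) {
                    return true;
                }
            }
        }
        return false;
    }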

So, to prevent the necessity of client code changes, the best integration method involves modifying the "has...Permission" methods to internally factor in the existence of Permission_token items corresponding to the Permissions that a User currently possesses. If the User is being interrogated for a specific right, the permission that matches that right would normally cause the method to return "true", but if the User has a Permission_Token for that permission, the token's count must be determined to be positive (non zero) before "true" is returned; otherwise "false" is returned. This way the existing client code (which as of this late date extends into two applications built on the framework since the last major change to the permissions system over 2 years ago) will be untouched. This is the benefit of a deep analysis of the best place to include the required functionality: you make fewer changes, but those changes are more powerful. Designs that concentrate changes in this way tend to be the most efficient ones in my experience.

In any event, including the test for the count of permission_tokens for an invoked permission takes advantage of the short circuit behavior of Boolean operations: if a User doesn't have the requested permission, the first part of the check fails out of the test and returns "false" without having to iterate through the permission_tokens at all. Also, since only limited permissions have tokens, the default behavior is unlimited invocations for a Permission; this reduces the number of tokens that need to be loaded with a User, ensuring that the cost of iterating the set stays low for the ArrayList collection being used. Executing this code only on demand ensures that memory utilization under concurrent actions by multiple users on the system ramps slowly rather than in a spiky fashion.
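Put together, the token-aware version of the check reads roughly like the following; findPermission here is a hypothetical stand-in for the existing permission lookup, and the whole thing is a sketch of the idea rather than the framework code itself:

    // Token-aware "hasPermission": short-circuit on the permission lookup first,
    // and only consult the (usually small) token list when the permission is held.
    public boolean hasPermission(String entityType, int rightId) {
        Permission p = findPermission(entityType, rightId); // null if not granted
        if (p == null) {
            return false; // short-circuits: token list is never touched
        }
        PermissionToken token = getPermissionToken(p.getId());
        // No token means unlimited invocations (the default);
        // with a token, the remaining count must still be positive.
        return token == null || token.getCount() > 0;
    }

Client code keeps calling the same method it always has; the limit enforcement hides entirely behind it.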

I am currently working on the db handler for the permission_tokens, which allows mutated tokens to be retrieved from or persisted back to the permission_tokens table. Implementation to replace the previous "create room token" will follow...
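To give a sense of the shape that handler will take, something along these lines; the table and column names (permission_tokens, user_id, permission_id, count) follow the description above, but the real db layer code will differ:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;

    // Sketch of a db handler for permission tokens: load a User's tokens and
    // persist a mutated count back after an invocation decrements it.
    public class PermissionTokenHandler {

        private final Connection conn;

        public PermissionTokenHandler(Connection conn) {
            this.conn = conn;
        }

        // Load every token for a User so they can be attached to the User object.
        public ArrayList<PermissionToken> loadTokens(int userId) throws Exception {
            ArrayList<PermissionToken> tokens = new ArrayList<PermissionToken>();
            PreparedStatement ps = conn.prepareStatement(
                "SELECT permission_id, count FROM permission_tokens WHERE user_id = ?");
            ps.setInt(1, userId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                tokens.add(new PermissionToken(userId, rs.getInt("permission_id"),
                                               rs.getInt("count")));
            }
            rs.close();
            ps.close();
            return tokens;
        }

        // Persist a mutated token back to the table.
        public void saveToken(PermissionToken token) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                "UPDATE permission_tokens SET count = ? WHERE user_id = ? AND permission_id = ?");
            ps.setInt(1, token.getCount());
            ps.setInt(2, token.getUserId());
            ps.setInt(3, token.getPermissionId());
            ps.executeUpdate();
            ps.close();
        }
    }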

Amazon integrated....but not without a roller coaster ride.

Launching a start up is an amazing experience; in the last few months I've written code in many different areas to facilitate the successful and smooth launch to come. As mentioned in previous posts, the last step of getting my commercial site up and running involved coding the consumer web site and providing users the ability to browse and select or upgrade to any of the service plan options that my site makes available.

In this post, I discussed some of the ways that I was able to efficiently handle problems that consumer sites enabling e-commerce run into. Basically, a trade off must be made: does the site manage every aspect of the service plans that users purchase by keeping subscriptions itself? If so, where is the subscription information managed? In a proprietary method on the consumer site, or is that function offloaded (at cost) to a third party payment processor? In the ideal case I would design and build my own payment processing API, get a merchant account for handling credit cards and process them directly, but time and cost constraints preclude such an action. A second option is to perform the service plan set up and offload the payment processing to a third party company like Paypal or Amazon or another payment service.

This is what I decided to do, but I soon ran into the Gordian knot that was Paypal's payment services API. I like a good puzzle, but nothing pisses me off more than a puzzle that makes no sense, that changes the rules as you go along, or that is just needlessly complex. The payment services API from paypal has an amazingly complex procedure that requires setting up multiple accounts (paypal business account, client account, sandbox business account, sandbox client account) and then goes further to require that various accounts be actively logged in on the testing machine while testing....this would not be so bad if the accounts didn't have such short timeouts. I'd be in the midst of testing a change only to have to re-login, and I'd get browser windows confused as I was logging into the wrong accounts....I spent more time bookkeeping than testing the code I was writing, and this says nothing of the appalling documentation that paypal has.

Rather than providing detailed use cases and code examples, the documentation provides page after page of numbered instructions, which again is not so bad, but code samples are always better. I got to the point where a user on my site would click a service plan option and be taken to the paypal login screen; the sandbox account would then be logged into (while the sandbox business account is logged in as well..they never explain why this must be the case), and at this point the user should be prompted to enter credit card information, review the selected plan and then submit, but I was never able to get past a single cryptic message. Posts to the Paypal developer forums went unheeded by the paypal developers, who seemed to be actively answering the questions of others. After 3 days of waiting for a reply I gave up on Paypal.

Enter Amazon


I had to take a few days off after wrestling with Paypal's draconian system and in that time looked into my other options. Amazon, I recalled, had recently begun offering payment services through their AWS (Amazon Web Services) program. I investigated their offerings and noticed that they did have "pay now" services, but unlike Paypal they didn't have a "subscription" service. Luckily, as explained in this post, my framework provides a more efficient and reliable way for me to simulate subscriptions. I could simply manage each user's account access and duration using my Flags, and use only the "pay now" buttons to allow users to switch between plans or extend service for existing plans at any time.

This way Users can build up credit for their selected service plan in the form of usage days and, when desired, extend or upgrade their plans. This provides a versatility that neither Paypal nor Amazon provided. As I continued to look into implementing Amazon I was pleased to see that their documentation was just slightly better managed than Paypal's, but there were still glaring ambiguities in some of the writings. As programmers, you'd think that the importance of syntax and the precise use of words would be something that writers of technical documentation would pay attention to, but not all technical writers are technically minded. Some of the instructions omitted critical steps that, when absent, would cause transactions to fail; others were plain wrong; still others would use "should" when they really meant "must". As I rode the coaster of implementing the Amazon payment services I corrected several document mistakes by pointing them out in threads on the developer central web site.
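As for the usage-day credit mentioned above, the Flag handling itself is covered in the earlier post, but the arithmetic behind extending a plan amounts to something like this little sketch; the class and method names are invented purely for illustration, and only the idea of adding purchased days onto the account's current expiration matters:

    import java.util.Calendar;
    import java.util.Date;

    // Illustrative sketch: a completed "pay now" transaction buys some number
    // of usage days, which are added onto whatever expiration the account has.
    public class PlanExtender {

        // Extend the current expiration by the purchased days; if the plan has
        // already lapsed, extend from now instead of reviving stale credit.
        public static Date extend(Date currentExpiration, int purchasedDays) {
            Calendar cal = Calendar.getInstance();
            Date now = new Date();
            cal.setTime(currentExpiration.after(now) ? currentExpiration : now);
            cal.add(Calendar.DAY_OF_YEAR, purchasedDays);
            return cal.getTime();
        }
    }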

I had already completed the required code on my end of the interaction. Basically, when a user selects a plan the information is sent securely to Amazon, which displays the plan options, allows the user to select their credit card options (or store one with Amazon for later use), submits the request and then sends the user back to my site to a designated return URL. The parameters of the url indicate the status of the transaction and the payment information for the plan selected by the user. All this worked fine when the interaction was NOT signed. Amazon allows transactions to be signed with an encryption signature value generated from the unique attributes sent with a pay request: my site generates the signature and adds it to the outgoing request, and Amazon verifies the request came from me by performing the same signature operation on their end with a shared secret key.

This guarantees that the request came from my servers and was not tampered with after the signature was generated. On Amazon's end, after the user transaction is completed, Amazon signs the return parameters and adds a signature to that effect for my servers to verify, guaranteeing the response came from Amazon. This way the transaction is authenticated at both ends and guaranteed not to have a "man in the middle" changing values, either to upgrade services or lower prices without authorization. Unfortunately, the signature verification was not working. I posted several threads asking for help on the Amazon site and, unlike with Paypal, received answers from Amazon programmers that helped me home in on the proper solution.
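For anyone curious, the general shape of this kind of request signing looks like the sketch below. This is not Amazon's exact specification (their docs define the precise canonical string, parameter encoding and HMAC variant), so treat the details as assumptions; the point is that both sides build the same string from the parameters and compare HMACs computed with the shared secret.

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Sketch of HMAC-based request signing for illustration only.
    public class RequestSigner {

        // Sort the parameters, concatenate them into one canonical string, and
        // compute an HMAC over it with the shared secret.
        public static String sign(Map<String, String> params, String secretKey) throws Exception {
            StringBuilder toSign = new StringBuilder();
            for (Map.Entry<String, String> e : new TreeMap<String, String>(params).entrySet()) {
                toSign.append(e.getKey()).append(e.getValue());
            }
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            byte[] raw = mac.doFinal(toSign.toString().getBytes(StandardCharsets.UTF_8));
            // Base64 keeps the raw bytes safe to pass around as a url parameter.
            return Base64.getEncoder().encodeToString(raw);
        }

        // Verification is just re-signing and comparing; a mismatch means the
        // parameters or the signature were altered in transit.
        public static boolean verify(Map<String, String> params, String signature,
                                     String secretKey) throws Exception {
            return sign(params, secretKey).equals(signature);
        }
    }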

The proper solution involved fixing several errors in the existing Amazon test code (at least they had test code) and adding a convenient method to the test code that does the signature verification (Amazon only provided numbered instructions for this previously; now they have actual working code they can provide to implementers). I finally got the transactions working perfectly, securely, and with signed parameters. I must mention that, like Paypal, Amazon requires the use of a "sandbox" account for the business (a fake business) and a sandbox account for a fake client. These are used to simulate payments when testing the code, but unlike Paypal they didn't require that I be logged in to them on the computer while testing the code from my commercial site (which was bizarre of Paypal); this allowed me to proceed with testing without getting frustrated at all the logins I had to perform.

Now that the e-commerce enablement is complete I am ready to roll the code out to the production servers....well, almost...there is one feature I wanted to sneak in to the framework...I'll talk about that more in the next post... ;)

20 August, 2008

The bigger the paradigm shift the harder it is to predict.

In a recent article at Silicon Alley Insider, a Wall Street analyst came to the conclusion that Verizon's building out of a fiber network is a bad idea. I wrote a quick post listing several reasons why the analysis is, at best, way off the mark.

I extract it here but link to the post below.

The original article at SAI.


The short term analysis (what I call idiot analysis) of a stock always makes me chuckle. If a 15 year wait for return on investment is too long for you, Moffet, that is your business, but some investors actually buy on those horizons.

I had to blink at the screen when I read the ending of the article indicating that Moffet thinks the best thing to do now is nothing??? Huh??? Like the horse cart manufacturers did as the car came on the scene, eh? Where are they now? No, only someone on LSD would do nothing; by acting NOW Verizon does several things.

a) They get first mover advantage in marketing the converged service, which will be cheaper for them and for us, more powerful in terms of bandwidth provisioning, and more reliable thanks to optical fiber that doesn't need replacement nearly as often as copper.

b) They get to compete with a superior service against the permanently crippled (thanks to their copper cables) cable providers. Even if they take an early hit now, like the proverbial tortoise and hare, the technology advantage of fiber will provide the competitive pressure that really eats into the cable companies' market.

c) They get an early start on building or providing network access to products in the home that currently are not networked. The companies that provide the interfaces to these devices will be the ones people go to and recommend to their friends; that can amount to a huge advantage in market dominance as far as mind share is concerned.

The type of analysis that Moffet did suffers from one major flaw: his ignorance of all the ways that Verizon can profit from the new network that are not even envisioned by us today. It is like someone mentioned earlier: what if Edison and Westinghouse had decided it was too expensive to build out the wire lines? They had no idea that devices like microwave ovens, radar sets, FM radios, CD players, TVs and computers would ever be connected to them...all they cared about was light and maybe Morse code...yet the networks provided the ability for those products to spread to every wired home. Moffet needs to realize he's just as ignorant (or more so) of the coming applications that FTTP (fiber to the premises) will enable as Edison was about the usefulness of wired lines.

19 August, 2008

DAD runs wild.


As a self trained illustrator and lover of art and art history, I still keep tabs on the works of animators, designers and artists immersed in the digital media available to us these days. On a recent surfscapade of the net I discovered the following animation.

DAD at work

Needless to say, the crude artwork works perfectly with the animation's frenetic style and makes for a hilarious little clip. I was so impressed by the title character (which just happens to match an acronym of a very important program in my framework) that I created an illustration of DAD that I plan to put on a T-shirt. I also created versions of DAD with the suitcase and hat. Of course only those who have seen the video will recognize the character, sort of an inside joke for the people that enjoy this type of work. If you are interested in getting a copy of the graphic file (.psd format) to print your own shirt, just send me an email and I'll send it over. I am using the popular vista prints site for the shirts at the moment.

Note: The original author of the animation is sakupen, so make sure to head over to his site and check out other aspects of his work. You can buy official gear from the site as well.

This animation was the second one made; the first, called DAD's home, was also excellent. Check it out when you get a chance.

DAD's home


enjoy!

14 August, 2008

Riding the coaster....

It has been a while since I posted anything. I've had a few personal problems that are getting in the way of my normal output, and I am very deep into implementing the payment solutions for the site launch (still).

I switched to Amazon payment services after Paypal turned out to be a nightmare; as it stands, the Amazon integration was only a fraction as formidable and I am almost done with the testing. I'll have more to say about the differences between Paypal and Amazon in a subsequent post.

Until then, enjoy the following:

http://www.newgrounds.com/portal/viewer.php?id=386773&key=BdVqZyXzBtOzdxbStiN2YyNzMxOGY4QjE4MTIrNV9xMGI5ODE7QjE0QlYxOzJWNTJiMDI0ZjArVnFfOTYxbTk5NjUyMDg4OQ%3D%3D

A modern classic.