29 December, 2008

Is Blu-ray too expensive? Answer: Maybe

A recent article at the Content Agenda site makes the case that Blu-ray optical storage technology is far too expensive given the market dynamics of an existing strong base of DVD-based devices and media and the incremental nature of the multimedia and quality capabilities provided by the new format.

There is some merit to the arguments of the article, but it also misses some key points that require an understanding of how the semiconductor and electronics industries operate internally before any assessment of the pricing of Blu-ray technology can be made.

First is the fact that unlike 1998, when DVD made its big debut and the optical disc supplanted the old magnetic technology in VHS (a disc is cheaper per unit to produce than a VHS tape, offering a compelling impetus to switch over), today Blu-ray has to be built on production lines that are already doling out the more profitable (per unit of production, since authorized supply is lower) DVD players and discs. In the two years or so that Blu-ray has been in the zeitgeist (thanks in large part to the media's idea of a "second generation format war") it has had to be produced next to a nearly comparable product in DVD. In order for manufacturers to ramp up production they would have to outlay cost for the differential Blu-ray components while cutting back on a raging profit center. It should be no surprise that the manufacturers would be reluctant to do this until a blue-tuned laser head costs the same as a red-tuned one, and until the additional DSP horsepower required to perform the relevant encoding and decoding of disc data on the fly is also cost competitive with the now ten-year-old DSPs and lasers produced in volume (and at low cost) for DVDs.

The details of ramping up production of Blu-ray over DVD go even further than switching parts at the electronics manufacturers. Most of the DVD player makers put their players together using parts acquired from third parties. The true controllers of cost, and even of player production rates, are the semiconductor and laser suppliers. If they are making a killing on red lasers and DVD (MPEG-2 decoding) DSPs, they have no real reason to open parallel design lines for Blu-ray lasers and DSPs until the return (in bulk purchases from the player manufacturers) substantiates the switch-over. Retooling semiconductor lines is serious business, which explains why every so often the big fish like SMC and others sink massive fabrication build costs into new facilities. Designing the DSPs in DVD and Blu-ray players requires an initial investment in clean rooms, incredibly high manufacturing tolerances and adequate yield of working chips (only a percentage of the chips produced actually work!), and the semiconductor makers simply will not leave it to the fickle chance that Sony, Toshiba and other player makers will order the new chips in sufficient bulk to cover that outlay. In a climate where the difference in visual quality is marginal and only appeals to a subset of individuals with HD sets (which are themselves produced on relatively low-yield screen technologies like plasma or LCD), the margins to be had are absolutely razor thin. Choosing to ramp Blu-ray slowly rather than quickly, waiting for the consumer market to switch over at its own pace, is the much smarter action than betting on fast adoption and building production of Blu-ray devices that may not sell well enough to recoup costs...especially under the competitive conditions of multiple manufacturers selling the same product but wanting to distinguish it in some way.
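To make the yield point a little more concrete, here is a minimal Python sketch of how the effective cost of a working chip scales with yield. Every number in it (wafer cost, chips per wafer, yield rates) is hypothetical, chosen only to illustrate the gap between a mature DVD-class line and a newer Blu-ray-class line, not to describe any real fab.

```python
# Rough sketch of why chip yield drives per-unit cost (all numbers hypothetical).

def cost_per_good_chip(wafer_cost, chips_per_wafer, yield_rate):
    """Effective cost of each working chip once defective ones are discarded."""
    good_chips = chips_per_wafer * yield_rate
    return wafer_cost / good_chips

# A mature DVD-class DSP line: cheaper wafers, high yield.
mature = cost_per_good_chip(wafer_cost=3000, chips_per_wafer=600, yield_rate=0.90)

# A new Blu-ray-class DSP line: pricier process, yield still ramping.
new_line = cost_per_good_chip(wafer_cost=5000, chips_per_wafer=400, yield_rate=0.55)

print(f"mature line: ${mature:.2f} per good chip")
print(f"new line:    ${new_line:.2f} per good chip")
```

With those made-up numbers the new line's working chips cost roughly four times as much each, which is exactly the kind of gap that makes a chip maker wait for guaranteed bulk orders before retooling.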

The availability of online resources for researching the technologies, the manufacturers and the trends in adoption and pricing makes it critically important that both the makers of the players and the makers of the chips get production volumes precisely right, or they will be courting billions in losses.

So the assumption that the providers are milking the format for margin is a bit of a naive one. The dual-level linking of manufacturing processes that are sensitive to pricing, combined with already small per-product margins, makes it critical to hit the right production volumes, and it is better to go too slow, make money but leave some on the table, than to go too fast and lose billions without making anything.

Finally, just a word on the statement that MP3 is not as high quality as CD. That is correct if you are talking about sampling rates below 44.1 kHz and bit rates under about 128 kbps, but most software MP3 encoders allow the CD-quality 44.1 kHz (or higher) sampling rate and bit rates up to 320 kbps, with some encoders going even higher in free-format mode, enabling quality that rivals what most consumer CD players reproduce. The software nature of these encoders means that very high quality files can be produced, say from original signal sources, mixers and live instruments captured at high quality, and then encoded to very high quality MP3 masters used for distribution to CD or SACD. The Wikipedia article on SACD states that the format is roughly equivalent to a 20-bit/192 kHz PCM recording; some software tools can produce MP3s that approach this quality.

http://www.mp3newswire.net/stories/2000/bestsound.html

http://jthz.com/mp3/ (scroll to section "Fighting the "MP3 does not deliver professional quality audio" myth.")
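For anyone who wants to try the high-bitrate settings mentioned above, here is a minimal sketch that drives the LAME command-line encoder from Python to produce a 320 kbps MP3 from a 44.1 kHz WAV master. It assumes the lame binary is installed and on the PATH, and the file names are just placeholders.

```python
# Minimal sketch: call the LAME encoder from Python for a high-bitrate MP3.
# Assumes the `lame` command-line encoder is installed and on the PATH.
import subprocess

def encode_high_quality(wav_path: str, mp3_path: str) -> None:
    """Encode a 44.1 kHz WAV file to a 320 kbps constant-bitrate MP3."""
    subprocess.run(
        ["lame", "-b", "320", "-q", "0", wav_path, mp3_path],  # -b: bitrate, -q 0: best quality
        check=True,
    )

encode_high_quality("master.wav", "master.mp3")  # placeholder file names
```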

26 December, 2008

GoDaddy ... not exactly convenient service options...

So a few days ago I decided to do a little research and purchase an SSL certificate for use on my production servers. This certificate will allow those servers to encrypt data back and forth with clients, ensuring that users have secure channels of communication while using the site's services. GoDaddy has three options prominently displayed on their website. The first option provides a simple certificate for a single domain name, like yoursite.com. Another option allows you to buy a single certificate for a set of multiple domain names, for example "yoursite.com, yourbiz.com, yourhome.com", and another option allows you to buy a certificate for a wildcard of subdomains under a desired domain, for example "*.yoursite.com", where the "*" can be any subsite. The problem is they don't provide a combined option for multi-domain and wildcard, forcing you to purchase a separate certificate for each, even if you can easily use a single certificate on all of your web servers, as I've designed my software to be able to do. The ideal situation for me would be a combined option, but GoDaddy instead chooses to separate these important services into two price buckets, and I'd have to buy both to get the coverage, which isn't what I'd call customer convenience. I found it interesting that when I brought this up to the tech, his only response was, "well, our prices are lower than the competition's," to which my response is that this is irrelevant to me. If you are going to lower your prices per service, don't neutralize the usefulness of the service options so that they are maximally useful to the customer only when both are purchased together. That doesn't strike me as an honest business move, and it is why GoDaddy gets a call-out for it in this blog. I am stuck with the three-year certificate (as I can't even switch to the "*" option without being charged more money), but I don't know if I'll be renewing it with GoDaddy.com when the time comes.
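For the curious, here is a quick sketch in plain Python of how you can see for yourself which names a server's certificate actually covers: it connects to a host and prints the subjectAltName entries of the certificate it presents. A multi-domain certificate lists several DNS names there, while a wildcard certificate lists an entry like *.yoursite.com. The host name is just a placeholder, and this is generic TLS inspection, not anything specific to GoDaddy's products.

```python
# Quick sketch: inspect which host names a server's certificate covers.
import socket
import ssl

def certificate_names(host: str, port: int = 443):
    """Return the subjectAltName entries of the certificate served by host."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert.get("subjectAltName", ())

# A multi-domain (SAN) certificate lists several ('DNS', ...) entries here;
# a wildcard certificate lists an entry like ('DNS', '*.yoursite.com').
print(certificate_names("www.example.com"))  # placeholder host name
```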

the blog about nothing to blog about

I had nothing to blog about, so I decided to blog about nothing. It is quite annoying to want to blog about a topic that I find interesting but to have nothing unique in my cadre of interests that I can offer anything insightful about today. Christmas day was great; I had more than my fair share of food and slept like a baby as a result. I am looking forward to getting back to work and have been focused on getting some much needed JavaScript optimization done on some pages before the year ends, but the post-Christmas doldrums seem to have me in their grip, despite the fact that it is early in the day. Maybe after I've had my morning coffee I'll feel more enthusiastic about attacking those optimizations. As for now I have nothing left to say, so this blog post about nothing comes to an end. ;)

12 December, 2008

the illusion of a grand audience

Facebook has been rising in the ranks as one of the best known and most used social networks in the world. One of the reasons that Facebook is so successful has to do with how it allows you to think that you are broadcasting your life to the world. It enables you to add contacts to your list and broadcast your ideas, actions and events to them using the various feeds of content and status that are available, but this provides an illusion, namely the feeling that a larger audience is "listening" to your productions than just those people on your contact list. You may not think this at the moment that you are updating your profile with your latest status event, but the fact that Facebook allows you to do this to your list gives the illusion that you are broadcasting to the world...even if "world" is only a subset of those people on the Facebook site that can receive your feeds.

I think this is a powerful enticement for existing users to continue to post items to their profiles, whether they be images, video or audio files, links, or just status updates. In this way Facebook emulates the abilities of several other social networks that focus on a particular area of expertise. For example, Facebook allows you to create galleries of images, upload your images to those galleries, and then make those galleries available in a selective fashion to either the Facebook community, or to your network, or even to a specific set of users. This fine-grained ability to segment how much of your content you expose to others makes it very much like the popular Flickr site...but it lacks many of the advanced, image-specific features that ensure that Flickr is known only as a "photo sharing site" as opposed to a social network. If Facebook decided to, they could easily cannibalize Flickr's market by adding extended image sharing capabilities to their existing photo services. This would allow them to leverage the massive number of users being added to the Facebook platform on a daily basis and then direct them to use the expanding photo manipulation and presentation features. The same is true with regard to uploaded video files: Facebook could easily become more like YouTube by emulating the uploading features which that specialty site uses to define its uniqueness. Facebook can leverage the social aspect of its massive and rapidly growing network to steal users from those other services, which are also very large but restricted only to specific activities. I doubt it will be long before Facebook decides to extend itself into these areas (and more) once it has reached what it deems to be sufficient critical mass (in terms of users) to constitute a viable competitor to the dedicated sites for hosting and sharing such content.

That said, the ideal distribution method for the average internet user is not restricted to specific online silos of distribution; rather, people want to share content of any type, at any time, with the specific set of people that they wish. Some content is meant to be available for general consumption, but many businesses want to share content only with specific clients, or with prospective clients at specific times. As it is now, the landscape lacks an option that provides this service for both businesses and consumers. Still, the illusion of a grand audience is a powerful motivating factor toward getting existing Facebook users to continue to send out status updates and upload media to the site, thinking the entire world is actually listening to their broadcasts...even when it is only restricted to those individuals on their friends list. ;)

05 December, 2008

Apple's long term memory loss...

In the last few months I've been testing the accessibility of my web site to wireless handheld devices. The latest crop of smart phones includes countless useful functions; one phone that stands out is the iPhone, which has been intelligently designed to allow third party applications to be easily purchased, downloaded and installed wirelessly. This software purchasing paradigm gives the iPhone a unique ability to satisfy the application needs of users that previous generations of wireless phones didn't have. The main reason for the lack of this software variety had to do with the desire of the wireless providers to keep their customers locked into their own provided wireless device operating systems and applications. This "lock-in" syndrome that many corporations love to place on their customers is invariably broken by a manufacturer who realizes there is profit in allowing the customer the freedom to select the software they wish to run on their device.

This harkens back to a history lesson, because the open software concept replays a battle that was waged in the early 80's. In those days the players included many companies that are no longer with us as PC hardware providers, like Atari, Commodore and DEC, and some that are still with us but have moved to providing services and software as their primary business lines, like IBM and Apple. IBM plays a particular role in this story as it was the chief adversary of Apple during the early 80's, when each company had its own unique PC architecture. The architecture defines the internal design characteristics of the computer: the type of central processing unit, the configuration and size of the memory, the ability to attach external storage peripherals like tape drives, cartridges and other devices. In those early days, each new PC model produced by the manufacturers was attended by a unique architecture...that is, until IBM realized the power of building a modular architecture that could be carried from model to model and improved by having specific components upgraded. They applied this idea in successive generations of modular PCs from the XT to the AT to the PS/2, and along the way the computing industry picked up many of their modular innovations and combined them into the pretty much standard architecture used for PCs today.

The motherboard and chip makers (Intel was the only big dog supplying IBM machines at the time) facilitated this by designing their boards to accommodate the modular components of the new PC architecture. More importantly, alternate PC makers began with "clones" of the IBM models, and that allowed the architecture to spread across manufacturers...this was key to spreading the PC platform across the world and ensured that it became the dominant architecture, which it remains to this day. At the same time as the IBM machines were being made more modular and cross-manufacturer friendly, the software on the machines was targeted by a shrewd guy out of Redmond, Washington named Bill Gates, owner of Microsoft. Gates saw the power of the architecture that IBM was building and realized that if he had his software on it, and controlled the software gateway to that architecture, he would be in a position to make serious bank. Microsoft thus moved quickly to secure a deal with IBM to provide the operating system software for the first PCs, and by allowing third party software designers the ability to code to their operating system they opened a huge market of software for the platform...thus starting the symbiotic relationship between Intel-based PCs and Microsoft software that is still with us to this day.

What was Apple doing at this time? Apple had designed its best PCs (the Mac line) with Motorola processors at the core. They had key innovations and architecture changes that made them very different internally from the IBM machines, but most importantly they were not designed to be as modular and were therefore not appealing to the clone makers. Apple also didn't see (like most people at the time) that an open hardware platform was the key to an open software platform, making it appealing for third party developers to design software, which would then bring more customers to the platform. Apple kept their platforms proprietary, releasing the Mac, which did well, the Lisa, which bombed, and several other Mac models into the early 90's, none of which ever moved beyond the niche markets of graphic design and desktop publishing, a cachet they had earned thanks to a relationship with one of their original software providers, Adobe Systems.

So the story is that the PC platform grew to its current monster size as a massive ecosystem of a generalized hardware architecture that could be put together using components from hundreds of manufacturers (read: competition reduces prices) to make new machines that all run the same software. At the same time the Apple platform, tied to a proprietary architecture and restricted to development by proprietary software and a few third parties, languished. The components were always more expensive since they came from smaller and fewer providers, and that ensured the market share remained small.

Fast forward to today: the iPhone gives Apple a chance to do what it failed to do with the Mac architecture over 20 years ago. They have the chance to open up their proprietary platform by allowing third party manufacturers to design their smart phones to the specifications of the modified OS X operating system that runs the iPhone. If they do this, they allow multiple smart phone makers to sell the product without Apple sharing in the production costs and risks. They also get the ability to spread their software service through the App Store to more third party developers, who can concentrate on writing code for the iPhone OS whether it runs on an Apple iPhone or on a Nokia phone enabled to run the OS. Yet Apple is not doing this; rather than open up their platform they are again playing the proprietary game....then, about two years ago, Google got into the game. The rumors of Google creating a smart phone actually go back further, I remember reading them as far back as 4 years ago, and in the last few months we've seen the release of the first "Gphone" running the open source Android operating system for mobile devices.

Google is hitting at everything that Apple is NOT doing with the iPhone that it would be doing if it were paying attention to its own past history. Google, first and foremost, is making the operating system freely open to development and use by alternate wireless phone manufacturers. They are providing resources for third party software providers to design software for the platform via their SDK, and they are providing a marketplace similar to Apple's App Store to allow third party developed applications to be downloaded by users owning phones that run Android. I predict that unless Apple opens up its platform in a similar fashion, by allowing its mobile OS software to be licensed by other mobile phone providers, it will watch a replay of the slow punishment it took in the 80's at the hands of the increasingly modular and cheap PC platform. It would be the ultimate irony if Apple, the company best positioned to advance in the wireless device platform space, were again to fail to see the importance of open approaches to gaining more market share and eventual profit.



http://en.wikipedia.org/wiki/Macintosh_128K

http://en.wikipedia.org/wiki/IBM_5150

http://en.wikipedia.org/wiki/IBM_Personal_System/2

01 December, 2008

why you see what I see...for the most part....

I find that running is an excellent activity to be engaged in when I need my mind to wander; the mind walks I've taken while so engaged have revealed the solution to many difficult problems in my code and in the design of a distributed web application framework. The freedom of thought inspired by this part of my day often sparks ideas entirely unrelated to my current line of work. Case in point was an idea that I explored while running several months ago. As I concentrated on breathing and keeping my form optimal for my pace, I took a moment to fully experience the vivid colors of the city and of nature that passed me as I ran. I thought back to a question that I'd come across much earlier in my life, as a high school student, but at the time lacked the knowledge to answer.

The question is thus: Why do you and I experience color the same way?

Having gained knowledge of how the human visual system works, as well as how the brain-eye system processes visual stimulation, the answer follows from simply traversing the path of photons as they enter the eye. The journey proceeds as follows. The first stage involves photons of various energies released or reflected by the surrounding environment in the direction of the eye. Electromagnetic theory tells us that photon energy corresponds to photon frequency (and hence wavelength) in a definite way via the equation:

E = hν

where E is the energy, h is the Planck constant and ν is not the Roman "vee" but rather the Greek symbol "nu", representing the frequency of the propagating photon. Since frequency is inversely related to wavelength (ν = c/λ, where c is the speed of light), the equation can be written as:

E = hc/λ

where λ is now the wavelength of the propagating photon. I won't get into the potentially sticky area of photons as propagating particles versus propagating waves; physicists learned that, in order for electromagnetic equations to provide consistent results, both interpretations are required depending on what is happening at the interaction point. For our journey the interaction occurs at the back of the human retina, but before we get there, a bit about wavelength and energy. It just so happens that photons with high frequency (and therefore short wavelength) have higher energy than photons with low frequency or long wavelength. In my time studying quantum mechanics I've conceptualized this as a measure of the interaction probability for the propagating photon: high energy photons have high interaction probabilities while lower ones have lower interaction probabilities...unfortunately this fails to hold in general. The curious way in which matter absorbs energy in quantized packets means that it is possible for high frequency photons to be completely ignored by certain materials if the quantum states required to stimulate those materials are not triggered. In particular, the mechanism of stimulation employed by our eyes involves photosensitive pigments in the rod and cone sensors at the back of our retinas. Like the aforementioned materials, the pigment molecules that form the sensitive surfaces of these rods and cones react in specific ways. The rods are sensitive to a broad band of photon energies across the visible spectrum, making them good gauges of intensity, while the cones have specific sensitivity ranges peaking roughly in the blue-violet, green and yellow-green. The perception of a rainbow of colors comes from our brain's synthesis of the many signals coming from these tiny cones. These sensitivities are provided by pigment molecules whose specific molecular construction is stimulated when photons of given energies or energy ranges impinge on them. The molecules absorb part of the energy and trigger signals that propagate down the optic nerve in a cascade of stimulation. Eventually these signals reach the brain's visual system and are synthesized into the perceptions of color that we have.
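As a small worked example of the energy-wavelength relation above, here is a quick Python sketch (with rounded constants) that computes the energy carried by the 445 nm photon discussed below.

```python
# Worked example of E = hc/lambda for a 445 nm photon (constants rounded).
PLANCK = 6.626e-34        # Planck constant, J*s
LIGHT_SPEED = 2.998e8     # speed of light, m/s
EV_PER_JOULE = 1.0 / 1.602e-19

wavelength = 445e-9       # metres
energy_joules = PLANCK * LIGHT_SPEED / wavelength
energy_ev = energy_joules * EV_PER_JOULE

print(f"{energy_joules:.3e} J  ({energy_ev:.2f} eV)")
# ~4.46e-19 J, about 2.8 eV; shorter wavelengths carry proportionally more energy.
```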

The answer to the question of why we see the same is in fact hidden in the mechanism of transition. The cones come in specific types that will be stimulated only when photons having the required energy impact them, and the mechanism for this is explained by quantum jumps of electrons between orbital positions. These gap jumps are always the same because the molecules involved are constructed of exactly the same elements. Thus a 445 nm photon hitting a cone sensitive to that wavelength will always kick an electron to a different orbital and in so doing trigger a signal of a specific character in return...thus continuing the cascade. The point is that the underlying physics of the molecules is what normalizes the response to the original photons at the rod and cone sensitive surfaces. All photons that do not induce stimulation at a cone are filtered out of the response, leaving only the pure responses to fixed energies. Since these responses are tied to the pigments and pathways used to transmit the signal to the brain, and these are the same ones used in all human beings, the response at the brain's processing region must be the same. However, this is not sufficient to conclude that the "perception" of stimuli will be the same. Here is where the consistency of neuronal cells comes into play: whatever neurotransmitters are responsible for relaying the (now normalized) visual signals, they cause the same effects at the neurons, so long as the quantities of neuronal stimulation are similar the perception will be the same. Note the key phrase "so long as"; there are in fact examples of perception being different for different people. People with color blindness, for example, see a skewed palette as a result of a different filtering of the stimuli at the rods and cones, and variations in the neurotransmitters between people can similarly lead to warped perceptions (as may be experienced by those under the effects of psychoactive drugs or other brain modifying agents), so the answer is not as cut and dried as it may seem at the outset. Perception does change whenever any stage of the stimulation, neurotransmission or neuronal processing is affected; otherwise, the preeminence of the physics (the underlying photon-triggered cascades) leads to similar perceptions for all.
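To make the "fixed filters" idea a little more tangible, here is a deliberately crude Python sketch of how the three cone types respond to a single-wavelength stimulus. The peak wavelengths (roughly 420, 534 and 564 nm for the S, M and L cones) are approximate published values, but the Gaussian shape and the bandwidth are simplifying assumptions of mine, not real cone response curves.

```python
# Crude sketch of how fixed cone sensitivities "normalize" a stimulus.
# Peak wavelengths are approximate published values; the Gaussian shape and
# width are simplifying assumptions, not measured cone response curves.
import math

CONE_PEAKS_NM = {"S": 420.0, "M": 534.0, "L": 564.0}
BANDWIDTH_NM = 40.0  # assumed width of each sensitivity curve

def cone_responses(wavelength_nm: float) -> dict:
    """Relative response of each cone type to a single-wavelength stimulus."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / BANDWIDTH_NM) ** 2)
        for cone, peak in CONE_PEAKS_NM.items()
    }

# A 445 nm photon drives mostly the S cones, and it does so for every observer
# with the same photopigments, which is the point of the argument above.
print(cone_responses(445.0))
```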




http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html#c2

http://hubel.med.harvard.edu/b40.htm