
Is Blu-ray too expensive? Answer: Maybe

A recent article at the Content Agenda site makes the case that Blu-ray optical storage technology is far too expensive given the market dynamics of an existing strong base of DVD-based devices and media, and the incremental nature of the multimedia and quality improvements the new format provides.

There is some merit to the article's arguments, but it misses some key points; assessing the pricing of Blu-ray technology first requires an understanding of how the semiconductor and electronics industries operate internally.

First is the fact that unlike the late 1990s, when DVD made its big debut and supplanted VHS's magnetic tape with an optical disc that is cheaper to produce per unit (a compelling impetus to switch over), Blu-ray today has to be built on production lines that are already turning out the more profitable DVD players and discs (more profitable per unit, since their production costs are long since amortized). In the two years or so that Blu-ray has been in the zeitgeist (thanks in large part to the media's framing of a "second-generation format war"), it has had to be produced alongside a nearly comparable product in DVD. For manufacturers to ramp up production, they would have to outlay cost for the components unique to Blu-ray while cutting back on a raging profit center. It should be no surprise that manufacturers are reluctant to do this until a blue-tuned laser head costs the same as a red-tuned one, and until the additional DSP horsepower required to encode and decode disc data on the fly is cost competitive with the now decade-old DSPs and lasers produced in volume (and at low cost) for DVD.
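To put rough numbers on that switching calculus, here is a minimal Python sketch of the decision a line manager faces; every figure in it is invented purely for illustration:

# Hypothetical opportunity-cost comparison for one production line.
# All figures are invented for illustration only.

dvd_margin_per_unit = 12.0     # mature, fully amortized parts
bluray_margin_per_unit = 20.0  # higher sale price, but costlier parts
retool_cost = 2_000_000        # one-time cost to switch the line over
units_per_year = 500_000

dvd_profit = dvd_margin_per_unit * units_per_year
bluray_profit = bluray_margin_per_unit * units_per_year - retool_cost
print(f"Keep making DVD players: ${dvd_profit:,.0f} in year one")
print(f"Switch line to Blu-ray:  ${bluray_profit:,.0f} in year one")

# Switching only wins if Blu-ray demand actually fills the line; at half
# the projected volume the retooled line earns less than the DVD line did.
half_volume = bluray_margin_per_unit * (units_per_year / 2) - retool_cost
print(f"Blu-ray at half volume:  ${half_volume:,.0f} in year one")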

The details of ramping up Blu-ray production over DVD go even further than switching parts at the electronics manufacturers. Most DVD player makers assemble their units from parts acquired from third parties. The true controllers of cost, and even of player production rates, are the semiconductor and laser-diode suppliers. If they are making a killing on red lasers and DVD (MPEG-2) DSPs, they have no real reason to open parallel design lines for Blu-ray lasers and DSPs until the return (in bulk purchases from the player manufacturers) justifies the switchover. Retooling semiconductor lines is serious business, which is why, every so often, the big fish like TSMC incur massive fabrication build-out costs. Designing the DSPs in DVD and Blu-ray players requires an initial investment in clean rooms, incredibly high manufacturing tolerances, and adequate yield of working chips (only a percentage of the chips produced actually work!), an outlay the semiconductor makers simply will not leave to the fickle chance that Sony, Toshiba, and other providers will order the new chips in sufficient bulk to cover it.

In a climate where the difference in visual quality is marginal and appeals only to the subset of individuals with HD sets (which are usually produced on relatively low-yield screen technologies like plasma or LCD), the margins to be had are razor thin. Ramping Blu-ray slowly rather than quickly, waiting for the consumer market to switch over at its own pace, is much smarter than betting on fast adoption and building out production of Blu-ray devices that may not sell well enough to recoup costs, especially under the competitive conditions of multiple manufacturers selling the same product while trying to distinguish themselves in some way.
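To make the yield point concrete, here is a small Python sketch of how yield drives the effective cost of each working chip; the numbers are, again, entirely hypothetical:

# Hypothetical illustration of how fabrication yield drives the
# effective cost of each *working* chip. Numbers are made up.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    """Cost of one working chip when only yield_rate of dies function."""
    return wafer_cost / (dies_per_wafer * yield_rate)

# A mature DVD DSP process: cheap wafers, well understood, high yield.
dvd = cost_per_good_die(wafer_cost=3_000, dies_per_wafer=400, yield_rate=0.90)

# A new Blu-ray DSP process: costlier wafers, early-life yield is low.
bluray = cost_per_good_die(wafer_cost=5_000, dies_per_wafer=300, yield_rate=0.50)

print(f"DVD DSP:     ${dvd:.2f} per working die")
print(f"Blu-ray DSP: ${bluray:.2f} per working die")
# ~$8.33 vs ~$33.33: four times the cost per good chip, before any of
# the clean-room and design investment is even amortized.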

With online resources making it easy to research the technologies, the manufacturers, and the trends in adoption and pricing, it is critically important that both the makers of the players and the makers of the chips get production volumes precisely right, or they will be courting billions in losses.

So the assumption that the providers are milking the format for margin is a bit naive. The two-level coupling of manufacturing processes sensitive to pricing, together with already-thin per-product margins, makes it critical to hit the right production volumes, and it is better to go too slow, make money, and leave some on the table than to go too fast and lose billions without making anything.
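The asymmetry between undershooting and overshooting is easy to show with another toy Python model (again, the numbers are invented for illustration):

# Hypothetical sketch of why overshooting production is so much worse
# than undershooting it. All numbers are invented.

unit_cost = 250.0      # cost to build one Blu-ray player
unit_price = 300.0     # sale price while the format is young
demand = 1_000_000     # units the market will actually absorb

def profit(units_built):
    sold = min(units_built, demand)
    unsold = units_built - sold
    return sold * (unit_price - unit_cost) - unsold * unit_cost

print(f"Undershoot by 40%: ${profit(600_000):,.0f}")    # $30M: money left on the table
print(f"Exact demand:      ${profit(1_000_000):,.0f}")  # $50M: the ideal case
print(f"Overshoot by 100%: ${profit(2_000_000):,.0f}")  # -$200M: unsold stock burns capital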

Finally, just a word on the statement that MP3 is not as high quality as CD. That is correct if you are talking about sampling rates below 44.1 kHz and bit rates under about 128 kbps, but most software MP3 encoders allow the CD-standard 44.1 kHz (or higher) sampling rate and bit rates up to 320 kbps (non-standard "freeformat" modes can reach nearly 500 kbps and beyond), enabling quality that rivals the output of most CD players. Because these encoders are software, very high quality files can be produced from original signal sources (mixers, live instruments) and encoded to high-bitrate MP3 masters for distribution to CD or SACD. The Wikipedia article on SACD states that it is roughly equivalent to a 20-bit/192 kHz PCM stream; some software tools can produce MP3s approaching this quality, though the MP3 format itself caps sampling at 48 kHz.

http://www.mp3newswire.net/stories/2000/bestsound.html

http://jthz.com/mp3/ (scroll to the section "Fighting the 'MP3 does not deliver professional quality audio' myth")
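To see why the "MP3 is worse than CD" claim only holds at low encoder settings, a quick back-of-the-envelope Python calculation helps (the CD figures are the Red Book standard; the MP3 rates are common encoder presets):

# Raw PCM bitrate of CD audio versus common MP3 encoder bit rates.

sample_rate = 44_100  # Hz, Red Book CD standard
bit_depth = 16        # bits per sample
channels = 2          # stereo

cd_kbps = sample_rate * bit_depth * channels / 1000
print(f"Uncompressed CD audio: {cd_kbps:.1f} kbps")  # 1411.2 kbps

for mp3_kbps in (128, 192, 320):
    ratio = cd_kbps / mp3_kbps
    print(f"MP3 at {mp3_kbps} kbps -> {ratio:.1f}:1 compression")
# At 128 kbps the encoder discards ~11x the data and artifacts can be
# audible; at 320 kbps the ratio is ~4.4:1 and most listeners cannot
# tell the encode from the CD source.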
