
HDTV without the HDTV

About 5 years ago, I recall noticing the difference in quality between the signals I received on non-cable TV from, say, 10 years ago and today's digital cable signals. The main difference lay in the fact that digital TV replaces ghosting and snow artifacts with digital pixelization. In the early days I noticed a marked difference in quality: even when both signals were at their best, the local cable provider applied enough compression that various scenes would clearly show the artifacts.

The compression used on the digital signals is mostly MPEG compression, which uses a discrete cosine transform (DCT) based method to compress luminance and, even more aggressively, chrominance information in order to reduce the bandwidth requirements of the signal for transmission. However, cosine-based compression is subject to quantization errors and artifacts having to do with the selection of a specific-sized quantization kernel for the compression algorithm. For scene data that moves faster than the algorithm can encode the chrominance data, there is a marked pixelization of the image. There is also a good deal of loss when contrast between objects displayed on the screen is low (it shows particularly well on light-to-dark transitions). Finally, when there is a high level of variation in a particular frame, a fixed compression sampling rate will vary the appearance of the pixelization from frame to frame, making for a horrible effect. If you've watched badly compressed web video on YouTube, you know exactly what I am referring to. Now, the cable signals aren't "that" bad, but I was able to see enough of a difference between them and what I was used to seeing with an analog signal, or with a pure DVD signal from a set-top box, to know that the cable signal wasn't as good as it could be.

I recently upgraded my cable service so that my cable receiver is able to access the "HD" signals that many channels are providing alongside their standard-definition channels. I have a standard-definition flat-screen television, the Toshiba 27AF43, which I purchased 5 years ago mostly for its convenient inputs (component video) and for its perfectly flat screen. It provides a clean, sharp, and noise-free display for my DVD player (also a Toshiba). I've used that DVD signal as a reference for just how good the screen is compared to the cable signals it displays when I am watching CNN or the Science Channel, and the difference is clear. The experience gave me the indication that the HD channels might provide quality approaching the DVD signal, and sure enough, upon upgrading to the new receiver and tuning to an HD channel, I was surprised at how much better the signal was. Gone were the obvious pixelization squares in low-contrast transitions, fast-moving scenes, and high-detail scenes. Simply reducing the compression on the digital signal improved it markedly on my standard-definition TV.

It makes you wonder: as we are being prodded by the electronics companies to purchase new HDTV sets, many of us have existing standard-definition screens that aren't being pushed to their limits of resolution, because the cable companies have so severely compressed the digital signals they are sending. I have seen an HD screen both on a computer and on an HD monitor, and the difference in quality between 1080i/p and standard definition is again obvious, but I wouldn't say the difference is bigger than what I observed when going from normal digital cable on a standard-definition monitor to HD digital cable on that same monitor.
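To make the quantization artifacts described above concrete, here is a minimal sketch in Python (using numpy and scipy) of what happens when the DCT coefficients of a single 8x8 block are coarsely quantized. The block values and the quantization step are made-up numbers chosen to exaggerate the effect; this illustrates the general DCT technique, not any actual cable encoder.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT, applied along each axis in turn.
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(coeffs):
    # Inverse 2-D DCT.
    return idct(idct(coeffs, norm='ortho', axis=0), norm='ortho', axis=1)

# A smooth 8x8 gradient block: the kind of low-contrast, light-to-dark
# transition where banding and blocking show up most clearly.
block = np.tile(np.linspace(16, 235, 8), (8, 1))

# Coarse uniform quantization: divide, round, multiply back. Real MPEG
# encoders use a per-frequency quantization matrix; a single step of 40
# is an assumption chosen to make the error obvious.
step = 40.0
coeffs = dct2(block)
quantized = np.round(coeffs / step) * step
reconstructed = idct2(quantized)

# The reconstruction error is what the eye sees as pixelization.
print("max error:", np.abs(block - reconstructed).max())

Because each 8x8 block is quantized independently, neighboring blocks reconstruct with errors that don't agree at their shared edges, which is exactly the visible blocking; the same rounding turns a smooth gradient into a few flat steps, which is why low-contrast transitions band so noticeably.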
It seems a few of the cable providers are getting away with providing "HD" quality that only barely exceeds the resolution capability of a standard-definition monitor!
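To put rough numbers on that observation, here is a back-of-envelope sketch. The bitrates are assumptions in the ballpark of typical cable MPEG-2 allocations, not measurements from any particular provider, but they suggest why an HD feed downscaled to a standard-definition screen can look so much cleaner than the SD feed of the same channel.

SD_PIXELS = 720 * 480      # ~346,000 pixels per 480i frame
HD_PIXELS = 1920 * 1080    # ~2,070,000 pixels per 1080i frame

sd_bitrate = 3.0e6         # assumed: a heavily multiplexed SD cable channel
hd_bitrate = 15.0e6        # assumed: a typical HD channel allocation
fps = 30

# Both streams end up on the same ~346k-pixel SD screen, so compare the
# bits available per displayed pixel per frame.
print(sd_bitrate / (SD_PIXELS * fps))   # ~0.29 bits per displayed pixel
print(hd_bitrate / (SD_PIXELS * fps))   # ~1.45 bits per displayed pixel

Under these assumptions, roughly five times as many bits describe each pixel the SD screen can actually show, and the downscale averages away much of the remaining quantization noise.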

Just an observation I thought was worth sharing...
