Rumored PlayStation 4 10-year life... hardware and history explain why.

In a recently published article, a Sony representative had this to say about the company's console plans:

"We at PlayStation have never subscribed to the concept that a console should last only a half-decade. Both the original PlayStation and PlayStation 2 had life cycles of more than 10 years, and PlayStation 3 will as well. The 10-year life cycle is a commitment we've made with every PlayStation consumer to date, and it's part of our philosophy that we provide hardware that will stand the test of time providing that fun experience you get from day one for the next decade."

It makes a lot of sense, but the trend toward longer lifetimes is not unique to Sony's devices. Release periods between new game systems have been growing longer since the days of the Atari VCS. Note that the current generation of video game systems lasts far longer than the systems I played as a kid, which were updated yearly.

Modern systems face effectively finite processing requirements, given that so many have already maxed out high-frame-rate performance for 3D gaming (the most processor- and memory-intensive workload) at the resolution of the most commonly used panels (today, LCD and LED panels at 720p or 1080p). All the hardware muscle needed to drive that pixel depth can safely be packed into a phone-sized device; in fact, there are smartphones today with the *same* maximum resolution (1920 x 1080) as the PlayStation 3.
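To put numbers on that comparison, here is a quick back-of-envelope sketch of the raw pixel counts for the panel resolutions mentioned above (the figures are standard display geometry, not from the article):

```python
# Raw pixel counts for the common panel resolutions discussed above.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),  # PS3's max output; also some phone panels
}

for name, (width, height) in resolutions.items():
    print(f"{name}: {width * height:,} pixels per frame")

# A 1080p phone screen pushes the same 2,073,600 pixels as a 1080p TV,
# just packed into a far smaller (and denser) panel.
```

The point is that the pixel budget is fixed by the panel, not the device size, so once a chip can fill 1080p at full frame rate, the job stops growing.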

Once processing needs are maxed out, there is no real reason to deploy new hardware; you simply focus on making better game experiences with hardware that is already *good enough* to deliver great performance on the games designed for it.

There is a reason all the console makers have basically stopped using resolution as a marketing strategy for selling their games. When I was younger, resolution was a top-line item to tout over a competitor's console, because back then you could *see* the difference. Not anymore.

The latest generation of graphics boards and chips now does things *in real time* that were unheard of 10 years ago even as offline simulations using old tricks like the various mapping and shadowing procedures. Real-time physics is all the rage, along with real-time object deformation and real-time fire and water effects... the hardware has simply gotten far beyond the applications that developers are programming it to perform.
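To make "real time" concrete: a 60 fps game gets roughly 16.7 ms per frame to advance its whole simulation. A minimal sketch of per-frame physics stepping (illustrative names only, plain semi-implicit Euler, nothing from any particular engine) looks like:

```python
# Per-frame physics step: semi-implicit Euler under gravity.
# Hypothetical minimal example; real engines batch thousands of bodies.
GRAVITY = -9.81        # m/s^2
FRAME_DT = 1.0 / 60.0  # seconds per frame at 60 fps (~16.7 ms budget)

def step(position, velocity, dt=FRAME_DT):
    """Advance one body by one frame; returns (position, velocity)."""
    velocity += GRAVITY * dt   # integrate acceleration first...
    position += velocity * dt  # ...then position (semi-implicit Euler)
    return position, velocity

# Simulate one second (60 frames) of free fall from 10 m.
pos, vel = 10.0, 0.0
for _ in range(60):
    pos, vel = step(pos, vel)
print(f"after 1 s: position {pos:.2f} m, velocity {vel:.2f} m/s")
```

The striking part is that modern GPUs run this kind of integration for huge numbers of objects, plus rendering, inside that same 16.7 ms window.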

Any PS4 is going to be at least current-generation capable, and that would make it a *beast* at the relatively modest resolution of a standard large-screen 1080p HD panel. Things will change once high-pixel-density displays start hitting the 50-inch-and-up panel market; those will show 1080p content (upscaled to Ultra HD) for what it is. That, of course, will get people wanting a console that can drive the physics, the textures, the impacts, and all the rest in real time at the higher resolution. That will require more horsepower, but Ultra HD panels still cost tens of thousands of dollars and are only just coming to market. It will be at least 10 years before they get down to prices where gamers start buying them, noticing that their (by then 10-year-old) PS4 can't drive them without slow frame rates, and wanting to upgrade.
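The scale of that extra horsepower is easy to estimate. Ultra HD (3840 x 2160) carries exactly four times the pixels of 1080p, so at a fixed frame rate the raw fill workload roughly quadruples (a crude upper-bound sketch; real rendering cost does not scale purely with pixel count):

```python
# Relative raw pixel workload: Ultra HD vs 1080p at the same frame rate.
FPS = 60
hd = 1920 * 1080    # 2,073,600 pixels per frame
uhd = 3840 * 2160   # 8,294,400 pixels per frame

print(f"1080p @ {FPS} fps: {hd * FPS:,} pixels/s")
print(f"UHD   @ {FPS} fps: {uhd * FPS:,} pixels/s")
print(f"raw workload ratio: {uhd // hd}x")
```

A console sized for 1080p would need on the order of four times the fill throughput to hold the same frame rate at Ultra HD, which is why the panel transition, not raw chip progress, is the likely trigger for the next upgrade cycle.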


Sal A. Magnone said…
I generally agree. Hardware cycles are expensive and software cycles less so. I think making device components like GPUs user-replaceable might also help. It'd have to be something as easy as taking an old Atari cartridge in and out; the old Atari 800 had a flip top, and you could plug in RAM and ROM modules. In the case of the PS4, the most likely early killer might be these new hi-res TVs. It's possible Microsoft and Sony didn't release new hardware for so long not only because of the expense but also because they were waiting to see how 3D panned out.