
Broken Encapsulation: how hacking is really just finding ways to play middle man to component systems.

The annual Black Hat and Defcon conferences are well known events where hackers and crackers of all stripes come together to demonstrate their latest "exploits". These range from ways to snoop data or voice off open cell or wifi networks to ways to modify the function of various hardware systems. At this year's Defcon conference a team of hackers demonstrated some exploits of the computer systems inside a modern automobile; you can see a video of some of their results here. The interesting thing about these hacks, beyond the FUD the media seems intent on generating by publishing such stories, is that they aren't really that surprising.

Engineering systems is about learning to cleverly use abstraction and functional encapsulation to build extremely complex systems. In object oriented programming, encapsulation is a fundamental aspect of good OO design, and I've written extensively on these ideas in past posts. Exploitation in the physical realm of hardware systems is realized by breaking the encapsulation of the physical components of the system (in this case a car) and then introducing a middle man to modulate the system directly or remotely. Seen this way, it really is no surprise that such exploits are possible. In OO programming the encapsulated layers are hidden behind layers of code abstraction that never present themselves for manipulation by outside agents. In software, only the interface presented to the end user affords an input vector for committing an exploit (for example cross site scripting and validation attacks), but these offer a very small attack surface, and one that can be readily fixed by software engineers once discovered.
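To make the software analogy concrete, here is a minimal Python sketch (the class names and numbers are my own illustration, not taken from the Defcon talk) of a component that encapsulates its state behind a small interface, and a middle man that, once inserted between caller and component, silently modulates every call:

```python
class Throttle:
    """A component whose state is encapsulated behind a small interface."""
    def __init__(self):
        self._position = 0.0  # internal state, hidden by convention

    def set_position(self, pct):
        # the public interface validates its own inputs
        self._position = max(0.0, min(100.0, pct))

    def position(self):
        return self._position


class MiddleMan:
    """Sits between the caller and the real component, modulating calls."""
    def __init__(self, target):
        self._target = target

    def set_position(self, pct):
        # the attacker's logic: silently alter the commanded value
        self._target.set_position(pct * 2)

    def position(self):
        return self._target.position()


throttle = Throttle()
pedal = MiddleMan(throttle)  # the caller believes it holds the real Throttle
pedal.set_position(30.0)
print(throttle.position())   # the component received 60.0, not 30.0
```

The caller's code never changes and the component's interface is never violated; the exploit lives entirely in the seam between them, which is exactly the seam that physical access exposes in hardware.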

Hardware is a bit different. Most of the components in your car, TV, or computer are directly accessible to you, in many cases with nothing more than a screwdriver; you can break through the layers of physical encapsulation that would otherwise mask the deeper components behind the interfaces you are presented, and change the functions those interfaces expose. So it was in this exploit: the hackers had to gain physical access to the various control computers in the car in order to hijack and modulate their inputs and outputs. This type of broken encapsulation affords an attack surface that is not fixed, as it is in a software UI, where no access behind the interface is provided beyond the functional buttons, fields, and drop downs supplied by the engineers.

The attack surface of physical devices can thus be expanded by third party access: breaking the contracts of encapsulation that enable component based hardware systems to function creates opportunities to modulate the interaction of those components. This should make perfect sense and should not be seen as a flaw in the design of the systems. A car's computers do not necessarily have to be more protected from external access than they currently are by the physical barriers of the car that keep them hidden; building them to be impervious to physical access can be done, but it will come, quite literally, at a higher cost.


