I have found that as more people without formal training enter the coding space, the quality of the code that results varies in an interesting way.
Learning to code in a structured university course usually places a strong focus on "correctness" and on efficiency, expressed as big O analysis of the algorithms created.
Much less focus tends to be placed on what I'll call practical programming, which is the type of code that engineers (note I didn't use "programmers" on purpose) must learn to write.
Programmers are what universities create: students who can take a defined development environment and, within it, write an algorithm for computing some sequence, traversing a tree, or encoding and decoding a string. Efficiency and invariants are the guiding concerns, and the time allotted to produce a solution is often a week or more, depending on the professor's style of teaching and assigning problems. This type of coding is divorced from what really happens when a given algorithm is integrated into a larger project to perform a needed function or service.
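To make the contrast concrete, here is a hypothetical example of the kind of exercise I mean, written in Python: a recursive in-order traversal of a binary tree, the sort of neat, self-contained solution that earns full marks in a course setting (the names and structure are my own illustration, not any particular assignment).

```python
# A course-style exercise: recursive in-order traversal of a binary tree.
# The environment is fully defined, the input is assumed well formed, and
# nothing outside the function's own logic can go wrong.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    """Return the tree's values in in-order sequence."""
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

# A tiny, hand-built tree keeps the demonstration self-contained.
tree = Node(2, Node(1), Node(3))
print(in_order(tree))  # [1, 2, 3]
```

It is correct, it is efficient enough for the grader's inputs, and it tells you nothing about how it behaves once it lives inside a larger, long-running system.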
That is what they learn (or don't) when they become engineers in the field. There, the beautiful recursive solutions written at university, once placed inside a real system with real input and memory limits, suddenly reveal the need for tail call optimization, or produce memory leaks that are pesky to hunt down because a database connection was never properly closed, or fail because the code was never written to survive catastrophic interruption. In college development the assumption is always that the computer running your code will not crash mid-execution. That assumption is invalid in the field, where the code must be robust to such events in terms of execution (continue or restart the interrupted work), data (recover whatever is needed to resume), and reporting (tell some system process that the failure happened so it can be troubleshot).
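Here is a minimal Python sketch of two of those field lessons (the function names are hypothetical and sqlite3 is only a stand-in for whatever database a real system uses): Python performs no tail call optimization, so the elegant recursion dies on deep inputs and gets rewritten as a loop, and a connection has to be released even when the work inside it throws.

```python
import sqlite3

# Field lesson 1: Python does no tail call optimization, so every recursive
# call consumes a stack frame and deep inputs raise RecursionError.
def count_down_recursive(n):
    if n == 0:
        return 0
    return count_down_recursive(n - 1)

def count_down_iterative(n):
    # The field rewrite: a plain loop, immune to stack depth.
    while n > 0:
        n -= 1
    return 0

# count_down_recursive(100_000)  # RecursionError on a default interpreter
count_down_iterative(100_000)    # fine

# Field lesson 2: resources must be released even when the surrounding code
# fails, otherwise connections (and the memory behind them) slowly leak on a
# long-running system.
def record_result(db_path, value):
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS results (value INTEGER)")
        conn.execute("INSERT INTO results (value) VALUES (?)", (value,))
        conn.commit()
    finally:
        conn.close()  # runs whether or not the statements above blew up

record_result(":memory:", 42)
```

The point is not the specific rewrite; it is that neither lesson shows up until the code runs against real inputs on a system that has to stay up for weeks.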
The coding world of engineers emerges over time and *experience*: building code on live distributed systems and being the victim of these kinds of unforeseen events despite the apparent perfection of the programmer's code.
That said, the complexities that attend having different languages interact to form complex systems make the problems of engineering balloon. So, in my experience, a better approach is to gain a high-level understanding of the types of code: the classes of languages that exist to solve certain problems in the field, why they emerged, and a little about how those types of code interact on *live* distributed systems.
Once this 10,000-foot view is in place, stepping into the minutiae of coding in a particular language will make more sense, because the language will be seen not as an end in itself but as a tool in the process of *engineering* a system that performs a desired service or function.
Tangentially, this difference between university and field coding points to a problem with the engineering hiring practices of most organizations. In a very real sense, software engineering is like carpentry or house construction as well as being a mathematical and technical pursuit. To gauge the ability of a carpenter, those looking to hire one do not ask what tools he uses.
He is not asked about his favorite table saw, what type of level he uses, which formulation of concrete he prefers, or which drills and other construction tools he owns. Instead, the question is simply, "Show me something you've built."
Engineering is precisely the same way: the quality of code that solves a problem made of multiple interacting elements is revealed by looking at the end results. A working application or game, a well-designed UI... these are the proofs of ability as an engineer and should, in my view, be placed above the minutiae of algorithmic function (all of which, at this date, can be looked up in seconds on Google or... Stack Overflow!). So although practical engineering is what the field work itself demands, to get the job one must often bone up on the relatively irrelevant minutiae of algorithms and data structures that are easily referenced during field work.
The same is true of the actual APIs and languages used: good engineers can build just as great code with Python as with Java, given time to absorb the language's role in the requested design and apply it, in the same way that a good carpenter can construct a beautiful home with a DeWalt drill as readily as with a Craftsman. ;) Ironically, those who learn to code outside the university setting are more likely to be aware of the dynamics of live environments, since they tend to learn in those environments... but for hiring purposes, the minutiae of college learning are what tend to be probed. Adept use of available tools (engineering) is filtered out in the interview process, and the opportunity to hire brilliant engineers is missed.
Hiring practices need to start reflecting this difference between engineers and programmers, and those learning to code (inside or outside an academic setting) need to be aware of these issues to give themselves the best chance of success.
Links:
http://en.wikipedia.org/wiki/Tail_call_optimization
http://en.wikipedia.org/wiki/Big_O_notation
http://en.wikipedia.org/wiki/Memory_leak
http://stackoverflow.com/
http://en.wikipedia.org/wiki/Dewalt
http://en.wikipedia.org/wiki/Craftsman_%28tools%29
Comments
I'm not sure I entirely agree with this. Most people I know who did CompSci at university taught themselves to program when they were at high school, and then the university experience taught them focus on top of that.
In my own case, I was self-taught in Basic, C, Pascal and x86 assembler from ages 11-18, so I went into Software Engineering at university having already written loads of software, with massive experience of working directly against hardware limitations.
University then took that rawness and molded it: it taught me to design software in an OO way, to document and test it, and loads of important concepts such as how memory management, file systems and multi-threaded systems work.
When I got a development job after university, I had massive advantages over the two guys hired at the same time who didn't have comp-sci degrees.
However, people who get the real-world experience first and then go on to do a degree (I include myself in that category) can go to university and, as the theory is learned, say to themselves 'yes, I can see how I would use that' or even 'no, that's good in theory but won't work in practice'. Those without that experience have no framework for judging what is appropriate or not; they have no choice but to accept everything.
I think a lot of companies are short-sighted about this and are likely to miss out on really good people as a result. I wonder what it will take for that mindset to change?
Awesome post.