
How many cofactors for inducing expression of every cell type?

Another revolution in iPSC technology announced:

"Also known as iPS cells, these cells can become virtually any cell type in the human body -- just like embryonic stem cells. Then last year, Gladstone Senior Investigator Sheng Ding, PhD, announced that he had used a combination of small molecules and genetic factors to transform skin cells directly into neural stem cells. Today, Dr. Huang takes a new tack by using one genetic factor -- Sox2 -- to directly reprogram one cell type into another without reverting to the pluripotent state."

-- So the method invented by Yamanaka has now been refined to a) rely on only one cofactor and b) directly generate the target cell type from the source cell type (skin to neuron) without passing through the stem-like intermediate stage.

It also mentions that oncogenic triggering was eliminated in their testing. Comparative methods can now be used to discover conversions for other cell types...the question is: is Sox2 critical for all of them? It may be that the skin-to-neuron conversion relies on Sox2 modulation, but, say, skin-to-lung or heart-to-kidney conversions might require different numbers or combinations of factors.

It's true that parsimony is on our side (I posit that "overloading" is the default state for evolutionary selective innovation over "overriding" as a form of selective polymorphism), so there may be a few modulation channels, so to speak, that Sox2 uniquely controls...but there are so many known cell types that there must be crossing with other key controls. I hypothesize that, with over 200 cell types, the minimal number of cofactors required to uniquely express all types directly should be such that the (low) number of variation states per cofactor, raised to the power of the number of cofactors, accounts for at least 200 (or whatever the number of unique cell types across the life stages of a human being turns out to be). So, 3^5, 2^8, or 4^4.
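As a rough sanity check on that arithmetic, here is a minimal sketch (the ~200 cell-type target and the candidate state counts per cofactor are taken from the paragraph above; the function name is just illustrative) that computes how many cofactors each case requires:

```python
# Minimal sketch: smallest number of cofactors n such that
# (states per cofactor) ** n covers ~200 human cell types.
import math

TARGET_CELL_TYPES = 200  # assumption from the text above

def min_cofactors(states_per_cofactor: int, target: int = TARGET_CELL_TYPES) -> int:
    """Smallest n with states_per_cofactor ** n >= target."""
    return math.ceil(math.log(target, states_per_cofactor))

for states in (2, 3, 4):
    n = min_cofactors(states)
    print(f"{states} states per cofactor -> {n} cofactors "
          f"({states}**{n} = {states ** n} combinations)")

# Output:
# 2 states per cofactor -> 8 cofactors (2**8 = 256 combinations)
# 3 states per cofactor -> 5 cofactors (3**5 = 243 combinations)
# 4 states per cofactor -> 4 cofactors (4**4 = 256 combinations)
```

All three candidate combinations just clear the 200 mark, which is what makes each of them a plausible minimal encoding.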

Conservation of energy would be the overriding constraint that determined how one combination of factors gave rise to the evolution of new cell types by modulation of the cofactors, but that leaves the question of which pattern of cofactor polymorphism was most effective. Did the cofactors get "overloaded" with functionality (the exponent) more than they were "overridden" (the mantissa)? The vagaries of the selective processes in the early life forms from which the original variation in cell types arose would, I feel, hold the key.

In correlation with this theory is the fact that the cofactors thus far used to induce pluripotency have been drawn from six gene families (Myc, Klf, Sox, Lin28, Nanog, Oct3/4), which could cover all cell types with only three exponent hops...assuming those are indeed the base variations. The fact that various combinations induced cancer formation seems to indicate either that not all of them are, or that they must be modulated in complex temporal ways in order to avoid cancer formation.
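The phrase "three exponent hops" can be read a couple of ways; the interpretations below are mine, not the source's, but a quick check shows which readings actually clear the ~200 cell-type target:

```python
# Quick check of possible readings of "three exponent hops" for the six
# factor families (Myc, Klf, Sox, Lin28, Nanog, Oct3/4).
# These interpretations are assumptions for illustration only.

TARGET_CELL_TYPES = 200

readings = {
    "six families as the base, raised to the 3rd power (6**3)": 6 ** 3,   # 216
    "three states per family across six families (3**6)":       3 ** 6,   # 729
    "simple on/off switching of six families (2**6)":           2 ** 6,   # 64
}

for label, combos in readings.items():
    verdict = "covers" if combos >= TARGET_CELL_TYPES else "falls short of"
    print(f"{label}: {combos} combinations, {verdict} ~{TARGET_CELL_TYPES} cell types")
```

Either of the first two readings is enough to encode every known cell type, while plain on/off switching of the six families is not, which is consistent with the idea that the factors must carry more than binary variation.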

Whatever the correct minimal number of cofactors involved, I suspect they go way back to the Cambrian age, and thus the associated pathways are highly conserved. It will be interesting to see what the actual algorithm of pluripotency turns out to be. My bet is on overloading being more important than overriding, so of the three combinations above I bias toward those with larger exponents, as they are more likely to generate new cell lines during cross modulation with minimal effort (time/energy) under a natural selection process.

http://en.wikipedia.org/wiki/Induced_pluripotent_stem_cell
