
Bioethics of Gene Editing: An Analysis of Some Concerns as Addressed by Steven Pinker



Steven Pinker provides some details regarding his view that banning gene editing technology outright is naive in this interview. I sometimes agree with what Pinker states on various issues, and on this one, a particular area of my research focus, he does get the general reality correct in my view (that bans are naive at this point). However, he's a cognitive scientist and not a molecular biologist, and thus has made a few errors that I will point out here.

First, the idea that CRISPR is prone to significant errors is false. The early versions of the technology produced by Doudna's team in 2012 were prone to off-target errors, but even then they were significantly less likely to induce errors than prior methods (zinc-finger nucleases and TALENs).

Since then, far more advanced methods have emerged that are nearly perfect at making single-gene modifications consistently. Keep in mind that a large part of the accuracy comes from ensuring that a sufficiently unique guide RNA is used to zero in on a given gene and induce a change... there is no reason to suppose that an arbitrarily long and unique guide RNA can't be used to target a change over a desired region. Technical issues exist in packaging these longer guide RNAs with the complex editing machinery and vectoring them, but those are hard-to-dos, not impossible-to-dos; if we throw the will and money at the problem, it will surely go away... or at least recede to such a degree that people are willing to pay the necessary money to overcome any inherent risk. A paper by George Church et al. described some of the issues with the then-current process, some of which have since been alleviated or mitigated:


Our current pharmacy business is thriving, and it operates on exactly this kind of risk-profile calculus... every time you take an aspirin, depending on who you are, you risk dying from a stroke or suffering some cross-effect with another drug you have been taking.
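The guide-RNA uniqueness point above can be made concrete with a toy sketch: a guide sequence that occurs exactly once in the target genome has no exact-match off-target site, and lengthening the guide is one way to get there. The miniature "genome" and guide sequences below are invented for illustration only; real specificity checks also account for near-matches and PAM sites, which this sketch ignores.

```python
# Toy illustration: count exact occurrences of a candidate guide sequence
# in a genome. A count of 1 means no exact off-target site exists.
# The genome string and guides here are made up for the example.

def count_exact_sites(genome: str, guide: str) -> int:
    """Count every exact occurrence of `guide` within `genome`."""
    count = 0
    start = genome.find(guide)
    while start != -1:
        count += 1
        start = genome.find(guide, start + 1)  # resume search past this hit
    return count

genome = "ATGCGTACGTTAGCATGCGTACGTTAGCCCGGAATTCGAT"
short_guide = "ATGCGTACGT"        # occurs twice -> risks an off-target cut
longer_guide = "ATGCGTACGTTAGCC"  # the extra bases make it unique

print(count_exact_sites(genome, short_guide))   # 2
print(count_exact_sites(genome, longer_guide))  # 1
```

The same brute-force scan scales conceptually (if not computationally) to a full genome: the longer and more distinctive the guide, the fewer places it can land.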

Second, with regard to "designer" babies, his description of psychological traits as being extremely difficult to modulate is on point. Sweeping changes like those we saw in "Gattaca", where intelligence was directly modulated in a general way, are unlikely. However, intelligence can be modulated with much more gene-specific changes that affect it indirectly, for example changes that may control the rate of neurogenesis (which are likely to involve just a few genes out of thousands) or changes that control memory formation by regulating certain key neurotransmitters. Sure, lots of testing needs to happen on these, but we have an extant body of living people already expressing a full landscape of gene interactions.
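Why broad psychological traits resist direct modulation can be sketched with the standard additive model behind polygenic scores: the trait value is a weighted sum over many variants, each contributing a tiny effect, so no single edit moves the needle much. The variant names and effect sizes below are entirely hypothetical, chosen only to show the arithmetic.

```python
# Minimal polygenic-score sketch: trait score = sum over variants of
# (allele dosage 0/1/2) * per-variant effect size. All numbers invented.

effect_sizes = {   # hypothetical per-variant contributions to some trait
    "rsA": 0.12,
    "rsB": -0.05,
    "rsC": 0.30,
    "rsD": 0.02,
}

def polygenic_score(genotype: dict) -> float:
    """Additive score: dosage of each scored variant times its effect size."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

# A hypothetical person carrying two copies of rsA and rsD, one of rsB:
person = {"rsA": 2, "rsB": 1, "rsC": 0, "rsD": 2}
print(polygenic_score(person))  # 0.12*2 - 0.05*1 + 0.30*0 + 0.02*2 = 0.23
```

With real traits the sum runs over thousands of variants, which is exactly why a single targeted edit (say, to one neurogenesis-related gene) is a far more tractable intervention than "editing intelligence" wholesale.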

I've described some of this in hypothetical stories on the technology going back to 2008:


Understanding what we can do will come from looking at what nature IN us is already doing with no problem and simply replicating those patterns at earlier stages of development in target individuals.

Also, there are a bunch of designer-baby changes that have nothing to do with enhancement but simply eliminate dangerous traits. For example, susceptibilities to all types of genetic diseases have many well-known genetic factors that can be singled out and edited with little reason for concern. Again, using the corpus of expression in living people to identify extant interactions is a great guide to avoiding willy-nilly changes... and so a rapidly improving ability to perform whole-genome sequencing, along with the characterization of patterns in those massive data sets, is a very good thing to be happening now.
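The "use the corpus of living genomes as a guide" idea amounts to a filtering step: only variants already catalogued as well-characterized and disease-linked get flagged as edit candidates, while anything novel or uncharacterized is left alone. The variant identifiers and the catalogue below are entirely made up for the sketch; a real pipeline would draw on curated clinical databases.

```python
# Hypothetical sketch: cross-reference a person's sequenced variants
# against a catalogue of known pathogenic variants, and propose edits
# only for catalogued hits. All IDs here are invented placeholders.

known_pathogenic = {"var_cf_001", "var_hbb_002", "var_brca_003"}

def edit_candidates(sequenced_variants):
    """Return, sorted, only the variants found in the pathogenic catalogue."""
    return sorted(v for v in sequenced_variants if v in known_pathogenic)

# A hypothetical person's variants: two catalogued, one novel.
sample = {"var_cf_001", "var_novel_999", "var_hbb_002"}
print(edit_candidates(sample))  # ['var_cf_001', 'var_hbb_002']
```

The novel variant passes through untouched, which is the conservative behavior the paragraph above argues for: edit only where the extant human corpus already tells us what the change does.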


Third, many in the bioethicist camp who feel a delay should be placed on germ-line editing research ignore the fact that many phenotype-associated genetic changes can be made with few to no side effects at all. These would be the types included in what I've termed "cosmecutical" changes.



http://sent2null.blogspot.com/2015/06/cosmecuticals-trigger-to-injection-of.html

In my chapter of the book "The Future of Business" I provide a breakdown of some of the potential opportunities that loom in pursuing the development of phenotype-targeted gene editing. The industry has the potential to be a trillion-dollar one inside of 20 years' time. Coupled with its relative ease of implementation and safety compared to other germ-line modifications, as well as the potential social gains, it makes a very big and obvious goal for advancing humanity along several lines, one that is worth the small risks.

http://fob.fastfuturepublishing.com/

Changes in this camp include edits to genes associated with the expression of melanin in the skin, eyes, and hair... these would induce visible modulation of skin tone, hair color, and eye color. I've explained the social ramifications of such changes in a few posts over the years, but the advantage to society of eliminating "race" as a fixed identifying element in the human family should be an obvious thing that we WANT to do, and do as soon as we can. The age of "Neapolitan people," as I've called it, is important for defusing a great deal of the causes of xenophobia that exist in the world, which are due to ignorance and fear based on differences in apparent phenotype.


Moreover, from the ethical perspective these changes are less ominous, and also potentially more lucrative as targets for researchers looking to take advantage of the technology... particularly in light of all the hubbub concerning modification of the germ line.

Finally, such changes don't have to apply to the germ line at all. An advanced version of CRISPR used to precisely target and edit genes associated with skin color could confer temporary effects approximating a long-lasting but genetically modified tan (one where you can tan darker or lighter)... even that could be marketable and would likely find a ready demand.
