

Human Theome Project sets sights on 2012

Author: Nathaniel Comfort

Joe and Mary Juke are models of piety. They attend services twice a week, are active in faith-based charity organizations, and their house brims tastefully with Christian iconography and literature. They describe themselves as “fundamentalists,” although Joe is quick to emphasize, “We’re moderate fundamentalists—we don’t bomb clinics or anything.” They are planning to have a family, and they are making sure to create a pious environment for their children. They know that the setting in which a child is raised helps determine the kind of adult he or she becomes.

But for the Jukes, books, icons, and saying “Grace” are not enough. In what is being cited as a milestone in personal genomics, Joe and Mary have taken steps to ensure their baby is religious—by selecting its genes.

Using preimplantation genetic diagnosis (PGD), a combination of genetic screening and in vitro fertilization, Joe and Mary are loading the genetic dice for their progeny, selecting embryos that carry the traits they want in little Joe Jr. (or mini-Mary). Modern techniques allow them to choose from a wide range of qualities, from avoiding hereditary diseases, to selecting eye, hair, and skin color, to shaping aspects of personality. For example, choosing a combination of half a dozen genes allows them to add a cumulative 40 points to their unborn child’s IQ. Many of these tests have existed for years, although they have only recently become available to consumers. But the most striking decision in their family-planning process was to expressly select for embryos that will grow up to be religious, because they carry the allele known colloquially as the “god gene.”

“It kind of gives a whole new meaning to the phrase ‘Chosen One,’” Mary says.

Sequencing the human theome

The gene, which was identified statistically in twins in a study published in 2005 (1), was recently cloned and sequenced, as reported in the online journal Nature Theology. Dubbed yhwh1, the gene correlates strongly with feelings of religious fervor. Studies show that the gene encodes a protein that is expressed in a part of the brain called Chardin’s area 86, long associated with religious activity and, strangely, anterograde amnesia. One famous patient, known only by his initial, “A.,” suffered an injury with a nail gun that resulted in a metal spike being driven precisely into area 86; he spent the last two decades of his life on a constant pilgrimage along US Route 66 between Kingman and Barstow, accompanied by his wife, Winona, whom he continually left behind at gas stations.

Particular expression of religiosity in a given individual varies according to environment; what is inherited is the capacity for intense religious experience and evangelism. First described in the Amish in a classic study of the 1960s, the trait was described as an autosomal recessive with high penetrance, and was linked to a rare inherited form of dwarfism. Recent analyses have also found the trait occurring at high frequency among charismatic ministers, shamans, and suicide bombers.

The yhwh1 allele is one of the latest findings in the burgeoning field of “theomics,” which aims to identify all genes associated with the practice of preaching, as well as general feelings of spirituality. Researchers plan to complete the project by December 21, 2012, when, according to some interpretations of the Mayan calendar, the world will come to an end. Here are some of the most exciting new findings of the project:

• Scientists estimate that at least 400 genes are involved with religious feelings or activity.
• A related project seeks to uncover the epigenetics of evangelism, which is thought to be caused by methylation of regions of the X chromosome, a reversible process that can profoundly affect gene expression.
• A newly discovered kinase, called Bub666, is strongly correlated with atheism. It seems to be responsible for the breakdown of yhwh1, suggesting that biochemists are approaching a mechanistic explanation of religious experience.

“It’s tremendously exciting research,” said Mary Magdalene-Gohdtsdottir, a senior researcher in the University of Utah’s Department of Omics. “Just think of it: the genes for God! Isn’t that cool?” Indeed, the federal government thinks so. NIH Director Francis Collins, a molecular biologist and born-again Christian, has recently created a National Institute of the Molecular Biology of Yahweh (NIMBY), with an annual research budget of $400, as part of the government’s effort to support faith-based initiatives in biomedicine.

But is it science?

Some critics have called the Jukes’ actions a step toward eugenics, described in the 1920s as the “self-direction of human evolution.” They see religiosity as a gift, not something that can be ordered from a catalog. “This is an outrage,” said the Reverend Reginald S. Inkblot, of Southboro Baptist Church in Onan, Kansas. “Religion can’t be in your genes. Science can’t explain it. It’s just a part of who...you...um, are. It’s just in your...uh, yea.” He added, “If God had wanted us to be religious, he would have....oh, wait.”

Others are appalled that religion would receive scientific consideration from scientific foundations at all. Dick Dorkins, President of the atheistic Society for the Prevention of Intelligent design, Theology, Or Other Nonsense (SPITOON), calls the entire effort a “travesty.” “If I must check my brain at the church-house door,” he said in a Skype interview, “then you must check your soul at the laboratory door. Come on—be fair.”

Dorkins worries that should the procedure become widespread, it could lead to persecution of the nonreligious. If those chosen by PGD tend to express genes such as yhwh1, scientists predict, gene frequencies across the population could shift. Dorkins envisions a dystopian scenario in which an atheistic underclass washes the wineglasses and polishes the pews for their genetic spiritual superiors. “It will be GATTACA crossed with The Ten Commandments,” Dorkins said, an audible quiver in his voice.

Evolution in religious hands

Some theologians have condemned in vitro fertilization because it normally results in the destruction of unused embryos. However, new gene therapy techniques make it possible to link a “suicide gene” to alternative forms of the desired genes in Joe’s sperm samples; thus, only sperm that carry the traits they want survive to fertilize Mary’s eggs. No embryos are destroyed in the process. This makes in vitro fertilization acceptable to many pro-life Christians.

Joe and Mary dismiss critics who say they are taking evolution into their own hands. “That’s just your theory,” says Joe. They view their decision to choose the religiosity of their unborn child as a command from above. “WWJC?,” Mary asks. “Who would Jesus clone?”

Ironically, as Biblical literalists, the Jukes dismiss Darwinian evolution as “unproven.” To them, the earth is 4,000 years old, and all the types of animals in the world today were on Noah’s Ark. They see themselves as spearheading a Crusade of believers into biomedicine.

His eye acquiring that spark of evangelism that is becoming a tell-tale sign of heavy methylation at Xq66, Joe’s voice deepened and he intoned, “The heresy of modern science will only be righted when human evolution is safely in the hands of people who do not believe in it.”

(1) Koenig, L. B., M. McGue, R. F. Krueger, and T. J. Bouchard, Jr. "Genetic and Environmental Influences on Religiousness: Findings for Retrospective and Current Religiousness Ratings." Journal of Personality 73, no. 2 (2005): 471-88.

Gender and Science

Author: Theral Timpson

Do women do science differently than men? Are women more intimate with their research subjects, more personal, and therefore more intuitive? Is a man more rational and objective? Does a woman by nature choose projects that a man wouldn’t think of? The gender gap in science has narrowed, and science has much to offer women scientists. But do women have something special to offer science?

This was brought to the fore a few years back when Larry Summers, then president of Harvard, made controversial and career-altering remarks on the subject (he later resigned the presidency and perhaps lost consideration for Treasury Secretary in the Obama administration). In a talk at a 2005 conference on women and minorities in science and engineering, Summers suggested three reasons for the declining rate of women being offered tenure in the fields of science and math. First, he postulated, because women have children they are unable or unwilling to work 80-hour weeks. Second, he asserted (and this was the kicker) that differences in test scores among high school students might have a biological basis. Third, he said it was not clear that discrimination played a role in the shortage of women in science and engineering teaching positions at top universities. Summers had to apologize within 24 hours of receiving a letter from the Standing Committee on Women signed by many of the Harvard faculty.

Summers's suggestions were quite simplistic, and the controversy is still newsworthy. Questions of gender rise quickly to the surface in a book I'm reading by one of our guest bloggers, Nathaniel Comfort, PhD, a science historian at Johns Hopkins, on the life of one of the most famous female scientists. Barbara McClintock is often mentioned by feminists as a woman scientist whose important contributions to the field of genetics were ignored by the leading geneticists of the time--all men--and only later acknowledged, eventually with a Nobel Prize in Physiology or Medicine. In his book, The Tangled Field, Comfort calls this a 'myth' and seeks to clarify the life and studies of this scientific giant and give some answers to the gender question.

“Here is the Barbara McClintock most people think they know,” he begins the book. For the first page and a half, Comfort summarizes the ‘myth’ of his subject. She began as a geneticist in the ’20s and ’30s studying maize, or Indian corn, at Cornell University. When the others in her group--all men--got jobs, she didn’t. She would eventually end up at Cold Spring Harbor on Long Island, New York, but was isolated from her colleagues. She didn’t even get a grad student. It was there she made her greatest discovery, which she called transposition--the idea that genes jumped spontaneously to new sites on the chromosomes. When she presented her findings, the result of six years’ research, she was ignored. “A few scientists were outright hostile,” Comfort writes. An even harsher reaction occurred five years later when she tried presenting her research a second time. “She ceased publishing and retreated into her laboratory, pursuing her meticulous experiments in isolation for decades.” In the ’70s a new group of molecular biologists discovered transposition in their own work, and McClintock was seen as the pioneer. She began to win prizes, culminating in her Nobel Prize in 1983.

“McClintock could challenge the canonical view of the gene," writes Comfort, "because she was not bound by dogma as other geneticists were. She attended to her corn plants with sensitivity, even empathy. Free from the ossified theory that constrained other scientists’ vision, she could see what others could not: genes were dynamic, interactive, flexible. . . . She seems in every way the opposite of the archetypal molecular biologist: senior, humble, intuitive, and female, working in the fields on a large, slow-growing, complex organism--in contrast to the young, rational, arrogant, male biologists working on bacteria and viruses.”

The story of McClintock being slighted by a male-dominated field was perpetuated by another woman in science, Evelyn Fox Keller, also a biographer of McClintock. (Keller’s biography, A Feeling for the Organism, appeared just months before McClintock was awarded her Nobel Prize.) Keller was herself a scientist who had experienced difficulty in the male-dominated world of science. “Harvard was a disaster,” she is quoted as saying in an extensive article in the Guardian from 2000. “It was a very difficult time to be a woman in a physics department.” Keller went from physics to biology, spending a summer at Cold Spring Harbor, where she first came in contact with McClintock. Eventually Keller became a leading voice for the feminists of the ’70s and ’80s, writing other books with titles such as Reflections on Gender and Science. In this book, she focused on the “historic conjunction of science and masculinity, and the equally historic disjunction between science and femininity. My subject therefore,” Keller stated at the outset, “is not women per se, or even women and science: it is the making of men, women, and science, or, more precisely, how the making of men and women has affected the making of science.” Keller explored how the common myth of masculinity as objective, impersonal, rational, and mental on the one hand, and of femininity as associated with subjectivity, feeling, the personal, and nature on the other, has affected science itself.

“The consequence of such a division is not simply the exclusion of women from the practice of science. That exclusion itself is a symptom of a wider and deeper rift between feminine and masculine, subjective and objective, indeed between love and power--a rending of the human fabric that affects all of us, as women and men, as members of society, and even as scientists,” Keller wrote.

Though we have come a long way in guaranteeing women an equal playing field in the sciences, some basic questions remain. The science presented in a research article is limited by the very language in which the discovery is couched. Just as I reached for a piece of furniture as a metaphor just now (“couched”), scientific explanation is written in metaphor, which, by necessity, is socially constructed. Objectivity in science has always been the high aim. A feminist such as Keller asks whether that “high aim” itself is not a notion that comes from a field begun and dominated by men. Should there be other ways and goals? For Keller, McClintock provided an alternative way of looking at science. If the history of modern science was the expression of a masculine intellectual type dating back not just to Francis Bacon but to Plato--a style that was dominating, controlling, reductionist, rational, and linear--then McClintock was the example of an alternative that is holistic, intuitive, interactionist, even mystical.

In his book, Comfort gives Keller credit for looking not for an alternative ‘feminine science,’ but for a non-gendered science. However, he points out that the myth of McClintock degenerated into “sentimental fancy.” McClintock was even considered a “mystic by nature, who tended to think intuitively.” This idea of McClintock defied the facts, according to Comfort's research. Reading through hundreds of hours’ worth of his subject’s research notebooks, he arrives at the conclusion that “McClintock’s allegedly holistic, intuitive scientific style was in fact highly rational and based on immense experience and reading." In probing the concepts of "empathy" and "intuition" in McClintock's work, Comfort disagrees with Keller and states that these are not gendered concepts. His research leads him finally to state that there is no feminine or masculine science, only good and bad science.

In a 1995 paper about feminist epistemology, Elizabeth Anderson of the University of Michigan points to a fear that feminism in science would be an extreme influence much like the perceived influence that totalitarian politics exerted over science and math. This fear is played upon in a joke from the former Soviet Union:

Apparatchik (impatiently):  How much is 2 + 2?
Mathematician (cautiously): How much do you want it to be?  

Is there truly no innate difference to the way women and men approach science? And if there is, does the approach matter when there is an independent way of testing the results?

Kathleen Okruhlik, professor of the philosophy of science at the University of Western Ontario, writes: “If you arrived at your hypothesis by reading tea leaves, it doesn’t matter so long as the hypothesis is confirmed . . . . You test the hypothesis in the tribunal of nature, and if it holds up, then you’re justified in holding on to it--whatever its origins.”

(Comments welcomed!)

Genomes on Facebook? Beyond Sequencing 2011

Author: Theral Timpson

Who owns our genomic data? This question opened a panel discussion titled “When People Share Their Genome on Facebook” at the second annual Beyond Sequencing conference in San Francisco Tuesday. The panel was led by Bio-IT World’s editor-in-chief, Kevin Davies, author of The $1,000 Genome. Another panelist, Dr. Pilar Ossorio, quickly deflected the question, saying “it’s not a matter of ownership.” She posed another question: could a representation of one’s genomic data be copyrighted? According to Dr. Ossorio, associate professor of law and bioethics at the University of Wisconsin School of Law, one thing is certain: when someone posts their genomic data to Facebook or any other public site, that person is giving up their rights to the data--whether they had rights to begin with or not.

A third panelist, Dr. Ken Patel from Sandia National Laboratories, admitted he didn’t know whether Facebook would be the best place to share this kind of data, but said that Facebook is tremendous in size and certainly “something to be reckoned with. It’s already there.” Dr. Patel envisions the approaching day when “my doctor will follow my Facebook page and keep in touch with other health professionals on my medical info.” In 2008, Congress passed legislation called GINA (the Genetic Information Non-discrimination Act) to protect, as the name says, against discrimination by insurance companies and employers based on one’s genetic or genomic data. Dr. Jonathan Eisen, a panelist from the UC Davis Genome Center, asked “just how much teeth the legislation really has.” What about informed consent for relatives? As we ponder what it means to put our genomes out there in cyberspace, what about others--our children and parents--who are affected by our genetic data going public? What about international protection? GINA covers US residents at home, but not abroad. Eisen pointed out that Dr. George Church, founder of the Personal Genome Project, which seeks to have 100,000 genomes online for public sharing, doesn’t just accept anyone. Applicants go through a rigorous briefing and even a class with a geneticist before being selected.

The audience, mostly made up of researchers, was eager to get in their questions. “Why is the medical community so far behind?" The panel did their best with this one, citing first the cost of genomic sequencing, second the fact that doctors are not trained in genetics, and third the fact that the data are still so preliminary. Each of the direct-to-consumer companies offering genetic information--viz., 23andMe, Navigenics, and deCODE--has its own algorithms for calculating risk. The results can be inconsistent from one provider to the next.

A second question from the audience came from a doctor who asked why folks would put their genomic data online without phenotypic data. So far a lot more is learned from a person’s ancestry, lifestyle habits, and past medical history than from the many gigabytes of DNA code. Will it become the norm to share our data on a site like Facebook? Kevin Davies reminded us that 23andMe is already a sophisticated social networking site that has led to some discoveries. Whether the model of the future is theirs, one like Navigenics’s, which works more closely with doctors, or a public site like Facebook, time will tell. This discussion posed more questions than answers.

Conference Keynotes

Dr. Michael Snyder, Stanford Center for Genomics and Personalized Medicine

Beyond Sequencing is one of a quartet of conferences on sequencing put on by CHI (Cambridge Healthtech Institute). Others include the Next Generation Summit in August, a conference in September on Applying Next Generation Sequencing, and the XGen Congress and Expo in March. Around 400 attended this show June 20-21 at the Hotel Kabuki in San Francisco’s Japantown. The keynote speakers were two well-known scientific figures from Stanford: Dr. Michael Snyder of the Stanford Center for Genomics and Personalized Medicine, and Dr. Ron Davis, longtime director of the Stanford Genome Center. Dr. Snyder has been tracking his own ‘omics’ profile (including his genome, epigenome, transcriptome, proteome, metabolome, and microbiome) for a year now, keeping a close watch on his health as a test project. I heard one of Dr. Snyder’s talks earlier this year, and he continues to preach the gospel of quality data. His lab uses various sequencing companies, and he showed that the data from each are different. Comparing his own data from Illumina with that from Complete Genomics, he commented that Complete Genomics made fewer calls, but that those calls were more accurate. The first step when looking at one's data is to define all the rare variants (only the ones in genes). Snyder then went through his one by one (this is still the only way). An unusual variant that stood out in his data was in TERT, a gene associated with telomeres and with anemia. Puzzled by this because he’s not anemic, he checked it out with Sanger sequencing. It was correct.
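For readers who have never stared at one of these files, here is a minimal sketch of the kind of first-pass filter Snyder describes--keeping only the rare variants that fall in genes. This is purely illustrative, not his actual pipeline: the input file name, the INFO keys (AF for population allele frequency, GENE for an overlapping gene symbol), and the 1% rarity cutoff are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: filter a VCF-like file down to rare variants in genes.
# Assumes each variant's INFO column carries hypothetical AF and GENE annotations.

def parse_info(info_field):
    """Turn a VCF-style INFO string like 'AF=0.002;GENE=TERT' into a dict."""
    out = {}
    for item in info_field.split(";"):
        if "=" in item:
            key, value = item.split("=", 1)
            out[key] = value
    return out

def rare_coding_variants(vcf_path, max_af=0.01):
    """Yield (chrom, pos, ref, alt, gene) for rare variants annotated with a gene."""
    with open(vcf_path) as handle:
        for line in handle:
            if line.startswith("#"):          # skip header lines
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _, ref, alt = fields[:5]
            info = parse_info(fields[7])      # INFO is the 8th fixed VCF column
            gene = info.get("GENE")
            af = float(info.get("AF", "0"))
            if gene and af <= max_af:         # keep variants that are rare AND in a gene
                yield chrom, pos, ref, alt, gene

if __name__ == "__main__":
    # "my_genome.vcf" is a placeholder; a rare TERT variant like Snyder's would pass this filter
    for variant in rare_coding_variants("my_genome.vcf"):
        print(variant)
```

The point of the sketch is simply that the "first step" is mechanical; the hard part, as Snyder noted, is still going through the survivors one by one.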

The next question to ask is whether one is heterozygous or homozygous for an allele. To answer this, Dr. Snyder sequenced his mother’s genome. He then turned to a Disease Risk Profile developed by his colleague at Stanford, Dr. Atul Butte. Some of this report made sense; some did not. Snyder was surprised to find out that he was at high risk for Type 2 diabetes, since he has never shown any symptoms. He got tested several times over the past year and found that his glucose level was indeed high, at times in the diabetic range. He offered that he has made some lifestyle changes, such as modifications to his diet and a more committed exercise program. Snyder is now focusing his research on ‘regulators,’ those actors in the cell that work something like a government. There are master regulators, which control many processes throughout the body, and others that are more local. Snyder compared the function of these two types of regulators to the work of President Obama versus a local school board official who serves part time.

Dr. Ron Davis, Stanford Genome Center

Ron Davis has been the recipient of numerous awards, including the 2011 Gruber Genetics Prize. He is a pioneer in the development and application of recombinant-DNA techniques and has seen technologies created in his lab become commercial successes. Dr. Davis shared with us a few of the new projects he’s currently working on at the Stanford Genome Center. He sings a different tune than many in this age of uber-sequencing: several of the studies and projects he promotes are alternatives to sequencing, such as using arrays to go after a targeted sequence. Many argue that since the price has come down so far, why not just sequence the entire genome? Then you can always go back to the data. Ron has several answers. First of all, the clinicians he’s talked to are legally concerned about the idea of going back to a patient and saying, ‘Hey, we found out some new stuff about your genome the other day.’ One study he presented compared exon expression chips with RNA-Seq. Not only were the chips about a tenth the cost, but the work could also be done much, much faster: for 5,000 samples, the sequencing would take 50 years, whereas the chips could process them in one year.

Perhaps the most promising of Dr. Davis’s projects are ‘Point of Care’ technologies. These will lower costs, provide higher sensitivity, and, most importantly, all be based on electricity. An example is a digital antibody assay in which the standard assay is read out electronically, using a particle rather than fluorescence; he calls it a PLISA (particle) rather than an ELISA assay. Sample collection for these technologies would be non-invasive, using urine, sweat, and/or breath. (One study is a TB breath test for children.) In all of these, there is no need for PCR.

Look for an upcoming podcast interview with Dr. Ron Davis from Stanford's Genome Center to hear more about these Point of Care technologies.

Why Should We Care . . .? Part IV: Toward a Poetics of HSMT

Author: Nathaniel Comfort

If you’re just joining us, I’ve been taking a first stab (the internet was invented for first stabs) at the context of justification for scholarly history of science. My simple premise: it should be beautiful or useful. The last couple of posts have reflected on ways in which scholarship in the history of science, medicine, and technology can be useful. Today I want to tackle beauty.

We don’t often acknowledge the aesthetic dimension of scholarship, and more’s the pity. The history of S, M, and T itself is beautiful. Who among us didn’t go into this field because of a romantic attachment to the objects? Dusty glassware out of bad sci fi movies, cold lancets in velvet cases, steampunk instruments, burnished walnut display cases, preserved specimens, rusty old ships and creaky clocks...the material culture—drop the jargon: the stuff—of old science is just cool. It’s all too easy to lose that aesthetic side in our drive to be taken seriously as professionals. I have seen too many scholars to count take a gorgeous subject and drain all the blood out of it. In the name of seriousness and professionalism we perfuse our rich, sticky, stinky historical worlds with rhetorical formaldehyde, producing for our colleagues an elegantly dressed corpse, eternally preserved but lacking the vitality—the sensuousness—of the original. I for one am more than willing to put aside sophisticated, austere high theory for a romping good story and rich description of cool artifacts.

Thus, by “beauty” I do not have in mind a high-falutin’, sentimental notion of aesthetics. Indeed, some of my more sober colleagues will likely take me for being less than serious. Have at it. In fact, I am prepared to go to the mat in defense of beauty—and, as we’ll see—good writing—as being just as rigorous as more conventional academic scholarship. By beauty, then, I mean everything that is aesthetic, sensual, and pleasurable about history of science writing. I mean the HSMT you read not because it helps you understand current events or guides your reproductive decisions or shapes your policies. I mean the stuff you read because you enjoy it. This broad sense of beauty is the principal justification for pre-18th century HSMT nowadays.

(Principal, not only. Darin Hayton’s excellent pieces on astrology on this website, for example, show that early science can have relevance to contemporary concerns. In fact, prove me wrong! I’d love to hear about ways in which ancient, medieval, and Early Modern science is relevant today. Just don’t be offended: I’m not saying that lack of relevance=lack of value. Beauty, in my view, is perhaps the highest justification.)

History has a unique aesthetic. Scientists take something complex and try to make it simple. They make simplifying assumptions, they isolate variables. This leads to an aesthetic of elegance and austerity. Historians, in contrast, like to take something that appears to be simple and show that it’s more complicated than you thought. The aesthetic of history is one of richness and texture. This puts HSMT in a tight and interesting spot. Nature is complicated. Science tries to impose simplicity on it. And we try to reimpose complexity on that enterprise, to show that science is not as straightforward as scientists tell us it is. I’m arguing that the most effective way to do that is by disciplining ourselves to express that complexity simply.

Which brings us to writing. We pay appallingly little attention to writing in our graduate education and our professional writing. We praise unstructured, tone-deaf, cliché-infused, malaprop-riddled graduate papers as “beautifully written.” We tolerate pompous, vacuous, pedantic prose from our leading scholars. We proudly defend our bad writing and lack of an audience as some sort of badge of scholarly honor, as an emblem of our seriousness.

In a previous life, I was a neurobiology student, in a specialty notorious for its dense, technical writing. When I wrote an article for publication, my advisor turned it back and told me, “It’s too well-written. No one will take it seriously.” That was the moment I decided to leave experimental science. History is somewhat better, but bad writing is much too often tolerated and/or ignored.

The aesthetics of historical writing, then, starts with narrative and argument. Either tell me a story or lead me through an interesting train of thought, à la Montaigne. Or both. Of course, narrative and argument are not mutually exclusive—I don’t think I’ve ever done one without the other. The relationship between them is part of the aesthetics. But I am likely to become drowsy or hungry or to suddenly remember some vital Google search I’d forgotten to do when I read paragraph after paragraph of what are essentially transcribed research notes. Rambling prose is not a sign of textured sophistication; it’s a sign of sloppiness.

Next, put people in it. I need to put a face to a name. Minor figures may only need a phrase of identification, but major ones deserve a few sentences or a paragraph of context. Jim Schwartz’s In Pursuit of the Gene does a beautiful job of taking familiar, even boilerplate episodes from the history of science—Mendel’s peas, Morgan’s flies—and pulling out the rich detail that draws you into the story. Whether or not you agree with his interpretation of the history, the book is worth reading on the strength of its aesthetics. Dry ≠ serious.

Much of the above concerns accessibility. Academics can be foolishly snobbish about not making their work accessible. When you’re a small specialty like the history of science you can quickly write yourself into irrelevance. This disdain is based on a false equation of accessibility with dumbing down. Real scholarly integrity, the reasoning goes, means not pandering to people less knowledgeable, less serious, or less smart than you. The true scholar writes for the ages, not for any particular audience.

Bullshit.

First, some scholarly ideas really are too complex. Clever academics easily hide behind complexity; “nuance” can be an excuse for not really thinking something through. And sometimes we just plain overthink things. Sometimes a cigar..., and all that. There is, then, a rigor in distilling one’s ideas. Recently I took a draft book chapter and turned it into an evening lecture for college freshmen and sophomores. I did not dumb anything down—I gave them the most sophisticated, textured argument I could, with real science and tough ethical questions. Maybe it only shows how simplistic I am, but anyway, I did not compromise my scholarly standards, and the students lapped it up with a spoon. In writing the lecture, I figured out what I am really saying in the chapter. Now I return to the chapter, drawing the main lines more boldly but also adding back shading and texture I didn’t have time for in the lecture, and the chapter is better for the experience. Yes, it’s a lot of work, but if your concern is extra work then don’t talk to me about lack of rigor.

Second, much of what makes scholarly writing inaccessible is jargon. Jargon is rarely if ever required to communicate subtle ideas. Technical terms of art can be a useful shorthand—I introduced “HSMT” above so that you wouldn’t have to read “history of science, medicine, and technology” fifteen times in this post. (And did you notice that I did it without having to put it in parentheses? But you got it, didn’t you?) But any terms one needs can be defined in what you’re writing, so that any intelligent, interested reader can follow you without taking three semesters of graduate coursework first.

It’s all too easy to use jargon to exclude readers and to cloak simple-minded ideas with the semblance of sophistication. I’m continually amazed at how many smart academics seem more concerned with establishing little tree-fort clubs with secret passwords designed to let in graduate students or keep out second-rate professors or stinky girls or boys with cooties than with actually talking about ideas.

Jargon actually tends to reduce sophistication. A lot of jargon amounts to cliché. It is a set of stock phrases that we reach for instead of thinking something through. I fear I’m going to choke the next person who “foregrounds,” “adumbrates,” or, god help us, “concretizes” something. Jargon is anti-intellectual.

Finally, there is an old-school scholarly ethos that is essentially antiquarian: the historian’s task is to uncover everything on a subject and record it for posterity. The reader is not the first consideration. This view places more emphasis on historical research; it implicitly treats historiography as a transparent record of historical evidence. In contrast, I am stressing historical expression and ideas. I see the value of history in terms of its effect on readers, and I see that antiquarian impulse as honorable but essentialist.

It is also becoming increasingly quaint. It is difficult to publish very long, abstruse books nowadays, and whether or not you think that is a bad thing, I insist that this trend does not necessarily reduce the rigor of the scholarship. It reflects a shift in aesthetics. Done right, making your work accessible is the opposite of dumbing it down. If an idea can’t stand naked, in plain standard English (or French or, yes, I insist, even German), it can’t be very important. Accessibility is a form of scholarly rigor.

One final thought on the aesthetics of scholarship concerns the role of ambiguity in our work. Scientific writing has a single intended meaning. Good scientific writing is clear, simple, unambiguous, and logical. On the other hand, poetry, fiction, and creative nonfiction, like all art, deliberately embrace ambiguity. The value of art can be measured by the number of its meanings, between members of one’s audience or even within an individual. History is somewhere in the middle, but it is more like art than most historians probably care to admit. No serious historian would argue that a given set of events has one unequivocal meaning. Indeed, it’s a cliché that history needs to be rewritten every 20 years or so. It would be radical to suggest that historical writing should be deliberately ambiguous. Clarity and simplicity are, as I’ve said, to be prized.

And yet, where is the harm in scholarship that is open to interpretation? Few historians would deny that the purpose of scholarship is to generate not answers but questions—to stimulate discussion and reflection. Attention to “beauty” is a way of allowing limited ambiguity into scholarship, loosening the joints in one’s interpretive structure. I’d even suggest that the historical interpretations that allow of multiple readings have longer lives than those that are unequivocal. If a book can have multiple meanings, it can adapt to changing times. As a footnote, this loops us back around to the importance of narrative. A good story can be chewed over, argued about, retold and reinterpreted, and become a guide to deeper reflection. Beauty can be useful.

As always, I have stated my views strongly, deliberately to provoke, in a spirit of inquiry. Although I care less and less about what counts as History of Science, or History of Medicine, or Science Studies, or any of the myriad splinters of this already-small field, I think it incredibly important and worthwhile to understand science, technology, and medicine in historical terms. It should be done both in the academy and outside of it, and it should be done so that anyone interested and dedicated enough can join the conversation without needing a diploma or a bibliography as an entrance-ticket.

Why Should We Care . . .? Part III: Maybe We Shouldn't

Author: Nathaniel Comfort

I care deeply about the history of science. But maybe I don’t care about the History of Science.

This past weekend, some of my friends and colleagues convened in New Haven for the annual Joint Atlantic Seminar for the History of Biology, a lo-fi, old-school, itinerant conference for grad students to present their work to a friendly, critical audience. JAS-Bio started in 1965 and I think has only missed one year since then, even though no one knows where it’s going to be next year until the very end of this year’s meeting, when someone traditionally stands up and says, “So...I was thinking we could do this at my place next year.” My own advisor was among the founders; I helped organize it as a grad student; and now my own students present their first papers there. It’s my favorite professional meeting.

At the end of the meeting, Bill Summers of Yale held an interview-style discussion with three faculty members—Janet Browne of Harvard, Angela Creager of Princeton, and Susan Lindee of Penn—about some issues in the profession. Summers is trained as a molecular biologist, although he has been writing and teaching on historical subjects for many years. He took an outsider’s stance, asking his panelists—“real historians,” in his self-deprecating phrase—about the state of the field.

He asked the panel to discuss the utility of the history of science for scientists; the difference, if any, between the history of science and related fields such as Science and Technology Studies (STS), the history of medicine, and the sociology and anthropology of science; and whether historians of science had anything to say anymore to philosophers of science. The discussion then ranged to the popularization of our work and the relevance of what we do to science policy and ethics. A number of remarks came out about how we train students so that they can compete for scarce faculty jobs, most of which are in straight history departments. In short, the discussion centered on drawing boundaries around our field: distinguishing what we do from the work of other people who study the social implications and context of science, parsing it relative to general history, and figuring out how to brand oneself within the academic marketplace.

In the back of many people’s minds were the recent debate between Lorraine Daston in Critical Inquiry and Peter Dear and Sheila Jasanoff in Isis, and a conference at Harvard last week on the “next 20 years” of STS. (See also commentaries by Darin Hayton, Will Thomas, and Henry Cowles on the debate.) At the end, someone handed Dan Kevles the microphone and asked him to comment on writing serious history for wide audiences. Does it mean “dumbing down” your work or sacrificing scholarly standards? How should one “position” one’s book so that it earns the respect of other scholars and gets noticed in the popular press? Kevles said not to worry about that. “Follow your passion,” he said. “Follow your own drummer.” If you want to write for a wide audience, there are ways to do that; if you want to reach a narrow but knowledgeable audience of peers, then do that. He is right. There’s no point in doing this if you aren’t passionate about it. Become too calculated, and you’re sunk.

I want to add that what counts as the History of Science could not matter less. I currently work at the interface of the history of science and the history of medicine. I find the relations between science and medicine fascinating and rich with implications for better understanding our society and even human nature. The relations between historians of science and historians of medicine, however, are petty, chauvinistic, and completely uninteresting. Ditto our relation to philosophers, sociologists, and anthropologists of science. Double ditto for the difference between “HPS” (History and Philosophy of Science, à la Cambridge) and “STS” (Science and Technology Studies, à la Cornell).

I can’t resist the old chestnut that academic debates are so vehement because the stakes are so small. This is a tiny field, people! We already have far less impact on wider discourse than our subject matter deserves. Splintering into these absurd factions undermines the enterprise and hastens our irrelevance. For students, positioning yourself as an STSer vs. a historian of science is a colossal waste of time. Better to find a project you love, and hopefully one that matters to more than half a dozen other people. Pursue it using whatever scholarly tools come to hand—those of history, philosophy, anthropology, sociology,...journalism, fiction, meditation,...motorcycle repair, saponification, or philately. Then, when it’s time to look for jobs, learn how to pitch it to different audiences. Is there a gender angle? A racial dimension? Would a medical school be interested in it? Or maybe a group of Classicists?

Envoi: The discipline doesn’t matter. The profession is secondary. Break down the barriers of your research and writing. Learn to make connections with others as broadly and imaginatively as possible. Be savvy, eclectic, and passionate. Do it for the love.

End Times (The Telos of Telomeres)

Author: Nathaniel Comfort

For Aristotle, both ethics and politics flowed from the telos, the end or purpose of all things. In what may be record time for translating Nobel Prize benchwork to biotech snake oil, telomeres are the latest rage in high-tech diagnostics. Several startups are now pitching them as a way to tell your “biological age,” a new health metric that is as baffling as it is troubling.[1]

The 2009 Physiology or Medicine prize went “for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase.” Telomeres, as I wrote in 1991, are like aglets—the plastic tips on the ends of your shoelaces—for your chromosomes. They’re long stretches of repetitive DNA that essentially keep chromosome ends from unraveling. The thing is, unlike shoelaces, chromosomes duplicate themselves. They have to—otherwise half your cells would have no genes. And every time they do, because of one of those ubiquitous little design flaws that show there’s Nobody Home Upstairs, your telomeres get a tiny bit shorter. The telomere is therefore a sort of clock, ticking down the mitoses until a cell reaches the Hayflick Limit, at which point basically chromosomal Fukushima occurs. Telomerase counteracts that shortening in certain cell types, making those cells effectively immortal.

There have always been a few people who find it irresistible to think that cellular aging determines organismal aging. Seems logical, right? I mean, we all know an organism is made of cells. When you get old, doesn’t it have to be because your cells are getting old?

Well, sort of, sometimes, in some cases, but not in any simple way. Senescence is an incredibly complex process—and nowhere more so than in humans, naturally—and it mostly happens independently of your telomere length. It’s certain that your telomeres will be shorter when you’re old than they were when you were young, but old people can have long telomeres, and young people can have short ones. For that matter, short people can have old telomeres.

Further, cellular immortality is a mixed blessing, to say the least. On the one hand, slowing cellular senescence could in principle forestall certain degrading diseases of disintegration. On the other, there’s already a name for immortalized cells: cancer.

Anyway, cellular and organismal aging operate on different time scales, and the latter involves a lot of processes that have nothing to do with the former. To equate them is to confuse variation at two different levels—the same sort of error found in claims that racial differences in IQ result from genetics rather than systematic discrimination.[2]

That neat little fallacy is the foundation of the business plan at several new biotech companies, in Spain, Houston, and of course, in Menlo Park, CA. They claim to be able to take a few of your cells, open them like sacrificial goats, spread out their entrails, and tell you how old you “really” are. These new longevity companies claim to only be raising a warning flag, pointing you toward medical treatments you might want to consider.

A century ago, the Life Extension Institute had a similar model. The New York-based company offered medical exams, denying that they were offering any medical treatment. They pitched their product directly to individuals, but their biggest customers were employers and the many life insurance companies springing up in the Progressive Era. They eventually shut down in the wake of numerous lawsuits and accusations of fraud and misrepresentation. And LEI’s cofounder, Irving Fisher, went on to found the American Eugenics Society.[3]

For a few hundred bucks, these new companies will give you one-stop diagnostic shopping. With one measurement, they claim, they can tell you all sorts of things about your overall health and well-being. You even get to pick your metaphor: you either get a “wake-up call” about how fast you are really aging, or you can have your “check engine light” checked, according to spokesmen for the companies quoted in the Times article. The check-engine light in my car goes on for free, but after that I guess that in both cases it means a lot of expensive, computer-based diagnostics, and a lot of fervent hopes that all you did was pop a circuit-breaker.

Some of the scientists involved show a refreshing degree of candor. Jerry Shay, of UT Southwestern and on the board of the company Life Length (didn’t I get email from Nigeria about a similar product the other day?), acknowledged that although they won’t tell anyone how long they will live, insurance companies might want this information “to set rates or deny coverage.” In other words, they’re perfectly happy to sell the actuarial illusion that they can tell anyone how long they will live.

A Spanish telomerista, Maria Blasco, said the telomere test might prove helpful to people “especially keen on knowing how healthy they are.” Never mind the deeply problematic notion of playing to the fears of bored upper middle-class hypochondriacs; outside of a couple of risk factors for very specific and rare diseases, no one has any idea what this tells you about how healthy you are. But hard data and actual products tend to be hell on stock prices anyway.

Everyone got that? They’re telling us this is a scam. Now look, I am not saying these folks are dishonest, or even cynical. I think they are a bunch of basically honest scientists swayed by the allure of high-tech translational biomedicine. It’s the scientific version of the American Dream. And it comes with the always-handy Biomedical Moral Pass. It’s a great example of overfunded, overhyped science that benefits corporations and stockholders but may well do patients more harm than good.

The thing I wonder about most, though, is this idea of “biological age.” Apparently, the age I think I am—which I have until now naively correlated to the number of birthdays I’ve had, and never mind—is now “non-biological.” It’s like an acoustic guitar. There was no such thing as an acoustic guitar until the electric guitar was invented. All guitars were acoustic. Are there now multiple kinds of time—chronological, biological, and who knows what else—where I can be getting older in one dimension and younger in another? Have these biologists actually reinvented time?

Or have they figured out what time really is? Is my chronological age now merely a figment, a simulacrum—a fictive representation of some supra-biological horological process? If so, I don’t think I like where this is going. We’re not headed down the sunny, leafy lane of, “You’re only as old as you feel! Have another bran muffin and go enjoy the morning.” This is more like the gray, trash-strewn alley of, “You poor dumb bastard. You only think you want to go to that punk show downtown this weekend. Sit down, shut up, and drink your Mylanta.”

It sounds like the beginning of the end.

[1] Andrew Pollack, “A blood test offers clues to longevity,” 2011-05-18 (http://nyti.ms/iIO3gv).
[2] In other words, conflating within-group and between-group variation. Whatever IQ is, it is highly heritable. But heritability is a measure of variation within a group. The heritability of IQ is different for different groups under different conditions. It simply cannot be used as a measure of how “innate” a trait is.
[3] I write about Fisher in chapter 2 of my forthcoming book, “A Science of Human Perfection.”

Retirement Fail . . .

Author: Shawn Baker

Well, I find myself a little less retired than I expected to be. My intention was to take a good long break before looking for something interesting. It seems, however, that something interesting has found me. I'm doing some work for a Danish startup called BlueSEQ.

They're setting up a sequencing service exchange. If you've got a sequencing project but no sequencer (or not enough capacity), you can put your project on the exchange and compare the various bids (to get the best price, the right platform for the job, the level of expertise you need, etc.). If you're a service provider or core lab with excess sequencing capacity, you can bid for projects to keep your pipeline full. The service is free for researchers, while the service providers pay a small service fee (but only for successful bids).

The best part is that the whole process is centered around the Project Design Tool. It takes researchers step by step through the process of describing their experimental needs, ensuring that both the researcher and the service provider are on the same page and everybody is getting exactly what they expected. The project designs can be saved and reused for future projects to maintain consistency.

So where do I come in? I'm developing the BlueSEQ NGS Knowledge Bank, a neutral repository of all things next gen sequencing. I've got the basics up now with general descriptions of the major platforms and applications as well as a list of NGS-focused conferences and meetings. Have a look and tell me what you think. I'd love to hear your thoughts about what's there now and what you'd like to see in the future.

Check out the official press release from BlueSEQ.

Why Should We Care? Part II. History As a Way of Knowing

Author: Nathaniel Comfort

Last week I raised a rhetorical question about why we or anyone else should care about the history of science. I see two broad categories of answers to that question:

Because it is useful or because it is beautiful.

Today I want to talk about being useful. In her recent book, Heredity and Hope, Ruth Schwartz Cowan writes about using “the historian’s tools” to better understand bioethical questions surrounding genetic testing and screening that ordinary, non-scholarly people often have to face. What are those “historian’s tools,” exactly? What is it that training in historical research and reasoning, combined with knowledge of science, can bring to the table in the wider marketplace of ideas?

Historicism is one of the fundamental ways of knowing. In the 1980s, the fashion was for science as a way of knowing; John Pickstone’s history of science and medicine text cleverly pirates that late Cold-War phrase. “Science as a way of knowing” implies an emphasis on logic over emotion, and of course on evidence over belief. When the debate is between science and religion, I come down solidly on the side of the rational empiricists.

But science is also about reasoning strategies—it is, as Lindley Darden and colleagues have carefully articulated, about mechanisms. Science analyzes the gears and levers of a natural system, cleans them up in solvents, and lays them out on a clean paper towel. It then painstakingly reassembles and lubricates them. And then—the ultimate demonstration of scientific understanding—it manipulates them and predicts the results. Scientific understanding is eternal, independent of time. A watch just runs; this gear connects to that and trips that lever. Not all science works that way, of course, but the main exceptions are fields such as geology, ecology, and evolutionary biology—the “historical” sciences.

History as a way of knowing implies cause and effect through time. It is contingent, local. Scientists often dismiss historical contingency as a kind of randomness or pointlessness:

“For historical reasons,” writes the distinguished scientist Harvey Lodish in his textbook Molecular Cell Biology, “the names of various cyclins and cyclin-dependent kinases from yeasts and vertebrates differ.” In other words, it’s more to remember, I know, but sorry, you’ll just have to deal with it.

“Primarily for historical reasons,” begins chap. 13 of Current Protocols in Molecular Biology, “most studies on yeast have involved Saccharomyces cerevisiae (hereafter termed yeast).” I.e., pombe is just as good a model organism and in some ways better, but the decision wasn’t made rationally.

“For historical reasons,” write Peter Atkins and Loretta Jones in Chemical Principles: The Quest for Insight, “the molecular formulas of binary hydrogen compounds of Group 15/V elements are written with the Group 15/V element first.” Sorry, no insight possible with this one. Don’t you hate it when nomenclature has these messy bits with no rational basis?

I guess I should have realized I was heading toward history when, back in neurobiology grad school, I found such explanations bemusing and dissatisfying. Oh, the stories behind those “historical reasons”! But we’re getting ahead of ourselves and starting to talk about beauty.

Historical reasons, then, are not governed by iron laws of determinism. They could have been otherwise. As historians of science have gotten bolder in recent decades, we have shown that it’s not just nomenclature and choice of organism—everything in science is for historical reasons. Contingency tints almost every action of the lab. Now, in my view those who have pushed this line the hardest—for example the Edinburgh School and Bruno Latour’s crew—became polemical about it. They seemed to be trying to push mechanism and reasoning out of science altogether, to deny that scientists actually understand what they are doing. Most people not romantically attached to high-falutin’ French theory recognize that scientists in fact have an extremely robust understanding of nature that often works astonishingly well. Science is predictive in ways that historians can only fantasize about.

Scientists take complex phenomena and make them simple, by isolating variables, making simplifying assumptions, modeling, and other methods. Historians like to take something simple and make it complicated. A mundane recipe book contains clues to gendered ideas about medical care in the 17th century. A modern laboratory is a zone of metaphorical exchange of resources and knowledge production. An ancient Chinese drawing reveals a culture’s views of the body and helps explain its poetic system of five elements. One can get carried away complicating the simple—sometimes a stethoscope is just a stethoscope—and we need to guard against long-windedness. But in today’s world of soundbites and cynical spin, nuance and texture can be salutary.

Historians can put current and recent events into a broader, more nuanced perspective than journalists can. Naomi Oreskes and Erik Conway’s recent book, Merchants of Doubt, is doing a superb job of awakening people to long-term trends in the political manipulation of scientific data. In the so-called “Panda trial” over the teaching of evolution, Barbara Forrest’s superb and witty analysis of the transformation of the creation-science textbook Biology and Origins into the Intelligent Design text Of Pandas and People gave the lie to ID proponents’ claims that the two were unconnected.

The history of science began as a way to document and, often, to celebrate science, and much is still written in that vein. But nowadays such scholarship—which can be valuable, interesting, and pleasurable—is balanced by other scholars who take the role of science critics. This, it seems to me, is especially important in biomedicine today, a field so rich and powerful and with so many cheerleaders that it desperately needs a coalition of informed, skeptical analysts. Once again, historical cause-and-effect can be an anodyne to scientific determinism.

Memorable history of science often contributes a pithy phrase that captures a process or event. Think of Kohler’s “breeder reactor” in the Drosophila room, or Peter Galison’s “trading zone.” Kuhn’s “paradigm shift.” Butterfield’s “Whig history.” A good phrase changes the way people think. Careful use of language is not just cosmetic; it is integral to the value of scholarship.

I’m sure there are many more ways the historian’s toolkit bears on contemporary issues—I look forward to comments. But I’ll finish with a plea to reach out to an audience beyond the half-dozen people with a professional interest in your topic. One of the hazards of serious scholarship is a kind of vanity, a tendency to think that everything we have found is so important that it doesn’t matter whether or not anyone else is interested. I admire the integrity of that sort of uncompromising scholarly standard, but the principle of utility is a counterweight. It places value on productivity, on access, on changing people’s minds. The history of science has much to say about contemporary debates over the teaching of evolution, genetic medicine, climate change, healthcare policy, and energy policy, to name but a few. Those of us fortunate enough to get paid (at all) to do scholarly research have a duty to reach somebody, make an impact, change something.

Who Cares about the History of Science?

Author: 
Nathaniel Comfort

I want to start what I hope will be not only a series of posts but also a discussion about the value of the history of science. We don’t often stop to think about—let alone systematically formulate a set of justifications for—our field. But the question matters, and it bears on things that affect our daily lives. Why do we teach? Why should the NSF, NEH, NIH, or any other funder give us a grant—or our university pay our salaries? Who should publish our books? On what basis should we recruit graduate students?

Imagine you are a Dean. The university president is expecting your budget and he demands that you cut $10M from it. Painful decisions have to be made. Well-intentioned but unproductive junior faculty are going to be denied tenure. Maybe some of those interdisciplinary programs that were so trendy ten years ago can be cut. Who really takes Classics any more? And what about that program for the history of science? Can’t that be trimmed? Let’s float a proposal to axe it and see what happens.

Okay, now imagine you direct that history of science program. How do you respond to the Dean’s proposal? Why should your program not be sacrificed?

I absolutely think our field deserves a place in the curriculum and on the bookshelves, indeed a more prominent one than it now has. And to get there, we have to do some soul-searching and be honest with ourselves.

We have, I fear, painted ourselves into a corner in the last few decades. The history of science, of course, used to be done mainly by retired scientists. Then Thomas Kuhn came along and turned the whole enterprise on its head, and then the Edinburgh School and the Strong Programme turned Kuhn on his head, and then we had tea. The history of science differentiated itself from the field it studied; it fledged, rebelled against its parent—in a word, it professionalized.

But it got carried away. In the 1990s, the “science wars” pitted us against the scientists, and Alan Sokal’s brilliant—yes, brilliant—hoax exposed the pose of much scholarly science studies, making meticulously articulated arguments mockable. Science scholars’ strategy of dismantling their own subject in order to demonstrate their own sophistication backfired; society backed the scientists. It was the scholars who came off seeming ridiculous.

In the past decade, where there ought to have been a pendulum swing, there has been inertia. For the most part, the history of science and science studies have become irrelevant to wider social discussions. There are important exceptions to this—most recently, Naomi Oreskes and Erik Conway’s Merchants of Doubt has engaged important contemporary scientific issues with serious scholarship. This stuff can be done, people.

This is just a hint of one of the reasons that science scholarship—a term I’ll use to refer to the history of science, sociology of science, philosophy of science, and science studies—matters. And there are, I think, really only a few fundamental justifications for it. We need to think pragmatically, but we can’t lose sight of aesthetics and principles, either.

I want to explore these reasons in an occasional series of posts. Let’s begin.

New Guest Bloggers

Author: 
Theral Timpson

We're happy to welcome two guest bloggers to mendelspod.com.

Nathaniel Comfort, Ph.D.

Nathaniel is an associate professor at Johns Hopkins University, where he teaches and writes on the history of biology. Comfort is currently writing a book, The Science of Human Perfection, a history of American medical genetics. Of the project, Comfort says,

"I am interested in heredity and health in 20th century America. My current book project examines the growth and evolution of medical genetics from the early days of Mendelism to the Human Genome Project. In it, I show that heredity, health, and human improvement have always been intermingled; there was no break when medical genetics became "legitimate." The professionalization of medical genetics that began around mid-century involved many refinements of the message, but the old goals of human improvement dating back to Francis Galton carry down to present-day efforts such as gene therapy. Likewise, trendy contemporary notions of individualism and personalized medicine have roots back in the late nineteenth century, with Archibald Garrod's emphasis on diathesis and biochemical individuality. I strenuously avoid labeling one of these good and the other bad; these twin impulses resonate with and feed off of one another, and both have inspiring and sobering implications for how we think about health and identity today."

We found Nathaniel over at his own blog, www.Genotopia.com, after reading one of his terrific posts on April Fool's Day. Since then he and I have been trading emails back and forth, comparing notes on Dawkins' The Selfish Gene and how new genetic information impacts society. Comfort sees his role as stepping back and taking a look, a longer look, at scientific discovery as it's made. I stole this line from one of his emails:

"If science is king, then it needs a jester to give it honest opinions and evaluations about how its judgments will affect the public. Someone who is critical out of loyalty, not treason, and who knows how to amuse and entertain while delivering his barbs--both for pure fun and so he doesn't get himself beheaded!"

Comfort is eager to hear your comments. And, by the way, he says he's heard about every pun there is on his name.

Shawn Baker, Ph.D.

Shawn recently retired from Illumina after helping the company skyrocket from a 15-employee start-up to the shining star it is today. But you can't know as much as Shawn knows, love the field of genomics, and stay retired. Especially at his tender young age. See his blog, Retirement Fail . . .

Having received his Ph.D. from the University of California, Davis, Shawn started his career as a Research Scientist at Illumina. After spending several years at the bench developing gene expression array products, he transitioned to Product Marketing. In addition to consulting for BlueSEQ, Shawn is the founder of www.BiotechCareerCenter.com and a part-time blogger at www.biotechmarketer.blogspot.com.

Earlier in the year, I interviewed Shawn for our show, Looking for a Job is a Full Time Job. Shawn began his website BiotechCareerCenter.com before joining Illumina and has kept it updated. It's a terrific resource for those seeking a job in the industry, with links to some of the popular job-hunting websites (www.biospace.com, www.sciencebiotech.careerbuilder.com) and to some smaller local organizations as well (www.biocom.org in San Diego and www.michbio.org in Michigan). The site also includes a job board. Maintaining the site and interviewing over a hundred job applicants while at Illumina have given Shawn some insight into how one might go about seeking a new job in this industry.

It's not all work for Baker. He just returned from Istanbul, Cappadocia and Ephesus, where I'm sure he forgot, at least for a few hours, about A, G, C and T.

Welcome, Shawn and Nathaniel.


