Blog


A Year Later: Indie Scientist, Ethan Perlstein

Author: 
Theral Timpson

Social media sites are all the rage.  But how much are they really impacting life science?

Ask Ethan Perlstein that question, and he’ll tell you “a lot.”  

We featured Ethan last July in an interview, Life Scientist Goes Indie.  Moving to the Bay Area and living on his savings, Ethan was sharing some big ideas over social media sites, his blog, and with any reporter who would listen.  He talked of moon shots and breaking into the rare disease area with an entirely new kind of drug development, what he calls evolutionary pharmacology.    And he wouldn't be going the typical academic route.  He was done with academia and grants from the NIH.  He would try other funding routes.  He was going indie and encouraging his fellow scientists to do the same.

So we followed up with Ethan to see how that has worked for him.   Were his ideas doable, or were they just the understandable rebound of an ego that had been hurt by the system?    In short, does going indie really work?

The answer is looking pretty good.   Ethan is now operating in a new lab at QB3 in San Francisco.  He has raised somewhere north of one million dollars and is joined by a team of five and a cadre of top tier advisors.  Indie science is taking shape.

Video clips from our interview with Ethan

Let’s back up a bit.  Ethan ran a lab as a postdoc at Princeton where he got an independent research fellowship to work on validating a new evolutionary, yeast-based approach to studying how drugs work.  Then the fellowship ran out.  He was out on the street as so many postdocs are today.  Ethan applied around the country at thirty universities where he might continue his career.  None of them accepted him.

"Just hang in there and keep trying," he was told.  "This is typical for young scientists today."

Or he could have joined industry.  

Neither appealed to Ethan.   He decided to go rogue.

And the timing was right.  No sooner did he take to Twitter and blogging than the world spoke back to him, saying, tell us more.  He became a media favorite with a vision of breaking out of the academic system--what he calls the “postdocalypse.”   He gave crowdfunding a shot--and succeeded.    Soon the Wall Street Journal was writing about him.  

Since then he has turned his ideas into a new reality.  He calls it Indie Science, which he defines as a combination of the good parts of academia and the commercial enterprise.  

“We let the curiosity drive the research as in academia, but insist on deliverables typical of industry,” he says.

The name Ethan has chosen for his enterprise, Perlstein Lab, PBC, reflects this combination of academia and industry.  PBC stands for Public Benefit Corporation, a new class of corporation that allows companies to pursue profit as well as a strong social or environmental mission.  Standard corporations must pursue shareholder value.  But a PBC can also build value for the stakeholders--employees, suppliers, a community, or the environment.  

As we sit in the shared conference room at QB3 and catch up on the last year before we look in on the actual lab, I can’t help but wonder if the name “Perlstein Lab” will become something bigger than one scientist’s independent accumulation of bench and researchers.  Will Perlstein Lab represent a new model of biological research and drug development?

Ethan attributes his success this year in building the Indie Science brand to Twitter, his blog, and the chance to hang out with the team of a start-up.

I know firsthand that Ethan is very active on Twitter, but how does that turn into funding, I ask.  He says that if there’s a conversation happening about rare diseases, he’s usually jumping into the middle of it.  One of his tweets impressed the CEO of a drug development company in the rare disease space.  The CEO was also an independent investor.  Now he's Perlstein Lab's top investor.  Other investors came much the same way.  

Ethan says the early partners of the business, such as patient advocacy organizations and advisors, have come through his media coverage, most importantly the Wall Street Journal article.

I ask Ethan to tell me more about the Indie Science brand.  What is it?  And can others duplicate it?

Team at Perlstein Lab, PBC

Perlstein says that his weekly lab meeting is a lot like those that happen in the university.   He refers to his team as "academic castaways."   The scientists at Perlstein Lab, PBC will be more free, he says, to pursue their own curiosity than those in a typical drug development company.   In addition to the lab meeting, the team gets together each Friday for happy hour where they are all peers. (For more details on indie science culture, see video clips of our interview with Ethan above. And for more on the team members, they're featured nicely at the Perlstein Lab blog.)  

Ethan’s lab is housed at QB3, which is no small part of his success.  This is an incubator space sponsored by Janssen Labs that has been reported on widely and houses some promising startups.  Significantly, Perlstein and his team have saved over a quarter million dollars on capital equipment, which is shared among the various labs at the incubator.

Perlstein is doing what he calls evolutionary pharmacology, using simple model organisms such as yeast for research.  These organisms grow faster and therefore potentially offer a quicker timeline. Part of a wave of recent  research into the rare disease space, this new approach has yet to prove itself.  

Still, to have come this far, Ethan must be acknowledged some success in breaking out from the traditional funding system.  With big pharma more open to non-traditional partnerships, one can easily see Ethan finding an exit for his assets.  

Doug Crawford oversees the QB3 facility where Perlstein Lab, PBC is located. He recognizes Ethan's trailblazing.

"Ethan has proven that there is a larger and more diverse community of investors interested in early stage life science than we have been accessing," Doug told me. "I think he is out in front of what could become an important trend."

We stroll over to the lab itself. Our chat is immediately cut off by one of the scientists asking Ethan a question.  The team--who would otherwise be looking for academic research positions--has yet to do any real research.  They are busy setting up equipment and validating protocols.   As Ethan poses for some pictures with the team, he continues to fill me in on some of the supplier partnerships, such as that with the virtual lab manager, HappiLabs. Before we leave, he begins to talk of an Institute for Rogue Scientists.   

I can see myself coming back in another year to chat with some of the rogue scientists at Perlstein Lab, PBC.  What work will they have accomplished?  Will the investors be happy?  Will Perlstein Lab, PBC be that different from any other biotech?

Ethan goes by the traditional title of CEO and says he won't be able to work at the bench himself anytime soon.  Like any CEO, he has to raise more money.

What then is a rogue scientist?  I ask.

“One who operates in a more creative space,” he replies.  “We don’t have to live or die by what the gatekeepers say.”

Raising the Standards of Biomarker Development - A New Series

Author: 
Theral Timpson

We talk a lot on this show about the potential of personalized medicine. Never before have we learned at such breakneck speed just how our bodies function. The pace of biological research staggers the mind and hints at a time when we will “crack the code” of the system that is Homo sapiens, going from picking the low hanging fruit to a more rational approach. The high tech world has put at the fingertips of biologists just the tools to do it. There is plenty of compute and plenty of storage available to untangle, or decipher, the human body. Yet still, we talk of potential.

Chat with anyone heavily involved in the life science industry--be it diagnostics or pharma-- and you’ll quickly hear that we must have better biomarkers.

Next week we launch a series, Raising the Standards of Biomarker Development, where we will pursue the “hotspots” that are haunting those in the field.

The National Biomarker Development Alliance (NBDA) is a nonprofit organization based at Arizona State University and led by the formidable Anna Barker, former deputy director of the NCI. The aim of the NBDA is to identify problem areas in biomarker development--from biospecimen and sampling issues to experiment design to bioinformatics challenges--and raise the standards in each area. This series of interviews is based on their approach. We will pursue each of these topics with a special guest.

The place to start is with samples. The majority of researchers who are working on biomarker assays don’t give much thought to the “story” of their samples. Yet the quality of their research will never exceed the quality of the samples with which they start--a very scary thought, according to Carolyn Compton, a former pathologist, now professor of pathology at ASU and Johns Hopkins. Carolyn worked originally as a clinical pathologist and knows firsthand the issues around sample degradation. She left the clinic when she was recruited to the NCI with the mission of bringing more awareness to the issue of biospecimens. She joins us as our first guest in the series.

That Carolyn has straddled the world of the clinic and the world of research is key to her message. And it's key to this series. As we see an increased push to "translate" research into clinical applications, we find that these two worlds do not work enough together.

Researchers spend a lot of time analyzing data and developing causal relationships from certain biological molecules to a disease. But how often do these researchers consider how the history of a sample might be altering their data?

"Garbage in, garbage out," says Carolyn, who links low quality samples with the abysmal non-reproducable rate of most published research.

Two of our guests in the series have worked on the adaptive iSPY breast cancer trials. These are innovative clinical trials that have been designed to "adapt" to the specific biology of those in the trial. Using the latest advances in genetics, the iSPY trials aim to match experimental drugs with the molecular makeup of the tumors most likely to respond to them. And the trials are testing multiple drugs at once.

Don Berry is known for bringing statistics to clinical trials. He designed the iSPY trials and joins us to explain how these new trials work and the promise of the adaptive design.

Laura Esserman is the director of the breast cancer center at UCSF and has been heavily involved in the implementation of the iSPY trials. Esserman is concerned that "if we keep doing conventional clinical trials, people are going to give up on doing them." An MBA as well as an MD, Esserman brings what she learned about innovation in the high-tech industry to treatment for breast cancer.

From there we turn to the topic of “systems biology,” where we will chat with George Poste, a tour de force when it comes to considering all the various aspects of biology. Anyone who has ever been present for one of George’s presentations has no doubt come away scratching their head, wondering if we’ll ever really glimpse the whole system that is a human being. If there is one brain that has seen all the rooms and hallways of our complex system, it’s George Poste.

We’ll finish the series by interviewing David Haussler from UCSC of Genome Browser fame. Recently Haussler has worked extensively on an NCI project, The Cancer Genome Atlas, to bring together data sets and connect cancer researchers around the world. What are the promise and pitfalls David sees in the latest bioinformatics tools?

George Poste says that in the literature we have identified 150,000 biomarkers that have causal linkage to disease. Yet only 100 of these have been commercialized and are used in the clinic. Why is the number so low? We hope to come up with some answers in this series.

Why Hasn't Clinical Genetics Taken Off? (part 2)

Author: 
Sultan Meghji

In my previous post, I made the broad comment that education of the patient and front line doctors was the single largest barrier to entry for clinical genetics. Here I look at the steps in the scientific process and where the biggest opportunities lie:

The Sequencing (still)

PCR is a perfectly reasonable technology for sequencing in the research lab today, but the current configuration of technologies needs to change. We need to move away from an expert-level skill set and a complicated chemistry process in the lab to a disposable, consumer friendly set of technologies. I’m not convinced PCR is the right technology for that and would love to see nanopore be a serious contender, but a lack of funding for a broad spectrum of both physics-only and physical-electrical startups has slowed the progress of these technologies. And waiting in the wings, other technologies are spinning up in research labs around the world. Price is no longer a serious problem in the space - reliable, repeatable, easy to use sequencing technologies are. The complexity of the current technology (both in terms of sample preparation and machine operation) is a big hurdle.

The Analysis (compute)

Over the last few years, quite a bit of commentary and effort has gone into making the case that the compute is a significant challenge (including more than a few comments by yours truly in that vein!). Today, it can be said with total confidence that compute is NOT a problem. Compute has been commoditized. From excellent new software to advanced platforms and new hardware, the analysis is a trivial exercise and costs a tiny amount of money (under $25 per sample on a cloud provider appears to be the going rate for a clinical exome in terms of platform & infrastructure cost). Integration with the sequencer and downstream medical middleware is the biggest opportunity.

The Analysis (value)

The bigger challenge on the analysis side is the specific things being analyzed as mapped to the needs of the patient. We are still in a world where the vast majority of the sequencing work is being done in support of a specific patient with a specific disease. There isn’t even broad consensus yet in the scientific community about the basics of the pipeline. A movement away from the recent trend of studying specific indications (esp. cancer) is called for. Broadening the sample population will allow us to pick simpler, clearer and easier pipelines, which will then make them more adoptable. It would be a massive benefit to the world if the scientific, medical and regulatory communities would get together and start creating, in a crowdsourced manner, a small number of databases that are specifically useful to healthy people--targeting things like nutrition, athletics, metabolism, and other normal aspects of daily life. A dataset that, when any one person’s DNA is referenced against it, would return something useful. Including the regulators is key so that we can begin to move away from the old fashioned model of clearances that still permeates the industry.

The Regulators

Beyond the broader issues around education I referenced in my previous post, there is a massive upgrade in the regulation infrastructure that is needed. We still live in a world of fax machines, overnight shipping of paper documents and personal relationships all being more important than the quality of the science you as an innovator are bringing to bear.

Consider the recent massive growth in wearables, fitness trackers and other instrumentation local to the human body. Why must we treat clinical genetics simply as a diagnostic and not, as it should be, as a fundamental set of quantitative data about your body that you can leverage in a myriad of ways? Direct to consumer (DTC) genetics companies, most notably 23andme, have approached this problem poorly - instead of making it valuable to the average consumer, they have attempted to straddle the line between medical and not. The Fitbit model has shown very clearly that lifestyle activities can be directly harnessed to build commercial value in scaling health related activities without becoming a regulatory issue. It’s time for genetics to do the same thing.

State of Biotech Turns to State of Disbelief with Fraud Allegations against Steve Burrill

Author: 
Theral Timpson

Earlier today Nathan Vardi at Forbes broke the news that one of biotech’s most notable investors, Steve Burrill, was ousted from his own company’s fund and has been sued for fraud by a former managing partner.   The news hit the biotech community like a major earthquake, particularly in the Bay Area where Steve has been advising and funding biotech companies since the founding of Cetus and Genentech. 

Reporting from documents just filed with the California State Court in San Francisco, Vardi writes that thirteen investors in Burrill Life Sciences Capital Fund III, including the Treasury of the State of North Carolina, Oregon Investment Fund, Unilever, Monsanto, and Celgene, removed Steve as general partner of the fund in March.  

And why?  The investors in Fund III claim that the Glass Ratner Advisory Capital Group audited the fund and reported that nearly $20 million had been paid to “various designees and affiliates” beyond what had been earned in management fees.

Steve Burrill at One Embarcadero Center 

Steve Burrill has built up a venerable name in biotech, having participated in getting the first biotech companies off the ground, funding some of the winners over the years, and publishing financial data and other industry news in The Burrill Report.  

We’ve featured Steve on the program each of the last three years to review his annual State of Biotech book which covers the latest trends in everything from the R & D crisis in big pharma to the new digital health apps.

While I’ve heard various opinions about Steve--most of which go something like “Steve is all about Steve Burrill” and “Steve is a brilliant guy, but too heavy handed”--allegations of fraud and dishonesty have never been among them.  If true, these claims would stain what has been a phenomenal career.

The Forbes scoop has been tweeted right and left today, but with no more comment than:

“Wow.”

“Whoa!”

“Big.”

“Yikes.”

@ArthurKlausner wrote the most in his tweet:  Stunning News about Steve Burrill -- hope there’s an “explanation.”

Alex Lash of Xconomy put out a piece this afternoon with the same details from the Forbes article and a link to the fraud lawsuit against Steve filed by Ann Hanham, former managing partner at Burrill and Co.  

A story like this takes some time to digest.

I emailed Steve this morning for comment, but so far nothing has come back.  

And I’ve reached out to some of our local advisors, but no one wants to go on record yet.

Steve Burrill has been an advisor to Mendelspod, and agreed to an in-depth interview shortly after we got up and running.  (It doesn’t take long to run into the Burrill brand in this industry.)  He’s been very approachable and even offered up his rolodex, saying he’d reach out to anyone we wanted to get to the program.  We’ve partnered with Burrill and Co.’s media division, attending and reporting on their always high level conferences.  Last year Burrill and Co. partnered with the Buck Institute on a conference on aging that was a first of its kind.  I’ve valued all of my chats with Steve, both formal and informal, over the past few years.  He’s offered great advice and pointed us to key trends.

At Burrill's Personalized Medicine conference last September, as I reported here, there was quite a break in tone from previous conferences.  After that we couldn’t reach some of our steady contacts in the media division.  On December 3rd, Luke Timmerman, biotech editor at Xconomy, wrote a piece, Burrill VC Fund Splits into New Firm, Biomark, after Short Marriage.  Timmerman interviewed Steve and partner David Wetherell for the piece and even probed them to see if there were any “disagreements” over strategy with what was called the Burrill Capital Fund IV.  

“No,” came Wetherell’s answer.

Well, it turns out there were already problems with Fund III.  According to the Forbes piece, Burrill and Co’s managing partner, Ann Hanham “discovered that a substantial portion of the money that had been raised from Fund III limited partners had gone missing.”  Hanham’s number?  About $20 million.  

What appears to be the most damning evidence against Steve is a set of quotes from an email he himself wrote urging Hanham and her colleagues not to report the missing funds to the investors.

“We can earn our way out of trouble,” Steve wrote.  “What we need is revenue to solve our problems n just timely enough to meet any capital calls which might be needed . . . Each of u are part of the solution.”

I've heard it said around the industry that Steve doesn’t hire anyone who will give him trouble.  Perhaps that all changed with Ann.

So far it’s a "she said" story without the "he said."   But with some big consequences.  The investors of Fund III have heard enough to be convinced that they are done with Steve.

Steve has been using the title, CEO of Burrill Media.  As Timmerman reported back in December, Steve had let go some of the media staff and made contractors out of the rest in what he called a "rightsizing."   For example, Danny Levine has moved on to create his own company, but still produces the weekly podcast for The Burrill Report.  

I interviewed Steve just last month to go over his annual book (which came out on time), and Steve carried on pretty much as usual.  I noticed in his email signature that his address had changed from the 27th floor of the prestigious One Embarcadero Center building to a “Suite 120” in The Presidio.  A fire engine siren sounded during the interview.  With the address change, I observed to Steve that he must be down on the ground floor now.

“Yes.  I’m very grounded now,” he replied.

 

Some Glimpses into the Challenges of Data Visualization Panel Event

Author: 
Theral Timpson

Big Data might offer tremendous breakthroughs in healthcare and personalized medicine.  But with terabytes and petabytes now flooding organizations, old architectures aren't able to keep up.

Take the genome, for instance.  We know that there is a ton of valuable information in there.  But how does one go about looking at it?  Doctors have very little time as it is, and decision making becomes a burden because it takes days to get answers to questions, if they come at all.  And what about the opportunity to get genomic data to the lay person, as 23andMe was doing?  

On June 5th, about sixty of us turned the Oakland office of Omicia into an event venue for networking and a discussion, "Delivering Genomic Medicine: Challenges in Data Visualization and Reporting."

Panelists were Martin Reese, the CSO and founder of Omicia; Michele Cargill, a genetic scientist at InVitae; and Adam Baker, a product designer at the new startup, Iodine.   The audience, including Omicia's new CMO, Paul Billings, and a group of 23andMe folks, actively drove the discussion on what is clearly a hot topic.

Some pictures and takeaways:

-While visualizations of the genome have come a long way, genome interpretation companies still have to find new, simpler ways of presenting their data.

-Doctors want a simple, green-yellow-red light kind of answer to their question. They don't have time to go through long reports.

-Online reports are best.  They offer the chance to show a simple report up front with the possibility to click through for further information if needed/desired.

-Genetic counselors have a critical, but not very fun job.  The science is often vague, and patients' knowledge of genetics is limited.

-Success in genomic reporting will depend in large part on the user interface. 

-You don't have to dumb down the presentation. Yelp and Expedia are examples of complex data sets that are navigated by millions every day.

 

Panelists Martin Reese, Michele Cargill, and Adam Baker with Moderator, Theral Timpson

Matt Landry, Nola Masterson

Paul Billings

Ashley Dombkowski 

The event was sponsored and co-produced by Chempetitive Group, a life science marketing company that is "introverted about marketing and extroverted about science."

Why Hasn't Clinical Genetics Taken Off?

Author: 
Sultan Meghji

Insiders to genomics are looking around and, generally over a tasty adult beverage, bemoaning the lack of forward progress on the clinical side of adoption. Why haven't clinical adoption rates gone up faster? What’s making this hard? I’ve become frustrated over the last few years while raising a significant amount of money across a number of companies, all trying to speed up the scale of adoption in the non-sick population. Looking back, looking around, and seeing how the current landscape of startups and new activities in clinical genetics is being run, I’ve come to the following conclusions.

1.      Why should I (the patient) care about my genetics?

It’s expensive. It’s confusing. It doesn’t give me any actionable information. We’ve seen how quickly the alpha consumers are taking up activity trackers and other action-oriented technologies; the lack of an equal demand for clinical genetics is a clear indication of the problem.  The general population just doesn't see the point in utilizing clinical genetics and isn't asking for it.

2.     The case hasn’t been made to the physicians

The vast majority of front line caregivers acknowledge the technological advances but just aren’t convinced that genetics would make a useful difference in healthcare. The current environment has placed an onerous burden on changing the standard of care, with the exception of a new pill to replace the old one, usually at a pricing premium. Not too long ago I was talking to the #2 at a major regional medical center (an oncologist by training and practice), and the quote from him I took away was, “..clinical genetics doesn’t matter to me in my practice. There is no proven utility to the vast majority of our patients.” 

3.     The front line caregivers need to be (re)educated

Given that most front line caregivers do not have an education in molecular biology, the entire dialogue around genetic information needs to start from scratch, and there isn’t a common platform to do that.  An easy way to discover this is to ask your primary care physician (a) whether they took molecular biology in college and (b) how many molecular tests they’ve ordered this year. For far too many the answers are “no” and “zero”. 

4.     The hospitals, clinics and labs which the physicians have access to don’t offer full service, cost effective solutions

Another interesting question to ask your PCP is what tests are actually available now. Is there a test that they can order right off the menu from their office/lab/hospital/clinic? Again, for the vast majority of doctors, even if they had an educated and motivated patient and an educated, motivated and willing doctor, the local infrastructure doesn't support this. The one shining light of hope is for people with chronic or specific issues, especially around cancer. 

There is a tremendous amount of effort being put into “4” and the other supporting systems, but until 1, 2, and 3 are resolved we’ll never see the broader impact we hope for.  There are a few interesting movements out there, but right now change can be measured generationally. A ray of hope comes from the current pain in the public educational system, where K-12 is using resources like the Khan Academy, which has a nice introductory set of content on genetics.

In an environment where most PCPs are between the ages of 50 and 55 (AMA), coupled with the increasing age of the general population, we find ourselves in a situation where most people inside the medical ecosystem do not understand genetics.  This is true on both the patient and the provider side.  The ecosystem is currently focused on financially oriented activities (the ACA, playing nicely with Medicare/Medicaid and the health insurance providers) vs. actually integrating new useful technologies.  

Inside the industry, we still talk about the lack of adoption as a scientific or technical issue, when actually this "last mile" issue is with doctors and the average person just not seeing the value. We need to do a much better job of educating the general populace as to the value of genetics in their daily lives. 

Join Us Next Week for a Discussion about the Challenges of Data Visualization and Reporting in Genomic Medicine

Author: 
Theral Timpson

Next Thursday, June 5th, Mendelspod teams up again with Chempetitive Group to bring you an evening of networking and a special panel discussion.

"Delivering Genomic Medicine: Challenges in Data Visualization and Reporting"

When:  Thursday, June 5th

   Networking:  5:30 pm

   Panel Discussion:  6:15 pm

Where:  Omicia Inc, 1625 Clay St, Oakland, CA  94612

Price:  Free (Register here)

What is a genome?  And how does one go about looking at it?  Furthermore, how can it help a doctor make a quick call for a patient?

Source: AmericanProgress.org

Remember after the Human Genome Project was completed, when Science Magazine offered that foldout insert in one of its issues--a full color representation of the entire genome?  We hung it on our walls and showed it off to our friends.  But what did it tell us? 

Helpful and accurate visualization of biological data--from genomics to lipidomics--will be a key step in translating discovery to everyday use. Doctors use X-ray and other images to help diagnose a patient's illness. How can genomic and other omic data be visualized in a way that will offer quick and accurate insights into a person's "health state" (as Stanford's Mike Snyder likes to call it)?

This is the question we'll be putting to a panel of experts next Thursday,  folks who are working every day to help the rest of us better understand the latest gigabytes of biological data.   We'll be housed at the trendy new office of the genome interpretation company, Omicia, in downtown Oakland, where we'll be surrounded by workstations and data scientists up to their elbows in data.  There is limited seating, so if you haven't already, register now.

See you June 5th in Oakland!

Stanford’s Big Data in BioMedicine Conference Turns Two

Author: 
Theral Timpson

With Silicon Valley blazing on as number one hot spot for high tech and the Bay Area claiming the same for biotech, it makes sense that Stanford, sitting there mid-peninsula basking in all that brilliance, should command a leading role in bioinformatics.

Today, Stanford’s Big Data in Biomedicine Conference kicked off with a star lineup and even a whiff of glamour.  The conference room at the Li Ka Shing Center was packed with attendees looking up on a stage lit with colored lighting, backed by giant screens, and filled with elegant white leather chairs.  Cameras and lights filled the lobby, with interviews being done on the side.  The conference is being tweeted (#bigdatamed) like a presidential election!   

Along with a contingent from Oxford University, speakers included representatives from the White House, NIH, and the FDA.  Stanford is well connected with the national funders and policy makers.   Though it began with some of the hype from last year (the first time is always the most exciting), by the end of the first morning the conference had settled into a more grounded pace as some good questions were addressed, such as whether there is enough "science" in data science.  Questions of privacy and quality were lightly touched on.  

I was disappointed to see that none of the panel topics for the entire three days--which range from integrating genome scale data to machine learning--included bioethics.  Stanford has some great speakers on bioethics, such as regular Mendelspod guest, Hank Greely.  Big data certainly holds big promise for improving healthcare, but new technology always brings up big unforeseen consequences.  Addressing those thorny issues like privacy, equality, and safety head-on would have kept the conference more balanced.

Todd Park is only the second CTO the U.S. has ever had.  And having him come directly from the White House adds some glamour, yes.  But Mr. Park’s keynote was more Easter service than scientific conference, more cathedral than campus.  With lines like “there has never been a better time to innovate in healthcare than now” and “we are so blessed to be living in . . .”, Mr. Park “god blessed” the crowd of scientists, praying that “the force be with you.”  Hm hm.  Bring on the data, please.

And yes, the next speaker, Colin Mahony, used the term “nirvana,” alluding to that big data heaven we all dream about.  But from there on, the conference came back to earth with slides and graphs of . . . data.

Mike Snyder, perhaps the most biologically studied person in history, gave an update on iPOP, his integrated personal omics profiling project.  He didn't offer much new today, other than some progress in tracking epigenetic changes--not easy--and some characterization of his microbiome.  Steve Quake and Stanford newcomer Julia Salzman rounded out the session, leaving plenty of time for a panel discussion with Q & A.

Snyder’s ongoing project of tracking hundreds of thousands of his own biomarkers over time, often referred to as the Snyderome, provoked a great question from panel moderator and bioinformatics superstar Atul Butte.  There’s been lots of progress on the various omes, but isn’t the most challenging one the time-ome (anyone for temporome?)--the ability to continually access samples and build the data set over time, Atul asked.

This is, of course, the key to Snyder’s project, so he launched into more details of his “longitudinal” study.  But it was Steve Quake who delivered the provocative line.

“I think of time as my friend,” Steve said.

It’s when you look at the data points over time that you’re able to find an anomaly or a signal, Steve reasoned.

Panelist Colin Mahony chimed in with an excellent observation as well.  When you look at the various slides that the speakers use to present their data, Colin observed, time is usually the only constant.  Time can provide “the best primary key” to work around in building data sets, he concluded.

Julia Salzman delivered what was for me the biggest WOW moment of the morning with her talk on “circular RNA.”  

Scientists have long been aware of RNA molecules that circle back on themselves, biting their own tails.  But these molecules--unlike their linear siblings--have been dismissed as non-protein coding, and therefore uninteresting.  Julia said that by being “willing to look at data that was in the trash,” her team discovered that circular RNA has implications for disease and could be used as a diagnostic tool.  She went so far as to say that with this discovery, biology textbooks are now obsolete.  That sounds like a big deal!

The issue of quality was raised.  But unfortunately the discussion centered on a specific instance of sequencing, and the general question of how to clean up huge amounts of data was not addressed.  A recent guest bioinformatician at Mendelspod said her biggest challenge was not storage or compute power, but improving the quality of the data.

A small debate broke out between Snyder and Quake over big science vs. small science when the NIH representative, Philip Bourne, asked what more the NIH could do for bioinformatics projects--other than give more money.  Quake said that big consortium projects have already produced some great data sets; now money should go to good ideas at the individual research level.  Snyder argued that the ongoing large ENCODE project has been beneficial and proves that big data sharing projects can consist of individual researchers pooling their R01s (smaller grants), sharing their data, and benefiting from more real-time interaction--a hybrid of big and small science.

A top question at Mendelspod this year has been whether, with increased data storage and data mining abilities, research has become more data driven than hypothesis driven.  And is anything lost in that?  I presented this question to the panel.

Mike Snyder asserted, with examples,  that “the biggest discoveries in science were not hypothesis driven.”  

So the question was asked: if there is plenty of data, then what is the "scarcity" when it comes to generating better questions?

Steve Quake couldn't resist sharing a local joke: "How do you define data scientist?  Any statistician who lives in San Francisco.”  Then Quake threw out a serious challenge:

“We have a scarcity of ideas, not data.”  

The big data conference continues through Friday and is being broadcast live at https://bigdata.stanford.edu

For the Twitter stream, search #bigdatamed.

Delivering Genomic Medicine: Challenges in Data Visualization and Reporting - Panel Event Set for June 5th in Oakland

Author: 
Theral Timpson
Mendelspod and Chempetitive Group are pleased to team up and bring you another evening of networking with a panel discussion, "Delivering Genomic Medicine: Challenges in Data Visualization and Reporting." Join us after work in Oakland on Thursday, June 5th for drinks, bites and discussion.
 
Since the FDA shut down 23andMe's health-related products late last year, a divide has grown in the genomics community between those who think 23andMe is good for personalized medicine and those who think the DTC company has been harmful. The former argue that 23andMe has simplified genomics for the masses through their marketing and PR campaigns and through their online reports. Those opposed warn that the data and the reports are oversimplified.
 
When talking about how to understand, interpret, and report genomic data, one of the challenges is presenting to various levels of users.  There are the patients--or consumers, as some like to say--the doctors, lab techs, and, of course, other researchers.  The secret to translating research into better clinical outcomes depends not just on Big Data, but on how it looks.
 
 
We'll be talking with some data experts who are finding their own ways to look at and report on genomic data:
 
-- Martin Reese, CEO, Omicia - Omicia is now targeting clinics with Opal, their new software that helps clinicians "understand, interpret, and report" on genome sequence data.  Martin has been around bioinformatics since the mid-90s and worked directly on the Human Genome Project with what later became UCSC's Genome Browser.
 
-- Michele Cargill, Geneticist, InVitae - Michele has a strong passion for democratizing genomics.  She believes genomic data should be shared in a kind of Web 2.0 fashion.  She joined Randy Scott's new company, InVitae, after working as a geneticist at the former DTC company, Navigenics.
 
-- Euan Ashley, Assoc. Prof. of Genetics, Stanford - Euan directs the new Clinical Genome Service at Stanford Hospital.  He refers to the problem of visualization in genomics as looking for a "needle in a needlestack."  Benefiting from a strong genomics research community at Stanford, Euan is working to translate that research into healthcare benefits.
 
-- Theral Timpson, Host & Producer, Mendelspod.com - Founded in 2011, Mendelspod is a premier media source advancing life science research by connecting people and ideas. Mendelspod goes beyond quick soundbites to create a space for probing conversations and deep insight into the topics and trends shaping our industry's future and the future of our species.
 
This event is brought to you free thanks to underwriting from Chempetitive Group, an international life science marketing agency that gets it. From PR and design to creative and marketing strategy, Chempetitive Group has a love of science and a passion for strategic marketing.
 
Join us June 5th in Oakland at the hip new offices of Omicia where we'll have time for networking, food and drinks before and after. Event begins at 5:30 p.m. and the program begins promptly at 6:15 p.m.
 
Register soon, as seating is limited!

Finding the Sweet Spot in Regulating Genomic Medicine

Author: 
Theral Timpson

New technologies and the possibilities they bring to improve human life always come in fits and starts.

Genomic medicine is no exception.  The overdriven tools space of next generation sequencing has created a bursting spring season in genomics research.  New studies linking “this” biomarker with “that” phenotype bloom with the force of nature, leading some to make bold predictions about man’s ability to conquer his own form.  We can smell eternity.

Selling information about our bodies--biomarkers, for instance--will be big business, maybe bigger than pharmaceutical remedies.  Genomic knowledge brings power.  Will it also bring profit?

In the meantime, medicine continues its steady march.  Streams of new genomic tests based on biomarker studies are finding their way to patients--sometimes through established channels, sometimes through new avenues, often leaking through the cracks.  Streams and rivers must be watched closely when it rains.  When linked to human health, they must be regulated.

So far the translation of genomics into routine clinical use has been regulated mostly by CLIA, the Clinical Laboratory Improvement Amendments overseen by CMS, the Centers for Medicare and Medicaid Services.  

When a biomarker test is used in conjunction with a therapeutic, the FDA, or Food and Drug Administration, has insisted on an approval process for the test.  The FDA classifies such biomarker tests as medical devices.

But can all genomic and other ‘omic information be converted to simple, easily commercialized tests like Roche's HER2 test?  Furthermore, should this information and the clinical interpretation of it be regulated by two different government regulatory bodies?

These questions remain unanswered and form the basis of a new series at Mendelspod, Regulation and Genomic Medicine.  

The deluge of genomic data has found routes outside of traditional healthcare channels.  Through ubiquitous internet connections, the data is being made directly available to the masses.  Some say that healthcare is becoming more democratic and that we are seeing a fundamental shift from treated patients to smarter consumers.

Last November, however, the FDA threw a monkey wrench into this democratic genomic current by stopping the direct-to-consumer company 23andMe from selling its health-related genomics tests to consumers.  The action provoked much backlash.  Some have charged the FDA with being “paternalistic” and a “killer of innovation.”  Others are relieved by the FDA’s action, contending that this so-called “democratization” of genomics is eroding the quality, and therefore the potential value, of genomic testing.

Our first guest in the series is Cliff Reid, CEO of Complete Genomics.  With an eye on delivering whole genome tests and reports to clinicians, Cliff has been of the opinion that routine clinical genomics might first take off in a country outside the control of the US FDA.  His company was recently purchased by the Chinese genomics firm BGI, or Beijing Genomics Institute.  Practically, then, Cliff talks of the opportunities in China, where regulation has been limited, if not primitive.

However, last month China cracked down on NGS-based genetic testing.  Cliff explains that the new regulations are an important step forward for China and remains bullish on the opportunity there.

Back in the U.S., Cliff envisions a two-tiered system:  a highly regulated clinical avenue on the one hand coupled with democratic opportunities for consumers to get their own data on the other. 

The FDA says it is not opposed to consumers having their own genomic data, but is concerned about how the data is interpreted.  Cliff envisions consumer sites that aid in interpretation and get around FDA concerns.

Our second guest will be Anne Wojcicki, CEO of 23andMe, the company recently targeted by the FDA.  In her interview, Anne says her vision of delivering genomic data to the masses remains undiminished.  She says the company is working closely with the FDA to get its test back on the market.  Is this a straightforward process, or does she see it as a major setback?  Ms. Wojcicki sounds confident about satisfying the regulators at the FDA.  She says she draws inspiration from her employees, who “have always known they were in a ‘whiplash’ culture.”  For Anne, there may be setbacks and the way forward may be ambiguous, but the target remains clear.

Our third guest is working on the front lines of genomic medicine.  Elaine Lyon is the senior medical director of molecular genetics at ARUP Laboratories.  She’s also the new president of AMP, the Association for Molecular Pathology.  After discussing the impact of next gen sequencing on the lab, Elaine offers some well-informed thoughts on the sweet spot of regulation.

First of all, she says her product is not a simple test, but the report that is delivered to the clinician.  Elaine feels the professionalism of laboratory technicians is overlooked.  Much preparation and study is needed to run these tests and interpret the results in a way that doctors find clinically relevant.

This is why Elaine supports a change in terminology.  A lot has been said and written about whether LDTs, or laboratory developed tests, should be regulated by the FDA or whether CLIA is enough.  Elaine says that, in fact, they are not LDTs, but really LDPs, or laboratory developed processes.  And how does the FDA go about regulating a process, Elaine asks, one that includes a highly trained subjective interpretation of the results?

Elaine is concerned that having both FDA and CLIA regulation will be too onerous for labs.  “We’ll just end up not offering the tests,” she says in her interview.

The remaining two guests are yet to be interviewed.  We plan to speak with someone from the FDA to reveal their latest thinking.  And we aim to represent the diagnostics industry, members of which are increasingly making FDA approval part of their business plan and strategy up front.

What is this sweet spot for regulating genomic medicine?  Will Cliff Reid's two-tiered suggestion be the way forward?

We found this suggestion written on April 1st provocative, even if made in fun.
