
When an Exome Test Is Part of the Therapy and Not a Diagnostic: John West on Personalis and Personalized Cancer Vaccines

About six years ago there was a wave of genome interpretation startups getting their first rounds of funding. One of them was Personalis, a company founded by a well-known group of Stanford geneticists and bioinformaticians.

John West is the CEO of Personalis, and he joins us today to talk about how the company is participating in the dramatic shift in drug development toward immuno-oncology drugs. Our listeners might remember John from his days at Solexa, where he served as CEO and presided over the sale of the company to Illumina.

At the same time Personalis came on the scene, the FDA was approving the first drug to harness the immune system to fight cancer: Yervoy, from Bristol-Myers Squibb. It was the first of four drugs known as checkpoint inhibitors. Together these four drugs have had spectacular success, generating revenue of over $6 billion per year, a figure that has doubled in the past year.

John and Personalis are working with biotech companies on a new generation of immunotherapies known as personalized cancer vaccines. These new drugs are custom synthesized for each patient after an “immunogram,” or genetic workup of the tumor, has been done. We know today that tumor growth is driven mostly by neoantigens, or new antigens arising from mutations that happen after the cancer first appears, says John. So an immunogram done by Personalis must look at all the genes (over 20,000), not just the original driver mutations. Such a test has only become possible in the last few years, with the latest advances in next-gen sequencing and algorithm development.

How far along are these new personalized cancer vaccines? And what is the commercialization challenge for Personalis?

“We are essentially an integral part of the therapy,” says John. “So we don’t think of it as a diagnostic test. We think about it as the initial part of the manufacturing of the therapy.”

Frontiers of Sequencing: Putting Long Reads and Graph Assemblies to Work

OK, so we get it. Long read sequencing technology is cool. But how cool? Is it another great player on the field, or does it change the game altogether? 

The Mike Schatz lab at Cold Spring Harbor is well known for its de novo genome assemblies and its work on structural variation in cancer genomes, so we were curious to hear how long reads have impacted their work. In today’s show, lab leader Mike Schatz and doctoral student Maria Nattestad tell of two new projects: the de novo assembly of a very difficult but important flatworm genome and, secondly, making better variant calls for oncogenes such as HER2.

In the case of the flatworm, Mike says that the move to PacBio’s long reads improved the assembly by more than 100-fold. That’s the difference between looking at a super-high-resolution picture and a fuzzy, blurry one, he says. In her work on cancer cell lines, Maria is seeing variants that just weren’t there with short reads. Will her work translate to lower false positive rates for HER2 in clinical diagnostics?

What will be the major headline for sequencing and informatics in 2016?

Mike says we’ll see many more reference genomes completed, and that the term “reference genome” itself is changing as we go from one standard reference genome to multiple reference genomes representing the broader population. These new reference genomes are pushing bioinformaticians to come up with new ways to visualize and compare genomes. Maria details her work on using “graph” assemblies, as opposed to the linear approach made popular by the Genome Browser. She says a new generation of informaticians is already rethinking genome visualization using graph assemblies. (Included below is an image representing her work.)
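To make the contrast concrete, here is a minimal, hypothetical sketch of the idea behind a graph genome (this is an invented toy, not Maria's actual tooling or any real graph-genome format): nodes hold sequence fragments, and branches let the graph carry both alleles of a variant, whereas a linear reference must pick just one.

```python
# Toy sequence graph: node names and sequences are invented for illustration.
graph = {
    "n1": {"seq": "ACGT", "next": ["n2", "n3"]},  # shared prefix
    "n2": {"seq": "G",    "next": ["n4"]},        # reference allele
    "n3": {"seq": "T",    "next": ["n4"]},        # alternate allele (SNV)
    "n4": {"seq": "CCAT", "next": []},            # shared suffix
}

def all_sequences(graph, node="n1", prefix=""):
    """Enumerate every sequence spelled out by a path through the graph.
    A linear reference can represent only one of these; the graph keeps all."""
    prefix += graph[node]["seq"]
    if not graph[node]["next"]:
        yield prefix
        return
    for nxt in graph[node]["next"]:
        yield from all_sequences(graph, nxt, prefix)

print(sorted(all_sequences(graph)))  # both haplotypes, reference and variant
```

Real graph-genome tools work at far larger scale, of course, but the core design choice is the same: alternative paths replace the single coordinate line of the classic reference.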

Neither mentioned it, so we ask at the end: what about Oxford Nanopore’s tech?

 

(The spectral karyotype of the HER2-amplified breast cancer cell line SK-BR-3. Each original chromosome is painted a different color, showing that this genome is a complex mixture of rearranged chromosomes. The total number of chromosomes has also jumped from 46 to 80, and there is approximately twice as much DNA as in a normal human genome. Maria Nattestad and Mike Schatz are studying this genome to see how oncogenes like HER2 became amplified amid all these large-scale changes.)

With Two New Easy-to-Use Sequencing Instruments, Thermo Readies for Primetime in the Clinic

The race to the $1,000 genome has been full of breathtaking advances, one after the other. But is next gen sequencing reaching maturity? Will there be that many more significant innovations?

Yes, says our first guest on today’s program, Andy Felton, VP of Product Management at Thermo’s Ion Torrent division. Andy presented Thermo’s two new sequencing instruments, the Ion S5 and the Ion S5 XL, at a press conference today. While their numbers (accuracy, read length, throughput) are not a dramatic advance over the stats of their predecessors, the Personal Genome Machine (PGM) and the Ion Proton, the S5 and S5 XL perhaps now lead the industry in ease of use.

Integrated with the Ion Chef, Thermo’s sample prep station launched last year, and with robust bioinformatics software, the workflow from sample to report is impressively simple and straightforward: only two pipetting steps are required. The genomics team at Thermo is betting that this attractive simplicity will open a new market. “Genomics for all,” they boast.

Does this just catch Thermo up with Illumina, or does it put them in the lead for clinical sequencing, we ask our second guest, Shawn Baker, CSO of AllSeq. (See Shawn's own blog here.)

Bina CEO Details Secret to Success in NGS Informatics

Last year, pharma giant Roche went on a buying spree, picking up one company after another. In December, when it was announced that they had acquired Bina Technologies, many of us were playing catch-up. Who is Bina, and how does the company fit into the overall bioinformatics space?

Today we hear from Bina's CEO, Narges Bani Asadi. Like many young bioinformatics companies, Bina has changed its service and product since spinning out of Stanford and UC Berkeley four years ago. Narges says the biggest demand from customers is for a comprehensive solution for the entire organization. Often, she says, she encounters brilliant bioinformaticians at customer organizations who are completely overwhelmed by all of the various informatics tools available. Many of these tools are offered free over the internet, and, she says, it’s creating “open source overload.”

Bina has been a very ambitious company from the start, working to provide NGS users with a comprehensive informatics solution: from beefy, fast infrastructure, to an interface for the various kinds of users in an organization, to high-powered analytics. And Narges is excited about the Roche buyout, saying that it will speed up the company's plans. Indeed, just providing bioinformatics solutions to Roche’s drug and diagnostic divisions is already a huge project.

What was Bina doing so well that attracted Roche, and what will the future NGS informatics ecosystem look like? Join us for an inside look at the world of bioinformatics with one of the space’s most dynamic leaders.

Paperwork, Not Algorithms the Biggest Challenge for Large Bioinformatics Projects, Says David Haussler, UCSC

Guest:

David Haussler, Director, Center for Biomolecular Science and Engineering, UCSC
Bio and Contact Info

Listen (8:08) Paperwork, not algorithms, the biggest challenge with bioinformatics

Listen (7:01) With Amazon Cloud around, are compute and storage still issues?

Listen (3:23) Global Alliance for Genomics and Health

Listen (5:05) What are the technical challenges yet to be tackled?

Listen (7:35) A global bioinformatics utility has to be an NGO

David Haussler and his team at UC Santa Cruz have gone from one large bioinformatics project to another. After creating the original Genome Browser (which still gets over 1 million hits per month), David worked to build a large data set for cancer genomics, The Cancer Genome Atlas.

“With more data comes statistical power,” David says in today’s show. “The only way we can separate out the ‘driver’ mutations from the ‘passenger’ mutations is to have a large sample of different cancers.”

This makes sense. One needs a huge number of samples to tell when a mutation is just random and when it recurs with true statistical significance. So what have been the challenges to building such a large data set?
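As a rough illustration of that statistical logic (a toy sketch with invented numbers, not the actual TCGA analysis), a simple binomial tail test shows how cohort size lets a recurrently mutated driver candidate stand out from passenger-level background noise:

```python
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    mutated tumors if the gene were hit only at the background rate."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical cohort: 1,000 tumors; assume a 2% background chance that any
# given gene is mutated by chance in a single tumor (~20 tumors expected).
n_samples, background_rate = 1000, 0.02

# Gene A: mutated in 60 tumors, far above the ~20 expected -> driver candidate.
p_gene_a = binomial_tail(60, n_samples, background_rate)

# Gene B: mutated in 22 tumors -> indistinguishable from passenger noise.
p_gene_b = binomial_tail(22, n_samples, background_rate)

print(f"gene A p-value: {p_gene_a:.2e}")
print(f"gene B p-value: {p_gene_b:.2f}")
```

With only a few dozen tumors instead of a thousand, the same excess for gene A would not reach significance, which is exactly why Haussler's point about large sample sizes matters.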

David says issues around consent and privacy have actually held up his projects more than any technical difficulties. For example, the NIH has been meeting for over a year now to determine whether its data can be put on the commercial cloud. In addition, there are the issues of connecting large medical institutions around the country and in various countries around the world. David is a co-founder of the Global Alliance for Genomics and Health, which he says is nearing the tipping point of becoming THE bioinformatics utility adopted globally.

In the days of commercial offerings such as Amazon Cloud, are compute and storage still a problem? And what, after the privacy issues are seen to, are the technical challenges for bioinformaticians like Haussler?

Podcast brought to you by: National Biomarker Development Alliance - Collaboratively creating standards for end-to-end systems-based biomarker development—to advance precision medicine

Raising the Standards of Biomarker Development - A New Series

We talk a lot on this show about the potential of personalized medicine. Never before have we learned at such breakneck speed just how our bodies function. The pace of biological research staggers the mind and hints at a time when we will “crack the code” of the system that is Homo sapiens, going from picking the low-hanging fruit to a more rational approach. The high-tech world has put at biologists’ fingertips just the tools to do it: there is plenty of compute and plenty of storage available to untangle, to decipher, the human body. Yet still, we talk of potential.

Training the Next Generation of Bioinformaticians: Russ Altman, Stanford

Guest:

Russ Altman, Dept Chair, Bioengineering, Stanford University

Bio and Contact Info

Listen (5:32) A bioinformatician bottleneck?

Listen (4:19) Does the engineer or coder have enough basic biology?

Listen (5:04) Have we been overly reductionist?

Listen (5:16) Beautiful but useless algorithms

Listen (4:13) New breakthroughs in natural language processing

Listen (3:39) A new regulatory science

For our last episode in the series, The Bioinformatician Bottleneck, we turned to someone who has not only done lots of bioinformatics projects (he's been lead investigator for the PharmGKB Knowledgebase) but also one who is training the next generation of bioinformaticians. Russ Altman is Director of the Biomedical Informatics program at Stanford. He's also an entertaining speaker who's comfortable with an enormous range of topics.

It's been some time since we had Russ on the program, so we had some catching up to do. What are his thoughts on the recent philosophy of science topics we've been discussing? Are the new biologists becoming mere technicians? What is meant by open data? He warns against being too black and white when it comes to reductionism or antireductionism. And he agrees that the new biologist needs quite a bit of informatics training. But he's not worried that all bioinformaticians have to become better biologists, saying that there's a whole range of jobs out there.

What's Russ excited about in 2014? The increased ability to do natural language processing, he says.

"We have 25 million published abstracts that are freely available. So that's a lot of text. Increasingly we're having access to the full text and figures. I think we're near the point where we'll have an amazing capability to do very high fidelity interpretation of what's being said in these articles," he says in today's interview.

Russ finishes up by talking about a new West Coast FDA center in which he's involved. The center is focused on a program for a new emerging regulatory science, which he defines as the science needed to make good regulatory decisions.

"This area of regulatory science," he says, "has great opportunity to accelerate drug development and drug discovery."

I saw Russ at Stanford's Big Data conference after our interview and asked him at what age he decided against Hollywood and for going into a life of academia and science.

"Who says I did?" he retorted without hesitation.

Podcast brought to you by: Roswell Park Cancer Institute, dedicated to understanding, preventing and curing cancer for over 115 years.

Stanford’s Big Data in BioMedicine Conference Turns Two

With Silicon Valley blazing on as the number one hot spot for high tech, and the Bay Area claiming the same for biotech, it makes sense that Stanford, sitting there mid-peninsula basking in all that brilliance, should command a leading role in bioinformatics.

Myths of Big Data with Sabina Leonelli, Philosopher of Information

Guest:

Sabina Leonelli, Philosopher, University of Exeter

Bio and Contact Info

Listen (6:44) Not a fan of the term Big Data

Listen (4:20) Something lost in bringing data together from various scientific cultures

Listen (3:36) Are data scientists really scientists?

Listen (4:11) Controversies around Open Data

Listen (3:03) Data systems come with their own biases

Listen (6:22) Message to bioinformaticians: Come up with the story of your data

Listen (1:15) Data driven vs hypothesis driven science

Listen (2:46) Thoughts on the Quantified Self movement

For the next installment in our Philosophy of Science series, we look at issues around data. Sabina Leonelli is a philosopher of information who collaborates with bioinformaticians. In today's interview, she expresses her concerns about the terms Big Data and Open Data.

"I have to admit, I'm not a big fan of this expression, 'Big Data,'" she says at the outset of the show.

Using data in science is, of course, a very old practice. So what's new about "big" data? Sabina is mostly concerned about the challenges of bringing data together from various sources. The biggest challenge here, she says, is with classification.

"Biology is fragmented in a lot of different epistemic cultures . . . and each research tradition has different preferred ways of doing things," she points out. "What I'm interested in is the relationship between the language used and the actual practices. And there appears to be a very strong relationship between the way that people perform their research and the way in which they think about it. So terminology becomes a very specific signal for the various research traditions."

Sabina goes on to point out that the nuances of specific research traditions can be lost as data is integrated across traditions. For instance, most large bioinformatics databases are maintained in English, whereas some of the individual research data may originally have been recorded in another language.

This becomes especially important with the new movement toward Open Data, where biases are built into the databases.

"The problem resides with the expectation that what is 'Open Data' is all the data there is," she says.

In fact, the data in Open Data tends to come from databases which are highly standardized and often from the most powerful labs.

How can bioinformaticians deal with these challenges? Sabina says researchers should be more diligent about creating "a story" around their data. This will help make the biases more transparent. She also says that a lot of conceptual effort must go into creating databases from the outset so that the data might be used for yet unknown questions in the future.

We finish the interview with her thoughts on the Quantified Self movement.

Podcast brought to you by: Chempetitive Group - "We love science. We love marketing. We love the idea of combining the two to make great things happen for your marketing communications."

Bioinformatics Pioneer Martin Reese on Scaling Up Human Genome Interpretation

Guest: Martin Reese, Co-founder, President & CSO, Omicia

Bio and Contact Info

Chapters: (Advance the marker)

0:40 How did you get started in bioinformatics?

3:04 What is the biggest challenge with human genome interpretation?

8:01 Diagnosing Ogden Syndrome

13:30 What sets Omicia apart?

18:08 Who is ordering your tests?

23:29 FDA letter to 23andMe unfortunate

25:47 What's your main objective for 2014?

Martin Reese's career in bioinformatics began in 1993 when he attended a lecture in Heidelberg, Germany entitled "Genome Informatics." Reese, a German, then switched his studies from medical informatics to bioinformatics and moved to Berkeley, where he worked on assembling the genome for the Human Genome Project. In 1996, he started a company with his Ph.D. advisor, David Haussler (of Genome Browser fame), called Neomorphic, part of the first wave of commercial bioinformatics.

Martin is now the president of Omicia, a company he founded to take on the challenge of scaling up human genome interpretation.

How far have we come in the clinical interpretation space? Martin says that in 2013, 80% of human genome interpretation was done for research and 20% for the clinic. In the next 3-5 years, he predicts those percentages will switch to 20% for research and 80% clinical.

Martin says that one of the biggest challenges for human genome interpretation is building easy-to-use visualization tools. For this reason, he's been a fan of the DTC company 23andMe and felt that the FDA's letter to the company was "very unfortunate."

"[23andMe] educated the whole population about genetics," he says in the interview, "and they tried to make the reports easily understandable and manageable by a regular person. . . . The easier we make the reports, the better doctors can understand them."

Just who is ordering reports from Omicia, and what is the company's objective in the year ahead? Join us for an insider's take on clinical genomics.

Podcast brought to you by: Your company name here - Promote your organization by aligning it with great content.



