Interview: Old is new: maximizing read lengths and yield for genome assembly
Date: Thursday 24th October
Time: 4pm UK time
Speaker: John Tyson, University of British Columbia
Dr John Tyson is a senior research associate in the lab of Professor Terrance Snutch, based in the Michael Smith Laboratories and the Djavad Mowafaghian Centre for Brain Health at the University of British Columbia, and a member of the Nanopore Whole Genome Sequencing Consortium. A molecular biologist by training, John has been utilising nanopore sequencing in his research since 2014, focusing on both full-length transcript sequencing to investigate splice variation and whole genome sequencing and assembly. He is currently working on methods to expand nanopore read lengths for better production of whole genome assemblies, and on using individual full-length RNA/cDNA transcript sequencing to better understand contextual splice variation in neurological disease.
John presented his webinar ‘Old is new: maximizing read lengths and yield for genome assembly’ on Thursday 24th October, 4pm (BST). Here, he tells us how long-read sequencing has changed his work, and the impact of ultra-long reads on data analysis and interpretation.
What are your current research interests?
I focus mainly on aspects of genome modification and altered transcript splicing events relating to neurological disease, with projects involving whole genome sequencing, methylation and full-length contextual splice variation.
What first ignited your interest in genomics?
The ability to define and understand a “system” from a defined “blueprint”, to read it and be able to change it.
Can you tell us more about how long-read sequencing is changing your field? How has it benefited your work?
Long-read sequencing is allowing us to define more specifically the individual modular structure of functional full-length ion channel transcripts, and how this changes between tissues and disease states. It is also allowing us to tie together aspects of genome structure and its modification with transcriptional control of gene expression and splicing.
What impact does an increased proportion of ultra-long sequencing reads have on downstream data analysis?
Ultra-long sequencing is a major benefit for completing regions of a genome that are highly repetitive or have complex structures comprised of non-unique sequence blocks. The longer the reads, the more repeats they can span, and the more accurate and contiguous the resulting assembly.
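The repeat-spanning logic John describes can be illustrated with a minimal sketch (not from the interview; the function and coordinates are hypothetical): an assembler can place a read across a repeat unambiguously only if the read also covers unique anchor sequence on both flanks, so longer reads resolve longer repeats.

```python
def spans_repeat(read_start, read_len, repeat_start, repeat_len, anchor=1):
    """True if a read covers the whole repeat plus at least `anchor`
    unique bases on each flank, allowing unambiguous placement."""
    return (read_start <= repeat_start - anchor and
            read_start + read_len >= repeat_start + repeat_len + anchor)

# A hypothetical 10 kb repeat starting at position 50,000:
repeat_start, repeat_len = 50_000, 10_000

# An 8 kb read landing inside the repeat cannot be placed uniquely...
print(spans_repeat(51_000, 8_000, repeat_start, repeat_len))    # False
# ...while a 100 kb ultra-long read anchored in unique flanks can.
print(spans_repeat(45_000, 100_000, repeat_start, repeat_len))  # True
```

This is why the read length distribution, not just total yield, determines how much of a repeat-rich genome can be assembled into contiguous sequence.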
What effect does simultaneous acquisition of nucleotide sequence and modification data have on experimental results and interpretations?
Analysis is simplified and less ambiguous when modification detection is directly measured from the sequencing signal itself. You are measuring directly at the molecular level and not via a secondary assay, where errors or sensitivity issues can creep in. It’s also a lot cheaper.
What have been the main challenges in your work, and how have you approached them?
The main challenges have come hand in hand with the main advantages of this new long-read sequencing technology: the need to develop new methodological approaches that extract the most benefit from the unique ability to sequence single full-length, or very long, nucleic acid molecules.
I have been mainly focused on developing methodological approaches to deliver larger and larger molecules to the surface of the flow cell, so that signal/sequence information can be captured.
What’s next for your research?
More method development and integration of genomic, transcriptomic, and modification profiles, relating to our study of ion channels and neurological disease.
Hear more from John in his webinar ‘Old is new: maximizing read lengths and yield for genome assembly’