Some form of tissue diagnostics is performed every time a biopsy is taken from a suspected tumour or confirmed cancer and used to make diagnostic and treatment decisions. Although tissue diagnostics is in some ways mature and one of the mainstays of modern cancer diagnostics, more can be done now to translate the exciting research advances in spatial biology into robust clinical workflows.
In practice, this requires technologies that are reliable enough to handle complex multi-reagent workflows and concurrent detailed analysis to process millions of pathology slides each year. But if these technologies are developed to capture sufficient data, and offer ease of use and the right cost point, they may take clinicians one step along the journey from FFPE samples towards spatial multi-omics, and from today’s physical sample banks to digitized tissue samples with genomic and proteomic information that can be revisited for research or diagnostic analysis.
Tissue diagnostics today
Tissue pathology and biomarker detection are both essential aspects of cancer diagnostics and treatment. Formalin-fixed paraffin-embedded (FFPE) tissue staining, in conjunction with analysis by a skilled pathologist, is used to diagnose both the type and grade of a cancer. The presence of specific cancer biomarkers is detected using FISH (fluorescence in situ hybridisation – a method that uses labelled oligonucleotides to bind nucleic acids within the cells) or antibodies that target proteins expressed on the outer membrane of the cell.
These techniques are widespread in clinical practice but come with limitations. The process of formalin fixation and paraffin embedding needed to generate FFPE sections is by its nature destructive to both the nucleic acids and the proteins within a sample, which limits the amount of information readily available to a diagnostician. Despite these limitations, progress in translating recent research innovations into clinical use, where they could enable more detailed diagnoses, has been slow.
FFPE or Fresh Frozen section
To fit into current clinical tissue diagnostics workflows, any new techniques will almost certainly have to work on FFPE tissue, which remains the main method for long-term tissue preservation. In the research platforms that are revolutionising our understanding of cancer, however, we are already seeing a shift to spatial analysis of fresh frozen (FF) sections. This is because, in contrast to FFPE, freezing causes less of the nucleic acid degradation and protein denaturation that limit the depth of biological insight and the range of diagnostic procedures that can be performed.
The downside of FF sections is that tissues cannot be kept long-term without freezing at -80°C, which means an FF slice cannot be stored indefinitely and revisited in the way an FFPE block can. With this comes the responsibility to capture and store enough data during the initial analysis, without overwhelming the physician or exponentially increasing the cost of analysis. Perhaps, with enough automation and detailed spatial biological analysis at the time of collection, the future will see digital tissue banks of digitized samples with full spatial genomic and proteomic information, akin to today’s frozen sample banks.
A key step with considerable improvement potential in any current or future tissue diagnostics workflow is the tissue sectioning itself. This remains a rate-limiting step, since it is difficult to do both quickly and correctly. Regardless of how the tissue is prepared, sectioning is typically a hands-on process in which a histologist must carefully adjust the angle and feed of the tissue into a microtome so as to minimise tissue waste.
Some automated systems purport to solve this problem, but even if they cut the tissue evenly, issues still exist. For example, many are unable to mount slides for viewing. A fully automated system would need to be able to slice, tag and position the tissue such that it could be both read automatically and indexed back to its original position on the tissue or tumour block. Perhaps surprisingly, some of TTP’s thinking towards the automation of contact lens testing may be relevant here too.
New technologies for extracting multiplex information
Spatial tissue analysis is currently one of the most exciting areas of biological research and extends far beyond the basic tissue pathology in clinical use today. Many companies are exploring high multiplex approaches where tens or hundreds of probes are combined and imaged to give a more complete nucleic acid or protein profile.
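High-multiplex imaging schemes typically distinguish hundreds of targets with only a handful of fluorescent channels by reading probes over several sequential imaging rounds, so that each target carries a multi-round colour barcode (the approach behind MERFISH- and seqFISH-style methods). The capacity arithmetic can be sketched as follows; the channel and round counts are illustrative assumptions, not parameters of any specific platform:

```python
# Illustrative capacity of sequential-round combinatorial barcoding.
# The channel/round counts below are assumptions for illustration only.

def barcode_capacity(channels: int, rounds: int) -> int:
    """Number of distinct colour barcodes available with `channels`
    dyes per round over `rounds` imaging rounds (no error correction)."""
    return channels ** rounds

# 4 dyes read over 2 rounds already distinguish 16 targets...
assert barcode_capacity(4, 2) == 16
# ...while 4 dyes over 8 rounds give 65,536 possible codes, far more
# than a panel of hundreds of targets needs.
assert barcode_capacity(4, 8) == 65536
```

In practice only a subset of the available codes is used, so that codewords are kept far apart and mis-read rounds can be detected or corrected.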
While the majority of immunohistochemistry needs can be met by probing 100+ targets, spatial analysis of, for example, a whole transcriptome at single-cell resolution will need further development before it can be moved to the clinic. To achieve this, it may be necessary to tag and disrupt the tissue to allow downstream processes such as sequencing or mass spectrometry to be performed. This is more experimental at present and could be done using in situ library preparation or some form of pre-disruption tagging.
The opportunity is to bring some of this emerging science and methodology into diagnostics and develop robust next-generation tissue diagnostic technologies that can:
- deliver meaningful throughput – recall that hundreds of millions of pathology slides are analysed each year;
- rapidly identify multiple genomic and proteomic targets at suitable resolution (which may extend from sub-cellular to broad areas of tissue) without compromising on throughput. This includes the ability to offer single-cell resolution and multi-omics where needed, but also with the ability to zoom out for simpler diagnostic procedures. Ideally, the capture process would be the same in all cases with resolution and type of information analysed being determined at a digital level.
Data collection and analysis
Advances in biotechnology, particularly in the fields of DNA and RNA sequencing, imaging mass spectrometry and spatial analysis, have led to the generation of enormous amounts of data.
We can therefore expect an increasing emphasis on data acquisition, storage and algorithms. Tissue diagnostics is exceptionally dependent on human expertise – an inexperienced analyst may easily misinterpret and misdiagnose a stained tissue.
This has meant that, even when data can be generated, they cannot necessarily be analysed in a clinically validated way. Validation would require not only hundreds of fully analysed examples for each tumour type, but also algorithms sufficiently sophisticated and consistent that their results could be categorised meaningfully.
As such, a substantial element of image analysis and detailed algorithm development such as machine learning approaches will be required to identify and appropriately categorise specific stain/probe patterns within a tissue section to reach a diagnostic conclusion.
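As a toy illustration of the kind of pattern categorisation involved – and emphatically not a validated diagnostic method – the sketch below assigns a tissue region to a reference class by comparing its mean per-marker stain intensities against class centroids. The marker names, intensity values and class labels are all invented for illustration; real pipelines would use trained and clinically validated models over far richer image features.

```python
import math

# Hypothetical mean stain intensities per marker for two reference
# tissue classes (all values invented for illustration only).
CENTROIDS = {
    "normal": {"marker_A": 0.2, "marker_B": 0.7},
    "tumour": {"marker_A": 0.9, "marker_B": 0.3},
}

def classify_region(intensities: dict) -> str:
    """Assign a region to the nearest reference centroid, measured by
    Euclidean distance over its marker intensity profile."""
    def dist(centroid):
        return math.sqrt(sum((intensities[m] - centroid[m]) ** 2
                             for m in centroid))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

# A region with strong marker_A and weak marker_B lands nearest the
# "tumour" centroid in this toy example.
assert classify_region({"marker_A": 0.85, "marker_B": 0.25}) == "tumour"
```

Even this trivial rule highlights the validation burden: the centroids stand in for the hundreds of fully analysed reference examples per tumour type that a clinical algorithm would need.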
Automated diagnosis in a regulated environment is a huge challenge even for the simplest data sets, and any data stored currently will need to be in a format that can be revisited by an expert who would then be able to reinterpret the algorithm’s findings.
Consequently, it is naive to think that current technology would reduce data storage to a “diagnosis” rather than a fully captured series of images; as noted earlier, the digital tissue bank is the more likely outcome. Tissue samples are regularly revisited – be that for further diagnostic analysis or research purposes – and any digital version would need to allow the same. In this future, digital storage and beyond-big-data analysis challenges would need to be addressed.
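A back-of-envelope calculation shows why the digital tissue bank raises storage questions. The scan dimensions, pixel depth and annual slide volume below are rough illustrative assumptions, not measured figures:

```python
# Back-of-envelope storage estimate for digitised whole-slide images.
# All input figures are illustrative assumptions.

pixels_per_side = 100_000        # ~100k x 100k px for a high-magnification whole-slide scan
bytes_per_pixel = 3              # 8-bit RGB, uncompressed
slides_per_year = 10_000_000     # assumed annual slide volume for one large network

bytes_per_slide = pixels_per_side ** 2 * bytes_per_pixel
assert bytes_per_slide == 30_000_000_000          # ~30 GB uncompressed per slide

total_petabytes = bytes_per_slide * slides_per_year / 1e15
assert total_petabytes == 300.0                   # ~300 PB/year before compression
```

Compression reduces this considerably, but adding spatial genomic and proteomic layers per slide pushes in the other direction, so storage and retrieval architecture would need to be designed in from the start.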
As the insights from spatial multi-omics make their way into the clinic, we need to develop core technologies that support automation. Currently, it is not possible to expand tissue diagnostics to use high-level multiplexing for clinically relevant applications, but it can be made possible with the addition of key enabling technologies, namely:
- consistent and automated means of sectioning tissue and applying it to slides for imaging
- automation to prepare, label and image the tissue, and for library preparation for downstream sequencing analysis if needed
- reliable algorithms to both analyse data and report results to pathologists or clinicians (this will become even more essential as the acquired data volume increases)
These technologies also need to hit the right cost point to enable research clinicians to generate reference datasets that can be made available for diagnostic purposes.