Molecular biology unravels the processes occurring inside our cells and how changes in these processes may lead to disease. These molecular processes are studied at different levels, from DNA to RNA to proteins. The advent of genomics, transcriptomics, and proteomics revolutionized how scientists approach problems that were previously impossible to solve. However, proteomics has long lagged behind the capabilities of its counterparts. The reason: unlike DNA and RNA technologies, protein analysis cannot rely on signal amplification to detect exceedingly rare molecules. Thus, proteomics analysis is often limited to the most abundant proteins in a cell. “Low-abundant proteins are expressed in such a low copy number that we struggle to detect them using conventional proteomics approaches,” says Rupert Mayer, a postdoctoral researcher at the Proteomics Tech Hub.
The Proteomics Tech Hub, a facility shared between IMBA, IMP, and GMI and led by Karl Mechtler, aims to advance cutting-edge proteomics methods with a strong focus on single-cell proteomics. In a study recently published in Nature Communications, researchers at the Proteomics Tech Hub present a new proteomics pipeline that combines several state-of-the-art methods and technologies to improve the throughput and sensitivity of proteomics analyses.
Many important processes involving low-abundant proteins are understudied due to the lack of technical solutions. At the same time, existing technologies make sample processing rather slow, limiting the throughput of proteomics analysis. “A lot of projects don’t come to fruition because the researchers lack the technical tools and the time to perform these studies,” says Mayer. “With innovations such as our new pipeline, we aim to speed up the whole scientific experience, to help researchers obtain and publish their results faster.”
Proteomics analyses consist of four main stages: sample preparation, peptide separation, peptide sequencing, and data analysis. The newly developed proteomics pipeline combines remarkable state-of-the-art improvements in peptide separation, peptide sequencing, and data analysis. This new workflow allows the team at the Proteomics Tech Hub to significantly reduce sample analysis time and boost protein detection sensitivity.
In the usual proteomics workflow, protein samples are first prepared for analysis, which includes breaking full proteins into smaller pieces called peptides. Second, these peptides are separated on a chromatography column, which “spreads out” groups of peptides based on their physical and chemical properties. After separation, the peptides can be sequenced by a mass spectrometer, which determines their mass. Lastly, the data produced by the spectrometer is analyzed using specialized software to reconstruct the original proteins. This complex process allows scientists to identify and quantify the different proteins in a sample.
The newly presented workflow combines improvements at several stages. For peptide separation, the Tech Hub scientists moved away from conventional chromatography columns, opting instead for newly available micropillar array columns. These finely manufactured columns improve peptide separation by using highly organized pillar arrays which, compared to previous alternatives, provide a more homogeneous path for samples to move through the column. This ensures that all identical peptides are analyzed together in the following steps, increasing the accuracy and sensitivity of the technology. “After sample preparation, we can have hundreds of thousands of peptides to read, and the spectrometer cannot deal with that much information,” Mayer explains. “Peptide separation narrows that number down to between ten and a hundred peptides at the same time, making it much simpler.”
The Proteomics Tech Hub also improved the sequencing of peptide samples by the mass spectrometer. As Rupert Mayer explains, “Instead of having the spectrometer read one peptide at a time, which makes data acquisition very slow, we’ve adapted the settings on the machine to read several peptides at the same time.” While this generates more complex reading data, the group found a way to effectively interpret that data using a new AI-driven software called CHIMERYS, which can identify different peptides being read at the same time. “This software has been trained with thousands of peptide datasets and is very good at making sense of data that, before, would have been too complex to analyze with other tools,” Mayer adds.
Together, these three innovations have allowed the Proteomics Tech Hub to detect up to twice as many peptides and 50 percent more proteins from the same sample as with the previous, conventional workflow. Even more importantly, the newly presented pipeline has proven effective at identifying protein interactions that, until now, would have gone unnoticed.
In protein interaction studies, the researchers detected up to 92 percent more interactors, which allowed them to identify up to 50 previously unknown interactors for a single protein. This sensitivity will allow researchers at IMBA, IMP, and GMI to discover new relationships between proteins and study their involvement in molecular signaling pathways.
Overall, these advancements will allow the Proteomics Tech Hub to reduce sample analysis times by up to 25 percent while providing researchers with top-quality data to support their work.
The next stage in the field of proteomics is here.