
Blog post   |   15/10/2024

The information age in biomedical research

Author: Shaked Ashkenazi


Only days after the Nobel Prize announcements, it is evident that this year the Royal Swedish Academy of Sciences chose to highlight the role of computational tools in modern scientific research. The Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey E. Hinton for laying the foundations of what we know today as machine learning. Meanwhile, the Nobel Prize in Chemistry was awarded to David Baker, from the University of Washington in Seattle and the Howard Hughes Medical Institute, and to Demis Hassabis and John M. Jumper from Google DeepMind, for their work on the computational prediction of protein structures and interactions.


One might say that these choices should not surprise us: we are currently at the peak of the information age (at least as we know it today), and choosing these laureates simply reflects contemporary trends. However, this would be an oversimplification of the real effect that these developments have on scientific research. For decades, scientists were puzzled by the enigma of protein folding: given such complex chemical properties, how do these molecules “know” how to assume a stable 3D structure? Which chemical feature trumps the others when it comes to the spatial localization of amino acids? Even though we still cannot articulate the rules that govern this chemical phenomenon, the AlphaFold algorithm (Google DeepMind) is, in fact, able to predict a 3D structure from an amino acid sequence. With that, Google changed the face of biochemistry forever. Interestingly, the work of David Baker, which focuses on engineering new proteins with new desired functions, actually predated AlphaFold, although there is no doubt that these two fields will now move forward synergistically, each benefiting from the other.


Similarly, the importance of machine learning algorithms in biomedical research cannot be overstated. For example, considerable effort is invested in training algorithms for image processing, from the detection of tumors and tissue abnormalities to the accurate identification of extracellular vesicles in liquid biopsy samples. Many of the recent advancements in the field of super-resolution microscopy are based on software development and computational tools. The most obvious examples are clustering algorithms, which turn a collection of recorded fluorophore localizations into meaningful information. Even though modalities like dSTORM can yield beautiful images, software tools are needed in order to draw significant conclusions. On top of that, in the last few weeks alone we have come across several publications describing machine learning algorithms that utilize super-resolution images for diagnostic purposes, e.g., training an algorithm to identify pathological conditions based on nanoscale changes in cells.
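To make the clustering step mentioned above concrete, here is a minimal sketch of one common approach, density-based clustering (DBSCAN) applied to single-molecule localizations. The simulated coordinates, and the eps and min_samples values, are illustrative assumptions, not parameters from any real dSTORM pipeline:

```python
# A minimal sketch of localization clustering with DBSCAN, assuming
# localizations are available as (x, y) coordinates in nanometres.
# The eps and min_samples values are illustrative, not tuned.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Simulate two fluorophore clusters plus sparse background noise.
cluster_a = rng.normal(loc=(100, 100), scale=15, size=(200, 2))
cluster_b = rng.normal(loc=(400, 250), scale=15, size=(200, 2))
noise = rng.uniform(low=0, high=500, size=(50, 2))
localizations = np.vstack([cluster_a, cluster_b, noise])

# Group localizations that lie within ~50 nm of enough neighbours;
# points that fall outside any dense region are labelled -1 (background).
labels = DBSCAN(eps=50, min_samples=10).fit_predict(localizations)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Detected {n_clusters} clusters; "
      f"{np.sum(labels == -1)} localizations flagged as background")
```

In practice, the choice of eps and min_samples is where the domain knowledge lives: they encode the expected cluster size and density of the structures being imaged, which is precisely why software, and not the raw image alone, is what turns localizations into conclusions.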


Likewise, other fields of biomedical research are in the midst of an explosion of new machine learning-based developments. Single-cell “omics” are another prominent player in this court, with more and more molecular categories being explored. It might be interesting to think of these software developments as two ends of the same spectrum: on one end, information on nanoscale molecular details, and on the other, tremendous amounts of data from thousands of cells within a tissue or even a whole organism.


While one can argue whether the algorithmic developments that laid the foundations for modern machine learning tools can be categorized as work in “physics”, there is no doubt that the Royal Swedish Academy of Sciences is sending a very clear message about what, in its opinion, stands at the center of scientific research today.