Description

Clinical images contain a wealth of data that currently goes untapped by physicians and machine learning (ML) methods alike. Most ML methods require more data than is typically available to train them sufficiently. To extract all of the data contained in a clinical image, it is essential to use multimodal data (various types of data, such as tags or identifications), especially where spatial relationships are key to making a clinical diagnosis. To this end, LLNL scientists have developed a method for embedding image representations in a multimodal graph for more efficient processing. Elements within an image are identified, and their spatial arrangement is encoded in a graph; any machine learning technique can then be applied to this multimodal graph as a representation of the image. These representations capture information such as the proximity of one cell to another, giving the image viewer knowledge that informs their next decisions. By tapping into the wealth of data in a clinical image, a doctor can gain insight they would not otherwise have had, potentially saving time and lives.
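For illustration only, the sketch below shows one way a spatial graph could be built from detected image elements in Python using networkx. The element format, distance threshold, and attribute names are assumptions made for this example; it is not the patented implementation, only a minimal demonstration of encoding elements and their spatial arrangement as a graph that graph-based ML models can consume.

```python
# Illustrative sketch only -- not the patented LLNL method. Assumes detected
# image elements are given as (label, x, y) tuples; the radius threshold and
# attribute names are hypothetical choices for this example.
import math
import networkx as nx

def build_spatial_graph(elements, radius=50.0):
    """Encode image elements and their spatial arrangement as a graph.

    elements: list of (label, x, y) tuples, e.g. detected cells with tags.
    radius:   distance below which two elements are considered adjacent.
    """
    g = nx.Graph()
    for i, (label, x, y) in enumerate(elements):
        # Each node carries multimodal data: its tag plus its position.
        g.add_node(i, label=label, pos=(x, y))
    for i, (_, xi, yi) in enumerate(elements):
        for j, (_, xj, yj) in enumerate(elements):
            if i < j:
                d = math.hypot(xi - xj, yi - yj)
                if d <= radius:
                    # Edge weight records the proximity of the two elements.
                    g.add_edge(i, j, distance=d)
    return g

# Example: three detected cells; the resulting graph can be passed to any
# graph-based machine learning model (e.g. a graph neural network).
cells = [("tumor_cell", 10, 12), ("immune_cell", 30, 40), ("immune_cell", 200, 210)]
graph = build_spatial_graph(cells)
print(graph.nodes(data=True))
print(graph.edges(data=True))
```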

US patent application 20210151168 "Universal image representation based on a multimodal graph"

Reference Number
IL-13464
Contact