Portfolio News and Webcast
Lawrence Livermore National Laboratory scientists and engineers have collected three R&D 100 Awards. Often called the “Oscars of invention,” the R&D 100 Awards recognize the top 100 industrial inventions worldwide.
Analyzing the performance and efficiency of complex facilities with modern instrumented components, and of regional networks of such facilities, is a daunting task. Increasingly, facilities collect data from manually entered records as well as from diverse Internet of Things sensors and monitoring tools for specialty equipment, storage systems, computing networks, and power and cooling infrastructure. Analyzing the disparate collected data can be intractable. Like the data from complex hospital facilities, LLNL’s high performance computing center data comprises different formats, granularities, and semantics. Hand-written data processing scripts no longer suffice to transform the data into a digestible form.
Complex problems, such as COVID-19, are now being studied computationally before being tested experimentally. These computational problems require HPC resources, which must be understood and allocated properly, forcing users to spend valuable time just setting up a job on the HPC system. To let computational scientists focus on the science, LLNL scientists created Maestro. Maestro is an open-source HPC software tool that automates software execution by defining the required multi-step workflows on HPC resources. Maestro's core design encourages clear workflow communication and documentation while making consistent execution easier, so that users can concentrate on their research.
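The idea of a declarative multi-step workflow can be illustrated with a minimal Python sketch. This is not Maestro's actual specification format (Maestro studies are defined in YAML); the step names and structure here are hypothetical, intended only to show how declaring steps up front separates the "what" of a workflow from its execution.

```python
import subprocess

# Hypothetical, illustrative workflow definition: an ordered list of
# named steps, each with a shell command. Maestro's real YAML study
# specifications are richer than this (parameters, dependencies, etc.).
workflow = [
    {"name": "setup",    "cmd": "echo preparing inputs"},
    {"name": "simulate", "cmd": "echo running simulation"},
    {"name": "analyze",  "cmd": "echo post-processing results"},
]

def run_workflow(steps):
    """Execute each step in order, stopping at the first failure."""
    results = {}
    for step in steps:
        proc = subprocess.run(step["cmd"], shell=True,
                              capture_output=True, text=True)
        results[step["name"]] = proc.returncode
        if proc.returncode != 0:
            break
    return results

print(run_workflow(workflow))
```

Because the workflow is data rather than ad hoc script logic, it can be documented, shared, and re-executed consistently, which is the design goal the paragraph above describes.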
Understanding complex problems with machine learning generally requires large amounts of data, which researchers generate through simulation. Simulations are best run on high-performance computing (HPC) systems, which themselves involve complex setup and management. To simplify running machine-learning-based workflows on HPC systems, LLNL scientists developed Merlin. The goal of Merlin is to make it easy to build, run, and process the kinds of large-scale HPC workflows needed for cognitive simulation. At its heart, Merlin is a distributed task queuing system, designed to allow complex HPC workflows to scale to large numbers of simulations.
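The task-queuing pattern at the heart of Merlin can be sketched in a few lines of Python. This is only a single-process, thread-based stand-in for illustration; Merlin itself distributes tasks across HPC nodes through a dedicated task server, and the function and names below are hypothetical.

```python
import queue
import threading

def run_tasks(tasks, num_workers=4):
    """Drain a shared queue of 'simulation' tasks with a worker pool.

    Each worker repeatedly pulls the next available task and records a
    result; squaring the input here is a stand-in for running one
    simulation. The same producer/consumer pattern lets a real task
    queue scale out by simply adding more workers.
    """
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    for t in tasks:
        q.put(t)

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return  # queue drained; this worker is done
            outcome = t * t  # placeholder for one simulation run
            with lock:
                results.append(outcome)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results)

print(run_tasks(range(5)))  # → [0, 1, 4, 9, 16]
```

Because workers pull tasks independently, throughput grows with the number of workers, which is the property that lets a workflow scale to large ensembles of simulations.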
Lawrence Livermore researchers are the first to develop a practical fiber-optic amplifier that generates significant optical gain from 1,390 nanometers (nm) to 1,460 nm with relatively good efficiency. This discovery could enable installed optical fibers to operate in an untapped spectral region known as the E-band, in addition to the C- and L-bands where they currently operate, effectively doubling a single optical fiber's information-carrying potential.
LLNL’s new amplifier design is based on a novel neodymium-doped microstructured optical fiber that is tailored to preferentially enhance optical signal gain in the E-band while effectively suppressing competing gain in other spectral bands. The new amplifier design is built around the same…