Portfolio News and Webcast
An ongoing Technology Commercialization Fund (TCF) project with Eaton Corporation is producing great results for grid planning. IPO is proud to present this private-sector collaboration at the Innovation XLab Grid Modernization Summit, January 24-25.
DOE's High Performance Computing for Energy Innovation (HPC4EI) Initiative released a call for proposals seeking American companies interested in collaborating with DOE's national laboratories on one-year projects to apply high-performance computing (HPC) modeling, simulation and data analysis to key challenges in U.S. manufacturing and material development.
You will hear about a software tool that uses modeling and simulation to optimize product development and risk management. The Simrev tool shows great promise for maximizing parallelism in the product engineering design phase.
IT and Communications Technologies
The LiDO code combines finite element analysis, design sensitivity analysis, and nonlinear programming in a high-performance computing (HPC) environment, enabling the solution of large-scale structural optimization problems in a computationally efficient manner. Currently, the code uses topology optimization strategies in which a given material is optimally distributed throughout the domain. Originally, the code parameterized the material's characteristic function field as piecewise uniform over the finite elements; however, this proved problematic when implementing LiDO's Adaptive Mesh Refinement (AMR) strategies. LiDO has since implemented higher-level parameterizations for the material's characteristic function field. One such parameterization uses the level-set function of an…
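To illustrate the general idea of a level-set parameterization (this is a minimal NumPy sketch, not LiDO code, and the geometry is invented for illustration):

```python
import numpy as np

# Hedged illustration (not LiDO code): a level-set function phi defined on a
# grid implicitly describes the material layout -- material where phi > 0.
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y)

# Example level set: a circular material inclusion of radius 0.5 at the origin.
phi = 0.5 - np.sqrt(X**2 + Y**2)

# Characteristic function: 1 inside the material, 0 outside. Unlike a
# piecewise-uniform (per-element) field, the level set is a smooth function
# of position, so the material boundary is decoupled from element boundaries,
# which is convenient when the mesh is adaptively refined.
chi = (phi > 0.0).astype(float)

print(round(chi.sum() / chi.size, 3))  # approximate material area fraction
```

The printed fraction approaches the analytic value, pi * 0.25 / 4, as the grid is refined.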
LLNL has developed a new active memory data reorganization engine. In the simplest case, data can be reorganized within the memory system to present a new view of the data. The new view may be a subset or a rearrangement of the original data. As an example, an array of structures might be more efficiently accessed by a CPU as a structure of arrays. Active memory can assemble an alternative representation within the memory package so that bytes sent to the main CPU are in a cache-friendly layout.
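The array-of-structures versus structure-of-arrays distinction can be sketched in a few lines of NumPy (the field names are illustrative, and the reorganization here is done in software only, standing in for what the in-memory engine would assemble):

```python
import numpy as np

# Hedged sketch: an "array of structures" -- interleaved records -- versus the
# "structure of arrays" view that an active memory engine could present so the
# CPU streams each field contiguously.
aos = np.array([(1, 10.0), (2, 20.0), (3, 30.0)],
               dtype=[("id", np.int32), ("value", np.float64)])

# Structure-of-arrays view: each field gathered into its own dense array.
soa = {name: np.ascontiguousarray(aos[name]) for name in aos.dtype.names}

# Summing one field now touches only that field's bytes -- a cache-friendly
# layout compared with striding over interleaved records.
print(soa["value"].sum())  # 60.0
```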
The invention utilizes the statistical nature of radiation transport as well as modern processing techniques to implement a physics-based, sequential statistical processor. By this we mean that instead of accumulating a pulse-height spectrum as is done in many other systems, each photon is processed individually upon arrival and then discarded. As each photon arrives, a decision is refined using the energy deposited as well as the photon arrival time. Detection is declared when such a decision is statistically justified using estimated detection and false alarm probabilities. The result is a system that has the potential to provide improved detection performance with higher reliability and shorter acquisition time.
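The per-photon decision logic resembles a classical sequential probability ratio test. The following is a toy sketch of that general idea, with invented two-bin energy models; it is not LLNL's processor:

```python
import math

# Hedged sketch of the sequential idea (illustrative models, not LLNL's):
# each photon updates a log-likelihood ratio between a "source present"
# energy model and a background model, rather than filling a pulse-height
# spectrum. Detection is declared when the ratio crosses a threshold derived
# from target false-alarm and miss probabilities (Wald-style thresholds).
def sequential_detect(energies, p_source, p_background,
                      p_false_alarm=0.01, p_miss=0.01):
    upper = math.log((1 - p_miss) / p_false_alarm)   # declare "source"
    lower = math.log(p_miss / (1 - p_false_alarm))   # declare "background"
    llr = 0.0
    for n, e in enumerate(energies, start=1):
        # Per-photon likelihoods under each hypothesis; the photon is then
        # discarded -- no spectrum is accumulated.
        llr += math.log(p_source(e) / p_background(e))
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(energies)

# Toy two-bin model: energy bin 1 is a photopeak, overrepresented when a
# source is present.
src = lambda e: 0.7 if e == 1 else 0.3
bkg = lambda e: 0.2 if e == 1 else 0.8
decision, n = sequential_detect([1, 1, 0, 1, 1, 1], src, bkg)
print(decision, n)  # source 6
```

Because the test stops as soon as the evidence is statistically sufficient, strong sources are declared after few photons, which is the mechanism behind the shorter acquisition times claimed above.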
The method has two major innovations over…
LLNL has developed a method of extending device lifetimes by imprinting into the device a shape that excludes specific vibrational modes, otherwise known as a phononic bandgap. Eliminating these modes prevents one of the primary energy loss pathways in these devices. LLNL’s new method enhances the coherence of superconducting circuits by introducing a phononic bandgap around the system’s operating frequency.
LLNL is seeking industry partners to collaborate on quantum science and technology research and development in the following areas: quantum-coherent device physics, quantum materials, quantum–classical interfaces, computing and simulation, and sensing and detection.
To solve these challenges using new and existing CT system designs, LLNL has developed an innovative software package for CT data processing and reconstruction. Livermore Tomography Tools (LTT) is a modern integrated software package that includes all aspects of CT modeling, simulation, reconstruction, and analysis algorithms based on the latest research in the field. LTT contains the most expansive and recently published CT data preprocessing and reconstruction algorithms available.
MimicGAN represents a new generation of methods that can "self-correct" for unseen corruptions in data out in the field. This is particularly useful for systems that must be deployed autonomously without constant intervention, such as Advanced Driver Assistance Systems. MimicGAN achieves this by treating every test sample as "corrupt" by default. The goal is to determine (a) the clean image and (b) the corruption, both of which are unknown to the system at test time. MimicGAN solves this by making alternating guesses between what the clean sample should look like and what corruption might make it look like the observed corrupted sample. If there is no corruption at all, MimicGAN simply learns the corruption to be an identity transform, i.e., no corruption.
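The alternating-guess scheme can be illustrated with a deliberately tiny analogue. In this sketch, which is not the actual GAN-based method, the "clean" sample is constrained to a small known set (standing in for a generative model's manifold) and the corruption is an unknown scalar gain:

```python
import numpy as np

# Hedged toy analogue of MimicGAN's alternating scheme (not the real method):
# alternate between guessing the clean sample and refitting the corruption.
def alternating_recover(observed, candidates, iters=10):
    gain = 1.0          # start by assuming no corruption (identity transform)
    best = candidates[0]
    for _ in range(iters):
        # Step 1: given the current corruption guess, pick the clean candidate
        # that best explains the observation.
        best = min(candidates, key=lambda c: np.sum((gain * c - observed) ** 2))
        # Step 2: given the clean guess, refit the corruption by least squares.
        gain = float(observed @ best) / float(best @ best)
    return best, gain

# Four candidate "clean" signals (unit basis vectors) and an observation that
# is candidate 2 dimmed by an unknown gain of 0.5.
candidates = [np.eye(4)[i] for i in range(4)]
observed = 0.5 * candidates[2]
clean, gain = alternating_recover(observed, candidates)
print(clean.tolist(), gain)  # [0.0, 0.0, 1.0, 0.0] 0.5
```

The loop recovers both unknowns at once; if the observation were uncorrupted, the fitted gain would stay at 1.0, the identity transform, mirroring the behavior described above.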
LLNL has developed a new method for securely processing protected data on HPC systems with minimal impact on the existing HPC operations and execution environment. It can be used with no alterations to traditional HPC operations and can be managed locally. It is fully compatible with traditional (unencrypted) processing and can run other jobs, unencrypted or not, on the cluster simultaneously. The method has been prototyped and is continuing to be developed at LLNL.
Simrev is a Python library imported into a user-generated program. As the program grows in capability and complexity, the engineered product matures. The "software twin" handles all changes to product configuration and is the portal to running supercomputing analysis and managing workflow for engineering simulation codes. Assemblies become program modules; parts, materials, boundary conditions, and contact interfaces become user-defined classes or library-provided objects; and Simrev handles mesh export, input translation, and batch job submission. Simrev has been used to develop models that run in LLNL-developed analysis codes ALE3D, ParaDyn, NIKE3D, and Diablo.
Simrev contains patent-pending technology where the version-control state of the software-twin can be mapped one…
LLNL has developed a new system, called the Segmentation Ensembles System, that provides a simple and general way to fuse high-level and low-level information and leads to a substantial increase in overall performance of digital image analysis. LLNL researchers have demonstrated the effectiveness of the approach on applications ranging from automatic threat detection for airport security, to natural images and cancer detection in medical CT images. Furthermore, LLNL’s approach naturally leads to a big data type approach for unsupervised problems able to exploit massive amounts of unlabeled data in lieu of ground truth data, which is often difficult and expensive to acquire. LLNL has filed a patent application on the new system and is interested in continuing development focused on…
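As a generic illustration of fusing an ensemble of segmentations (this is a textbook pixel-wise vote, not LLNL's patent-pending system), agreement across diverse segmenters can stand in for higher-level context:

```python
import numpy as np

# Illustrative only -- not the Segmentation Ensembles System: one generic way
# to fuse an ensemble of candidate segmentations is a per-pixel majority vote.
masks = np.array([
    [[0, 1, 1],
     [0, 1, 0]],
    [[0, 1, 1],
     [1, 1, 0]],
    [[0, 0, 1],
     [0, 1, 0]],
])  # three binary segmentations of the same 2x3 image

votes = masks.sum(axis=0)             # per-pixel agreement count
fused = (votes >= 2).astype(int)      # keep pixels a majority agrees on
print(fused.tolist())  # [[0, 1, 1], [0, 1, 0]]
```

Pixels where segmenters disagree (voted 1 of 3) are suppressed, so spurious detections from any single low-level segmenter are filtered out by the ensemble.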
LLNL's new "Catalyst" supercomputer is now available for collaborative projects with American industry. Developed by a partnership with Cray and Intel, the novel architecture behind this high performance computing (HPC) cluster is intended to serve as a proving ground for new HPC and Big Data technologies and algorithms.
Catalyst boasts nearly a terabyte of addressable memory per compute node through the addition of 128 gigabytes (GB) of dynamic random access memory (DRAM) per node and 800 GB of non-volatile memory (NVRAM) per node in the form of PCIe high-bandwidth Intel Solid State Drives (SSD). Additionally, each Lustre router node contains 3.2 terabytes (TB) of NVRAM. Improved cluster networking is achieved with dual rail Quad Data Rate (QDR-80) Intel TrueScale fabrics.…
The Discriminant Random Forest combines the advantages of several methodologies and techniques to produce lower classification error rates.
LLNL’s technology does not use battery-powered tags. Rather, it uses a tag technology that has the same range characteristics as battery-powered tags (approximately 10 m) but without a conventional battery.