Nvidia Corp. is expanding its product portfolio with new software capabilities and a high-performance computing platform that scientists can use to accelerate research initiatives.
The updates were set to make their debut today at the annual Supercomputing 2022 (SC22) event, where Nvidia also will detail its work with Lockheed Martin Corp. to build a system for visualizing geophysics data such as sea temperature measurements.
Digital twin collaboration
The U.S. National Oceanic and Atmospheric Administration has selected Nvidia and Lockheed Martin to build a new computing system dubbed the Earth Observation Digital Twin, or EODT. The system will be capable of processing multiple types of geophysics data ranging from sea temperature measurements to solar wind information. Using this data, it will generate climate and weather visualizations to support research initiatives.
EODT will run on Amazon Web Services Inc. cloud instances equipped with graphics processing units. Additionally, it will perform some computing tasks using systems from Nvidia’s DGX and OVX data center appliance lineups. The appliances include built-in GPUs optimized to run workloads such as artificial intelligence applications.
According to Nvidia, the software architecture of the system likewise comprises multiple components.
Lockheed Martin’s open-source OpenRosetta3D application will be used to collect the geophysics data that EODT will process. Once the data is collected, it will be moved to Nvidia’s Omniverse Nucleus database for processing. Another component of the system is a software tool called Agatha, which was developed by Lockheed Martin and makes it easier for researchers to interact with geophysics data collected from multiple sources.
The first demonstration of EODT’s capabilities is set to take place next September. According to Nvidia, the initial prototype of the system will be designed to visualize sea surface temperature data.
New edge platform
Nvidia also plans to debut several additions to its product portfolio at SC22. The first addition is a platform that will enable researchers to more easily move scientific data between servers and other systems that are far apart from one another.
There are many situations where researchers require the ability to send data over long distances. A university, for example, might wish to send measurements from a scientific instrument located in one facility to a supercomputer hosted in a different facility. Similarly, researchers might wish to share simulation results between multiple supercomputers running at different locations.
“To overcome this problem, Nvidia has introduced a high-performance computing platform that combines edge computing and AI to capture and consolidate streaming data from scientific edge instruments, and then allow devices to talk to each other over long distances,” Senior Product Manager Geetika Gupta detailed in a blog post.
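The core pattern Gupta describes is an edge instrument streaming measurements to a remote collector that consolidates them. The sketch below illustrates that pattern at toy scale with a plain TCP socket; the sensor name, payload format and loopback addressing are invented for illustration, and in Nvidia's platform this traffic would instead traverse MetroX-3 links with BlueField-accelerated networking.

```python
import json
import socket
import threading

def serve_measurements(host="127.0.0.1", port=0):
    """Toy 'edge instrument' that streams a few measurements over TCP.

    Returns the port the instrument is listening on (port=0 lets the
    OS pick a free one).
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]

    def _run():
        conn, _ = srv.accept()
        with conn:
            # Hypothetical sea-surface-temperature readings, one JSON
            # object per line.
            for i in range(3):
                sample = {"sensor": "sst", "reading": 20.0 + i}
                conn.sendall((json.dumps(sample) + "\n").encode())
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return port

def collect(port, host="127.0.0.1"):
    """Consolidate the stream on the 'data center' side."""
    with socket.create_connection((host, port)) as conn:
        return [json.loads(line) for line in conn.makefile()]
```

Running `collect(serve_measurements())` yields the three consolidated samples; the long-distance transport and GPU-side processing are exactly the parts the new platform is meant to handle.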
The platform is based on three technologies from Nvidia’s product portfolio: MetroX-3 networking systems, the Holoscan software development kit and BlueField-3 chips.
MetroX-3 is a technology slated to launch next month that can significantly extend the range of a data center network. With the technology, a data center can be connected to information technology infrastructure located more than 25 miles away. Researchers can use network links powered by MetroX-3 to move scientific data among servers located in different facilities.
Nvidia’s new platform also incorporates Holoscan and BlueField-3 chips. Holoscan is a collection of tools that researchers can use to process data from medical devices. Nvidia’s BlueField-3 chips, in turn, are specialized processors optimized for tasks such as coordinating network traffic among servers.
Omniverse update
Nvidia offers a software development platform called Omniverse that can be used to create digital twins and simulations. At SC22 today, Nvidia was set to debut an update to Omniverse that will make it easier for scientists to use the platform as part of research projects.
Omniverse can now be used to run batch workloads on data center systems powered by Nvidia’s H100 and A100 graphics cards. Batch workloads are applications such as physics simulators that don’t necessarily require manual user input to complete calculations. Thanks to the update, research organizations with servers that contain H100 and A100 chips can more easily use the machines to run Omniverse-powered scientific software.
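To make the batch-workload idea concrete, here is a minimal, generic sketch: a parameter sweep over a simple physics calculation that runs start to finish with no user interaction. The projectile model and parameter values are invented for illustration and have nothing to do with Omniverse's own APIs.

```python
import math

def projectile_range(speed, angle_deg, g=9.81):
    """Analytic range of a drag-free projectile on flat ground."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

def run_batch(angles, speed=30.0):
    """Run the whole parameter sweep without any manual input,
    which is what makes it a batch workload."""
    return {a: projectile_range(speed, a) for a in angles}
```

A scheduler can queue many such jobs on GPU-equipped servers and collect the results later, which is the usage pattern the Omniverse update targets for physics simulators.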
As part of the update, Nvidia is also integrating Omniverse with a number of popular scientific applications. The applications include the ParaView, IndeX and NeuralVDB tools for visualizing scientific data. Omniverse will now also work with Modulus, a software tool for building neural networks that automatically perform physics calculations.
Quantum computing features
Quantum computing is one of the areas where the company’s GPUs are used to support research initiatives. To ease scientists’ work, Nvidia is releasing two new features specifically focused on quantum computing.
The first feature is rolling out for the company’s CUDA Toolkit. That’s a set of software components for building applications that can run on Nvidia graphics cards. The new feature is aimed at helping optimize the performance of scientific applications that perform quantum mechanical calculations.
In conjunction, Nvidia is updating its cuQuantum framework. Researchers use the framework to run simulated quantum computers on conventional computing hardware. Thanks to the update announced today, cuQuantum now makes it possible to simulate quantum computers with up to tens of thousands of qubits.
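The kind of math such simulators perform can be illustrated with a tiny state-vector simulation in NumPy. This is only a sketch of the general technique, not cuQuantum's API: the state vector grows as 2^n in the qubit count n, which is why GPU acceleration (and, at tens of thousands of qubits, tensor-network methods rather than full state vectors) matters.

```python
import numpy as np

# Hadamard gate, the standard single-qubit superposition gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_gate(state, gate, target, n_qubits):
    """Apply a one-qubit gate to `target` of an n-qubit state vector.

    Builds the full 2^n x 2^n operator via Kronecker products; the
    exponential cost is exactly what GPU simulators accelerate.
    """
    op = np.eye(1)
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

def apply_cnot(state, control, target, n_qubits):
    """Apply CNOT by permuting basis-state amplitudes directly."""
    new = state.copy()
    for i in range(len(state)):
        if (i >> (n_qubits - 1 - control)) & 1:
            # Control bit set: flip the target bit of this basis state.
            new[i ^ (1 << (n_qubits - 1 - target))] = state[i]
    return new
```

Starting from |00>, a Hadamard on qubit 0 followed by a CNOT yields the entangled Bell state (|00> + |11>)/sqrt(2), a standard first test of any quantum simulator.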