eScholarship
Open Access Publications from the University of California

About

The mission of Computing Sciences at Berkeley Lab is to achieve transformational, breakthrough impacts in scientific domains through the discovery and use of advanced computational methods and systems, and to make those instruments accessible to the broad scientific community.

Computing Sciences

There are 5959 publications in this collection, published between 1962 and 2024.
Applied Math & Comp Sci (2094)

Real time evolution for ultracompact Hamiltonian eigenstates on quantum hardware

In this work we present a detailed analysis of variational quantum phase estimation (VQPE), a method based on real-time evolution for ground- and excited-state estimation on near-term hardware. We derive the theoretical foundations of the approach and demonstrate that it provides one of the most compact variational expansions to date for solving strongly correlated Hamiltonians. At the center of VQPE lies a set of equations with a simple geometrical interpretation, which gives conditions on the time-evolution grid needed to decouple eigenstates from the set of time-evolved expansion states, and which connects the method to the classical filter diagonalization algorithm. Further, we introduce what we call the unitary formulation of VQPE, in which the number of matrix elements that must be measured scales linearly with the number of expansion states, and we provide an analysis of the effects of noise that substantially improves on previous treatments. The unitary formulation allows for a direct comparison to iterative phase estimation. Our results mark VQPE as both a natural and a highly efficient quantum algorithm for ground- and excited-state calculations of general many-body systems. We demonstrate a hardware implementation of VQPE for the transverse-field Ising model. Further, we illustrate its power on a paradigmatic example of strong correlation (Cr2 in the SVP basis set) and show that it is possible to reach chemical accuracy with as few as ~50 timesteps.
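
The linear algebra at the heart of VQPE can be sketched classically: build a basis of real-time-evolved states and solve the resulting generalized eigenvalue problem. The Python below is an illustration of that idea only, not the paper's hardware algorithm; the small transverse-field Ising Hamiltonian, the reference state, the timestep, and the regularization threshold are all assumed demo choices.

```python
# Classical emulation of the VQPE idea: a Krylov-like basis of real-time-evolved
# states, overlap and Hamiltonian matrices in that basis, and a regularized
# generalized eigenvalue problem H c = E S c.
import numpy as np
from scipy.linalg import expm, eigh

def tfim_hamiltonian(n_sites, J=1.0, h=1.0):
    """Dense transverse-field Ising Hamiltonian on a small open chain (illustrative)."""
    I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
    def op(single, site):
        mats = [single if i == site else I for i in range(n_sites)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H = sum(-J * op(Z, i) @ op(Z, i + 1) for i in range(n_sites - 1))
    return H + sum(-h * op(X, i) for i in range(n_sites))

n, n_states, dt = 4, 8, 0.4                 # small, arbitrary demo parameters
H = tfim_hamiltonian(n)
phi0 = np.ones(2 ** n) / np.sqrt(2 ** n)    # reference state with ground-state overlap

# Time-evolved expansion states |phi_k> = exp(-i H t_k) |phi_0> on an even grid
U_dt = expm(-1j * H * dt)
basis = [phi0]
for _ in range(n_states - 1):
    basis.append(U_dt @ basis[-1])
B = np.column_stack(basis)

S = B.conj().T @ B                          # overlap matrix
Hm = B.conj().T @ H @ B                     # Hamiltonian in the expansion basis

# Regularize S (it becomes ill-conditioned as the states overlap) and solve H c = E S c
vals, vecs = np.linalg.eigh(S)
keep = vals > 1e-10
P = vecs[:, keep] / np.sqrt(vals[keep])
E, _ = eigh(P.conj().T @ Hm @ P)
print("VQPE-style estimate of the ground energy:", E[0])
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```

On a quantum device the entries of S and Hm would be estimated from measured overlaps rather than computed exactly, which is the part addressed by the noise analysis described in the abstract.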

EXAGRAPH: Graph and combinatorial methods for enabling exascale applications

Combinatorial algorithms in general, and graph algorithms in particular, play a critical enabling role in numerous scientific applications. However, their irregular memory-access patterns make them among the hardest algorithmic kernels to implement on parallel systems. With tens of billions of hardware threads and deep memory hierarchies, exascale computing systems pose extreme challenges for scaling graph algorithms. The codesign center on combinatorial algorithms, ExaGraph, was established to design and develop methods and techniques for the efficient implementation of key combinatorial (graph) algorithms chosen from a diverse set of exascale applications. Algebraic and combinatorial methods play complementary roles in the advancement of computational science and engineering, each often enabling the other. In this paper, we survey the algorithmic and software development activities performed under the auspices of ExaGraph from both a combinatorial and an algebraic perspective. In particular, we detail our recent efforts in porting the algorithms to manycore accelerator (GPU) architectures. We also provide a brief survey of the applications that have benefited from scalable implementations of different combinatorial algorithms to enable scientific discovery at scale. We believe that several applications will benefit from the algorithmic and software tools developed by the ExaGraph team.
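
As a concrete illustration of the kind of irregular kernel described above, here is a minimal greedy 1/2-approximate maximum-weight matching in Python. It is a generic textbook sketch, not ExaGraph code; the data-dependent lookups into the matched table are what make such kernels hard to vectorize and scale on GPUs.

```python
# Greedy 1/2-approximate maximum-weight matching: sort edges by weight, then take
# any edge whose endpoints are still free. The membership checks follow a
# data-dependent, irregular access pattern into the `matched` table.
from typing import Dict, List, Tuple

def greedy_weighted_matching(edges: List[Tuple[int, int, float]]) -> Dict[int, int]:
    """Return a matching (vertex -> partner) with at least half the optimal weight."""
    matched: Dict[int, int] = {}
    for u, v, _w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u != v and u not in matched and v not in matched:
            matched[u] = v
            matched[v] = u
    return matched

edges = [(0, 1, 5.0), (1, 2, 4.0), (2, 3, 3.0), (0, 3, 1.0)]
print(greedy_weighted_matching(edges))   # {0: 1, 1: 0, 2: 3, 3: 2}
```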

2091 more works
NERSC (1831)

Measurement of the inclusive cross-sections of single top-quark and top-antiquark t-channel production in pp collisions at √s = 13 TeV with the ATLAS detector

A measurement of the t-channel single-top-quark and single-top-antiquark production cross-sections in the lepton+jets channel is presented, using 3.2 fb−1 of proton-proton collision data at a centre-of-mass energy of 13 TeV, recorded with the ATLAS detector at the LHC in 2015. Events are selected by requiring one charged lepton (electron or muon), missing transverse momentum, and two jets with high transverse momentum, exactly one of which is required to be b-tagged. Using a binned maximum-likelihood fit to the discriminant distribution of a neural network, the cross-sections are determined to be σ(tq) = 156 ± 5 (stat.) ± 27 (syst.) ± 3 (lumi.) pb for single top-quark production and σ(t̄q) = 91 ± 4 (stat.) ± 18 (syst.) ± 2 (lumi.) pb for single top-antiquark production, assuming a top-quark mass of 172.5 GeV. The cross-section ratio is measured to be Rt = σ(tq)/σ(t̄q) = 1.72 ± 0.09 (stat.) ± 0.18 (syst.). All results are in agreement with Standard Model predictions.
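
A rough back-of-the-envelope check using only the quoted central values (the published ratio comes from the combined fit, in which parts of the systematic uncertainties cancel):

```latex
R_t \,=\, \frac{\sigma(tq)}{\sigma(\bar{t}q)} \,\approx\, \frac{156~\mathrm{pb}}{91~\mathrm{pb}} \,\approx\, 1.71
```

which is consistent with the fitted Rt = 1.72 ± 0.09 (stat.) ± 0.18 (syst.).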

Combined search for neutrinos from dark matter self-annihilation in the Galactic Center with ANTARES and IceCube

We present the results of the first combined dark matter search targeting the Galactic Center using the ANTARES and IceCube neutrino telescopes. For dark matter particles with masses from 50 to 1000 GeV, the sensitivities on the self-annihilation cross section set by ANTARES and IceCube are comparable, making this mass range particularly interesting for a joint analysis. Dark matter self-annihilation through the τ+τ−, μ+μ−, bb̄, and W+W− channels is considered for both the Navarro-Frenk-White and Burkert halo profiles. In the combination of 2101.6 days of ANTARES data and 1007 days of IceCube data, no excess over the expected background is observed. Limits on the thermally averaged dark matter annihilation cross section ⟨σAυ⟩ are set. These limits represent an improvement of up to a factor of 2 in the studied dark matter mass range with respect to the individual limits published by both collaborations. When considering dark matter particles with a mass of 200 GeV annihilating through the τ+τ− channel, the value obtained for the limit is 7.44 × 10−24 cm3 s−1 for the Navarro-Frenk-White halo profile. For the purpose of this joint analysis, the model parameters and the likelihood are unified, providing a benchmark for forthcoming dark matter searches performed by neutrino telescopes.
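
The "unified likelihood" mentioned above can be sketched generically: a single annihilation cross section scales the expected signal in both detectors, and the per-detector Poisson likelihoods are multiplied before a limit is set. The Python below is a schematic with invented counts, backgrounds, and acceptances (flagged as such in the comments); it is not the collaborations' analysis code.

```python
# Schematic joint-likelihood combination: one shared physics parameter <sigma v>
# scales the expected signal in each detector; the Poisson log-likelihoods are summed.
# All numbers below are made up purely for illustration.
import numpy as np
from scipy.stats import poisson

# Hypothetical per-detector inputs: observed counts, expected background,
# and acceptance (expected signal counts per unit <sigma v> in cm^3 s^-1).
detectors = {
    "ANTARES": {"n_obs": 210, "bkg": 205.0, "acc_per_sigmav": 4.0e24},
    "IceCube": {"n_obs": 118, "bkg": 120.0, "acc_per_sigmav": 6.0e24},
}

def joint_log_likelihood(sigmav: float) -> float:
    """Sum of Poisson log-likelihoods, with the signal tied to one shared <sigma v>."""
    total = 0.0
    for d in detectors.values():
        mu = d["bkg"] + sigmav * d["acc_per_sigmav"]
        total += poisson.logpmf(d["n_obs"], mu)
    return total

# Scan <sigma v> and quote a 90% CL upper limit from the log-likelihood-ratio profile
# (Wilks' approximation; a one-sided 90% CL corresponds to 2*Delta lnL ~ 1.64).
grid = np.logspace(-26, -22, 400)
lnL = np.array([joint_log_likelihood(s) for s in grid])
lnL_max = max(lnL.max(), joint_log_likelihood(0.0))
upper = grid[2 * (lnL_max - lnL) <= 1.64].max()
print(f"illustrative 90% CL upper limit on <sigma v>: {upper:.2e} cm^3 s^-1")
```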

1828 more works
Scientific Data (2331)

High-Performance Computational Intelligence and Forecasting Technologies

This report provides an introduction to the Computational Intelligence and Forecasting Technologies (CIFT) project at Lawrence Berkeley National Laboratory (LBNL). The main objective of CIFT is to promote the use of high-performance computing (HPC) tools and techniques for the analysis of streaming data. After data volume was cited as the explanation for the five-month delay in the SEC and CFTC issuing their report on the 2010 Flash Crash, LBNL started the CIFT project to apply HPC technologies to manage and analyze financial data. Making timely decisions with streaming data is a requirement for many different applications, such as avoiding impending failure in the electric power grid or a liquidity crisis in financial markets. In all of these cases, HPC tools are well suited to handling the complex data dependencies and providing timely solutions. Over the years, CIFT has worked on a number of different forms of streaming data, including data from vehicle traffic, the electric power grid, and electricity usage. The following sections explain the key features of HPC systems, introduce a few special tools used on these systems, and provide examples of streaming data analyses using these HPC tools.

Measurement of the inclusive cross-sections of single top-quark and top-antiquark t-channel production in pp collisions at √s = 13 TeV with the ATLAS detector

A measurement of the t-channel single-top-quark and single-top-antiquark production cross-sections in the lepton+jets channel is presented, using 3.2 fb−1 of proton-proton collision data at a centre-of-mass energy of 13 TeV, recorded with the ATLAS detector at the LHC in 2015. Events are selected by requiring one charged lepton (electron or muon), missing transverse momentum, and two jets with high transverse momentum, exactly one of which is required to be b-tagged. Using a binned maximum-likelihood fit to the discriminant distribution of a neural network, the cross-sections are determined to be σ(tq) = 156 ± 5 (stat.) ± 27 (syst.) ± 3 (lumi.) pb for single top-quark production and σ(t̄q) = 91 ± 4 (stat.) ± 18 (syst.) ± 2 (lumi.) pb for single top-antiquark production, assuming a top-quark mass of 172.5 GeV. The cross-section ratio is measured to be Rt = σ(tq)/σ(t̄q) = 1.72 ± 0.09 (stat.) ± 0.18 (syst.). All results are in agreement with Standard Model predictions.

Constraining Reionization with the z ∼ 5–6 Lyα Forest Power Spectrum: The Outlook after Planck

The latest measurements of the cosmic microwave background electron-scattering optical depth reported by Planck significantly reduce the allowed space of reionization models, pointing toward a later-ending and/or less extended phase transition than previously believed. Reionization impulsively heats the intergalactic medium (IGM) to roughly 10^4 K, and owing to the long cooling and dynamical times in the diffuse gas, which are comparable to the Hubble time, memory of this heating is retained. Therefore, a late-ending reionization has significant implications for the structure of the Lyα forest. Using state-of-the-art hydrodynamical simulations that allow us to vary the timing of reionization and its associated heat injection, we argue that extant thermal signatures from reionization can be detected via the Lyα forest power spectrum at z ∼ 5–6. This arises because the small-scale cutoff in the power depends not only on the IGM temperature at these epochs, but is also particularly sensitive to the pressure-smoothing scale set by the IGM's full thermal history. Comparing our different reionization models with existing measurements of the Lyα forest flux power spectrum at these redshifts, we find that models satisfying Planck's constraint favor a moderate amount of heat injection, consistent with galaxies driving reionization but disfavoring quasar-driven scenarios. We study the feasibility of measuring the flux power spectrum at these redshifts using mock quasar spectra and conclude that a sample of ∼10 high-resolution spectra with an attainable signal-to-noise ratio would allow one to distinguish between different reionization scenarios.
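
The quantity being forecast here, the 1D Lyα forest flux power spectrum, can be estimated from a single transmitted-flux skewer with a short FFT-based routine. The sketch below uses random noise as a stand-in for a simulated or observed spectrum, with arbitrary pixel and binning choices; it illustrates the estimator only, not the paper's pipeline.

```python
# Estimate the 1D flux power spectrum P_F(k) from a transmitted-flux skewer F(v),
# using delta_F = F/<F> - 1 and a standard FFT normalization in velocity units.
import numpy as np

def flux_power_1d(flux: np.ndarray, dv_kms: float, n_bins: int = 20):
    """Return (k, P_F(k)) with k in s/km, binned logarithmically."""
    delta = flux / flux.mean() - 1.0
    n = delta.size
    length = n * dv_kms                                # total velocity length of the skewer
    dft = np.fft.rfft(delta)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv_kms)     # wavenumbers in s/km
    power = (np.abs(dft) ** 2) * dv_kms ** 2 / length  # |continuum FT|^2 / L
    # Average the raw power into logarithmic k bins, dropping the k = 0 mode.
    edges = np.logspace(np.log10(k[1]), np.log10(k[-1]), n_bins + 1)
    idx = np.digitize(k[1:], edges)
    k_binned = np.array([k[1:][idx == i].mean() for i in range(1, n_bins + 1) if np.any(idx == i)])
    p_binned = np.array([power[1:][idx == i].mean() for i in range(1, n_bins + 1) if np.any(idx == i)])
    return k_binned, p_binned

rng = np.random.default_rng(0)
mock_flux = np.clip(0.3 + 0.1 * rng.standard_normal(4096), 0.0, 1.0)  # stand-in skewer
k, pk = flux_power_1d(mock_flux, dv_kms=10.0)
print(k[:3], pk[:3])
```

In practice the distinguishing power between reionization models comes from how the small-scale (high-k) end of this statistic is suppressed by thermal broadening and pressure smoothing, as described in the abstract.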

2328 more works
Scientific Networking (377)

High-Performance Computational Intelligence and Forecasting Technologies

This report provides an introduction to the Computational Intelligence and Forecasting Technologies (CIFT) project at Lawrence Berkeley National Laboratory (LBNL). The main objective of CIFT is to promote the use of high-performance computing (HPC) tools and techniques for the analysis of streaming data. After data volume was cited as the explanation for the five-month delay in the SEC and CFTC issuing their report on the 2010 Flash Crash, LBNL started the CIFT project to apply HPC technologies to manage and analyze financial data. Making timely decisions with streaming data is a requirement for many different applications, such as avoiding impending failure in the electric power grid or a liquidity crisis in financial markets. In all of these cases, HPC tools are well suited to handling the complex data dependencies and providing timely solutions. Over the years, CIFT has worked on a number of different forms of streaming data, including data from vehicle traffic, the electric power grid, and electricity usage. The following sections explain the key features of HPC systems, introduce a few special tools used on these systems, and provide examples of streaming data analyses using these HPC tools.

High Energy Physics Network Requirements Review: Two-Year Update

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community. ESnet interconnects DOE national laboratories, user facilities, and major experiments so that scientists can use remote instruments and computing resources as well as share data with collaborators, transfer large datasets, and access distributed data repositories. ESnet is specifically built to provide a range of network services tailored to meet the unique requirements of the DOE's data-intensive science. In July 2023, ESnet and the High Energy Physics (HEP) program of the DOE SC organized an interim ESnet requirements review of HEP-supported activities, to follow up on the work started during the 2020 HEP Network Requirements Review. Preparation for the review included checking back with the key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare updates to its previously submitted case study documents, so that ESnet could update its understanding of any changes to the current, near-term, and long-term status, expectations, and processes that support the science activities of the program.

Algal genomes reveal evolutionary mosaicism and the fate of nucleomorphs

Cryptophyte and chlorarachniophyte algae are transitional forms in the widespread secondary endosymbiotic acquisition of photosynthesis by engulfment of eukaryotic algae. Unlike most secondary plastid-bearing algae, miniaturized versions of the endosymbiont nuclei (nucleomorphs) persist in cryptophytes and chlorarachniophytes. To determine why, and to address other fundamental questions about eukaryote-eukaryote endosymbiosis, we sequenced the nuclear genomes of the cryptophyte Guillardia theta and the chlorarachniophyte Bigelowiella natans. Both genomes have 21,000 protein genes and are intron rich, and B. natans exhibits unprecedented alternative splicing for a single-celled organism. Phylogenomic analyses and subcellular targeting predictions reveal extensive genetic and biochemical mosaicism, with both host- and endosymbiont-derived genes servicing the mitochondrion, the host cell cytosol, the plastid and the remnant endosymbiont cytosol of both algae. Mitochondrion-to-nucleus gene transfer still occurs in both organisms but plastid-to-nucleus and nucleomorph-to-nucleus transfers do not, which explains why a small residue of essential genes remains locked in each nucleomorph.

374 more works