Papers updated in last 183 days (1733 results)

Last updated:  2025-03-04
Recover from Excessive Faults in Partially-Synchronous BFT SMR
Tiantian Gong, Gustavo Franco Camilo, Kartik Nayak, Andrew Lewis-Pye, and Aniket Kate
Byzantine fault-tolerant (BFT) state machine replication (SMR) protocols form the basis of modern blockchains, as they maintain a consistent state across all blockchain nodes while tolerating a bounded number of Byzantine faults. We analyze BFT SMR in the excessive fault setting, where the actual number of Byzantine faults surpasses a protocol's tolerance. We start by devising the first repair algorithm for linearly chained and quorum-based partially synchronous SMR to recover from faulty states caused by excessive faults. Such a procedure can be realized using any commission fault detection module -- an algorithm that identifies the faulty replicas without falsely locating any correct replica. We achieve this with a slightly weaker liveness guarantee, as the original security notion is impossible to satisfy given excessive faults. We implement recoverable HotStuff in Rust. The throughput returns to the normal level (without excessive faults) after the recovery routines terminate for replicas and is slightly reduced for replicas. On average, recovery increases the latency for replicas and for replicas. Aside from adopting existing detection modules, we also establish a sufficient condition for a general BFT SMR protocol to allow for complete and sound fault detection when up to Byzantine replicas (out of total replicas) attack safety. We provide the first closed-box fault detection algorithm for any SMR protocol without any extra rounds of communication. We then describe open-box instantiations of our fault detection routines in Tendermint and HotStuff, further reducing the overhead both asymptotically and concretely.
Last updated:  2025-03-04
An Undetectable Watermark for Generative Image Models
Sam Gunn, Xuandong Zhao, and Dawn Song
We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which guarantees undetectability and robustness. We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.
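The latent-selection idea can be illustrated with a toy sketch (ours, not the authors' construction: a keyed hash stands in for the pseudorandom error-correcting code, which sacrifices the robustness and the security against adaptive queries that the PRC provides). Each Gaussian latent coordinate keeps its magnitude while its sign carries one pseudorandom bit:

```python
import hashlib, random

def prf_bits(key: bytes, n: int) -> list:
    # Toy PRF: expand a key into n pseudorandom bits via SHA-256 (illustrative only).
    bits, counter = [], 0
    while len(bits) < n:
        h = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        for byte in h:
            for i in range(8):
                bits.append((byte >> i) & 1)
        counter += 1
    return bits[:n]

def watermarked_latents(key: bytes, n: int, rng: random.Random) -> list:
    # The sign of each Gaussian latent encodes one keyed bit; since |z| is kept,
    # every coordinate is still marginally N(0, 1).
    bits = prf_bits(key, n)
    return [abs(rng.gauss(0, 1)) * (1 if b else -1) for b in bits]

def detect(key: bytes, latents: list) -> float:
    # Fraction of sign agreements with the keyed bit stream; ~0.5 without the key.
    bits = prf_bits(key, len(latents))
    agree = sum((z > 0) == bool(b) for z, b in zip(latents, bits))
    return agree / len(latents)

rng = random.Random(0)
z = watermarked_latents(b"secret", 4096, rng)
print(detect(b"secret", z))  # ≈ 1.0 with the right key
print(detect(b"other", z))   # ≈ 0.5 with the wrong key
```

In the real scheme, the bit stream is a PRC codeword, which is what makes the watermark both decodable under noise and undetectable to efficient adversaries.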
Last updated:  2025-03-04
Is ML-Based Cryptanalysis Inherently Limited? Simulating Cryptographic Adversaries via Gradient-Based Methods
Avital Shafran, Eran Malach, Thomas Ristenpart, Gil Segev, and Stefano Tessaro
Given the recent progress in machine learning (ML), the cryptography community has started exploring the applicability of ML methods to the design of new cryptanalytic approaches. While current empirical results show promise, the extent to which such methods may outperform classical cryptanalytic approaches is still somewhat unclear. In this work, we initiate exploration of the theory of ML-based cryptanalytic techniques, in particular providing new results towards understanding whether they are fundamentally limited compared to traditional approaches. Whereas most classic cryptanalysis crucially relies on directly processing individual samples (e.g., plaintext-ciphertext pairs), modern ML methods thus far only interact with samples via gradient-based computations that average a loss function over all samples. It is, therefore, conceivable that such gradient-based methods are inherently weaker than classical approaches. We introduce a unifying framework for capturing both ``sample-based'' adversaries that are provided with direct access to individual samples and ``gradient-based'' ones that are restricted to issuing gradient-based queries that are averaged over all given samples via a loss function. Within our framework, we establish a general feasibility result showing that any sample-based adversary can be simulated by a seemingly-weaker gradient-based one. Moreover, the simulation exhibits a nearly optimal overhead in terms of the gradient-based simulator's running time. Finally, we extend and refine our simulation technique to construct a gradient-based simulator that is fully parallelizable (crucial for avoiding an undesirable overhead for parallelizable cryptanalytic tasks), which is then used to construct a gradient-based simulator that executes the particular and highly useful gradient-descent method. 
Taken together, although the extent to which ML methods may outperform classical cryptanalytic approaches is still somewhat unclear, our results indicate that such gradient-based methods are not inherently limited by their seemingly restricted access to the provided samples.
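To see why restricting an adversary to averaged gradient queries is not obviously limiting, consider a toy example of ours (far simpler than the paper's simulation result): a loss whose gradient is an indicator function turns a single averaged query into one histogram bin of the hidden samples.

```python
# Toy gradient oracle: the adversary never sees the samples, only the average
# over them of the per-sample gradient of a loss function it chooses.
hidden = [3, 1, 4, 1, 5, 9, 2, 6]  # samples held by the oracle

def gradient_oracle(grad_fn):
    return sum(grad_fn(x) for x in hidden) / len(hidden)

# For loss(theta; x) = theta * [x == v], the gradient in theta is the indicator
# [x == v], so one averaged gradient query per value v yields one histogram bin.
def histogram(values):
    return {v: gradient_oracle(lambda x, v=v: 1.0 if x == v else 0.0)
            for v in values}

hist = histogram(range(10))
print(hist[1])  # 0.25: the value 1 appears in 2 of the 8 hidden samples
```

The paper's simulation is far more general, but this illustrates the basic point: averaging does not automatically hide per-sample information from an adversary who chooses the loss.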
Last updated:  2025-03-04
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
Mohammad Hashemi, Shahin Tajik, and Fatemeh Ganji
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based multi-party computation (MPC) to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., garbled AES runs in 0.0226 ms) in order to guarantee the security and privacy of the computation on chiplets.
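The garbling/evaluation split described above can be made concrete with a minimal single-gate sketch (a generic, textbook-style garbled AND gate in Python, not Garblet's hardware engine):

```python
import hashlib, os, random

def H(a: bytes, b: bytes) -> bytes:
    # Row "encryption" pad derived from the two input labels (toy KDF).
    return hashlib.sha256(a + b).digest()[:16]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    # One random 128-bit label per wire value; each table row encrypts the
    # output label for one input combination under the matching input labels.
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = [xor(H(labels["a"][i], labels["b"][j]), labels["c"][i & j])
             for i in (0, 1) for j in (0, 1)]
    random.shuffle(table)  # hide row order (toy; real GC uses point-and-permute)
    return labels, table

def evaluate(table, la, lb, out_labels):
    # The evaluator holds exactly one label per input wire; exactly one row
    # decrypts to a valid output label (with overwhelming probability).
    for row in table:
        cand = xor(row, H(la, lb))
        if cand in out_labels:
            return out_labels.index(cand)
    raise ValueError("no row decrypted")

labels, table = garble_and()
for i in (0, 1):
    for j in (0, 1):
        assert evaluate(table, labels["a"][i], labels["b"][j], list(labels["c"])) == (i & j)
```

In a chiplet setting, one chiplet would play the garbler (producing the table) and another the evaluator, so a compromised evaluator sees only uninformative labels.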
Last updated:  2025-03-04
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Rishab Goyal and Saikumar Yadugiri
Decentralization is a great enabler for adoption of modern cryptography in real-world systems. Widespread adoption of blockchains and secure multi-party computation protocols are perfect evidentiary examples for the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties, while shielding from stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are: 1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or -party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities. 2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities. We allow an adversary to corrupt any out of local authorities as long as = poly. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions. 3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is . This proves optimality of our second result for a wide range of parameters. 4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion resistant MA-ABE, and bounded collusion resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE for from standard pairing-based assumptions, and for from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining it with known results, we also get MA-ABFE for from evasive LWE.
Last updated:  2025-03-04
Universally Composable SNARKs with Transparent Setup without Programmable Random Oracle
Christian Badertscher, Matteo Campanelli, Michele Ciampi, Luigi Russo, and Luisa Siniscalchi
Non-interactive zero-knowledge (NIZK) proofs enable a prover to convince a verifier of an NP statement’s validity using a single message, without disclosing any additional information. These proofs are widely studied and deployed, especially in their succinct form, where proof length is sublinear in the size of the NP relation. However, efficient succinct NIZKs typically require an idealized setup, such as a common reference string, which complicates real-world deployment. A key challenge is developing NIZKs with simpler, more transparent setups. A promising approach is the random-oracle (RO) methodology, which idealizes hash functions as public random functions. It is commonly believed that UC NIZKs cannot be realized using a non-programmable global RO—the simplest incarnation of the RO as a form of setup—since existing techniques depend on the ability to program the oracle. We challenge this belief and present a methodology to build UC-secure NIZKs based solely on a global, non-programmable RO. By applying our framework we are able to construct a NIZK that achieves witness-succinct proofs of logarithmic size, breaking both the programmability barrier and polylogarithmic proof size limitations for UC-secure NIZKs with transparent setups. We further observe that among existing global RO formalizations put forth by Camenisch et al. (Eurocrypt 2018), our choice of setup is necessary to achieve this result. From the technical standpoint, our contributions span both modeling and construction. We leverage the shielded (super-poly) oracle model introduced by Broadnax et al. (Eurocrypt 2017) to define a UC NIZK functionality that can serve as a drop-in replacement for its standard variant—it preserves the usual soundness and zero-knowledge properties while ensuring its compositional guarantees remain intact. To instantiate this functionality under a non-programmable RO setup, we follow the framework of Ganesh et al. 
(Eurocrypt 2023) and provide new building blocks for it, around which are some of our core technical contributions: a novel polynomial encoding technique and the leakage analysis of its companion polynomial commitment, based on Bulletproofs-style folding. We also provide a second construction, based on a recent work by Chiesa and Fenzi (TCC 2024), and show that it achieves a slightly weaker version of the NIZK functionality.
Last updated:  2025-03-04
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
Liam Eagen and Ariel Gabizon
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size where constructing an opening proof requires field operations, and scalar multiplications. Moreover, the opening proof consists of a constant number of field elements. This is a significant improvement over previous works which would require either 1. field operations; or 2. size opening proof. The main technical component is a new method of verifiably folding a witness via univariate polynomial division. As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
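The division identity underlying this kind of folding is easy to check in isolation (a generic sketch, not MERCURY's protocol): dividing p(X) by (X - r) leaves remainder p(r), so an evaluation claim reduces to a quotient of lower degree.

```python
P = 2**61 - 1  # toy prime field

def poly_eval(coeffs, x):
    # Horner evaluation of sum(coeffs[i] * X**i) at x, mod P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, r):
    # Synthetic division: p(X) = q(X) * (X - r) + p(r).
    q = [0] * (len(coeffs) - 1)
    acc = 0
    for i in range(len(coeffs) - 1, 0, -1):
        acc = (acc * r + coeffs[i]) % P
        q[i - 1] = acc
    rem = (acc * r + coeffs[0]) % P
    return q, rem

p = [5, 1, 0, 7, 2]  # p(X) = 2X^4 + 7X^3 + X + 5
r = 11
q, rem = divide_by_linear(p, r)
assert rem == poly_eval(p, r)  # the remainder is exactly p(r)
z = 123456
assert poly_eval(p, z) == (poly_eval(q, z) * (z - r) + rem) % P
```

In a commitment scheme this identity is checked homomorphically on committed polynomials rather than in the clear, but the algebra is the same.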
Last updated:  2025-03-04
GPU Implementations of Three Different Key-Switching Methods for Homomorphic Encryption Schemes
Ali Şah Özcan and Erkay Savaş
In this work, we report on the latest GPU implementations of the three well-known methods for the key switching operation, which is critical for Fully Homomorphic Encryption (FHE). Additionally, for the first time in the literature, we provide implementations of all three methods in GPU for leveled CKKS schemes. To ensure a fair comparison, we employ the most recent GPU implementation of the number-theoretic transform (NTT), which is the most time-consuming operation in key switching, and evaluate the performance across two fully homomorphic schemes: BFV and CKKS. Furthermore, we highlight the advantages and shortcomings of the three methods in the context of leveled HE schemes, and discuss other aspects such as memory requirements. Our GPU implementation is integrated with HEonGPU Library and delivers up to a ×380 improvement in execution time compared to the Microsoft SEAL Library. Since key switching is a specialized form of the external product common in many HE schemes, our results are directly relevant to time-intensive homomorphic operations such as relinearization and rotation. As homomorphic rotation is one of the most dominant operations in bootstrapping, our results are also applicable in bootstrapping algorithms of BFV, BGV and CKKS schemes.
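The NTT bottleneck that the comparison hinges on can be illustrated with a toy reference (plain Python, not the paper's GPU kernels): an iterative radix-2 NTT over an NTT-friendly prime, used here for a small polynomial product.

```python
P, G = 998244353, 3  # NTT-friendly prime (119 * 2^23 + 1) with primitive root 3

def ntt(a, invert=False):
    # Iterative radix-2 Cooley-Tukey NTT over Z_P; len(a) must be a power of two.
    a, n = list(a), len(a)
    j = 0
    for i in range(1, n):  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)
        if invert:
            w = pow(w, P - 2, P)
        for start in range(0, n, length):
            wn = 1
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * wn % P
                a[start + k] = (u + v) % P
                a[start + k + length // 2] = (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a = [x * n_inv % P for x in a]
    return a

# Polynomial product via pointwise multiplication in the NTT domain.
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
fa, fb = ntt(a), ntt(b)
c = ntt([x * y % P for x, y in zip(fa, fb)], invert=True)
print(c)  # coefficients of the product polynomial
```

Key switching performs many such transforms per operation over much larger parameters, which is why an optimized GPU NTT dominates the reported speedups.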
Last updated:  2025-03-04
Simple and General Counterexamples for Private-Coin Evasive LWE
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, and Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Last updated:  2025-03-04
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
Nathalie Lang, Stefan Lucks, Bart Mennink, and Suprita Talnikar
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research of Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, Ascon achieves security up to approximately evaluations, where is the capacity, the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode. Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
Last updated:  2025-03-04
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
Théophile Wallez, Jonathan Protzenko, and Karthikeyan Bhargavan
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
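The hash-chain shape of TreeKEM's path secrets can be sketched as follows (a simplified illustration; the labels and KDF are stand-ins, not the standard's HKDF-based derivation):

```python
import hashlib, os

def kdf(secret: bytes, label: bytes) -> bytes:
    # Toy stand-in for the standard's key derivation function.
    return hashlib.sha256(label + secret).digest()

def update_path(depth: int):
    # A leaf update samples a fresh leaf secret; each parent's path secret is
    # derived from its child's, so a single update refreshes the whole root path.
    path = [os.urandom(32)]
    for _ in range(depth):
        path.append(kdf(path[-1], b"path"))
    # Per-node secrets (used to derive node key pairs in the real protocol).
    node_secrets = [kdf(s, b"node") for s in path]
    return path, node_secrets

path, node_secrets = update_path(depth=3)
root_secret = path[-1]  # plays the role of the new group secret
```

In the real protocol, each derived path secret is additionally encrypted to public keys covering the sibling subtree, so the rest of the group can compute the new root secret; the machine-checked proof covers these low-level details rather than this abstraction.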
Last updated:  2025-03-04
Low Communication Threshold FHE from Standard (Module-)LWE
Hiroki Okada and Tsuyoshi Takagi
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from the “yet another” learning with errors (LWE) assumption, in which the norm of the secret key is leaked to the adversary. While “yet another” LWE reduces from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, it was left as an open question to extend their scheme to an MLWE-based construction. In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary T-out-of-N threshold decryption via simple Shamir secret sharing instead of {0, 1}-linear secret sharing. Furthermore, the sizes of keys, ciphertexts and decryption shares in our scheme are constant w.r.t. the number of parties N; we achieve compactness w.r.t. N.
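The T-out-of-N mechanism the construction builds on is plain Shamir sharing, sketched here over a toy field (in the actual scheme, Lagrange coefficients weight partial decryptions rather than reconstructing a secret directly):

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as a toy field modulus

def share(secret: int, t: int, n: int, rng) -> list:
    # Degree-(t-1) random polynomial with constant term = secret; share i is f(i).
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation at x = 0: any t shares determine f(0).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(1)
shares = share(123456789, t=3, n=5, rng=rng)
assert reconstruct(shares[:3]) == 123456789   # any 3-of-5 subset works
assert reconstruct(shares[1:4]) == 123456789
```

Fewer than t shares reveal nothing about the secret, which is what makes the arbitrary-threshold decryption claimed above meaningful.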
Last updated:  2025-03-04
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Abdul Rahman Taleb
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy, which mitigate the effects of user corruption over time. Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found. We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, that implements a quarantine mechanism for inactive users in order to mitigate the risk induced by these users during their inactivity period and before they are removed from the group. That mechanism updates the inactive users’ encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine ends and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of the original TreeKEM, with a very limited – and sometimes negative – communication overhead.
Last updated:  2025-03-04
Radical 2-isogenies and cryptographic hash functions in dimensions 1, 2 and 3
Sabrina Kunzweiler, Luciano Maino, Tomoki Moriya, Christophe Petit, Giacomo Pope, Damien Robert, Miha Stopar, and Yan Bo Ti
We provide explicit descriptions for radical 2-isogenies in dimensions one, two and three using theta coordinates. These formulas allow us to efficiently navigate the corresponding isogeny graphs. As an application, we implement different versions of the CGL hash function. Notably, the three-dimensional version is the fastest, which demonstrates yet another potential use of higher-dimensional isogeny graphs in cryptography.
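The CGL paradigm, walking an expander graph under message bits, can be sketched on an ordinary 3-regular graph (a toy stand-in for the 2-isogeny graphs used in the paper; the Petersen graph here has no cryptographic significance):

```python
# 3-regular Petersen graph; every vertex has exactly two non-backtracking neighbors.
ADJ = {0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [0, 3, 9],
       5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7]}

def cgl_style_hash(bits):
    # Non-backtracking walk: each message bit selects one of the two edges
    # that do not return to the previous vertex; the output is the endpoint.
    prev, cur = 1, 0  # fixed starting edge (1 -> 0), analogous to a fixed base curve
    for b in bits:
        options = [v for v in ADJ[cur] if v != prev]
        prev, cur = cur, options[b]
    return cur

print(cgl_style_hash([0, 1, 1, 0, 1]))
```

In the real CGL construction the graph is a supersingular 2-isogeny graph (here, in dimensions up to three), so finding collisions requires finding unexpected cycles in that graph, and each step costs an isogeny evaluation, which is exactly what the radical formulas accelerate.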
Last updated:  2025-03-04
On the Effects of Neural Network-based Output Prediction Attacks on the Design of Symmetric-key Ciphers
Hayato Watanabe, Ryoma Ito, and Toshihiro Ohigashi
Proving resistance to conventional attacks, e.g., differential, linear, and integral attacks, is essential for designing a secure symmetric-key cipher. Recent advances in automatic search and deep learning-based methods have made this time-consuming task relatively easy, yet concerns persist over expertise requirements and potential oversights. To overcome these concerns, Kimura et al. proposed neural network-based output prediction (NN) attacks, offering simplicity, generality, and reduced coding mistakes. NN attacks could be helpful for designing secure symmetric-key ciphers, especially the S-box-based block ciphers. Inspired by their work, we first apply NN attacks to Simon, one of the AND-Rotation-XOR-based block ciphers, and identify structures susceptible to NN attacks and the vulnerabilities detected thereby. Next, we take a closer look at the vulnerable structures. The most vulnerable structure has the lowest diffusion property compared to others. This fact implies that NN attacks may detect such a property. We then focus on a biased event of the core function in vulnerable Simon-like ciphers and build effective linear approximations caused by such an event. Finally, we use these linear approximations to reveal that the vulnerable structures are more susceptible to a linear key recovery attack than the original one. We conclude that our analysis can be a solid step toward making NN attacks a helpful tool for designing a secure symmetric-key cipher.
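For reference, Simon's round function, whose rotation amounts (1, 8, 2) drive the diffusion properties discussed above, fits in a few lines (standard round function; the random round keys below are a toy stand-in for Simon's actual key schedule):

```python
import random

N = 16                  # word size of Simon32/64
MASK = (1 << N) - 1

def rotl(x, r):
    return ((x << r) | (x >> (N - r))) & MASK

def f(x):
    # Simon's AND-Rotation-XOR core: ((x <<< 1) & (x <<< 8)) ^ (x <<< 2).
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def round_fwd(x, y, k):
    # One Feistel round: (x, y) -> (y ^ f(x) ^ k, x).
    return y ^ f(x) ^ k, x

def round_inv(x, y, k):
    return y, x ^ f(y) ^ k

# Round-trip with toy random round keys (the real key schedule is omitted).
rng = random.Random(2)
keys = [rng.randrange(1 << N) for _ in range(32)]
cx, cy = 0x6565, 0x6877
for k in keys:
    cx, cy = round_fwd(cx, cy, k)
for k in reversed(keys):
    cx, cy = round_inv(cx, cy, k)
assert (cx, cy) == (0x6565, 0x6877)
```

Simon-like variants of the kind studied in the paper change structural choices such as these rotation amounts, which alters the diffusion that the NN attacks appear to detect.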
Last updated:  2025-03-04
Attribute-Based Threshold Issuance Anonymous Counting Tokens and Its Application to Sybil-Resistant Self-Sovereign Identity
Behzad Abdolmaleki, Antonis Michalas, Reyhaneh Rabaninejad, Sebastian Ramacher, and Daniel Slamanig
Self-sovereign identity (SSI) systems empower users to (anonymously) establish and verify their identity when accessing both digital and real-world resources, emerging as a promising privacy-preserving solution for user-centric identity management. Recent work by Maram et al. proposes the privacy-preserving Sybil-resistant decentralized SSI system CanDID (IEEE S&P 2021). While this is an important step, notable shortcomings undermine its efficacy. The two most significant are the following: First, unlinkability breaks in the presence of a single malicious issuer. Second, it introduces interactiveness, as the users are required to communicate each time with issuers to collect credentials intended for use in interactions with applications. This contradicts the goal of SSI, whose aim is to give users full control over their identities. This paper first introduces the concept of publicly verifiable attribute-based threshold anonymous counting tokens (tACT). Unlike recent approaches confined to centralized settings (Benhamouda et al., ASIACRYPT 2023), tACT operates in a distributed-trust environment. Accompanied by a formal security model and a provably secure instantiation, tACT introduces a novel dimension to token issuance, which, we believe, holds independent interest. Next, the paper leverages the proposed tACT scheme to construct an efficient Sybil-resistant SSI system. This system supports various functionalities, including threshold issuance, unlinkable multi-show selective disclosure, and non-interactive, non-transferable credentials of constant size. Finally, our benchmark results show an efficiency improvement in our construction when compared to CanDID, all while accommodating a greater number of issuers and additionally reducing to a one-round protocol that can be run in parallel with all issuers.
Last updated:  2025-03-04
Constructing Quantum Implementations with the Minimal T-depth or Minimal Width and Their Applications
Zhenyu Huang, Fuxin Zhang, and Dongdai Lin
With the rapid development of quantum computers, optimizing the quantum implementations of symmetric-key ciphers, which constitute the primary components of the quantum oracles used in quantum attacks based on Grover and Simon's algorithms, has become an active topic in the cryptography community. In this field, a challenge is to construct quantum circuits that require the least amount of quantum resources. In this work, we aim to address the problem of constructing quantum circuits with the minimal T-depth or width (number of qubits) for nonlinear components, thereby enabling implementations of symmetric-key ciphers with the minimal T-depth or width. Specifically, we propose several general methods for obtaining quantum implementation of generic vectorial Boolean functions and multiplicative inversions in GF(2^n), achieving the minimal T-depth and low costs across other metrics. As an application, we present a highly compact T-depth-3 Clifford+T circuit for the AES S-box. Compared to the T-depth-3 circuits presented in previous works (ASIACRYPT 2022, IEEE TC 2024), our circuit has significant reductions in T-count, full depth and Clifford gate count. Compared to the state-of-the-art T-depth-4 circuits, our circuit not only achieves the minimal T-depth but also exhibits reduced full depth and closely comparable width. This leads to lower costs for the DW-cost and T-DW-cost. Additionally, we propose two methods for constructing minimal-width implementations of vectorial Boolean functions. As applications, for the first time, we present a 9-qubit Clifford+T circuit for the AES S-box, a 16-qubit Clifford+T circuit for a pair of AES S-boxes, and a 5-qubit Clifford+T circuit for the chi function of SHA3. These circuits can be used to derive quantum circuits that implement AES or SHA3 without ancilla qubits.
Last updated:  2025-03-04
zkMarket: Privacy-preserving Digital Data Trade System via Blockchain
Seongho Park, Seungwoo Kim, Semin Han, Kyeongtae Lee, Jihye Kim, and Hyunok Oh
Ensuring fairness in blockchain-based data trading presents significant challenges, as the transparency of blockchain can expose sensitive details and compromise fairness. Fairness ensures that the seller receives payment only if they provide the correct data, and the buyer gains access to the data only after making the payment. Existing approaches face limitations in efficiency, particularly when applied to large-scale data. Moreover, preserving privacy has also been a significant challenge in blockchain. In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain. We make the data registration process more concise and improve the seller's proving time by leveraging our novel pseudorandom generator, named matrix-formed PRG (MatPRG), and an existing commit-and-prove SNARK (CP-SNARK). To ensure transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), both the seller and the buyer are enabled to trade fairly. Our evaluation demonstrates that zkMarket significantly reduces the computational overhead associated with traditional blockchain solutions while maintaining robust security and privacy: the seller can register 1MB of data in 2.8 seconds, the buyer can generate the trade transaction in 0.2 seconds, and the seller can finalize the trade within 0.4 seconds.
Last updated:  2025-03-04
One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan and Antigoni Polychroniadou
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting, where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation, where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once. We construct our protocol based on LWR, LWE, class groups, and DCR, and demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols, whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, our protocol eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, our protocol is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on the MNIST, CIFAR-10, and CIFAR-100 datasets. We build two flavors of the protocol: (1) from a (threshold) key-homomorphic PRF and (2) from a seed-homomorphic PRG and secret sharing. The threshold key-homomorphic PRF addresses shortcomings observed in the previous DDH- and LWR-based constructions of Boneh et al. (CRYPTO, 2013), marking it as an independent contribution of our work. Moreover, we also present new threshold key-homomorphic PRFs based on class groups, DCR, or the LWR assumption.
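The key-homomorphic mechanism behind one-shot aggregation can be sketched with a toy linear mask (illustrative only: a linear map is not a secure PRF, and the zero-sum key setup below stands in for the scheme's actual key distribution):

```python
import hashlib, random

P = 2**61 - 1  # toy prime modulus

def mask(key: int, tag: bytes) -> int:
    # Toy key-homomorphic mask: F(k, tag) = a_tag * k mod P, where a_tag is
    # derived from the round tag. Linearity gives F(k1 + k2) = F(k1) + F(k2).
    a = int.from_bytes(hashlib.sha256(tag).digest(), "big") % P
    return a * key % P

rng = random.Random(7)
# Setup (assumed done once by some key-distribution step): client keys summing to 0.
keys = [rng.randrange(P) for _ in range(4)]
keys.append((-sum(keys)) % P)

inputs = [3, 1, 4, 1, 5]
tag = b"round-42"
# Each client speaks once: a single masked value, no further interaction.
messages = [(x + mask(k, tag)) % P for x, k in zip(inputs, keys)]

# Since the keys sum to zero and F is key-homomorphic, the masks cancel in the sum.
aggregate = sum(messages) % P
print(aggregate)  # 14 = 3 + 1 + 4 + 1 + 5
```

Real instantiations replace the linear map with a key-homomorphic PRF from LWR, class groups, or DCR, and replace the zero-sum setup with threshold machinery that tolerates dropouts.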
Last updated:  2025-03-04
Lattice-based Σ-Protocols for Polynomial Relations with Standard Soundness
Lizhen Zhang, Shang Gao, and Bin Xiao
We propose new techniques for enhancing the efficiency of Σ-protocols in lattice settings. One major challenge in lattice-based Σ-protocols is restricting the norm of the extracted witness in soundness proofs. Most existing solutions either repeat the protocol several times or opt for a relaxed version of the original relation. Recently, Boneh and Chen proposed an innovative solution called LatticeFold, which utilizes a sum-check protocol to enforce the norm bound on the witness. In this paper, we elevate this idea to efficiently proving multiple polynomial relations without relaxation. Simply incorporating the techniques from LatticeFold into Σ-protocols leads to inefficient results; therefore, we introduce several new techniques to ensure efficiency. First, to enable the amortization in [AC20] for multiple polynomial relations, we propose a general linearization technique to reduce polynomial relations to homomorphic ones. Furthermore, we generalize the folding protocol in LatticeFold, enabling us to efficiently perform folding and other complex operations multiple times without the need to repeatedly execute sum-checks. Moreover, we achieve zero-knowledge by designing hiding claims and elevating the zero-knowledge sum-check protocol [XZZ+19] to rings. Our protocol achieves standard soundness, thereby enabling the efficient integration of the compressed Σ-protocol theory [AC20, ACF21] in lattice settings.
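The sum-check protocol that LatticeFold-style norm enforcement builds on can be sketched for a multilinear polynomial given by its evaluation table (a generic honest-prover sketch, unrelated to the lattice-specific machinery):

```python
import random

P = 2**61 - 1  # toy prime field

def fix_first(table, r):
    # Restrict the first variable of the multilinear extension to r:
    # new[i] = (1 - r) * table[i] + r * table[i + half]  (mod P).
    half = len(table) // 2
    return [((1 - r) * table[i] + r * table[i + half]) % P for i in range(half)]

def sumcheck(table, rng):
    # Honest-prover sum-check for sum over {0,1}^n of a multilinear f, with f
    # given by its 2^n evaluations (first variable = high bit of the index).
    claim = sum(table) % P
    transcript = []
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # g(0), g(1)
        assert (g0 + g1) % P == claim        # verifier's round check
        r = rng.randrange(P)                 # verifier's random challenge
        claim = ((1 - r) * g0 + r * g1) % P  # g has degree <= 1, so claim = g(r)
        table = fix_first(table, r)
        transcript.append((g0, g1, r))
    assert table[0] == claim  # final oracle check: f(r_1, ..., r_n) = claim
    return transcript

rng = random.Random(3)
evals = [rng.randrange(P) for _ in range(8)]
transcript = sumcheck(evals, rng)
print(len(transcript))  # one prover message per variable
```

The lattice-specific difficulty addressed above is making such claims hiding and sound over rings without relaxation; this sketch only shows the interaction pattern being reused.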
Last updated:  2025-03-04
Hybrid Obfuscated Key Exchange and KEMs
Felix Günther, Michael Rosenberg, Douglas Stebila, and Shannon Veitch
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings. For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation. In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
Last updated:  2025-03-03
Delegatable ABE with Full Security from Witness Encryption
Rishab Goyal and Saikumar Yadugiri
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. In this work, we design a fully-secure DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to polynomial hardness of underlying assumptions. To the best of our knowledge, this is the first DABE construction (beyond hierarchical identity-based encryption) that achieves full security without relying on complexity leveraging. Our DABE supports an unbounded number of key delegations, and the secret key size grows just linearly with each key delegation operation.
Last updated:  2025-03-03
A Systematic Study of Sparse LWE
Aayush Jain, Huijia Lin, and Sagnik Saha
In this work, we introduce the sparse LWE assumption, which draws inspiration from both Learning with Errors (Regev, JACM 2010) and Sparse Learning Parity with Noise (Alekhnovich, FOCS 2002). Exactly like LWE, this assumption posits the indistinguishability of (A, As + e mod q) from (A, b) for a uniformly random b, where the secret s and the error vector e are generated exactly as in LWE. However, as in sparse LPN, the coefficient matrix A is chosen randomly so that each column has Hamming weight exactly k for some small k. We study the problem in the regime where k is constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency: compared to LWE, sparse LWE samples can be computed and stored considerably more efficiently. Our results can be summarized as follows. Foundations: We show several properties of sparse LWE samples, including: 1) the hardness of LWE/LPN with dimension k implies the hardness of sparse LWE/LPN with sparsity k and arbitrary dimension n; 2) for a bounded number of samples, the shortest vector of the lattice spanned by the rows of a random sparse matrix is nearly as long as that of a random dense matrix of the same dimensions (up to a small constant factor); 3) trapdoors with small polynomial norm exist for random sparse matrices; 4) efficient algorithms for sampling such matrices together with trapdoors exist in suitable parameter regimes. Cryptanalysis: We examine the suite of algorithms that have been used to attack LWE and sparse LPN. While many of the attacks that apply to LWE do not naively exploit sparsity, we consider natural extensions that do, and propose a model capturing all of these attacks. Using this model, we suggest heuristics for identifying concrete parameters. Our initial cryptanalysis suggests that sparse LWE with modest sparsity and a slightly larger dimension than LWE satisfies a similar level of security as LWE with comparable parameters.
Applications: We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low-degree computations. We obtain the first secret-key linearly homomorphic encryption (LHE) schemes with slightly super-constant, or even constant, overhead, with further applications to private information retrieval, private set intersection, and more. We also obtain secret-key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead. We stress that these results are preliminary; nevertheless, they make a strong case for further investigation of sparse LWE.
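The sparse sample distribution described above can be sketched as follows; parameter names and values are illustrative, not from the paper:

```python
import random

def sparse_lwe_sample(n, m, q, k, s, err_vals=(-1, 0, 1)):
    """Generate (A, b = A s + e mod q) where each column of the m x n
    matrix A has Hamming weight exactly k (the sparse LWE distribution)."""
    # Build A column by column: k random nonzero entries per column.
    A = [[0] * n for _ in range(m)]
    for col in range(n):
        for row in random.sample(range(m), k):
            A[row][col] = random.randrange(1, q)
    e = [random.choice(err_vals) for _ in range(m)]   # small error
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b

n, m, q, k = 8, 16, 97, 3
s = [random.randrange(q) for _ in range(n)]
A, b = sparse_lwe_sample(n, m, q, k, s)
# Each column of A indeed has Hamming weight exactly k.
print(all(sum(1 for row in A if row[col]) == k for col in range(n)))  # True
```

Since each column holds only k nonzero entries, A can be stored as k (index, value) pairs per column rather than m entries, which is the source of the efficiency gain mentioned above.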
Last updated:  2025-03-03
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
An important part of proving machine computation is proving the correctness of the read and write operations performed on memory, which we term memory-proving. Previous methodologies required proving Merkle tree openings or multi-set hashes, resulting in relatively large proof circuits. We construct an efficient memory-proving Incrementally Verifiable Computation (IVC) scheme from accumulation, which is particularly useful for machine computations with large memories and deterministic steps. In our scheme, the IVC prover P_IVC has cost entirely independent of the memory size T and only needs to commit to approximately 15 field elements per read/write operation, a more than 100x improvement over prior work. We further reduce this cost by employing a modified, accumulation-friendly version of the GKR protocol. In the optimized version, P_IVC only needs to commit to 6 small memory-table elements per read/write. If the table stores 32-bit values, this is equivalent to committing to less than a single field element per read and write. Our modified GKR protocol is also valuable for proving other deterministic computations within the context of IVC. Our memory-proving protocol can be extended to support key-value stores.
Last updated:  2025-03-03
AsyRand: fast asynchronous distributed randomness beacon with reconfiguration
Liang Zhang, Tao Liu, Zhanrong Ou, Haibin Kan, and Jiheng Zhang
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, such as HydRand (S&P '20) and SPURT (S&P '22). However, two key challenges for practical deployment remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the AsyRand beacon protocol to address these challenges. In brief, AsyRand leverages Bracha Reliable Broadcast (BRB) or BRB-like protocols for message dissemination and incorporates a producer-consumer model to decouple the production and consumption of PVSS commitments. In the producer-consumer model, PVSS commitments are produced and consumed using a queue data structure. Specifically, the producer process is responsible for generating new PVSS commitments and reaching consensus on them within the queue, while the consumer process continuously consumes the commitments to recover PVSS secrets and generate new beacon values. This separation allows the producer and consumer processes to operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue length. If necessary, parties in AsyRand can initiate a removal process for faulty parties. BRB is also employed to facilitate the addition of new parties without requiring a system restart. In summary, AsyRand supports reconfiguration, enhancing both the protocol's usability and reliability. Additionally, we propose a novel PVSS scheme, which is of independent interest. Regarding complexity, AsyRand achieves state-of-the-art communication, computation, and verification complexity.
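The producer-consumer decoupling described above can be sketched with a plain queue; the cryptography (PVSS, BRB, consensus) is stubbed out and all names are illustrative:

```python
from collections import deque

class Party:
    """Toy model of the producer-consumer pattern: commitments are
    produced into a per-party queue and consumed (oldest first) to
    derive beacon values. No real crypto is performed."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()          # consensus-agreed PVSS commitments

    def produce(self, commitment):
        # Producer process: append a newly agreed commitment.
        self.queue.append(commitment)

    def consume(self):
        # Consumer process: recover the secret, emit a beacon value.
        commitment = self.queue.popleft()
        return hash((self.name, commitment)) % 2**32  # stand-in beacon

def looks_faulty(party, threshold=5):
    # A queue that keeps growing signals a stalled consumer, i.e. a
    # party that produces commitments but fails to help recover secrets.
    return len(party.queue) > threshold

p = Party("alice")
for epoch in range(3):
    p.produce(f"commit-{epoch}")
beacons = [p.consume() for _ in range(3)]
print(len(p.queue), looks_faulty(p))
```

Because production and consumption touch the queue independently, neither side needs to wait for the other, which is the asynchrony property the protocol exploits.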
Last updated:  2025-03-03
Withdrawable signatures in Fiat-Shamir with aborts constructions
Ramses Fernandez
This article presents an extension of the work performed by Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction, and provide security proofs for this proposal. As an instantiation, we provide a concrete construction for a withdrawable signature scheme based on Dilithium.
Last updated:  2025-03-03
Constant time lattice reduction in dimension 4 with application to SQIsign
Otto Hanyecz, Alexander Karenin, Elena Kirshanova, Péter Kutas, and Sina Schaeffler
In this paper we propose a constant-time lattice reduction algorithm for integral dimension-4 lattices. Motivated by its application in the SQIsign post-quantum signature scheme, we provide for the first time a constant-time LLL-like algorithm with guarantees on the length of the shortest output vector. We implemented our algorithm and ensured through various tools that it indeed operates in constant time. Our experiments suggest that in practice our implementation outputs a Minkowski-reduced basis and can thus replace a non-constant-time lattice reduction subroutine in SQIsign.
Last updated:  2025-03-03
SNARKs for Stateful Computations on Authenticated Data
Johannes Reinhart, Erik-Oliver Blass, and Bjoern Annighoefer
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity: a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our SNARKs are specifically suited to applications in cyber-physical control systems, where computations are carried out periodically and must be checked immediately. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. Additionally, we present new optimizations to reduce proof size, prover time, and verification time in our setting. With our construction, prover runtime improves over the baseline by a factor of 89. Verification time is 70% less for computations on authenticated data and 33% less for stateful computations. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with a (simple) quadcopter flight control system.
Last updated:  2025-03-03
Really Complex Codes with Application to STARKs
Yuval Domb
Reed-Solomon (RS) codes [RS60], representing evaluations of univariate polynomials over distinct domains, are foundational in error correction and cryptographic protocols. Traditional RS codes leverage the Fourier domain for efficient encoding and decoding via Fast Fourier Transforms (FFT). However, in fields such as the Reals and some finite prime fields, limited root-of-unity orders restrict these methods. Recent research, particularly in the context of modern STARKs [BSBHR18b], has explored the Complex Fourier domain for constructing Real-valued RS codes, introducing novel transforms such as the Discrete Circle-Curve Transform (DCCT) for Real-to-Real transformations [HLP24]. Despite their efficiency, these transforms face limitations with high-radix techniques and other legacy optimizations. In this paper, we propose a construction of Real-valued RS codes utilizing the Discrete Fourier Transform (DFT) over the Complex domain. This approach leverages well-established algebraic and Fourier properties to achieve efficiency comparable to DCCT, while retaining compatibility with legacy techniques and optimizations.
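As a minimal illustration of the underlying idea (not the paper's construction): a real-coefficient polynomial evaluated at the complex roots of unity, i.e., a plain DFT, yields a conjugate-symmetric codeword, so only about half of the complex evaluations carry independent information:

```python
import cmath

def rs_encode_real(coeffs, n):
    """Evaluate a real-coefficient polynomial at the n-th complex roots
    of unity (a plain DFT over the Complex domain)."""
    return [sum(c * cmath.exp(2j * cmath.pi * k * j / n)
                for j, c in enumerate(coeffs))
            for k in range(n)]

# A rate-1/2 toy code: a degree-<4 real polynomial evaluated at 8 points.
word = rs_encode_real([3.0, 1.0, 4.0, 1.0], 8)

# Conjugate symmetry: the evaluation at w^k is the complex conjugate of
# the evaluation at w^{n-k}, because the coefficients are real.
sym = all(abs(word[k] - word[(8 - k) % 8].conjugate()) < 1e-9
          for k in range(8))
print(sym)  # True
```

This symmetry is what lets Real-valued codewords ride on the Complex DFT, and it is the property the high-radix FFT machinery mentioned above can exploit.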
Last updated:  2025-03-03
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
Kai Hu, Chi Zhang, Chengcheng Chang, Jiashu Zhang, Meiqin Wang, and Thomas Peyrin
In the past three decades, we have witnessed the creation of various cryptanalytic attacks, yet relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when its input and output are treated as points in an induced free vector space. By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have all been reinterpreted by Beyne's theory in a unified way. Thus far, the geometric approach has always used the same basis for the input and output spaces. We observe here that this restriction is unnecessary: allowing different bases makes the geometric approach more flexible and able to interpret and predict more attack types. Given a set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method. We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules for generating new bases. With the final seven bases, we can obtain different basis-based attacks in higher-order spaces, where the order is defined as the number of messages used in one sample during the attack. We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks.
Second, we revisit the multiple-of-n property with our refined geometric approach and exhibit new multiple-of-n distinguishers that reach more rounds of the SKINNY-64 cipher than the state-of-the-art. Third, we study the multiple-of-n property in the first-order case, which is similar to the subspace trail except that it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64. Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic independently of concrete key values. We emphasize that none of these applications was possible before.
Last updated:  2025-03-03
Related-Key Differential and Boomerang Cryptanalysis in the Fixed-Key Model
Chengcheng Chang, Kai Hu, Muzhou Li, and Meiqin Wang
Differential cryptanalysis, along with variants such as boomerang attacks, is widely used to evaluate the security of block ciphers. These cryptanalytic techniques often rely on assumptions like the hypothesis of stochastic equivalence and the Markov cipher assumption. Recently, more attention has been paid to verifying whether differential characteristics (DCs) meet these assumptions, with both positive and negative results. These efforts include automatic search methods for both value and difference propagation (e.g., Liu et al., CRYPTO 2020; Nageler et al., ToSC 2025/1), structural constraint analysis (e.g., Tan and Peyrin, ToSC 2022/4), and the quasidifferential framework (Beyne and Rijmen, CRYPTO 2022). Nevertheless, less attention has been paid to related-key DCs and boomerang distinguishers, where the same assumptions are used. To the best of our knowledge, only some related-tweakey DCs of SKINNY have been checked, thanks to its linear word-based key schedule, and no similar work exists for boomerang distinguishers. The verification of related-key DCs and boomerang distinguishers is as important as that of DCs, as they often hold the longest attack records for block ciphers. This paper investigates the validity of DCs in the related-key setting and of boomerang distinguishers in both single- and related-key scenarios. For this purpose, we generalize Beyne and Rijmen's quasidifferential techniques to related-key DCs and boomerang attacks. First, to verify related-key DCs, we propose the related-key quasi-DC. As with quasi-DCs and DCs, the exact probability of a related-key DC equals the sum of the correlations of all corresponding related-key quasi-DCs. Since related-key quasi-DCs involve key information, we can determine the probability of the target related-key DC in different key subspaces. We find both positive and negative results.
For example, we verify the 18-round related-key DC used in the best attack on GIFT-64, finding that this related-key DC has a higher probability for part of the key space but is impossible for the remaining keys. Second, we identify proper bases to describe boomerang distinguishers with the geometric approach. A quasi-BCT is constructed to account for the influence of values in the boomerang connectivity table (BCT). For the DC parts, the quasi-biDDT is used. Connecting the quasi-BCT and quasi-biDDT, we can verify the probability of a boomerang distinguisher with quasi-boomerang characteristics. This also allows us to analyze the probability of the boomerang in different key spaces. For a 17-round boomerang distinguisher of SKINNY-64-128, we find that the probability is higher than expected for half of the keys, while the distinguisher is impossible for the other half.
Last updated:  2025-03-03
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
Pierrick Dartois, Jonathan Komada Eriksen, Tako Boris Fouotsa, Arthur Herlédan Le Merdy, Riccardo Invernizzi, Damien Robert, Ryan Rueger, Frederik Vercauteren, and Benjamin Wesolowski
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order on a set of supersingular elliptic curves primitively oriented by that order. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g. in signature or MPC protocols. Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses 4-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group or expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5 s to compute a group action at the CSIDH-512 security level, 21 s at the CSIDH-2048 level, and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200 s at the CSIDH-512 level in SageMath and 2.5 s in Rust.
Last updated:  2025-03-03
SPHINCS-α: A Compact Stateless Hash-Based Signature Scheme
Kaiyi Zhang, Hongrui Cui, and Yu Yu
Hash-based signatures offer a conservative alternative to post-quantum signatures, with arguably better-understood security than other post-quantum candidates. As a core building block of hash-based signatures, the efficiency of the one-time signature (OTS) largely dominates that of the overall scheme. The WOTS+ signature scheme (Africacrypt 2013) is the current state-of-the-art OTS adopted by the signature schemes standardized by NIST: XMSS, LMS, and SPHINCS+. A natural question is whether there is (and how much) room left for improving one-time signatures, and thus standard hash-based signatures. In this paper, we show that the WOTS one-time signature, when adopting the constant-sum encoding scheme (Bos and Chaum, Crypto 1992), is size-optimal not only within Winternitz's OTS framework, but among all tree-based OTS designs. Moreover, we point out a flaw in the DAG-based OTS design previously shown to be size-optimal at Asiacrypt 1996, which makes the constant-sum WOTS the most size-efficient OTS to our knowledge. Finally, we evaluate the performance of constant-sum WOTS integrated into the SPHINCS+ (CCS 2019) and XMSS (PQC 2011) signature schemes, observing improvements in both signing time and signature size.
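The constant-sum encoding of Bos and Chaum can be sketched by unranking a message into a tuple of chain positions with a fixed sum; because every encoding shares the same sum, no separate checksum chains are needed (unlike plain Winternitz), which is where the size savings come from. The parameters below are toy values, not the paper's:

```python
from functools import lru_cache

W, T, S = 4, 4, 8   # max chain position, number of chains, constant sum

@lru_cache(maxsize=None)
def count(t, s):
    """Number of t-tuples with entries in [0, W] summing to exactly s."""
    if t == 0:
        return 1 if s == 0 else 0
    return sum(count(t - 1, s - v) for v in range(min(W, s) + 1))

def encode(m):
    """Unrank message m into a constant-sum tuple (the OTS 'digits')."""
    out, s = [], S
    for t in range(T, 0, -1):
        for v in range(min(W, s) + 1):
            c = count(t - 1, s - v)
            if m < c:      # m falls in the block of tuples starting with v
                out.append(v)
                s -= v
                break
            m -= c
    return out

msgs = count(T, S)               # size of the message space
codes = [tuple(encode(m)) for m in range(msgs)]
print(all(sum(c) == S for c in codes), len(set(codes)) == msgs)  # True True
```

Verification of such a signature only needs to complete the hash chains and check that the positions sum to S; any forgery attempt that lowers one position must raise another, which would require inverting a chain.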
Last updated:  2025-03-03
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Kaijie Jiang, Anyu Wang, Hengyi Luo, Guoxiao Liu, Tang Gang, Yanbin Pan, and Xiaoyun Wang
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes. Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more. In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions. Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction. Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space. We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved. Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments. These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties. Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions. To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP. Additionally, we use LIP to provide a repair and improvement to the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack from Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Last updated:  2025-03-03
Identity-based Matchmaking Encryption with Stronger Security and Instantiation on Lattices
Yuejun Wang, Baocang Wang, Qiqi Lai, and Yu Zhan
An identity-based matchmaking encryption (IB-ME) scheme, proposed at JOC 2021, supports anonymous but authenticated communication in which both parties can specify the intended senders or receivers on the fly. IB-ME lends itself to privacy-preserving network applications thanks to its efficient implementation and special syntax. In the literature, IB-ME schemes are built from variants of the Diffie-Hellman assumption and all fail to retain security against quantum attackers. Moreover, despite rigorous security proofs in previous security models, the existing schemes may still be vulnerable to attacks neglected by those models. To address these problems, we provide a stronger security definition of authenticity that captures new attacks arising in real-world scenarios, and then propose a generic construction of IB-ME satisfying the new model. Inspired by the prior IB-ME construction of Chen et al., the proposed scheme combines a 2-level anonymous hierarchical IBE (HIBE) scheme with an identity-based signature (IBS) scheme. To obtain lattice-based IB-ME with better efficiency, we additionally improve a lattice-based IBS, as an independent technical contribution, shortening its signatures and thus reducing the final IB-ME ciphertext size. By combining the improved IBS with any 2-level adaptively secure anonymous lattice-based HIBE, we finally obtain the first lattice-based construction that realizes IB-ME directly.
Last updated:  2025-03-03
Exploring side-channels in Intel Trust Domain Extensions
Upasana Mandal, Shubhi Shukla, Nimish Mishra, Sarani Bhattacharya, Paritosh Saxena, and Debdeep Mukhopadhyay
Intel Trust Domain Extensions (TDX) has emerged as a crucial technology for strengthening the isolation and security guarantees of virtual machines as the demand for secure computation grows. Despite the protections offered by TDX, in this work we dig deep into its security claims and uncover a subtle leakage that undermines TDX's core guarantees by breaching the isolation between the Virtual Machine Manager (VMM) and Trust Domains (TDs). For the first time, we show through a series of experiments that performance counters collected by the VMM can be exploited to differentiate between the activities of an idle and an active TD. The root cause of this leakage is core contention, which occurs when the VMM itself, or a process executed by the VMM, runs on the same core as the TD. Due to resource contention on the core, the effects of the TD's computations become observable in the performance monitors collected by the VMM. This finding underscores the critical need for enhanced protections to bridge these gaps in advanced virtualized environments.
Last updated:  2025-03-03
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
Sayantan Ganguly and Shion Samadder Chaudhury
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper ``Anamorphic Encryption: Private Communication Against a Dictator,'' enables embedding covert messages within ciphertexts. A key requirement is indistinguishability: an anamorphic ciphertext carrying a covert message must be indistinguishable from an original ciphertext. This encryption procedure was originally defined for public-key cryptosystems. We first present a quantum analogue of the classical public-key-based definition of anamorphic encryption. Additionally, we introduce a definition of quantum anamorphic encryption based on symmetric-key encryption. Furthermore, we provide a detailed generalized construction of quantum anamorphic symmetric-key encryption in a general framework: given any two quantum density matrices of possibly different dimensions, we construct a single quantum density matrix, the quantum anamorphic ciphertext, containing ciphertexts of both. Subsequently, we introduce a definition of computational anamorphic secret sharing and extend the work of Çakan et al. on computational quantum secret sharing to computational quantum anamorphic secret sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
Last updated:  2025-03-03
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Tenma Edamura and Atsushi Takayasu
Abdalla et al. (ASIACRYPT 2020) introduced identity-based inner-product functional encryption (IBIPFE), which combines identity-based encryption and inner-product functional encryption (IPFE). Several pairing-based and lattice-based IBIPFE schemes have since been proposed, but two problems remain open. First, no known IBIPFE scheme satisfies adaptive simulation-based security. Second, known IBIPFE schemes that satisfy adaptive indistinguishability-based security or selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure (IB)IPFE scheme to a simulation-based secure (IB)IPFE scheme; the proposed transformation improves Agrawal et al.'s transformation for plain IPFE (PKC 2020), which requires an indistinguishability-based secure scheme of larger dimension. Then, we construct a lattice-based IBIPFE scheme that satisfies tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. Applying the proposed transformation, we obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies tight adaptive simulation-based security under the DBDH assumption in the random oracle model. The pairing-based scheme does not use the proposed transformation, for the best efficiency.
Last updated:  2025-03-03
BBB Secure Arbitrary Length Tweak TBC from n-bit Block Ciphers
Arghya Bhattacharjee, Ritam Bhaumik, Nilanjan Datta, Avijit Dutta, Shibam Ghosh, and Sougata Mandal
At FSE'15, Mennink introduced the concept of designing beyond-birthday-bound secure tweakable block ciphers from an ideal block cipher, proposing two constructions that accept an n-bit tweak and use a block cipher with an n-bit key and n-bit blocks, both proven secure beyond the birthday bound assuming the underlying block cipher is ideal. Later, at ASIACRYPT'16, Wang et al. proposed a class of new tweakable block ciphers derived from n-bit ideal block ciphers that achieve optimal security. The designs of both Mennink and Wang et al. admit only n-bit tweaks. At FSE'23, Shen and Standaert proposed a beyond-birthday-bound secure tweakable block cipher that accepts 2n-bit tweaks using three block cipher calls, which was shown to be optimal for beyond-birthday-bound secure tweakable block ciphers accepting 2n-bit tweaks. In this paper, we extend this line of research and consider designing tweakable block ciphers supporting longer tweaks from an ideal block cipher. First, we show a generic birthday-bound distinguishing attack on any such design with three block cipher calls if any of the block cipher keys is tweak-independent. We then propose a tweakable block cipher that leverages three block cipher calls, with each key depending on the tweak, and demonstrate that it achieves beyond-birthday-bound security. Furthermore, we extend this result and propose an optimally secure construction that uses four ideal block cipher calls with only one tweak-dependent key. Finally, we generalize this and propose an optimally secure tweakable block cipher that processes arbitrary-length tweaks using block cipher invocations with only one tweak-dependent block cipher key. Our experimental evaluation shows that ZMAC instantiated with our constructions performs better than all existing ideal-cipher-based TBC candidates.
Last updated:  2025-03-03
Blind Signatures from Cryptographic Group Actions
Dung Hoang Duong, Xuan Thanh Khuc, Youming Qiao, Willy Susilo, and Chuanqi Zhang
We provide a generic construction of blind signatures from cryptographic group actions following the framework of the blind signature CSIOtter introduced by Katsumata et al. (CRYPTO'23) in the context of isogeny (commutative group action). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions which are proven to be secure in the random oracle model. We also propose an instantiation based on a variant of linear code equivalence, interpreted as a symmetric group action.
Last updated:  2025-03-03
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Thomas Peyrin, Quan Quan Tan, Hongyi Zhang, and Chunning Zhou
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
Last updated:  2025-03-03
Provably Secure Approximate Computation Protocols from CKKS
Intak Hwang, Yisol Hwang, Miran Kim, Dongwon Lee, and Yongsoo Song
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols. This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other party acts as an evaluator, a simple protocol with the noise smudging technique on the encryptor's side achieves security in the standard manner. When both parties have private inputs, we demonstrate that the protocol incorporating independent errors from each party achieves a relaxed standard security notion, referred to as liberal security. Nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise in a secure manner, which satisfies the standard security definition. Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol satisfies only liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error. For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
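The noise-smudging technique mentioned above can be illustrated numerically: adding a uniform "smudging" noise much larger than the secret-dependent error makes the resulting distributions statistically close, so the error leaks almost nothing. A minimal sketch with toy parameters and exact rational arithmetic (illustration only, not the paper's protocol):

```python
from fractions import Fraction

def smudged_dist(e: int, bound: int):
    # Distribution of e + U, where U is uniform on [-bound, bound]
    return {e + u: Fraction(1, 2 * bound + 1) for u in range(-bound, bound + 1)}

def stat_distance(p, q):
    # Total variation distance between two finite distributions
    support = set(p) | set(q)
    return sum(abs(p.get(x, Fraction(0)) - q.get(x, Fraction(0))) for x in support) / 2

# Two fixed errors differing by delta are hidden up to SD = delta / (2B + 1):
# a large smudging bound B makes them nearly indistinguishable.
d_small = stat_distance(smudged_dist(3, 1000), smudged_dist(-2, 1000))  # B = 1000
d_large = stat_distance(smudged_dist(3, 10), smudged_dist(-2, 10))      # B = 10
```

In actual CKKS-based protocols the smudging noise is super-polynomially larger than the evaluation error, driving this distance to negligible.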
Last updated:  2025-03-03
Under What Conditions Is Encrypted Key Exchange Actually Secure?
Jake Januzelli, Lawrence Roy, and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE. In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks: 1. We show that among the five existing results on the UC-security of (O)EKE using a general KA protocol, all are incorrect; 2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy several additional security properties: though some of these are closely related to existing security properties, some are new, and all are missing from existing works on (O)EKE; 3. We give UC-security proofs for EKE and OEKE using Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC). Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also present a security analysis of POPF using a new, weakened notion of almost UC realizing a functionality, that is still sufficient for proving composed protocols to be fully UC-secure.
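The EKE compilation pattern discussed above can be sketched concretely: run an unauthenticated Diffie-Hellman key agreement, but encrypt each flow under the shared password. The toy below uses a hash-derived additive mask over a small Mersenne prime as the "password encryption"; a real EKE needs an ideal cipher (or a POPF, as in this work) over the message space, which a simple mask does not provide, so this is purely an illustration of the message flow:

```python
import hashlib
import secrets

P = 2**127 - 1   # toy Mersenne prime modulus; illustration only, not a secure group choice
G = 3

def pw_mask(password: bytes, label: bytes) -> int:
    # Password-derived one-time mask: a stand-in for the ideal cipher EKE requires.
    return int.from_bytes(hashlib.sha256(password + label).digest(), "big") % P

def eke_flow(password: bytes):
    a = secrets.randbelow(P - 2) + 1   # client DH secret
    b = secrets.randbelow(P - 2) + 1   # server DH secret
    # Each DH flow is "encrypted" by adding the password mask mod P
    msg_a = (pow(G, a, P) + pw_mask(password, b"client")) % P
    msg_b = (pow(G, b, P) + pw_mask(password, b"server")) % P
    # Each side strips the mask, then completes DH as usual
    gb = (msg_b - pw_mask(password, b"server")) % P
    ga = (msg_a - pw_mask(password, b"client")) % P
    key_client = hashlib.sha256(pow(gb, a, P).to_bytes(16, "big")).hexdigest()
    key_server = hashlib.sha256(pow(ga, b, P).to_bytes(16, "big")).hexdigest()
    return key_client, key_server
```

A wrong password guess yields a uniformly-looking group element after unmasking, which is the intuition behind PAKE security; making that intuition rigorous is precisely what the properties studied in this paper pin down.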
Last updated:  2025-03-02
Reducing the Number of Qubits in Solving LWE
Barbara Jiabao Benedikt
At Crypto 2021, May presented an algorithm solving the ternary Learning-With-Error problem, where the solution is a ternary vector with a known number of and entries. This attack significantly improved the time complexity of from previously known algorithms to , where is the size of the key space. To achieve this, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional and entries, reduces the overall time complexity. Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack that performs in time . However, this quantum attack requires an exponential amount of qubits. This work investigates whether the ternary LWE problem can also be solved using only qubits. To that end, we look closely at Dicke states, which are equal superpositions over all binary vectors with a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem. Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit to prepare binary Dicke states deterministically in linear time . Their procedure benefits from the inductive structure of Dicke states, i.e., that a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than . Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit. Finally, we apply our generalized Dicke states to the ternary LWE problem, and construct an algorithm that requires qubits and classical memory space up to . We achieve as the best obtainable time complexity.
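The inductive structure referred to above is the standard identity |D^n_k> = sqrt((n-k)/n) |D^{n-1}_k>|0> + sqrt(k/n) |D^{n-1}_{k-1}>|1>. The classical sketch below builds the amplitude vector through this recursion and confirms it is an equal superposition over all weight-k strings; it simulates the state, it is not the quantum circuit:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def dicke(n: int, k: int):
    # Amplitudes of the Dicke state |D^n_k>, built via the inductive identity
    # |D^n_k> = sqrt((n-k)/n)|D^{n-1}_k>|0> + sqrt(k/n)|D^{n-1}_{k-1}>|1>.
    if n == 0:
        return {(): 1.0} if k == 0 else {}
    state = {}
    for bits, amp in dicke(n - 1, k).items():       # append a 0
        state[bits + (0,)] = amp * math.sqrt((n - k) / n)
    for bits, amp in dicke(n - 1, k - 1).items():   # append a 1
        state[bits + (1,)] = amp * math.sqrt(k / n)
    return state
```

The recursion only ever combines states of one dimension lower, which is the property that lets Bärtschi–Eidenbenz-style circuits prepare the state deterministically in linear depth, and the property this paper extends to larger underlying alphabets.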
Last updated:  2025-03-02
The Blockwise Rank Syndrome Learning problem and its applications to cryptography
Nicolas Aragon, Pierre Briaud, Victor Dyseryn, Philippe Gaborit, and Adrien Vinçotte
This paper is an extended version of [8], published at PQCrypto 2024, in which we combine two approaches, blockwise errors and multi-syndromes, into a unified approach that leads to very efficient generalized RQC and LRPC schemes. The notion of blockwise error in the context of rank-based cryptography was recently introduced in [31]. This notion of error, very close to the notion of the sum-rank metric [27], makes it possible, by decreasing the weight of the decoded error, to greatly improve parameters for the LRPC and RQC cryptographic schemes. Slightly earlier, the multi-syndrome approach introduced for LRPC and RQC schemes in [3,18] also allowed a considerable decrease in parameter sizes for these schemes, in particular through the introduction of Augmented Gabidulin codes. In order to combine these approaches, we introduced in [8] the Blockwise Rank Support Learning problem: guessing the support of the errors when several syndromes are given as input, with blockwise structured errors. The new schemes we introduced have very interesting features, since for 128-bit security the sum of public key and ciphertext is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme. This extended version adds the following new features. First, we propose a new optimization of the main protocol, which consists of including 1 in the support of the error, allowing us to deduce a subspace of the error to decode and to improve the decoding capacity of our LRPC code, while maintaining the same level of security. The approach of the original paper achieves a gain in parameter size compared to previous results [18,31], and this optimization reduces the parameters by another for higher security levels. We obtain better results in terms of size than the KYBER scheme, whose total sum is 1.5 kB.
Second, we give a more detailed analysis of the algebraic attacks on the -RD problem proposed in [8], which allowed us to cryptanalyze all blockwise LRPC parameters proposed in [31] (with an improvement of more than 40 bits in the case of structural attacks). Third, we give a more detailed introduction to the historical background of the rank metric, especially the RQC and LRPC cryptosystems and their recent improvements, and we add parameters for the case of classical RQC (the case of a single given syndrome, a special case of our scheme, for which we achieve 1.5 kB for the sum of the public key and the ciphertext), which compares very well to the previous version of classical RQC.
Last updated:  2025-03-02
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions. We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
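For readers unfamiliar with SSE, the basic single-keyword mechanism can be sketched in a few lines: the client replaces each keyword by a PRF token, and the server stores and searches an index keyed by tokens, never seeing the keywords themselves. This is a deliberately simplified toy (no conjunctive queries, no leakage suppression, and nothing OQXT-specific); all names are illustrative:

```python
import hashlib
import hmac

def token(key: bytes, keyword: str) -> bytes:
    # Deterministic search token: PRF(K, w), so the server never sees w itself
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict):
    # Client-side index build: map each keyword token to its document ids
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(key, w), []).append(doc_id)
    return index  # outsourced to the server

def search(index, key: bytes, keyword: str):
    # Client sends token(key, keyword); server looks it up without learning keyword
    return sorted(index.get(token(key, keyword), []))

K = b"client secret key"
idx = build_index(K, {1: ["lattice", "sse"], 2: ["sse", "oram"], 3: ["oram"]})
```

Conjunctive schemes like OXT and OQXT avoid the naive approach of intersecting full result lists per keyword, which would leak far more than necessary; the cross-tag machinery exists precisely to bound that leakage.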
Last updated:  2025-03-02
An Efficient Quantum Oblivious Transfer Protocol
Sushmita Sarkar, Vikas Srivastava, Tapaswini Mohanty, Sumit Kumar Debnath, and Sihem Mesnager
Oblivious Transfer (OT) is a significant two-party privacy-preserving cryptographic primitive. OT involves a sender holding several pieces of information and a receiver holding a choice bit. The choice bit represents the piece of information that the receiver wants to obtain as the output of the OT. At the end of the protocol, the sender remains oblivious to the choice bit, and the receiver remains oblivious to the contents of the information that was not chosen. OT has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions that are not quantum secure, while existing quantum OT protocols are neither efficient nor practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT is designed using the asymmetric key distribution proposed by Gao et al. [18] as a building block. The designed qOT requires only single photons as a source of quantum states, and the measurements of the states are computed using single-particle projective measurements. These features make qOT efficient and practical. Our proposed design is secure against quantum attacks. Moreover, qOT also provides long-term security.
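The OT functionality itself is easy to state concretely. The sketch below follows the well-known Diffie-Hellman-based "simplest OT" pattern of Chou and Orlandi over a toy prime group; it is unrelated to the quantum protocol qOT and is shown only to make the sender/receiver obliviousness requirements tangible (and it is exactly the kind of number-theoretic construction that is not quantum secure):

```python
import hashlib
import secrets

P = 2**127 - 1   # toy prime; a real instantiation uses a proper elliptic-curve group
G = 5

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

def oblivious_transfer(m0: bytes, m1: bytes, choice: int) -> bytes:
    # Sender: A = g^a
    a = secrets.randbelow(P - 2) + 1
    A = pow(G, a, P)
    # Receiver: B = g^b if choice = 0, else A * g^b (hides the choice bit)
    b = secrets.randbelow(P - 2) + 1
    B = pow(G, b, P) if choice == 0 else (A * pow(G, b, P)) % P
    # Sender derives both pad keys; the receiver can compute only one of them
    inv_A = pow(A, P - 2, P)
    k0 = H(pow(B, a, P))
    k1 = H(pow((B * inv_A) % P, a, P))
    c0 = bytes(x ^ y for x, y in zip(m0, k0))
    c1 = bytes(x ^ y for x, y in zip(m1, k1))
    # Receiver unpads only the chosen ciphertext with H(A^b)
    kr = H(pow(A, b, P))
    chosen = c0 if choice == 0 else c1
    return bytes(x ^ y for x, y in zip(chosen, kr))
```

The sender never learns which of B's two forms was sent, and the receiver's single key H(A^b) matches exactly one of k0, k1, which captures both obliviousness requirements at once.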
Last updated:  2025-03-02
Key Guidance Invocation: A White-box Mode Enables Strong Space Hardness under Adaptively Chosen-Space Attacks
Yipeng Shi, Xiaolin Zhang, Boshi Yuan, Chenghao Chen, Jintong Yu, Yuxuan Wang, Chi Zhang, and Dawu Gu
The notion of space hardness serves as a quantitative measure to characterize the resilience of dedicated white-box schemes against code-lifting attacks, making it a widely utilized metric in the field. However, achieving strong space hardness (SSH) under the adaptively chosen-space attack model (ACSAM) remains an unresolved challenge, as no existing white-box scheme has given SSH guarantees under ACSAM. To address this problem, we introduce a novel mode of operation tailored for white-box cryptography, termed the Key Guidance Invocation (KGI) mode. Our security analysis reveals that the KGI mode not only significantly strengthens resistance to adaptively chosen-space attacks, but also ensures SSH under ACSAM. Moreover, we propose a dedicated white-box construction, RubikStone-(,,,), which directly leverages the concept of the lookup table pool. RubikStone offers enhanced flexibility in lookup table utilization compared to existing white-box constructions and is particularly well-suited to the KGI mode. Additionally, we instantiate RubikStone-(256,8,12,) with the KGI mode, resulting in -256, which delivers -SSH security guarantees under ACSAM. Remarkably, -256 also shows superior performance, surpassing the efficiency of white-box AES based on the CEJO framework by in real-world settings. Besides, we conduct a comprehensive statistical analysis of the operations in all existing white-box ciphers. Our findings indicate that -256 remains highly competitive in computational efficiency despite offering unprecedented security.
Last updated:  2025-03-02
Blockchain-based Secure D2D localisation with adaptive precision
Gewu Bu, Bilel Zaghdoudi, Maria Potop-Butucaru, and Serge Fdida
In this paper we propose a secure best-effort methodology for providing localisation information to devices in a heterogeneous network where devices have access neither to GPS-like technology nor to heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighboring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as a DLT case study.
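Computing a position from neighboring anchors typically reduces to trilateration: subtracting one range equation from the others turns the quadratic system into a linear one. A minimal 2D sketch with three anchors (this is the generic geometric step; the blockchain registration layer described above is orthogonal to it):

```python
def trilaterate(anchors, dists):
    # Circles: (x - xi)^2 + (y - yi)^2 = ri^2.  Subtracting the first
    # equation from the other two cancels x^2 + y^2 and leaves a 2x2
    # linear system in (x, y), solved here by Cramer's rule.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero iff the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With more than three anchors the same linearization yields an overdetermined system solved by least squares, which is what allows precision to adapt to however many anchors a device can hear.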
Last updated:  2025-03-01
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
Shafik Nassar, Brent Waters, and David J. Wu
A tuple of NP statements satisfies a monotone policy if , where if and only if is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that satisfy a monotone policy with a proof of size , where is the size of the Boolean circuit computing the NP relation . Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently-large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffices (e.g., over ). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
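The Goldwasser-Micali scheme mentioned above is simple enough to sketch end to end: a bit b is encrypted as y^b * r^2 mod N, where y is a quadratic non-residue, and multiplying ciphertexts XORs the plaintexts, which is exactly the additive homomorphism over the two-element group being exploited. Toy parameters only; real GM needs primes of cryptographic size:

```python
import math
import secrets

P, Q = 499, 547   # toy primes for illustration only
N = P * Q

def is_qr(a: int, p: int) -> bool:
    # Euler's criterion: a is a square mod odd prime p iff a^((p-1)/2) = 1
    return pow(a % p, (p - 1) // 2, p) == 1

# y must be a non-residue mod both primes (so it has Jacobi symbol +1 mod N)
Y = next(y for y in range(2, N) if not is_qr(y, P) and not is_qr(y, Q))

def encrypt(bit: int) -> int:
    r = secrets.randbelow(N - 2) + 2
    while math.gcd(r, N) != 1:
        r = secrets.randbelow(N - 2) + 2
    return (pow(Y, bit, N) * r * r) % N   # a quadratic residue iff bit == 0

def decrypt(c: int) -> int:
    # Knowing the factor P lets us test quadratic residuosity
    return 0 if is_qr(c, P) else 1

def xor_hom(c1: int, c2: int) -> int:
    # c1 * c2 = y^(b1+b2) * (r1*r2)^2, and y^2 is a square, so the
    # product decrypts to b1 XOR b2
    return (c1 * c2) % N
```

Under the QR assumption, residues and non-residues with Jacobi symbol +1 are indistinguishable without the factorization, which is the hardness this paper's aggregate signature instantiation rests on.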
Last updated:  2025-03-01
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Yao-Ching Hsieh, Aayush Jain, and Huijia Lin
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new "LWE-with-hints" assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions. Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption, a new LWE-with-hint assumption that addresses structural weaknesses of prior assumptions and, based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random "openings" of zero-encryptions under the Gentry-Sahai-Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises. These two features, marginally random hints and the absence of (natural) noise leakage, rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions. To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components. In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins-Jain-Lin, CRYPTO'21) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes. Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to "evasive" or pseudorandom functionalities.
Last updated:  2025-03-01
An ETSI GS QKD compliant TLS implementation
Thomas Prévost, Bruno Martin, and Olivier Alibart
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on both the client and server side. We thus wish to participate in the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video conference call encrypted by QKD.
Last updated:  2025-03-01
PQNTRU: Acceleration of NTRU-based Schemes via Customized Post-Quantum Processor
Zewen Ye, Junhao Huang, Tianshun Huang, Yudan Bai, Jinze Li, Hao Zhang, Guangyan Li, Donglong Chen, Ray C.C. Cheung, and Kejie Huang
Post-quantum cryptography (PQC) has rapidly evolved in response to the emergence of quantum computers, with the US National Institute of Standards and Technology (NIST) selecting four finalist algorithms for PQC standardization in 2022, including the Falcon digital signature scheme. The latest round of additional digital signature schemes introduced Hawk; both Falcon and Hawk are based on the NTRU lattice, offering compact signatures with fast generation and verification suitable for deployment on resource-constrained Internet-of-Things (IoT) devices. Despite the popularity of CRYSTALS-Dilithium and CRYSTALS-Kyber, research on NTRU-based schemes has been limited due to their complex algorithms and operations. Falcon and Hawk's performance remains constrained by the lack of parallel execution in crucial operations like the Number Theoretic Transform (NTT) and Fast Fourier Transform (FFT), with data dependency being a significant bottleneck. This paper enhances the NTRU-based schemes Falcon and Hawk through hardware/software co-design on a customized Single-Instruction-Multiple-Data (SIMD) processor, proposing new SIMD hardware units and instructions to expedite these schemes, along with software optimizations to boost performance. Our NTT optimization includes a novel layer-merging technique for SIMD architectures to reduce memory accesses, and the use of modular reduction algorithms (Signed Montgomery and Improved Plantard) targeting various modulus data widths to enhance performance. We explore applying layer merging to accelerate fixed-point FFT at the SIMD instruction level and devise a dual-issue parser to streamline assembly code organization and maximize dual-issue utilization. A System-on-Chip (SoC) architecture is devised to improve the practical applicability of the processor in real-world scenarios. Evaluation on 28 nm technology and an FPGA platform shows that our design and optimizations increase the performance of Hawk signature generation and verification by over 7 times.
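The NTT at the heart of these optimizations is an exact, modular analogue of the FFT: pointwise multiplication in the transform domain computes the cyclic convolution of polynomial coefficients. A naive O(N^2) reference with toy parameters (Q = 17, N = 8; real schemes use much larger, scheme-specific parameters and O(N log N) butterfly networks, which is where layer merging applies):

```python
Q = 17     # toy modulus with Q ≡ 1 (mod N), so an N-th root of unity exists mod Q
N = 8
ROOT = 9   # 9 has multiplicative order exactly 8 modulo 17

def ntt(a):
    # Naive O(N^2) forward transform: A[k] = sum_j a[j] * ROOT^(j*k) (mod Q)
    return [sum(a[j] * pow(ROOT, j * k, Q) for j in range(N)) % Q for k in range(N)]

def intt(A):
    # Inverse transform: use ROOT^-1 and scale by N^-1 (mod Q)
    inv_root = pow(ROOT, Q - 2, Q)
    inv_n = pow(N, Q - 2, Q)
    return [inv_n * sum(A[k] * pow(inv_root, j * k, Q) for k in range(N)) % Q
            for j in range(N)]

def cyclic_mul(a, b):
    # Pointwise products in the NTT domain give the cyclic convolution of
    # a and b, i.e. polynomial multiplication modulo x^N - 1
    return intt([x * y % Q for x, y in zip(ntt(a), ntt(b))])
```

Fast implementations decompose this transform into log N layers of 2-point butterflies; merging adjacent layers so that coefficients stay in SIMD registers across them is precisely the memory-access optimization the paper describes.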
Last updated:  2025-03-01
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, and Dawu Gu
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked. In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of and compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of interest.
Last updated:  2025-03-01
Delegated Multi-party Private Set Intersection from Secret Sharing
Jingwei Hu, Zhiqi Liu, and Cong Zuo
In this work, we address the problem of Delegated PSI (D-PSI), where a cloud server is introduced to handle most computational and communication tasks. D-PSI enables users to securely delegate their private sets to the cloud, ensuring the privacy of their data while allowing efficient computation of the intersection. The cloud operates under strict security requirements, learning nothing about the individual sets or the intersection result. Moreover, D-PSI minimizes user-to-user communication and supports "silent" processing, where the cloud can perform computations independently without user interaction, apart from set delegation and result retrieval. We formally define the D-PSI problem and propose a novel construction that extends beyond two-party scenarios to support multi-party settings. Our construction adheres to the D-PSI requirements, including security against semi-honest adversaries, and achieves computational and communication complexities close to the ideal "perfect" D-PSI protocol. Additionally, we demonstrate the practicality of our approach through a baseline implementation and an optimized version that further reduces computational overhead. Our results establish a strong foundation for secure and efficient PSI in real-world cloud computing scenarios.
Last updated:  2025-03-01
Solving Multivariate Coppersmith Problems with Known Moduli
Keegan Ryan
We examine the problem of finding small solutions to systems of modular multivariate polynomials. While the case of univariate polynomials has been well understood since Coppersmith's original 1996 work, multivariate systems typically rely on carefully crafted shift polynomials and significant manual analysis of the resulting Coppersmith lattice. In this work, we develop several algorithms that make such hand-crafted strategies obsolete. We first use the theory of Gröbner bases to develop an algorithm that provably computes an optimal set of shift polynomials, and we use lattice theory to construct a lattice which provably contains all desired short vectors. While this strategy is usable in practice, the resulting lattice often has large rank. Next, we propose a heuristic strategy based on graph optimization algorithms that quickly identifies low-rank alternatives. Third, we develop a strategy which symbolically precomputes shift polynomials, and we use the theory of polytopes to polynomially bound the running time. Like Meers and Nowakowski's automated method, our precomputation strategy enables heuristically and automatically determining asymptotic bounds. We evaluate our new strategies on over a dozen previously studied Coppersmith problems. In all cases, our unified approach achieves the same recovery bounds in practice as prior work, even improving the practical bounds for three of the problems. In five problems, we find smaller and more efficient lattice constructions, and in three problems, we improve the existing asymptotic bounds. While our strategies are still heuristic, they are simple to describe, implement, and execute, and we hope that they drastically simplify the application of Coppersmith's method to systems of multivariate polynomials.
Last updated:  2025-03-01
Rational Secret Sharing with Competition
Tiantian Gong and Zeyu Liu
The rational secret sharing problem (RSS) considers incentivizing rational parties to share their received information so as to reconstruct a correctly shared secret. Halpern and Teague (STOC'04) demonstrate that solving the RSS problem deterministically with explicitly bounded runtime is impossible if parties prefer learning the secret to not learning it, and prefer that fewer other parties learn. To overcome this impossibility result, we propose RSS with competition. We consider a slightly different yet sensible preference profile: each party prefers to learn the secret early and prefers that fewer parties learn before them. This preference profile changes the information-hiding dynamics among parties in prior works: first, those who have learned the secret are indifferent to, or even prefer, informing others later; second, the competition to learn the secret earlier among different access groups in the access structure facilitates information sharing inside an access group. As a result, we are able to construct the first deterministic RSS algorithm that terminates in at most two rounds. Additionally, our construction does not employ any cryptographic machinery (being fully game-theoretic and using the underlying secret-sharing scheme as a black box), nor does it require knowledge of the parties' exact utility functions. Furthermore, we consider general access structures.
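The underlying secret-sharing scheme is used as a black box; a t-out-of-n Shamir instantiation is the canonical choice and can be sketched as follows (toy field, illustration only):

```python
import secrets

P = 2**61 - 1   # Mersenne prime defining the share field

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with constant term = secret;
    # any t of the n evaluations reconstruct it, fewer reveal nothing.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

The game-theoretic layer above decides when rational parties actually broadcast their shares; the algebra of sharing and reconstruction is unchanged.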
Last updated:  2025-02-28
Flexway O-Sort: Enclave-Friendly and Optimal Oblivious Sorting
Tianyao Gu, Yilei Wang, Afonso Tinoco, Bingnan Chen, Ke Yi, and Elaine Shi
Oblivious algorithms are being deployed at large scale in real world to enable privacy-preserving applications such as Signal's private contact discovery. Oblivious sorting is a fundamental building block in the design of oblivious algorithms for numerous computation tasks. Unfortunately, there is still a theory-practice gap for oblivious sort. The commonly implemented bitonic sorting algorithm is not asymptotically optimal, whereas known asymptotically optimal algorithms suffer from large constants. In this paper, we construct a new oblivious sorting algorithm called flexway o-sort, which is asymptotically optimal, concretely efficient, and suitable for implementation in hardware enclaves such as Intel SGX. For moderately large inputs of GB, our flexway o-sort algorithm outperforms known oblivious sorting algorithms by to when the data fits within the hardware enclave, and by to when the data does not fit within the hardware enclave. We also implemented various applications of oblivious sorting, including histogram, database join, and initialization of an ORAM data structure. For these applications and data sets from 8GB to 32GB, we achieve speedup over bitonic sort when the data fits within the enclave, and speedup when the data does not fit within the enclave.
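The bitonic sorting network mentioned above is oblivious because its compare-exchange schedule depends only on the input length, never on the data values, which is also why it costs O(n log^2 n) comparisons rather than the optimal O(n log n). A compact iterative version for reference (power-of-two input length assumed; this is the baseline being improved upon, not flexway o-sort):

```python
def bitonic_sort(a):
    # Data-oblivious sort: the sequence of compare-exchange index pairs is a
    # fixed function of len(a), so memory access patterns leak nothing about
    # the data.  len(a) must be a power of two.
    n = len(a)
    a = list(a)
    k = 2
    while k <= n:            # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:        # compare-exchange distance within each merge stage
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a
```

In enclave deployments each compare-exchange is additionally implemented with branchless conditional swaps, so even the instruction trace is independent of the data.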
Last updated:  2025-02-28
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
Ruben Baecker, Paul Gerhart, Jonathan Katz, and Dominique Schröder
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts. Specifically, we consider a setting where a DAO consisting of sellers holding shares of a witness interacts with a DAO comprising buyers holding shares of a signing key ; the goal is for the sellers to exchange for a signature under transferring a predetermined amount of funds. Fairness is required to hold both between DAOs (i.e., ensuring that each DAO receives its asset if and only if the other does) and within each DAO (i.e., ensuring that all members of a DAO receive their asset if and only if every other member does). We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
Last updated:  2025-02-28
Registered ABE and Adaptively-Secure Broadcast Encryption from Succinct LWE
Jeffrey Champion, Yao-Ching Hsieh, and David J. Wu
Registered attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data (like standard ABE), but without needing a central trusted authority. In a key-policy registered ABE scheme, users choose their own public and private keys and then register their public keys together with a decryption policy with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key which serves as the public key for an ABE scheme. Currently, we can build registered ABE for restricted policies (e.g., Boolean formulas) from pairing-based assumptions and for general policies using witness encryption or indistinguishability obfuscation. In this work, we construct a key-policy registered ABE for general policies (specifically, bounded-depth Boolean circuits) from the -succinct learning with errors (LWE) assumption in the random oracle model. The ciphertext size in our registered ABE scheme is , where is a security parameter and is the depth of the circuit that computes the policy circuit . Notably, this is independent of the length of the attribute and is optimal up to the factor. Previously, the only lattice-based instantiation of registered ABE uses witness encryption, which relies on private-coin evasive LWE, a stronger assumption than -succinct LWE. Moreover, the ciphertext size in previous registered ABE schemes that support general policies (i.e., from obfuscation or witness encryption) scales with . The ciphertext size in our scheme depends only on the depth of the circuit (and not the length of the attribute or the size of the policy). This enables new applications to identity-based distributed broadcast encryption. Our techniques are also useful for constructing adaptively-secure (distributed) broadcast encryption, and we give the first scheme from the -succinct LWE assumption in the random oracle model. 
Previously, the only lattice-based broadcast encryption scheme with adaptive security relied on witness encryption in the random oracle model. All other lattice-based broadcast encryption schemes only achieved selective security.
Last updated:  2025-02-28
Generic Composition: From Classical to Quantum Security
Nathalie Lang, Jannis Leuther, and Stefan Lucks
Authenticated encryption (AE) provides both authenticity and privacy. Starting with Bellare and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach. This work investigates the security of the Encrypt-then-MAC composition in a quantum setting, meaning that adversarial queries as well as the responses to those queries may be in superposition. We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security. On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC. Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure. The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
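A classical (non-quantum) Encrypt-then-MAC composition can be sketched as follows; the stream cipher is a toy SHA-256 counter-mode stand-in for SE, and all names are illustrative:

```python
import hashlib
import hmac

def keystream(key, nonce, length):
    # Toy SHA-256-in-counter-mode stream cipher: a stand-in for SE used only
    # to illustrate the composition, not a vetted cipher.
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, nonce, msg):
    # Encrypt first, then MAC the ciphertext (and nonce).
    ct = bytes(m ^ k for m, k in zip(msg, keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt(enc_key, mac_key, nonce, ct, tag):
    # Verify the tag before decrypting; reject forgeries outright.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

The quantum subtlety studied in the paper is precisely that this classical argument (verify-then-decrypt) does not automatically survive superposition queries.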
Last updated:  2025-02-28
How Small Can S-boxes Be
Chenhao Jia, Tingting Cui, Qing Ling, Yan He, Kai Hu, Yu Sun, and Meiqin Wang
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives. Both the cryptographic properties and the implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones. This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (whose differential uniformity and linearity are both 4) under a given standard cell library. Firstly, we evaluate upper and lower bounds on the minimum area of S-boxes by proposing a Prim-like greedy algorithm and utilizing properties of balanced Boolean functions to construct bijective S-boxes. Secondly, a SAT-aided automatic search tool is proposed that can simultaneously consider multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, and implementation costs such as area and gate depth complexity. Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes. The measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library, and all 2/3/4-input logic gates are taken into consideration. Our results show that the minimum area of an optimal 4-bit S-box is 11 GE and the depth is 3. If we do not use 4-input gates, this minimum area increases to 12 GE and the depth in this case is 4, which is the same if we only use 2-input gates. If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while keeping the same depth. Interestingly, the same results are also obtained for non-optimal 4-bit bijective S-boxes as long as their differential uniformity and linearity (i.e., there are no non-trivial linear structures) if only 2-input and 3-input gates are used. However, the minimum area reduces to 9 GE if 4-input gates are involved. 
More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with a fixed point and 14.33 GE without a fixed point, and the depth increases sharply to 8. Besides determining the exact minimum area, our tool is also useful for searching for better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box with 17 GE. By contrast, the designers' original circuit has an area of 23.33 GE, while the optimized result by Lu et al. achieves an area of 17.66 GE. Also, we find the first optimized implementation of SKINNY's 8-bit S-box, with 26.67 GE.
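The two properties defining an optimal 4-bit S-box can be checked by brute force. Below is an illustrative Python check on the PRESENT S-box (a well-known optimal S-box, not one of the paper's circuits); note that under the Walsh-transform convention the maximum absolute coefficient is 8, which corresponds to the value 4 in the bias-counting convention:

```python
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def differential_uniformity(s):
    # Max count of x with S(x) ^ S(x ^ dx) == dy over all nonzero dx.
    n = len(s)
    best = 0
    for dx in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[s[x] ^ s[x ^ dx]] += 1
        best = max(best, max(counts))
    return best

def linearity(s):
    # Max absolute Walsh coefficient over all input masks a and nonzero
    # output masks b.
    n = len(s)
    best = 0
    for a in range(n):
        for b in range(1, n):
            w = sum(1 if (bin(a & x).count("1")
                          + bin(b & s[x]).count("1")) % 2 == 0 else -1
                    for x in range(n))
            best = max(best, abs(w))
    return best
```

For PRESENT this yields differential uniformity 4 and Walsh linearity 8, i.e. an optimal 4-bit S-box in the sense above.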
Last updated:  2025-02-28
Toward Optimal-Complexity Hash-Based Asynchronous MVBA with Optimal Resilience
Jovan Komatovic, Joachim Neu, and Tim Roughgarden
Multi-valued validated Byzantine agreement (MVBA), a fundamental primitive of distributed computing, allows processes to agree on a valid -bit value, despite faulty processes behaving maliciously. Among hash-based solutions for the asynchronous setting with adaptive faults, the state-of-the-art HMVBA protocol achieves optimal message complexity, (near-)optimal bit complexity, and optimal time complexity. However, it only tolerates up to adaptive failures. In contrast, the best known optimally resilient protocol, FIN-MVBA, exchanges messages and bits. This highlights a fundamental question: can a hash-based protocol be designed for the asynchronous setting with adaptive faults that simultaneously achieves both optimal complexity and optimal resilience? In this paper, we take a significant step toward answering the question. Namely, we introduce Reducer, an MVBA protocol that retains HMVBA's complexity while improving its resilience to . Like HMVBA and FIN-MVBA, Reducer relies exclusively on collision-resistant hash functions. A key innovation in Reducer's design is its internal use of strong multi-valued Byzantine agreement (SMBA), a variant of strong consensus we introduce and construct, which ensures agreement on a correct process's proposal. To further advance resilience toward the optimal one-third bound, we then propose Reducer++, an MVBA protocol that tolerates up to adaptive failures, for any fixed constant . Unlike Reducer, Reducer++ does not rely on SMBA. Instead, it employs a novel approach involving hash functions modeled as random oracles to ensure termination. Reducer++ maintains constant time complexity, quadratic message complexity, and quasi-quadratic bit complexity, with constants dependent on .
Last updated:  2025-02-28
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
In pairing-based cryptography, final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in Tate and optimal Ate pairings. While optimizations for elliptic curves with even embedding degrees have been well-explored, progress for curves with odd embedding degrees, particularly those divisible by , has been more limited. This paper presents new optimization techniques for computing the final exponentiation of the optimal Ate pairing on these curves. The first exploits the fact that some existing seeds have a form enabling cyclotomic cubing and extends this to generate new seeds with the same form. The second is to generate new seeds with sparse ternary representations, replacing squaring with cyclotomic cubing. The first technique improves efficiency by and compared to the square and multiply (\textbf{SM}) method for existing seeds at -bit and -bit security levels, respectively. For newly generated seeds, it achieves efficiency gains of at -bit, at -bit, and at -bit security levels. The second technique improves efficiency by at -bit, at -bit, and at -bit security levels.
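The trade of squarings for cyclotomic cubings in the second technique can be illustrated by counting operations for an exponent written in signed (balanced) ternary. This sketch assumes cubing is cheap and inversion in the cyclotomic subgroup is essentially free; it is an operation-counting illustration, not the paper's algorithm:

```python
def balanced_ternary(e):
    # Signed ternary digits of e in {-1, 0, 1}, least significant first.
    digits = []
    while e:
        r = e % 3
        if r == 2:
            r = -1
            e += 1
        digits.append(r)
        e //= 3
    return digits

def cube_and_multiply_cost(e):
    # (#cubings, #multiplications) for exponentiation via signed ternary;
    # inversion is treated as free (conjugation in the cyclotomic subgroup).
    d = balanced_ternary(e)
    cubings = len(d) - 1
    mults = sum(1 for t in d if t != 0) - 1
    return cubings, mults

def square_and_multiply_cost(e):
    # (#squarings, #multiplications) for plain binary square-and-multiply.
    b = bin(e)[2:]
    return len(b) - 1, b.count("1") - 1
```

A sparse ternary representation keeps the multiplication count low while each step is a cheap cyclotomic cubing, which is the effect the seed-generation techniques above exploit.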
Last updated:  2025-02-28
BEAT-MEV: Epochless Approach to Batched Threshold Encryption for MEV Prevention
Jan Bormet, Sebastian Faust, Hussien Othman, and Ziyan Qu
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder, omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption. In this work, we introduce a novel BTE scheme addressing these limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore a previously ignored question of how users can coordinate their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allow us to achieve practical performance on standard hardware ( ms for encryption and ms for decrypting transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
Last updated:  2025-02-28
Finding and Protecting the Weakest Link: On Side-Channel Attacks on Masked ML-DSA
Julius Hermelink, Kai-Chun Ning, and Richard Petri
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in-place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear. In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis. Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on the physical devices validate our simulations.
Last updated:  2025-02-28
Refined Strategy for Solving LWE in Two-step Mode
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, and Baocang Wang
Learning with Errors (LWE) and its variants are widely used in constructing lattice-based cryptographic schemes, including NIST standards Kyber and Dilithium, and a refined estimation of LWE’s hardness is crucial for their security. Currently, the primal attack is considered the fastest algorithm for solving the LWE problem in practice. It reduces LWE to a unique Shortest Vector Problem (uSVP) and combines lattice reduction algorithms with SVP calls such as enumeration or sieving. However, finding the most time-efficient combination strategy for these algorithms remains a challenge. The designers of Kyber highlighted this issue as open problem Q7: “A refined (progressive) lattice reduction strategy and a precise analysis of the gains using reduction preprocessing plus a single SVP call in large dimensions are still missing.” In this paper, we address this problem by presenting a Strategy Search algorithm named PSSearch for solving uSVP and LWE, using progressive BKZ as the lattice reduction and sieving as the SVP call. Compared to the heuristic strategy used in G6K (Albrecht et al., Eurocrypt 2019), the strategy generated by our algorithm has the following advantages: (1) We design a tree search algorithm with pruning named PSSearch to find the minimal time-cost strategy in two-step mode and prove its correctness, showing that the fastest approach in two-step mode for solving uSVP and LWE can be achieved in a reasonable timeframe; (2) We propose the first tight simulation for BKZ that can jump by J > 1 blocks, which allows us to choose more flexible jump values to improve reduction efficiency. (3) We propose a refined dimension estimation method for the SVP call. We tested the accuracy of our new simulation algorithm and the efficiency of our new strategy through experiments. 
Furthermore, we apply the strategies generated by PSSearch to solve the TU Darmstadt LWE Challenges with (n, α) ∈ {(80, 0.005), (40, 0.035), (90, 0.005), (50, 0.025), (55, 0.020), (40, 0.040)} using the G6K framework, achieving improvements of 7.2 to 23.4 times over the heuristic strategy employed in G6K. By combining the minimal time-cost strategy selection with the refined two-step estimator for LWE (Xia et al., PKC 2024), we re-estimate the hardness of NIST standards Kyber and Dilithium, and determine the influence of the strategy. Specifically, the security levels of NIST standards decrease by 3.4 to 4.6 bits, rather than the 2 to 8 bits indicated in the Kyber documentation. This is a decrease of 1.1 to 1.3 bits compared to the refined two-step estimation using the trivial strategy.
Last updated:  2025-02-28
: Succinct Arguments with Small Arithmetization Overheads from IOPs of Proximity to the Integers
Albert Garreta, Hendrik Waldner, Katerina Hristova, and Luca Dall'Ava
We introduce , a hash-based succinct argument for integer arithmetic. 's goal is to provide a practically efficient scheme that bypasses the arithmetization overheads that many succinct arguments present. These overheads can be of orders of magnitude in many applications. By enabling proving statements over the integers, we are able to arithmetize many operations of interest with almost no overhead. This includes modular operations involving any moduli, not necessarily prime, and possibly involving multiple moduli in the same statement. In particular, allows one to prove statements for the ring for arbitrary . Importantly, and departing from prior work, our schemes are purely code- and hash-based, and do not require hidden-order groups. In its final form, operates similarly to other hash-based schemes using Brakedown as their PCS, and at the same time it benefits from the arithmetization perks brought by working over (and ) natively. At its core, is a succinct argument for proving relations over the rational numbers , even though when applied to integer statements, an honest prover and verifier will only operate with small integers. consists of two main components: 1) -, a framework for proving algebraic statements over the rationals by reducing modulo a randomly chosen prime , followed by running a suitable PIOP over (this is similar to the approach taken in prior works, with the difference that we use localizations of to enable prime modular projection); and 2) , a Brakedown-type polynomial commitment scheme built from an IOP of proximity to the integers, a novel primitive that we introduce. The latter primitive guarantees that a prover is using a polynomial with coefficients close to being integral. With these two primitives in place, one can use a lookup argument over the rationals to ensure that the witness contains only integer elements.
Last updated:  2025-02-28
Pencil: A Domain-Extended PRF with Full -bit Security for Strengthening GCM and More
Ritam Bhaumik and Jean Paul Degabriele
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRFs) whose domain is twice that of the underlying blockcipher. We present two such constructions, and , which provide weak PRF and full PRF security, respectively, where both achieve full -bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state, which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires random nonces of size between and bits and corresponding BBB (weak) PRF constructions that admit matching input sizes. NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron, to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES. Our work makes relevant contributions to this domain in two important ways. 
Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from , without resorting to a 256-bit blockcipher. Our second contribution is that can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, being a full PRF, as opposed to DNDK, which is only a weak PRF, allows one to prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting and DNDK as weak PRFs with comparable parameters, our construction requires only half the blockcipher calls.
Last updated:  2025-02-28
MPC for Access Structures over Rings and Fields
Robin Jadoul, Nigel P. Smart, and Barry Van Leeuwen
We examine Multi-Party Computation protocols in the active-security-with-abort setting for access structures over small and large finite fields and over rings . We give general protocols which work for any access structure which is realised by a multiplicative Extended Span Program. We generalize a number of techniques and protocols from various papers and compare the different methodologies. In particular we examine the expected communication cost per multiplication gate when the protocols are instantiated with different access structures.
Last updated:  2025-02-28
On the Security and Privacy of CKKS-based Homomorphic Evaluation Protocols
Intak Hwang, Seonhong Min, Jinyeong Seo, and Yongsoo Song
CKKS is a homomorphic encryption (HE) scheme that supports arithmetic over complex numbers in an approximate manner. Despite its utility in PPML protocols, formally defining the security of CKKS-based protocols is challenging due to its approximate nature. To be precise, in a sender-receiver model, where the receiver holds input ciphertexts and the sender evaluates its private circuit, it is difficult to define the sender's privacy in terms of indistinguishability, whereas the receiver's privacy is easily achieved through the semantic security of CKKS. In this paper, we present a new definition for CKKS-based protocols, called Differentially Private Homomorphic Evaluation (DPHE) protocols, along with a general method to achieve it. In our definition, we relax the sender's privacy condition from indistinguishability to a differential privacy notion. We focus on the fact that the main security concern for PPML protocols is differential privacy of the evaluation results, rather than simulatability of the evaluation. We prove that if the ideal functionality satisfies differential privacy and a protocol satisfies DPHE, then the output of the protocol also satisfies differential privacy. Next, we provide a general compiler that transforms a plain CKKS-based protocol into a DPHE one. We achieve this by combining the Laplace mechanism and a zero-knowledge argument of knowledge (ZKAoK) for CKKS. This approach allows us to achieve the sender's privacy with moderate noise, whereas the previous indistinguishability-based approach requires exponentially large overhead. Finally, we provide a concrete instantiation of a ZKAoK for CKKS in the form of a PIOP. To prove the well-formedness of CKKS ciphertexts and public keys, we devise new proof techniques that use homomorphic evaluation during verification. We also provide an implementation to demonstrate the practicality of our ZKAoK for CKKS by compiling the PIOPs using the HSS polynomial commitment scheme (Crypto'24).
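The Laplace mechanism used by the compiler can be sketched independently of CKKS. A hypothetical helper, where sensitivity and epsilon are the standard differential-privacy parameters (illustrative only; the paper applies the mechanism inside homomorphic evaluation):

```python
import random

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Release value + Laplace(b) noise with scale b = sensitivity / epsilon.
    # A Laplace sample is the difference of two i.i.d. exponential samples.
    b = sensitivity / epsilon
    noise = rng.expovariate(1 / b) - rng.expovariate(1 / b)
    return value + noise
```

Smaller epsilon means stronger privacy and larger noise; the paper's point is that this moderate, DP-calibrated noise replaces the exponentially large flooding noise required by indistinguishability-based definitions.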
Last updated:  2025-02-27
MAESTRO: Multi-party AES using Lookup Tables
Hiraku Morita, Erik Pohle, Kunihiko Sadakane, Peter Scholl, Kazunari Tozawa, and Daniel Tschudi
Secure multi-party computation (MPC) enables multiple distrusting parties to jointly compute a function while keeping their inputs private. Computing the AES block cipher in MPC, where the key and/or the input are secret-shared among the parties, is important for various applications, particularly threshold cryptography. In this work, we propose a family of dedicated, high-performance MPC protocols to compute the non-linear S-box part of AES in the honest majority setting. Our protocols come in both semi-honest and maliciously secure variants. The core technique is a combination of lookup table protocols based on random one-hot vectors and the decomposition of finite field inversion in into multiplications and inversion in the smaller field , taking inspiration from ideas used for hardware implementations of AES. We also apply and improve the analysis of a batch verification technique for checking inner products with logarithmic communication. This allows us to obtain malicious security with almost no communication overhead, and we use it to obtain new, secure table lookup protocols with only communication for a table of size , which may be useful in other applications. Our protocols have different trade-offs, such as having a similar round complexity as previous state-of-the-art by Chida et al. [WAHC'18] but lower bandwidth costs, or having fewer rounds and lower bandwidth costs. An experimental evaluation in various network conditions using three-party replicated secret sharing shows improvements in throughput between and in the semi-honest setting. For malicious security, we improve throughput by to in LAN and by in WAN due to sublinear batch verification.
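The lookup-table idea rests on a simple linear identity: a table lookup is the inner product of the public table with a one-hot vector encoding the secret index, and additive secret sharing commutes with that inner product. An illustrative sketch with plain additive sharing (no secure one-hot generation or malicious checks, unlike the actual protocols):

```python
import random

def additive_shares(vec, n_parties, mod, rng):
    # Split each entry of vec into n_parties additive shares modulo mod.
    shares = [[rng.randrange(mod) for _ in vec] for _ in range(n_parties - 1)]
    last = [(v - sum(col)) % mod for v, col in zip(vec, zip(*shares))]
    return shares + [last]

def shared_lookup(table, index, n_parties=3, mod=256, rng=random.Random(0)):
    # One-hot encode the secret index, share it, and let each party take a
    # local inner product with the public table; the partial results add up
    # to table[index] because the lookup is linear in the one-hot vector.
    onehot = [1 if i == index else 0 for i in range(len(table))]
    shares = additive_shares(onehot, n_parties, mod, rng)
    partial = [sum(s * t for s, t in zip(sh, table)) % mod for sh in shares]
    return sum(partial) % mod  # reconstruction step
```

The hard part, which the paper addresses, is generating the shared random one-hot vectors securely and verifying the inner products cheaply against malicious parties.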
Last updated:  2025-02-27
Towards a White-Box Secure Fiat-Shamir Transformation
Gal Arnon and Eylon Yogev
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over. A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain. We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct. We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. 
In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model. We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
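For reference, the standard Fiat-Shamir transformation that XFS builds on can be sketched on Schnorr's protocol; the group parameters below are toy values chosen for illustration only:

```python
import hashlib
import random

# Toy Schnorr group: q divides p - 1 and g generates the order-q subgroup.
# These parameters are far too small for real use.
p, q, g = 1019, 509, 4

def H(*parts):
    # Fiat-Shamir challenge: hash of the transcript, reduced modulo q.
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, rng=random.Random(0)):
    # Non-interactive proof of knowledge of x with y = g^x mod p: the
    # verifier's random challenge is replaced by a hash of (g, y, a).
    y = pow(g, x, p)
    r = rng.randrange(q)
    a = pow(g, r, p)
    c = H(g, y, a)
    z = (r + c * x) % q
    return y, (a, c, z)

def verify(y, proof):
    a, c, z = proof
    return c == H(g, y, a) and pow(g, z, p) == (a * pow(y, c, p)) % p
```

The attacks discussed above arise when a prover can exploit a white-box description of H itself; the XFS transformation augments this recipe to resist that class of attack.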
Last updated:  2025-02-27
Twist and Shout: Faster memory checking arguments via one-hot addressing and increments
Srinath Setty and Justin Thaler
A memory checking argument enables a prover to prove to a verifier that it is correctly processing reads and writes to memory. They are used widely in modern SNARKs, especially in zkVMs, where the prover proves the correct execution of a CPU including the correctness of memory operations. We describe a new approach for memory checking, which we call the method of one-hot addressing and increments. We instantiate this method via two different families of protocols, called Twist and Shout. Twist supports read/write memories, while Shout targets read-only memories (also known as lookup arguments). Both Shout and Twist have logarithmic verifier costs. Unlike prior works, these protocols do not invoke "grand product" or "grand sum" arguments. Twist and Shout significantly improve the prover costs of prior works across the full range of realistic memory sizes, from tiny memories (e.g., 32 registers as in RISC-V), to memories that are so large they cannot be explicitly materialized (e.g., structured lookup tables of size or larger, which arise in Lasso and the Jolt zkVM). Detailed cost analysis shows that Twist and Shout are well over 10x cheaper for the prover than state-of-the-art memory-checking procedures configured to have logarithmic proof length. Prior memory-checking procedures can also be configured to have larger proofs. Even then, we estimate that Twist and Shout are at least 2--4x faster for the prover in key applications. Finally, using Shout, we provide two fast-prover SNARKs for non-uniform constraint systems, both of which achieve minimal commitment costs (the prover commits only to the witness): (1) SpeedySpartan applies to Plonkish constraints, substantially improving the previous state-of-the-art protocol, BabySpartan; and (2)Spartan++ applies to CCS (a generalization of Plonkish and R1CS), improving prover times over the previous state-of-the-art protocol, Spartan, by 6x.
Last updated:  2025-02-27
Kleptographic Attacks against Implicit Rejection
Antoine Joux, Julian Loss, and Benedikt Wagner
Given its integral role in modern encryption systems such as CRYSTALS-Kyber, the Fujisaki-Okamoto (FO) transform will soon be at the center of our secure communications infrastructure. An enduring debate surrounding the FO transform is whether to use explicit or implicit rejection when decapsulation fails. Presently, implicit rejection, as implemented in CRYSTALS-Kyber, is supported by a strong set of arguments. Therefore, understanding its security implications in different attacker models is essential. In this work, we study implicit rejection through a novel lens, namely, from the perspective of kleptography. Concretely, we consider an attacker model in which the attacker can subvert the user's code to compromise security while remaining undetectable. In this scenario, we present three attacks that significantly reduce the security level of the FO transform with implicit rejection.
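The implicit-rejection variant of the FO transform can be sketched as follows; the "PKE" here is a toy XOR-plus-tag stand-in (not a real scheme), used only to show the accept/implicit-reject shape of decapsulation:

```python
import hashlib
import hmac

def kdf(m, ct):
    return hashlib.sha256(m + b"|" + ct).digest()

def toy_encrypt(key, m):
    # Toy deterministic "PKE": XOR with the key plus a short hash tag, so
    # that malformed ciphertexts can fail the re-encryption check.
    body = bytes(a ^ b for a, b in zip(m, key))
    return body + hashlib.sha256(m).digest()[:4]

def decaps(key, implicit_seed, ct):
    body = ct[:-4]
    m = bytes(a ^ b for a, b in zip(body, key))   # toy decryption
    if toy_encrypt(key, m) == ct:                 # FO re-encryption check
        return kdf(m, ct)                         # accept path
    # Implicit rejection: instead of an explicit failure symbol, return a
    # pseudorandom key derived from a secret seed and the ciphertext.
    return hmac.new(implicit_seed, ct, hashlib.sha256).digest()
```

Because failures are indistinguishable from ordinary keys to an outside observer, implicit rejection hides decapsulation failures; the kleptographic setting above asks what happens when the code computing this very branch is subverted.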
Last updated:  2025-02-27
Faster FHEW Bootstrapping with Adaptive Key Update
Qi Zhang, Mingqiang Wang, and Xiaopeng Cheng
Lee et al. proposed a new bootstrapping algorithm based on homomorphic automorphism, which merges the empty sets of ciphertexts by adjusting the window size. This algorithm supports arbitrary secret key distributions with no additional runtime costs while using small evaluation keys. However, our implementation reveals that once the window size exceeds a certain threshold, the time required for bootstrapping remains relatively constant. This observation prompts the question of how to further reduce the running time. To address this challenge, we introduce a new trick called Adaptive Key Update (AKU). With AKU and automorphism techniques, we propose a new bootstrapping algorithm for Gaussian secret keys that requires only external products and no key switching for blind rotation. Building on this, we employ window size optimization and key switching techniques to further improve the algorithm. The improved algorithm provides a useful trade-off between key storage and computational efficiency, depending on the choice of the window size. Compared to the current fastest FHEW bootstrapping method for Gaussian secret keys (the LLWW+ method proposed by Li et al.), our AKU-based algorithm reduces the number of key-switching operations by 76% and decreases the running time of bootstrapping by 20.7%. At this point, the practical runtime for bootstrapping is approximately equal to that of performing only external products.
Last updated:  2025-02-27
A New Generalized Attack on RSA-like Cryptosystems
Michel SECK, Oumar Niang, and Djiby Sow
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers N = pq, where p and q are prime numbers. The public exponent e and the private exponent d are related by the equation ed ≡ 1 (mod (p-1)(q-1)). Recently, Cotan and Teșeleanu (NordSec 2023) introduced a variant of RSA, where the public exponent and the private exponent satisfy the equation for some positive integer . In this paper, we study the general equation with positive integers and , and . We show that, given the public parameters N and e, one can recover and and factor the modulus N in polynomial time by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques, under specific conditions on , , and . Furthermore, we show that if the private exponent in an RSA-like cryptosystem is either small or too large, then N can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
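The continued-fraction step underlying such attacks goes back to Wiener's classical small-d attack on standard RSA, which the paper's generalized equation extends. A minimal sketch (standard RSA only; the Coppersmith/lattice step is not shown, and the toy parameters are illustrative):

```python
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents (k, d) of num/den."""
    cf = []
    a, b = num, den
    while b:
        cf.append(a // b)
        a, b = b, a % b
    for i in range(1, len(cf) + 1):
        p, q = 1, 0
        for term in reversed(cf[:i]):
            p, q = term * p + q, p
        yield p, q

def wiener(e, n):
    """Recover (d, p, q) when d is small enough that k/d appears
    among the convergents of e/n (using ed = 1 + k*phi(n))."""
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = n - phi + 1           # candidate p + q, since phi = (p-1)(q-1)
        disc = s * s - 4 * n      # candidate (p - q)^2
        if disc >= 0:
            t = isqrt(disc)
            if t * t == disc and (s + t) % 2 == 0:
                p, q = (s + t) // 2, (s - t) // 2
                if p * q == n:
                    return d, p, q
    return None
```

For example, `wiener(17993, 90581)` recovers d = 5 and the factors 379 and 239, since 1/5 is a convergent of 17993/90581.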
Last updated:  2025-02-27
Protocol for Tight Optimal Symmetric Security
Emanuele Di Giandomenico, Yong Li, and Sven Schäge
We present , a new balanced PAKE protocol with optimal communication efficiency. Messages are only 160 bits long and the computational complexity is lower than all previous approaches. Our protocol is proven secure in the random oracle model and features a security proof in a strong security model with multiple parties and multiple sessions, while allowing for generous attack queries including multiple -queries. Moreover, the proof is in the practically relevant single-bit model (that is harder to achieve than the multiple-bit model) and tightly reduces to the Strong Square Diffie-Hellman assumption (SSQRDH). This allows for very efficient, theoretically-sound instantiations and tight compositions with symmetric primitives.
Last updated:  2025-02-27
A Complete Security Proof of SQIsign
Marius A. Aardal, Andrea Basso, Luca De Feo, Sikhar Patranabis, and Benjamin Wesolowski
SQIsign is the leading digital signature scheme based on isogenies. Despite the many improvements that have appeared in the literature, all its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures. To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Last updated:  2025-02-27
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Sönke Jendral and Elena Dubrova
Ongoing efforts to transition to post-quantum secure public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST’s post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4 capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the all-but-one vector commitments, VOLE generation, and AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Last updated:  2025-02-27
Pseudorandom Obfuscation and Applications
Pedro Branco, Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, Spencer Peters, and Vinod Vaikuntanathan
We introduce the notion of pseudorandom obfuscation, a way to obfuscate (keyed) pseudorandom functions in an average-case sense. We study several variants of pseudorandom obfuscation and show a number of applications.
1. Applications in the iO World: Our weakest variant of pseudorandom obfuscation, named obfuscation for identical pseudorandom functions (iPRO), is weaker than indistinguishability obfuscation (iO): rather than obfuscating arbitrary circuits as in iO, iPRO only obfuscates circuits computing pseudorandom functions. We show that iPRO already enables several applications of iO, such as unleveled fully homomorphic encryption (without assuming circular security) and succinct randomized encodings.
2. From iPRO to iO: Despite being a weaker notion than iO, we show two transformations from iPRO to iO. Our first (and main) construction builds iO from iPRO plus standard assumptions on cryptographic bilinear maps. Our second construction builds iO, and even ideal obfuscation, from iPRO alone in the pseudorandom oracle model (Jain, Lin, Luo and Wichs, CRYPTO 2023). We also formulate stronger variants of pseudorandom obfuscation that help us go beyond the applications of indistinguishability obfuscation.
3. Applications outside the iO World: We show how to construct a succinct witness encryption scheme from a strong version of PRO, where the size of the ciphertext is independent of the witness size. Such a witness encryption scheme is not known to exist even assuming iO.
4. Construction: We show a *heuristic* construction of the strongest version of PRO, secure under the sub-exponential hardness of the learning with errors (LWE) problem, and the private-coin evasive LWE heuristic (Wee, EUROCRYPT 2022; Tsabary, CRYPTO 2022).
Finally, we highlight some barriers in achieving the strongest version of pseudorandom obfuscation.
Last updated:  2025-02-27
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
Han Chen, Tao Huang, Phuong Pham, and Shuang Wu
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion. Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips.
Last updated:  2025-02-27
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Paul Frixons, Valerie Gilchrist, Péter Kutas, Simon-Philipp Merz, and Christophe Petit
Cryptographic group actions provide simple post-quantum generalizations to many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the Vectorization Problem with Shifted Inputs where (in DLP formalism) the adversary not only receives and , but also for multiple known values of . A natural open question is then whether the extra data provided to the adversary in this variant allows for more efficient attacks. In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm. We then apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg’s algorithms, group action evaluations contributed to a significant part of the overall T-gate cost. For the suggested CSI-SharK parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant T-gate complexity improvements overall. We also show that the quantum security of the protocol decreases when the number of public keys increases, and quantify this degradation. Beyond its direct application to the CSI-SharK protocol, our work more generally questions the quantum security of vectorization problem variants, and it introduces the Childs-van Dam algorithm as a new quantum cryptanalysis tool.
Last updated:  2025-02-27
Traceable Secret Sharing Based on the Chinese Remainder Theorem
Charlotte Hoffmann
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), allow one to provably trace leaked shares to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes. However, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box that contains leaked shares and, on input additional shares, outputs the secret . A scheme is traceable if one can identify the leaked shares inside the box by only getting black-box access to and a private tracing key. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient as the size of the secret shares is only twice the size of the secret. In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme only by a factor of . We also introduce a new definition of semi-public secret sharing that does not require a private tracing key and give a construction based on the Chinese Remainder Theorem.
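For readers unfamiliar with the Mignotte scheme underlying this construction, here is a toy sketch (plain threshold sharing only; the paper's tracing machinery is not shown, and the parameters are illustrative):

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse via 3-argument pow
    return x % M

# A (3, 5) Mignotte scheme with pairwise-coprime moduli in increasing order.
moduli = [23, 29, 31, 37, 41]
# Mignotte condition: the secret must exceed the product of the two largest
# moduli, yet stay below the product of the three smallest.
lower = moduli[-1] * moduli[-2]              # 41 * 37 = 1517
upper = moduli[0] * moduli[1] * moduli[2]    # 23 * 29 * 31 = 20677
secret = 5000
assert lower < secret < upper

shares = [(m, secret % m) for m in moduli]   # share i = (m_i, secret mod m_i)

# Any 3 shares determine the secret by CRT; any 2 leave it ambiguous,
# since the product of two moduli is smaller than the secret.
subset = [shares[0], shares[2], shares[4]]
recovered = crt([r for _, r in subset], [m for m, _ in subset])
assert recovered == secret
```

Weighted access structures arise naturally here: giving a party a larger modulus (or several moduli) gives its shares more reconstruction power, which is the feature the paper exploits.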
Last updated:  2025-02-27
Watermarkable and Zero-Knowledge Verifiable Delay Functions from any Proof of Exponentiation
Charlotte Hoffmann and Krzysztof Pietrzak
A verifiable delay function maps an input and time parameter to an output together with an efficiently verifiable proof certifying that was correctly computed. The function runs in sequential steps, and it should not be possible to compute much faster than that. The only known practical VDFs use sequential squaring in groups of unknown order as the sequential function, i.e., . There are two constructions for the proof of exponentiation (PoE) certifying that : the PoE of Wesolowski (Eurocrypt'19) has very short proofs, but they are more expensive to compute and its soundness relies on stronger assumptions than the PoE proposed by Pietrzak (ITCS'19). A recent application of VDFs by Arun, Bonneau and Clark (Asiacrypt'22) is short-lived proofs and signatures, which are proofs and signatures that are only sound for some time , but after that can be forged by anyone. For this they rely on "watermarkable VDFs", where the proof embeds a prover-chosen watermark. To achieve stronger notions of proofs/signatures with reusable forgeability, they rely on "zero-knowledge VDFs", where instead of the output , one just proves knowledge of this output. The existing proposals for watermarkable and zero-knowledge VDFs all build on Wesolowski's PoE; for the watermarkable VDFs, there is currently no security proof. In this work we give the first constructions that transform any PoEs in hidden order groups into watermarkable VDFs and into zkVDFs, solving an open question by Arun et al. Unlike our watermarkable VDF, the zkVDF (required for reusable forgeability) is not very practical as the number of group elements in the proof is a security parameter.
We show that zkPoSW are sufficient to construct proofs or signatures with reusable forgeability, and construct efficient zkPoSW from any PoE, ultimately achieving short-lived proofs and signatures that improve upon Arun et al.'s construction in several dimensions (faster forging times, arguably weaker assumptions). A key idea underlying our constructions is to not directly construct a (watermarked or zk) proof for , but instead give a (watermarked or zk) proof for the more basic statement that satisfy for some , together with a normal PoE for .
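The sequential-squaring VDF and a Wesolowski-style PoE check can be sketched in a few lines. This is an illustration only: the challenge prime l is fixed rather than derived by Fiat-Shamir, and the toy modulus has a known factorization, whereas real VDFs require a group of unknown order:

```python
def vdf_eval(x, T, N):
    """Sequential squaring: y = x^(2^T) mod N, via T squarings in sequence."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def poe_prove(x, T, N, l):
    """Wesolowski-style proof pi = x^floor(2^T / l) mod N for challenge prime l."""
    return pow(x, (1 << T) // l, N)

def poe_verify(x, y, T, N, l, pi):
    """Accept iff pi^l * x^(2^T mod l) == y (mod N); correct because
    2^T = l * floor(2^T / l) + (2^T mod l)."""
    r = pow(2, T, l)
    return pow(pi, l, N) * pow(x, r, N) % N == y
```

The point of the PoE is the asymmetry: evaluation takes T sequential squarings, while verification costs only a couple of fast modular exponentiations with small exponents.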
Last updated:  2025-02-27
Evasive LWE: Attacks, Variants & Obfustopia
Shweta Agrawal, Anuja Modi, Anshu Yadav, and Shota Yamada
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private and public-coin regime, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO. Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded depth circuits by relying on the (public coin) ''circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre and post conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities () and extended this to obfuscation for pseudorandom functionalities () [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] showed that there exist contrived, somehow ''self-referential'', classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result to the setting of pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime. Counter-measures: Guided by the learning distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions to identify safe zones for the assumption, using which the broken applications can be recovered. Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that the main goal of presenting these candidates is to serve as goals for cryptanalysis to further our understanding of this regime of assumptions.
Last updated:  2025-02-27
New Black-Box Separations through Mathematically Structured Primitives
Hart Montgomery and Sikhar Patranabis
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results: Two-Party Key Exchange. We show that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any -round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) shows a black-box separation of -round and -round key exchange for any ; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols (where communication is ). Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between -round semi-honest secure two-party computation and -round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between -round and -round maliciously secure two-party computation protocols.
Last updated:  2025-02-26
Split Prover Zero-Knowledge SNARKs
Sanjam Garg, Aarushi Goel, Dimitris Kolonelos, Sina Shiehian, and Rohit Sinha
We initiate the study of split prover zkSNARKs, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., Zcash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup offers Alice an opportunity to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known. To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner. We propose a split prover zkSNARK based on the Groth16 zkSNARK [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also asymptotically tight, meaning it achieves the optimal second-phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
Last updated:  2025-02-26
Consensus Under Adversary Majority Done Right
Srivatsan Sridhar, Ertem Nusret Tas, Joachim Neu, Dionysis Zindros, and David Tse
A specter is haunting consensus protocols—the specter of adversary majority. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, other works show impossibility results for adversaries above 50% under synchrony, seemingly the same setting as Dolev and Strong's. What gives? It is high time that we pinpoint a key culprit for this ostensible contradiction: the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
Last updated:  2025-02-26
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Wouter Castryck, Thomas Decru, Péter Kutas, Abel Laval, Christophe Petit, and Yan Bo Ti
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over can be represented by a certain type of matrix , having entries in the quaternion algebra . We present a heuristic polynomial-time algorithm which, upon input of two such matrices , finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in . The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-2 curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting -isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Last updated:  2025-02-26
Functional Oblivious Transfer with Applications in Privacy-Preserving Machine Learning
Aydin Abadi and Mohammad Naseri
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn out of private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining messages. In this work, we introduce the notion of functional OT (FOT), for the first time. FOT adds a layer of security to the conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected message is mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant with respect to . Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
Last updated:  2025-02-26
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Shalini Banerjee, Tapas Pal, Andy Rupp, and Daniel Slamanig
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a ''dictator'') who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows to transmit an anamorphic message using a double key to a receiver that can decrypt this message using a double key. From the point of view of the dictator the encryption keys as well as the ciphertexts in the regular and anamorphic mode are indistinguishable. The most recent works in this field consider public key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all) and the receiver requires a decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations. In this paper we ask whether we can design such PKAE schemes without such limitations and being closer to PKE schemes used in practice. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than just CCA) secure PKE. Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in case only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Also here we consider security beyond IND-CPA for the anamorphic channel.
Last updated:  2025-02-26
A Closer Look at Falcon
Phillip Gajland, Jonas Janneck, and Eike Kiltz
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08), Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes. Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon due to parameter choices resulting in statistical distances as large as . Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its selection for standardisation. This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure. Unfortunately, our analysis shows that despite our modification of Falcon-512 and Falcon-1024 we do not achieve strong unforgeability for either scheme. For plain unforgeability we are able to show that our modifications to Falcon-512 barely satisfy the claimed 120-bit security target and for Falcon-1024 we confirm the claimed security level. As such we recommend revisiting Falcon and its parameters.
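For intuition on why the choice of closeness measure matters, here is a minimal sketch of the two measures involved, for generic finite distributions (this is not Falcon's actual sampler analysis):

```python
def statistical_distance(P, Q):
    """SD(P, Q) = (1/2) * sum_x |P(x) - Q(x)| over a common finite support."""
    return 0.5 * sum(abs(p - q) for p, q in zip(P, Q))

def renyi_divergence(P, Q, a):
    """R_a(P || Q) = (sum_x P(x)^a / Q(x)^(a-1))^(1/(a-1)) for order a > 1;
    requires Q(x) > 0 wherever P(x) > 0."""
    s = sum(p ** a / q ** (a - 1) for p, q in zip(P, Q) if p > 0)
    return s ** (1.0 / (a - 1))
```

Identical distributions give statistical distance 0 and Rényi divergence 1. The multiplicative form of Rényi-based bounds is what allows a security argument to tolerate samplers whose statistical distance from the ideal distribution is far too large for a GPV-style statistical-distance argument.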
Last updated:  2025-02-26
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, and Edoardo Signorini
A multi-signature scheme allows a list of signers to sign a common message. They are widely used in scenarios where the same message must be signed and transmitted by users, and, instead of concatenating individual signatures, employing a multi-signature can reduce the data to be sent. In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22). Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM. Moreover, we instantiate it using the three candidates of NIST's additional post-quantum call, namely LESS, MEDS and ALTEQ, obtaining a good compression rate for different parameter sets.
Last updated:  2025-02-26
Practical Key-Extraction Attacks in Leading MPC Wallets
Nikolaos Makriyannis, Oren Yomtov, and Arik Galansky
Multi-Party Computation (MPC) has become a major tool for protecting hundreds of billions of dollars in cryptocurrency wallets. MPC protocols are currently powering the wallets of Coinbase, Binance, Zengo, BitGo, Fireblocks and many other fintech companies servicing thousands of financial institutions and hundreds of millions of end-user consumers. We present four novel key-extraction attacks on popular MPC signing protocols showing how a single corrupted party may extract the secret in full during the MPC signing process. Our attacks are highly practical (the practicality of the attack depends on the number of signature-generation ceremonies the attacker participates in before extracting the key). Namely, we show key-extraction attacks against different threshold-ECDSA protocols/implementations requiring , , , and *one signature*, respectively. In addition, we provide proof-of-concept code that implements our attacks.
Last updated:  2025-02-26
Differential Cryptanalysis of the Reduced Pointer Authentication Code Function used in Arm’s FEAT_PACQARMA3 Feature
Roberto Avanzi, Orr Dunkelman, and Shibam Ghosh
The Pointer Authentication Code (PAC) feature in the Arm architecture is used to enforce the Code Flow Integrity (CFI) of running programs. It does so by generating a short tag - called the PAC - of the return address and some additional context information upon function entry, and checking it upon exit. An attacker that wants to overwrite the stack with manipulated addresses now faces an additional hurdle, as they now have to guess, forge, or reuse PAC values. PAC is deployed on billions of devices as a first line of defense to harden system software and complex programs against software exploitation. The original version of the feature uses a 12-round version of the QARMA block cipher. The output is then truncated to between 3 and 32 bits, in order to be inserted into unused bits of 64-bit pointers. A later revision of the specification allows the use of an 8-round version of QARMA. This reduction may introduce vulnerabilities such as high-probability distinguishers, potentially enabling key recovery attacks. The present paper explores this avenue. A cryptanalysis of the PAC computation function entails restricting the inputs to valid virtual addresses, meaning that certain most significant bits are fixed to zero, and considering only the truncated output. Within these constraints, we present practical attacks on various configurations. These attacks, while not presenting an immediate threat to the PAC mechanism, show that some versions of the feature do miss the security targets made for the original function. This offers new insights into the practical security of constructing MACs from truncated block ciphers, expanding on the mostly theoretical understanding of creating PRFs from truncated PRPs. We note that the results do not affect the security of QARMA when used with the recommended number of rounds for general purpose applications.
Last updated:  2025-02-26
Strong Existential Unforgeability and BUFF Securities of MPC-in-the-Head Signatures
Mukul Kulkarni and Keita Xagawa
NIST started the standardization of additional post-quantum signatures in 2022. Among the 40 candidates, a few claimed security stronger than existential unforgeability, namely strong existential unforgeability and BUFF (beyond unforgeability features) securities. Recently, Aulbach, Düzlü, Meyer, Struck, and Weishäupl (PQCrypto 2024) examined the BUFF securities of 17 of the 40 candidates. Unfortunately, for the so-called MPC-in-the-Head (MPCitH) signature schemes, nothing is known about strong existential unforgeability and BUFF securities. This paper studies the strong securities of all nine MPCitH signature candidates: AIMer, Biscuit, FAEST, MIRA, MiRitH, MQOM, PERK, RYDE, and SDitH. We show that these MPCitH signature schemes are strongly existentially unforgeable under chosen-message attacks in the (quantum) random oracle model. To do so, we introduce a new property of the underlying multi-pass identification scheme, which we call _non-divergency_. This property can be considered a weakened version of the computational unique response for three-pass identification defined by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) and its extension to multi-pass identification defined by Don, Fehr, and Majenz (CRYPTO 2020). In addition, we show that the SSH11 protocol proposed by Sakumoto, Shirai, and Hiwatari (CRYPTO 2011) is _not_ computational unique response, contrary to the claim of Don et al. (CRYPTO 2020). We also survey the BUFF securities of the nine MPCitH candidates in the quantum random oracle model. In particular, we show that Biscuit and MiRitH do not achieve some of the BUFF securities.
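The distinction between existential and strong existential unforgeability can be made concrete with a toy security experiment: under plain EUF-CMA the adversary must forge a signature on a *fresh message*, while under sEUF-CMA a fresh *(message, signature) pair* suffices, even on a previously queried message. The sketch below uses a toy HMAC "signature" purely for illustration; it is not an MPCitH scheme.

```python
# Toy sEUF-CMA experiment: the adversary wins only with a valid
# (message, signature) pair that the signing oracle never returned.
import hmac
import hashlib

class StrongUnforgeabilityGame:
    def __init__(self, key: bytes):
        self.key = key
        self.queried = set()   # (message, signature) pairs handed out

    def sign_oracle(self, msg: bytes) -> bytes:
        sig = hmac.new(self.key, msg, hashlib.sha256).digest()
        self.queried.add((msg, sig))
        return sig

    def adversary_wins(self, msg: bytes, sig: bytes) -> bool:
        valid = hmac.compare_digest(
            sig, hmac.new(self.key, msg, hashlib.sha256).digest())
        # sEUF-CMA: replaying an oracle-issued pair does not count.
        return valid and (msg, sig) not in self.queried
```

For randomized schemes (as MPCitH signatures are) the two notions genuinely differ: re-randomizing a seen signature on a seen message breaks sEUF-CMA but not EUF-CMA, which is why properties like non-divergency are needed in the analysis.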
Last updated:  2025-02-26
Computationally Efficient Asynchronous MPC with Linear Communication and Low Additive Overhead
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, and Yifan Song
We explore the setting of asynchronous multi-party computation (AMPC) with optimal resilience (tolerating t < n/3 corruptions), and develop an efficient protocol that optimizes both communication and computation. The recent work by Goyal, Liu-Zhang, and Song [Crypto '24] was the first to achieve AMPC with amortized linear communication cost without using computationally heavy public-key cryptography. However, its large additive communication overhead renders it impractical for most real-world applications. It is possible to reduce the communication overhead significantly by leveraging cryptographic tools such as homomorphic commitments, public-key cryptography, or zero-knowledge proofs; however, the corresponding AMPC protocols introduce the computation overhead of public-key cryptographic operations, which becomes the bottleneck as the number of parties grows. Overall, achieving AMPC with linear communication complexity, low additive communication overhead, and low computation overhead remains an open challenge. In this work, we resolve this efficiency challenge by utilizing the random oracle model. By relying solely on computationally efficient primitives such as random oracle hashes and symmetric-key cryptography, our protocol is not only efficient in terms of computation and communication overhead but also post-quantum secure. Our protocol achieves linear communication per multiplication gate with a small additive communication overhead. In terms of computation, it introduces only an additive number of hash computations independent of the circuit size.
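The kind of symmetric-key primitive such protocols lean on, in place of homomorphic commitments or other public-key machinery, can be sketched as a hash-based commitment (with the hash modeled as a random oracle). This is an illustrative fragment, not this paper's protocol.

```python
# Hash-based commitment: binding by collision resistance, hiding by the
# random opening value. Only symmetric primitives, hence cheap and
# plausibly post-quantum.
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening). The opening stays secret until reveal."""
    opening = secrets.token_bytes(32)
    com = hashlib.sha256(opening + value).digest()
    return com, opening

def verify(com: bytes, value: bytes, opening: bytes) -> bool:
    """Check a revealed (value, opening) pair against the commitment."""
    return hashlib.sha256(opening + value).digest() == com
```

A hash evaluation costs orders of magnitude less than an exponentiation or pairing, which is why avoiding public-key operations removes the computation bottleneck as the number of parties grows.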
Last updated:  2025-02-26
Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, and Zhusen Liu
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base is small. Under these conditions, the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
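The classical quadratic residuosity attack that these higher residuosity attacks generalize is easy to demonstrate: Euler's criterion decides in one modular exponentiation whether a group element is a square, and when the base is a non-residue this leaks the parity of the secret exponent. The prime and base below are small illustrative choices.

```python
# Euler's criterion as a distinguisher: for a quadratic non-residue g mod p,
# g^x is a square iff x is even, so residuosity leaks x mod 2.
def legendre(a: int, p: int) -> int:
    """Return 1 if a is a nonzero square mod p, p-1 (i.e., -1) otherwise."""
    return pow(a, (p - 1) // 2, p)

def exponent_parity(h: int, p: int) -> int:
    """Recover x mod 2 from h = g^x when g is a non-residue mod p."""
    return 0 if legendre(h, p) == 1 else 1

P = 1019   # prime with P % 8 == 3, so 2 is a quadratic non-residue mod P
G = 2
```

A higher residuosity attack replaces the squares with d-th powers for d > 2, leaking x mod d instead of just its parity; this is what makes small subgroup decision problems with a small prime base fragile.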
Last updated:  2025-02-26
Polynomial Secret Sharing Schemes and Algebraic Matroids
Amos Beimel, Oriol Farràs, and Adriana Moya
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret. Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids. Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. 
Finally, we extend this result to the general case where each party may have more than one polynomial.
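The simplest ideal scheme in this framework is Shamir's, where each party's single polynomial is linear in the secret and the random field elements: the share of party i is the evaluation at i of a random degree-t polynomial with the secret as constant term. The field size below is an illustrative choice.

```python
# Shamir sharing as an instance of (linear) polynomial secret sharing:
# shares are polynomial evaluations; any t+1 parties reconstruct by
# Lagrange interpolation at 0, while t parties learn nothing.
import random

P = 2**31 - 1   # Mersenne prime; the finite field is Z_P

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Deal n shares of `secret` with threshold t (t+1 needed to recover)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the sharing polynomial at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Here the access structure (all sets of size at least t+1) is exactly the one determined by the linear dependence of the sharing maps; the paper's question is which access structures arise analogously from the *algebraic* dependence of general sharing polynomials.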