Papers updated in last 183 days (1741 results)
The Violation of Bell's Inequality Represents Nothing
Bell's mathematical formulation of the EPR paradox, especially Bell's inequality, has interested many physicists and philosophers. We revisit this famous formulation and investigate the related arguments purely from a mathematical point of view. We find: (1) there is a key assumption inconsistent with the general hidden variable theory; (2) the mutual independence between measurements has been thoroughly neglected in the past decades; (3) the inequality involves three pairs of particles, rather than the single pair that is generally imagined to be measured. These findings could provide a new glimpse into this old yet still hotly debated issue.
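For reference (standard background, not a claim of the paper), Bell's original 1964 inequality bounds the correlations E(·,·) of outcomes measured along three analyzer settings a, b, c:
\[ |E(a,b) - E(a,c)| \le 1 + E(b,c). \]
Each of the three correlation terms must be estimated on its own ensemble of particle pairs, which is the observation behind point (3) above.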
Blind Signatures from Cryptographic Group Actions
We provide a generic construction of blind signatures from cryptographic group actions following the framework of the blind signature CSI-Otter introduced by Katsumata et al. (CRYPTO'23) in the context of isogenies (commutative group actions). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions that is proven secure in the random oracle model. We also propose an instantiation based on a variant of linear code equivalence, interpreted as a symmetric group action.
MIFARE Classic: exposing the static encrypted nonce variant
MIFARE Classic smart cards, developed and licensed by NXP, are widely used but have been subjected to numerous attacks over the years. Despite the introduction of new versions, these cards have remained vulnerable, even in card-only scenarios. In 2020, the FM11RF08S, a new variant of MIFARE Classic, was released by the leading Chinese manufacturer of unlicensed "MIFARE compatible" chips. This variant features specific countermeasures designed to thwart all known card-only attacks and is gradually gaining market share worldwide.
In this paper, we present several attacks and unexpected findings regarding the FM11RF08S. Through empirical research, we discovered a hardware backdoor and successfully cracked its key. This backdoor enables any entity with knowledge of it to compromise all user-defined keys on these cards without prior knowledge, simply by accessing the card for a few minutes.
Additionally, our investigation into older cards uncovered another hardware backdoor key that was common to several manufacturers.
Hash-Based Multi-Signatures for Post-Quantum Ethereum
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature.
In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
New proof systems and an OPRF from CSIDH
Isogeny computations in CSIDH (Asiacrypt 2018) are described using a commutative group G acting on the set of supersingular elliptic curves. The commutativity property gives CSIDH enough flexibility to allow the creation of many cryptographic primitives and protocols. Nevertheless, these operations are limited and more complex applications have not yet been proposed.
Calling the composition of two group elements of G addition, our goal in this work is to explore exponentiation, multiplication by public elements, and multiplication between secret elements of this group. We first introduce a two-party interactive protocol for multiplication of secret group elements. Then, we explore zero-knowledge proofs of these different arithmetic operations. We present two types of approaches, using either standard sigma protocols or the MPC-in-the-Head paradigm. Most of our proofs need a trusted setup, which can be removed in the MPC-in-the-Head setting using cut-and-choose techniques. We conclude this work by presenting an oblivious pseudorandom function based on our new framework that is competitive with current state-of-the-art designs.
Faster Proofs and VRFs from Isogenies
We improve recent generic proof systems for isogeny knowledge by Cong, Lai, Levin [27] based on circuit satisfiability, by using radical isogeny descriptions [20, 21] to prove a path in the underlying isogeny graph. We then present a new generic construction for a verifiable random function (VRF) based on a one-more type hardness assumption and zero-knowledge proofs. We argue that isogenies fit the constraints of our construction and instantiate the VRF with a CGL walk [23] and our new proofs. As a different contribution, we also propose a new VRF in the effective group action description of isogenies from [1]. Our protocol takes a novel approach based on the polynomial-in-the-exponent technique first described in [36], but without the need of a trusted setup or heavy preprocessing. We compare our protocols to the current state-of-the-art isogeny VRFs by Leroux [56] and Lai [54], with a particular emphasis on computational efficiency.
BulletCT: Towards More Scalable Ring Confidential Transactions With Transparent Setup
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions:
We provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs and uses an efficient "K-Weight" technique for its construction. However, we identify for the first time several limitations of using Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexity, and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck.
We then continue to explore the potential of using K-out-of-N proofs to enhance scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks.
Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
"There's always another counter": Detecting Micro-architectural Attacks in a Probabilistically Interleaved Malicious/Benign Setting
Modern micro-architectural attacks use a variety of building blocks chained to develop a final exploit. However, since in most cases the footprint of such attacks is not visible architecturally (e.g., in the file system), they are trickier to defend against. In light of this, several automated defence mechanisms use Hardware Performance Counters (HPCs) to detect when micro-architectural elements are being misused for potential attacks (like Flush+Reload, Spectre, Meltdown, etc.). In order to bypass such defences, recent works have proposed the idea of "probabilistic interleaving": the adversary interleaves the actual attack code with benign code at very low frequency. Such a strategy throws off the HPC-based detection with a lot of unnecessary noise; recent studies have shown that probabilistically interleaved attacks can achieve an attack evasion rate of 100% (i.e., they are virtually undetectable).
In this work, we challenge this folklore. We develop a theoretical model of interleaved attacks using lightweight statistical tools like Gaussian Mixture Models and the Dip Test for Unimodality, and prove that such attacks are detectable for the correct choices of HPCs. Furthermore, we also show a possible defence strategy against a stronger threat model than considered in the literature: one where the attacker interleaves multiple attacks instead of a single attack. Empirically, to instantiate our detector, in contrast to prior detection strategies, we choose LLMs for a number of reasons: (1) LLMs can easily contextualize data from a larger set of HPCs than generic machine learning techniques, and (2) with simple prompts, LLMs can quickly switch between different statistical analysis methods. To this end, we develop an LLM-based methodology to detect probabilistically interleaved attacks. Our experiments establish that our improved methodology is able to achieve 100% detection of speculative attacks like Spectre v1/v2/v3, Meltdown, and Spectre v2 (with improved gadgets that even evade recent protections like Enhanced IBRS, IBPB conditional, and so on). This makes our methodology suitable for detecting speculative attacks in a non-profiled setting: where attack signatures might not be known in advance. All in all, we achieve a 100% attack detection rate, even with very low interleave frequencies (i.e. ).
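As a rough illustration of the statistical idea (a minimal sketch under our own assumptions, not the paper's detector or its LLM pipeline), one can flag bimodality in per-window HPC readings with a two-component Gaussian Mixture Model:

    # Minimal sketch: flag bimodality in per-window HPC counts via GMMs.
    # Counter choice, window size, and the BIC margin are illustrative only.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def looks_interleaved(hpc_counts, bic_margin=10.0):
        """True if a 2-component GMM fits the counts markedly better than a
        1-component one, hinting at a benign/attack mixture."""
        x = np.asarray(hpc_counts, dtype=float).reshape(-1, 1)
        bic1 = GaussianMixture(n_components=1, random_state=0).fit(x).bic(x)
        bic2 = GaussianMixture(n_components=2, random_state=0).fit(x).bic(x)
        return bic1 - bic2 > bic_margin

    # Example: mostly benign windows, rare attack-bearing windows (interleaving).
    rng = np.random.default_rng(0)
    benign = rng.normal(1000, 50, size=950)    # e.g., LLC-miss counts per window
    attack = rng.normal(4000, 100, size=50)
    print(looks_interleaved(np.concatenate([benign, attack])))  # True

Even a low interleave frequency leaves a second, well-separated mode that such a test can pick up, which is the intuition behind the detectability claim above.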
Our detection principle and its instantiation through LLMs show how probabilistically interleaving attack code in benign execution is not a perfect evasion strategy, and that more research is still needed into developing and countering better attack evasion strategies.
Black-Box (and Fast) Non-Malleable Zero Knowledge
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC 91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks.
Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017) that affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency).
In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability.
Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
Collision-Based Attacks on Block Cipher Modes - Exploiting Collisions and Their Absence
Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) is the most widely used Authenticated Encryption with Associated Data (AEAD) algorithm in the world. In this paper, we analyze the use of GCM with all the Initialization Vector (IV) constructions and lengths approved by NIST SP 800-38D when encrypting multiple plaintexts with the same key. We derive attack complexities in both ciphertext-only and known-plaintext models, with or without nonce hiding, for collision attacks compromising integrity and confidentiality. To facilitate the analysis of GCM with random IVs, we derive a new, simplified equation for near-birthday collisions. Our analysis shows that GCM with random IVs provides less than 128 bits of security. When 96-bit IVs are used, as recommended by NIST, the security drops to less than 97 bits. Therefore, we strongly recommend that NIST forbid the use of GCM with 96-bit random nonces.
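For orientation (the standard birthday approximation; the paper derives a refined equation for near collisions), encrypting q messages under uniformly random 96-bit IVs yields an IV collision with probability roughly
\[ \Pr[\text{collision}] \;\approx\; \binom{q}{2} \cdot 2^{-96} \;\approx\; \frac{q^2}{2^{97}}, \]
so a repeated IV, which is catastrophic for both confidentiality and integrity in GCM, is expected after about 2^48 encryptions.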
On the BUFF Security of ECDSA with Key Recovery
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore the BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). We note that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash value of the recovered verification key equals addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify that the above equality check is mandatory for BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021). We emphasize that the attack is positioned outside of their security model.
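A minimal sketch of the address check described above (illustrative only; the ECDSA key-recovery routine is abstracted away as a hypothetical `recover` callback, while Keccak-256 is taken from pycryptodome):

    # Sketch of the KR-ECDSA verification step discussed above (illustrative).
    from Crypto.Hash import keccak  # pycryptodome

    def ethereum_address(pubkey_xy: bytes) -> bytes:
        """Rightmost 160 bits (20 bytes) of the Keccak-256 hash of the key."""
        h = keccak.new(digest_bits=256)
        h.update(pubkey_xy)
        return h.digest()[-20:]

    def kr_ecdsa_verify(addr: bytes, msg: bytes, sig: bytes, recover) -> bool:
        # Recover a candidate verification key from (msg, sig), then check that
        # its hashed, truncated form equals the address supplied as input.
        recovered_key = recover(msg, sig)  # hypothetical ECDSA key recovery
        return ethereum_address(recovered_key) == addr

As noted above, this final equality check is what the paper's analysis identifies as mandatory for BUFF security.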
Commitment Schemes Based on Module-LIP
Recently, Jiang et al. (EUROCRYPT 2025) proposed a universal framework for constructing commitment schemes using group actions, and instantiated it with the Lattice Isomorphism Problem (LIP). This paper attempts to construct an instantiation of this framework based on module-LIP. More precisely, we first present a reduction from -LIP to -LAP. Then we develop a re-randomization algorithm based on the self-reduction framework of module-LIP (Ducas et al., ASIACRYPT 2022) and adapt it to the framework to construct commitment schemes.
Non-interactive Anonymous Tokens with Private Metadata Bit
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous client authentication while also embedding trust signals that are only readable by the authority who holds the issuance secret key and nobody else. A drawback of all existing ATPM constructions is that they require client-issuer interaction during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the non-interaction property during the issuance process allows for more efficient issuance protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend (i.e., present the same token twice).
Enhanced CKKS Bootstrapping with Generalized Polynomial Composites Approximation
Bootstrapping in approximate homomorphic encryption involves evaluating the modular reduction function. Traditional methods decompose the modular reduction function into three components: scaled cosine, double-angle formula, and inverse sine. While these approaches offer a strong trade-off between computational cost and level consumption, they lack flexibility in parameterization.
In this work, we propose a new method to decompose the modular reduction function with improved parameterization, generalizing prior trigonometric approaches. Numerical experiments demonstrate that our method achieves near-optimal approximation errors. Additionally, we introduce a technique that integrates the rescaling operation into matrix operations during bootstrapping, further reducing computational overhead.
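For context, the traditional decomposition mentioned above (standard background, not the new method of this paper) evaluates the modular reduction through trigonometric identities: for |[t]_q| < q/4,
\[ [t]_q \;=\; \frac{q}{2\pi}\,\arcsin\!\Big(\sin\frac{2\pi t}{q}\Big), \]
where the sine is obtained from a low-degree polynomial approximation of a scaled cosine followed by repeated use of the double-angle formula \cos 2\theta = 2\cos^2\theta - 1, and the arcsine is approximated by another low-degree polynomial.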
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size where constructing an opening proof requires field operations and scalar multiplications. Moreover, the opening proof consists of a constant number of field elements.
This is a significant improvement over previous works which would require either
1. field operations; or
2. size opening proof.
The main technical component is a new method of verifiably folding a witness via univariate polynomial division.
As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
On Improved Cryptanalytic Results against ChaCha for Reduced Rounds ≥ 7
In this paper, we analyze the subtle issues of complexity estimates related to state-of-the-art cryptanalytic efforts on ChaCha. In this regard, we demonstrate that the currently best-known cryptanalytic result on -round ChaCha with time and data [Xu et al., ToSC 2024] can be estimated as for time and for data complexity. We improve the best-known result for the round by obtaining an improved set of Probabilistic Neutral Bits and considering our revised estimation. Our result with time complexity and data complexity improves the result of Xu et al., where they could achieve time and data complexity and , respectively. For both the and rounds, we can show an improvement of the order of in the time complexity. For -round, we improve the result of Dey [IEEE-IT 2024], which reports the time and data complexity of and , respectively. By applying the formula of the same paper and incorporating additional PNBs, we obtain improved time and data complexity of and , respectively. Thus, this paper describes the currently best-known cryptanalytic results against reduced-round ChaCha. Our results do not affect the security claims of the complete algorithm with 20 rounds. Also, we provide a rebuttal of the work by Wang et al. [wangeprint] and analyze their claim about the error in the "Divide-and-Conquer" approach.
BUFFing Threshold Signature Schemes
We explore advanced security notions for threshold signature schemes, focusing on Beyond UnForgeability Features (BUFF), introduced by Cremers et al. (S&P’21) in the non-threshold setting. The BUFF properties protect against attacks based on maliciously chosen keys, e.g., expropriating a message-signature pair under a new public key (called exclusive ownership). We first formalize these notions in the threshold setting and examine their relationships. Notably, unlike regular signature schemes, the hierarchy of variants of exclusive ownership notions only holds for threshold schemes if they are also robust.
We then present a generic compiler that transforms any threshold signature scheme to satisfy exclusive ownership, and message-bound signature properties with minimal overhead. Furthermore, we modify the threshold BLS signature scheme to achieve these additional properties without increasing the signature size. Lastly, we identify specific structures in threshold signature schemes where BUFF properties can be naturally extended from the underlying standard signature scheme, and we analyze and prove the security properties in some of the existing threshold schemes.
Exploring How to Authenticate Application Messages in MLS: More Efficient, Post-Quantum, and Anonymous Blocklistable
The Message Layer Security (MLS) protocol has recently been standardized by the IETF. MLS is a scalable secure group messaging protocol expected to run more efficiently compared to the Signal protocol at scale, while offering a similar level of strong security. Even though MLS has undergone extensive examination by researchers, the majority of the works have focused on confidentiality.
In this work, we focus on the authenticity of the application messages exchanged in MLS. Currently, MLS authenticates every application message with an EdDSA signature, and while this is manageable, the overhead is greatly amplified in the post-quantum setting, as the NIST-recommended Dilithium signature results in a 40x increase in size. We view this as an invitation to explore new authentication modes that can be used instead. We start by taking a systematic view of how application messages are authenticated in MLS and categorize authenticity into four different security notions. We then propose several authentication modes, offering a range of different efficiency and security profiles. For instance, in one of our modes, COSMOS++, we replace signatures with one-time tokens and a MAC tag, offering roughly a 75x savings in the post-quantum communication overhead. While this comes at the cost of weaker security compared to the authentication mode used by MLS, the lower communication overhead seems to make it a worthwhile trade-off.
A Note on the Blindness of the Scheme from ePrint 2025/397
This note demonstrates that the blind signature scheme based on cryptographic group actions, as proposed in ePrint paper 2025/397, fails to ensure blindness. Specifically, we construct an adversary that achieves a advantage in the blindness experiment. The attack leverages selective abort techniques (also known as selective failure attacks), a well-known strategy in the MPC literature.
Two-Round 2PC ECDSA at the Cost of 1 OLE
We present a novel protocol for two-party ECDSA that achieves two rounds (a single back-and-forth communication) at the cost of a single oblivious linear function evaluation (OLE). In comparison, the previous work of [DKLs18] (S&P 2018) achieves two rounds at the cost of three OLEs, while [BHL24] (EUROCRYPT 2025) requires expensive zero-knowledge proofs on top of the OLE. We demonstrate this by proving that in the generic group model, any adversary capable of generating forgeries for our protocol can be transformed into an adversary that finds preimages for the ECDSA message digest function (e.g., the SHA family). Interestingly, our analysis is closely related to, and has ramifications for, the 'presignatures' mode of operation of [CGGMP20] (CCS 2020) and [GroSho22] (EUROCRYPT 2022).
Motivated by applications to embedded cryptocurrency wallets, where a single server maintains distinct, shared public keys with separate clients (i.e., a star-shaped topology), and with the goal of minimizing communication, we instantiate our protocol using Paillier encryption and suitable zero-knowledge proofs. To reduce computational overhead, we thoroughly optimize all components of our protocol under sound cryptographic assumptions, specifically small-exponent variants of RSA-style assumptions.
Finally, we implement our protocol and provide benchmarks. At the 128-bit security level, the signing phase requires approximately 50ms of computation time on a standard linux machine, and 2KB of bandwidth.
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computations done by a quantum server — classical verification of quantum computations, for short. Their compiler relies on the existence of quantum fully homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols.
Our compiler is based on a novel blind classical delegation of quantum computation protocol, constructed within the framework of measurement-based quantum computation. The latter construction combines a new blind remote state preparation protocol with a generalization of the universal blind quantum computation protocol by Broadbent, Fitzsimons, and Kashefi (FOCS 2009).
These two constructions may also be of independent interest, as the techniques employed could enable the blind construction of more specific quantum states, and the generalized framework allows for the use of blind quantum computation in a broader range of applications, particularly in quantum cryptographic protocols.
The compiler can be instantiated assuming the existence of any plain trapdoor claw-free function.
Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols for classically verifying quantum computations based on a broader range of computational assumptions, potentially weaker than those previously known.
Great-LaKeys: An Improved Threshold-PRF and a Novel Exponent-VRF from LWR
Building on the recently proposed LWR-based threshold-PRF LaKey, we propose two new constructions. First, we propose an optimized threshold-PRF with significantly reduced round and communication complexity. We achieve this by improving the underlying bit truncation protocol, as well as the lower bound on the required number of LWR instances. Second, we show that the same underlying PRF construction lends itself as a basis for a novel and efficient exponent-VRF. We implement prototypes of both of our contributions and demonstrate their practical performance.
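For background (notation ours), an LWR sample replaces the additive noise of LWE with deterministic rounding to a smaller modulus:
\[ \mathrm{LWR}_{\mathbf{s}}(\mathbf{a}) \;=\; \Big\lfloor \tfrac{p}{q}\,\langle \mathbf{a}, \mathbf{s} \rangle \Big\rceil \bmod p, \qquad \mathbf{a} \in \mathbb{Z}_q^n,\ \mathbf{s} \in \mathbb{Z}_q^n,\ p < q. \]
Roughly speaking, the bit-truncation protocol mentioned above corresponds to computing this rounding on secret-shared values, so improving it directly reduces the round and communication cost of the threshold PRF.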
Matchmaker: Fast Secure Inference across Deployment Scenarios
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios.
We observe that the hitherto unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system and a Function Secret Sharing (FSS)-based system for secure inference, optimized for small key size and communication, respectively. Notably, our highly optimized and hardware-aware CPU-based system outperforms prior GPU-based LSS systems by up to . We then show that the best choice between the two depends on the deployment scenario.
In fact, under certain deployments, a combination of the two can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems by up to (geomean ).
Multi-Client Attribute-Based Unbounded Inner Product Functional Encryption, and More
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT’20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if vector lengths and indices meet a specific relation, and all clients’ attributes satisfy the key’s policy. We propose the following constructions based on the matrix decisional Diffie-Hellman assumption in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public key setting; improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE); enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded-length vectors both at the time of key generation and encryption.
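In symbols (notation ours), the AB-IP functionality described at the start of this abstract lets a secret key tied to vectors y_1, ..., y_n and an access policy decrypt the clients' ciphertexts of x_1, ..., x_n to
\[ \sum_{i=1}^{n} \langle \mathbf{x}_i, \mathbf{y}_i \rangle, \]
provided the vector lengths and indices satisfy the prescribed relation and every client's attributes satisfy the key's policy; nothing beyond this sum is revealed.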
A Small Serving of Mash: (Quantum) Algorithms for SPDH-Sign with Small Parameters
We find an efficient method to solve the semidirect discrete logarithm problem (SDLP) over finite nonabelian groups of order and exponent for certain exponentially large parameters. This implies an attack on SPDH-Sign, a signature scheme based on the SDLP, for such parameters. We also take a step toward proving the quantum polynomial time equivalence of SDLP and SCDH.
Distributed Differentially Private Data Analytics via Secure Sketching
We introduce the linear-transformation model, a distributed model of differentially private data analysis. Clients have access to a trusted platform capable of applying a public matrix to their inputs. Such computations can be securely distributed across multiple servers using simple and efficient secure multiparty computation techniques.
The linear-transformation model serves as an intermediate model between the highly expressive central model and the minimal local model. In the central model, clients have access to a trusted platform capable of applying any function to their inputs. However, this expressiveness comes at a cost, as it is often prohibitively expensive to distribute such computations, leading to the central model typically being implemented by a single trusted server. In contrast, the local model assumes no trusted platform, which forces clients to add significant noise to their data. The linear-transformation model avoids the single point of failure for privacy present in the central model, while also mitigating the high noise required in the local model.
We demonstrate that linear transformations are very useful for differential privacy, allowing for the computation of linear sketches of input data. These sketches largely preserve utility for tasks such as private low-rank approximation and private ridge regression, while introducing only minimal error, critically independent of the number of clients.
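A toy numpy sketch of the model (ours; parameters illustrative, no concrete privacy calibration implied): the platform only ever applies a public matrix, here for simplicity to the sum of client inputs, and noise is added once, independently of the number of clients.

    # Toy illustration of the linear-transformation model (not the paper's code).
    import numpy as np

    rng = np.random.default_rng(1)
    d, m, n_clients = 1000, 64, 1_000

    A = rng.normal(0, 1 / np.sqrt(m), size=(m, d))      # public sketching matrix
    clients = rng.integers(0, 2, size=(n_clients, d))   # one bit-vector per client

    aggregate = clients.sum(axis=0)          # computed on secret shares in practice
    sigma = 8.0                              # noise scale, set by the privacy budget
    noisy_sketch = A @ aggregate + rng.normal(0, sigma, size=m)

In a real deployment the matrix-vector product would be carried out on secret shares across several servers, as described above.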
Theoretical Approaches to Solving the Shortest Vector Problem in NP-Hard Lattice-Based Cryptography with Post-SUSY Theories of Quantum Gravity in Polynomial Time by Orch-Or
The Shortest Vector Problem (SVP) is a cornerstone of lattice-based cryptography, underpinning the security of numerous cryptographic schemes like NTRU. Given its NP-hardness, efficient solutions to SVP have profound implications for both cryptography and computational complexity theory. This paper presents an innovative framework that integrates concepts from quantum gravity, noncommutative geometry, spectral theory, and post-supersymmetry (post-SUSY) particle physics to address SVP. By mapping high-dimensional lattice points to spinfoam networks and by means of Hamiltonian engineering, it is theoretically possible to devise new algorithms that leverage the interactions topologically protected Majorana fermion particles have with the gravitational field through the spectral action principle to loop through these spinfoam networks where SVP vectors could then be encoded onto the spectrum of the corresponding Dirac-like dilation operators within the system. We establish a novel approach that leverages post-SUSY physics and theories of quantum gravity to achieve algorithmic speedups beyond those expected by conventional quantum computers. This interdisciplinary methodology not only proposes potential polynomial-time algorithms for SVP, but also bridges gaps between theoretical physics and cryptographic applications, providing further insights into the Riemann Hypothesis (RH) and the Hilbert-Pólya Conjecture. Possible directions for experimental realization through biologically inspired hardware or biological tissues by orchestrated objective reduction (Orch-Or) theory are discussed.
Private Computation on Common Fuzzy Records
Private computation on common records refers to analyzing data from two databases containing shared records without revealing personal information. As a basic requirement for private computation, the databases involved essentially need to be aligned by a common identification system. However, such common identifiers can hardly be expected in real-world scenarios. For this reason, multiple quasi-identifiers can be used to identify common records. As some quasi-identifiers might be missing or contain typos, it is important to support the fuzzy-record setting. Identifying common records using quasi-identifiers requires the manipulation of highly sensitive information, which raises privacy concerns.
This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and some multiparty computation (MPC) techniques. Furthermore, we introduce generic encoding techniques that translate traditional matching rules into OTO matching. Finally, we obtain a secure and efficient private computation protocol that supports various matching rules that are already widely used.
We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching does not significantly affect accuracy on the benchmark datasets found in the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance, at the cost of communication overhead, compared to previous privacy-preserving record linkage (PPRL) protocols. In the case of 100K records for each dataset, our work shows 147.58MB communication cost, 10.71s setup time, and 1.97s online time, which is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
Batch Inference on Deep Convolutional Neural Networks With Fully Homomorphic Encryption Using Channel-By-Channel Convolutions
Secure Machine Learning as a Service (MLaaS) is a viable solution where clients seek secure ML computation delegation while protecting sensitive data. We propose an efficient method to securely evaluate deep standard convolutional neural networks based on residue number system variant of Cheon-Kim-Kim-Song (RNS-CKKS) scheme in the manner of batch inference. In particular, we introduce a packing method called Channel-By-Channel Packing that maximizes the slot compactness and Single-Instruction-Multiple-Data (SIMD) capabilities in ciphertexts. We also propose a new method for homomorphic convolution evaluation called Channel-By-Channel Convolution, which minimizes the additional heavy operations during convolution layers.
Simulation results show that our work has improvements in amortized runtime for inference, with a factor of and for ResNet-20 and ResNet-110, respectively, compared to the previous results. We note that our results almost simulate the original backbone models, with classification accuracy differing from the backbone within %p. Furthermore, we show that the client's rotation key size generated and transmitted can be reduced from GB to GB for ResNet models during an MLaaS scenario. Finally, we show that our method can be combined with previous methods, providing flexibility for selecting batch sizes for inference.
Relaxed Vector Commitment for Shorter Signatures
MPC-in-the-Head (MPCitH) has recently gained traction as a foundation for post-quantum signature schemes, offering robust security without trapdoors. Despite its strong security profile, MPCitH-based schemes suffer from high computational overhead and large signature sizes, limiting their practical application.
This work addresses these inefficiencies by relaxing vector commitments within MPCitH-based schemes. We introduce the concept of vector semi-commitment, which relaxes the binding property of traditional vector commitment. Vector semi-commitment schemes may allow an adversary to find more than one preimage of a commitment. We instantiate vector semi-commitment schemes in both the random oracle model and the ideal cipher model, leveraging recent optimizations of the GGM tree, such as the correlated GGM tree.
We apply the ideal-cipher-based vector semi-commitment scheme to the BN++ signature scheme and prove it almost fully secure in the ideal cipher model. Implementing these improvements in the AIMer v2.0 signature scheme, we achieve up to 18% shorter signatures and up to 112% faster signing and verification speeds, setting new benchmarks for MPCitH-based schemes.
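For orientation, the (unoptimized) GGM seed tree underlying such vector commitments expands one root seed into many leaf seeds; a minimal hash-based sketch (illustrative, not the correlated variant used above):

    # Minimal GGM-style seed tree (illustrative). Each seed expands into two
    # child seeds; revealing a few internal nodes opens all but selected leaves.
    import hashlib

    def expand(seed: bytes) -> tuple:
        left = hashlib.sha256(seed + b"\x00").digest()
        right = hashlib.sha256(seed + b"\x01").digest()
        return left, right

    def leaf_seeds(root: bytes, depth: int) -> list:
        level = [root]
        for _ in range(depth):
            level = [child for seed in level for child in expand(seed)]
        return level

    print(len(leaf_seeds(b"\x00" * 32, depth=4)))  # 16 leaf seeds from one root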
A Note on Obfuscation-based Attacks on Private-coin Evasive LWE
The evasive learning with errors (evasive LWE) assumption is a new assumption recently introduced by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) independently, as a significant strengthening of the standard LWE assumption.
While the assumption is known to imply various strong primitives including witness encryption [Wee22, Tsabary22], the assumption in its most general case (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22].
In this note, we revisit their attack and, via a concrete counterexample, show that the attack actually does not work. We then show that their attack can be made valid with some modifications. Along the way, we also improve the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas their original counterexample depended on the heuristic assumption that an ideal obfuscation exists.
Non-Interactive Verifiable Aggregation
Consider a weak analyst that wishes to outsource data collection and the computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA).
We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains private to both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data.
We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically-relevant validity rules. Finally, we implement PEAR to show feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. SamaritanPCS is a drop-in replacement for the popular PST scheme, and improves upon PST in all relevant parameters.
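At a folklore level (not the paper's specific construction), such univariate-to-multilinear transformations rest on identifying a multilinear polynomial with the univariate polynomial that shares its coefficient vector:
\[ f(X_1,\dots,X_\mu) \;=\; \sum_{i=0}^{2^\mu - 1} c_i \prod_{j\,:\, i_j = 1} X_j \qquad\longleftrightarrow\qquad \hat f(X) \;=\; \sum_{i=0}^{2^\mu - 1} c_i X^{i}, \]
where i_j denotes the j-th bit of i; a commitment to \hat f under a univariate scheme such as KZG then serves as a commitment to f, and a multilinear evaluation claim is reduced to a small number of univariate openings of \hat f.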
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among other known linear-time prover SNARKs without relying on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over BLS12-381 curve) has a proof size of 6.7KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves a 3x smaller argument size at 1 million constraints. We match the argument size of HyperPlonk, which is the smallest linear-time SNARK for the Plonkish constraint system, while achieving slightly better verification complexity.
We believe that our transformation and our techniques for applying lookups based on logarithmic derivatives to the multilinear setting are of wider interest.
Recover from Excessive Faults in Partially-Synchronous BFT SMR
Byzantine fault-tolerant (BFT) state machine replication (SMR) protocols form the basis of modern blockchains as they maintain a consistent state across all blockchain nodes while tolerating a bounded number of Byzantine faults. We analyze BFT SMR in the excessive fault setting where the actual number of Byzantine faults surpasses a protocol's tolerance.
We start by devising the very first repair algorithm for linearly chained and quorum-based partially synchronous SMR to recover from faulty states caused by excessive faults. Such a procedure can be realized using any commission fault detection module -- an algorithm that identifies the faulty replicas without falsely locating any correct replica. We achieve this with a slightly weaker liveness guarantee, as the original security notion is impossible to satisfy given excessive faults.
We implement recoverable HotStuff in Rust. The throughput returns to the normal level (without excessive faults) after the recovery routines terminate for replicas and is slightly reduced by for replicas. On average, recovery increases the latency by for replicas and by for replicas.
Aside from adopting existing detection modules, we also establish the sufficient condition for a general BFT SMR protocol to allow for complete and sound fault detection when up to Byzantine replicas (out of total replicas) attack safety. We start by providing the first closed-box fault detection algorithm for any SMR protocol without any extra rounds of communication. We then describe open-box instantiations of our fault detection routines in Tendermint and Hotstuff, further reducing the overhead, both asymptotically and concretely.
ProofFrog: A Tool For Verifying Game-Hopping Proofs
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented.
Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) in model training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the associated public key is used to encrypt the input data and the private key decrypts the output. However, there is often a performance hit when using homomorphic encryption, so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments include a range of variables such as multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters affecting performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
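As a rough plaintext illustration (our own scikit-learn sketch, shown in the clear; the paper performs the corresponding arithmetic on OpenFHE ciphertexts), inference for a linear SVM reduces to an inner product plus a bias, which is the kind of computation that maps naturally onto homomorphic operations:

    # Plaintext sketch of SVM-Linear inference: the decision value w.x + b is
    # the computation that would be carried out on encrypted inputs.
    from sklearn import datasets
    from sklearn.svm import LinearSVC

    X, y = datasets.load_breast_cancer(return_X_y=True)
    model = LinearSVC(max_iter=10_000).fit(X, y)

    w, b = model.coef_[0], model.intercept_[0]
    x = X[0]
    score = float(w @ x + b)       # a single dot product and addition
    print(score > 0, model.predict([x])[0])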
Trapdoor Hash Functions and PIR from Low-Noise LPN
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function , a hash on together with a (small) input encoding allows one to recover . TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate for , i.e., in the so-called low-noise regime. The construction achieves compression factor. As an application, we obtain a private-information retrieval (PIR) with communication complexity , for a database of size L. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than ) from any code-based assumption.
On the Soundness of Algebraic Attacks against Code-based Assumptions
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code.
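Writing the last condition symbolically (symbols ours): with h error blocks, each of size b, and a code of dimension k, the regime in which RSD is broken in polynomial time is
\[ h \cdot b^{2} \;>\; 2\,k^{2}. \]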
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
An Undetectable Watermark for Generative Image Models
We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which guarantees undetectability and robustness. We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.
Is ML-Based Cryptanalysis Inherently Limited? Simulating Cryptographic Adversaries via Gradient-Based Methods
Given the recent progress in machine learning (ML), the cryptography community has started exploring the applicability of ML methods to the design of new cryptanalytic approaches. While current empirical results show promise, the extent to which such methods may outperform classical cryptanalytic approaches is still somewhat unclear.
In this work, we initiate exploration of the theory of ML-based cryptanalytic techniques, in particular providing new results towards understanding whether they are fundamentally limited compared to traditional approaches. Whereas most classic cryptanalysis crucially relies on directly processing individual samples (e.g., plaintext-ciphertext pairs), modern ML methods thus far only interact with samples via gradient-based computations that average a loss function over all samples. It is, therefore, conceivable that such gradient-based methods are inherently weaker than classical approaches.
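Concretely (notation ours), for samples s_1, ..., s_m a sample-based adversary may read each s_i directly, whereas a gradient-based adversary only obtains answers of the form
\[ g(\theta) \;=\; \frac{1}{m} \sum_{i=1}^{m} \nabla_{\theta}\, \ell(\theta; s_i) \]
for loss functions \ell and parameter vectors \theta of its choosing.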
We introduce a unifying framework for capturing both "sample-based" adversaries that are provided with direct access to individual samples and "gradient-based" ones that are restricted to issuing gradient-based queries that are averaged over all given samples via a loss function. Within our framework, we establish a general feasibility result showing that any sample-based adversary can be simulated by a seemingly-weaker gradient-based one. Moreover, the simulation exhibits a nearly optimal overhead in terms of the gradient-based simulator's running time. Finally, we extend and refine our simulation technique to construct a gradient-based simulator that is fully parallelizable (crucial for avoiding an undesirable overhead for parallelizable cryptanalytic tasks), which is then used to construct a gradient-based simulator that executes the particular and highly useful gradient-descent method.
Taken together, although the extent to which ML methods may outperform classical cryptanalytic approaches is still somewhat unclear, our results indicate that such gradient-based methods are not inherently limited by their seemingly restricted access to the provided samples.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy and strong diffusion while maintaining efficiency.
It employs advanced cryptographic transformations to ensure robust security against modern cryptanalysis techniques.
Entropy tests demonstrate its ability to generate highly randomized ciphertext, surpassing industry standards.
Avalanche effect analysis confirms optimal diffusion, achieving an average bit change of 50.18% in large datasets.
Key sensitivity tests reveal a 50.54% ciphertext difference for minimal key variations, ensuring strong resistance to differential cryptanalysis.
With fast encryption and decryption speeds, Deimos Cipher offers a balanced approach between security and performance, making it suitable for secure communication and data protection.
This paper presents the algorithm's design, security analysis, and benchmarking against established cryptographic standards.
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., the time complexity of garbled AES is 0.0226ms) in order to guarantee the security and privacy of the computation on chiplets.
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and secure multi-party computation protocols is a perfect example of the dramatic rise in the deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties, while shielding from stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or -party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities. We allow an adversary to corrupt any out of local authorities as long as = poly . Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is . This proves optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get best of both worlds (fully collusion resistant MA-ABE, and bounded collusion resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE for from standard pairing-based assumptions, and for from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining with known results, we also get MA-ABFE for from evasive LWE.
Universally Composable SNARKs with Transparent Setup without Programmable Random Oracle
Non-interactive zero-knowledge (NIZK) proofs enable a prover to convince a verifier of an NP statement’s validity using a single message, without disclosing any additional information. These proofs are widely studied and deployed, especially in their succinct form, where proof length is sublinear in the size of the NP relation. However, efficient succinct NIZKs typically require an idealized setup, such as a common reference string, which complicates real-world deployment. A key challenge is developing NIZKs with simpler, more transparent setups.
A promising approach is the random-oracle (RO) methodology, which idealizes hash functions as public random functions. It is commonly believed that UC NIZKs cannot be realized using a non-programmable global RO—the simplest incarnation of the RO as a form of setup—since existing techniques depend on the ability to program the oracle.
We challenge this belief and present a methodology to build UC-secure NIZKs based solely on a global, non-programmable RO. By applying our framework we are able to construct a NIZK that achieves witness-succinct proofs of logarithmic size, breaking both the programmability barrier and polylogarithmic proof size limitations for UC-secure NIZKs with transparent setups. We further observe that among existing global RO formalizations put forth by Camenisch et al. (Eurocrypt 2018), our choice of setup is necessary to achieve this result.
From the technical standpoint, our contributions span both modeling and construction. We leverage the shielded (super-poly) oracle model introduced by Broadnax et al. (Eurocrypt 2017) to define a UC NIZK functionality that can serve as a drop-in replacement for its standard variant—it preserves the usual soundness and zero-knowledge properties while ensuring its compositional guarantees remain intact. To instantiate this functionality under a non-programmable RO setup, we follow the framework of Ganesh et al. (Eurocrypt 2023) and provide new building blocks for it, around which are some of our core technical contributions: a novel polynomial encoding technique and the leakage analysis of its companion polynomial commitment, based on Bulletproofs-style folding. We also provide a second construction, based on a recent work by Chiesa and Fenzi (TCC 2024), and show that it achieves a slightly weaker version of the NIZK functionality.
GPU Implementations of Three Different Key-Switching Methods for Homomorphic Encryption Schemes
In this work, we report on the latest GPU implementations of the three well-known methods for the key switching operation, which is critical for Fully Homomorphic Encryption (FHE). Additionally, for the first time in the literature, we provide implementations of all three methods in GPU for leveled CKKS schemes. To ensure a fair comparison, we employ the most recent GPU implementation of the number-theoretic transform (NTT), which is the most time-consuming operation in key switching, and evaluate the performance across two fully homomorphic schemes: BFV and CKKS. Furthermore, we highlight the advantages and shortcomings of the three methods in the context of leveled HE schemes, and discuss other aspects such as memory requirements. Our GPU implementation is integrated with HEonGPU Library and delivers up to a ×380 improvement in execution time compared to the Microsoft SEAL Library. Since key switching is a specialized form of the external product common in many HE schemes, our results are directly relevant to time-intensive homomorphic operations such as relinearization and rotation. As homomorphic rotation is one of the most dominant operations in bootstrapping, our results are also applicable in bootstrapping algorithms of BFV, BGV and CKKS schemes.
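Since the abstract singles out the number-theoretic transform (NTT) as the dominant cost of key switching, the following minimal Python sketch shows a naive NTT and its inverse over toy parameters (n=8, q=17); it only conveys the underlying algebra, not the RNS-based GPU kernels used in HEonGPU, and the parameter choices are our own.

# --- illustrative sketch (Python): a naive number-theoretic transform (NTT) ---
def ntt(a, omega, q):
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q for i in range(n)]

def intt(A, omega, q):
    n = len(A)
    n_inv = pow(n, -1, q)
    omega_inv = pow(omega, -1, q)
    return [n_inv * sum(A[j] * pow(omega_inv, i * j, q) for j in range(n)) % q for i in range(n)]

if __name__ == "__main__":
    q, omega = 17, 2                 # 2 has multiplicative order 8 modulo 17
    a = [3, 1, 4, 1, 5, 9, 2, 6]
    A = ntt(a, omega, q)
    assert intt(A, omega, q) == a
    # Pointwise multiplication in the NTT domain corresponds to cyclic convolution,
    # which is why NTTs dominate the cost of key switching and relinearization.
    print("NTT(a) =", A)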
Simple and General Counterexamples for Private-Coin Evasive LWE
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research of Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, Ascon achieves security up to approximately evaluations, where is the capacity, the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
Low Communication Threshold FHE from Standard (Module-)LWE
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from “yet another” learning with errors assumption (LWE), in which the norm of the secret key is leaked to the adversary. While “yet another” LWE is reduced from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, it is left as an open question to extend their scheme to the MLWE-based construction.
In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary T-out-of-N threshold decryption via simple Shamir secret sharing instead of {0, 1}-linear secret sharing. Furthermore, the sizes of keys, ciphertexts and decryption shares in our scheme are constant w.r.t. the number of parties N; that is, we achieve compactness w.r.t. N.
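To illustrate the T-out-of-N threshold structure via Shamir secret sharing mentioned above, here is a minimal Python sketch of plain Shamir sharing and Lagrange reconstruction over a toy prime field; it shows only the sharing mechanism, not the ThFHE scheme or its noise-padding technique.

# --- illustrative sketch (Python): T-out-of-N Shamir secret sharing ---
import random

P = 2**127 - 1   # a prime large enough for this toy

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = share(secret=123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789      # any 3 of 5 shares suffice
    assert reconstruct(shares[2:]) == 123456789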
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy which mitigate the effects of user corruption over time.
Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of respectively all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found.
We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, that implements a quarantine mechanism for the inactive users in order to mitigate the risk induced by these users during their inactivity period and before they are removed from the group. That mechanism indeed updates the inactive users’ encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of original TreeKEM, with a very limited – and sometimes negative – communication overhead.
Radical 2-isogenies and cryptographic hash functions in dimensions 1, 2 and 3
We provide explicit descriptions for radical 2-isogenies in dimensions one, two and three using theta coordinates. These formulas allow us to efficiently navigate in the corresponding isogeny graphs. As an application of this, we implement different versions of the CGL hash function. Notably, the three-dimensional version is fastest, which demonstrates yet another potential of using higher dimensional isogeny graphs in cryptography.
On the Effects of Neural Network-based Output Prediction Attacks on the Design of Symmetric-key Ciphers
Proving resistance to conventional attacks, e.g., differential, linear, and integral attacks, is essential for designing a secure symmetric-key cipher. Recent advances in automatic search and deep learning-based methods have made this time-consuming task relatively easy, yet concerns persist over expertise requirements and potential oversights. To overcome these concerns, Kimura et al. proposed neural network-based output prediction (NN) attacks, offering simplicity, generality, and reduced coding mistakes. NN attacks could be helpful for designing secure symmetric-key ciphers, especially the S-box-based block ciphers. Inspired by their work, we first apply NN attacks to Simon, one of the AND-Rotation-XOR-based block ciphers, and identify structures susceptible to NN attacks and the vulnerabilities detected thereby. Next, we take a closer look at the vulnerable structures. The most vulnerable structure has the lowest diffusion property compared to others. This fact implies that NN attacks may detect such a property. We then focus on a biased event of the core function in vulnerable Simon-like ciphers and build effective linear approximations caused by such an event. Finally, we use these linear approximations to reveal that the vulnerable structures are more susceptible to a linear key recovery attack than the original one. We conclude that our analysis can be a solid step toward making NN attacks a helpful tool for designing a secure symmetric-key cipher.
Attribute-Based Threshold Issuance Anonymous Counting Tokens and Its Application to Sybil-Resistant Self-Sovereign Identity
Self-sovereign identity (SSI) systems empower users to (anonymously) establish and verify their identity when accessing both digital and real-world resources, emerging as a promising privacy-preserving solution for user-centric identity management. Recent work by Maram et al. proposes the privacy-preserving Sybil-resistant decentralized SSI system CanDID (IEEE S&P 2021). While this is an important step, notable shortcomings undermine its efficacy. The two most significant among them are the following: First, unlinkability breaks in the presence of a single malicious issuer. Second, it introduces interactiveness, as the users are required to communicate each time with issuers to collect credentials intended for use in interactions with applications. This contradicts the goal of SSI, whose aim is to give users full control over their identities. This paper first introduces the concept of publicly verifiable attribute-based threshold anonymous counting tokens (tACT). Unlike recent approaches confined to centralized settings (Benhamouda et al., ASIACRYPT 2023), tACT operates in a distributed-trust environment. Accompanied by a formal security model and a provably secure instantiation, tACT introduces a novel dimension to token issuance, which, we believe, holds independent interest. Next, the paper leverages the proposed tACT scheme to construct an efficient Sybil-resistant SSI system. This system supports various functionalities, including threshold issuance, unlinkable multi-show selective disclosure, and non-interactive, non-transferable credentials of constant size. Finally, our benchmark results show an efficiency improvement in our construction when compared to CanDID, all while accommodating a greater number of issuers and additionally reducing to a one-round protocol that can be run in parallel with all issuers.
Constructing Quantum Implementations with the Minimal T-depth or Minimal Width and Their Applications
With the rapid development of quantum computers, optimizing the quantum implementations of symmetric-key ciphers, which constitute the primary components of the quantum oracles used in quantum attacks based on Grover and Simon's algorithms, has become an active topic in the cryptography community. In this field, a challenge is to construct quantum circuits that require the least amount of quantum resources. In this work, we aim to address the problem of constructing quantum circuits with the minimal T-depth or width (number of qubits) for nonlinear components, thereby enabling implementations of symmetric-key ciphers with the minimal T-depth or width. Specifically, we propose several general methods for obtaining quantum implementation of generic vectorial Boolean functions and multiplicative inversions in GF(2^n), achieving the minimal T-depth and low costs across other metrics. As an application, we present a highly compact T-depth-3 Clifford+T circuit for the AES S-box. Compared to the T-depth-3 circuits presented in previous works (ASIACRYPT 2022, IEEE TC 2024), our circuit has significant reductions in T-count, full depth and Clifford gate count. Compared to the state-of-the-art T-depth-4 circuits, our circuit not only achieves the minimal T-depth but also exhibits reduced full depth and closely comparable width. This leads to lower costs for the DW-cost and T-DW-cost. Additionally, we propose two methods for constructing minimal-width implementations of vectorial Boolean functions. As applications, for the first time, we present a 9-qubit Clifford+T circuit for the AES S-box, a 16-qubit Clifford+T circuit for a pair of AES S-boxes, and a 5-qubit Clifford+T circuit for the chi function of SHA3. These circuits can be used to derive quantum circuits that implement AES or SHA3 without ancilla qubits.
zkMarket: Privacy-preserving Digital Data Trade System via Blockchain
Ensuring fairness in blockchain-based data trading presents significant challenges, as the transparency of blockchain can expose sensitive details and compromise fairness. Fairness ensures that the seller receives payment only if they provide the correct data, and the buyer gains access to the data only after making the payment. Existing approaches face efficiency limitations, particularly when applied to large-scale data. Moreover, preserving privacy on the blockchain has also been a significant challenge.
In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain.
We make the data registration process more concise and improve the seller's proving time by leveraging our novel pseudorandom generator, the matrix-formed PRG (MatPRG), and existing commit-and-prove SNARKs (CP-SNARKs).
To ensure transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), both the seller and the buyer can trade fairly.
Experimental results demonstrate that zkMarket significantly reduces the computational overhead associated with traditional blockchain solutions while maintaining robust security and privacy. The seller can register 1MB of data in 2.8 seconds, the buyer can generate the trade transaction in 0.2 seconds, and the seller can finalize the trade within 0.4 seconds.
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation ( ) where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once.
We construct based on LWR, LWE, class groups, DCR, and demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on MNIST, CIFAR-10, and CIFAR-100 datasets.
We build two flavors of (1) from (threshold) key homomorphic PRF and (2) from seed homomorphic PRG and secret sharing.
The threshold key-homomorphic PRF addresses shortcomings observed in previous works that relied on the DDH- and LWR-based constructions of Boneh et al. (CRYPTO, 2013), making it a contribution of independent interest. Moreover, we also present new threshold key-homomorphic PRFs based on class groups, DCR, or the LWR assumption.
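The following toy Python sketch illustrates the key-homomorphic masking idea behind one-shot aggregation: with F(k, x) = k·H(x) mod p (a key-homomorphic PRF) and client keys that sum to zero, the masks cancel when the server adds up the single messages. The hash-based F and all parameters are our own simplifications, not the paper's constructions.

# --- illustrative sketch (Python): one-shot masking from a key-homomorphic PRF ---
# F(k1 + k2, x) = F(k1, x) + F(k2, x) mod P, so keys summing to zero give cancelling masks.
import hashlib, random

P = 2**61 - 1   # toy prime modulus

def H(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % P

def F(k: int, x: bytes) -> int:
    return (k * H(x)) % P

def setup(num_clients):
    keys = [random.randrange(P) for _ in range(num_clients - 1)]
    keys.append((-sum(keys)) % P)          # keys sum to 0 mod P
    return keys

if __name__ == "__main__":
    inputs = [5, 17, 42]
    keys = setup(len(inputs))
    tag = b"round-1"
    messages = [(x + F(k, tag)) % P for x, k in zip(inputs, keys)]   # each client speaks once
    print("aggregate:", sum(messages) % P)   # 64: the masks cancel because the keys sum to 0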
Lattice-based Σ-Protocols for Polynomial Relations with Standard Soundness
We propose new techniques for enhancing the efficiency of Σ-protocols in lattice settings.
One major challenge in lattice-based Σ-protocols is restricting the norm of the extracted witness in soundness proofs.
Most existing solutions either repeat the protocol several times or opt for a relaxed version of the original relation.
Recently, Boneh and Chen have proposed an innovative solution called LatticeFold,
which utilizes a sum-check protocol to enforce the norm bound on the witness.
In this paper, we elevate this idea to efficiently proving multiple polynomial relations without relaxation.
Simply incorporating the techniques from LatticeFold into Σ-protocols leads to inefficient results;
therefore, we introduce several new techniques to ensure efficiency.
First, to enable the amortization in [AC20] for multiple polynomial relations,
we propose a general linearization technique to reduce polynomial relations to homomorphic ones.
Furthermore, we generalize the folding protocol in LatticeFold, enabling us to efficiently perform folding and other complex operations multiple times without the need to repeatedly execute sum-checks. Moreover, we achieve zero-knowledge by designing hiding claims and elevating the zero-knowledge sum-check protocol [XZZ+19] on rings.
Our protocol achieves standard soundness, thereby enabling the efficient integration of the compressed Σ-protocol theory [AC20, ACF21] in lattice settings.
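For context on the sum-check building block mentioned above, the following Python sketch runs the textbook sum-check protocol for a multilinear polynomial over a toy prime field; the norm-bound enforcement, ring setting, and zero-knowledge layers of the actual lattice protocol are not modeled here, and the field and dimensions are illustrative choices.

# --- illustrative sketch (Python): the sum-check protocol for a multilinear polynomial ---
import random

P = 2**61 - 1

def mle_eval(evals, point):
    # Evaluate the multilinear extension of `evals` (indexed by {0,1}^n) at `point`.
    acc, n = 0, len(point)
    for idx, v in enumerate(evals):
        term = v
        for i in range(n):
            bit = (idx >> (n - 1 - i)) & 1
            term = term * (point[i] if bit else (1 - point[i])) % P
        acc = (acc + term) % P
    return acc

def sumcheck(evals, n):
    claim = sum(evals) % P
    fixed, table = [], list(evals)
    for _ in range(n):
        half = len(table) // 2
        g0 = sum(table[:half]) % P              # g(0): current variable fixed to 0
        g1 = sum(table[half:]) % P              # g(1): current variable fixed to 1
        assert (g0 + g1) % P == claim           # verifier's consistency check
        r = random.randrange(P)                 # verifier's random challenge
        table = [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]
        claim = ((1 - r) * g0 + r * g1) % P     # g is linear in the free variable
        fixed.append(r)
    assert claim == mle_eval(evals, fixed)      # final oracle check
    return True

if __name__ == "__main__":
    n = 4
    evals = [random.randrange(P) for _ in range(2**n)]
    print("sum-check accepted:", sumcheck(evals, n))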
Hybrid Obfuscated Key Exchange and KEMs
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings.
For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation.
In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
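As a rough illustration of the standard hybridization pattern the paper starts from, the following Python sketch derives a hybrid shared secret by hashing both components' shared secrets and ciphertexts. The paper's OKEM combiner must additionally preserve public-key and ciphertext obfuscation, which this generic sketch does not address; the KDF choice and lengths are our own assumptions.

# --- illustrative sketch (Python): a hash-based KEM combiner ---
import hashlib, os

def combine(ss1: bytes, ct1: bytes, ss2: bytes, ct2: bytes) -> bytes:
    # Bind both shared secrets and both ciphertexts with an unambiguous encoding.
    transcript = len(ct1).to_bytes(4, "big") + ct1 + ct2
    return hashlib.sha3_256(ss1 + ss2 + transcript).digest()

if __name__ == "__main__":
    # Stand-ins for (shared secret, ciphertext) pairs from two component KEMs,
    # e.g. an obfuscated DH-based KEM and an ML-KEM-style scheme.
    ss_dh, ct_dh = os.urandom(32), os.urandom(32)
    ss_pq, ct_pq = os.urandom(32), os.urandom(1088)
    print(combine(ss_dh, ct_dh, ss_pq, ct_pq).hex())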
Delegatable ABE with Full Security from Witness Encryption
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. In this work, we design a fully-secure DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to polynomial hardness of underlying assumptions. To the best of our knowledge, this is the first DABE construction (beyond hierarchical identity-based encryption) that achieves full security without relying on complexity leveraging. Our DABE supports an unbounded number of key delegations, and the secret key size grows just linearly with each key delegation operation.
A Systematic Study of Sparse LWE
In this work, we introduce the sparse LWE assumption, an assumption that draws inspiration from both Learning with Errors (Regev JACM 10) and Sparse Learning Parity with Noise (Alekhnovich FOCS 02). Exactly like LWE, this assumption posits indistinguishability of from for a random where the secret , and the error vector is generated exactly as in LWE. However, the coefficient matrix in sparse LPN is chosen randomly from so that each column has Hamming weight exactly for some small . We study the problem in the regime where is a constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency. Compared to LWE, the samples can be computed and stored with roughly factor improvement in efficiency. Our results can be summarized as:
Foundations:
We show several properties of sparse LWE samples, including: 1) The hardness of LWE/LPN with dimension implies the hardness of sparse LWE/LPN with sparsity and arbitrary dimension . 2) When the number of samples , length of the shortest vector of a lattice spanned by rows of a random sparse matrix is large, close to that of a random dense matrix of the same dimension (up to a small constant factor). 3) Trapdoors with small polynomial norm exist for random sparse matrices with dimension . 4) Efficient algorithms for sampling such matrices together with trapdoors exist when the dimension is .
Cryptanalysis:
We examine the suite of algorithms that have been used to break LWE and sparse LPN. While naively many of the attacks that apply to LWE do not exploit sparsity, we consider natural extensions that make use of sparsity. We propose a model to capture all these attacks. Using this model, we suggest heuristics for identifying concrete parameters. Our initial cryptanalysis suggests that, concretely, sparse LWE with a modest and slightly bigger dimension than LWE will satisfy a similar level of security as LWE with similar parameters.
Applications:
We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low degree computations. We obtain the first secret key Linearly Homomorphic Encryption (LHE) schemes with slightly super-constant, or even constant, overhead, which further has applications to private information retrieval, private set intersection, etc. We also obtain secret key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead.
We stress that our results are preliminary. However, our results make a strong case for further investigation of sparse LWE.
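To make the sample format concrete, the following numpy sketch generates toy sparse LWE samples in which every column of the coefficient matrix has Hamming weight exactly k, following the description above; the parameters and the error distribution are illustrative assumptions, not recommended choices.

# --- illustrative sketch (Python): generating sparse LWE samples ---
import numpy as np

def sparse_matrix(m, n, k, q, rng):
    A = np.zeros((m, n), dtype=np.int64)
    for col in range(n):
        rows = rng.choice(m, size=k, replace=False)          # exactly k nonzeros per column
        A[rows, col] = rng.integers(1, q, size=k)
    return A

def sparse_lwe_samples(m, n, k, q, noise_bound, rng):
    A = sparse_matrix(m, n, k, q, rng)
    s = rng.integers(0, q, size=n)
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)  # small error, toy distribution
    b = (A @ s + e) % q
    return A, b, s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, s = sparse_lwe_samples(m=64, n=32, k=3, q=3329, noise_bound=2, rng=rng)
    print("nonzeros per column:", np.count_nonzero(A, axis=0)[:8])   # all equal to k=3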
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
An important part of proving machine computation is proving the correctness of the read and write operations performed on the memory, which we term memory-proving. Previous methodologies required proving Merkle Tree openings or multi-set hashes, resulting in relatively large proof circuits. We construct an efficient memory-proving Incrementally Verifiable Computation (IVC) scheme from accumulation, which is particularly useful for machine computations with large memories and deterministic steps. In our scheme, the IVC prover PIVC has a cost entirely independent of the memory size T and only needs to commit to approximately 15 field elements per read/write operation, marking a more than 100X improvement over prior work. We further reduce this cost by employing a modified, accumulation-friendly version of the GKR protocol. In the optimized version, PIVC only needs to commit to 6 small memory-table elements per read/write. If the table stores 32-bit values, then this is equivalent to committing to less than one single field element per read and write. Our modified GKR protocol is also valuable for proving other deterministic computations within the context of IVC. Our memory-proving protocol can be extended to support key-value stores.
AsyRand: fast asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, such as Hydrand (S&P '20) and SPURT (S&P '22). However, two key challenges for practical deployment remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the AsyRand beacon protocol to address these challenges. In brief, AsyRand leverages Bracha Reliable Broadcast (BRB) or BRB-like protocols for message dissemination and incorporates a producer-consumer model to decouple the production and consumption of PVSS commitments. In the producer-consumer model, PVSS commitments are produced and consumed using a queue data structure. Specifically, the producer process is responsible for generating new PVSS commitments and reaching consensus on them within the queue, while the consumer process continuously consumes the commitments to recover PVSS secrets and generate new beacon values. This separation allows the producer and consumer processes to operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue length. If necessary, parties in AsyRand can initiate a removal process for faulty parties. BRB is also employed to facilitate the addition of new parties without requiring a system restart. In summary, AsyRand supports reconfiguration, enhancing both the protocol's usability and reliability. Additionally, we propose a novel PVSS scheme based on the protocol, which is of independent interest. Regarding complexity, AsyRand achieves state-of-the-art performance with communication complexity, computation complexity, and verification complexity.
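The producer-consumer decoupling described above can be pictured with a plain threading toy in Python: one thread appends stand-in PVSS commitments to a shared queue while another drains it to derive beacon values. Networking, BRB, consensus, and the actual PVSS scheme are all omitted; the sketch only conveys the queue-based architecture.

# --- illustrative sketch (Python): the producer-consumer decoupling ---
import queue, threading, hashlib, time

commitments = queue.Queue()          # the shared queue both processes operate on

def producer(n_rounds):
    for i in range(n_rounds):
        commitments.put(f"pvss-commitment-{i}".encode())   # stand-in for an agreed commitment
        time.sleep(0.01)

def consumer(n_rounds):
    prev = b"genesis"
    for _ in range(n_rounds):
        c = commitments.get()                         # blocks until a commitment is available
        prev = hashlib.sha256(prev + c).digest()      # stand-in for recovering a secret
        print("beacon value:", prev.hex()[:16])

if __name__ == "__main__":
    t1 = threading.Thread(target=producer, args=(5,))
    t2 = threading.Thread(target=consumer, args=(5,))
    t1.start(); t2.start(); t1.join(); t2.join()
    # Monitoring commitments.qsize() is how a party could detect a stalled producer.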
Withdrawable signatures in Fiat-Shamir with aborts constructions
This article presents an extension of the work performed by Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction, and provide security proofs for this proposal. As an instantiation, we provide a concrete construction for a withdrawable signature scheme based on Dilithium.
Constant time lattice reduction in dimension 4 with application to SQIsign
In this paper we propose a constant time lattice reduction algorithm for integral dimension-4 lattices. Motivated by its application in the SQIsign post-quantum signature scheme, we provide for the first time a constant time LLL-like algorithm with guarantees on the length of the shortest output vector. We implemented our algorithm and ensured through various tools that it indeed operates in constant time. Our experiments suggest that in practice our implementation outputs a Minkowski reduced basis and thus can replace a non constant time lattice reduction subroutine in SQIsign.
SNARKs for Stateful Computations on Authenticated Data
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our SNARKs are specifically suited to applications in cyber-physical control systems, where computations are periodically carried out and need to be checked immediately. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. Additionally, we present new optimizations to reduce proof size, prover time, and verification time in our setting. With our construction, prover runtime improves significantly over the baseline by a factor of 89. Verification time is 70% less for computations on authenticated data and 33% less for stateful computations. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with a (simple) quadcopter flight control system.
Really Complex Codes with Application to STARKs
Reed-Solomon (RS) codes [RS60], representing evaluations of univariate polynomials over distinct domains, are foundational in error correction and cryptographic protocols. Traditional RS codes leverage the Fourier domain for efficient encoding and decoding via Fast Fourier Transforms (FFT). However, in fields such as the Reals and some finite prime fields, limited root-of-unity orders restrict these methods. Recent research, particularly in the context of modern STARKs [BSBHR18b], has explored the Complex Fourier domain for constructing Real-valued RS codes, introducing novel transforms such as the Discrete Circle-Curve Transform (DCCT) for Real-to-Real transformations [HLP24]. Despite their efficiency, these transforms face limitations with high-radix techniques and potentially other legacy optimizations. In this paper, we propose a construction of Real-valued RS codes utilizing the Discrete Fourier Transform (DFT) over the Complex domain. This approach leverages well-established algebraic and Fourier properties to achieve efficiency comparable to DCCT, while retaining compatibility with legacy techniques and optimizations.
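The following numpy sketch conveys the basic idea of Reed-Solomon-style encoding in the complex Fourier domain: a real-coefficient message polynomial is evaluated at roots of unity with the FFT and recovered from any k evaluations by solving a Vandermonde system. Precision, high-radix structure, and the paper's actual construction are not addressed; all parameters are illustrative.

# --- illustrative sketch (Python): RS-style encoding over the complex DFT domain ---
import numpy as np

def encode(message, n):
    coeffs = np.zeros(n)
    coeffs[:len(message)] = message
    return np.fft.fft(coeffs)                     # codeword = evaluations at n-th roots of unity

def recover(positions, values, k, n):
    omega = np.exp(-2j * np.pi / n)               # numpy's FFT convention uses exp(-2*pi*i/n)
    V = np.array([[omega ** (p * j) for j in range(k)] for p in positions])
    return np.linalg.solve(V, values).real        # message coefficients

if __name__ == "__main__":
    msg, k, n = [3.0, 1.0, 4.0, 1.5], 4, 16
    codeword = encode(msg, n)
    picked = [0, 5, 7, 11]                        # any k of the n evaluations suffice
    print(np.round(recover(picked, codeword[picked], k, n), 6))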
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced \textit{free vector space}.
By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way.
Thus far, the geometric approach always uses the same basis for the input and output spaces. We observe here that this restriction is unnecessary and allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method.
We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain different basis-based attacks in the -th order spaces, where the \textit{order} is defined as the number of messages used in one sample during the attack.
We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies the ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks.
Second, we revisit the multiple-of- property with our refined geometric approach and exhibit new multiple-of- distinguishers that can reach more rounds of the \skinny-64 cipher than the state-of-the-art.
Third, we study the multiple-of- property for the first-order case, which is similar to the subspace trail but it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced \skinny-64.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of \simeck-32 and \simeck-48 found by Hadipour \textit{et al.} are deterministic independently of concrete key values.
We emphasize that all these applications were not possible before.
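As a small concrete instance of the change-of-basis viewpoint, the following numpy sketch builds the transition matrix of a toy 3-bit S-box and conjugates it by the ±1 character basis, recovering the correlation matrix familiar from linear cryptanalysis; the S-box is an arbitrary choice of ours, and the sketch does not cover the new bases introduced in the paper.

# --- illustrative sketch (Python): change of basis on a transition matrix ---
import numpy as np

S = [0, 1, 3, 6, 7, 4, 5, 2]                       # a toy 3-bit permutation
n = 3
N = 1 << n

T = np.zeros((N, N))
for x in range(N):
    T[S[x], x] = 1                                 # transition matrix in the standard basis

# Character (parity) basis: H[u][x] = (-1)^(u.x)
H = np.array([[(-1) ** bin(u & x).count("1") for x in range(N)] for u in range(N)])

C = H @ T @ H / N                                  # correlation matrix: C[v, u] = cor(u.x -> v.S(x))
print(np.round(C, 3))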
Related-Key Differential and Boomerang Cryptanalysis in the Fixed-Key Model
Differential cryptanalysis, along with its variants such as boomerang attacks, is widely used to evaluate the security of block ciphers. These cryptanalytic techniques often rely on assumptions like the \textit{hypothesis of stochastic equivalence} and \textit{Markov ciphers assumption}. Recently, more attention has been paid to verifying whether differential characteristics (DCs) meet these assumptions, finding both positive and negative results.
A part of these efforts includes the automatic search methods for both the value and difference propagation (e.g., Liu et al. CRYPTO 2020, Nageler et al. ToSC 2025/1), structural constraints analysis (e.g., Tan and Peyrin, ToSC 2022/4), and the quasidifferential (Beyne and Rijmen, CRYPTO 2022).
Nevertheless, less attention has been paid to the related-key DCs and boomerang distinguishers, where the same assumptions are used. To the best of our knowledge, only some related-tweakey DCs of \skinny were checked thanks to its linear word-based key-schedule, and no similar work is done for boomerang distinguishers.
The verification of related-key DCs and boomerang distinguishers is as important as that of DCs, as they often hold the longest attack records for block ciphers.
This paper focuses on investigating the validity of DCs in the related-key setting and boomerang distinguishers in both single- and related-key scenarios.
For this purpose, we generalize Beyne and Rijmen's quasidifferential techniques for the related-key DCs and boomerang attacks.
First, to verify related-key DCs, the related-key quasi-DC is proposed. Similar to the relationship between the quasi-DC and DC, the exact probability of a related-key DC is equal to the sum of all corresponding related-key quasi-DCs' correlations.
Since the related-key quasi-DCs involve the key information, we can determine the probability of the target related-key DC in different key subspaces.
We find both positive and negative results.
For example, we verify the 18-round related-key DC used in the best attack on \gift-64, whose probability is ; we find that this related-key DC has a higher probability, around , for some keys, but is impossible for the remaining keys.
Second, we identify proper bases to describe the boomerang distinguishers with the geometric approach. A quasi-BCT is constructed to consider the value influence in the boomerang connectivity table (BCT). For the DC parts, the quasi-biDDT is used. Connecting the quasi-BCT and quasi-biDDT, we can verify the probability of a boomerang distinguisher with quasi-boomerang characteristics.
This also allows us to analyze the probability of the boomerang in different key spaces.
For a 17-round boomerang distinguisher of \skinny-64-128 whose probability is , we find that the probability can be for half of the keys, and that the distinguisher is impossible for the other half.
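The key dependence discussed above can be demonstrated on a toy scale: the Python sketch below counts, for every round-key pair of a 2-round toy S-box cipher, how many inputs follow a fixed 2-round differential characteristic, so any deviation from the averaged estimate shows up directly in the resulting histogram. The S-box and characteristic are illustrative choices of ours, unrelated to \gift or \skinny.

# --- illustrative sketch (Python): per-key probability of a toy differential characteristic ---
from collections import Counter
from itertools import product

SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # any 4-bit permutation works here

def best_output_diff(delta_in):
    return Counter(SBOX[x] ^ SBOX[x ^ delta_in] for x in range(16)).most_common(1)[0][0]

a = 0xF                      # input difference
b = best_output_diff(a)      # most likely difference after round 1
c = best_output_diff(b)      # most likely difference after round 2

def right_pairs(k0, k1):
    count = 0
    for x in range(16):
        y0, y1 = SBOX[x ^ k0], SBOX[x ^ a ^ k0]
        if y0 ^ y1 == b and SBOX[y0 ^ k1] ^ SBOX[y1 ^ k1] == c:
            count += 1
    return count

histogram = Counter(right_pairs(k0, k1) / 16 for k0, k1 in product(range(16), repeat=2))
print(dict(histogram))       # probability -> number of (k0, k1) pairs attaining it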
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order on a set of supersingular elliptic curves primitively oriented by . Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g. in signature or MPC protocols.
Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses 4-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group, nor expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at CSIDH-2048 level and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.
SPHINCS- : A Compact Stateless Hash-Based Signature Scheme
Hash-based signatures offer a conservative post-quantum alternative, with arguably better-understood security than other post-quantum candidates. As a core building block of hash-based signatures, the efficiency of the one-time signature (OTS) largely dominates that of hash-based signatures. The WOTS+ signature scheme (Africacrypt 2013) is the current state-of-the-art OTS adopted by the signature schemes standardized by NIST---XMSS, LMS, and SPHINCS+.
A natural question is whether there is (and how much) room left for improving one-time signatures (and thus standard hash-based signatures). In this paper, we show that the WOTS one-time signature, when adopting the constant-sum encoding scheme (Bos and Chaum, Crypto 1992), is size-optimal not only under Winternitz's OTS framework, but also among all tree-based OTS designs. Moreover, we point out a flaw in the DAG-based OTS design previously shown to be size-optimal at Asiacrypt 1996, which makes the constant-sum WOTS the most size-efficient OTS to our knowledge. Finally, we evaluate the performance of constant-sum WOTS integrated into the SPHINCS+ (CCS 2019) and XMSS (PQC 2011) signature schemes, which exhibit certain degrees of improvement in both signing time and signature size.
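To make the constant-sum encoding concrete, the following Python sketch unranks an integer (standing in for a message digest) into a vector of fixed length whose entries lie in [0, w] and sum to exactly s, in the spirit of the Bos-Chaum encoding mentioned above; the parameters are toy values and the surrounding WOTS machinery is omitted.

# --- illustrative sketch (Python): constant-sum encoding of a message digest ---
from functools import lru_cache

n, w, s = 8, 15, 60          # toy parameters: 8 chains of length 16, constant sum 60

@lru_cache(maxsize=None)
def count(length, total):
    # number of vectors with `length` entries in [0, w] summing to `total`
    if length == 0:
        return 1 if total == 0 else 0
    return sum(count(length - 1, total - v) for v in range(min(w, total) + 1))

def unrank(r, length, total):
    if length == 0:
        return []
    for v in range(min(w, total) + 1):
        c = count(length - 1, total - v)
        if r < c:
            return [v] + unrank(r, length - 1, total - v)
        r -= c
    raise ValueError("rank out of range")

if __name__ == "__main__":
    digest = 123456789 % count(n, s)         # map a message digest into the valid range
    vector = unrank(digest, n, s)
    print(vector, "sum =", sum(vector))      # entries in [0, 15], sum exactly 60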
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes.
Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more.
In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions.
Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction.
Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space.
We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved.
Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments.
These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties.
Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions.
To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP.
Additionally, we use LIP to provide a repair and improvement to the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack from Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Identity-based Matchmaking Encryption with Stronger Security and Instantiation on Lattices
An identity-based matchmaking encryption (IB-ME) scheme, proposed at JOC 2021, supports anonymous but authenticated communications in a way that lets communicating parties specify the intended senders or receivers on the fly. IB-ME is easy to use in network applications that require privacy preservation, owing to its efficient implementation and special syntax. In the literature, IB-ME schemes are built from variants of the Diffie-Hellman assumption and all fail to retain security against quantum attackers. Despite the rigorous security proofs in previous security models, the existing schemes may still be vulnerable to some previously neglected attacks. Addressing these problems, we provide a stronger security definition of authenticity that considers new attacks to fit real-world scenarios, and then propose a generic construction of IB-ME satisfying the new model. Inspired by the prior IB-ME construction of Chen et al., the proposed scheme is constructed by combining 2-level anonymous hierarchical IBE (HIBE) and identity-based signature (IBS) schemes. To make lattice-based IB-ME more efficient, we additionally improve a lattice IBS, as an independent technical contribution, to shorten its signatures and thus reduce the final IB-ME ciphertext size. By combining the improved IBS with any 2-level adaptively secure anonymous lattice-based HIBE, we finally obtain the first lattice-based construction that implements IB-ME directly.
Exploring side-channels in Intel Trust Domain Extensions
Intel Trust Domain Extensions (TDX) has emerged as a crucial technology aimed at strengthening the isolation and security guarantees of virtual machines, especially as the demand for secure computation grows rapidly. Despite the protections offered by TDX, in this work we examine its security claims in depth and uncover a subtle but important observation. These findings undermine TDX's core security guarantees by breaching the isolation between the Virtual Machine Manager (VMM) and Trust Domains (TDs). For the first time, we show through a series of experiments that performance counters can be exploited by the VMM to differentiate between the activities of an idle and an active TD. The root cause of this leakage is core contention. This occurs when the VMM itself, or a process executed by the VMM, runs on the same core as the TD. Due to resource contention on the core, the effects of the TD's computations become observable in the performance monitors collected by the VMM. This finding underscores the critical need for enhanced protections to bridge these gaps within these advanced virtualized environments.
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper titled "Anamorphic Encryption: Private Communication Against a Dictator," enables embedding covert messages within ciphertexts. A key property of an anamorphic ciphertext, which embeds a covert message, is that it must remain indistinguishable from an original ciphertext. This encryption procedure has been defined based on a public-key cryptosystem. First, we present a quantum analogue of the classical anamorphic encryption definition that is based on public-key encryption. Additionally, we introduce a definition of quantum anamorphic encryption that relies on symmetric-key encryption. Furthermore, we provide a detailed generalized construction of quantum anamorphic symmetric-key encryption within a general framework, which takes any two quantum density matrices of possibly different dimensions and constructs a single quantum density matrix, the quantum anamorphic ciphertext, containing ciphertexts of both of them. Subsequently, we introduce a definition of computational anamorphic secret-sharing and extend the work of Çakan et al. on computational quantum secret-sharing to computational quantum anamorphic secret-sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Abdalla et al. (ASIACRYPT 2020) introduced a notion of identity-based inner-product functional encryption (IBIPFE) that combines identity-based encryption and inner-product functional encryption (IPFE). Thus far, several pairing-based and lattice-based IBIPFE schemes have been proposed. However, there are two open problems. First, there are no known IBIPFE schemes that satisfy the adaptive simulation-based security. Second, known IBIPFE schemes that satisfy the adaptive indistinguishability-based security or the selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy the tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure -dimensional (IB)IPFE scheme to a simulation-based secure -dimensional (IB)IPFE scheme. The proposed transformation improves Agrawal et al.'s transformation for plain IPFE (PKC 2020) that requires an indistinguishability-based secure -dimensional scheme. Then, we construct a lattice-based IBIPFE scheme that satisfies the tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. We apply the proposed transformation and obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies the tight adaptive simulation-based security under the DBDH assumption in the random oracle model. The pairing-based scheme does not use the proposed transformation, in order to achieve the best efficiency.
BBB Secure Arbitrary Length Tweak TBC from n-bit Block Ciphers
At FSE'15, Mennink introduced the concept of designing beyond-the-birthday bound secure tweakable block cipher from an ideal block cipher. They proposed two tweakable block ciphers and that accepts -bit tweak using a block cipher of -bit key and -bit data. Mennink proved that the constructions achieve security up to and queries, respectively, assuming the underlying block cipher is ideal. Later, at ASIACRYPT'16, Wang et al. proposed a class of new tweakable block ciphers derived from -bit ideal block ciphers that achieve optimal security, i.e., security up to queries. The proposed designs by both Mennink and Wang et al. admit only -bit tweaks. In FSE'23, Shen and Standaert proposed a tweakable block cipher that accepts -bit tweaks and achieves security up to queries. Their construction uses three block cipher calls, which was shown to be optimal for beyond-birthday-bound secure tweakable block ciphers accepting -bit tweaks. In this paper, we extend this line of research and consider designing tweakable block cipher supporting -bit tweaks from ideal block cipher. First, we show that there is a generic birthday-bound distinguishing attack on any such design with three block cipher calls if any of the block cipher keys are tweak-independent. We then propose a tweakable block cipher , which leverages three block cipher calls with each key being dependent on tweak. We demonstrate that achieve security up to queries. Furthermore, we extend this result and propose an optimally secure construction, dubbed , that uses four ideal block cipher calls with only one tweak-dependent key. Finally, we generalize this and propose an optimally secure tweakable block cipher that processes -bit tweaks using block cipher invocations with only one tweak-dependent block cipher key. Our experimental evaluation asserts that ZMAC instantiated with and (i.e., with ) performs better than all the existing ideal cipher based TBC candidates.
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
Provably Secure Approximate Computation Protocols from CKKS
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols.
This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other party acts as an evaluator, a simple protocol with the noise smudging technique on the encryptor's side achieves security in the standard manner. When both parties have private inputs, we demonstrate that the protocol incorporating independent errors from each party achieves a relaxed standard security notion, referred to as liberal security. Nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise in a secure manner, which satisfies the standard security definition.
Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol only satisfies liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error.
For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
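The smudging idea that recurs above can be illustrated with toy numbers: a decryption share carrying a small secret-dependent error stays correct after adding a much larger uniform smudging term, while that term statistically drowns the sensitive error. The moduli and bounds below are arbitrary illustrative choices of ours, not the parameters analyzed in the paper.

# --- illustrative sketch (Python): noise smudging in a decryption share ---
import random

q = 2**40                    # toy ciphertext modulus
delta = q // 16              # plaintext scaling factor (16 plaintext slots)
m = 5                        # plaintext
e_secret = 37                # small error that depends on the secret key / inputs

smudge = random.randrange(-2**25, 2**25)        # |smudge| >> |e_secret|, but << delta/2
share = (m * delta + e_secret + smudge) % q

decoded = round((share % q) / delta) % 16
print("decoded plaintext:", decoded)            # still 5: smudging does not break correctness
# An observer of `share` learns essentially nothing about e_secret, since it is drowned
# in the much larger smudging term; this is the intuition the protocols above build on.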
Under What Conditions Is Encrypted Key Exchange Actually Secure?
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE.
In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks:
1. We show that among the five existing results on the UC-security of (O)EKE using a general KA protocol, all are incorrect;
2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy several additional security properties: though some of these are closely related to existing security properties, some are new, and all are missing from existing works on (O)EKE;
3. We give UC-security proofs for EKE and OEKE using Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC).
Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also present a security analysis of POPF using a new, weakened notion of almost UC realizing a functionality, that is still sufficient for proving composed protocols to be fully UC-secure.
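For readers unfamiliar with the EKE message flow, the toy sketch below shows only the password-masked key-agreement exchange, over an insecure toy Schnorr group and with an XOR pad standing in for the ideal cipher / POPF. It deliberately glosses over exactly the points this work is about (proper encryption over group elements and the extra KA properties needed for UC security), so it is an illustration of the pattern, not of a secure protocol.

```python
import hashlib
import random

# toy Schnorr group: p = 2q + 1, g generates the order-q subgroup (not secure sizes)
p, q, g = 2039, 1019, 4

def pad(password, label, nbytes=2):
    # password-derived XOR pad standing in for the ideal cipher / POPF over
    # group elements; this substitution is for illustration only
    return int.from_bytes(hashlib.sha256(password + label).digest()[:nbytes], "big")

def eke_demo(password=b"correct horse"):
    a = random.randrange(1, q); A = pow(g, a, p)      # client's KA message
    b = random.randrange(1, q); B = pow(g, b, p)      # server's KA message
    # each side sends its KA message "encrypted" under the shared password
    cA = A ^ pad(password, b"client")
    cB = B ^ pad(password, b"server")
    # receivers strip the pad and finish the unauthenticated key agreement
    A_rec = cA ^ pad(password, b"client")
    B_rec = cB ^ pad(password, b"server")
    k_client = pow(B_rec, a, p)
    k_server = pow(A_rec, b, p)
    assert k_client == k_server
    transcript = repr((cA, cB)).encode()
    return hashlib.sha256(transcript + k_client.to_bytes(2, "big")).hexdigest()

print(eke_demo())
```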
Reducing the Number of Qubits in Solving LWE
At Crypto 2021, May presented an algorithm for solving the ternary Learning-With-Errors problem, where the solution is a ternary vector with a known number of 1 and -1 entries. This attack significantly improved the time complexity, measured as a function of the size of the key space, over previously known algorithms. To achieve this, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional 1 and -1 entries, reduces the overall time complexity.
Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack with improved running time. However, this quantum attack requires an exponential number of qubits. This work investigates whether the ternary LWE problem can also be solved using significantly fewer qubits. To this end, we look closely at Dicke states, which are equal superpositions over all binary vectors with a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem.
Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit that deterministically prepares binary Dicke states in linear time. Their procedure benefits from the inductive structure of Dicke states, i.e., a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than {0,1}. Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit.
Finally, we apply our generalized Dicke states to the ternary LWE problem and construct an algorithm whose qubit requirement, classical memory consumption, and best obtainable time complexity we bound explicitly.
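To pin down the object being prepared, the sketch below classically builds the amplitude vector of a generalized (ternary) Dicke state with a prescribed number of 1s and 2s. The paper's contribution, a deterministic linear-time quantum circuit preparing such states (with a Qiskit implementation), is not reproduced here; the function name and parameters are illustrative only.

```python
import itertools
import math
import numpy as np

def ternary_dicke_state(n, n1, n2):
    """Equal superposition over all length-n ternary strings with exactly
    n1 ones and n2 twos (the remaining entries are zero).  Returns the
    3**n-dimensional state vector, indexed by the base-3 value of the string."""
    strings = [s for s in itertools.product((0, 1, 2), repeat=n)
               if s.count(1) == n1 and s.count(2) == n2]
    state = np.zeros(3 ** n)
    amp = 1.0 / math.sqrt(len(strings))
    for s in strings:
        idx = sum(digit * 3 ** i for i, digit in enumerate(reversed(s)))
        state[idx] = amp
    return state

# 4 qutrits with one '1' and one '2': 4 * 3 = 12 basis strings, unit norm
psi = ternary_dicke_state(4, 1, 1)
print(np.count_nonzero(psi), round(float(np.linalg.norm(psi)), 6))  # 12 1.0
```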
The Blockwise Rank Syndrome Learning problem and its applications to cryptography
This paper is an extended version of [8], published at PQCrypto 2024, in which we combine two techniques, blockwise errors and multi-syndromes, into a single approach that leads to very efficient generalized RQC and LRPC schemes.
The notion of blockwise errors in the context of rank-based cryptography was recently introduced in [31]. This notion of error, very close to the sum-rank metric [27], makes it possible, by decreasing the weight of the decoded error, to greatly improve the parameters of the LRPC and RQC cryptographic schemes. Slightly earlier, the multi-syndrome approach introduced for LRPC and RQC in [3,18] also allowed the parameter sizes of LRPC and RQC to be decreased considerably, in particular through the introduction of Augmented Gabidulin codes.
In order to combine these approaches, we introduced in [8] the Blockwise Rank Support Learning problem. It consists of guessing the support of the errors when several syndromes are given as input, with blockwise-structured errors. The new schemes we introduced have very interesting features: for 128-bit security they yield generalized schemes for which the sum of the public key and ciphertext sizes is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme.
This extended version adds the following new contributions. First, we propose a new optimization of the main protocol, which consists in including 1 in the support of the error; this allows a subspace of the error to be deduced, improving the decoding capacity of our LRPC code while maintaining the same security level. The approach of the original paper already gains in parameter size over previous results [18,31], and this optimization reduces the parameters further at higher security levels. We obtain better results in terms of size than the KYBER scheme, whose corresponding total is 1.5 kB. Second, we give a more detailed analysis of the algebraic attacks on the blockwise RD problem proposed in [8], which allowed us to cryptanalyze all blockwise LRPC parameters proposed in [31] (with an improvement of more than 40 bits in the case of structural attacks). Third, we provide a more detailed introduction to the historical background of the rank metric, especially the RQC and LRPC cryptosystems and their recent improvements, and we add parameters for classical RQC (the case of a single given syndrome, a special case of our scheme, for which we achieve 1.5 kB for the sum of the public key and the ciphertext), which compares very favorably to the previous version of classical RQC.
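For readers new to the rank metric, the short sketch below (toy parameters over GF(2)) computes rank weights and shows how a blockwise error carries a weight per block rather than a single global rank. It is a definitional illustration only, not part of the RQC or LRPC schemes themselves, and the example supports are made up.

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    basis = {}                       # leading-bit position -> basis row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in basis:
                cur ^= basis[lead]   # eliminate the leading bit
            else:
                basis[lead] = cur
                break
    return len(basis)

def rank_weight(coords):
    """coords: one m-bit integer per coordinate of a vector over F_{2^m}
    (its coefficient vector over F_2).  The rank weight is the GF(2) rank of
    the m x n matrix formed by these coordinates."""
    return gf2_rank(coords)

# m = 4.  Block 1 lives in the 2-dimensional support <0001, 0010>,
# block 2 in the 1-dimensional support <1000> (assumed example values).
block1 = [0b0001, 0b0010, 0b0011, 0b0001]
block2 = [0b1000, 0b1000, 0b0000]
print(rank_weight(block1), rank_weight(block2), rank_weight(block1 + block2))
# -> 2 1 3 : blockwise weight (2, 1), overall rank weight 3
```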
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions.
We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
An Efficient Quantum Oblivious Transfer Protocol
Oblivious Transfer (OT) is an important two-party privacy-preserving cryptographic primitive. In OT, a sender holds several pieces of information and a receiver holds a choice bit indicating which piece it wants to obtain as the output of the protocol. At the end of the protocol, the sender remains oblivious to the choice bit, and the receiver remains oblivious to the pieces of information it did not choose. OT has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions that are not quantum secure, and existing quantum OT protocols are neither efficient nor practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT uses the asymmetric key distribution proposed by Gao et al. [18] as a building block. It requires only single photons as a source of quantum states, and the states are measured using single-particle projective measurements; this makes qOT efficient and practical. Our proposed design is secure against quantum attacks and, moreover, provides long-term security.
Key Guidance Invocation: A White-box Mode Enables Strong Space Hardness under Adaptively Chosen-Space Attacks
The notion of space hardness serves as a quantitative measure of the resilience of dedicated white-box schemes against code-lifting attacks, making it a widely used metric in the field. However, achieving strong space hardness (SSH) under the adaptively chosen-space attack model (ACSAM) remains an unresolved challenge, as no existing white-box scheme provides SSH guarantees under ACSAM.
To address this problem, we introduce a novel mode of operation tailored for white-box cryptography, termed the Key Guidance Invocation (KGI) mode. Our security analysis shows that the KGI mode not only significantly strengthens resistance to adaptively chosen-space attacks, but also ensures SSH under ACSAM. Moreover, we propose a dedicated parameterized white-box construction, RubikStone, which directly leverages the concept of a lookup-table pool. RubikStone offers greater flexibility in lookup-table utilization than existing white-box constructions and is particularly well suited to the KGI mode.
Additionally, we instantiate RubikStone with parameters (256, 8, 12) under the KGI mode, yielding a concrete cipher that delivers SSH security guarantees under ACSAM. Remarkably, this instantiation also shows superior performance, surpassing the efficiency of white-box AES based on the CEJO framework in real-world settings. Besides, we conduct a comprehensive statistical analysis of the operations in all existing white-box ciphers. Our findings indicate that our instantiation remains highly competitive in computational efficiency despite offering unprecedented security.
Blockchain-based Secure D2D localisation with adaptive precision
In this paper we propose a secure, best-effort methodology for providing localisation information to devices in a heterogeneous network where devices have access neither to GPS-like technology nor to heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighbouring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen-message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location-registration time, the number of failed requests, and the total execution time, using Hyperledger Besu with QBFT consensus as the DLT case study.
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
A tuple of NP statements $(x_1, \ldots, x_k)$ satisfies a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ if $P(b_1, \ldots, b_k) = 1$, where $b_i = 1$ if and only if $x_i$ is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that $x_1, \ldots, x_k$ satisfy a monotone policy $P$ with a proof whose size scales with the size of the Boolean circuit computing the NP relation (and only polylogarithmically with $k$).
Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffice (e.g., over $\mathbb{Z}_2$). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
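Since Goldwasser-Micali is named above as a concrete instantiation, here is a toy version of it (deliberately insecure parameter sizes) showing the XOR homomorphism over $\mathbb{Z}_2$ that such compilers exploit: multiplying two ciphertexts yields an encryption of the XOR of the underlying bits.

```python
import math
import random

# toy Goldwasser-Micali: p, q = 3 (mod 4), so y = -1 is a quadratic non-residue
# modulo both primes while having Jacobi symbol +1 modulo N (insecure sizes)
p, q = 499, 547
N = p * q
y = N - 1

def encrypt(bit):
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return pow(y, bit, N) * pow(r, 2, N) % N

def is_qr(c, prime):
    # Euler's criterion: c is a square mod prime iff c^((prime-1)/2) = 1
    return pow(c % prime, (prime - 1) // 2, prime) == 1

def decrypt(c):
    # ciphertext is a quadratic residue mod N iff the plaintext bit is 0
    return 0 if (is_qr(c, p) and is_qr(c, q)) else 1

# additive (XOR) homomorphism over Z_2: Enc(b1) * Enc(b2) decrypts to b1 XOR b2
for b1 in (0, 1):
    for b2 in (0, 1):
        assert decrypt(encrypt(b1) * encrypt(b2) % N) == b1 ^ b2
print("XOR homomorphism verified")
```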
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new “LWE-with-hints” assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions.
Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption—a new LWE-with-hint assumption that addresses structural weaknesses from prior assumptions, and based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random ``openings'' of zero-encryptions under the Gentry--Sahai--Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises.
These two features---marginally random hints and the absence of (natural) noise leakage---rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions.
To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components.
In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins-Jain-Lin CRYPTO 21) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes.
Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to ``evasive'' or pseudorandom functionalities.
An ETSI GS QKD compliant TLS implementation
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on both the client and server sides. We thus wish to contribute to the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video conference call encrypted by QKD.
PQNTRU: Acceleration of NTRU-based Schemes via Customized Post-Quantum Processor
Post-quantum cryptography (PQC) has rapidly evolved in response to the emergence of quantum computers, with the US National Institute of Standards and Technology (NIST) selecting four algorithms for PQC standardization in 2022, including the Falcon digital signature scheme. The latest round of additional digital signature proposals introduced Hawk; both Falcon and Hawk are based on the NTRU lattice and offer compact signatures with fast generation and verification, making them suitable for deployment on resource-constrained Internet-of-Things (IoT) devices. Despite the popularity of CRYSTALS-Dilithium and CRYSTALS-Kyber, research on NTRU-based schemes has been limited due to their complex algorithms and operations. The performance of Falcon and Hawk remains constrained by the lack of parallel execution in crucial operations such as the Number Theoretic Transform (NTT) and Fast Fourier Transform (FFT), with data dependencies being a significant bottleneck. This paper accelerates the NTRU-based schemes Falcon and Hawk through hardware/software co-design on a customized Single-Instruction-Multiple-Data (SIMD) processor, proposing new SIMD hardware units and instructions to expedite these schemes, along with software optimizations to boost performance. Our NTT optimization includes a novel layer-merging technique for SIMD architectures to reduce memory accesses, and the use of modular-reduction algorithms (Signed Montgomery and Improved Plantard) targeting various modulus data widths to enhance performance. We explore applying layer merging to accelerate fixed-point FFT at the SIMD instruction level and devise a dual-issue parser to streamline assembly-code organization and maximize dual-issue utilization. A System-on-Chip (SoC) architecture is devised to improve the practical applicability of the processor in real-world scenarios. Evaluation on 28 nm technology and an FPGA platform shows that our design and optimizations can increase the performance of Hawk signature generation and verification by over 7 times.
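For context on the operation being accelerated, the sketch below is a plain recursive radix-2 NTT over toy parameters (q = 17, n = 8, with 2 a primitive 8th root of unity); it is not Falcon's or Hawk's actual modulus, and the paper's contributions (SIMD layer merging, Signed Montgomery and Improved Plantard reductions, dual-issue scheduling) are hardware- and ISA-level and are not reflected here.

```python
Q, N, OMEGA = 17, 8, 2   # toy parameters: 2 is a primitive 8th root of unity mod 17

def ntt(a, omega=OMEGA, q=Q):
    """Recursive Cooley-Tukey number-theoretic transform of a (length a power of two)."""
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q
        w = w * omega % q
    return out

def intt(a, omega=OMEGA, q=Q):
    """Inverse NTT: transform with omega^-1 and rescale by n^-1 (Python 3.8+)."""
    n_inv = pow(len(a), -1, q)
    return [x * n_inv % q for x in ntt(a, pow(omega, -1, q), q)]

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(coeffs)) == coeffs   # round-trip check
print(ntt(coeffs))
```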
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked.
In this work, we present new LHLs for discrete Gaussians in both the integer and ring settings. Our results show that the required Gaussian parameter can be improved compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and the ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish the asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent work by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy of discrete Gaussians modulo sublattices, which should be of independent interest.
Delegated Multi-party Private Set Intersection from Secret Sharing
In this work, we address the problem of Delegated PSI (D-PSI), where a cloud server is introduced to handle most computational and communication tasks. D-PSI enables users to securely delegate their private sets to the cloud, ensuring the privacy of their data while allowing efficient computation of the intersection. The cloud operates under strict security requirements, learning nothing about the individual sets or the intersection result. Moreover, D-PSI minimizes user-to-user communication and supports "silent" processing, where the cloud can perform computations independently without user interaction, apart from set delegation and result retrieval.
We formally define the D-PSI problem and propose a novel construction that extends beyond two-party scenarios to support multi-party settings. Our construction adheres to the D-PSI requirements, including security against semi-honest adversaries, and achieves computational and communication complexities close to the ideal "perfect" D-PSI protocol. Additionally, we demonstrate the practicality of our approach through a baseline implementation and an optimized version that further reduces computational overhead. Our results establish a strong foundation for secure and efficient PSI in real-world cloud computing scenarios.
Solving Multivariate Coppersmith Problems with Known Moduli
We examine the problem of finding small solutions to systems of modular multivariate polynomials. While the case of univariate polynomials has been well understood since Coppersmith's original 1996 work, multivariate systems typically rely on carefully crafted shift polynomials and significant manual analysis of the resulting Coppersmith lattice. In this work, we develop several algorithms that make such hand-crafted strategies obsolete. We first use the theory of Gröbner bases to develop an algorithm that provably computes an optimal set of shift polynomials, and we use lattice theory to construct a lattice which provably contains all desired short vectors. While this strategy is usable in practice, the resulting lattice often has large rank. Next, we propose a heuristic strategy based on graph optimization algorithms that quickly identifies low-rank alternatives. Third, we develop a strategy which symbolically precomputes shift polynomials, and we use the theory of polytopes to polynomially bound the running time. Like Meers and Nowakowski's automated method, our precomputation strategy enables heuristically and automatically determining asymptotic bounds. We evaluate our new strategies on over a dozen previously studied Coppersmith problems. In all cases, our unified approach achieves the same recovery bounds in practice as prior work, even improving the practical bounds for three of the problems. In five problems, we find smaller and more efficient lattice constructions, and in three problems, we improve the existing asymptotic bounds. While our strategies are still heuristic, they are simple to describe, implement, and execute, and we hope that they drastically simplify the application of Coppersmith's method to systems of multivariate polynomials.
Rational Secret Sharing with Competition
The rational secret sharing (RSS) problem considers how to incentivize rational parties to share their received information so as to reconstruct a correctly shared secret. Halpern and Teague (STOC'04) showed that solving the RSS problem deterministically with an explicitly bounded runtime is impossible if parties prefer learning the secret to not learning it, and prefer that fewer other parties learn it.
To overcome this impossibility result, we propose RSS with competition. We consider a slightly different yet sensible preference profile: Each party prefers to learn the secret early and prefers fewer parties learning before them. This preference profile changes the information-hiding dynamics among parties in prior works: First, those who have learned the secret are indifferent towards or even prefer informing others later; second, the competition to learn the secret earlier among different access groups in the access structure facilitates information sharing inside an access group. As a result, we are able to construct the first deterministic RSS algorithm that terminates in at most two rounds. Additionally, our construction does not employ any cryptographic machinery (being fully game-theoretic and using the underlying secret-sharing scheme as a black-box) nor requires the knowledge of the parties' exact utility function. Furthermore, we consider general access structures.
Flexway O-Sort: Enclave-Friendly and Optimal Oblivious Sorting
Oblivious algorithms are being deployed at large scale in real world to enable privacy-preserving applications such as Signal's private contact discovery. Oblivious sorting is a fundamental building block in the design of oblivious algorithms for numerous computation tasks. Unfortunately, there is still a theory-practice gap for oblivious sort. The commonly implemented bitonic sorting algorithm is not asymptotically optimal, whereas known asymptotically optimal algorithms suffer from large constants.
In this paper, we construct a new oblivious sorting algorithm called flexway o-sort, which is asymptotically optimal, concretely efficient, and suitable for implementation in hardware enclaves such as Intel SGX. For moderately large inputs of several gigabytes, our flexway o-sort algorithm outperforms known oblivious sorting algorithms both when the data fits within the hardware enclave and when it does not. We also implemented various applications of oblivious sorting, including histogram, database join, and initialization of an ORAM data structure. For these applications on data sets from 8 GB to 32 GB, we achieve significant speedups over bitonic sort, both when the data fits within the enclave and when it does not.
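For reference, the baseline mentioned above, the bitonic sorting network, can be sketched in a few lines; the point to notice is that the schedule of compared index pairs depends only on the input length, never on the data, which is what makes it oblivious (at an O(n log^2 n) compare-exchange cost). This is only the baseline, not the flexway o-sort algorithm.

```python
def bitonic_sort(a):
    """In-place bitonic sorting network for power-of-two lengths.  The sequence
    of compare-exchange operations is fixed by len(a) alone, so the memory
    access pattern leaks nothing about the data being sorted."""
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 9, 1, 6, 2, 8, 5]))   # [1, 2, 3, 5, 6, 7, 8, 9]
```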
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts.
Specifically, we consider a setting where a DAO consisting of sellers holding shares of a witness $w$ interacts with a DAO comprising buyers holding shares of a signing key $\mathsf{sk}$; the goal is for the sellers to exchange $w$ for a signature, under $\mathsf{sk}$, transferring a predetermined amount of funds.
Fairness is required to hold both between DAOs (i.e., ensuring that each DAO receives its asset if and only if the other does) as well as within each DAO (i.e., ensuring that all members of a DAO receive their asset if and only if every other member does).
We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
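To illustrate the adaptor-signature interface this construction builds on, here is a single-signer Schnorr-style adaptor signature over a toy group with made-up parameters: a pre-signature can be verified against the statement, completed into a valid signature using the witness t, and t can be extracted from the pre/full signature pair. The paper's threshold adaptor signatures and certified witness encryption are not reproduced here.

```python
import hashlib
import random

# toy Schnorr group (not secure sizes): p = 2q + 1, g generates the order-q subgroup
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); X = pow(g, x, p)   # signer's key pair
t = random.randrange(1, q); T = pow(g, t, p)   # witness t and its statement T = g^t

def pre_sign(msg):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    c = H(R * T % p, msg)                      # challenge binds the shifted nonce R*T
    return R, (k + c * x) % q

def pre_verify(msg, R, s_pre):
    c = H(R * T % p, msg)
    return pow(g, s_pre, p) == R * pow(X, c, p) % p

def verify(msg, R_full, s):
    c = H(R_full, msg)
    return pow(g, s, p) == R_full * pow(X, c, p) % p

msg = "transfer the agreed funds"
R, s_pre = pre_sign(msg)
assert pre_verify(msg, R, s_pre)
s = (s_pre + t) % q                            # adapting with the witness t
assert verify(msg, R * T % p, s)               # full signature under nonce R*T
assert (s - s_pre) % q == t                    # t is extractable from both signatures
print("adaptor signature round-trip ok")
```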
Registered ABE and Adaptively-Secure Broadcast Encryption from Succinct LWE
Registered attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data (like standard ABE), but without needing a central trusted authority. In a key-policy registered ABE scheme, users choose their own public and private keys and then register their public keys together with a decryption policy with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key which serves as the public key for an ABE scheme.
Currently, we can build registered ABE for restricted policies (e.g., Boolean formulas) from pairing-based assumptions, and for general policies using witness encryption or indistinguishability obfuscation. In this work, we construct a key-policy registered ABE for general policies (specifically, bounded-depth Boolean circuits) from the $\ell$-succinct learning with errors (LWE) assumption in the random oracle model. The ciphertext size in our registered ABE scheme grows polynomially with the security parameter $\lambda$ and the depth $d$ of the circuit that computes the policy; notably, it is independent of the length of the attribute and is optimal up to this polynomial factor.
Previously, the only lattice-based instantiation of registered ABE used witness encryption, which relies on private-coin evasive LWE, a stronger assumption than $\ell$-succinct LWE. Moreover, the ciphertext size in previous registered ABE schemes that support general policies (i.e., from obfuscation or witness encryption) scales with quantities that our scheme avoids: the ciphertext size in our scheme depends only on the depth of the circuit (and not on the length of the attribute or the size of the policy). This enables new applications to identity-based distributed broadcast encryption.
Our techniques are also useful for constructing adaptively secure (distributed) broadcast encryption, and we give the first such scheme from the $\ell$-succinct LWE assumption in the random oracle model. Previously, the only lattice-based broadcast encryption scheme with adaptive security relied on witness encryption in the random oracle model; all other lattice-based broadcast encryption schemes achieved only selective security.
Generic Composition: From Classical to Quantum Security
Authenticated encryption (AE) provides both authenticity and privacy.
Starting with Bellare's and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach.
This work investigates the security of the Encrypt-then-MAC composition in a quantum setting which means that adversarial queries as well as the responses to those queries may be in superposition.
We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security.
On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC.
Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure.
The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
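As a classical point of reference for the composition being analyzed, the sketch below shows Encrypt-then-MAC with a hash-derived toy stream cipher standing in for SE and HMAC-SHA256 standing in for the PRF-based MAC; these stand-ins are illustrative assumptions, and the quantum (superposition-query) aspects studied in the paper obviously cannot be exhibited by such a script.

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    """Toy PRF-based stream: block i = SHA-256(key || nonce || i).  A stand-in
    for the symmetric encryption scheme SE, not a recommended cipher."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, msg):
    nonce = os.urandom(16)
    body = bytes(m ^ k for m, k in zip(msg, keystream(enc_key, nonce, len(msg))))
    ct = nonce + body
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()   # MAC covers the ciphertext
    return ct + tag

def decrypt_then_verify(enc_key, mac_key, blob):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(hmac.new(mac_key, ct, hashlib.sha256).digest(), tag):
        raise ValueError("invalid tag")                    # reject before decrypting
    nonce, body = ct[:16], ct[16:]
    return bytes(c ^ k for c, k in zip(body, keystream(enc_key, nonce, len(body))))

ek, mk = os.urandom(32), os.urandom(32)
blob = encrypt_then_mac(ek, mk, b"hello quantum world")
assert decrypt_then_verify(ek, mk, blob) == b"hello quantum world"
```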
How Small Can S-boxes Be
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives.
Both cryptographic properties and implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones.
This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (those whose differential uniformity and linearity are both 4) under a given standard cell library.
Firstly, we evaluate upper and lower bounds on the minimum area of S-boxes by proposing a Prim-like greedy algorithm and by utilizing properties of balanced Boolean functions to construct bijective S-boxes.
Secondly, a SAT-aided automatic search tool is proposed that can simultaneously take into account multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, as well as implementation costs such as area and gate-depth complexity.
Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes.
The measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library, and all 2-, 3-, and 4-input logic gates are taken into consideration.
Our results show that the minimum area of an optimal 4-bit S-box is 11 GE, with a depth of 3.
If 4-input gates are not used, this minimum area increases to 12 GE and the depth becomes 4; the same holds if only 2-input gates are used.
If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while the depth is unchanged.
Interestingly, the same results are also obtained for non-optimal 4-bit bijective S-boxes whose differential uniformity and linearity still rule out non-trivial linear structures, provided only 2-input and 3-input gates are used; the minimum area, however, drops to 9 GE once 4-input gates are involved.
More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with a fixed point and 14.33 GE without a fixed point, and the depth increases sharply to 8.
Besides determining the exact minimum area, our tool is also useful for finding better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box with 17 GE, whereas the designers' original circuit has an area of 23.33 GE and the optimized result by Lu et al. achieves 17.66 GE. We also give the first optimized implementation of SKINNY's 8-bit S-box, with 26.67 GE.
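For concreteness, the two properties defining an optimal 4-bit S-box can be checked with a few lines of Python; PRESENT's S-box is used as an example here (the paper's area and depth search is SAT- and cell-library-based and is not reproduced). Linearity is reported as the maximum absolute bias of the linear approximation table, matching the "both equal 4" convention used above.

```python
def differential_uniformity(s):
    """Largest entry of the difference distribution table (excluding dx = 0)."""
    n, best = len(s), 0
    for dx in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[s[x] ^ s[x ^ dx]] += 1
        best = max(best, max(counts))
    return best

def linearity_bias(s):
    """Maximum absolute bias |#{x : a.x = b.S(x)} - n/2| over non-zero masks a, b."""
    n, best = len(s), 0
    for a in range(1, n):
        for b in range(1, n):
            agree = sum(1 for x in range(n)
                        if bin(a & x).count("1") % 2 == bin(b & s[x]).count("1") % 2)
            best = max(best, abs(agree - n // 2))
    return best

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(PRESENT_SBOX), linearity_bias(PRESENT_SBOX))  # 4 4
```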
Toward Optimal-Complexity Hash-Based Asynchronous MVBA with Optimal Resilience
Multi-valued validated Byzantine agreement (MVBA), a fundamental primitive of distributed computing, allows processes to agree on a valid value despite some processes behaving maliciously. Among hash-based solutions for the asynchronous setting with adaptive faults, the state-of-the-art HMVBA protocol achieves optimal message complexity, (near-)optimal bit complexity, and optimal time complexity, but it falls short of optimal resilience in the number of adaptive failures it tolerates. In contrast, the best known optimally resilient protocol, FIN-MVBA, exchanges more messages and bits. This highlights a fundamental question: can a hash-based protocol be designed for the asynchronous setting with adaptive faults that simultaneously achieves both optimal complexity and optimal resilience?
In this paper, we take a significant step toward answering this question. Namely, we introduce Reducer, an MVBA protocol that retains HMVBA's complexity while improving its resilience. Like HMVBA and FIN-MVBA, Reducer relies exclusively on collision-resistant hash functions. A key innovation in Reducer's design is its internal use of strong multi-valued Byzantine agreement (SMBA), a variant of strong consensus we introduce and construct, which ensures agreement on a correct process's proposal. To further advance resilience toward the optimal one-third bound, we then propose Reducer++, an MVBA protocol that tolerates up to $(1/3 - \epsilon)n$ adaptive failures, for any fixed constant $\epsilon > 0$. Unlike Reducer, Reducer++ does not rely on SMBA. Instead, it employs a novel approach involving hash functions modeled as random oracles to ensure termination. Reducer++ maintains constant time complexity, quadratic message complexity, and quasi-quadratic bit complexity, with constants dependent on $\epsilon$.
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, the final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in the Tate and optimal Ate pairings. While optimizations for elliptic curves with even embedding degrees have been well explored, progress for curves with odd embedding degrees, particularly those divisible by 3, has been more limited. This paper presents new optimization techniques for computing the final exponentiation of the optimal Ate pairing on these curves. The first exploits the fact that some existing seeds have a form enabling cyclotomic cubing, and extends this to generate new seeds with the same form. The second is to generate new seeds with sparse ternary representations, replacing squarings with cyclotomic cubings. The first technique improves efficiency over the square-and-multiply (SM) method for existing seeds at the considered security levels, with further gains for newly generated seeds; the second technique likewise improves efficiency across the considered security levels.
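The sketch below illustrates only the exponent-recoding idea behind the second technique: rewriting the exponent in a signed-ternary form so that cubings (which are cheap for cyclotomic elements) replace squarings. It runs over a toy modular group rather than the pairing target field, and the dedicated cyclotomic-cubing formulas are not shown; the parameters are made up for illustration.

```python
def balanced_ternary(e):
    """Digits of e in {-1, 0, 1}, most significant first."""
    digits = []
    while e:
        r = e % 3
        if r == 2:
            r, e = -1, e + 1
        digits.append(r)
        e //= 3
    return digits[::-1] or [0]

def cube_and_multiply(g, e, p):
    """Left-to-right base-3 exponentiation: one cubing per digit plus one
    multiplication (by g or g^-1) per non-zero digit (Python 3.8+ for pow(g, -1, p))."""
    g_inv = pow(g, -1, p)          # inversion is cheap for cyclotomic elements
    acc = 1
    for d in balanced_ternary(e):
        acc = pow(acc, 3, p)       # stand-in for a cyclotomic cubing
        if d == 1:
            acc = acc * g % p
        elif d == -1:
            acc = acc * g_inv % p
    return acc

e, p, g = 123456789, 2039, 7       # toy exponent and modular group
assert cube_and_multiply(g, e, p) == pow(g, e, p)

bits, trits = bin(e)[2:], balanced_ternary(e)
print("binary           length/weight:", len(bits), bits.count("1"))
print("balanced ternary length/weight:", len(trits), sum(1 for t in trits if t))
```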