Papers updated in last 7 days (89 results)
Adaptively Secure Threshold Blind BLS Signatures and Threshold Oblivious PRF
We present the first threshold blind signature scheme and threshold Oblivious PRF (OPRF) scheme that remain secure in the presence of an adaptive adversary, who can adaptively decide which parties to corrupt throughout the lifetime of the scheme. Moreover, our adaptively secure schemes preserve the minimal round complexity and add only a small computational overhead over prior solutions, which offered security only against a much less realistic static adversary, who must choose the subset of corrupted parties before the protocol is initialized.
Our threshold blind signature scheme computes standard BLS signatures while our threshold OPRF computes a very efficient "2HashDH" OPRF [JKK14]. We prove adaptive security of both schemes in the Algebraic Group Model (AGM). Our adaptively secure threshold schemes are as practical as the underlying standard single-server BLS blind signature and 2HashDH OPRF, and they can be used to add cryptographic fault-tolerance and decentralize trust in any system that relies on blind signatures, like anonymous credentials and e-cash, or on OPRF, like the OPAQUE password authentication and the Privacy Pass anonymous authentication scheme, among many others.
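The single-server 2HashDH flow underlying the threshold OPRF can be sketched in a few lines. This is a minimal illustration in a tiny 11-bit group; the group size, the hash-to-group map, and all names are assumptions for this sketch and offer no security (real instantiations use elliptic-curve groups):

```python
import hashlib, secrets

# Toy sketch of the 2HashDH OPRF F_k(x) = H2(x, H1(x)^k) and its blind
# evaluation. Illustrative parameters only; not secure.
p, q = 2039, 1019            # safe prime p = 2q + 1; subgroup order q

def H1(x: bytes) -> int:
    # Toy hash-to-group (leaks a discrete log; fine for illustration only).
    e = int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q
    return pow(2, e, p)      # 2 generates the order-q subgroup mod 2039

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(b"H2" + x + y.to_bytes(2, "big")).digest()

k = secrets.randbelow(q - 1) + 1        # server's OPRF key
x = b"user input"

r = secrets.randbelow(q - 1) + 1        # client's blinding factor
a = pow(H1(x), r, p)                    # client -> server (blinded element)
b = pow(a, k, p)                        # server -> client (one exponentiation)
y = pow(b, pow(r, -1, q), p)            # client unblinds to H1(x)^k

assert H2(x, y) == H2(x, pow(H1(x), k, p))  # equals the direct evaluation
```

The server sees only the blinded element `a`, which illustrates why the construction is oblivious; the threshold schemes in the paper distribute the exponentiation by `k` across parties.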
An Efficient Sequential Aggregate Signature Scheme with Lazy Verification
A sequential aggregate signature (SAS) scheme allows multiple potential signers to sequentially aggregate their respective signatures into a single compact signature. Typically, verification of a SAS signature requires access to all messages and public keys utilized in the aggregate generation. However, efficiency is crucial for cryptographic protocols to facilitate their practical implementation. To this end, we propose a sequential aggregate signature scheme with lazy verification for a set of user-message pairs, allowing the verification algorithm to operate without requiring access to all messages and public keys in the sequence. This construction is based on the RSA assumption in the random oracle model and is particularly beneficial in resource-constrained applications that involve forwarding authenticated information between parties, such as certificate chains. As an extension of this work, we introduce the notion of sequentially aggregatable proxy re-signatures, which enables third parties or proxies to transform aggregatable signatures under one public key into signatures under another, useful in applications such as sharing web certificates and authenticating network paths. We also present a construction of a sequential aggregate proxy re-signature scheme, secure in the random oracle model under the RSA assumption, which may be of independent interest.
RHQC: post-quantum ratcheted key exchange from coding assumptions
Key exchange mechanisms (KE or KEMs) such as the Diffie-Hellman protocol have proved to be a cornerstone, reconciling the efficiency of symmetric encryption with the practicality of public-key primitives.
Such designs, however, assume that the long-term asymmetric key in use is never compromised. To relax this strong security assumption, and to allow for modern security features such as Perfect Forward Secrecy (PFS) and Post-Compromise Security (PCS), Ratcheted KE (RKE) has been proposed.
This work proposes to turn the Hamming Quasi-Cyclic (HQC) cryptosystem into such a ratcheted KE, yielding the first such code-based construction.
Interestingly, our design allows either party, interchangeably, to update the key on demand, yielding a construction called bi-directional RKE, which compares favorably to generic transformations.
Finally, we prove that the resulting scheme satisfies the usual correctness and key-indistinguishability properties, and suggest concrete sets of parameters, assuming different real-life use cases.
Worst-case Analysis of Lattice Enumeration Algorithm over Modules
This paper presents a systematic study of module lattices. We extend the lattice enumeration algorithm from Euclidean lattices to module lattices, providing a generalized framework.
To incorporate the refined analysis of Hanrot and Stehlé (CRYPTO'07), we adapt key definitions from Euclidean lattices, such as HKZ-reduced and quasi-HKZ-reduced bases, to the pseudo-bases of modules.
Furthermore, we revisit the lattice profile, a crucial aspect of enumeration algorithm analysis, and extend its analysis to module lattices.
As a result, we improve the asymptotic performance of the module lattice enumeration algorithm and module-SVP.
For instance, let $K = \mathbb{Q}[x]/\langle x^d + 1\rangle$ be a number field with a power-of-two integer $d$, and suppose that $n\ln n = o(\ln d)$.
Then, a shortest nonzero vector in $M \subset K^n$ can be found in time $d^{\frac{d}{2e} + o(d)}$, improving upon the previous lattice enumeration bound of $(nd)^{\frac{nd}{2e}+ o(nd)}$.
Our algorithm naturally extends to solving ideal-SVP. Given an ideal $I \subset R$, where $R = \mathbb{Z}[x]/\langle x^t + 1 \rangle$ with a power-of-two integer $t = nd$, we can find a shortest nonzero element of $I$ in time $\exp(O(\frac{t}{2e} \ln \ln t))$, improving upon the previous enumeration bound of $\exp(O(\frac{t}{2e} \ln t))$.
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a "dictator") who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows an anamorphic message to be transmitted using a double key, which the receiver also uses to decrypt it. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public-key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all) and the receiver requires a decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations.
In this paper we ask whether we can design PKAE schemes that avoid these limitations and are closer to the PKE schemes used in practice. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than just CCA) secure PKE.
Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in case only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Also here we consider security beyond IND-CPA for the anamorphic channel.
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size $n$ where
constructing an opening proof requires $O(n)$ field operations, and $2n+O(\sqrt n)$ scalar multiplications. Moreover,
the opening proof consists of a constant number of field elements.
This is a significant improvement over previous works which would require either
1. $O(n\log n)$ field operations; or
2. $O(\log n)$ size opening proof.
The main technical component is a new method of verifiably folding a witness via univariate polynomial division.
As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
Leaking Arbitrarily Many Secrets: Any-out-of-Many Proofs and Applications to RingCT Protocols
The Ring Confidential Transaction (RingCT) protocol is an effective cryptographic component for preserving the privacy of cryptocurrencies. However, existing RingCT protocols are instantiated from one-out-of-many proofs with only one secret, leading to low efficiency and weak anonymity when handling transactions with multiple inputs. Additionally, current partial knowledge proofs with multiple secrets are neither secure nor efficient enough to be applied in a RingCT protocol.
In this paper, we propose a novel \emph{any-out-of-many proof}, a logarithmic-sized zero-knowledge proof scheme for showing knowledge of arbitrarily many secrets out of a public list. Unlike other partial knowledge proofs that have to reveal the number of secrets [ACF21], our approach proves knowledge of multiple secrets without leaking their exact number. Furthermore, we improve the efficiency of our method with a generic inner-product transformation to adopt the Bulletproofs compression [BBB+18], which reduces the proof size to $2\lceil \log_2(N) \rceil + 9$.
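The quoted proof size can be checked numerically; a small sketch of the formula above (the function name is illustrative, and sizes are counted in elements):

```python
import math

# Proof size stated above: 2*ceil(log2(N)) + 9 elements for a list of N members.
def any_out_of_many_proof_size(N: int) -> int:
    return 2 * math.ceil(math.log2(N)) + 9

assert any_out_of_many_proof_size(64) == 21
assert any_out_of_many_proof_size(2**10) == 29
# Growing the list from 2^10 to 2^20 members adds only 20 elements:
assert any_out_of_many_proof_size(2**20) - any_out_of_many_proof_size(2**10) == 20
```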
Based on our proposed proof scheme, we further construct a compact RingCT protocol for privacy-preserving cryptocurrencies, which provides logarithmic-sized communication complexity for transactions with multiple inputs. More importantly, as the only known RingCT protocol instantiated from partial knowledge proofs, our protocol achieves the highest anonymity level compared with other approaches such as Omniring [LRR+19]. With some modifications, our protocol can also be applied to other settings, and we believe our techniques are applicable in further privacy-preserving scenarios, such as multiple ring signatures and coin-mixing in the blockchain.
Post Quantum Migration of Tor
The efficiency of Shor's and Grover's algorithms and the advancement of quantum computers imply that the cryptography used until now to protect one's privacy is potentially vulnerable to retrospective decryption, also known as a harvest-now, decrypt-later attack, in the near future. This dissertation provides an overview of the cryptographic schemes used by Tor, highlighting the non-quantum-resistant ones and introducing theoretical performance assessment methods for a local Tor network. The measurement is divided into three phases. We start by benchmarking a local Tor network simulation on constrained devices to isolate the time taken by classical cryptographic processes. Secondly, the analysis incorporates existing benchmarks of quantum-secure algorithms and compares their performance on the same devices. Lastly, the overhead is estimated by replacing the measured times of traditional cryptography with the times recorded for Post-Quantum Cryptography (PQC) execution within the specified Tor environment. By focusing on the replaceable cryptographic components, using theoretical estimations, and leveraging existing benchmarks, valuable insights into the potential impact of PQC can be obtained without needing to implement it fully.
Token meets Wallet: Formalizing Privacy and Revocation for FIDO2
The FIDO2 standard is a widely used class of challenge-response protocols that allow a user to authenticate to an online service using a hardware token.
Barbosa et al. (CRYPTO `21) provided the first formal security model and analysis for the FIDO2 standard.
However, their model has two shortcomings: (1) It does not include privacy, one of the key features claimed by FIDO2. (2) It only covers tokens that store {all secret keys locally}.
In contrast, due to limited memory, most existing FIDO2 tokens either derive all secret keys from a common seed or store keys on the server (the latter approach is also known as {key wrapping}).
In this paper, we revisit the security of the WebAuthn component of FIDO2 as implemented in practice. Our contributions are as follows.
(1) We adapt the model of Barbosa et al. so as to capture authentication tokens using key derivation or key wrapping.
(2) We provide the {first formal definition of privacy for the WebAuthn component of FIDO2}. We then prove the privacy of this component in common FIDO2 token implementations if the underlying building blocks are chosen appropriately.
(3) We address the unsolved problem of {global key revocation} in FIDO2.
To this end, we introduce and analyze a simple revocation procedure that builds on the popular BIP32 standard used in cryptocurrency wallets and can efficiently be implemented with existing FIDO2 servers.
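The "key derivation" token model discussed above can be sketched as follows. This is an illustrative simplification: the HMAC-SHA256 KDF and all names are assumptions for this sketch, not the derivation that FIDO2 tokens or BIP32 actually specify:

```python
import hashlib, hmac

# Sketch of a seed-based token: it stores a single master seed and
# re-derives a per-relying-party key on demand, so no per-credential
# secret needs to be kept in token memory.
def derive_credential_key(seed: bytes, rp_id: str) -> bytes:
    return hmac.new(seed, rp_id.encode(), hashlib.sha256).digest()

seed = bytes(32)  # stand-in for a random 32-byte seed in token memory
k_a = derive_credential_key(seed, "example.com")
k_b = derive_credential_key(seed, "example.org")
assert k_a != k_b and len(k_a) == 32             # distinct per relying party
assert derive_credential_key(seed, "example.com") == k_a  # deterministic
```

Determinism is what makes global revocation subtle: every per-service key traces back to one seed, which is the setting the BIP32-based revocation procedure addresses.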
Attacking Single-Cycle Ciphers on Modern FPGAs featuring Explainable Deep Learning
In this paper, we revisit the question of key recovery using side-channel analysis for unrolled, single-cycle block ciphers. In particular, we study the PRINCEv2 cipher. While it has been shown vulnerable in multiple previous studies, those studies were performed on side-channel-friendly ASICs or older FPGAs (e.g., Xilinx Virtex II on the SASEBO-G board), mostly using expensive equipment. We start with the goal of exploiting a cheap modern FPGA and board using power traces from a cheap oscilloscope; specifically, we use a Xilinx Artix-7 on the ChipWhisperer CW305 board and a PicoScope 5000A, respectively.
We split our study into three parts. First, we show that the new set-up still exhibits easily detectable leakage, using a non-specific t-test. Second, we replicate attacks from older FPGAs. Namely, we start with the attack by Yli-Mäyry et al., a simple chosen-plaintext correlation power analysis attack using divide and conquer. However, we find that even this simple, powerful attack does not work, revealing a peculiar behavior. We study this behavior using a stochastic attack that attempts to extract the leakage model, and we show that models over a small part of the state are inconsistent and depend on more key bits than expected. We also attempt classical template attacks and obtain similar results.
To further exploit the leakage, we employ deep learning techniques and succeed in key recovery, albeit using a large number of traces. We apply the explainability technique called Key Guessing Occlusion (KGO) to detect which points the neural networks exploit. When we use these points as features for the classical template attack, it still does not recover the secret key, but its performance improves compared to other feature selection techniques.
A Note on the Advanced Use of the Tate Pairing
This short note explains how the Tate pairing can be used to efficiently sample torsion points with precise requirements, along with other applications. These applications are most clearly explained on Montgomery curves using the Tate pairing of degree 2, but hold more generally for any degree or abelian variety, and even for generalized Tate pairings. This note is explanatory in nature; it contains no new results, but aims to provide a clear and concise explanation of results in the literature that are somewhat hidden, yet extremely useful in practical isogeny-based cryptography.
MULTISS: a protocol for long-term secure distributed storage over multiple remote QKD networks
We introduce MULTISS, a new distributed storage protocol over multiple remote Quantum Key Distribution (QKD) networks that ensures long-term data confidentiality. Our protocol extends LINCOS, a secure storage protocol that uses Shamir secret sharing to distribute data in a single QKD network. Instead, MULTISS uses a hierarchical secret-sharing scheme that makes certain shares mandatory for the reconstruction of the original secret. We prove that MULTISS keeps the stored data secure even if an eavesdropper (1) gets full access to all storage servers of some of the QKD networks or (2) stores all the classical communication between the QKD networks and breaks it later. We demonstrate that this is strictly more secure than LINCOS, which is broken as soon as one QKD network is compromised.
Our protocol, like LINCOS, has a procedure to update the shares stored in each QKD network without reconstructing the original data. In addition, we provide a procedure to recover from a full compromise of one of the QKD networks. In particular, we introduce a version of the protocol that can only be implemented over restricted network topologies, but minimizes the communication required in the recovery procedure.
In practice, the MULTISS protocol is designed for the case of several QKD networks at the metropolitan scale connected to each other through channels secured by classical cryptography. Hence, MULTISS offers a secure distributed storage solution in a scenario that is compatible with the current deployment of quantum networks.
“Check-Before-you-Solve”: Verifiable Time-lock Puzzles
Time-lock puzzles are cryptographic primitives that guarantee to the generator that the puzzle cannot be solved in fewer than $\mathcal{T}$ sequential computation steps. They have recently found numerous applications, e.g., in fair contract signing and sealed-bid auctions. However, solvers have no a priori guarantee about the solution they will reveal, e.g., about its ``usefulness'' within a certain application scenario. In this work, we propose verifiable time-lock puzzles (VTLPs) that address this by having the generator publish a succinct proof that the solution satisfies certain properties (without revealing anything else about it). Hence, solvers are now motivated to ``commit'' resources to solving the puzzle. We propose VTLPs that support proving arbitrary NP relations $\mathcal{R}$ about the puzzle solution.
At a technical level, to overcome the performance hurdles of the ``naive'' approach of simply solving the puzzle within a SNARK that also checks $\mathcal{R}$, our scheme combines the ``classic'' RSA time-lock puzzle of Rivest, Shamir, and Wagner with novel building blocks for ``offloading'' expensive modular group exponentiations and multiplications from the SNARK circuit. We then propose a second VTLP specifically for checking RSA-based signatures and verifiable random functions (VRFs). Our second scheme does not rely on a SNARK and has several applications, e.g., in the context of distributed randomness generation. Along the way, we propose new constant-size proofs for modular exponent relations over hidden-order groups that may be of independent interest. Finally, we experimentally evaluate the performance of our schemes and report our findings and comparisons with prior approaches.
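For background, the ``classic'' RSA time-lock puzzle of Rivest, Shamir, and Wagner mentioned above can be sketched in a few lines (toy parameters chosen for this sketch; this is not the paper's VTLP):

```python
# Toy sketch of the RSW time-lock puzzle (illustrative 27-bit modulus;
# real deployments need a 2048-bit+ RSA modulus and a large T).
p, q = 10007, 10009          # secret factorization, known to the generator
N, phi = p * q, (p - 1) * (q - 1)

def generate(x: int, T: int) -> int:
    # Generator's trapdoor: reduce the exponent 2^T modulo phi(N).
    return pow(x, pow(2, T, phi), N)

def solve(x: int, T: int) -> int:
    # Solver has no trapdoor: T sequential squarings modulo N.
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

x, T = 123456, 10_000
assert generate(x, T) == solve(x, T)  # same value, vastly different effort
```

The VTLP problem is exactly that the solver learns nothing in advance about `solve(x, T)`; the paper's proof attests properties of that value without revealing it.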
A note on "industrial blockchain threshold signatures in federated learning for unified space-air-ground-sea model training"
We show that the threshold signature scheme [J. Ind. Inf. Integr. 39: 100593 (2024)] is insecure against forgery attack. An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm, so as to convert the legitimate signature $(sig, s, r_x)$ of message $m$ into a valid signature $(sig, s, r_x')$ of any message $m'$.
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. In fact, Shannon proved rigorously that the one-time-pad system achieves unconditional security, but because the one-time pad requires the key to be at least as long as the plaintext, transferring the key is a troublesome problem that restricts its use in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism ultimately rests on a one-wayness assumption. In fact, the existence of one-way functions directly implies the important conclusion P≠NP.
In this paper, we construct a novel short-key block cipher algorithm. The core feature of this algorithm is that, for any block, when a plaintext-ciphertext pair is known, every key in the key space is valid; that is, for each block, the plaintext-ciphertext pair and the key are independent, and independence between blocks is also easy to construct. This feature is completely different from all existing short-key cipher algorithms.
Based on the above feature, we construct a problem and theoretically prove that it satisfies the properties of a one-way function, thereby settling the question of the existence of one-way functions, that is, directly proving that P≠NP.
An efficient collision attack on Castryck-Decru-Smith’s hash function
In 2020, Castryck-Decru-Smith constructed a hash function using the (2,2)-isogeny graph of superspecial principally polarized abelian surfaces. In their construction, the initial surface was chosen from vertices quite "close" to the square of a supersingular elliptic curve with a known endomorphism ring. In this paper, we propose an algorithm for recovering a collision on their hash function. Under some heuristic assumptions, the time and space complexities of our algorithm are estimated to be $\widetilde{O}(p^{3/10})$, which is smaller than the complexity $\widetilde{O}(p^{3/2})$ the authors had claimed necessary to recover such a collision, where $p$ is the characteristic of the base field. In the particular case where $p$ has a special form, both the time and space complexities of our algorithm are polynomial in $\log{p}$. We implemented our algorithm in MAGMA, and succeeded in recovering a collision in 17 hours (using 64 parallel computations) under a parameter setting the authors had claimed to be 384-bit secure. Finally, we propose a simple countermeasure against our attack, which is expected to restore the complexity required to recover a collision to $\widetilde{O}(p)$.
Triple Ratchet: A Bandwidth Efficient Hybrid-Secure Signal Protocol
Secure messaging apps have seen growing adoption and are used by billions of people daily. However, due to the imminent threat of a "Harvest Now, Decrypt Later" attack, secure messaging providers must react now in order to make their protocols $\textit{hybrid-secure}$: at least as secure as before, but now also post-quantum (PQ) secure. Since many of these apps are internally based on the famous Signal Double-Ratchet (DR) protocol, making Signal hybrid-secure is of great importance.
In fact, Signal and Apple already put in production various Signal-based variants with certain levels of hybrid security: PQXDH (only on the initial handshake), and PQ3 (on the entire protocol), by adding a $\textit{PQ-ratchet}$ to the DR protocol. Unfortunately, due to the large communication overheads of the $\mathsf{Kyber}$ scheme used by PQ3, real-world PQ3 performs this PQ-ratchet approximately every 50 messages. As we observe, the effectiveness of this amortization, while reasonable in the best-case communication scenario, quickly deteriorates in other still realistic scenarios; causing $\textit{many consecutive}$ (rather than $1$ in $50$) re-transmissions of the same $\mathsf{Kyber}$ public keys and ciphertexts (of combined size 2272 bytes!).
In this work we design a new Signal-based, hybrid-secure messaging protocol, which significantly reduces the communication complexity of PQ3. We call our protocol "the $\textit{Triple Ratchet}$" (TR) protocol. First, TR uses $\textit{erasure codes}$ to make the communication inside the PQ-ratchet provably balanced. This results in much better $\textit{worst-case}$ communication guarantees of TR, as compared to PQ3. Second, we design a novel "variant" of $\mathsf{Kyber}$, called $\mathsf{Katana}$, with significantly smaller combined length of ciphertext and public key (which is the relevant efficiency measure for "PQ-secure ratchets"). For 192 bits of security, $\mathsf{Katana}$ improves this key efficiency measure by over 37%: from 2272 to 1416 bytes. In doing so, we identify a critical security flaw in prior suggestions to optimize communication complexity of lattice-based PQ-ratchets, and fix this flaw with a novel proof relying on the recently introduced hint-MLWE assumption.
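The quoted savings follow from quick arithmetic on the two sizes given above:

```python
# Combined public-key + ciphertext sizes (bytes) quoted above for 192-bit security.
pq3_kyber_bytes, katana_bytes = 2272, 1416
saving = 1 - katana_bytes / pq3_kyber_bytes
assert pq3_kyber_bytes - katana_bytes == 856  # bytes saved per re-transmission
assert saving > 0.37                          # "over 37%", as stated
```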
During the development of this work we have been in discussion with the Signal team, and they are actively evaluating bringing a variant of it into production in a future iteration of the Signal protocol.
A Unified Treatment of Anamorphic Encryption
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared "double key". In anamorphic encryption, confidentiality must be preserved even if the adversary (or the "dictator") has access to all regular keys. Since its introduction, it has been the subject of several works that explore tweaks and extensions to the core primitive. However, this study has not been systematic, so disparate security notions have been proposed whose relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks.
In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic "sender", namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for "asymmetric" anamorphic encryption, and explore the case where the dictator and the anamorphic sender collude.
BitGC: Garbled Circuits with 1 Bit per Gate
We present BitGC, a garbling scheme for Boolean circuits with one bit of communication per gate, based on either the ring learning-with-errors (RLWE) or the NTRU assumption, with key-dependent message security. The garbling consists of (1) a homomorphically encrypted seed that can be expanded into encryptions of many pseudorandom bits, and (2) one bit of stitching information per gate to reconstruct the garbled tables from the expanded ciphertexts. By using low-complexity PRGs, both the garbling and the evaluation of each gate require only O(1) homomorphic addition/multiplication operations, without bootstrapping.
HammR: A ZKP Protocol for Fixed Hamming-Weight Restricted-Entry Vectors
In this paper, we introduce $\mathsf{HammR}$, a generic Zero-Knowledge Proof (ZKP) protocol demonstrating knowledge of a secret vector that has a fixed Hamming weight with entries taken from a shifted multiplicative group.
As special cases, we can directly apply this protocol to restricted vectors and to rank-1 vectors, i.e., vectors whose entries lie in a one-dimensional subspace of $\mathbb{F}_q$.
We show that these proofs can be batched with low computational overhead, and we prove that the general framework is complete, sound, and zero-knowledge, and thus a genuine ZKP.
Finally, we present applications of $\mathsf{HammR}$ to various Syndrome Decoding Problems, including the Regular and Restricted SDPs, as well as other implementations such as lookup instances, proof of proximity, and electronic voting protocols.
Black-Box Constant-Round Secure 2PC with Succinct Communication
The most fundamental performance metrics of secure multi-party computation (MPC) protocols are related to the number of messages the parties exchange (i.e., round complexity), the size of these messages (i.e., communication complexity), and the overall computational resources required to execute the protocol (i.e., computational complexity). Another quality metric of MPC protocols is the black-box or non-black-box use of the underlying cryptographic primitives. Indeed, the design of black-box MPC protocols, besides being of theoretical interest, usually leads to protocols with better computational complexity.
In this work, we aim to optimize the round and communication complexity of black-box secure multi-party computation in the plain model, by designing a constant-round two-party computation protocol in the malicious setting, whose communication complexity is only polylogarithmic in the size of the function being evaluated.
We successfully design such a protocol, having only black-box access to fully homomorphic encryption, trapdoor permutations, and hash functions. To the best of our knowledge, our protocol is the first to make black-box use of standard cryptographic primitives while achieving almost asymptotically optimal communication and round complexity.
Cross-Platform Benchmarking of the FHE Libraries: Novel Insights into SEAL and OpenFHE
The rapid growth of cloud computing and data-driven applications has amplified privacy concerns, driven by the increasing demand to process sensitive data securely. Homomorphic encryption (HE) has become a vital solution for addressing these concerns by enabling computations on encrypted data without revealing its contents. This paper provides a comprehensive evaluation of two leading HE libraries, SEAL and OpenFHE, examining their performance, usability, and support for prominent HE schemes such as BGV and CKKS. Our analysis highlights computational efficiency, memory usage, and scalability across Linux and Windows platforms, emphasizing their applicability in real-world scenarios. Results reveal that Linux outperforms Windows in computation efficiency, with OpenFHE emerging as the optimal choice across diverse cryptographic settings. This paper provides valuable insights for researchers and practitioners to advance privacy-preserving applications using FHE.
Module Learning with Errors with Truncated Matrices
The Module Learning with Errors ($\mathsf{MLWE}$) problem is one of the most commonly used hardness assumptions in lattice-based cryptography. In its standard version, a matrix $\mathbf{A}$ is sampled uniformly at random over a quotient ring $R_q$, together with noisy linear equations of the form $\mathbf{A} \mathbf{s}+ \mathbf{e} \bmod q$, where $\mathbf{s}$ is the secret, sampled uniformly at random over $R_q$, and $\mathbf{e}$ is the error, coming from a Gaussian distribution. Many previous works have focused on variants of $\mathsf{MLWE}$ where the secret and/or the error are sampled from different distributions. Only a few works have considered different distributions for the matrix $\mathbf{A}$. One variant proposed in the literature considers matrix distributions where the low-order bits of a uniform $\mathbf{A}$ are deleted. This is a natural approach to save bandwidth. We call it truncated $\mathsf{MLWE}$.
In this work, we show that the hardness of standard $\mathsf{MLWE}$ implies the hardness of truncated $\mathsf{MLWE}$, both for search and decision versions. Prior works only covered the search variant and relied on the (module) $\mathsf{NTRU}$ assumption, limitations which we are able to overcome. Overall, we provide two approaches, offering different advantages. The first uses a general Rényi divergence argument, applicable to a wide range of secret/error distributions, but which only works for the search variants of (truncated) $\mathsf{MLWE}$. The second applies to the decision versions, by going through an intermediate variant of $\mathsf{MLWE}$, where additional hints on the secret are given to the adversary. However, the reduction makes use of discrete Gaussian distributions.
Quantum Attacks on Sum of Even-Mansour Construction Utilizing Online Classical Queries
The Sum of Even-Mansour (SoEM) construction, proposed by Chen et al. at Crypto 2019, has become the basis for designing symmetric schemes such as the nonce-based MAC scheme $\text{nEHtM}_{p}$ and the nonce-based encryption scheme $\text{CENCPP}^{\ast}$. In this paper, we make the first attempt to study the quantum security of SoEM under the Q1 model, where the targeted encryption oracle can only respond to classical queries rather than quantum ones.
Firstly, we propose a quantum key recovery attack on SoEM21 with a time complexity of $\tilde{O}(2^{n/3})$ along with $O(2^{n/3})$ online classical queries. Compared with the best known classical attack, which requires $O(2^{2n/3})$ time, our method offers a quadratic speedup while maintaining the same number of queries, and it beats quantum exhaustive search by a factor of $2^{n/6}$. We further propose classical and quantum key recovery attacks on the generalized SoEMs1 construction (consisting of $s\geq 2$ independent public permutations), revealing that quantum algorithms can provide a quadratic acceleration over purely classical methods. Our results also imply that the quantum security of SoEM21 cannot be strengthened merely by increasing the number of permutations.
A Practical Tutorial on Deep Learning-based Side-channel Analysis
This tutorial provides a practical introduction to Deep Learning-based Side-Channel Analysis (DLSCA), a powerful approach for evaluating the security of cryptographic implementations.
Leveraging publicly available datasets and the Google Colab service, we guide readers through the fundamental steps of DLSCA, offering clear explanations and code snippets.
We focus on the core DLSCA framework, providing references for more advanced techniques, and address the growing interest in this field driven by emerging standardization efforts like AIS 46. This tutorial is designed to be accessible to researchers, students, and practitioners seeking to learn practical DLSCA techniques and improve the security of cryptographic systems.
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), a widely used primitive for achieving deniable authentication. We propose a definition of deniability against malicious verifiers for DVS and give a construction that achieves this notion in the random oracle (RO) model. Moreover, we give a concrete attack showing that our notion is not achievable in the standard model, thereby providing a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
AutoDiVer: Automatically Verifying Differential Characteristics and Learning Key Conditions
Differential cryptanalysis is one of the main methods of cryptanalysis and has been applied to a wide range of ciphers. While it is very successful, it also relies on certain assumptions that do not necessarily hold in practice. One of these is the hypothesis of stochastic equivalence, which states that the probability of a differential characteristic behaves similarly for all keys. Several works have demonstrated examples where this hypothesis is violated, impacting the attack complexity and sometimes even invalidating prior attacks. Nevertheless, the hypothesis is still typically taken for granted. In this work, we propose AutoDiVer, an automatic tool for thoroughly verifying differential characteristics. First, the tool supports calculating the expected probability of differential characteristics while considering the key schedule of the cipher. Second, the tool supports estimating the size of the space of keys for which the characteristic permits valid pairs, and deducing conditions for these keys. AutoDiVer implements a custom SAT modeling approach and takes advantage of a combination of features of advanced SAT solvers, including approximate model counting and clause learning. To show applicability to many different kinds of block ciphers, including strongly aligned, weakly aligned, and ARX designs, we apply AutoDiVer to GIFT, PRESENT, RECTANGLE, SKINNY, WARP, SPECK, and SPEEDY.
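As standard background for the objects such tools reason about (not AutoDiVer's actual SAT model), the following sketch computes the difference distribution table (DDT) of the 4-bit PRESENT S-box; entry $[\Delta x][\Delta y]$ counts the inputs that map input difference $\Delta x$ to output difference $\Delta y$, from which characteristic probabilities are derived:

```python
# The PRESENT S-box, one of the ciphers the paper analyzes.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def ddt(sbox):
    """Difference distribution table: ddt[dx][dy] counts x with
    sbox[x] ^ sbox[x ^ dx] == dy. Probability of (dx -> dy) is entry/len."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table
```

Dividing each entry by 16 gives the per-S-box transition probability; multiplying such probabilities along a trail (the step whose key-dependence AutoDiVer scrutinizes) yields the usual characteristic probability estimate.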
Low Communication Threshold Fully Homomorphic Encryption
This work investigates constructions of threshold fully homomorphic encryption (FHE) with low communication, i.e., with small ciphertexts and small decryption shares. In this context, we discuss in detail the technicalities of achieving full-fledged threshold FHE and point out limitations of prior works, including an attack against the recent construction of Boudgoust and Scholl [ASIACRYPT 2023]. In light of our observations, we generalize the definition of threshold fully homomorphic encryption by adding an algorithm which introduces additional randomness into ciphertexts before they are decrypted by the parties. In this setting, we are able to propose a construction offering small ciphertexts and small decryption shares.
Practical Semi-Open Chat Groups for Secure Messaging Applications
Chat groups in secure messaging applications such as Signal, Telegram, and WhatsApp are nowadays used for rapid and widespread dissemination of information to large groups of people. This is common even in sensitive contexts, such as the organisation of protests, activist groups, and internal company dialogues. Manual administration of who has access to such groups quickly becomes infeasible in the presence of hundreds or thousands of members.
We construct a practical, privacy-preserving reputation system that automates the approval of new group members based on their reputation amongst the existing membership. We demonstrate security against malicious adversaries in a single-server model, with no further trust assumptions required. Furthermore, our protocol supports arbitrary reputation calculations even while almost all group members are offline (as is likely). In addition, we demonstrate the practicality of the approach via an open-source implementation. For groups of size 50 (resp. 200), an admission process for a user that received 40 (resp. 80) scores requires 1312.2 KiB (resp. 5239.4 KiB) of communication, and 3.3s (resp. 16.3s) of overall computation on a single core. While our protocol design matches existing secure messaging applications, we believe it can have value in distributed reputation computation beyond this problem setting.
Optimized Frobenius and Cyclotomic Cubing for Enhanced Pairing Computation
Efficient implementation of a pairing-based cryptosystem relies on high-performance arithmetic in finite fields $\mathbb{F}_{p}$ and their extensions $\mathbb{F}_{p^k}$, where $k$ is the embedding degree. A small embedding degree is crucial because part of the arithmetic for pairing computation occurs in $\mathbb{F}_{p^k}$ and includes operations such as squaring, multiplication, and Frobenius operations.
In this paper, we present a fast and efficient method for computing the Frobenius endomorphism, together with an analysis of its complexity. Additionally, we improve the efficiency of cyclotomic cubing for several pairing-friendly elliptic curves, an operation essential to the computation of the Tate pairing and its derivatives.
PMNS arithmetic for elliptic curve cryptography
We show that using a polynomial representation of prime field elements (PMNS) can be relevant for real-world cryptographic applications, even in terms of performance. More specifically, we consider elliptic curves for cryptography when pseudo-Mersenne primes cannot be used to define the base field (e.g. Brainpool standardized curves, JubJub curves in the zkSNARK context, pairing-friendly curves). All these primitives make massive use of the Montgomery reduction algorithm and of well-known libraries such as GMP or OpenSSL for base field arithmetic. We show how this arithmetic can be advantageously replaced by a polynomial representation of the numbers that can be easily parallelized, avoids carry propagation, and allows for randomization. We provide good PMNS bases in the cryptographic contexts mentioned above, together with a C implementation that is competitive with or faster than GMP and OpenSSL for basic operations in the base fields considered. We also integrate this arithmetic into the Rust reference implementation of elliptic curve scalar multiplication for zero-knowledge applications, and achieve better practical performance for such protocols. This shows that PMNS is an attractive alternative for the base field arithmetic layer in cryptographic primitives using elliptic curves or pairings.
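For reference, the Montgomery reduction that PMNS arithmetic is proposed to replace can be sketched at word level as follows (a toy single-precision version with hypothetical parameters; real libraries operate on multi-limb integers with per-word carries, which is precisely what PMNS avoids):

```python
# Toy word-level Montgomery reduction: computes t * R^{-1} mod p,
# for R > p a power of two and p_inv_neg = -p^{-1} mod R.
def montgomery_reduce(t, p, R, p_inv_neg):
    m = (t * p_inv_neg) % R    # m chosen so that t + m*p ≡ 0 (mod R)
    u = (t + m * p) // R       # exact division by R (a cheap shift)
    return u - p if u >= p else u
```

The division by $R$ is a bit shift, which is why the technique dominates base-field arithmetic in libraries like GMP and OpenSSL; the carry chains it entails are the bottleneck that a parallelizable polynomial representation sidesteps.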
Algebraic Cryptanalysis of Small-Scale Variants of Stream Cipher E0
This study explores the algebraic cryptanalysis of small-scale variants of the E0 stream cipher, a legacy cipher used in the Bluetooth protocol. By systematically reducing the size of the linear feedback shift registers (LFSRs) while preserving the cipher’s core structure, we investigate the relationship between the number of unknowns and the number of consecutive keystream bits required to recover the internal states of the LFSRs. Our experiments demonstrate an approximately linear relationship between the number of consecutive keystream bits required and the size of the small-scale E0 variant. We use two approaches: the computation of Gröbner bases using Magma’s F4 algorithm and the application of CryptoMiniSat’s SAT solver. Our experimental results show that increasing the number of keystream bits significantly improves computational efficiency, with the F4 algorithm achieving a speedup of up to 733× when additional equations are supplied. Furthermore, we verify the non-existence of equations of degree four or lower for up to seven consecutive keystream bits, and the non-existence of equations of degree three or lower for up to eight consecutive keystream bits, extending prior results on the algebraic properties of E0.
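For readers unfamiliar with the register structure involved, a toy Fibonacci LFSR keystream generator can be sketched as follows; the feedback taps below are hypothetical and do not correspond to E0's actual feedback polynomials (E0 combines four such registers through a nonlinear finite-state machine):

```python
# Toy Fibonacci LFSR: each step outputs the leftmost bit, then shifts
# left and appends the XOR of the tapped positions as feedback.
def lfsr_keystream(state, taps, nbits):
    """state: list of 0/1 bits; taps: indices XORed to form the feedback."""
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[0])       # output bit
        fb = 0
        for t in taps:
            fb ^= state[t]         # linear feedback
        state = state[1:] + [fb]   # shift left, append feedback
    return out
```

Each output bit is a linear function of the initial state, which is what makes writing the keystream as a system of polynomial equations in the unknown state bits (and then calling F4 or a SAT solver) natural.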
TinyLabels: How to Compress Garbled Circuit Input Labels, Efficiently
Garbled circuits are a foundational primitive in both theory and practice of cryptography. Given $(\hat{C}, K[x])$, where $\hat{C}$ is the garbling of a circuit $C$ and $K[x] = \{K[i, x_i]\}$ are the input labels for an input $x$, anyone can recover $C(x)$, but nothing else about the input $x$. Most research efforts focus on minimizing the size of the garbled circuit $\hat{C}$. In contrast, the work by Applebaum, Ishai, Kushilevitz, and Waters (CRYPTO '13) initiated the study of minimizing the cost of transferring the input labels $K[x]$. With the improvements of a follow-up by Applebaum et al. (STOC '23), the state-of-the-art techniques allow compressing the input labels to the optimal rate of $1 + o(1)$. That is, each input label can be transferred by essentially sending 1 bit. However, existing solutions are computationally expensive, requiring large numbers of public-key operations (such as RSA exponentiation).
In this work, we present an efficient input label compression technique based on Ring-LWE. We achieve the same optimal rate of $1 + o(1)$, by making use of additional communication in an offline stage (before the input $x$ becomes known), a paradigm that has already been explored in prior works. A novel feature of the offline communication in our scheme is that the information sent is either reusable or compressible using a random oracle, leading to small amortized offline cost $o(|x|)$. We further demonstrate concrete efficiency through an implementation whose online latency out-performs the naive baseline (which sends all of $K[x]$ in the online phase) in a realistic network with a bandwidth of up to 45Mbps. This break-even point could be pushed even further by leveraging the large potential for parallelization of computation.
Finally, we apply our techniques to construct maliciously-secure two-party computation protocols with succinct online communication: the online phase starts once the circuit $C$ becomes known, and requires exchanging only $\mathrm{poly}(\lambda)$ bits (independent of $|C|$). After the inputs $x_A$, $x_B$ arrive, an additional $|x_A| + |x_B| + \mathrm{poly}(\lambda)$ bits need to be sent.
MicroCrypt Assumptions with Quantum Input Sampling and Pseudodeterminism: Constructions and Separations
We investigate two natural relaxations of quantum cryptographic assumptions. First, we examine primitives such as pseudorandom generators (${PRG}$s) and pseudorandom states (${PRS}$s), extended with quantum input sampling, which we term ${PRG}^{qs}$ and ${PRS}^{qs}$. In these primitives, the input is sampled via a quantum algorithm rather than uniformly at random. The second relaxation, $\bot$-pseudodeterminism, allows the generator to output $\bot$ on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between (bounded-query) logarithmic-sized ${PRS}^{qs}$, logarithmic-sized ${PRS}^{qs}$, and ${PRG}^{qs}$. Notably, such an equivalence remains unknown for the uniform key sampling versions of these primitives. Furthermore, we establish that ${PRG}^{qs}$ can be constructed from $\bot$-pseudodeterministic ${PRG}$s ($\bot{-PRG}$s).
To further justify our exploration, we present two separation results. First, we examine the relationship between $\bot$-pseudodeterministic notions and their deterministic counterparts. We show that there does not exist a black-box construction of a one-way state generator $({OWSG})$ from a $\bot{-PRG}$, indicating that $\bot$-pseudodeterministic primitives may be inherently weaker than their deterministic counterparts. Second, we explore the distinction between quantum and uniform input sampling. We prove that there does not exist a black-box construction of a $\bot$-pseudodeterministic ${OWSG}$ from a ${PRF}^{qs}$, suggesting that primitives relying on quantum input sampling may be weaker than those using traditional uniform sampling. Given the broad cryptographic applicability of ${PRF}^{qs}$s and $\bot{-PRG}$s, these separation results yield numerous new insights into the hierarchy of primitives within MicroCrypt.
zkAML: Zero-knowledge Anti Money Laundering in Smart Contracts with whitelist approach
In the interconnected global financial system, anti-money laundering (AML) and combating the financing of terrorism (CFT) regulations are indispensable for safeguarding financial integrity. However, while illicit transactions constitute only a small fraction of overall financial activities, traditional AML/CFT frameworks impose uniform compliance burdens on all users, resulting in inefficiencies, transaction delays, and privacy concerns.
These issues stem from the institution-centric model, where financial entities independently conduct compliance checks, resulting in repeated exposure of personally identifiable information (PII) and operational bottlenecks.
To address these challenges, we introduce \textsf{zkAML}, a cryptographic framework that offers a novel approach to AML/CFT compliance. By leveraging zero-knowledge Succinct Non-Interactive Argument of Knowledge (zk-SNARK) proofs, \textsf{zkAML}~enables users to cryptographically demonstrate their regulatory compliance without revealing sensitive personal information. This approach eliminates redundant identity checks, streamlines compliance procedures, and enhances transaction efficiency while preserving user privacy.
We implement and evaluate \textsf{zkAML}~on a blockchain network to demonstrate its practicality. Our experimental results show that \textsf{zkAML}~achieves 55 transactions per second (TPS) on a public network and 324 TPS on a private network. The zk-SNARK proof generation times are $226.59$ms for senders and $215.76$ms for receivers, with a constant verification time of $1.47$ms per transaction. These findings highlight \textsf{zkAML}'s potential as a privacy-preserving and regulation-compliant solution for modern financial systems.
SoK: Efficient Design and Implementation of Polynomial Hash Functions over Prime Fields
Poly1305 is a widely-deployed polynomial hash function. The rationale behind its design was laid out in a series of papers by Bernstein, the last of which dates back to 2005. As computer architectures evolved, some of its design features became less relevant, but implementers found new ways of exploiting these features to boost its performance. However, would we still converge to this same design if we started afresh with today's computer architectures and applications? To answer this question, we gather and systematize a body of knowledge concerning polynomial hash design and implementation that is spread across research papers, cryptographic libraries, and developers' blogs. We develop a framework to automate the validation and benchmarking of the ideas that we collect. This approach leads us to five new candidate designs for polynomial hash functions. Using our framework, we generate and evaluate different implementations and optimization strategies for each candidate. We obtain substantial improvements over Poly1305 in terms of security and performance. Besides laying out the rationale behind our new designs, our paper serves as a reference for efficiently implementing polynomial hash functions, including Poly1305.
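The core object under discussion can be sketched in a few lines: a Poly1305-style polynomial hash evaluates the message polynomial at a secret point $r$ over a prime field via Horner's rule (key clamping, block padding, and the final masking step of the real Poly1305 are omitted here):

```python
# Minimal sketch of a polynomial hash over Poly1305's prime field.
# Blocks m_1..m_k are hashed to (m_1*r^k + m_2*r^(k-1) + ... + m_k*r) mod P.
P = 2**130 - 5  # the prime used by Poly1305

def poly_hash(blocks, r):
    """Evaluate the hash polynomial at the secret point r via Horner's rule."""
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % P  # fold in the next block, multiply by r
    return acc
```

Design questions surveyed in the paper, such as the choice of prime, radix, and evaluation order, all revolve around making this inner loop fast on a given architecture while bounding the forgery probability.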
Multi-Party Computation in Corporate Data Processing: Legal and Technical Insights
This paper examines the deployment of Multi-Party Computation (MPC) in corporate data processing environments, focusing on its legal and technical implications under the European Union’s General Data Protection Regulation (GDPR). By combining expertise in cryptography and legal analysis, we address critical questions necessary for assessing the suitability of MPC for real-world applications. Our legal evaluation explores the conditions under which MPC qualifies as an anonymizing approach under GDPR, emphasizing the architectural requirements, such as the distribution of control among compute parties, to minimize re-identification risks effectively. The assertions put forth in the legal opinion are validated by two distinct assessments conducted independently.
We systematically answer key regulatory questions, demonstrating that a structured legal assessment is indispensable for organizations aiming to adopt MPC while ensuring compliance with privacy laws. In addition, we complement this analysis with a practical implementation of privacy-preserving analytics using Carbyne Stack, a cloud-native open-source platform for scalable MPC applications, which integrates the MP-SPDZ framework as its backend. We benchmark SQL queries under various security models to evaluate scalability and efficiency.
Practical Key Collision on AES and Kiasu-BC
The key collision attack was proposed as an open problem in key-committing security in Authenticated Encryption (AE) schemes like $\texttt{AES-GCM}$ and $\texttt{ChaCha20Poly1305}$. At ASIACRYPT 2024, Taiyama et al. introduced a novel type of key collision: the target-plaintext key collision ($\texttt{TPKC}$) for $\texttt{AES}$. Depending on whether the plaintext is fixed, $\texttt{TPKC}$ can be divided into $\texttt{fixed-TPKC}$ and $\texttt{free-TPKC}$, which can be directly converted into collision attacks and semi-free-start collision attacks on the Davies-Meyer ($\texttt{DM}$) hashing mode.
In this paper, we propose a new rebound attack framework leveraging a time-memory tradeoff strategy, enabling practical key collision attacks with optimized complexity. We also present an improved automatic method for finding \textit{rebound-friendly} differential characteristics by controlling the probabilities in the inbound and outbound phases, allowing the identified characteristics to be directly used in rebound-based key collision attacks. Through our analysis, we demonstrate that the 2-round $\texttt{AES-128}$ $\texttt{fixed-TPKC}$ attack proposed by Taiyama et al. is in fact a $\texttt{free-TPKC}$ attack, while $\texttt{fixed-TPKC}$ attacks are considerably more challenging than $\texttt{free-TPKC}$ attacks. By integrating our improved automatic method with the new rebound attack framework, we identify a new differential characteristic for the 2-round $\texttt{AES-128}$ $\texttt{fixed-TPKC}$ attack and develop the first practical $\texttt{fixed-TPKC}$ attack against 2-round $\texttt{AES-128}$. Additionally, we present practical $\texttt{fixed-TPKC}$ attacks against 5-round $\texttt{AES-192}$ and 3-round $\texttt{Kiasu-BC}$, along with a practical $\texttt{free-TPKC}$ attack against 6-round $\texttt{Kiasu-BC}$. Furthermore, we reduce the time complexities of $\texttt{free-TPKC}$ and $\texttt{fixed-TPKC}$ attacks on other $\texttt{AES}$ variants.
Machine-checking Multi-Round Proofs of Shuffle: Terelius-Wikstrom and Bayer-Groth
Shuffles are used in electronic voting in much the same way physical ballot boxes are used in paper systems: (encrypted) ballots are input into the shuffle and (encrypted) ballots are output in a random order, thereby breaking the link between voter identities and ballots. To guarantee that no ballots are added, omitted or altered, zero-knowledge proofs, called proofs of shuffle, are used to provide publicly verifiable transcripts that prove that the outputs are a re-encrypted permutation of the inputs. The most prominent proofs of shuffle, in practice, are those due to Terelius and Wikström (TW), and Bayer and Groth (BG). TW is simpler whereas BG is more efficient, both in terms of bandwidth and computation. Security for the simpler (TW) proof of shuffle has already been machine-checked, but several prominent vendors insist on using the more complicated BG proof of shuffle. Here, we machine-check the security of the Bayer-Groth proof of shuffle via the Coq proof assistant. We then extract the verifier (software) required to check the transcripts produced by Bayer-Groth implementations and use it to check transcripts from the Swiss Post e-voting system under development for national elections in Switzerland.
Achieving Data Reconstruction Hardness and Efficient Computation in Multiparty Minimax Training
Generative models have achieved remarkable success in a wide range of applications. Training such models using proprietary data from multiple parties has been studied in the realm of federated learning. Yet recent studies have shown that authentic training data can be reconstructed in such settings.
On the other hand, multiparty computation (MPC) guarantees standard data privacy, yet scales poorly for training generative models.
In this paper, we focus on improving reconstruction hardness during Generative Adversarial Network (GAN) training while keeping the training cost tractable. To this end, we explore two training protocols that use a public generator and an MPC discriminator: Protocol 1 (P1) uses a fully private discriminator, while Protocol 2 (P2) privatizes the first three discriminator layers. We prove reconstruction hardness for P1 and P2 by showing that (1) a public generator does not allow recovery of authentic training data, as long as the first two layers of the discriminator are private; and through an existing approximation hardness result on ReLU networks, (2) a discriminator with at least three private layers does not allow authentic data reconstruction with algorithms polynomial in network depth and size. We show empirically that compared with fully MPC training, P1 reduces the training time by $2\times$ and P2 further by $4-16\times$.
Revisiting the Security and Privacy of FIDO2
We revisit the privacy and security analyses of FIDO2, a widely deployed standard for passwordless authentication on the Web.
We discuss previous works and conclude that each of them has at least one of the following limitations: (i) impractical trusted setup assumptions, (ii) security models that are inadequate in light of the state of the art in practical attacks, or (iii) not analyzing FIDO2 as a whole, especially with respect to its privacy guarantees.
Our work addresses these gaps and proposes revised security models for privacy and authentication. Equipped with our new models, we analyze FIDO2 modularly and focus on its component protocols, WebAuthn and CTAP2, clarifying their exact security guarantees.
In particular, our results, for the first time, establish privacy guarantees for FIDO2 as a whole.
Furthermore, we suggest minor modifications that can help FIDO2 provably meet stronger privacy and authentication definitions and withstand known and novel attacks.
Round Optimal Fully Secure Distributed Key Generation
Protocols for distributed (threshold) key generation (DKG) in the discrete-logarithm setting have received a tremendous amount of attention in the past few years. Several synchronous DKG protocols have been proposed, but most such protocols are not fully secure: they either allow corrupted parties to bias the key, or are not robust and allow malicious parties to prevent successful generation of a key.
We explore the round complexity of fully secure DKG in the honest-majority setting where it is feasible. We show the impossibility of one-round, unbiased DKG protocols (even satisfying weaker notions of security), regardless of any prior setup. On the positive side, we show various round-optimal protocols for fully secure DKG offering tradeoffs in terms of their efficiency, necessary setup, and required assumptions.
Voting with coercion resistance and everlasting privacy using linkable ring signatures
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials, together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework: when an adversary tries to coerce a voter, the voter can evade the attack by creating a signature with a fake but indistinguishable credential and, during a moment of privacy, cast their real vote. Our scheme also provides verifiability and ballot secrecy.
SoK: Time to be Selfless?! Demystifying the Landscape of Selfish Mining Strategies and Models
Selfish mining attacks present a serious threat to Bitcoin security, enabling a miner with less than 51% of the network hashrate to gain higher rewards than their fair share. A growing body of work has studied the impact of such attacks and presented numerous strategies under a variety of model settings. This has led to a complex landscape, making it hard to comprehend the state of the art and distill insights, gaps, and trade-offs.
In this paper, we demystify the landscape of selfish mining in Bitcoin by systematizing existing studies and evaluating more realistic model adaptations. To the best of our knowledge, our work is the first of its kind. We develop a multi-dimensional systematization framework assessing prior works based on their strategy formulation and targeted models. We go on to distill a number of insights and gaps highlighting open questions and understudied areas. Among them, we find that most of the surveyed works target the block-reward setting, do not account for transaction fees, and generally consider only single attackers. To bridge this gap, we evaluate several existing strategies in the transaction-fee regime, where miners' incentives come solely from transaction fees, for both single and multi-attacker scenarios. We also extend their models to include honest-but-rational miners, showing how such adaptations can yield more performant strategy variants. Finally, we touch upon defenses proposed in the literature, and discuss connections between selfish mining and relevant incentivized/fee-driven paradigms.
CAKE requires programming - On the provable post-quantum security of (O)CAKE
In this work we revisit the post-quantum security of KEM-based password-authenticated key exchange (PAKE), specifically that of (O)CAKE. So far, these schemes have evaded a security proof that considers quantum adversaries. We give a detailed analysis of why this is the case, determining the missing proof techniques. To this end, we first provide a proof of security in the post-quantum setting, up to a single gap. This proof already turns out to be technically involved, requiring advanced techniques to reason in the QROM, including the compressed oracle and the extractable QROM. To pave the way towards closing the gap, we then further identify an efficient simulator for the ideal cipher. This provides certain programming abilities as a necessary and sufficient condition to close the gap in the proof: we demonstrate that we can close the gap using the simulator, and give a meta-reduction based on KEM-anonymity that shows the impossibility of a non-programming reduction that covers a class of KEMs that includes Kyber / ML-KEM.
Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time
Polynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proofs and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent, constant-size polynomial commitment scheme called Behemoth with constant-size opening proofs and a constant-time verifier. The downside of Behemoth is that it employs a cubic prover in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.
A 10-bit S-box generated by Feistel construction from cellular automata
In this paper, we propose a new 10-bit S-box generated from a Feistel construction. The subpermutations are generated by a 5-cell cellular automaton based on a single well-chosen rule and bijective affine transformations. In particular, the cellular automaton rule is chosen based on empirical tests of its ability to generate good pseudorandom output on a ring cellular automaton. Similarly, the layout of the Feistel network is based on empirical data regarding the quality of the output S-box.
We perform cryptanalysis of the generated 10-bit S-box: we test the properties of algebraic degree, algebraic complexity, nonlinearity, strict avalanche criterion, bit independence criterion, linear approximation probability, differential approximation probability, differential uniformity and boomerang uniformity of our S-box, and relate them to those of the AES S-box. We find security properties comparable to or sometimes even better than those of the standard AES S-box. We believe that our S-box could be used to replace the 5-bit substitution of ciphers like ASCON.
A Democratic Distributed Post-Quantum Certificateless Encryption Scheme
We propose a post-quantum certificateless encryption scheme based on a web of trust instead of a centralized Key Generation Center. Our scheme allows nodes to communicate securely. The nodes already present in the network vote on the acceptance of new nodes and agree on the shared key. The threshold required for the acceptance of a new node is configurable. Our protocol thus operates entirely without a Key Generation Center (or Key Distribution Center).
Our scheme is based on the Quasi-Cyclic Moderate Density Parity Check Code (QC-MDPC) variant of McEliece, which is resistant to quantum computer attacks. The voting system uses Shamir secret sharing coupled with the Kabatianskii-Krouk-Smeets signature scheme, both of which are also resistant to quantum computer attacks.
We provide a security analysis of our protocol, as well as a formal verification and proof-of-concept code.
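The Shamir secret sharing underlying the voting system works as in this minimal sketch (toy field size and parameters; the protocol's actual instantiation differs): the secret is the constant term of a random polynomial, each party gets one evaluation, and any threshold-sized subset reconstructs by Lagrange interpolation.

```python
import random

P = 2087  # small prime field for illustration only

def share(secret, t, n):
    # Random degree-(t-1) polynomial with constant term = secret;
    # share i is its evaluation at x = i.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 of 5 shares suffice
```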
StaMAC: Fault Protection via Stable-MAC Tags
Fault attacks pose a significant threat to cryptographic implementations, motivating the development of countermeasures primarily based on a combination of redundancy and masking techniques. Redundancy in these countermeasures is often implemented via duplication or linear codes. However, their inherent structure remains susceptible to strategic fault injections that bypass error checks. To address this, the CAPA countermeasure from CRYPTO 2018 leveraged information-theoretic MAC tags for protection against fault and combined attacks. However, a recent attack has shown that CAPA can protect against either side-channel analysis or fault attacks, but not both simultaneously, and it incurs significant hardware costs. Its successor, M&M, improves efficiency but lacks protection against ineffective faults.
In this paper, we propose StaMAC, a framework aimed at securely incorporating MAC tags against both side-channel and fault adversaries in a non-combined scenario. We extend the security notions outlined in StaTI from TCHES 2024 and propose the notion of MAC-stability, ensuring fault propagation in masked and MACed circuits and necessitating only a single error check at the end of the computation. Additionally, we show that the stability notion from StaTI is arbitrarily composable (whereas it was previously thought to be only serially composable), making it the first arbitrarily composable fault-security notion that does not require intermediate error checks or correction. Next, we establish the improved protection of masking combined with MAC tags compared to linear encoding techniques by showing bounds on the advantage of several fault adversaries: a gate/register-faulting adversary, an arbitrary register-faulting adversary, and a random register-faulting adversary. We then show how to transform any probing-secure circuit to protect against fault attacks using the proposed MAC-stable gadgets implementing field operations. Finally, we demonstrate StaMAC on an AES implementation, evaluating its security and hardware costs compared to countermeasures using MAC tags.
Quantum circuit for implementing AES S-box with low costs
Advanced Encryption Standard (AES) is one of the most widely used and extensively studied encryption algorithms worldwide, renowned for its efficiency and robust resistance to attacks. In this paper, three quantum circuits are designed to implement the S-box, the sole nonlinear component of AES. By incorporating a linear key schedule, we achieve a quantum circuit implementing AES with a minimal number of qubits. As a consequence, only 264/328/398 qubits are needed to implement the quantum circuits for AES-128/192/256. Furthermore, through the quantum circuits of the S-box and key schedule, the overall size of the quantum circuit required for Grover's algorithm to attack AES is significantly decreased. This enhancement improves both the security and resource efficiency of AES in a quantum computing environment.
Differential Analysis of Feistel Ciphers Incorporating Ajtai SIS Hash Function
This paper presents a framework for evaluating the differential cryptanalysis resistance of a Feistel cipher that uses the Ajtai SIS hash function as its S-box. We derive an upper bound on the maximum differential probability and analyze the S-box output bias using a generalized extreme value (GEV) model. Simulation results indicate that practical security is achieved with 16 rounds for a 32-bit block and six rounds for a 128-bit block.
Verifiable Secret Sharing Based on Fully Batchable Polynomial Commitment for Privacy-Preserving Distributed Computation
Privacy-preserving distributed computation enables a resource-limited client to securely delegate computations on sensitive data to multiple servers by distributing shares of the data. In such systems, verifiable secret sharing (VSS) is a fundamental component, ensuring secure data distribution and directly impacting the overall performance. The most practical approach to construct VSS is through polynomial commitment (PC), with two main research directions to improve the VSS efficiency. The first focuses on improving the dealer time by designing PC that supports batch evaluation, i.e., generating multiple evaluation/proof pairs in one shot. The second aims to reduce the broadcast cost by designing PC that supports batch opening, i.e., producing a compact proof for multiple evaluations.
Recently, Zhang et al. (Usenix Security 2022) proposed a transparent PC that supports batch evaluation and obtained a transparent VSS with optimal dealer time. However, their scheme does not support batch opening, leading to high broadcast costs in VSS. To the best of our knowledge, no transparent PC currently supports both batch evaluation and batch opening, thus limiting the performance of existing VSS schemes.
In this paper, we propose a transparent fully batchable polynomial commitment (TFB-PC) that simultaneously supports batch evaluation and batch opening. Leveraging TFB-PC, we present a VSS scheme with optimal complexity: $O(n\log n)$ dealer time, $O(n)$ participant time, and $O(n)$ communication cost. Furthermore, we implement our VSS scheme and compare its performance with Zhang et al.'s VSS (the naive approach). Results show that our scheme achieves a $954\text{-}27,595\times$ reduction in communication cost and a $1,028\text{-}1,155,106\times$ speed-up in participant time for $2^{11}$-$2^{21}$ parties.
Polar Lattice Cryptography
We present a protocol that shifts security responsibility from the cipher designer to the cipher user. The Polar Lattice is a pattern-devoid cryptographic cipher based on a geometric construct -- a polar lattice -- on which the letters of a plaintext alphabet A are represented as two points each. To transmit a letter, the transmitter sends a randomized pathway, or trail (the ciphertext), consisting of a set of steps on the lattice that begins at the first point of the letter and ends at its second point. Once a letter is transmitted, the next bits of the ciphertext mark the beginning of the pathway for the next letter. The size and geometric construction of the polar lattice are randomized and kept secret. Because the randomized pathways may be long or short, an attacker does not know how to parse the ciphertext into individual trails pointing to distinct letters of the plaintext alphabet A. The polar lattice may be implemented algebraically or geometrically; the lattice may even be a physical nano-construct. The polar lattice is very power-efficient and very fast. It claims all the attributes associated with pattern-devoid cryptography: it allows only brute-force cryptanalysis, which in turn can be defeated through increased ciphertext size, unlimited key size, and structural complexity.
Succinctly Verifiable Computation over Additively-Homomorphically Encrypted Data: Making Privacy-Preserving Blueprints Practical
With additively homomorphic encryption (AHE), one can compute, from input ciphertexts $\mathsf{Enc}(x_1),\ldots,\mathsf{Enc}(x_n)$, and additional inputs $y_1,\ldots,y_k$, a ciphertext $c_\textit{f}=\mathsf{Enc}(f(x_1,\ldots,x_n,y_1,\ldots, y_k))$ for any polynomial $f$ in which each monomial has total degree at most $1$ in the $x$-variables (but with arbitrary degree in the known $y$-variables). For AHE that satisfies a set of natural requirements, we give a zero-knowledge proof system for showing that a ciphertext $c_\textit{f}$ is the result of homomorphically evaluating $f$ on ciphertexts $(c_1,\ldots,c_n)$ = $(\mathsf{Enc}(x_1),\ldots,\mathsf{Enc}(x_n))$ and private inputs $y_1,\ldots,y_k$ that correspond to commitments $C_1,\ldots,C_k$ where the encrypted values, $x_1,\ldots,x_n$ are unknown to the prover. Our proofs are succinct, i.e., their size is independent of the number of ciphertexts $n$, and is instead $O(k\log d)$ where $k$ is the number of private inputs, and $d$ is the maximum degree of any variable in $f$.
We give two ways of instantiating this framework: with ElGamal-based encryption (under the DDH assumption) and with a variant of the Camenisch-Shoup cryptosystem (under the DCR and Strong RSA assumptions). Both yield proof systems where computing and verifying the proof takes a comparable amount of time to homomorphically evaluating $f$.
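The additive homomorphism at the heart of this framework can be sketched with exponential ElGamal over a toy group (illustrative parameters only; decryption brute-forces a small discrete logarithm, which real systems avoid or bound): from $\mathsf{Enc}(x_1)$, $\mathsf{Enc}(x_2)$ and a known scalar $y$, anyone can form $\mathsf{Enc}(y x_1 + x_2)$ without the secret key.

```python
import random

# Exponential-ElGamal sketch of degree-1 homomorphic evaluation.
# Toy parameters: subgroup of prime order Q = 509 in Z_1019^*.

P = 1019
Q = (P - 1) // 2
G = 4                      # generator of the order-Q subgroup

sk = random.randrange(1, Q)
pk = pow(G, sk, P)

def enc(m):
    r = random.randrange(1, Q)
    return (pow(G, r, P), pow(G, m, P) * pow(pk, r, P) % P)

def add(c1, c2):           # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1[0] * c2[0] % P, c1[1] * c2[1] % P)

def scal(y, c):            # Enc(m)^y = Enc(y * m)
    return (pow(c[0], y, P), pow(c[1], y, P))

def dec(c):
    gm = c[1] * pow(c[0], Q - sk, P) % P          # g^m = c2 / c1^sk
    return next(m for m in range(Q) if pow(G, m, P) == gm)  # small DL

cf = add(scal(3, enc(5)), enc(7))                 # Enc(3*5 + 7)
assert dec(cf) == 22
```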
Next, we show that our framework yields a dramatically improved privacy-preserving blueprint (PPB) system. Introduced by Kohlweiss, Lysyanskaya, and Nguyen (Eurocrypt'23), an $f$-PPB system allows an auditor with secret input $x$ to create a public encoding ${\sf pk}$ of the function $f(x,\cdot)$ that reveals nothing about $x$.
Yet, it allows a user to compute an encoding, or escrow $Z$, of the value $f(x,y)$ on input the user's private data $y$ corresponding to a commitment $C_y$; $Z$ will verifiably correspond to the commitment $C_y$. The auditor will be able to recover $f(x,y)$ from $Z$, but will learn no other information about $y$. For example, if $f$ is the watchlist function where $f(x,y)$ outputs $y$ only in the event that $y$ is on the list $x$, then an $f$-PPB allows the auditor to trace watchlisted users in an otherwise anonymous system.
Using our succinct zero-knowledge proof system for additively homomorphic computation we achieve the following results: (1) We provide efficient schemes for a bigger class of functions $f$; for example, we show how to realize $f$ that would allow the auditor to trace private payment transactions of a criminal suspect which was previously not efficient. (2) For the watchlist and related functions, we reduce the size of the escrow $Z$ from linear in the size of the auditor's input $x$, to logarithmic.
Additionally, we define and satisfy a stronger notion of security for $f$-PPBs, where a malicious auditor cannot frame a user for a transaction in which the user was not involved.
Analysis of the Telegram Key Exchange
We describe, formally model, and prove the security of Telegram's key exchange protocols for client-server communications. To achieve this, we develop a suitable multi-stage key exchange security model along with pseudocode descriptions of the Telegram protocols that are based on analysis of Telegram's specifications and client source code. We carefully document how our descriptions differ from reality and justify our modelling choices. Our security proofs reduce the security of the protocols to that of their cryptographic building blocks, but the subsequent analysis of those building blocks requires the introduction of a number of novel security assumptions, reflecting many design decisions made by Telegram that are suboptimal from the perspective of formal analysis. Along the way, we provide a proof of IND-CCA security for the variant of RSA-OAEP+ used in Telegram and identify a hypothetical attack exploiting current Telegram server behaviour (which is not captured in our protocol descriptions). Finally, we reflect on the broader lessons about protocol design that can be taken from our work.
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) in machine training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the associated public key is used to encrypt the input data and the private key decrypts the output. However, homomorphic encryption often incurs a performance penalty, so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments cover a range of parameters, including multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters affecting performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
AQQUA: Augmenting Quisquis with Auditability
We present AQQUA, a permissionless, private, and auditable payment system built on top of Quisquis. Unlike other auditable payment systems, AQQUA supports auditing without sacrificing privacy. It allows users to hold multiple accounts, perform concurrent transactions, and features a non-increasing state. AQQUA achieves auditability by introducing two authorities: one for registration and one for auditing. These authorities cannot censor transactions, thus preserving the decentralized nature of the system. Users create an initial account with the registration authority and then transact privately using provably unlinkable updates of it. Audits can be voluntarily initiated by the users or requested by the audit authority at any time. Compliance is proved in zero-knowledge against a set of policies, which include a maximum limit on the amount sent/received during a time period or in a single transfer, non-participation in a specific transaction, or selective disclosure of the value exchanged. To analyze the security of AQQUA, we formally define a security model for private and auditable decentralized payment systems. Using this model, we prove that AQQUA satisfies anonymity towards both the public and the auditor, theft prevention, and audit soundness.
Verifiable Decapsulation: Recognizing Faulty Implementations of Post-Quantum KEMs
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform, which plays a central role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security, but not for functionality. In other words, it will go unnoticed if this essential step is omitted or wrongly implemented, opening the door to key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs as well as experimental data. Our experiments show that, following our approach, the implementation flaw in HQC's reference implementation indeed makes basic test cases fail.
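The mechanism can be sketched with a toy ElGamal-based KEM (all names and parameters here are illustrative, not ML-KEM or HQC): encryption exports a confirmation code, the shared key is derived from both the message and the code, and an implementation that skips re-encryption never obtains the code, so it derives a mismatched key that plain correctness tests detect.

```python
import hashlib, random

# Toy sketch of an FO-style KEM whose key derivation binds a
# confirmation code exported by encryption. Illustrative only.

P, G = 1019, 4
Q = (P - 1) // 2           # 509, prime; G generates the order-Q subgroup

def H(tag, *ints):
    return hashlib.sha256((tag + ":" + ",".join(map(str, ints))).encode()).digest()

def pke_enc(pk, m, r):
    # Toy ElGamal, deterministic given coins r; also outputs a
    # confirmation code derived from the shared secret pk^r.
    shared = pow(pk, r, P)
    c = (pow(G, r, P), m ^ int.from_bytes(H("mask", shared), "big") % 512)
    return c, H("code", shared)

def encaps(pk):
    m = random.randrange(512)
    r = int.from_bytes(H("coins", m), "big") % Q or 1   # derandomized coins
    c, code = pke_enc(pk, m, r)
    return c, H("kdf", m, int.from_bytes(code, "big"))  # key binds the code

def decaps(sk, c, faulty=False):
    pk = pow(G, sk, P)
    m = c[1] ^ int.from_bytes(H("mask", pow(c[0], sk, P)), "big") % 512
    if faulty:
        code = b"\x00"     # re-encryption skipped: code never derived
    else:
        r = int.from_bytes(H("coins", m), "big") % Q or 1
        c2, code = pke_enc(pk, m, r)   # re-encryption check
        assert c2 == c
    return H("kdf", m, int.from_bytes(code, "big"))

sk = random.randrange(1, Q)
pk = pow(G, sk, P)
c, K = encaps(pk)
assert decaps(sk, c) == K               # honest decapsulation agrees
assert decaps(sk, c, faulty=True) != K  # flawed one fails basic tests
```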
Concretely Efficient Correlated Oblivious Permutation
Oblivious permutation (OP) enables two parties, a sender with a private data vector $x$ and a receiver with a private permutation $\pi$, to securely obtain the shares of $\pi(x)$. OP has been used to construct many important MPC primitives and applications such as secret shuffle, oblivious sorting, private set operations, secure database analysis, and privacy-preserving machine learning. Due to its high complexity, OP has become a performance bottleneck in several practical applications, and many efforts have been devoted to enhancing its concrete efficiency. Chase et al. (Asiacrypt'20) proposed an offline-online OP paradigm leveraging a pre-computable resource termed Share Translation. While this paradigm significantly reduces online costs, the substantial offline cost of generating Share Translation remains an area for further investigation.
In this work, we redefine the pre-computable resource as a cryptographic primitive known as Correlated Oblivious Permutation (COP) and conduct in-depth analyses and optimizations of the two COP generation solutions: network-based solution and matrix-based solution. The optimizations for the network-based solution halve the communication/computation cost of constructing a switch (the basic unit of the permutation network) and reduce the number of switches in the permutation network. The optimizations for the matrix-based solution halve the communication cost of small-size COP generation and reduce the cost of large-size COP generation with in-outside permutation decomposition.
We implement our two COP generation protocols and conduct comprehensive evaluations. Taking commonly used 128-bit input data as an example, our network-based and matrix-based solutions are up to 1.7x and 1.6x faster than baseline protocols, respectively.
We further facilitate the state-of-the-art (SOTA) PSU protocols with our optimized COP, achieving over 25% reduction in communication cost and 35% decrease in execution time. This shows that our COP optimizations bring significant improvements for real-world MPC primitives.
Last updated: 2025-03-10
Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
Generalized secret sharing (GSS), which offers more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address this gap, we propose a publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property for gaining trustworthiness in decentralized systems like blockchains. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on the linear secret sharing scheme (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, where any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.
In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency.
In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Private Neural Network Training with Packed Secret Sharing
We present a novel approach for training neural networks that leverages the packed Shamir secret sharing scheme. For specific training protocols based on the Shamir scheme, we demonstrate how to realize the conversion between packed sharing and Shamir sharing without additional communication overhead. We begin by introducing a method to locally convert between Shamir sharings with secrets stored at different slots. Building upon this conversion, we achieve free conversion from packed sharing to Shamir sharing. We then show how to embed the conversion from Shamir sharing to packed sharing into the truncation used during the training process without incurring additional communication costs. With free conversion between packed sharing and Shamir sharing, we illustrate how to utilize the packed scheme to parallelize certain computational steps involved in neural network training. On this basis, we propose training protocols with information-theoretic security among general $n$ parties under the semi-honest model. The experimental results demonstrate that, compared to previous work in this domain, applying the packed scheme can effectively improve training efficiency. Specifically, when packing $4$ secrets into a single sharing, we observe a reduction of more than $20\%$ in communication overhead and an improvement of over $10\%$ in training speed under the WAN setting.
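Packed Shamir sharing, the core tool above, can be sketched as follows (toy field and parameters only): $k$ secrets are stored at $k$ distinct evaluation points of a single polynomial, so one sharing carries $k$ values, and enough shares recover all of them at once by interpolation.

```python
import random

# Toy packed Shamir sharing: k secrets at k distinct points of one
# polynomial. Illustrative parameters, not the paper's protocol.

P = 2087

def interpolate(points, x):
    # Lagrange interpolation through `points`, evaluated at x.
    y = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        y = (y + yi * num * pow(den, P - 2, P)) % P
    return y

def pack_share(secrets, t, n):
    # Secrets sit at x = -1, ..., -k; t random points fix a
    # degree-(k + t - 1) polynomial, evaluated at party points 1..n.
    base = [(-(i + 1) % P, s) for i, s in enumerate(secrets)]
    base += [(n + 1 + j, random.randrange(P)) for j in range(t)]
    return [(x, interpolate(base, x)) for x in range(1, n + 1)]

secrets = [11, 22, 33, 44]              # k = 4 secrets in one sharing
shares = pack_share(secrets, t=3, n=7)  # degree 6, so 7 shares suffice
recovered = [interpolate(shares, -(i + 1) % P) for i in range(4)]
assert recovered == secrets
```

Recovering the sharing polynomial once yields all four secrets, which is the source of the parallelism the abstract exploits.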
Revisiting subgroup membership testing on pairing-friendly curves via the Tate pairing
In 2023, Koshelev introduced an efficient method of subgroup membership testing for a list of non-pairing-friendly curves, using at most two small Tate pairings. In fact, this technique can also be applied to certain pairing-friendly curves, e.g., from the BLS and BW13 families. In this paper, we revisit Koshelev's method and propose simplified formulas for computing the two Tate pairings. Compared to the original formulas, ours reduce both the number of Miller's iterations and the storage requirements. Furthermore, we provide a high-speed software implementation on a 64-bit processor. Our experimental results show that the new method outperforms the state-of-the-art one by up to $62.0\%$ and $41.2\%$ on the BW13-310 and BLS48-575 curves, respectively. When special precomputation is utilized, our method achieves greater speed improvements of up to $110.6\%$ and $74.4\%$ on the two curves, respectively.
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants, II
The present article is a natural extension of the previous one about the GLV method for accelerating (multi-)scalar multiplication on elliptic curves of moderate CM discriminants $D < 0$. In comparison with the first article, much greater magnitudes of $D$ (in absolute value) are achieved, although the base finite fields of the curves have to be fairly large. This becomes feasible by resorting to quite powerful algorithmic tools developed primarily in the context of lattice-based and isogeny-based cryptography. Curiously, pre-quantum cryptography here borrows research outcomes originally obtained in the search for quantum-resistant solutions or attacks on them.
For instance, a $2$-cycle of pairing-friendly MNT curves (with $-D \approx 100{,}000{,}000$, i.e., $\log_2(-D) \approx 26.5$) is relevant for the result of the current article. The given $2$-cycle was generated at one time by Guillevic to provide $\approx 128$ bits of security, hence it was close to application in real-world zk-SNARKs. Another more performant MNT $2$-cycle (with a slightly smaller security level, but with much larger $D$) was actually employed in the protocol Coda (now Mina) until zero-knowledge proof systems on significantly faster pairing-free (or half-pairing) $2$-cycles were invented. We also show that more lollipop curves, recently proposed by Costello and Korpal to replace MNT ones, are now covered by the GLV technique.
The 2Hash OPRF Framework and Efficient Post-Quantum Instantiations
An Oblivious Pseudo-Random Function (OPRF) is a two-party protocol for jointly evaluating a Pseudo-Random Function (PRF), where a user has an input x and a server has an input k. At the end of the protocol, the user learns the evaluation of the PRF using key k at the value x, while the server learns nothing about the user's input or output.
OPRFs are a prime tool for building secure authentication and key exchange from passwords, private set intersection, private information retrieval, and many other privacy-preserving systems. While classical OPRFs run as fast as a TLS handshake, current *quantum-safe* OPRF candidates are still practically inefficient.
In this paper, we propose a framework for constructing OPRFs from post-quantum multi-party computation. The framework captures a family of so-called "2Hash PRFs", which sandwich a function evaluation in between two hashes. The core of our framework is a compiler that yields an OPRF from a secure evaluation of any function that is key-collision resistant and one-more unpredictable. We instantiate this compiler by providing such functions built from Legendre symbols, and from AES encryption. We then give a case-tailored protocol for securely evaluating our Legendre-based function, built from oblivious transfer (OT) and zero-knowledge proofs (ZKP). Instantiated with lattice-based OT and ZKPs, we obtain a quantum-safe OPRF that completes in 0.57 seconds, with less than 1MB of communication.
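The classical 2HashDH OPRF that the 2Hash framework generalizes computes $F_k(x) = H_2(x, H_1(x)^k)$: the user blinds $H_1(x)$ with a random exponent, the server raises the blinded element to its key $k$, and the user unblinds. A toy sketch over a small multiplicative group (a real instantiation would use an elliptic curve):

```python
import hashlib, random

# Toy 2HashDH OPRF: F_k(x) = H2(x, H1(x)^k), evaluated obliviously.
# Small-group parameters for illustration only.

P = 1019
Q = (P - 1) // 2               # prime order of the subgroup

def H1(x):                     # hash into the order-Q subgroup
    h = int.from_bytes(hashlib.sha256(x).digest(), "big")
    return pow(4, h % Q + 1, P)

def H2(x, g):
    return hashlib.sha256(x + g.to_bytes(2, "big")).hexdigest()

k = random.randrange(1, Q)     # server's PRF key

def prf(x):                    # direct evaluation (server side)
    return H2(x, pow(H1(x), k, P))

def oblivious_eval(x):         # user <-> server protocol
    r = random.randrange(1, Q)
    a = pow(H1(x), r, P)       # user -> server: blinded element
    b = pow(a, k, P)           # server -> user: a^k (learns nothing of x)
    return H2(x, pow(b, pow(r, Q - 2, Q), P))   # unblind with r^{-1} mod Q

assert oblivious_eval(b"password") == prf(b"password")
```

The server sees only the uniformly blinded element `a`, which is the source of the obliviousness; the compiler described above replaces the exponentiation with other key-collision-resistant functions.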
Protecting Computations Against Continuous Bounded-Communication Leakage
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using $t$ bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function.
In this work, we eliminate the first limitation and make progress on the second. Concretely:
- We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party ``malware-resilient'' protocol of Goyal et al. to the continuous-leakage case.
- For simple types of bounded-communication leakage, which leak $t$ parities or $t$ disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Disincentivize Collusion in Verifiable Secret Sharing
In verifiable secret sharing (VSS), a dealer shares a secret input among several parties, ensuring each share is verifiable. Motivated by its applications in the blockchain space, we focus on a VSS where parties holding shares are not allowed to reconstruct the dealer's secret (even partially) on their own; we refer to such an attempt as privacy-targeted collusion.
In this context, our work investigates mechanisms deterring such collusion in VSS among rational and malicious parties. For this problem, we make both algorithmic and combinatorial contributions:
1. We provide two collusion-deterrent mechanisms to discourage parties from colluding and recovering the dealer's secret. Notably, when it is desired to achieve fairness---where non-colluding parties are not at a loss---while allowing for the best achievable malicious fault tolerance, we define ``trackable access structures'' (TAS) and design a deterrence mechanism tailored for VSS on these structures.
2. We estimate the size of the optimal TAS, construct them from Steiner systems, provide highly robust TAS using partial Steiner systems, and present efficient secret sharing schemes for the latter close-to-optimal TAS for various parameter regimes.
3. We demonstrate that trackability in access structures is connected to combinatorial objects like (partial) Steiner systems, uniform subsets with restricted intersections, and appropriate binary codes. The robustness of access structures is equivalent to the minimum vertex cover of hypergraphs.
We believe these connections between cryptography, game theory, and discrete mathematics will be of broader interest.
BulletCT: Towards More Scalable Ring Confidential Transactions With Transparent Setup
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions:
We provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs and uses an efficient "K-Weight" technique for its construction. However, we identify, for the first time, several limitations of Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexity, and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck.
We then continue to explore the potential of using K-out-of-N proofs to enhance scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks.
Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy and strong diffusion while maintaining efficiency.
It employs advanced cryptographic transformations to ensure robust security against modern cryptanalysis techniques.
Entropy tests demonstrate its ability to generate highly randomized ciphertext, surpassing industry standards.
Avalanche effect analysis confirms optimal diffusion, achieving an average bit change of 50.18% in large datasets.
Key sensitivity tests reveal a 50.54% ciphertext difference for minimal key variations, ensuring strong resistance to differential cryptanalysis.
With fast encryption and decryption speeds, Deimos Cipher offers a balanced approach between security and performance, making it suitable for secure communication and data protection.
This paper presents the algorithm's design, security analysis, and benchmarking against established cryptographic standards.
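The statistical tests cited above can be reproduced with a generic harness. The sketch below measures the avalanche effect, i.e., the fraction of output bits that flip when a single input bit flips. Since Deimos Cipher itself is not specified here, SHA-256 stands in for the cipher as an illustrative assumption; a well-diffusing primitive should score close to the ~50% figure the paper reports.

```python
import hashlib
import secrets

def avalanche(primitive, trials=200, msg_len=16):
    """Estimate the avalanche ratio of `primitive` (bytes -> bytes):
    flip one random input bit and count how many output bits change."""
    total = flipped = 0
    for _ in range(trials):
        msg = bytearray(secrets.token_bytes(msg_len))
        out0 = primitive(bytes(msg))
        bit = secrets.randbelow(msg_len * 8)
        msg[bit // 8] ^= 1 << (bit % 8)          # flip exactly one input bit
        out1 = primitive(bytes(msg))
        for a, b in zip(out0, out1):
            flipped += bin(a ^ b).count("1")     # Hamming distance, per byte
            total += 8
    return flipped / total

# SHA-256 as a stand-in primitive: expect a ratio near 0.5
ratio = avalanche(lambda m: hashlib.sha256(m).digest())
```

The same harness applies to key sensitivity: fix the message, flip one key bit, and compare ciphertexts.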
Practical Preimage Attacks on 3-Round Keccak-256 and 4-Round Keccak[r=640, c=160]
Recently, linear structures and algebraic attacks have been widely used in preimage attacks on round-reduced Keccak. Building on this pioneering work, we present improvements for 3-round Keccak-256 and 4-round Keccak[r=640, c=160]. For 3-round Keccak-256, we introduce a three-stage model to deal with the unsatisfied restrictions while bringing more degrees of freedom at the same time. In addition, we show that guessing values for different variables results in different solving-time complexities. With these techniques, the number of guesses can be decreased to 2^{52}, and the solving time for each guess can be decreased to around 2^{5.2} 3-round Keccak calls. As a result, the complexity of finding a preimage for 3-round Keccak-256 can be decreased to around 2^{57.2}. For 4-round Keccak[r=640, c=160], an instance of the Crunchy Contest, we use several techniques to save degrees of freedom and obtain a better linearization. Based on these techniques, we build an MILP model and obtain an attack with a better complexity of around 2^{60.9}. The results for 3-round Keccak-256 and 4-round Keccak[r=640, c=160] are verified with real examples.
Hybrid Password Authentication Key Exchange in the UC Framework
A hybrid cryptosystem combines two systems that fulfill the same cryptographic functionality, and it inherits the security of the stronger of the two. There are many proposals for hybrid public-key encryption (hybrid PKE), hybrid signatures (hybrid SIG) and hybrid authenticated key exchange (hybrid AKE). In this paper, we fill the remaining gap: hybrid Password Authenticated Key Exchange (hybrid PAKE).
To construct hybrid PAKE, we first define an important class of PAKE -- full DH-type PAKE -- from which we abstract properties sufficient to achieve UC security. Our full DH-type PAKE framework unifies many PAKE schemes, such as SPAKE2, TBPEKE, and (Crs)X-GA-PAKE, and summarizes their common features for UC security.
Stepping from full DH-type PAKE, we propose two generic approaches to hybrid PAKE, parallel composition and serial composition.
-- We propose a generic construction of hybrid PAKE via parallel composition and prove that the hybrid PAKE by composing DH-type PAKEs in parallel is a full DH-type PAKE and hence achieves UC security, as long as one underlying DH-type PAKE is a full DH-type.
-- We propose a generic construction of hybrid PAKE via serial composition, and prove that the hybrid PAKE by composing a DH-type PAKE and another PAKE in serial achieves UC security, if either the DH-type PAKE is a full DH-type or the other PAKE has UC security and the DH-type PAKE only has some statistical properties.
Our generic constructions of hybrid PAKE result in a variety of hybrid PAKE schemes enjoying different desirable features, such as round-optimality, high efficiency, or UC security in the quantum random oracle model (QROM).
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda / \log \log \lambda)$ multiplications followed by poly($\lambda$) additions, where $\lambda$ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps.
Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations.
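The matrix view described above can be illustrated with a noiseless toy (an assumption-laden sketch, not the paper's scheme: the real construction hides messages with sparse-LPN noise, and this toy is completely insecure). Here a ciphertext is a matrix for which a fixed secret vector is an eigenvector, so matrix addition and multiplication act as addition and multiplication on the encrypted eigenvalues:

```python
import random

# Toy, *noiseless* illustration only: a ciphertext for message m is a matrix C
# with C @ s = m * s for a secret eigenvector s. Matrix addition/multiplication
# of ciphertexts then adds/multiplies the hidden eigenvalues. Real schemes add
# noise (here, sparse-LPN style); without it this "encryption" is insecure.

P = [[1, 1], [1, 2]]        # change of basis; its first column is the secret s
P_INV = [[2, -1], [-1, 1]]  # det(P) = 1, so the inverse is integral
S = (1, 1)                  # the secret eigenvector (first column of P)

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def encrypt(m):
    r = random.randrange(1, 100)   # random second eigenvalue (the "mask")
    return matmul(matmul(P, [[m, 0], [0, r]]), P_INV)

def decrypt(c):
    # C @ s = m * s and s = (1, 1), so the first component of C @ s equals m
    return c[0][0] * S[0] + c[0][1] * S[1]
```

Homomorphic evaluation is then literally matrix arithmetic: `matadd(c1, c2)` decrypts to the sum and `matmul(c1, c2)` to the product of the plaintexts.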
TaSSLE: Lasso for the commitment-phobic
We present TaSSLE, a new lookup argument for decomposable tables with minimal commitment costs. The construction generalizes techniques introduced in Lasso (Eurocrypt '24), which exploit the internal structure of such tables so that no party needs to commit to, or even construct, the entire table. This allows lookups against very large tables, with applications including new design strategies for "zero-knowledge virtual machines". We show that these techniques may be combined in a generic way with any existing lookup argument to achieve similar results. We then give a construction of TaSSLE by applying this observation to a recent lookup argument, introduced in [Papini-Haböck '23], which combines logarithmic derivatives with the GKR protocol to achieve a lookup argument with minimal commitment costs.
On the Security of KOS
We study the security of the random oblivious transfer extension protocol of Keller, Orsini, and Scholl (CRYPTO '15), whose security proof was recently invalidated by Roy (CRYPTO '22). We show that KOS is asymptotically secure. Our proof involves a subtle analysis of the protocol's "correlation check", and introduces several new techniques. We also study the protocol's concrete security. We establish concrete security for security parameter values on the order of 5,000. We present evidence that a stronger result than ours—if possible—is likely to require radically new ideas.
Quantum Black-Box Separations: Succinct Non-Interactive Arguments from Falsifiable Assumptions
In their seminal work, Gentry and Wichs (STOC'11) established an impossibility result for the task of constructing an adaptively-sound SNARG via black-box reduction from a falsifiable assumption.
An exciting set of recent SNARG constructions demonstrated that, if one adopts a weaker but still quite meaningful notion of adaptive soundness, then the impossibility no longer holds (Waters-Wu, Waters-Zhandry, Mathialagan-Peters-Vaikuntanathan, ePrint '24). These fascinating new results raise an intriguing possibility: is there a way to remove this slight weakening of adaptive soundness, thereby completely circumventing the Gentry-Wichs impossibility?
A natural route to closing this gap would be to use a quantum black-box reduction, i.e., a reduction that can query the SNARG adversary on superpositions of inputs. This would take advantage of the fact that Gentry-Wichs only consider classical reductions. In this work, we show that this approach cannot succeed. Specifically, we extend the Gentry-Wichs impossibility result to quantum black-box reductions, and thereby establish an important limit on the power of such reductions.
Multiparty Garbling from OT with Linear Scaling and RAM Support
State-of-the-art protocols that achieve constant-round secure multiparty computation currently present a trade-off: either consume an amount of communication that scales quadratically in the number of parties, or achieve better asymptotics at the cost of high constant factors (e.g. schemes based on LPN or DDH).
We construct a constant-round MPC protocol where communication scales linearly in the number of parties n. Our construction relies only on oblivious transfer (OT) and a random oracle (RO), and it leverages packed secret sharing. Because it builds on simple primitives, our protocol offers a concrete improvement over asymptotically-efficient LPN-based schemes. We consider security in the presence of a dishonest majority, where the malicious (with abort) adversary corrupts an arbitrary constant fraction of parties.
By leveraging tri-state circuits (Heath et al. Crypto 2023), we extend our protocol to the RAM model of computation. For a RAM program that halts within $T$ steps, our maliciously-secure protocol communicates $O(n \cdot T \log^3 T \log \log T \cdot \kappa)$ total bits, where $\kappa$ is a security parameter.
Homomorphic Signature-based Witness Encryption and Applications
Practical signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can "encrypt to the future" using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing SWE schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes, where uniqueness of the signature is required when the encryption scheme relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures, and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
A Unified Framework for Succinct Garbling from Homomorphic Secret Sharing
A major challenge in cryptography is the construction of succinct garbling schemes that have asymptotically smaller size than Yao’s garbled circuit construction. We present a new framework for succinct garbling that replaces the heavy machinery of most previous constructions by lighter-weight homomorphic secret sharing techniques.
Concretely, we achieve 1-bit-per-gate (amortized) garbling size for Boolean circuits under circular variants of standard assumptions in composite-order or prime-order groups, as well as a lattice-based instantiation. We further extend these ideas to layered circuits, improving the per-gate cost below 1 bit, and to arithmetic circuits, eliminating the typical Ω(λ)-factor overhead for garbling mod-p computations. Our constructions also feature “leveled” variants that remove circular-security requirements at the cost of adding a depth-dependent term to the garbling size.
Our framework significantly extends a recent technique of Liu, Wang, Yang, and Yu (Eurocrypt 2025) for lattice-based succinct garbling, and opens new avenues toward practical succinct garbling. For moderately large circuits with a few million gates, our garbled circuits can be two orders of magnitude smaller than Yao-style garbling. While our garbling and evaluation algorithms are much slower, they are still practically feasible, unlike previous fully succinct garbling schemes that rely on expensive tools such as iO or a non-black-box combination of FHE and ABE. This trade-off can make our framework appealing when a garbled circuit is used as a functional ciphertext that is broadcast or stored in multiple locations (e.g., on a blockchain), in which case communication and storage may dominate computational cost.
High-Order Masking of BIKE
Every cryptographic implementation on an embedded device is vulnerable to side-channel attacks. To prevent these attacks, the main countermeasure consists in splitting each sensitive variable into shares and processing them independently.
With the advent of new algorithms designed to resist quantum computers, and given the complexity of their operations, this protection represents a real challenge.
In this article, we present an attack on an earlier attempt to protect the decoder of the BIKE cryptosystem against first-order attacks.
Additionally, we introduce a new procedure for the high-order masking of the decoder, up-to-date with its latest improvement.
We also present the first fully masked implementation of the whole cryptosystem, including the key generation and the encapsulation.
Finally, to assess the correctness of our countermeasures and enable further comparison, we implemented them in C and provide benchmarks of their performance.
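The share-splitting countermeasure described above can be sketched generically (this is plain Boolean masking, not the paper's BIKE-specific decoder gadgets, and the helper names are illustrative):

```python
import secrets

# Boolean (XOR) masking at order d: split a sensitive byte x into d + 1 shares
# whose XOR equals x. Any subset of d shares is uniformly random, so a
# side-channel probe of up to d intermediate values learns nothing about x.
# Linear operations (XOR, rotations) are computed share-wise; nonlinear gates
# need dedicated gadgets (e.g., ISW multiplication), which are not shown here.

def share(x, d):
    """Split byte x into d + 1 random shares (XOR-sharing)."""
    shares = [secrets.randbelow(256) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s                    # last share fixes the XOR to x
    return shares + [last]

def unshare(shares):
    """Recombine shares into the sensitive value."""
    x = 0
    for s in shares:
        x ^= s
    return x

def masked_xor(xs, ys):
    """XOR is linear over GF(2): apply it independently on each share pair."""
    return [a ^ b for a, b in zip(xs, ys)]
```

At order d = 3 each byte becomes four shares, and a masked XOR never recombines them, which is the property the benchmarked C implementation must preserve throughout the decoder.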
Black-Box (and Fast) Non-Malleable Zero Knowledge
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC 91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks.
Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017) that affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency).
In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability.
Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
AI for Code-based Cryptography
We introduce the use of machine learning in the cryptanalysis of code-based cryptography. Our focus is on distinguishing problems related to the security of NIST round-4 McEliece-like cryptosystems, particularly the Goppa codes used in Classic McEliece and the Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) codes used in BIKE. We present DeepDistinguisher, a new transformer-based algorithm for distinguishing structured codes from random linear codes. The results show that the new distinguisher achieves a high level of accuracy in distinguishing Goppa codes, suggesting that their structure may be more recognizable by AI models. Our approach outperforms traditional attacks in distinguishing Goppa codes in certain settings, and generalizes to larger code lengths without further training via a puncturing technique. We also present the first distinguishing results dedicated to MDPC and QC-MDPC codes.
Formal Analysis of Session-Handling in Secure Messaging: Lifting Security from Sessions to Conversations
The building blocks for secure messaging apps, such as Signal’s X3DH and Double Ratchet (DR) protocols, have received a lot of attention from the research community. They have notably been proved to meet strong security properties even in the case of compromise, such as Forward Secrecy (FS) and Post-Compromise Security (PCS). However, these properties have not been formally studied at the application level. Whereas prior work has studied such properties in the context of a single ratcheting chain, a conversation between two people in a messaging application can in fact be the result of merging multiple ratcheting chains.
In this work, we initiate the formal analysis of secure messaging taking the session-handling layer into account, and apply our approach to Sesame, Signal’s session management. We first experimentally show practical scenarios in which PCS can be violated in Signal by a clone attacker, despite its use of the Double Ratchet. We identify how this is enabled by Signal’s session-handling layer. We then design a formal model of the session-handling layer of Signal that is tractable for automated verification with the Tamarin prover, and use this model to rediscover the PCS violation and propose two provably secure mechanisms to offer stronger guarantees.
Breaking and Provably Restoring Authentication: A Formal Analysis of SPDM 1.2 including Cross-Protocol Attacks
The SPDM (Security Protocol and Data Model) protocol is a standard under development by the DMTF consortium, and supported by major industry players including Broadcom, Cisco, Dell, Google, HP, IBM, Intel, and NVIDIA. SPDM 1.2 is a complex protocol that aims to provide platform security, for example for communicating hardware components or cloud computing scenarios.
In this work, we provide the first holistic, formal analysis of SPDM 1.2: we model the full protocol flow of SPDM considering all of its modes – especially the complex interaction between its different key-exchange modes – in the framework of the Tamarin prover, making our resulting model one of the most complex Tamarin models to date. To our surprise, Tamarin finds a cross-protocol attack that allows a network attacker to completely break authentication of the pre-shared key mode. We implemented our attack on the SPDM reference implementation, and reported the issue to the SPDM developers. DMTF registered our attack as a CVE with CVSS rating 9 (critical).
We propose a fix and develop the first formal symbolic proof using the Tamarin prover for the fixed SPDM 1.2 protocol as a whole. The resulting model of the main modes and their interactions is highly complex, and we develop supporting lemmas to enable proving properties in the Tamarin prover, including the absence of all cross-protocol attacks. Our fix has been incorporated into both the reference implementation and the newest version of the standard. Our results highlight the need for a holistic analysis of other internet standards and the importance of providing generalized security guarantees across entire protocols.
Preimage Attacks on up to 5 Rounds of SHA-3 Using Internal Differentials
In this paper, we study the preimage resistance of the SHA-3 standard. We propose the squeeze meet-in-the-middle attack, a new preimage attack method for sponge functions. This attack combines the squeeze attack and the meet-in-the-middle attack, and is implemented via internal differentials. We analyze the inverse operation of the SHA-3 round function, and develop a new target internal differential algorithm as well as a linearization technique for the Sbox in the backward phase. In addition, we propose the concept of a value-difference distribution table (VDDT) to optimize the attack complexity. These techniques lead to faster preimage attacks on five (out of six) SHA-3 functions reduced to 4 rounds, and also yield preimage attacks on 5 rounds of four SHA-3 instances. The attack techniques are verified by performing a practical preimage attack on a small variant of 4-round Keccak.
XOCB: Beyond-Birthday-Bound Secure Authenticated Encryption Mode with Rate-One Computation (Full Version)
We present a new block cipher mode of operation for authenticated encryption (AE), dubbed XOCB, that has the following features: (1) beyond-birthday-bound (BBB) security based on the standard pseudorandom assumption of the internal block cipher if the maximum block length is sufficiently smaller than the birthday bound, (2) rate-1 computation, and (3) supporting any block cipher with any key length. Namely, XOCB has effectively the same efficiency as the seminal OCB while having stronger quantitative security without any change in the security model or the required primitive in OCB. Although numerous studies have been conducted in the past, our XOCB is the first mode of operation to achieve these multiple goals simultaneously.
IO-Optimized Design-Time Configurable Negacyclic Seven-Step NTT Architecture for FHE Applications
FHE enables computations on encrypted data, proving itself to be an essential building block for privacy-preserving applications. However, it involves computationally demanding operations such as polynomial multiplication, with the NTT being the state-of-the-art solution for performing it. Considering that most FHE schemes operate over the negacyclic ring of polynomials, we introduce a novel formulation of the hierarchical Four-Step NTT approach for the negacyclic ring, eliminating the need for the pre- and post-processing steps found in existing methods. To accelerate NTT operations, FPGA devices offer flexible and powerful computing platforms. We propose an FPGA-based, high-speed, parametric and fully pipelined architecture that implements the improved Seven-Step NTT algorithm, which builds upon the four-step algorithm. Our design supports a wide range of parameters, including ring sizes up to $2^{16}$ and modulus sizes up to $64$-bit. We focus on achieving configurable throughput, as constrained by the bandwidth of HBM, an additional in-package memory common in high-end FPGA devices such as the Alveo U280. We aim to maximize throughput through an IO-parametric design on the Alveo U280 FPGA. The implementation results demonstrate that the average latency of our design for a batch NTT operation is $\mathbf{8.32}\mu s$ for ring size $2^{16}$ and $64$-bit width; a speed-up of $\mathbf{7.96}\times$ compared to the current state-of-the-art designs.
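For reference, the classical negacyclic NTT can be sketched in a few lines: the textbook approach multiplies coefficients by powers of a primitive 2n-th root of unity psi before and after the transform, which is exactly the kind of pre-/post-processing the paper's reformulation folds away. The parameters below (n = 4, q = 17, psi = 9) are illustrative only, and the O(n^2) transforms stand in for the four-step/seven-step hardware pipeline:

```python
# Reference negacyclic polynomial multiplication mod (x^n + 1, q) via the
# standard psi-twist NTT. psi is a primitive 2n-th root of unity mod q
# (psi^n = -1), and omega = psi^2 is the usual n-th root used by the NTT.

def negacyclic_mul_ntt(a, b, q, psi):
    n = len(a)
    omega, n_inv = pow(psi, 2, q), pow(n, -1, q)

    def twist(v, p):   # pre/post-processing: multiply coefficient i by p^i
        return [(x * pow(p, i, q)) % q for i, x in enumerate(v)]

    def dft(v, w):     # naive O(n^2) NTT: V_j = sum_i v_i * w^(i*j) mod q
        return [sum(v[i] * pow(w, i * j, q) for i in range(n)) % q
                for j in range(n)]

    A, B = dft(twist(a, psi), omega), dft(twist(b, psi), omega)
    C = [(x * y) % q for x, y in zip(A, B)]          # pointwise product
    c = [(x * n_inv) % q for x in dft(C, pow(omega, -1, q))]
    return twist(c, pow(psi, -1, q))

def negacyclic_mul_schoolbook(a, b, q):
    # Direct multiplication mod x^n + 1: wrap-around terms pick up a minus sign
    n, res = len(a), [0] * len(a)
    for i in range(n):
        for j in range(n):
            sign = -1 if i + j >= n else 1
            res[(i + j) % n] = (res[(i + j) % n] + sign * a[i] * b[j]) % q
    return res
```

Checking the NTT route against the schoolbook product (e.g., with a = [1, 2, 3, 4], b = [5, 6, 7, 8]) confirms the twist correctly handles the negacyclic wrap-around.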
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration.
In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit count – depth product by more than $73\%$ – $81\%$ depending on the variant. Furthermore, we develop an out-of-place point addition by using additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit count – quantum depth product (for a single step).
To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and the qubit count – depth product.
Equipped with these implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate count – full depth product, a metric for estimating quantum attack cost (proposed by NIST), the highest complexity in our work is $2^{60}$, for the curve of degree 571 (comparable to AES-256 in terms of classical security), considerably below the post-quantum security level 1 threshold (on the order of $2^{156}$).
Transmitting Secrets by Transmitting only Plaintext
We present a novel use of encryption, not for hiding a secret, but for marking letters. Given a 2n-letter plaintext, the transmitter encrypts the first n letters with key K1 to generate n corresponding cipherletters, and encrypts the second n letters with key K2 to generate n corresponding cipherletters. The transmitter sends the 2n cipherletters along with the keys K1 and K2. The recipient (and any interceptor) will readily decrypt the 2n cipherletters to the original plaintext, which makes the procedure equivalent to sending the plaintext outright. So why bother? When decrypting the 2n cipherletters, one notes how the letters encrypted with K1 are interleaved with the letters encrypted with K2, while the original order of the letters encrypted with each key is preserved. Since the interleaving can be chosen in at least 2^n ways, the choice of interleaving can deliver a secret message S comprising n bits. So while on the surface an innocent plaintext is sent from transmitter to recipient, this plaintext hides a secret. Imagine a text-messaging platform that uses this protocol: an adversary cannot tell which plain, innocent message harbors a secret one. This allows residents of cyberspace to communicate secrets without exposing the fact that they communicated a secret at all, with a potentially large impact on the level of cyberspace privacy.
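The interleaving channel described above can be sketched as follows. This is a toy instantiation under stated assumptions: the paper does not specify the per-letter cipher, so HMAC-derived pads and tags (hypothetical choices) are used both to encrypt each letter and to let the recipient identify which key produced it; for simplicity, only the first n interleaving decisions carry secret bits here.

```python
import hashlib
import hmac

# Toy per-letter "cipher": a keyed pad hides the byte, and a per-stream tag
# lets the recipient decide which key (and hence which stream) produced it.
def _pad(key, ctr):
    return hmac.new(key, b"pad:%d" % ctr, hashlib.sha256).digest()[0]

def _tag(key, ctr, pt_byte):
    return hmac.new(key, b"tag:%d:%d" % (ctr, pt_byte), hashlib.sha256).digest()[:8]

def _enc(key, ctr, pt_byte):
    return (pt_byte ^ _pad(key, ctr), _tag(key, ctr, pt_byte))

def hide(plaintext, k1, k2, secret_bits):
    """Encrypt half the plaintext under each key, then interleave the two
    cipherletter streams so the first n choices encode the secret bits."""
    n = len(plaintext) // 2
    assert len(secret_bits) == n
    s1 = [_enc(k1, i, c) for i, c in enumerate(plaintext[:n])]
    s2 = [_enc(k2, i, c) for i, c in enumerate(plaintext[n:])]
    out, used = [], [0, 0]
    for bit in secret_bits:             # bit 0 -> next K1 letter, 1 -> K2
        out.append((s1, s2)[bit][used[bit]])
        used[bit] += 1
    out += s1[used[0]:] + s2[used[1]:]  # flush leftovers (carry no secret)
    return out

def recover(cipherletters, k1, k2):
    """Decrypt, identify each letter's key by its tag, and read the secret
    bits off the first n interleaving decisions."""
    n = len(cipherletters) // 2
    bits, halves, ctr = [], ([], []), [0, 0]
    for ct, tag in cipherletters:
        for bit, key in ((0, k1), (1, k2)):
            pt_byte = ct ^ _pad(key, ctr[bit])
            if hmac.compare_digest(tag, _tag(key, ctr[bit], pt_byte)):
                if len(bits) < n:
                    bits.append(bit)
                halves[bit].append(pt_byte)
                ctr[bit] += 1
                break
    return bytes(halves[0] + halves[1]), bits
```

Anyone holding K1 and K2 recovers the plaintext, but only a party looking for the interleaving pattern reads the hidden n-bit message.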
Improved Cryptanalysis of ChaCha: Beating PNBs with Bit Puncturing
ChaCha is a widely deployed stream cipher and one of the most important symmetric primitives. Due to this practical importance, many cryptanalytic results have been proposed. Until now, Probabilistic Neutral Bits (PNBs) have been the most successful: given a differential-linear distinguisher, PNBs are a key-recovery technique relying on an experimental backward correlation obtained through black-box analysis. A careful theoretical analysis exploiting the round-function design may find a better attack and improve our understanding, but the complicated nature of the ARX structure makes such analysis difficult.
We propose a theoretical methodology inspired by bit puncturing, which was recently proposed at Eurocrypt 2024. Our method has a theoretical foundation and is thus fundamentally different from PNBs, to which it is the first effective alternative. As a result, we significantly improve the attack complexity for 6-, 7-, and 7.5-round ChaCha. The 7-round attack is about $2^{40}$ times faster than the previous best. Furthermore, we propose the first 7.5-round attack with a non-negligible advantage over exhaustive search.