Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., garbled AES evaluation takes 0.0226 ms) in order to guarantee the security and privacy of computation on chiplets.
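To make the garbler/evaluator split concrete, here is a minimal, didactic garbled AND gate in Python: one party garbles the truth table under random wire labels, and the other decrypts exactly one row. The tag-based row recognition and all names here are illustrative simplifications under our own assumptions, not Garblet's actual engine (which additionally uses hardware OT and standard GC optimizations).

```python
# A minimal, didactic garbled AND gate. The garbler picks two random labels
# per wire and encrypts the output labels under the matching input-label
# pairs; the evaluator, holding one label per input wire, can decrypt exactly
# one row. Real garbling schemes add point-and-permute, free-XOR, etc.
import os, random, hashlib

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def garble_and_gate():
    # One random 128-bit label per (wire, bit) pair.
    labels = {w: {b: os.urandom(16) for b in (0, 1)} for w in ("a", "b", "c")}
    table = []
    for a in (0, 1):
        for b in (0, 1):
            key = H(labels["a"][a], labels["b"][b])
            # Output label plus a 16-byte zero tag, so the evaluator can
            # recognise the single row that decrypts correctly.
            plain = labels["c"][a & b] + b"\x00" * 16
            table.append(bytes(k ^ p for k, p in zip(key, plain)))
    random.shuffle(table)  # row order must not leak the inputs
    return labels, table

def evaluate(table, label_a, label_b):
    key = H(label_a, label_b)
    for row in table:
        plain = bytes(k ^ r for k, r in zip(key, row))
        if plain[16:] == b"\x00" * 16:
            return plain[:16]
    raise ValueError("no decryptable row")

labels, table = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][1])
assert out == labels["c"][1]  # AND(1, 1) = 1
```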
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and secure multi-party computation protocols are perfect evidentiary examples of the dramatic rise in the deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties while shielding from stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or multi-party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities, as long as the number of corrupted authorities is polynomially bounded. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is sufficiently large. This proves the optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion-resistant MA-ABE and bounded collusion-resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE from standard pairing-based assumptions and from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining it with known results, we also obtain MA-ABFE from evasive LWE.
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption and decryption queries are still classical. In this so-called Q1 model, we prove generic security bounds for Ascon in terms of the capacity and the key size, for a block-wise adaptive adversary restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
Low Communication Threshold FHE from Standard (Module-)LWE
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from “yet another” learning with errors assumption (LWE), in which the norm of the secret key is leaked to the adversary. While “yet another” LWE is reduced from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, it is left as an open question to extend their scheme to the MLWE-based construction.
In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary T-out-of-N threshold decryption via simple Shamir secret sharing instead of {0, 1}-linear secret sharing. Furthermore, the sizes of keys, ciphertexts, and decryption shares in our scheme are constant w.r.t. the number of parties N; we achieve compactness w.r.t. N.
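For reference, here is a minimal sketch of the Shamir T-out-of-N sharing that such threshold decryption builds on (toy field size, chosen only for illustration; this shows the secret-sharing layer alone, not the ThFHE scheme):

```python
# Minimal Shamir T-out-of-N secret sharing over a prime field: any T shares
# reconstruct the secret, fewer reveal nothing. Toy field size for clarity.
import random

P = 2**61 - 1  # a Mersenne prime, used as the share field

def share(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(random.sample(shares, 3)) == 123456789
```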
Hybrid Obfuscated Key Exchange and KEMs
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings.
For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation.
In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
Delegatable ABE with Full Security from Witness Encryption
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. In this work, we design a fully-secure DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to polynomial hardness of underlying assumptions. To the best of our knowledge, this is the first DABE construction (beyond hierarchical identity-based encryption) that achieves full security without relying on complexity leveraging. Our DABE supports an unbounded number of key delegations, and the secret key size grows just linearly with each key delegation operation.
AsyRand: fast asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, as in HydRand (S&P '20) and SPURT (S&P '22). However, two key challenges for practical deployment remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the AsyRand beacon protocol to address these challenges. In brief, AsyRand leverages Bracha Reliable Broadcast (BRB) or BRB-like protocols for message dissemination and incorporates a producer-consumer model to decouple the production and consumption of PVSS commitments. In the producer-consumer model, PVSS commitments are produced and consumed using a queue data structure. Specifically, the producer process is responsible for generating new PVSS commitments and reaching consensus on them within the queue, while the consumer process continuously consumes the commitments to recover PVSS secrets and generate new beacon values. This separation allows the producer and consumer processes to operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue length. If necessary, parties can initiate a removal process for faulty parties. BRB is also employed to facilitate the addition of new parties without requiring a system restart. In summary, AsyRand supports reconfiguration, enhancing both the protocol's usability and reliability. Additionally, we propose a novel PVSS scheme, which is of independent interest. Regarding complexity, AsyRand achieves state-of-the-art communication, computation, and verification complexity.
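The producer-consumer decoupling can be illustrated with a small Python sketch. Threads and a thread-safe queue stand in for distributed parties, and the commitment generation (os.urandom) and beacon derivation (sha256) are hypothetical placeholders for the actual PVSS operations:

```python
# Sketch of a producer-consumer beacon loop using a thread-safe queue.
# The producer "reaches consensus" on fresh commitments; the consumer
# turns them into beacon values, asynchronously and without a global clock.
import queue, threading, hashlib, os

commitments = queue.Queue()

def producer(rounds):
    for _ in range(rounds):
        commitments.put(os.urandom(32))   # placeholder PVSS commitment

def consumer(rounds):
    for r in range(rounds):
        c = commitments.get()             # blocks until a commitment arrives
        beacon = hashlib.sha256(c).hexdigest()  # placeholder secret recovery
        print(f"round {r}: beacon {beacon[:16]}...")

p = threading.Thread(target=producer, args=(3,))
c = threading.Thread(target=consumer, args=(3,))
p.start(); c.start(); p.join(); c.join()
```

Monitoring `commitments.qsize()` is the analogue of the fault-detection rule in the abstract: a queue that stops growing signals a faulty producer.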
Withdrawable signatures in Fiat-Shamir with aborts constructions
This article presents an extension of the work performed by Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction, and provide security proofs for this proposal. As an instantiation, we provide a concrete construction for a withdrawable signature scheme based on Dilithium.
SNARKs for Stateful Computations on Authenticated Data
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our SNARKs are specifically suited to applications in cyber-physical control systems, where computations are periodically carried out and need to be checked immediately. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. Additionally, we present new optimizations to reduce proof size, prover time, and verification time in our setting. With our construction, prover runtime improves significantly over the baseline, by a factor of 89. Verification time is 70% lower for computations on authenticated data and 33% lower for stateful computations. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with a (simple) quadcopter flight control system.
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced free vector space.
By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way.
Thus far, the geometric approach always uses the same basis for the input and output spaces. We observe here that this restriction is unnecessary and allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method.
We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain different basis-based attacks in higher-order spaces, where the order is defined as the number of messages used in one sample during the attack.
We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies the ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks.
Second, we revisit the multiple-of-$n$ property with our refined geometric approach and exhibit new multiple-of-$n$ distinguishers that can reach more rounds of the SKINNY-64 cipher than the state-of-the-art.
Third, we study the multiple-of-$n$ property in the first-order case, which is similar to the subspace trail except that it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic, independently of concrete key values.
We emphasize that all these applications were not possible before.
Related-Key Differential and Boomerang Cryptanalysis in the Fixed-Key Model
Differential cryptanalysis, along with its variants such as boomerang attacks, is widely used to evaluate the security of block ciphers. These cryptanalytic techniques often rely on assumptions like the hypothesis of stochastic equivalence and the Markov cipher assumption. Recently, more attention has been paid to verifying whether differential characteristics (DCs) meet these assumptions, finding both positive and negative results.
A part of these efforts includes automatic search methods for both value and difference propagation (e.g., Liu et al., CRYPTO 2020; Nageler et al., ToSC 2025/1), structural constraint analysis (e.g., Tan and Peyrin, ToSC 2022/4), and the quasidifferential framework (Beyne and Rijmen, CRYPTO 2022).
Nevertheless, less attention has been paid to related-key DCs and boomerang distinguishers, where the same assumptions are used. To the best of our knowledge, only some related-tweakey DCs of SKINNY have been checked, thanks to its linear word-based key schedule, and no similar work has been done for boomerang distinguishers.
The verification of related-key DCs and boomerang distinguishers is as important as that of DCs, as they often hold the longest attack records for block ciphers.
This paper focuses on investigating the validity of DCs in the related-key setting and boomerang distinguishers in both single- and related-key scenarios.
For this purpose, we generalize Beyne and Rijmen's quasidifferential techniques to related-key DCs and boomerang attacks.
First, to verify related-key DCs, the related-key quasi-DC is proposed. Similar to the relationship between the quasi-DC and DC, the exact probability of a related-key DC is equal to the sum of all corresponding related-key quasi-DCs' correlations.
Since the related-key quasi-DCs involve the key information, we can determine the probability of the target related-key DC in different key subspaces.
We find both positive and negative results.
For example, we verify the 18-round related-key DC used in the best attack on GIFT-64, finding that this related-key DC has a higher probability for some fraction of the keys but is impossible for the remaining keys.
Second, we identify proper bases to describe the boomerang distinguishers with the geometric approach. A quasi-BCT is constructed to consider the value influence in the boomerang connectivity table (BCT). For the DC parts, the quasi-biDDT is used. Connecting the quasi-BCT and quasi-biDDT, we can verify the probability of a boomerang distinguisher with quasi-boomerang characteristics.
This also allows us to analyze the probability of the boomerang in different key spaces.
For a 17-round boomerang distinguisher of SKINNY-64-128, we find that the probability is higher than expected for half of the keys and that the boomerang is impossible for the other half.
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order on a set of supersingular elliptic curves primitively oriented by that order. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g., in signature or MPC protocols.
Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses 4-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group nor expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at the CSIDH-2048 level, and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes.
Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more.
In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions.
Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction.
Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space.
We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved.
Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments.
These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties.
Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions.
To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP.
Additionally, we use LIP to provide a repair and improvement to the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack from Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper titled ``Anamorphic Encryption: Private Communication Against a Dictator,'' enables embedding covert messages within ciphertexts. The key requirement is that an anamorphic ciphertext, which embeds a covert message, must be indistinguishable from an original ciphertext. This encryption procedure was originally defined based on a public-key cryptosystem. Initially, we present a quantum analogue of the classical anamorphic encryption definition that is based on public-key encryption. Additionally, we introduce a definition of quantum anamorphic encryption that relies on symmetric-key encryption. Furthermore, we provide a detailed generalized construction of quantum anamorphic symmetric-key encryption within a general framework, which involves taking any two quantum density matrices of any different dimensions and constructing a single quantum density matrix, which is the quantum anamorphic ciphertext containing the ciphertexts of both of them. Subsequently, we introduce a definition of computational anamorphic secret-sharing and extend the work of Çakan et al. on computational quantum secret-sharing to computational quantum anamorphic secret-sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Abdalla et al. (ASIACRYPT 2020) introduced a notion of identity-based inner-product functional encryption (IBIPFE) that combines identity-based encryption and inner-product functional encryption (IPFE). Thus far, several pairing-based and lattice-based IBIPFE schemes have been proposed. However, there are two open problems. First, there are no known IBIPFE schemes that satisfy adaptive simulation-based security. Second, known IBIPFE schemes that satisfy adaptive indistinguishability-based security or selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure (IB)IPFE scheme to a simulation-based secure (IB)IPFE scheme; the proposed transformation improves upon Agrawal et al.'s transformation for plain IPFE (PKC 2020) in terms of the required dimension of the underlying scheme. Then, we construct a lattice-based IBIPFE scheme that satisfies tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. We apply the proposed transformation and obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies tight adaptive simulation-based security under the DBDH assumption in the random oracle model. The pairing-based scheme does not use the proposed transformation, so as to achieve the best efficiency.
Blind Signatures from Cryptographic Group Actions
We provide a generic construction of blind signatures from cryptographic group actions following the framework of the blind signature CSI-Otter introduced by Katsumata et al. (CRYPTO '23) in the context of isogenies (a commutative group action). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions which is proven to be secure in the random oracle model. We also propose an instantiation based on a variant of the linear code equivalence problem, interpreted as a symmetric group action.
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
Provably Secure Approximate Computation Protocols from CKKS
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols.
This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other party acts as an evaluator, a simple protocol with the noise smudging technique on the encryptor's side achieves security in the standard sense. When both parties have private inputs, we demonstrate that the protocol incorporating independent errors from each party achieves a relaxed standard security notion, referred to as liberal security. Nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise in a secure manner, which satisfies the standard security definition.
Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol only satisfies liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error.
For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
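The smudging idea that recurs throughout these protocols can be sketched in a few lines: before releasing a value that carries a small secret-dependent error, add fresh noise that is much larger than that error, so the released distribution statistically hides it. The integer toy model below uses hypothetical parameters; in the paper the smudging noise is generated jointly via distributed sampling rather than by one party:

```python
# Toy model of noise smudging: the inherent (secret-dependent) error e is
# statistically drowned out by a much larger fresh smudging error, so the
# released value leaks essentially nothing about e.
import random

def release(true_value, inherent_error_bound, smudging_bits=40):
    # e: what approximate homomorphic evaluation left behind
    e = random.randrange(-inherent_error_bound, inherent_error_bound + 1)
    # smudging noise, ~2^smudging_bits times larger than e
    B = inherent_error_bound << smudging_bits
    e_smudge = random.randrange(-B, B + 1)
    return true_value + e + e_smudge

print(release(10**20, inherent_error_bound=1000))
```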
Reducing the Number of Qubits in Solving LWE
At Crypto 2021, May presented an algorithm solving the ternary Learning-With-Errors problem, where the solution is a ternary vector with a known number of $+1$ and $-1$ entries. This attack significantly improved upon the time complexity of previously known algorithms, measured as a function of the size of the key space. To this end, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional $+1$ and $-1$ entries, reduces the overall time complexity.
Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack with an improved running time. However, this quantum attack requires an exponential amount of qubits. This work investigates whether the ternary LWE problem can also be solved using significantly fewer qubits. To this end, we look closely into Dicke states, which are equal superpositions over all binary vectors with a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem.
Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit to prepare binary Dicke states deterministically in linear time. Their procedure benefits from the inductive structure of Dicke states, i.e., that a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than $\{0,1\}$. Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit.
Finally, we apply our generalized Dicke states to the ternary LWE problem and construct an algorithm that requires only a small number of qubits at the cost of classical memory space, and we determine the best obtainable time complexity in this setting.
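As a point of reference, a generalized Dicke state can be written down directly as the equal superposition over all fixed-type ternary strings. The brute-force construction below (plain Python, exponential in n) only pins down the target state; it is not the paper's linear-time preparation circuit:

```python
# Reference construction of a generalized Dicke state as an explicit
# statevector: the equal superposition over all length-n ternary strings
# with prescribed numbers of +1 and -1 entries.
from itertools import product
from math import sqrt

def generalized_dicke(n, num_plus, num_minus):
    alphabet = (0, 1, -1)  # ternary symbols, generalizing the binary {0,1}
    kets = [s for s in product(alphabet, repeat=n)
            if s.count(1) == num_plus and s.count(-1) == num_minus]
    amp = 1 / sqrt(len(kets))          # equal amplitude on every valid ket
    return {s: amp for s in kets}      # state as {basis string: amplitude}

state = generalized_dicke(n=4, num_plus=1, num_minus=1)
print(len(state), "basis states, amplitude", next(iter(state.values())))
# 4!/(1!*1!*2!) = 12 basis states, each with amplitude 1/sqrt(12)
```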
An Efficient Quantum Oblivious Transfer Protocol
Oblivious Transfer (OT) is a significant two-party privacy-preserving cryptographic primitive. OT involves a sender having several pieces of information and a receiver having a choice bit. The choice bit represents the piece of information that the receiver wants to obtain as the output of OT. At the end of the protocol, the sender remains oblivious to the choice bit and the receiver remains oblivious to the contents of the information that was not chosen. It has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions, which are not quantum secure, and existing quantum OT protocols are not very efficient or practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT is designed using the asymmetric key distribution proposed by Gao et al. [18] as a building block. The designed qOT requires only single photons as a source of quantum states, and the measurements of the states are computed using single-particle projective measurements. These properties make qOT efficient and practical. Our proposed design is secure against quantum attacks. Moreover, qOT also provides long-term security.
Blockchain-based Secure D2D localisation with adaptive precision
In this paper we propose a secure best-effort methodology for providing localisation information to devices in a heterogeneous network where devices do not have access to GPS-like technology or heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighboring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as the DLT case study.
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
A tuple of NP statements $(x_1, \ldots, x_k)$ satisfies a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ if $P(b_1, \ldots, b_k) = 1$, where $b_i = 1$ if and only if $x_i$ is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that $x_1, \ldots, x_k$ satisfy a monotone policy $P$ with a proof whose size scales with the size of the Boolean circuit computing the NP relation, rather than with the number of statements $k$.
Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently-large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffice (e.g., over $\mathbb{Z}_2$). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
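To see the kind of additively homomorphic encryption over $\mathbb{Z}_2$ meant here, the following is a toy Goldwasser-Micali sketch: multiplying ciphertexts XORs the underlying bits. Parameters are deliberately tiny and insecure, purely to show the homomorphism:

```python
# Toy Goldwasser-Micali encryption: Enc(b) = y^2 * x^b mod N, where x is a
# quadratic non-residue with Jacobi symbol +1. Multiplying ciphertexts
# XORs the plaintext bits (additive homomorphism over Z_2).
import math, random

p, q = 499, 547                  # toy primes, both = 3 (mod 4)
N = p * q
x = N - 1                        # -1: Jacobi symbol +1 mod N, but a non-residue

def enc(bit):
    while True:
        y = random.randrange(2, N)
        if math.gcd(y, N) == 1:
            break
    return (pow(y, 2, N) * pow(x, bit, N)) % N

def dec(c):
    # Euler's criterion mod p: c is a square iff the encrypted bit is 0.
    return 0 if pow(c, (p - 1) // 2, p) == 1 else 1

c0, c1 = enc(0), enc(1)
assert dec(c0) == 0 and dec(c1) == 1
assert dec((c0 * c1) % N) == (0 ^ 1)   # Enc(a) * Enc(b) decrypts to a XOR b
```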
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new “LWE-with-hints” assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions.
Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption—a new LWE-with-hint assumption that addresses structural weaknesses from prior assumptions, and based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random ``openings'' of zero-encryptions under the Gentry--Sahai--Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises.
These two features---marginally random hints and the absence of (natural) noise leakage---rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions.
To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components.
In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins, Jain, and Lin, CRYPTO 2021) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes.
Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to ``evasive'' or pseudorandom functionalities.
An ETSI GS QKD compliant TLS implementation
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on the client and server side. We thus wish to contribute to the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video conference call encrypted by QKD.
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts.
Specifically, we consider a setting where a DAO consisting of sellers holding shares of a witness $w$ interacts with a DAO comprising buyers holding shares of a signing key $sk$; the goal is for the sellers to exchange $w$ for a signature under $sk$ transferring a predetermined amount of funds.
Fairness is required to hold both between DAOs (i.e., ensuring that each DAO receives its asset if and only if the other does) as well as within each DAO (i.e., ensuring that all members of a DAO receive their asset if and only if every other member does).
We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
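Threshold adaptor signatures lift a single-signer mechanism to the DAO setting. The sketch below shows a plain (non-threshold) Schnorr-style adaptor signature in a toy group, with illustrative parameters that are not the paper's construction: a pre-signature verifies against a shifted nonce, completing it requires the witness, and the witness can be extracted from the pre-signature/signature pair:

```python
# Single-signer Schnorr adaptor signature in a toy order-11 subgroup of
# Z_23^* (insecure parameters, chosen so the algebra stays visible).
import hashlib, random

p, q, g = 23, 11, 2        # g = 2 generates the order-11 subgroup of Z_23*

def Hc(*ints):
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); X = pow(g, x, p)    # signing key pair
t = random.randrange(1, q); T = pow(g, t, p)    # adaptor witness / statement
m = 42                                          # message

# Pre-sign: nonce R, but the challenge is bound to the shifted nonce R*T.
k = random.randrange(1, q)
R = pow(g, k, p)
c = Hc(R * T % p, X, m)
s_pre = (k + c * x) % q
assert pow(g, s_pre, p) == R * pow(X, c, p) % p       # pre-signature verifies

# Adapt: anyone holding t completes the pre-signature.
s = (s_pre + t) % q
assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p  # standard Schnorr check

# Extract: the two signatures together reveal the witness t.
assert (s - s_pre) % q == t
```

The atomicity in the abstract comes from this last line: publishing the completed signature on-chain necessarily reveals the witness to the other DAO.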
Generic Composition: From Classical to Quantum Security
Authenticated encryption (AE) provides both authenticity and privacy.
Starting with Bellare and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach.
This work investigates the security of the Encrypt-then-MAC composition in a quantum setting, in which adversarial queries as well as the responses to those queries may be in superposition.
We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security.
On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC.
Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure.
The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
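For concreteness, here is the classical Encrypt-then-MAC composition under analysis, as a stdlib-only Python sketch in which a SHA-256 counter keystream stands in for the encryption scheme SE and HMAC-SHA256 plays the MAC/qPRF role (an illustrative instantiation, not the paper's formal model):

```python
# Encrypt-then-MAC: encrypt first, then MAC the nonce and ciphertext;
# on decryption, verify the tag before touching the ciphertext.
import hashlib, hmac, os

def keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def etm_encrypt(ke, km, msg):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(ke, nonce, len(msg))))
    tag = hmac.new(km, nonce + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return nonce, ct, tag

def etm_decrypt(ke, km, nonce, ct, tag):
    expected = hmac.new(km, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid tag")            # reject before decrypting
    return bytes(a ^ b for a, b in zip(ct, keystream(ke, nonce, len(ct))))

ke, km = os.urandom(32), os.urandom(32)
n, c, t = etm_encrypt(ke, km, b"attack at dawn")
assert etm_decrypt(ke, km, n, c, t) == b"attack at dawn"
```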
How Small Can S-boxes Be
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives.
Both cryptographic properties and implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones.
This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (whose differential uniformity and linearity are both 4) under a given standard cell library.
Firstly, we establish upper and lower bounds on the minimum area of S-boxes by proposing a Prim-like greedy algorithm and utilizing properties of balanced Boolean functions to construct bijective S-boxes.
Secondly, a SAT-aided automatic search tool is proposed that can simultaneously consider multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, together with implementation costs such as area and gate depth complexity.
Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes.
The measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library; all 2-, 3-, and 4-input logic gates are taken into consideration.
Our results show that the minimum area of an optimal 4-bit S-box is 11 GE and the depth is 3.
If we do not use the 4-input gates, this minimum area increases to 12 GE and the depth in this case is 4, which is the same if we only use 2-input gates.
If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while keeping the same depth.
Interestingly, the same results are also obtained for non-optimal 4-bit bijective S-boxes, as long as their differential uniformity and linearity are non-trivial (i.e., there are no non-trivial linear structures), if only 2-input and 3-input gates are used; but the minimum area reduces to 9 GE if 4-input gates are involved.
More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with fixed points and 14.33 GE without fixed points, and the depth increases sharply to 8.
Besides determining the exact minimum area, our tool is also useful for searching for better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box with 17 GE. In contrast, the designers' original circuit has an area of 23.33 GE, while the optimized result by Lu et al. achieves an area of 17.66 GE. Also, we find the first optimized implementation of SKINNY's 8-bit S-box, with 26.67 GE.
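The two properties defining optimality above are easy to check by machine. Below is a short sketch, using the PRESENT S-box as an example and taking linearity as max |Walsh value| / 2, a convention under which optimal 4-bit S-boxes score 4 on both counts:

```python
# Differential uniformity and linearity of a 4-bit S-box, checked on the
# PRESENT S-box (a well-known optimal S-box: both values come out to 4).
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def diff_uniformity(s):
    n = len(s)
    # max count of inputs mapping difference dx to output difference dy
    return max(sum(s[x] ^ s[x ^ dx] == dy for x in range(n))
               for dx in range(1, n) for dy in range(n))

def linearity(s):
    n = len(s)
    def walsh(a, b):
        # parity of a.x XOR b.S(x), summed with signs over all inputs
        return sum((-1) ** (bin((a & x) ^ (b & s[x])).count("1") % 2)
                   for x in range(n))
    return max(abs(walsh(a, b)) for a in range(n) for b in range(1, n)) // 2

print(diff_uniformity(SBOX), linearity(SBOX))   # -> 4 4
```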
MERCURY: A Multilinear Polynomial Commitment Scheme with Constant Proof Size and No Prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials in which constructing an opening proof requires a number of field operations linear in the polynomial size, together with scalar multiplications, and no prover FFTs. Moreover, the opening proof consists of a constant number of field elements. This is a significant improvement over previous works, which required either a superlinear number of field operations or a non-constant-size opening proof. The main technical component is a new method of verifiably folding a witness via univariate polynomial division. As opposed to previous methods, the proof size and prover time remain constant regardless of the folding factor.
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in Tate and optimal Ate pairings. While optimizations for elliptic curves with even embedding degrees have been well explored, progress for curves with odd embedding degrees, particularly those divisible by 3, has been more limited. This paper presents new optimization techniques for computing the final exponentiation of the optimal Ate pairing on these curves. The first exploits the fact that some existing seeds have a form enabling cyclotomic cubing and extends this to generate new seeds with the same form. The second is to generate new seeds with sparse ternary representations, replacing squarings with cyclotomic cubings. The first technique improves efficiency over the square-and-multiply (SM) method for existing seeds at standard security levels, with further gains for newly generated seeds; the second technique yields comparable improvements across the same security levels.
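The role of a sparse ternary representation can be illustrated with ordinary modular exponentiation: writing the exponent with digits in {-1, 0, 1} base 3 lets cubings drive the main loop, with one multiplication (or division) per nonzero digit. Plain modular cubing stands in here for the cyclotomic cubing used on the actual pairing target group:

```python
# Exponentiation driven by a signed base-3 (sparse ternary) representation:
# one cubing per digit, plus a multiply/divide only on nonzero digits.
def signed_ternary(e):
    """Digits d_i in {-1, 0, 1} with e = sum d_i * 3^i (MSB first)."""
    digits = []
    while e:
        r = e % 3
        if r == 2:          # rewrite digit 2 as -1 with a carry
            r = -1
            e += 1
        digits.append(r)
        e //= 3
    return digits[::-1]

def pow_ternary(x, e, p):
    inv_x = pow(x, p - 2, p)        # x^{-1} mod p, used for -1 digits
    acc = 1
    for d in signed_ternary(e):
        acc = pow(acc, 3, p)        # the cubing step (cyclotomic cubing analogue)
        if d == 1:
            acc = acc * x % p
        elif d == -1:
            acc = acc * inv_x % p
    return acc

p = 2**127 - 1  # a Mersenne prime
e = 2**40 + 3**25 - 7
assert pow_ternary(12345, e, p) == pow(12345, e, p)
```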
Pencil: A Domain-Extended PRF with Full $n$-bit Security for Strengthening GCM and More
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRFs) whose domain is twice that of the underlying blockcipher. We present two such constructions, which provide weak PRF and full PRF security, respectively, where both achieve full $n$-bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state, which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires sufficiently long random nonces and corresponding BBB (weak) PRF constructions that admit matching input sizes. NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES. Our work makes relevant contributions to this domain in two important ways. Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from our constructions, without resorting to a 256-bit blockcipher. Our second contribution is that our full PRF, Pencil, can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, Pencil being a full PRF, as opposed to DNDK, which is only a weak PRF, allows one to prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting our weak PRF construction and DNDK at comparable parameters, ours requires only half the blockcipher calls.
On the Security and Privacy of CKKS-based Homomorphic Evaluation Protocols
CKKS is a homomorphic encryption (HE) scheme that supports arithmetic over complex numbers in an approximate manner.
Despite its utility in PPML protocols, formally defining the security of CKKS-based protocols is challenging due to its approximate nature.
To be precise, in a sender-receiver model, where the receiver holds input ciphertexts and the sender evaluates its private circuit, it is difficult to define the sender's privacy in terms of indistinguishability, whereas the receiver's privacy is easily achieved through the semantic security of CKKS.
In this paper, we present a new definition for CKKS-based protocols, called Differentially Private Homomorphic Evaluation (DPHE) protocols, along with a general method to achieve this.
In our definition, we relax the sender's privacy condition from indistinguishability to a differential privacy notion.
We focus on the fact that the main security concern for most PPML protocols is differential privacy of the evaluation results, rather than simulatability of the evaluation.
We prove that if the ideal functionality satisfies differential privacy and a protocol satisfies DPHE, then the output of the protocol also satisfies differential privacy.
Next, we provide a general compiler that transforms a plain CKKS-based protocol into a DPHE one.
We achieve this by mixing the Laplace mechanism and zero-knowledge argument of knowledge (ZKAoK) for CKKS.
This approach allows us to achieve the sender's privacy with moderate noise, whereas the previous indistinguishability-based approach requires exponentially large overhead.
Finally, we provide a concrete instantiation of ZKAoK for CKKS in the form of PIOP.
To prove the well-formedness of CKKS ciphertexts and public keys, we devise new proof techniques that use homomorphic evaluation during verification.
We also provide an implementation to demonstrate the practicality of our ZKAoK for CKKS by compiling PIOPs using the HSS polynomial commitment scheme (Crypto'24).
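The differential privacy side of the compiler rests on the standard Laplace mechanism, sketched below for a scalar query; in the actual protocol, calibrated noise of this kind would be added homomorphically to a CKKS ciphertext rather than to a plain float:

```python
# Laplace mechanism: adding Laplace(0, sensitivity/epsilon) noise makes a
# real-valued query epsilon-differentially private.
import random, math

def laplace_mechanism(true_value, sensitivity, epsilon):
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse CDF from a uniform on (-1/2, 1/2).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# e.g. a count query with sensitivity 1 and privacy budget epsilon = 0.5
print(laplace_mechanism(1234, sensitivity=1.0, epsilon=0.5))
```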
Faster FHEW Bootstrapping with Adaptive Key Update
Lee et al. proposed a new bootstrapping algorithm based on homomorphic automorphism, which merges the empty sets of ciphertexts by adjusting the window size. This algorithm supports arbitrary secret key distributions with no additional runtime costs while using small evaluation keys. However, our implementation reveals that once the window size exceeds a certain threshold, the time required for bootstrapping remains relatively constant. This observation prompts the question of how to further reduce the running time.
To address this challenge, we introduce a new trick called Adaptive Key Update (AKU). With AKU and automorphism techniques, we propose a new bootstrapping algorithm for Gaussian secret keys that requires only a reduced number of external products and no key switching during blind rotation.
Building on this, we employ window size optimization and key switching techniques to further improve the algorithm. The improved algorithm provides a useful trade-off between key storage and computational efficiency, depending on the choice of the window size.
Compared to the current fastest FHEW bootstrapping method for Gaussian secret keys (the LLWW+ method proposed by Li et al.), our AKU-based algorithm reduces the number of key-switching operations by 76% and decreases the bootstrapping running time by 20.7%. In this case, the practical runtime of bootstrapping is approximately equal to that of performing only the external products.
A New Generalized Attack on RSA-like Cryptosystems
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed \equiv 1 \pmod{(p-1)(q-1)}$. Recently, Cotan and Teșeleanu (NordSec 2023) introduced a variant of RSA, where the public exponent $e$ and the private exponent $d$ satisfy a modified key equation parameterized by a positive integer. In this paper, we study the corresponding general key equation. We show that, given the public parameters $N$ and $e$, one can recover $d$ and factor the modulus $N$ in polynomial time by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques, under specific conditions on the parameters. Furthermore, we show that if the private exponent $d$ in an RSA-like cryptosystem is either too small or too large, then $N$ can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
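The continued-fraction ingredient is easiest to see in its classic form, Wiener's attack on standard RSA with a small private exponent, sketched below on a deliberately insecure toy instance. The paper combines this style of attack with Coppersmith's lattice-based method for its generalized key equation:

```python
# Wiener's attack: when d is small, k/d appears among the convergents of
# the continued fraction of e/N, which reveals phi(N) and hence p and q.
from math import isqrt

def convergents(a, b):
    """Yield the convergents p/q of the continued fraction of a/b."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    while b:
        t = a // b
        p0, q0, p1, q1 = p1, q1, t * p1 + p0, t * q1 + q0
        a, b = b, a - t * b
        yield p1, q1

def wiener(e, N):
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate for phi(N)
        s = N - phi + 1                 # candidate for p + q
        disc = s * s - 4 * N            # equals (p - q)^2 if correct
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

# Toy instance with a deliberately small d (insecure by construction).
p, q = 65537, 65539
N, phi = p * q, (p - 1) * (q - 1)
d = 71                                  # d < N**0.25 / 3: Wiener's regime
e = pow(d, -1, phi)
assert wiener(e, N) == d
```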
A Complete Security Proof of SQIsign
SQIsign is the leading digital signature scheme from isogenies. Despite the many improvements that have appeared in the literature, all its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures.
To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Ongoing efforts to transition to post-quantum secure public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4 capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the all-but-one vector commitments, VOLE generation, and AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion.
Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips.
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Cryptographic group actions provide simple post-quantum generalizations of many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the Vectorization Problem with Shifted Inputs, where (in DLP formalism) the adversary not only receives $g$ and $g^x$, but also $g^{c \cdot x}$ for multiple known values of $c$. A natural open question is then whether the extra data provided to the adversary in this variant allows for more efficient attacks. In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm.
We then apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg’s algorithms, group action evaluations contributed a significant part of the overall T-gate cost. For the suggested CSI-SharK parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant overall T-gate complexity improvements. We also show that the quantum security of the protocol decreases when the number of public keys increases, and we quantify this degradation.
Beyond its direct application to the CSI-SharK protocol, our work more generally questions the quantum security of vectorization problem variants, and it introduces the Childs-van Dam algorithm as a new quantum cryptanalysis tool.
Evasive LWE: Attacks, Variants & Obfustopia
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private- and public-coin regimes, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO.
Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded-depth circuits by relying on the (public-coin) ''circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but the post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre and post conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities and extended this to obfuscation for pseudorandom functionalities [AKY24c], by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] showed that there exist contrived, somewhat ''self-referential'', classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result in the setting of pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime.
Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions to identify safe zones for the assumption, using which the broken applications can be recovered.
Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that these candidates are presented mainly as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Simple and General Counterexamples for Private-Coin Evasive LWE
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Split Prover Zero-Knowledge SNARKs
We initiate the study of {\em split prover zkSNARKs}, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., Zcash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup gives Alice an opportunity to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known.
To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner.
We propose a split prover zkSNARK based on the Groth16 zkSNARK [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also \emph{asymptotically tight}, meaning it achieves the optimal second-phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of matrix $g$, having entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices $g_1, g_2$, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p,\infty}$.
The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-$2$ curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting $(2,2)$-isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Functional Oblivious Transfer with Applications in Privacy-Preserving Machine Learning
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn $k$ out of $n$ private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining messages. In this work, we introduce the notion of functional OT (FOT) for the first time. FOT adds a layer of security to conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected messages is mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages, mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant with respect to $n$. Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
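For readers unfamiliar with the base primitive, the following Python sketch shows a classic 1-out-of-2 OT in the style of Even-Goldreich-Lempel. It is a toy illustration of the OT interface that FOT builds on, not one of the paper's protocols; the textbook RSA parameters are hypothetical and insecure outside this demo.

import secrets

# Sender: textbook RSA key (toy primes, no padding -- for illustration only).
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
m0, m1 = 4242, 1717                     # the sender's two private messages

# Round 1: sender sends two random values.
x0, x1 = secrets.randbelow(n), secrets.randbelow(n)

# Round 2: receiver encodes its choice bit b, blinded by a random k.
b, k = 1, secrets.randbelow(n)
v = ([x0, x1][b] + pow(k, e, n)) % n    # v - x_b is a random e-th power

# Round 3: sender answers both branches; only the chosen one unblinds correctly.
r0 = (m0 + pow(v - x0, d, n)) % n       # (v - x_b)^d = k; the other branch is random
r1 = (m1 + pow(v - x1, d, n)) % n

assert ([r0, r1][b] - k) % n == m1      # receiver learns m_b, nothing about m_(1-b)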
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a ''dictator'') who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows the sender to transmit an anamorphic message using a double key to a receiver that can decrypt this message using a double key. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all) and the receiver requires a decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations.
In this paper we ask whether we can design PKAE schemes that avoid these limitations and are closer to the PKE schemes used in practice. Such schemes are, in fact, more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than just CCA) secure PKE.
Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in the case where only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Here, too, we consider security beyond IND-CPA for the anamorphic channel.
Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK (Carlton et al.) and BST (Bourse et al.) protocols are based, are not difficult to solve when the prime base is small. Under precisely these conditions, the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
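The classical attack being generalized here can be made concrete: in $\mathbb{Z}_p^*$ with a non-square base, Legendre symbols of $g^a$ and $g^b$ determine whether $g^{ab}$ is a quadratic residue, which distinguishes DDH tuples from random ones. A toy Python sketch with hypothetical parameters:

import secrets

p, g = 1000003, 2                       # toy prime with p % 8 == 3, so 2 is a non-square

def legendre(x):
    return pow(x, (p - 1) // 2, p)      # 1 if x is a quadratic residue, p - 1 if not

a, b = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
A, B = pow(g, a, p), pow(g, b, p)
real = pow(g, a * b, p)
rand = pow(g, secrets.randbelow(p - 1), p)

# Since g is a non-square, g^(ab) is a square iff a or b is even -- and the
# parities of a and b are readable off legendre(A) and legendre(B).
predict_square = legendre(A) == 1 or legendre(B) == 1
print(predict_square == (legendre(real) == 1))   # always True
print(predict_square == (legendre(rand) == 1))   # True only about half the time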
Polynomial Secret Sharing Schemes and Algebraic Matroids
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret.
Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids.
Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
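As a concrete instance of the linear special case these results generalize, the following Python sketch implements Shamir's scheme, where each party's share is a single degree-(t-1) polynomial evaluated on the secret and random field elements (toy field, for illustration only):

import secrets

P = 2**61 - 1                           # prime field GF(P)

def share(secret, t, n):
    # Shares are evaluations of a random degree-(t-1) polynomial with f(0) = secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0: works for any qualified set of >= t shares.
    s = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789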
Partial Lattice Trapdoors: How to Split Lattice Trapdoors, Literally
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively?
We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors.
We establish that such objects exist and have non-trivial performance, under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions.
Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
Enabling Microarchitectural Agility: Taking ML-KEM & ML-DSA from Cortex-M4 to M7 with SLOTHY
Highly optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA. The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance. However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming. SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously, using constraints modeling the architectural and microarchitectural details of the target platform.
In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture while maximizing performance. As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7. Our results suggest that this approach is promising: for the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve substantial speed-ups for both schemes. For Keccak – the permutation used by SHA-3 and SHAKE and also heavily used in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code. For many other building blocks, we achieve similarly significant speed-ups. Overall, this results in 11 to 33% faster code for the entire cryptosystems.
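For context, the kernel at the heart of these optimizations is easy to specify. The Python sketch below is a naive O(n^2) cyclic NTT over Z_q with ML-KEM's modulus q = 3329 and root of unity 17; it is a readable specification rather than the standards' exact negacyclic transform, and the O(n log n) butterfly versions of such maps are what SLOTHY schedules on real hardware.

q, n, w = 3329, 256, 17                 # ML-KEM's modulus; 17 has order 256 mod 3329

def ntt_naive(a):
    # Evaluate the length-n coefficient vector at w^0, ..., w^(n-1): the O(n^2) spec.
    return [sum(aj * pow(w, i * j, q) for j, aj in enumerate(a)) % q
            for i in range(n)]

def intt_naive(A):
    # Inverse transform: interpolate back and scale by n^{-1} mod q.
    wi, ni = pow(w, -1, q), pow(n, -1, q)
    return [ni * sum(Aj * pow(wi, i * j, q) for j, Aj in enumerate(A)) % q
            for i in range(n)]

a = list(range(n))
assert intt_naive(ntt_naive(a)) == a    # round-trips exactly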
Lattice-Based Updatable Public-Key Encryption for Group Messaging
Updatable Public-Key Encryption (UPKE) augments the security of PKE with Forward Secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged Forward-Secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt’24) presented the strongest security notion to date. It is the first to meet the needs of UPKE’s most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM).
In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module-LWE assumption, our construction is practically efficient. Moreover, its public key and ciphertext sizes are a fraction of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé – despite the latter only being shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
Traitor Tracing in the Multi-sender Setting (Traceable Multi-client Functional Encryption)
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data.
However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold:
- We propose the first tracing model in the context of multi-sender encryption, namely traceable multi-client functional encryption, which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting.
- To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme for inner-product functionalities that achieves post-quantum security under standard assumptions.
The Security of Hash-and-Sign with Retry against Superposition Attacks
Considering security against quantum adversaries, while it is important to consider the traditional existential unforgeability (EUF-CMA security), it is desirable to also consider security against adversaries making quantum queries to the signing oracle: plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weakening the security assumptions on the trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security even with derandomization.
In this paper, we apply derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
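The derandomization with bounded loops analysed here has a simple shape: retry randomness is derived as a PRF of the message and the loop index, and the loop is capped at a bound B. A toy Python sketch (all primitives and parameter names are hypothetical stand-ins, not the paper's construction):

import hashlib, hmac

B = 64                                  # loop bound (a hypothetical choice)

def prf(key, msg, i):
    return hmac.new(key, msg + i.to_bytes(4, "big"), hashlib.sha256).digest()

def invert(y):
    # Toy stand-in for a trapdoor pre-image sampler that fails on some inputs.
    return y[::-1] if y[0] % 4 else None   # "fails" with probability ~1/4

def sign(prf_key, msg):
    # Derandomized retry: the i-th salt is PRF(k, msg, i), so signing is
    # deterministic and stateless, and the loop runs at most B times.
    for i in range(B):
        salt = prf(prf_key, msg, i)
        y = hashlib.sha256(msg + salt).digest()
        x = invert(y)
        if x is not None:
            return salt, x              # signature = (salt, pre-image)
    raise RuntimeError("loop bound exceeded (probability ~4^-64 here)")

salt, x = sign(b"prf-key", b"hello")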
Adaptively Secure Fully Homomorphic Message Authentication Code with Pre-processable Verification
There has been remarkable progress in fully homomorphic encryption ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively little attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated.
In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Predicate Encryption from Lattices: Enhanced Compactness and Refined Functionality
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality.
First, we present a more compact bounded-collusion predicate encryption scheme compared to previous constructions, significantly reducing both the per-unit expansion and the fixed overhead, while maintaining an optimal blow-up linear in the collusion bound.
Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports arbitrary polynomial-size, bounded-depth circuits, thereby extending beyond the inner-product predicate classes of prior works.
Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model, and can therefore attain semi-adaptive security by adopting existing upgrading techniques.
Vanishing Short Integer Solution, Revisited: Reductions, Trapdoors, Homomorphic Signatures for Low-Degree Polynomials
The vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23], in its simplest form, asserts the hardness of finding a polynomial with short coefficients which vanishes at a given random point. While vSIS has proven to be useful in applications such as succinct arguments, not much is known about its theoretical hardness. Furthermore, without the ability to generate a hard instance together with a trapdoor, the applicability of vSIS is significantly limited.
We revisit the vSIS assumption, focusing on the univariate, single-point, constant-degree setting, which can be seen as a generalisation of the (search) NTRU problem. In this setting, we show that the vSIS problem is as hard as finding the shortest vector in certain ideal lattices. We also show how to generate a random vSIS instance together with a trapdoor, under the (decision) NTRU assumption. Interestingly, a vSIS trapdoor allows one to sample polynomials with short coefficients which evaluate to any given value at the public point. By exploiting the multiplicativity of the polynomial ring, we use vSIS trapdoors to build a new homomorphic signature scheme for low-degree polynomials.
A Note on Zero-Knowledge Simulator of the CROSS Identification Protocol
We point out a flaw in the zero-knowledge simulator of the CROSS identification protocol, which allows a distinguisher to distinguish real and simulated transcripts given access to the witness. Moreover, we show that the real and simulated transcripts are not statistically indistinguishable, and therefore the protocol can only satisfy weak computational (rather than strong, statistical, or perfect) honest-verifier zero-knowledge. This issue is still present in version 2.0, updated on January 31, 2025, which resolves the security losses incurred via the attacks of [BLP+25].
The Complexity of Memory Checking with Covert Security
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and local storage at most $n^{1-\epsilon}$ must make $\Omega(\log n / \log \log n)$ queries, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions.
We consider and adapt the notion of covert security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting.
We close this gap and prove that the $\Omega(\log n / \log \log n)$ overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural "read-only reads" property. This property requires a memory checker not to modify the contents of the database or local storage during the execution of a logical read instruction.
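For intuition, the classic Merkle-tree memory checker matches this query bound and satisfies the "read-only reads" property: the user stores only the root, and each logical read issues a root-to-leaf path of queries without modifying anything. A minimal Python sketch (an illustration, not the paper's model in full generality):

import hashlib

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

n = 8                                   # database size (a power of two)
db = [b"value%d" % i for i in range(n)]

tree = [b""] * (2 * n)                  # server-side: tree[1] is the root
for i in range(n):
    tree[n + i] = H(db[i])
for i in range(n - 1, 0, -1):
    tree[i] = H(tree[2 * i], tree[2 * i + 1])

root = tree[1]                          # user-side local storage: a single hash

def checked_read(i):
    # log(n) queries to the untrusted server, none of which writes anything.
    h, node = H(db[i]), n + i
    while node > 1:
        sibling = tree[node ^ 1]        # query: sibling hash along the path
        h = H(h, sibling) if node % 2 == 0 else H(sibling, h)
        node //= 2
    if h != root:
        raise RuntimeError("server cheated")
    return db[i]

assert checked_read(5) == b"value5"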
Random Number Generation from Pulsars
Pulsars exhibit signals with precise inter-arrival times that are on the order of milliseconds to seconds, depending on the individual pulsar. There is subtle variation in the timing of pulsar signals, primarily due to the presence of gravitational waves, intrinsic variance in the period of the pulsar, and errors in the realization of Terrestrial Time (TT). Traditionally, these variations are dismissed as noise in high-precision timing experiments. In this paper, we show that these variations serve as a natural entropy source for the creation of Random Number Generators (RNGs). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from pulsar timing data. To evaluate the quality of the pulsar RNG, we model its entropy as a $k$-source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the pulsar RNG passes well-known statistical tests such as the NIST test suite.
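A toy Python version of the pipeline, with simulated arrival times standing in for real pulsar data and the simple von Neumann debiaser standing in for the stronger seeded extractors the paper evaluates:

import random

period = 0.714                          # simulated pulsar period, in seconds
t, arrivals = 0.0, []
for _ in range(20000):
    t += period + random.gauss(0, 1e-4) # Gaussian jitter stands in for real timing noise
    arrivals.append(t)

raw = [int(x * 1e6) & 1 for x in arrivals]   # LSB of the microsecond timestamp

def von_neumann(bits):
    # Debias: map pairs 01 -> 0, 10 -> 1, discard 00 and 11.
    return [b0 for b0, b1 in zip(bits[::2], bits[1::2]) if b0 != b1]

out = von_neumann(raw)
print(len(out), sum(out) / len(out))    # about n/4 bits, empirical mean near 0.5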
Lattice-based Proof-Friendly Signatures from Vanishing Short Integer Solutions
Efficient anonymous credentials are typically constructed by combining proof-friendly signature schemes with compatible zero-knowledge proof systems. Inspired by pairing-based proof-friendly signatures such as Boneh-Boyen (BB) and Boneh-Boyen-Shacham (BBS), we propose a wide family of lattice-based proof-friendly signatures based on variants of the vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23]. In particular, we obtain natural lattice-based adaptations of BB and BBS which, similar to their pairing-based counterparts, admit nice algebraic properties.
[Bootle-Lyubashevsky-Nguyen-Sorniotti, Crypto'23] (BLNS) recently proposed a framework for constructing lattice-based proof-friendly signatures and anonymous credentials, based on another new lattice assumption, $\mathsf{ISIS}_f$, parametrised by a fixed function $f$, with focus on $f$ being the binary decomposition. We introduce a generalised framework in which the function $f$ is keyed and probabilistic. For example, keying $f$ with a short ring element leads to algebraic, and thus proof-friendly, signatures. To better gauge the robustness and proof-friendliness of the framework, we consider what happens when the inputs to $f$ are chosen selectively (or even adaptively) by the adversary, and the behaviour under relaxed norm checks. While bit decomposition quickly becomes insecure, our proposed function families seem robust.
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Multi-signatures allow combining several individual signatures into a compact one that can be verified against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This allows keeping the simple key generation of multi-signatures while supporting flexible threshold settings in the signing process later on. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model.
In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts the threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed to realize flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but requires a new flexible-base DL assumption to do so.
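The homomorphism at play can be illustrated with Pedersen commitments, where the product of two commitments commits to the sum of the messages; this is the kind of structure the CP abstraction has to capture. A toy Python sketch (toy parameters; in practice h must be derived so that its discrete log with respect to g is unknown):

import secrets

p, q = 2039, 1019                       # toy safe prime p = 2q + 1
g, h = 4, 9                             # order-q elements; h's dlog must be unknown
                                        # in any real instantiation
def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

m1, m2 = 123, 456
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
c1, c2 = commit(m1, r1), commit(m2, r2)

# Multiplying commitments adds the committed values (and the randomness):
assert c1 * c2 % p == commit(m1 + m2, (r1 + r2) % q)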
Delayed-Input Multi-Party Computation
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties’ inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the number of rounds of communication required in the online phase, and propose a round-preserving compiler that can turn a large class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. In particular, our compiler can be applied to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest-majority setting with black-box simulation.
As part of our contribution, we propose a new definition we call Multi-Party Computation with Adaptive-Input Selection, which allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue not only that the messages of the offline phase are input-independent, but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition any protocol should satisfy in order to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting with no setup and where the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority).
As a corollary, we obtain the first four-round (which is optimal) MPC protocol where the first two rounds can be preprocessed, and whose security holds against adaptive-input selection.
Stronger Security for Threshold Blind Signatures
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can it link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability, which demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting.
In this work, we study the security of threshold blind signatures and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clearly understood than they may seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinds. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles, along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin’s transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin’s transform, which represents the main advantage of Fischlin’s protocol over existing schemes.
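As a baseline for comparison, here is the Fiat-Shamir transform applied to Schnorr's proof of knowledge of a discrete log, in toy Python: the challenge is the hash of the statement and first message. Proving this secure needs a programmable random oracle and rewinding extraction, which is precisely what the straight-line techniques above avoid. Parameters are hypothetical toy values.

import hashlib, secrets

p, q, g = 2039, 1019, 4                 # toy Schnorr group: p = 2q + 1, g of order q

def H(*parts):
    data = b"|".join(str(v).encode() for v in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q)                # witness
X = pow(g, x, p)                        # statement: X = g^x

r = secrets.randbelow(q)                # prover: commit ...
A = pow(g, r, p)
c = H(g, X, A)                          # ... hash replaces the verifier's challenge
z = (r + c * x) % q                     # ... respond

assert pow(g, z, p) == A * pow(X, c, p) % p and c == H(g, X, A)   # verifier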
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With its deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon, and our contributions are threefold.
First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve key recovery using power leakages within Falcon's Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the number of traces required by our attack for a full key recovery is reduced by at least 85%.
Second, we present a complete power analysis of two exposed power leakages within Falcon’s integer Gaussian sampler. We identify new sources of these leakages, which had not been identified by previous works, and conduct detailed security evaluations of the reference implementation of Falcon on ChipWhisperer.
Third, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole of Falcon’s integer Gaussian sampler. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Bootstrapping with RMFE for Fully Homomorphic Encryption
There is a heavy preference towards instantiating the BGV and BFV homomorphic encryption schemes with a cyclotomic order that is a power of two, as this admits highly efficient fast Fourier transforms. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping.
In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction.
We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib and benchmark them against thin bootstrapping. Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. Using correction maps instead yields additional multiplicative depth and much lower latencies, at the cost of lower packing capacity.
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where parties are executed in sequence and each party ``speaks'' only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions (CRYPTO 2022), as well as more efficient computationally-secure protocols either assuming a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries.
In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways:
- Assuming the existence of non-interactive perfectly binding commitments, we design protocols with a near-optimal number of parties that are efficient and secure whenever the number of corruptions is small compared to the security parameter (e.g., constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup. It also shows that fewer parties are necessary and sufficient in the computational setting than are required for information-theoretic security.
- Under the same assumption, we design protocols whose number of parties depends on the adversarial network model and which remain efficient for a larger range of corruption thresholds. This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions.
- We design efficient protocols, again with a number of parties depending on the adversarial network model, assuming only the existence of one-way functions.
We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
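The OPRF component can be sketched as blinded exponentiation: the client learns H(PIN)^k while the server only ever sees a blinded group element. A toy Python sketch (toy group and hash-to-group; the deployed protocol's concrete groups and encodings differ):

import hashlib, secrets

p, q, g = 2039, 1019, 4                 # toy group of prime order q inside Z_p^*

def hash_to_group(pin):
    # Toy hash-to-group: fine for the flow, NOT for security (its dlog is known).
    return pow(g, int.from_bytes(hashlib.sha256(pin).digest(), "big") % q, p)

k = 1 + secrets.randbelow(q - 1)        # server's long-term OPRF key

x = hash_to_group(b"1234")              # client's PIN, mapped into the group
r = 1 + secrets.randbelow(q - 1)        # client blinds ...
blinded = pow(x, r, p)
evaluated = pow(blinded, k, p)          # ... server exponentiates the blind ...
out = pow(evaluated, pow(r, -1, q), p)  # ... client unblinds

assert out == pow(x, k, p)              # = PRF_k(PIN); the server never saw the PIN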
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are possible, they would introduce significant overhead.
In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest-majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized three-layer multiplication protocol. Additionally, by exploiting reusability properties within the computation process, we propose lightweight compression protocols that substantially improve the efficiency of multiplication verification. We also develop a batch check protocol to reduce the computational complexity of revealing operations in the malicious setting. For 63-party neural network inference, compared to the semi-honest LXY24, Helix is only 1.9× (1.1×) slower in the online phase and 1.2× (1.1×) slower in preprocessing under LAN (WAN) in the best case.
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose.
Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of a thousand better multiplication throughput and a factor of ten better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
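The residue-system idea can be pictured one layer at a time: represent integers by their residues modulo pairwise-coprime word-sized moduli, multiply component-wise, and reconstruct with the CRT. A single-layer Python sketch (moduli chosen for illustration only; the paper composes three such layers):

from math import prod

moduli = [2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1]   # pairwise coprime Mersenne primes
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def rns_mul(a, b):
    # One big multiplication becomes independent word-sized multiplications.
    return [ai * bi % m for ai, bi, m in zip(a, b, moduli)]

def from_rns(residues):
    # CRT: recover the unique x < M with x = r_i (mod m_i) for all i.
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

a, b = 123456789012345, 987654321098765
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b % M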
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no existing work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability of data owners non-interactively. The correctness of data users in decryption and the public verifiability of proxies in re-encryption are then guaranteed seamlessly through execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework, providing clear principles for data ownership and usage, based on the PVTPRE scheme and blockchain. Blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation with both privacy and public verifiability in mind, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility and effectiveness of our design. The experimental results show that our PVTPRE outperforms other PREs in many aspects.
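The verifiability mechanism PVSS-style sharing provides can be seen in its simplest (Feldman) form: the dealer publishes commitments to the polynomial coefficients, and anyone can check a share against them. A toy Python sketch (full PVSS additionally publishes encrypted shares with NIZK proofs, as used above for the re-encryption keys):

import secrets

p, q, g = 2039, 1019, 4                 # toy group of prime order q

t, n = 3, 5
coeffs = [secrets.randbelow(q) for _ in range(t)]   # coeffs[0] is the secret
commitments = [pow(g, a, p) for a in coeffs]        # published by the dealer

def share(i):
    return sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q

def verify(i, s):
    # Anyone can check: g^s must equal prod_j commitments[j]^(i^j).
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return pow(g, s, p) == rhs

assert all(verify(i, share(i)) for i in range(1, n + 1))
assert not verify(1, (share(1) + 1) % q)            # a bad share is caught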
Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
Generalized secret sharing (GSS), which offers more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address this gap, we propose a publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property for gaining trustworthiness in decentralized systems like blockchains. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on linear secret sharing schemes (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, where any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and of the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts, which is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST’s post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work.
Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
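Structurally, the change analysed here is small; the Python sketch below contrasts plain FO encapsulation, which derives the encryption coins as G(m), with the salted variant deriving them as G(m, salt) for a fresh public salt. The PKE and hash calls are hypothetical stand-ins, and key-derivation details differ from the FrodoKEM specification.

import hashlib, secrets

def G(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def toy_enc(pk, m, coins):
    return G(b"enc", pk, m, coins)      # placeholder for deterministic PKE encryption

def encaps_plain(pk):
    m = secrets.token_bytes(32)
    c = toy_enc(pk, m, G(b"coins", m))          # coins depend on m alone
    return c, G(b"key", m, c)

def encaps_salted(pk):
    m, salt = secrets.token_bytes(32), secrets.token_bytes(32)
    c = toy_enc(pk, m, G(b"coins", m, salt))    # fresh salt enters the coin derivation,
    return (c, salt), G(b"key", m, c)           # thwarting precomputation across many
                                                # ciphertexts; the salt ships in the clear
c, K = encaps_plain(b"pk")
(c2, salt), K2 = encaps_salted(b"pk")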
Traceable Threshold Encryption without Trusted Dealer
The fundamental assumption in $t$-out-of-$n$ threshold encryption is that the adversary can only corrupt fewer than $t$ parties. Unfortunately, it may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where $t$ or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question that remains is: can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares?
In this paper, we present two dealerless traceable threshold encryption constructions with different merits, obtained by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent-setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves a compact amortized ciphertext size when sufficiently many ciphertexts are produced. Our second construction achieves constant ciphertext size even in the worst case, but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties.
An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing traitors.
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption: if $t$ or more parties collude to construct a decryption box that performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEV without any consequence; the possibility of tracing discourages collusion.
Nevertheless, their definitions leave room for exploitation, as they only achieve CPA security and do not consider inconsistency in decryption across different participating sets.
This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define) and converts them into CCA-secure constructions by adapting two generic transformations: first using a one-time signature, and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on the Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys; it is proven secure under XDH and BDDH. Our second construction is based on the Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys, and is proven secure under a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertexts are much (~6x) smaller than the second construction's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness.
Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
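In schematic terms, the contrast between the two update styles can be written as follows (notation ours, purely illustrative):

```latex
% Noisy linear shift vs. norm-preserving rotation of the secret key.
\[
\underbrace{s' = s + \delta}_{\text{shift: } \|s'\| \le \|s\| + \|\delta\| \text{, so norm grows with each update}}
\qquad\text{vs.}\qquad
\underbrace{s' = P\,s}_{\text{rotation: } \|s'\| = \|s\| \text{ for orthogonal } P}
\]
```

Because rotations keep the secret's norm fixed, correctness never degrades across updates, which is what removes the super-polynomial modulus or update-bound requirements.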
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
We propose a new method to construct a public-key encryption scheme where one can homomorphically transform a ciphertext encrypted under a key $k$ into a ciphertext under $P(k)$, for any polynomial-time RAM program $P$ with runtime $T$ and memory $M$. Combined with other lattice techniques, this allows us to construct:
1) Succinct randomised encodings for RAM programs, with succinct encoder complexity and rate-1 encodings.
2) Laconic function evaluation for RAM programs, with succinct encoder runtime and rate-1 encodings.
3) Key-policy attribute-based encryption for RAM programs, with short ciphertexts. The same scheme can be converted to the registered setting, obtaining a CRS of size linear in the number of parties.
All of our schemes rely on the hardness of the \emph{decomposed learning with errors} (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data-flow, control-flow or variable-timing operation. Thanks to its simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022).
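The discipline is easiest to see on a toy selection function; the snippet below (ours, for illustration only; real CT code lives in C or assembly, where timing is controllable) contrasts a secret-dependent branch with the branch-free selection a CT checker would accept.

```python
# The CT discipline in miniature: branching on a secret (not CT) versus a
# branch-free, mask-based selection (CT at the source level).
MASK64 = (1 << 64) - 1

def select_leaky(secret_bit: int, a: int, b: int) -> int:
    return a if secret_bit else b          # secret-dependent control flow

def select_ct(secret_bit: int, a: int, b: int) -> int:
    mask = (-secret_bit) & MASK64          # all-ones if bit is 1, else zero
    return (a & mask) | (b & ~mask & MASK64)
```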
In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which have been a long-standing issue in CT analysis.
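To make the "thin layer on top of def-use analysis" idea concrete, here is a toy secret-taint pass over an invented three-address IR; the IR, rules, and API are illustrative, not CT-LLVM's actual implementation.

```python
# Toy def-use core of a CT checker: propagate a "secret" taint through an
# invented IR and flag secret-dependent branches and memory addresses.
def check_ct(instructions, secret_inputs):
    tainted = set(secret_inputs)
    violations = []
    for ins in instructions:               # e.g. ("add", "t", "key", "x")
        op, dst, *srcs = ins
        if op == "branch" and any(s in tainted for s in srcs):
            violations.append(("secret control flow", ins))
        elif op == "load" and srcs[0] in tainted:
            violations.append(("secret-dependent address", ins))
        elif any(s in tainted for s in srcs):
            tainted.add(dst)               # def-use edge propagates taint
    return violations

prog = [("add", "t", "key", "x"), ("branch", "L1", "t")]
print(check_ct(prog, {"key"}))             # flags the branch on t
```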
Efficient IP Masking with Generic Security Guarantees under Minimum Assumptions
Leakage-resilient secret sharing schemes are a fundamental building block for secure computation in the presence of leakage. As a result, there is a strong interest in building secret sharing schemes that combine resilience in practical leakage scenarios with potential for efficient computation. In this work, we revisit the inner-product framework, where a secret $x$ is encoded by two vectors $L$ and $R$ such that their inner product is equal to $x$. So far, the most efficient inner-product masking schemes (in which $L$ is public but random) are provably secure with the same security notions (e.g., in the abstract probing model) as additive, Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their advantage in terms of theoretical security guarantees remains unclear, also raising doubts about their practical relevance. We address this question by showing the leakage resilience of inner-product masking schemes in the bounded leakage threat model, which captures well implementation contexts where the physical noise is negligible. In this threat model, we show that if a bounded number of bits is leaked from the shares of an encoding over an $n$-bit field, then with high probability over the choice of $L$, the scheme is leakage resilient. Furthermore, this result holds without assuming independent leakage from the shares, which may be challenging to enforce in practice. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients can yield leakage resilience even when one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds for additive masking, where the past literature exhibited only a constant exponential rate.
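For readers unfamiliar with the framework, the encoding can be summarized as follows (symbols ours):

```latex
% Inner-product encoding of a secret x by a public-but-random vector L and
% a vector R of secret shares:
\[
x \;=\; \langle L, R \rangle \;=\; \sum_{i=1}^{d} L_i \, R_i ,
\]
```

so the adversary's leakage applies to the shares $R_i$, while the random public vector $L$ randomizes how those shares combine into the secret.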
Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of the tensor product of two vectors $x$ and $y$, exchanging two simultaneous messages. Crucially, the sizes of both messages and of the CRS are independent of the dimensions of $x$ and $y$.
We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as:
1) Adaptively secure laconic function evaluation for bounded-depth functions, with succinct communication.
2) A trapdoor hash function for all functions.
3) An (optimally) succinct homomorphic secret sharing for all functions.
4) A rate-1 laconic oblivious transfer for batch messages, which is best possible.
In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
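The functionality introduced above can be stated in one line (notation ours): the two parties end up with additive shares of the outer product,

```latex
\[
A + B \;=\; x \otimes y, \qquad\text{i.e.}\qquad A_{ij} + B_{ij} \;=\; x_i \, y_j ,
\]
```

while the two exchanged messages and the CRS stay independent of the dimensions of $x$ and $y$, which is where the succinctness lies.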
Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups, without the aggregated key leaking anything about the corresponding group. This enables key reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures from benefiting from those privacy features.
In this work, we revisit the privacy framework from Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation. As our second contribution, we study two concrete multi-signatures, and give dedicated transforms that take advantage of the underlying structures for improved efficiency. The first one is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
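For context, the deterministic key aggregation that such privacy lifts must randomize looks as follows in MuSig2 (standard notation; hash labels vary by presentation):

```latex
% MuSig2 key aggregation for signer set L = {pk_1, ..., pk_n}:
\[
a_i = H_{\mathrm{agg}}(L, pk_i), \qquad
\widetilde{pk} \;=\; \prod_{i=1}^{n} pk_i^{\,a_i},
\]
```

a deterministic function of the signer set, which is precisely what lets an observer link an aggregated key back to its members.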
How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a $t$-out-of-$n$ secret sharing of an NP statement is a reduction that maps an instance-witness pair to $n$ instance-witness pairs, such that any subset of $t-1$ of them reveals no information about the original witness, while any subset of $t$ allows full recovery of the original witness (see the schematic mapping after the contribution list below). Although the notion was formulated for general thresholds, the only existing construction (due to GJS) applies solely to one specific parameter setting and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions.
1. **Definition.** We introduce a refined definition of information-theoretically secure NPSS. This notion can be seen as a cryptographic variant of standard NP-reductions and can be compiled into the GJS definition using any one-way function.
2. **Construction.** We construct information-theoretic $t$-out-of-$n$ NPSS for any values of $t \le n$, with complexity polynomial in $n$. Along the way, we present a new notion of secure multiparty computation that may be of independent interest.
3. **Applications.** Our NPSS framework enables the *non-interactive combination* of $n$ instances of zero-knowledge proofs, where only $s$ of them are sound and only $z$ are zero-knowledge, provided that $s + z > n$. Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions:
(i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed.
(ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting.
(iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Aharonov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
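As promised above, the NPSS syntax can be pictured as the mapping (notation ours):

```latex
\[
(x, w) \;\longmapsto\; (x_1, w_1), \ldots, (x_n, w_n),
\]
```

where any $t-1$ of the derived instance-witness pairs reveal nothing about $w$, while any $t$ of them suffice to recover it.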
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
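To illustrate the PSI application, here is the textbook way an OPRF yields unbalanced PSI, with `oprf_eval` standing in (hypothetically) for Leap's OT/OLE-based oblivious evaluation; this is a generic sketch, not the paper's protocol.

```python
# Unbalanced PSI from any OPRF: the server publishes PRF digests of its set;
# the client obliviously learns F_k(x) for its items and intersects locally.
def psi_server_publish(server_set, prf, k):
    # One-time, offline preprocessing over the large server set.
    return {prf(k, y) for y in server_set}

def psi_client(client_set, server_digests, oprf_eval):
    # oprf_eval(x) runs the oblivious protocol: the client never sees k,
    # and the server never sees x.
    return {x for x in client_set if oprf_eval(x) in server_digests}
```

Because the server's digests are computed once offline, the online cost scales with the small client set, which is what makes the unbalanced setting practical.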
Towards Leakage-Resilient Ratcheted Key Exchange
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure.
In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
Private Multi-Party Neural Network Training over $\mathbb{Z}_{2^k}$ via Galois Rings
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use is $\mathrm{GR}(2^k, d)$, which contains $\mathbb{Z}_{2^k}$ as a subring. This algebraic structure enables us to benefit from the Shamir scheme while performing modular reductions only modulo a power of two instead of a prime number, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into the different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing field-based implementations in this domain.
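Concretely, the ambient ring can be described as follows (standard notation; parameters $k$ and $d$ as in the construction):

```latex
% Galois ring of characteristic 2^k and degree d:
\[
\mathrm{GR}(2^k, d) \;=\; \mathbb{Z}_{2^k}[X] \,/\, (f(X)),
\]
```

where $f$ is a monic degree-$d$ polynomial that is irreducible modulo 2; the constant polynomials recover $\mathbb{Z}_{2^k}$ as a subring, and the $d$ coefficients of a single ring element are the slots into which different training samples are packed.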
(Multi-Input) FE for Randomized Functionalities, Revisited
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE, enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors, a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively captures security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample: an rFE scheme that meets IND security under the previous definition but explicitly fails in a natural setting, a failure that our enhanced definition precludes. Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom functions (PRFs), public-key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
Towards a White-Box Secure Fiat-Shamir Transformation
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
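For reference, the transformation in its simplest three-move form (standard notation): a public-coin transcript $(a, c, z)$ on instance $x$ is made non-interactive by deriving the challenge from the history,

```latex
\[
c \;=\; H(x, a), \qquad \pi \;=\; (a, z),
\]
```

so that soundness now rests entirely on the concrete hash $H$, which is exactly the gap studied here.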
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano {\it et al.} (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano {\it et al.}, Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano {\it et al.} (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi {\it et al.}'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Bulletproofs for R1CS: Bridging the Completeness-Soundness Gap and a ZK Extension
Bulletproofs, introduced by Bünz, Bootle, Boneh, Poelstra, Wuille and Maxwell (IEEE S&P, 2018), is a highly efficient non-interactive argument system that does not require a trusted setup. Recently, Bünz (PhD Thesis, 2023) extended Bulletproofs to support arguments for rank-1 constraint satisfaction (R1CS) systems, a widely-used representation for arithmetic satisfiability problems. Although the argument system constructed by Bünz preserves the attractive properties of Bulletproofs, it presents a gap between its completeness and soundness guarantees: The system is complete for a restricted set of instances, but sound only for a significantly broader set. Although argument systems for such gap relations nevertheless provide clear and concrete guarantees, the gaps they introduce may lead to various inconsistencies or undesirable gaps within proofs of security, especially when used as building blocks within larger systems.
In this work we show that the argument system presented by Bünz can be extended to bridge the gap between its completeness and soundness, and to additionally provide honest-verifier zero-knowledge. For the extended argument system, we introduce a refined R1CS relation that captures the precise set of instances for which both completeness and soundness hold without resorting to a gap formulation. The extended argument system preserves the performance guarantees of the argument system presented by Bünz, and yields a non-interactive argument system using the Fiat-Shamir transform.
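For reference, an R1CS instance is given by matrices $A, B, C$, and a vector $z$ (public input plus witness) satisfies the system iff

```latex
\[
(A z) \circ (B z) \;=\; C z,
\]
```

where $\circ$ denotes the entry-wise (Hadamard) product; the refined relation mentioned above pins down exactly which such instances the argument is both complete and sound for.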
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
A Garbling Scheme is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexities of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08), which allows XOR gates in a garbled circuit to require no representation, and therefore no communication.
Since then, several works have designed and analysed the security of schemes that adopt the Free-XOR optimisation. In particular: (1) Applebaum (JoC'16) proved that this can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme, and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves it open to analyze whether they satisfy a stronger security notion -- adaptive security -- in the plain model.
In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al. (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.
On Quantum Money and Evasive Obfuscation
We show a black box barrier against constructing public key quantum money from obfuscation for evasive functions. As current post-quantum obfuscators based on standard assumptions are all evasive, this shows a fundamental barrier to achieving public key quantum money from standard tools. Our impossibility applies to black box schemes where (1) obfuscation queries made by the mint are classical, and (2) the verifier only makes (possibly quantum) evaluation queries, but no obfuscation queries. This class seems to capture any natural method of using obfuscation to build quantum money.
Fine-Grained Complexity in a World without Cryptography
The study of fine-grained cryptography has proliferated in recent years due to its allure of potentially relying on weaker assumptions compared to standard cryptography. As fine-grained cryptography only requires polynomial gaps between the adversary and honest parties, it seems plausible to build primitives relying upon popular hardness assumptions about problems in $\mathsf{P}$, such as $k$-SUM or Zero-$k$-Clique. The ultimate hope is that fine-grained cryptography could still be viable even if all current cryptographic assumptions are false, for example if $\mathsf{P} = \mathsf{NP}$ or if we live in Pessiland, where one-way functions do not exist.
In our work, we consider whether this approach is viable by studying fine-grained complexity when all standard cryptographic assumptions are false. As our main result, we show that many popular fine-grained complexity problems are easy to solve in the average case when one-way functions do not exist. In other words, many candidate hardness assumptions for building fine-grained cryptography are no longer options in Pessiland. As an example, we prove that the average-case $k$-SUM and Zero-$k$-Clique conjectures are false for sufficiently large constant $k$ when no one-way functions exist. The average-case Zero-$k$-Clique assumption was used to build fine-grained key exchange by Lavigne et al. [CRYPTO'19]. One can also view the contrapositive of our result as an explicit construction of one-way functions assuming average-case hardness of $k$-SUM or Zero-$k$-Clique for all constant $k$.
We also show that barriers for reductions in fine-grained complexity may be explained by problems in cryptography. First, we show that finding faster algorithms for computing discrete logarithms is closely tied to designing average-case reductions between $k$-SUM and its extension to cyclic groups; in particular, such a reduction could potentially lead to breakthrough algorithms for the discrete logarithm, factoring, RSA and quadratic residuosity problems. Finally, we show that discrete logarithms with preprocessing may be reduced to an indexing (preprocessing) variant of these problems, and we present faster average-case algorithms for such indexing variants.
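For concreteness, the flagship problem in this area, in its standard formulation: average-case $k$-SUM provides $m$ elements drawn uniformly from $\mathbb{Z}_N$ and asks for

```latex
\[
\{i_1, \ldots, i_k\} \quad\text{with}\quad
x_{i_1} + x_{i_2} + \cdots + x_{i_k} \equiv 0 \pmod{N},
\]
```

conjectured to require roughly $m^{\lceil k/2 \rceil}$ time in the natural parameter regime.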
A Generic Approach to Adaptively-Secure Broadcast Encryption in the Plain Model
Broadcast encryption allows a user to encrypt a message to $N$ recipients with a ciphertext whose size scales sublinearly with $N$. The natural security notion for broadcast encryption is adaptive security, which allows an adversary to choose the set of recipients after seeing the public parameters. Achieving adaptive security in broadcast encryption is challenging, and in the plain model, the primary technique is the celebrated dual-systems approach, which can be implemented over groups with bilinear maps. Unfortunately, it has been challenging to replicate the dual-systems approach in other settings (e.g., with lattices or witness encryption). Moreover, even if we focus on pairing-based constructions, the dual-systems framework critically relies on decisional (and source-group) assumptions. We do not have constructions of adaptively-secure broadcast encryption from search (or target-group) assumptions in the plain model.
Gentry and Waters (EUROCRYPT 2009) described a compiler that takes any semi-statically-secure broadcast encryption scheme and transforms it into an adaptively-secure scheme in the random oracle model. While semi-static security is easier to achieve and constructions are known from witness encryption as well as search (and target-group) assumptions on pairing groups, the transformed scheme relies on random oracles. In this work, we show that using publicly-sampleable projective PRGs, we can achieve adaptive security in the plain model. We then show how to build publicly-sampleable projective PRGs from many standard number-theoretic assumptions (e.g., CDH, LWE, RSA).
Our compiler yields the first adaptively-secure broadcast encryption scheme from search assumptions as well as the first such scheme from witness encryption in the plain model. We also obtain the first adaptively-secure pairing-based scheme in the plain model with $O_\lambda(N)$-size public keys and $O_\lambda(1)$-size ciphertexts (where $O_\lambda(\cdot)$ suppresses polynomial factors in the security parameter $\lambda$). Previous adaptively-secure pairing-based schemes in the plain model with $O_\lambda(1)$-size ciphertexts required $O_\lambda(N^2)$-size public keys.
Partial and Fully Homomorphic Matching of IP Addresses Against Blacklists for Threat Analysis
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An IP address is a typical identifier which is used to map a network address to a person. Thus, in applications which are privacy-aware, we may aim to hide the IP address while still determining whether it appears on a blacklist. One solution is to use homomorphic encryption to match an encrypted version of an IP address against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library \cite{OpenFHE} to implement this matching under the BFV homomorphic encryption scheme. To assess the performance impact of BFV, we compare this implementation against the partially homomorphic methods of Paillier, Damgård-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main finding is that the BFV method compares favourably against the partially homomorphic methods in most cases.
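As a toy illustration of why even additively homomorphic schemes suffice for this task (parameters deliberately insecure; the real evaluation uses proper key sizes and the libraries listed above): the server can transform an encrypted address $E(x)$ into $E(r(x-y))$ for each blacklisted value $y$, which decrypts to 0 exactly on a match and to a random value otherwise.

```python
# Blacklist matching with textbook Paillier (toy primes, NOT secure).
import secrets

p, q = 104729, 104723                 # toy primes; real keys are 1536+ bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)               # simplified private key (g = n + 1)

def enc(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n * pow(lam, -1, n)) % n

def match_token(c_x: int, y: int) -> int:
    # E(x) * E(-y) = E(x - y); raising to a random r gives E(r*(x - y)).
    r = secrets.randbelow(n - 2) + 2
    return pow((c_x * enc(n - y)) % n2, r, n2)

c = enc(0x0A000001)                   # encrypted IPv4 address 10.0.0.1
assert dec(match_token(c, 0x0A000001)) == 0   # on the blacklist
assert dec(match_token(c, 0x0A000002)) != 0   # mismatch: random nonzero value
```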
Differential Cryptanalysis of the Reduced Pointer Authentication Code Function used in Arm’s FEAT_PACQARMA3 Feature
The Pointer Authentication Code (PAC) feature in the Arm architecture is used to enforce the Code Flow Integrity (CFI) of running programs. It does so by generating a short MAC - called the PAC - of the return address and some additional context information upon function entry, and checking it upon exit. An attacker who wants to overwrite the stack with manipulated addresses now faces an additional hurdle, as they have to guess, forge, or reuse valid PAC values. PAC is deployed on billions of devices as a first line of defense to harden system software and complex programs against software exploitation.
The original version of the feature uses a 12-round version of the QARMA block cipher. The output is then truncated to between 3 and 32 bits, in order to be inserted into unused bits of 64-bit pointers. A later revision of the specification allows the use of an 8-round version of QARMA. This reduction may introduce vulnerabilities such as high-probability distinguishers, potentially enabling key-recovery attacks. The present paper explores this avenue.
A cryptanalysis of the PAC computation function entails restricting the inputs to valid virtual addresses, meaning that certain most significant bits are fixed to zero, and considering only the truncated output. Within these constraints, we present practical attacks on various configurations. These attacks, while not presenting an immediate threat to the PAC mechanism, show that some versions of the feature do miss the security targets set for the original function. This offers new insights into the practical security of constructing MACs from truncated block ciphers, expanding on the mostly theoretical understanding of creating PRFs from truncated PRPs. We note that the results do not affect the security of PAC when used with the recommended number of rounds for general-purpose applications.
Committing Authenticated Encryption: Generic Transforms with Hash Functions
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve the so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme to a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed.
First, we construct three basic transforms that combine AE with a single hash function. They all guarantee strong committing security, and can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms: an MRAE-preserving transform that adds committing security to an MR-secure AE scheme, and an MRAE-lifting transform, the first generic transform that can directly elevate a basic AE scheme to one with both committing and MR security; the latter, moreover, also works with arbitrary privacy-only encryption schemes. Both feature a simple design and ensure strong security.
For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. The results show that one of our basic transforms achieves the highest practical efficiency among basic transforms, and that our MRAE-preserving transform likewise excels in its class. Our MRAE-lifting transform demonstrates performance comparable to the MRAE-preserving ones and surpasses them beyond a moderate message length; for longer messages, it even outperforms the standardized, non-committing benchmark.
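As a black-box illustration of the hash-based idea (a simplified, folklore-style variant that commits only to the key/nonce/AAD context; the paper's transforms differ in detail and achieve stronger notions):

```python
# Minimal committing wrapper around a standard AEAD: attach a hash
# commitment to the encryption context and verify it before decrypting.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def c_encrypt(key: bytes, nonce: bytes, aad: bytes, msg: bytes):
    commit = hashlib.sha256(b"commit" + key + nonce + aad).digest()
    return AESGCM(key).encrypt(nonce, msg, aad), commit

def c_decrypt(key: bytes, nonce: bytes, aad: bytes, ct: bytes, commit: bytes):
    if hashlib.sha256(b"commit" + key + nonce + aad).digest() != commit:
        raise ValueError("context does not match commitment")
    return AESGCM(key).decrypt(nonce, ct, aad)
```

The appeal of this style of transform is visible even in the toy version: it treats the AE scheme entirely as a black box and adds only one hash call per operation.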
Single Trace Side-Channel Vulnerabilities Discovery Using Statistical Leakage Simulator
This paper presents a novel single-trace side-channel attack on FALCON, a lattice-based post-quantum digital signature protocol recently approved for standardization by NIST. We target the discrete Gaussian sampling operation within the FALCON key generation scheme and need only a single power measurement trace to succeed. Notably, the negation implemented via the 'shift right by 63 bits' operation (on 64-bit values) leaks critical information about the '-1' vs. '0' assignments to intermediate coefficients. These leaks enable full recovery of the generated secret keys. The proposed attack is simulated on the ELMO simulator running both the reference and an optimized software implementation from FALCON's NIST Round 3 package. Statistical analysis with 20k tests reveals a full key-recovery success rate of 100% for FALCON-512. This work highlights the vulnerability of current software solutions to single-trace attacks and underscores the urgent need to develop single-trace-resilient software for embedded systems in the pre-silicon phase.
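The leakiness of the targeted operation is easy to see numerically: an arithmetic right shift by 63 maps a 64-bit value to either all-zeros or all-ones, the two Hamming-weight extremes that a power model separates most cleanly. A small illustration (ours, not the attack code):

```python
def asr63_u64(x: int) -> int:
    """Arithmetic right shift by 63 of a 64-bit value, unsigned view."""
    return (1 << 64) - 1 if (x >> 63) & 1 else 0

hamming_weight = lambda v: bin(v).count("1")
print(hamming_weight(asr63_u64(1 << 63)), hamming_weight(asr63_u64(1)))  # 64 0
```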
Traceable Verifiable Secret Sharing and Applications
A secret sharing scheme allows a trusted dealer to divide a secret among multiple parties so that a sufficient number of them can recover the secret, while a smaller group cannot. In CRYPTO'21, Goyal, Song, and Srinivasan introduced Traceable Secret Sharing (TSS), which enhances traditional secret sharing by enabling the identification of parties involved in secret reconstruction, deterring malicious behavior like selling shares. Recently, Boneh, Partap, and Rotem (CRYPTO'24) presented two more efficient TSS schemes. However, these existing TSS schemes assume that all distributed shares are valid and shareholders act honestly during the secret reconstruction phase. In this paper, we introduce Traceable Verifiable Secret Sharing (TVSS), a concept designed to ensure both traceability and verifiability in the face of malicious actions by either the dealer or shareholders. We propose a general strategy for transforming a Shamir-based, computationally secure Verifiable Secret Sharing (VSS) scheme into an efficient TVSS scheme. Building on this strategy, we construct two practical TVSS schemes in the honest-majority setting, based on well-known VSS schemes proposed by Feldman (SFCS'87) and Pedersen (CRYPTO'91). Our proposed TVSS schemes retain public shareholder indexes, enhancing flexibility in designing accountable threshold protocols (e.g., Distributed Key Generation protocols) using TVSS. Compared to the original VSS schemes, the individual share size in the new TVSS schemes increases by only a single field element and is just two or three times the size of the main secret.
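For reference, the Feldman verification that such schemes build on (standard notation): the dealer publishes commitments $C_j = g^{a_j}$ to the coefficients of the degree-$(t-1)$ sharing polynomial $f(X) = a_0 + a_1 X + \cdots + a_{t-1} X^{t-1}$, and shareholder $i$ checks its share $f(i)$ against

```latex
\[
g^{f(i)} \;\stackrel{?}{=}\; \prod_{j=0}^{t-1} C_j^{\,i^{\,j}} .
\]
```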
Motivated by a recent study on Accountable Threshold Cryptosystems (ATCs) by Boneh, Partap, and Rotem (CRYPTO'24), and by leveraging our proposed Feldman-based TVSS scheme, we also introduce an efficient ATC based on ElGamal cryptosystem. This new ATC enables a tracer to uniquely identify the parties involved in the decryption process while introducing minimal overhead to existing actively secure (and/or robust) threshold protocols built on the ElGamal cryptosystem.
Minicrypt PIR for Big Batches
We present PIR protocols for the offline/online two-server setting, where a client wants to privately retrieve a batch of entries from a database of size $n$ by interacting with an online server. Ahead of time, the client has interacted with a separate offline server, which is assumed not to collude with the online one. We present simple protocols based on one-way functions that substantially improve on the query complexity or runtime of existing works. Concrete instantiations of our general paradigm lead to batch PIR protocols with the following parameters:
- A protocol for large batches in which the total work of the client and of both servers, and the total communication, amortize to a low per-query cost across the batch.
- A more balanced protocol for smaller batches, in which the work is distributed more evenly between the client and the two servers, with correspondingly modest total communication.
Our protocols have immediate applications such as Private Set Intersection (PSI) in the two-server setting with preprocessing and unbalanced set sizes.
Uncategorized
We introduce a hash-based succinct argument for integer arithmetic. Its goal is to provide a practically efficient scheme that bypasses the arithmetization overheads that many succinct arguments present. These overheads can be of orders of magnitude in many applications. By enabling proofs of statements over the integers, we are able to arithmetize many operations of interest with almost no overhead. This includes modular operations involving any moduli, not necessarily prime, possibly with multiple different moduli in the same statement. In particular, the scheme allows proving statements over the ring $\mathbb{Z}_N$ for arbitrary $N$. Importantly, and departing from prior work, our schemes are purely code- and hash-based, and do not require hidden order groups. In its final form, the scheme operates similarly to other hash-based schemes that use Brakedown as their PCS, while benefiting from the arithmetization perks brought by working over $\mathbb{Z}$ (and $\mathbb{Q}$) natively.
At its core is a succinct argument for proving relations over the rational numbers $\mathbb{Q}$, even though, when applied to integer statements, an honest prover and verifier will only operate with small integers. The argument consists of two main components: 1) a framework for proving algebraic statements over the rationals by reducing modulo a randomly chosen prime $q$, followed by running a suitable PIOP over $\mathbb{F}_q$ (this is similar to the approach taken in prior works, with the difference that we use localization at the chosen prime to enable the modular projection); and 2) a Brakedown-type polynomial commitment scheme built from an IOP of proximity to the integers, a novel primitive that we introduce. The latter primitive guarantees that a prover is using a polynomial with coefficients close to being integral. With these two primitives in place, one can use a lookup argument over the rationals to ensure that the witness contains only integer elements.
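The algebraic device behind the projection step can be made precise as follows (standard notation): for a prime $q$, the localization of $\mathbb{Z}$ at $q$ is the subring of rationals with $q$-free denominators,

```latex
\[
\mathbb{Z}_{(q)} \;=\; \Bigl\{ \tfrac{a}{b} \in \mathbb{Q} \;:\; q \nmid b \Bigr\},
\qquad
\tfrac{a}{b} \;\longmapsto\; a \, b^{-1} \bmod q,
\]
```

on which reduction modulo $q$ is a well-defined ring homomorphism onto $\mathbb{F}_q$, so rational-valued provers can still be projected into the prime field.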
Cryptanalysis of Full SCARF
SCARF is a tweakable block cipher dedicated to cache address randomization, proposed at the USENIX Security conference. It has a 10-bit block, 48-bit tweak, and 240-bit key. SCARF is aggressively optimized to meet the harsh latency constraints of cache address randomization, and uses a dedicated model for its security claim.
The full version of SCARF has 8 rounds, and its designers claim security up to a stated bound on the number of queries and computations. In this work we present a distinguisher against 6-round SCARF under the collision model, and a key-recovery attack against the full 8-round SCARF under the encryption-decryption model, both with time and query complexities below the designers' claimed bounds. As part of the attack, we present a novel method to compute the minimal number of right pairs following a differential characteristic when the input pairs are restricted to a subspace of the domain of the primitive.
Towards Optimally Secure Deterministic Authenticated Encryption Schemes
The public comments received during the review process for NIST SP 800-38A pointed out two important issues that most companies face: (1) the limited security that AES can provide due to its 128-bit block size and (2) the problem of nonce misuse in practice. In this paper, we provide an alternative solution to these problems by introducing two optimally secure deterministic authenticated encryption (DAE) schemes, denoted DENC1 and DENC2 respectively. We show that our proposed constructions improve the state of the art in terms of security and efficiency. Specifically, DENC1 achieves a robust security level, while DENC2 attains a near-optimal security level, with bounds expressed in terms of the total number of processed blocks, the maximum number of blocks per query, and a user-defined parameter closely related to the rate of the construction. Our research centers on the development of two IV-based encryption schemes, referred to as IV1 and IV2, which offer correspondingly strong security levels. Notably, both of our DAE proposals are nearly rate-1/2 constructions. In terms of efficiency, our proposals compare favorably with state-of-the-art AE modes on contemporary microprocessors.