NIST plans to coordinate with other standards organizations, such as the IETF, to develop standards for stateful hash-based signatures. As stateful hash-based signatures do not meet the API requested for signatures, this standardization effort will be a separate process from the one outlined in the call for proposals. It is expected that NIST will only approve a stateful hash-based signature standard for use in a limited range of signature applications, such as code signing, where most implementations will be able to securely deal with the requirement to keep state.
This process shares many features with NIST competitions and is modeled after the successes we have had with competitions in the past. There are, however, some requirements, demanded by the current research climate, that significantly distinguish this process from a competition.
First, our handling of the applicants does not coincide with a competition as specified in NISTIR 7977, nor does this process correspond to multiple parallel competitions. There will not be an appropriate or directly analogous concept of “winners” and “losers.” Our intention is to select a couple of options for more immediate standardization, as well as to eliminate some submissions as unsuitable. There will likely be some submissions that we neither select for standardization nor eliminate; these may be excellent options for a specific application that we are not yet ready, or do not currently have the resources, to standardize. In such a circumstance, we would communicate with the submitters to allow these schemes to remain under a public license for study and practice and to remain under consideration for future standardization. A competition makes no provision for handling such an applicant.
NIST will not remove a scheme from consideration merely because it was submitted with incorrectly analyzed parameters. Depending on how far off the estimate was, and how unanticipated the attack, NIST may take it as a sign that the algorithm is not mature enough, which could lead NIST to remove the scheme from consideration. However, assessments of an algorithm’s maturity will not be based primarily on security strength categories. Rather, the point of the categories is to compare like with like when doing performance comparisons and to make it easier to plan crypto transitions in the future. NIST will respond to attacks that contradict the claimed security strength category, but do not bring the maturity of the scheme into question, by moving the parameter set down to a lower category and potentially encouraging the submitter to provide a higher-security parameter set.
For any scheme selected for standardization, NIST hopes to select parameter sets from those offered by the submitter. If the submitted parameter sets fail to meet NIST’s needs, for whatever reason, NIST hopes to work with the submitter to provide parameter sets that do. NIST may also choose not to standardize some of the submitted parameter sets. NIST’s reasons for doing so could include insufficient security, unacceptable performance, or simply too many parameter sets.
NIST has numerous reasons for specifying a categorical post-quantum security hierarchy in the Call for Proposals for post-quantum standards. The primary purpose is to facilitate the comparison of submissions achieving specific benchmark security levels so that an honest assessment can be made. Because the science in this area is not yet fully developed, it is possible and appropriate for these benchmarks to be refined in response to future advances in theory. It is not NIST’s intent to review submissions unfairly based on analyses of parameter sets that later prove to be inconsequential.
It is, however, NIST’s present belief that all five of the security strength categories provide sufficient security to allow for standardization. More precisely, NIST would describe security strengths 4 and 5 as “likely excessive,” 2 and 3 as “probably secure for the foreseeable future,” and security strength 1 as “likely secure for the foreseeable future, unless quantum computers improve faster than is anticipated.” The only security considerations that are likely to lead NIST to decline to standardize a parameter set for a scheme NIST has selected are:
NIST may also decline to standardize parameters which have unacceptable performance. If NIST feels the higher security strength categories cannot be met with acceptable performance, NIST may encourage the submitter to provide parameters with intermediate security between security strengths 2 and 3, or between 3 and 4.
Finally, NIST may pare down the range of options offered by the submitter for selecting parameters. Flexibility is generally a good thing, but it must be weighed against the complexity of implementing and testing all available options.
To convert a public key encryption scheme to a KEM, NIST will construct the encapsulate function by generating a random key and encrypting it. The key generation and decapsulation functions of the KEM will be the same as the key generation and decryption functions of the original public key encryption scheme. To convert a KEM to a public key encryption scheme, NIST will construct the encryption function by appending, to the KEM ciphertext, an AES-GCM ciphertext of the plaintext message with a randomly generated IV. The AES key will be the symmetric key output by the encapsulate function. (The key generation function will be identical to that of the original KEM, and the decryption function will be constructed by decapsulation followed by AES decryption.)
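As a rough illustration, a minimal C sketch of the PKE-to-KEM direction might look as follows. The pke_* interface, the return conventions, and the 32-byte shared-secret length are illustrative assumptions, not part of the API from the Call for Proposals:

void randombytes(unsigned char *x, unsigned long long xlen); // SUPERCOP RNG

// Assumed PKE interface (illustrative names; 0 return means success)
int pke_encrypt(unsigned char *ct, const unsigned char *m,
                unsigned long long mlen, const unsigned char *pk);
int pke_decrypt(unsigned char *m, const unsigned char *ct, const unsigned char *sk);

#define SS_BYTES 32 // illustrative shared-secret length

// Encapsulate: draw a random symmetric key and encrypt it under pk
int kem_encaps(unsigned char *ct, unsigned char *ss, const unsigned char *pk)
{
    randombytes(ss, SS_BYTES);
    return pke_encrypt(ct, ss, SS_BYTES, pk);
}

// Decapsulate: recover the symmetric key by decrypting the ciphertext
int kem_decaps(unsigned char *ss, const unsigned char *ct, const unsigned char *sk)
{
    return pke_decrypt(ss, ct, sk);
}

The reverse (KEM-to-PKE) direction would call kem_encaps and then append to the KEM ciphertext an AES-GCM encryption of the message under ss with a fresh random IV, as described above.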
NIST understands that real-world cryptographic algorithm implementations will necessarily contain platform-specific optimizations. The two required implementations in the submission package are primarily intended to facilitate future analysis and development throughout the evaluation period, and as such, we require that both be written in a cross-platform manner. Additionally, the two required implementations need not be distinct. If a submitter does not see value in a separate cross-platform optimized implementation, they may simply note in their submission that the reference implementation is also the cross-platform optimized implementation.
Regarding the ANSI C requirement, submitters should note that the key requirements are that the submission code be written in a cross-platform manner and that the submission contain build scripts or instructions for version 6.4.0 of the GNU Compiler Collection (GCC). In particular, mandatory implementations written in C99 or C11 are both perfectly fine, as long as any necessary compiler directives are included as part of the build script(s).
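For example, the entire build step for a C11 implementation might be a single compiler invocation with an explicit standard flag (the file names here are illustrative):

> gcc -std=c11 -O2 -o kat_test rng.c kem.c kat_test.c -lcrypto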
Additionally, implementations that use NTL (see Question and Answer 16 for details on the use of third-party open source libraries) may, of necessity, be written in C++. However, to ease porting to a pure C implementation by swapping NTL for C-based libraries, we ask that the original and any new code in the submission be as ANSI C-like as possible, using C++ functionality only where it is required in order to interact with NTL.
Submitters may not write their own new and original assembly code (including inline assembly) or compiler intrinsics for either the mandatory reference implementation or the mandatory optimized implementation, but they may use third-party open-source libraries that themselves rely on assembly optimizations, subject to the constraints described in Question and Answer 16.
During the course of the evaluation process, NIST will be looking at performance data for the best available implementations on a variety of platforms. As such, we strongly encourage submitters to include optimized versions for major platforms, particularly x64 and 32-bit and 64-bit ARM architectures. However, we have made such submissions optional so as not to discourage submissions from teams that may have very strong algorithmic candidates but little experience in the area of platform optimization. For further questions on platform-specific optimizations and the role they will play in NIST’s evaluation process, see Question and Answer 4.
The reference platform was defined in order to provide a common and ubiquitous platform to verify the execution of the code provided in the submissions.
The reference platform should be treated as a single core machine, but if an algorithm can make particular use of multiple cores or vector instructions, submitters are encouraged to provide additional implementations for these platforms.
In our evaluation process, NIST plans to include performance metrics from a variety of platforms, including: 64-bit “desktop/server class,” 32-bit “mobile class,” microcontrollers (32-, 16-, and where possible, 8-bit), as well as hardware platforms (e.g., FPGA). Submitters are strongly encouraged to provide additional implementations for these platforms, but to avoid discouraging submissions from teams with strong candidate algorithms but little experience in the area of platform-specific optimizations, NIST is making them optional as part of the submission itself.
NIST expects that as the evaluation process moves beyond the first round, we will see the wider cryptographic community (in particular, those skilled in platform-specific optimizations) provide optimized implementations of most submissions for a wide variety of platforms, as was the case in the SHA-3 competition. NIST plans to use such third-party optimized implementations and third-party benchmarking tools such as eBACS/SUPERCOP and Open Quantum Safe as part of its evaluation process.
The function randombytes() will be available to the submitters. This is a function from the SUPERCOP test environment and should be used to generate seed values for an algorithm.
For functional and timing tests, a deterministic generator is used inside randombytes() to produce the seed values. If security testing is being done, simply substitute a call to a true hardware RBG inside randombytes().
The function prototype for randombytes() is:
// The xlen parameter is in bytes
void randombytes(unsigned char *x,unsigned long long xlen)
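For example, a key-generation routine might draw its seed as follows (the 32-byte length is illustrative):

unsigned char seed[32];
randombytes(seed, sizeof seed); // deterministic in KAT builds, true RBG in security testing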
In both the mandatory reference implementation and the mandatory optimized implementation, submissions may use NTL Version 10.5.0 (http://www.shoup.net/ntl/download.html) and GMP Version 6.1.2 (https://gmplib.org).
For the 2nd Round, submissions may use the newest version of OpenSSL 1.1.1 (available at https://www.openssl.org/source/) and libkeccak from XKCP (the Keccak team’s updated library, available at https://github.com/XKCP/XKCP). We strongly encourage submitters to make use of these newer versions when testing any tweaks to be made for the 2nd Round, rather than the previously cited older versions, OpenSSL 1.1.0f (https://www.openssl.org/source) and the Keccak Code Package (https://github.com/gvanas/KeccakCodePackage).
Submitters may assume that these libraries are installed on the reference platform and do not need to provide them along with their submissions.
If a submitter wishes to use a third-party open source library other than the ones specified above, they must send a request to NIST at pqc-comments@nist.gov by September 1st, 2017, with the name of the library and a link to the primary website hosting it from which it may be downloaded. NIST will either approve or deny this request within two weeks of receiving it. Should a request be approved, the library will be added to the above list of acceptable third-party open source libraries provided in this FAQ.
All submission packages using third-party open source code should contain build scripts which will allow for seamless “one-stop” building of the submissions.
For example, on a Linux platform, building the submission should require no more work than running the standard
> ./configure [--options]
> make
> make install
succession of commands. In particular, the build process should be able to find the versions of these libraries specified above that will be pre-installed on the reference platform.
Separate build scripts should be included for the reference Windows and Linux platforms; these should work using the GNU Compiler Collection version 6.4.0 and related tools, along with any platform-specific commands required.
In addition, as part of the written submission, the submitter shall describe in their own words the functionalities provided by any algorithms from third-party open-source libraries that are used in the implementations.
An inventor is whoever conceived of the algorithm described and implemented in the submission. If more than one person conceived of the algorithm, the algorithm will have been invented by co-inventors.
An owner of the algorithm is the inventor unless and until the inventor assigns (i.e., transfers ownership) the algorithm to another. In the case of co-inventors, the co-inventors jointly own the algorithm, and each co-inventor may assign their individual ownership interest in the algorithm. The algorithm may be claimed in a patent or a patent application, and the patent or patent application can be assigned to transfer ownership of the patent or patent application to an assignee.
The implementation of the algorithm may be subject to copyright, and the owner of the copyright in the implementation of the algorithm is initially the author or authors of the implementation. Authors of a joint work (i.e., an implementation made by more than one author) are co-owners of copyright in the implementation. The employer or other person for whom the implementation was prepared may own all rights comprised in the copyright of the implementation.
Each submission must include signed statements by the submitter(s), patent (and patent application) owner(s), as well as the reference/optimized implementations’ owner(s). If an algorithm or implementation is put into the public domain, we still require the signed statements from the submitter(s) and owner(s) exactly as specified in Section 2.D of the Call for Proposals.
NIST has completed the reviews for all the submissions received by the preliminary deadline and has sent comments back to each submission team. We note that the reviews were to check whether submissions were “complete and proper,” meeting both our submission requirements and minimal acceptance criteria. They were NOT a review of the technical merits. Teams whose submissions had missing elements will need to revise their submissions and resubmit by the final deadline of November 30, 2017.
After going through this process, we have some suggestions we think will help submitters to make their submissions complete and proper, as well as help NIST with a more efficient review process following the final deadline.
With regard to the implementations and KATs:
Thank you, and let us know if you have any questions. Specific questions on a submission should be sent to us at pqc-comments@nist.gov. General questions may be posted on the forum or sent to us at the email address just given.
Our reason for primarily considering attacks involving fewer than 2^64 decryption/signature queries is that the number of queries is controlled by the amount of work the honest party is willing to do, which one would expect to be significantly less than the amount of work an attacker is willing to do. Any attack involving more queries than this looks more like a denial-of-service attack than an impersonation or key recovery attack. Furthermore, effectively protecting against online attacks requiring more than 2^64 queries using NIST standards would require additional protections which are outside the scope of the present post-quantum standardization effort, most notably the development of a block cipher with a block size larger than 128 bits. This may be something NIST pursues in the future, but we do not feel it is necessary for addressing the imminent threat of quantum computers. That said, as noted in the proposed call for algorithms, NIST is open to considering attacks involving more queries, and would certainly prefer algorithms that do not fail catastrophically if the attacker exceeds 2^64 queries.
NIST did not provide an explicit limit on the rate of decryption/decapsulation failure. In cases where a scheme is targeting chosen ciphertext security, decryption/decapsulation failures may pose a security threat. A failure rate sufficiently high to violate the claimed security of a scheme is, of course, unacceptable. If, on the other hand, there is a strong argument that decryption/decapsulation failures do not pose a security threat, then the decryption/decapsulation failure rate becomes simply one among many performance considerations. NIST does not wish, at this time, to prejudge what performance considerations are important, and will therefore leave it up to submitters to provide performance characteristics that they feel will be most useful for the applications they think best fit their schemes.
Classical cryptanalysis is still valuable for a number of reasons. First, classical computers are not going away. For algorithms not subject to dramatic quantum attacks, such as those enabled by Shor’s algorithm, NIST believes that classical measures of security will continue to be highly relevant. Currently envisioned quantum computing technologies would be orders of magnitude slower and more energy intensive than today’s classical computing technology when performing the same sorts of operations. In addition, practical attacks typically must be run in parallel on large clusters of machines, which diminishes the speedup that can be achieved using Grover’s algorithm. When all of these considerations are taken into account, it becomes quite likely that variants of Grover’s algorithm will provide no advantage to an adversary wishing to perform a cryptanalytic attack that can be completed in a matter of years, or even decades. As most quantum attacks on proposed post-quantum cryptosystems have involved some variant of Grover’s algorithm, it may be the case that the best attack in practice will simply be the classical attack.
Also, the science involved in assessing classical security is better developed than that for assessing post-quantum security, and there is a larger community of researchers who can contribute to these investigations, increasing our confidence in the security of the proposed cryptosystems. Finally, classical cryptanalysis can improve our understanding of the mathematical structures underlying these cryptosystems, which is also the basis for quantum cryptanalysis.
Even assuming no disparity in the cost of quantum and classical gates, NIST estimates that the assumption holds as long as the adversary is depth-limited to fewer than about 2^87 logical quantum gates. This is quite near the limit of what NIST considers to be a plausible technology for the foreseeable future.
Security strengths 1, 3, and 5 are defined in such a way that they are likely to be met by any scheme that:
Security strengths 1, 3, and 5 are unlikely to be met by any scheme with fewer than 128, 192, or 256 bits of classical security, respectively. This is not, however, an explicit requirement: at least for categories 3 and 5, NIST is open to classifying parameters with less classical security in these categories, given a sufficiently compelling argument demonstrating that:
Security strengths 2 and 4 are defined in such a way that they offer the maximum possible quantum security strength that can be offered by a scheme that only has a classical security strength of 128 or 192 bits, respectively. They will generally be easier to meet with parameter sets offering more classical security. A detailed quantum security analysis will be required to determine whether a parameter set meets these security strengths (unless the parameter set also meets the criteria for the next higher security strength).
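For reference, the Call for Proposals defines the five categories by comparison to the computational resources needed to attack NIST's existing symmetric primitives:

Category 1: at least as hard to break as AES-128 (exhaustive key search)
Category 2: at least as hard to break as SHA-256 (collision search)
Category 3: at least as hard to break as AES-192 (exhaustive key search)
Category 4: at least as hard to break as SHA-384 (collision search)
Category 5: at least as hard to break as AES-256 (exhaustive key search)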
NIST is looking primarily to replace quantum-vulnerable schemes with functionalities that are widely used, have widely agreed-upon security and correctness definitions in the academic literature, and for which there appear to be a range of promising approaches for designing a post-quantum replacement. NIST considered a number of other functionalities but did not provide explicit support for them, since it did not feel they met the above criteria as well as encryption, KEMs, and signatures. In many cases, NIST expects that schemes providing some of these functionalities may be submitted as a special case or an extension of one of the functionalities we explicitly asked for. In such a case, any additional functionality would be considered an advantage, as noted in Section 4.C.1 of our Call for Proposals. Two particular functionalities NIST considered were authenticated key exchange (AKE) and a drop-in replacement for Diffie-Hellman.
Diffie-Hellman is an extremely widely used primitive and has a number of potentially useful special features, such as asynchronous key exchange and key-use profiles ranging from static-static to ephemeral-ephemeral. However, NIST believes that in its most widely used applications, such as those requiring forward secrecy, Diffie-Hellman can be replaced by any secure KEM with an efficient key generation algorithm. The additional features of Diffie-Hellman may be useful in some applications, but there is no widely accepted security definition of which NIST is aware that captures everything one might want from a Diffie-Hellman replacement. Additionally, some plausibly important security properties of Diffie-Hellman, such as secure static-static key exchange, appear difficult to meet in the post-quantum setting. NIST therefore recommends that schemes sharing some or all of the desirable features of Diffie-Hellman be submitted as KEMs, while documenting any additional functionality.
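As a schematic C sketch of this replacement in a forward-secret handshake (the KEM interface and all buffer sizes are illustrative): the initiator's ephemeral public key plays the role of a DH share, and the responder's encapsulation plays the role of the reply.

// Assumed KEM interface and sizes (illustrative)
int kem_keygen(unsigned char *pk, unsigned char *sk);
int kem_encaps(unsigned char *ct, unsigned char *ss, const unsigned char *pk);
int kem_decaps(unsigned char *ss, const unsigned char *ct, const unsigned char *sk);
#define PK_BYTES 1024
#define SK_BYTES 2048
#define CT_BYTES 1088
#define SS_BYTES 32

void handshake_demo(void)
{
    unsigned char pk[PK_BYTES], sk[SK_BYTES], ct[CT_BYTES];
    unsigned char ss_i[SS_BYTES], ss_r[SS_BYTES];

    kem_keygen(pk, sk);       // initiator: fresh ephemeral key pair; send pk
    kem_encaps(ct, ss_r, pk); // responder: derive ss_r; send ct back
    kem_decaps(ss_i, ct, sk); // initiator: derive ss_i, equal to ss_r
    // Erasing (pk, sk) after the session gives forward secrecy: a later
    // compromise of long-term keys reveals nothing about ss.
}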
AKE is also a widely used functionality. However, NIST would consider it a protocol rather than a scheme. This is an important distinction, because most widely used AKE protocols are constructed by combining simpler primitives, like digital signature, public key encryption, and KEM schemes. NIST wants to leave open the possibility that standards for these schemes may come from different submitters. Additionally, the security definitions for AKE are significantly more complicated and contentious than those for the functionalities NIST is explicitly asking for in its call for proposals. NIST recognizes that there are some AKE functionalities, in particular implicitly authenticated key exchange (IAKE), that cannot easily be constructed from simpler components. While it is less natural to treat IAKE schemes as an extension of the KEM framework than it is for Diffie-Hellman-like primitives, NIST does believe that it can be done in most cases. For example, a significant part of the functionality of a 2-message IAKE protocol could be demonstrated by treating the initiator’s public authentication key as part of a KEM public key, and the responder’s public authentication key as part of the KEM ciphertext.
While NIST will permit submitters to choose any NIST approved cryptographic algorithm for their submission if they feel it is necessary to achieve the desired security and performance, a number of potential submitters have asked us to offer default options for common symmetric cryptographic primitives. As such, here are our suggestions:
Also recall, from the CFP: "If the scheme uses a cryptographic primitive that has not been approved by NIST, the submitter shall provide an explanation for why a NIST-approved primitive would not be suitable."
A hybrid key-establishment mode is defined here to be a key establishment scheme that is a combination of two or more components that are themselves cryptographic key-establishment schemes. The hybrid key-establishment scheme becomes a composite of these component schemes.
NIST currently allows a generic composite key-establishment technique described in SP 800-56C. Assume that the value Z is a shared secret that was generated as specified by SP 800-56A or 800-56B and that a shared secret T is generated or distributed through other schemes. The value Z’=Z||T may then be treated as a shared secret and any of the key derivation methods given in SP 800-56C may be applied to Z’ to derive secret keying material.
NIST intends to update SP 800-56C so that the value Z may be generated as specified by any current and future NIST key-establishment standards. This will include SP 800-56A, SP 800-56B, FIPS 203, and any additional post-quantum key-establishment standards. The desired property of hybrid techniques is that derived keys remain secure if at least one of the component schemes is secure. Security properties can be complex, and for composite key establishment schemes they will need to be analyzed on a case-by-case basis with the requirements of the application in mind. NIST intends to offer guidance on various key combiners in the forthcoming SP 800-227, Recommendations for Key Encapsulation Mechanisms.
Additionally, the output of the key-establishment scheme specified in FIPS 203 is a shared secret key that does not require further key derivation. NIST emphasizes that any shared secret key generated as specified in FIPS 203 may be used as the value Z in the generic composite mode described in SP 800-56C. The same properties will apply to any future FIPS that standardizes KEMs.
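As a minimal sketch of the composite derivation, here using the HKDF interface available in OpenSSL 1.1.1 (HKDF is one of several key derivation methods permitted by SP 800-56C; the SHA-256 choice, buffer bound, and info label are all illustrative):

#include <string.h>
#include <openssl/evp.h>
#include <openssl/kdf.h>

// Derive keying material from Z' = Z || T, where Z comes from a
// NIST-approved scheme and T from a second (e.g., post-quantum) scheme.
int hybrid_kdf(unsigned char *out, size_t outlen,
               const unsigned char *Z, size_t Zlen,
               const unsigned char *T, size_t Tlen)
{
    unsigned char Zprime[512];
    EVP_PKEY_CTX *ctx;
    int ok = 0;

    if (Zlen + Tlen > sizeof Zprime) return 0;
    memcpy(Zprime, Z, Zlen);
    memcpy(Zprime + Zlen, T, Tlen); // Z' = Z || T

    ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
    if (ctx != NULL &&
        EVP_PKEY_derive_init(ctx) > 0 &&
        EVP_PKEY_CTX_set_hkdf_md(ctx, EVP_sha256()) > 0 &&
        EVP_PKEY_CTX_set1_hkdf_key(ctx, Zprime, (int)(Zlen + Tlen)) > 0 &&
        EVP_PKEY_CTX_add1_hkdf_info(ctx, (unsigned char *)"hybrid", 6) > 0 &&
        EVP_PKEY_derive(ctx, out, &outlen) > 0)
        ok = 1;
    EVP_PKEY_CTX_free(ctx);
    return ok;
}

The derived keys then remain secure as long as at least one of Z and T came from a secure component scheme, which is the property motivating the hybrid mode.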
NIST leaves the decision to each specific application as to whether it can afford the implementation cost, performance reduction, and engineering complexity (including proper and independent security reviews) of a hybrid mode for key establishment. To assist external parties that desire such a mechanism, NIST will accommodate the use of a hybrid key-establishment mode in FIPS 140 validation when suitably combined with a NIST-approved scheme.
Common techniques for hybrid digital signatures involve the use of dual signatures, which consist of two or more signatures on a common message. A dual signature may also be known as a hybrid signature or composite signature. Verification of a dual signature requires all of the component signatures to be successfully verified, such as by treating the outputs of two or more component signature algorithms as a single logical composite signature.
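In code, verifying a dual signature is simply the conjunction of the component verifications. A minimal sketch, with illustrative function names and a 1-on-success convention:

#include <stddef.h>

// Assumed component verifiers (illustrative), each returning 1 on success
int verify_classical(const unsigned char *msg, size_t mlen,
                     const unsigned char *sig1, const unsigned char *pk1);
int verify_pq(const unsigned char *msg, size_t mlen,
              const unsigned char *sig2, const unsigned char *pk2);

// A dual signature verifies only if every component signature verifies
int verify_dual(const unsigned char *msg, size_t mlen,
                const unsigned char *sig1, const unsigned char *pk1,
                const unsigned char *sig2, const unsigned char *pk2)
{
    return verify_classical(msg, mlen, sig1, pk1)
        && verify_pq(msg, mlen, sig2, pk2);
}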
Dual signatures could be used to sign user data (e.g., a document or e-mail) or digital certificates that contain references to user key pairs within a PKI. Existing NIST standards and guidelines accommodate their use provided that at least one component digital signature algorithm is NIST-approved.
NIST leaves the decision to each specific application as to whether it can afford the implementation cost, performance reduction, and engineering complexity (including proper and independent security reviews) of using dual signatures. To assist external parties that desire such a mechanism, NIST will accommodate the use of dual signatures in FIPS 140 validation when suitably combined with a NIST-approved scheme.
NIST leaves the decision to each specific application as to whether it can afford the implementation cost, performance reduction, and engineering complexity (including proper and independent security review) of a hybrid mode for key establishment or the use of dual signatures. Future experience will help to decide on whether they can be a useful long-term solution. To assist external parties who desire such a mechanism, NIST will accommodate the use of a hybrid key-establishment mode and dual signatures in FIPS 140 validation when suitably combined with a NIST-approved scheme.
Grover’s algorithm allows a quantum computer to perform a brute-force key search using quadratically fewer steps than would be required classically. Taken at face value, this suggests that an attacker with access to a quantum computer might be able to attack a symmetric cipher with a key up to twice as long as could be attacked by an attacker with access only to classical computers. However, there are a number of mitigating factors suggesting that Grover’s algorithm will not speed up brute-force key search as dramatically as one might suspect from this result. First of all, quantum computing hardware will likely be more expensive to build and use than classical hardware. Additionally, it was proven by Zalka in 1997 that to obtain the full quadratic speedup, all the steps of Grover’s algorithm must be performed in series. In the real world, where attacks on cryptography use massively parallel processing, the advantage of Grover’s algorithm will be smaller.
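To make the parallelism point concrete: classically, searching a keyspace of size N on P machines takes about N/P evaluations per machine, whereas parallel Grover needs about sqrt(N/P) iterations per machine, so the quantum speedup factor is only about sqrt(N/P) and shrinks as P grows. Equivalently, to finish within a circuit-depth budget D, Grover's algorithm must be split across roughly N/D^2 machines performing about N/D cipher evaluations in total, only a factor of D fewer than a classical exhaustive search.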
Taking these mitigating factors into account, it is quite likely that Grover’s algorithm will provide little or no advantage in attacking AES, and AES 128 will remain secure for decades to come. Furthermore, even if quantum computers turn out to be much less expensive than anticipated, the known difficulty of parallelizing Grover’s algorithm suggests that both AES 192 and AES 256 will still be safe for a very long time. This of course assumes that no new cryptographic weaknesses, either with respect to classical or quantum cryptanalysis, are found in AES.
Based on this understanding, current applications can continue to use AES with key sizes of 128, 192, or 256 bits. NIST will issue guidance regarding transitions of symmetric-key algorithms and hash functions to protect against threats from quantum computers when a transition need can be foreseen. Until then, users should follow the recommendations and guidelines NIST has already issued. In particular, anything with less than 112 bits of classical security should not be used.