Dear EUGridPMA and IGTF members,

The 27th EUGridPMA meeting is now over, and I would like to take this
opportunity to again thank Roberto Cecchini, GARR and CASPUR for hosting us
in Rome and providing excellent amenities for all participants.
Consolidated minutes of the meeting will be provided later on, but
meanwhile I would like to share with you a few of the highlights of the
meeting and draw your attention to a few action items that will concern you
all. Please send corrections and omissions if you spot them, since these
notes have been taken from my own scribbles and memory.

We also sorely missed Milan, both during the meeting and in the trust
building afterwards: the void that Milan left behind cannot be filled, and
I for one felt very inadequate trying to discuss topics where Milan would
surely have had a decisive impact. And for those who have not yet seen it:
http://milansovacesnet.wordpress.com/

Subsequent meetings will be in Kyiv, UA, from 13-15 May, and Bucharest, RO,
from 9-11 September 2013. I would also like to draw your attention to the
OGF meeting in Charlottesville (VA, USA) on March 11-13, as well as the
APGridPMA meeting and ISGC on March 18-22. The next IGTF All Hands meeting
is foreseen for October or November 2013 in La Plata, Argentina.

Best regards, DavidG.

Subjects discussed and listed below
-----------------------------------
- SHA-2 time line
- CA readiness for SHA-2 and 2048+ bit keys
- OCSP support
- MICS Profile and Kantara LoA-2
- Towards an LoA 1.x "light-weight" AP
- Security Token Service profile
- Private Key Protection Guidelines
- IGTF Test Suite
- On on-line CAs and FIPS 140-2 level-3 HSMs
- Public Relations
- IPv6 readiness
- Updates Self-audit
- Risk Assessment Team

SHA-2 time line
---------------
The TAGPMA proposed a new time line in a bulleted format, but apart from
changing the September 2014 sunset date into October 2014, the proposal is
materially the same as the EUGridPMA consensus from Lyon.
We therefore agree with the TAGPMA text, which now becomes the new SHA-2
public time line.

Given that users and developers should be able to prepare for SHA-2 EECs,
one could consider issuing two certificates for each request submitted,
based on the same key pair. Although there are clear use cases, this is
expected to lead to significant user confusion, which is why we do NOT
recommend issuing two certs for each request (but it is allowed). When
issuing them anyway, the following should be kept in mind:
- both certificates need to be revoked if the key pair is compromised
- two different serial numbers MUST be used
- it is more useful for intermediate certs if both are available, again
  with different serial numbers (for the transition period)

Most of the infrastructures are not actively testing for SHA-2 compliance,
e.g. including tests of Unicore by PRACE and jGlobus2 in dCache. For EGI
the move to SHA-2 readiness is foreseen to be monitored by Nagios via the
EGI CSIRT probes, but this is not operational yet (>Q2 2013). Sufficient
SHA-2 issuing CAs should also be available to support testing in PRACE for
server certs.

CA readiness for SHA-2 and 2048+ bit keys
-----------------------------------------
Virtually all CAs have 2048-bit key issuing capability, and are actively
enforcing the minimum limit. There are some residual issues with CAs using
legacy VBScript when supporting Internet Explorer generated requests. In a
few cases, the CA cannot do 2048-bit yet, or it is not relevant (where the
CA key length is historical).

For SHA-2 there are still a few CAs not ready, whereas a few others can do
either SHA-2 OR SHA-1 but not both (so they need to wait for software to
support SHA-2). All APGridPMA CAs are expected to be ready by March 2013.
It should be kept in mind that old Aladdin eTokens (32k) do not support
SHA-2.
OCSP support
------------
Support for OCSP should help in many deployment scenarios, including more
dynamic hierarchies, but also help in getting more timely revocation
information to the RPs. The way to deploy and use OCSP effectively is not
always clear, though, and there are a lot of choices to make, some of
which are good and many others bad. During the last IGTF meeting two
documents were proposed:
- a profile and guidance of RFC 5019 light-weight OCSP for CAs (those CAs
  already deploying full RFC 2560 OCSP are not the audience)
- a 'best practices' guide for RPs and their software developers in using
  OCSP information.

The two documents were drafted during the meeting, with the results now
available on the EUGridPMA Wiki, and reflect the conclusions reached:
- https://wiki.eugridpma.org/Main/OCSPProfileForIGTFCAs
- https://wiki.eugridpma.org/Main/OCSPDeploymentGuidelines

Especially when pre-computing responses, one should keep in mind that many
of these responses are never used, especially when the number of
certificates is high. And the algorithm to determine the 'not yet present'
responses, when calculating in advance, should be clever enough not to
generate the same response again. It is perfectly feasible to pre-calculate
up to 30 days ahead with a 6-hour validity per response, as long as one
does not recalculate everything all the time. Look at the exact Wiki text
for the consensus reached (and improve where you can). We specifically
invite the remaining OCSP experts to have a look!

Pre-computation of responses in OpenSSL is possible with e.g.

  openssl ocsp -index index.txt -respout ocsp.resp -serial \
    -signer ca-cert.pem -signkey ca-key.pem -sha1 -validity_period 86400

(thanks Eygene!) And with RFC 6277 in mind the response itself should be
signable with SHA-256.
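The "clever enough not to generate the same response again" pre-computation
could be scheduled roughly as follows (an illustrative sketch only, not
taken from the Wiki text: the function, cache layout, and names are
invented here; the 6-hour and 30-day figures are the ones discussed above,
and the actual response signing would of course use the openssl command or
an HSM-backed responder):

```python
import time

SLOT = 6 * 3600              # 6-hour validity per pre-computed response
HORIZON = 30 * 24 * 3600     # pre-compute up to 30 days ahead

def missing_slots(serials, cache, now=None):
    """Return the (serial, slot_start) pairs that still need a response.

    `cache` is a set of (serial, slot_start) pairs already generated;
    only not-yet-present responses are listed, so repeated runs do not
    regenerate existing work.
    """
    now = int(time.time()) if now is None else now
    first = now - now % SLOT                   # align to a slot boundary
    todo = []
    for serial in serials:
        for start in range(first, now + HORIZON, SLOT):
            if (serial, start) not in cache:
                todo.append((serial, start))
                cache.add((serial, start))
    return todo
```

Run periodically, each invocation only signs the few slots that have newly
entered the 30-day window, instead of recomputing all 120 responses per
certificate.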
MICS Profile and Kantara LoA-2
------------------------------
Following on from the rough consensus in Lyon, we propose to add the
following to section 3.1 of the MICS profile:

  "A primary authentication system that complies with the Kantara Identity
  Assurance Accreditation and Approval Program at at least assurance level
  2 as defined in the Kantara IAF-1400 Service Assessment Criteria
  qualifies as adequate for the identity vetting requirements of this
  Authentication Profile."

directly after the first paragraph of this section (following "... and
sufficient information should be recorded to contact the registered
owner.")

This clarifies the "should" mentioned several times in the second line of
that paragraph, as we have now interpreted it several times in this
particular way (TCS eScience Personal, CILogon Silver). It also helps some
of the IdPs in TCS to understand that their processes are indeed in line
with the requirements (although the TCS CP/CPS itself was clear enough on
this).

We ask the APGridPMA and TAGPMA to consider this (and the TAGPMA to revise
the document if there is consensus).

Towards an LoA 1.x "light-weight" AP
------------------------------------
With some of the relying parties and communities performing a large amount
of identity data collection themselves, the need for defining an
authentication profile which does not duplicate the collection of that
vetting data is becoming more and more relevant. Although the core
requirements on the CA stay the same (secured infrastructure, global
uniqueness of naming across the IGTF, no re-use of issued identifiers),
some other elements of traceability and vetting can be supplied by other
registration processes run by the relying parties themselves (such as in
PRACE, where the sites do the registration of users anyway, and associate
a cert with each vetted account).
The general requirements stay the same (for OSG a survey indicated these
were the (real) name of the user, an email contact, and the community to
which the user is affiliated), but the traceability can also be done by
the RP. Some resource providers indicated that tracing must be possible
without the involvement of the user community. For the CA, it would remain
important to:
- ensure uniqueness and non-reuse
- operate a secure infrastructure by the CA itself (to prevent false
  issuance or hijacking)
- ensure some contact data which is verified on issuance (but this might
  be just an email address embedded in every cert issued under such a
  profile)

When implementing a new AP, the new CAs accredited under this AP must:
- use a distinct trust anchor for issuing these certs (since selecting on
  policy OIDs is not supported by the software)
- include a different OID anyway (since the software might be fixed)

and we should write a new AP. The assurance level of this new AP is
different (lower) than the classic AP, but should certainly be higher than
LoA-1 (it is better than OpenID). A properly secured CA like CILogon Basic
should fit in here [we would have to check if the non-reuse can be
guaranteed by the participating IdPs].

The profile thus defines a "Light-weight Identity Vetting Environment with
Secured Infrastructure" (LiveAP - SecuredInfra). There is interest in such
a profile from PRACE, EGI, and others, and it should appeal to OSG as
well. People interested in writing a profile include DavidK, JulesW,
DavidG, and Jens, and should include Jim Basney! A Wiki page has been
created to draft the profile, but is largely empty for the moment. The
basis may be the MICS profile - and contributions are welcome.

Security Token Service profile
------------------------------
The development of an STS profile (the generalized SLCS) ties in nicely
with the FIM4R work, but would certainly benefit from having an actual
prototype to drive the contents of the profile.
The WLCG prototype for FIM4R which could serve as the driver is expected
in a few months. A new STS profile could be general enough to have the
X.509 bits optional in the text, although we realize that today the X.509
output is important also for the FIM4R prototype work, and for integration
with current (grid) software. Next steps include:
- writing the Wiki page with the core STS AP text, based on the SLCS (by
  DavidK, DavidG, Jens, inviting RomainW)
- attending the FIM4R workshop (at PSI near Zurich) if possible and you
  are not going to the APGridPMA

There are several other work items that could support linkage between the
current authorities (who predominantly produce their assertions as X.509).
For example, the most important "asset" of many authorities is their
Registration Authority network, and the reach into a widely dispersed user
community that might not yet be served by their home organisations.
Opportunities include:
- using the RA network across countries to populate a single 'catch-all'
  certificate mint, to serve those use cases that cannot yet be dealt
  with by TCS (because the country or institute does not sign up to TCS,
  but the researchers need certs anyway, or because you need robot
  certs). But of course the cases served by TCS are already taken care of
  well!
- using the RA network in a country to establish a high-LoA IdP that
  could incorporate 'lone researchers' into national federations. The
  organisation hosting the CA and running the RA network is left with the
  policy and federation burden, though -- and you need a national
  federation.
- proxying PKI-based IGTF credentials into national (eduGAIN?)
  federations, akin to the way IGTF users were proxied into eduroam
  through GRNET a few years back. It is technically trivial, policy-wise
  very hard, and the downside is that users would still need to handle
  PKI.

In all cases (also STS) the key asset of unique, global, non-re-used
naming is essential to preserve!
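The non-re-use rule is easy to state but has to hold across every profile:
a name, once bound to one subscriber, may never be issued to anyone else,
even after the original certificate expires or is revoked. A minimal
sketch of such a check (illustrative only; the class, method, and storage
are invented for this example, and a real CA would keep this state in its
issuance database):

```python
class NameRegistry:
    """Sketch of the 'unique, never re-used subject name' invariant."""

    def __init__(self):
        self._bound = {}  # subject DN -> subscriber identifier

    def assert_unique(self, subject_dn, subscriber):
        """Bind subject_dn to subscriber, or fail if it belongs to another.

        Re-issuance to the *same* subscriber is fine; handing the DN to a
        different subscriber is refused forever, even after expiry.
        """
        owner = self._bound.setdefault(subject_dn, subscriber)
        if owner != subscriber:
            raise ValueError("DN already bound to another subscriber: "
                             + subject_dn)
        return subject_dn
```

For example, re-issuing "/DC=org/CN=Alice" to the original subscriber
succeeds on every call, while any attempt to bind that DN to a different
subscriber raises an error.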
Lastly: if your country does not have a proper Identity Federation yet, it
should be strongly encouraged to create one -- and make sure it is well
linked with the REFEDS community AND addresses the FIM4R requirements.

Private Key Protection Guidelines
---------------------------------
The re-factoring of the Private Key Protection guidelines was completed,
and the new text is now available at and

The structure is different, but the currently allowed use cases are
covered by the new text, and are clearer. The companion document on how
to secure key stores (be they run by NGIs, CAs, home organisations, or
anyone) should also be written. We expect the key stores to be run
securely! The new text is for consideration by the other PMAs to replace
the current PKP Guidelines v1.2.

IGTF Test Suite
---------------
The IGTF distribution is sufficiently large and complex that software can
break in the production environment in ways that could not easily be
foreseen. A realistic test is needed for the software, but the simple
solution of giving each software group a certificate from each of the
IGTF CAs, or of asking each CA to participate in the testing of software,
is neither feasible nor desirable. A specific test suite that very
closely resembles the IGTF distribution would be good to have. This is
more than the NIST test suite for certs, which only has the public bits,
but should include the capability of using EECs for actual software
tests, with certs from each of the fake CAs.

There are a couple of requirements on such a test suite:
- the actual keys used should be similar (length, algorithm) to the real
  ones, but different from the actual CA keys in order not to cause
  incidents
- the naming for the test CAs should be very close (same properties,
  character sets), but again different to avoid trademark issues and
  confusion with the real ones. For example, a simple solution is to
  ROT-13 all printable ASCII characters in the name.
- the number of CAs in the suite, their hierarchies, extensions, and
  naming, should be a copy of the real IGTF certs
- the EECs should demonstrate the breadth of options used by the CAs, and
  include edge cases (the 'unusual' but allowed formats issued by each of
  the CAs, such as parentheses, dashes, colons)
- the test suite must include valid, expired, revoked, and not-yet-valid
  certs (and define the expected result), and various CRL issuance modes
- OCSP can be faked and pre-computed if needed (OpenSSL has an RFC 2560
  daemon)
- there should be some deliberate failure modes included

The 'easiest' way is to actually make a script that converts an IGTF
distribution into a 'fake' one, generating sets of CAs on the fly with
the extensions taken from the 'real' ones. The names can be ROT-13'ed as
needed, and the key pairs generated on demand so that they are different.
Whereas the set of IGTF CAs is well known (it's the distribution itself),
the sample EECs are not readily available for many CAs. These samples and
the extremities need to be supplied to the test suite developers.

Actions decided:
- each CA to send a URL to or a sample of end-entity certs, at least a
  personal cert and a server cert, and depending on the CA also a robot
  cert and/or a 'service' ("blah/") cert
- each CA to indicate some edge cases for their CA (use of colons,
  dashes, weird characters) and the parameter space of the subject naming
- known troublesome certs should be included

The requirements on the content of the test suite, and the test cases,
are developed on the Wiki, which now has some samples and conditions.
There is no actual effort identified yet, but anyone with some (summer
student) effort is welcome to contribute to a test framework build
(script). Interested people include JensJ and DavidG. Paul Miller (DESY,
dCache team) ought to be interested, but cannot get actual certs for all
CAs (nor a test person with each CA). Sorry!
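The ROT-13 name mangling mentioned above could look like this (a sketch
under one possible reading of "ROT-13 all printable ASCII characters":
rotating by 13 within the 94 printable characters 0x21-0x7E, leaving
spaces and anything non-ASCII alone; the real conversion script might
equally choose ROT-47 or a letters-only ROT13):

```python
def rot13_printable(name):
    """Rotate each printable ASCII char 13 positions within 0x21-0x7E.

    Produces names with the same length and character-set properties as
    the original, but guaranteed different from the real CA names.
    Note: unlike classic ROT13, rotating by 13 in a 94-character range
    is NOT its own inverse.
    """
    out = []
    for ch in name:
        code = ord(ch)
        if 0x21 <= code <= 0x7E:                   # printable, non-space
            out.append(chr(0x21 + (code - 0x21 + 13) % 94))
        else:                                      # space, control, non-ASCII
            out.append(ch)
    return "".join(out)
```

Applied to a subject DN this keeps the overall shape (separators become
other punctuation, spaces stay) while making collisions with real IGTF
names impossible.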
On on-line CAs and FIPS 140-2 level-3 HSMs
------------------------------------------
Inspired by the idea of NIIF for building an on-line CA based on a
low-power Raspberry Pi and a level-3 HSM in USB format, a discussion
emerged on whether it is possible to have enough compensatory controls
around a level-2 HSM to make the risk comparable to the current off-line
CA or to level-3. It is not entirely clear which elements of level-3
improve the risk resilience when compared to an off-line classic CA. For
example: embedding the entire CA signing system, including a level-2 HSM
(like a SafeNet eToken), inside a physical safe and bolting it to a
cabinet, with only a network cable and power supply going through a hole
in the safe: is that not enough compensatory control to make it
comparable to a classic CA (which is usually a plain USB stick in a
similar safe)? Provided of course the software is also good (the signing
system uses polling only, a good issuance log, monitoring) AND the actual
activation of the level-2 HSM requires manual intervention by an
operator?

Level 3 mainly provides tamper-proof-ness (auto-destruct), but the
off-line classic CA's USB key does not have that either. And the SLCS
profile already has an L2 HSM requirement. The key should probably be
generated in a described and audited key generation ceremony and then
imported into the L2 HSM, so that there is no vendor lock-in (and still a
good backup, of course in a different safe!)

With the new ultra-low-power systems like the RPi, the generated heat can
actually dissipate through the safe, so it becomes really feasible. And
it would help a lot of smaller CAs in improving usability and service
level for subscribers (and still have a very good secure setup). We think
it is worthwhile doing the risk analysis compared to the off-line classic
CA, and if the risk is comparable, allowing the use of L2 HSMs or eTokens
in conjunction with compensatory controls like a safe.
We propose to discuss this with the TAGPMA and APGridPMA and have a
discussion at the IGTF All Hands in La Plata (October 2013). Interested
people include Ursula, Roberto, and DavidG.

Public Relations
----------------
For the world at large our work and progress are not necessarily clear.
The article in ResearchMedia is not enough. In particular the wider scope
and new direction should be emphasised. Papers (academic and PR) are
encouraged so that we more clearly demonstrate usefulness and relevance
-- and thus may get fewer questions on this.

IPv6 readiness
--------------
Accessibility over IPv6 is most relevant for CRL downloads and OCSP, but
most of the CA host sites have only a testing IPv6 capability or none at
all. The /review pages have a link to the tracking status page run by
particle.cz, and it is again recommended to have IPv6 capability by
January 2014 (and to still keep a v4 address)!

Updates Self-audit
------------------
During the meeting SlovakGrid, MD-Grid (RENAM), NIIF Hungarnet, and the
NorduGrid CA all presented audits and updates of their CA. Although new
self-audits are progressing well, it is equally important that
self-audits that have been performed are followed up, both by the CA and
by the reviewers! The status is maintained on the internal pages at
https://www.eugridpma.org/review/selfaudit-status

Tidbits resulting from the discussion include:
- a hint not to change the subscriber subject names wantonly, since
  although some authorization mechanisms can deal gracefully with
  changes, others like persistent storage cannot
- in storing information to prevent re-issuing of a subject DN to a
  different person, the personal data collected should be minimally
  invasive in view of the new EU regulation

The attendance and presentation status has been updated appropriately.

Risk Assessment Team
--------------------
In operational security it is well recognised that trust groups improve
security by the early exchange of information.
Although traditionally we have been well aware of new security risks
facing identity management (crypto issues, PKI, and relevant SAML
vulnerabilities), we are not aware of any specific trust groups that can
give us advance warning of issues. If anyone is in contact with such
trust (or crypto) groups, relevant information on vulnerabilities from
them is very welcome!

A more proactive approach was proposed. Jens does not have the time to
put much effort into this, so others should step in. Also a secretary
would be welcome... Ursula will be coordinating the communications
challenges to the CAs and the internal (encrypted) mailing list.