Dear EUGridPMA and IGTF members,

The 42nd EUGridPMA meeting is now over, and I would like to take this opportunity to thank again Jana and Jan - and the excellent staff at CESNET - for hosting us in (an again snow-free) Prague and keeping us rolling. I would like to share with you a few of the highlights of the meeting. Send corrections and omissions if you spot them, since these notes have been taken from my own scribbles and memory:
  https://eugridpma.org/meetings/2018-01/eugridpma42-notes-davidg.pdf
Slides with background on the meeting are attached to the agenda at
  http://www.eugridpma.org/agenda/42
More extensive notes, kindly taken by David Kelsey, are available on the web site as well. There is also an audio recording (a novum) of the OIDCFed session, but that recording misses the first 15 minutes (and thus the introduction).

Specific thanks go to the AARC project and participants, who have contributed their effort and ideas to the OIDC federation use case analysis, and whose joint session on Policy Harmonisation (Policy Development Kit, Training, and the AUP analysis) formed a great contribution to the IGTF schedule.

Subsequent meetings will be:
** 43rd EUGridPMA meeting, May 23-25 2018, Karlsruhe DE, kindly hosted by Ingrid and Melanie at the KIT North Campus and the Steinbuch Centre for Computing --- note this meeting will start after lunch on *Wednesday* **
* and we're looking for a host for the 44th meeting in September 2018, for which proposals are very welcome (thanks for considering it)

and of course our affiliated meetings:
* FIM4R and TIIME, Vienna 5 - 8 Feb 2018
* WISE, Abingdon 26 - 28 Feb
* APGridPMA, Taipei 19 March
* ISGC, Taipei 20 - 23 March
* AARC AHM, Athens 10 - 12 April
* EOSC-HUB kick-off, Malaga 16 - 20 April
* I2 GS & REFEDS, San Diego 6 - 9 May
* TNC18 & REFEDS, Trondheim 10 - 14 June
* PEARC18, Pittsburgh 22 - 27 July
* I2 TechEx & REFEDS, Orlando 14 - 18 Oct

See all of you in Karlsruhe, or at any of the upcoming meetings of the IGTF or elsewhere!
Best regards, DavidG.

Subjects discussed and listed below
----------------------------------
* OIDC Federation at the IGTF for Research and e-Infrastructures
* Community Engagement
* Snctfi development and the Policy Development Kit for AARC
* AUP comparison and alignment study
* Assurance Profiles and Vectors
* Assurance Assessment Matrix
* Sharing of code and ideas
* PMA operational matters, reviews, new Grid-FR, RCauth.eu governance
* Other updates: IANA RFC6711 LoA registry entries, Grid Community Forum
* Attendance

All presentations are available on the agenda page:
  http://www.eugridpma.org/agenda/42
Please review these as well, as a complement to this brief summary. Much information is contained therein and not repeated here.

OIDC Federation at the IGTF for Research and e-Infrastructures
--------------------------------------------------------------
OpenID Connect (OIDC) is gaining a lot of momentum in the Research Infrastructure (RI) and generic e-Infrastructure (EI) space, where it is recognised as the simplest way for services to use third-party identity sources. Addressing both web and non-web use cases (through complementary technologies like OAuth2), it underpins several well-known services in the community. The IGTF has as its traditional aim and scope to "establish common policies and guidelines that enable interoperable, global trust relations between providers of e-Infrastructures and cyber-infrastructures, identity providers", and as such we discussed the logical inclusion of any federation technologies that would help the community. This is not the general use case involving the many identity sources (OIDC Providers, or "OPs") and services that R&E globally needs to solve, but is by nature more limited to RIs and EIs, where "identity proxies" (see for more) result in a limited set of OPs, and where services (RPs) are usually grouped by the infrastructures.
So we expect O(100) entities in the system that would be in scope for the IGTF, not the thousands of OP and RP organisations that constitute the general R&E use case (which is the remit of the eduGAIN OIDC working groups, REFEDS, and the GN*-* projects). The IGTF established a working group on OIDC Federation (see for ToR and background), and during the I2TechEx ACAMP the objectives were clarified. In this session, we looked at the scope of the requirements and the use cases from both RIs and EIs, and at how OIDCFed, coordinated by the IGTF, would help solve concrete issues now and in the near future.

Some complex federation elements are already taken care of in OIDC by the standards: discovery, and several methods of metadata exchange and client registration. But: some client registration methods convey a technical connection only, not trust. And manual client registration, while it can be made trustworthy, does not scale beyond a few entities. Making that trust link is the most important bit now, and for that the IGTF can leverage its existing policy sets: the authentication profiles, and Snctfi (to bind RPs together). More documents may be needed - and part of the exercise is also to find out whether our current policy document suite is comprehensive.

The reason RPs now need to comply with policies is that - in contrast to the PKIX model - the relationship between OP and RP is direct. With user-held (certificate) credentials, it was always the user who presented the credential to a relying party, and who took 'responsibility' for releasing (signed) data to such an RP. In the OIDC case, however, there is a direct exchange of meta-data between OP and RP, and the OP must know to which RP (scope) the claims will be released. Even though the claims data may technically also flow through the user, the RP-OP relationship is direct and must be technically supported by trust. That is why in this case requirements (Snctfi, SCI) should be imposed on the RPs as well.
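To make the trust gap concrete: a dynamic client registration request (as in RFC 7591) is only a small JSON document, and nothing in it proves who operates the client. A minimal sketch in Python - the client name, redirect URI, and helper function are invented for illustration:

```python
import json

def make_registration_request(redirect_uris, client_name):
    """Build a minimal OIDC dynamic client registration payload
    (as in RFC 7591).  Note that nothing in this document proves
    WHO operates the client - that trust link has to come from
    elsewhere, e.g. an IGTF-coordinated federation policy."""
    return {
        "redirect_uris": redirect_uris,
        "client_name": client_name,
        "grant_types": ["authorization_code"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "client_secret_basic",
    }

# Hypothetical RP of a research infrastructure:
payload = make_registration_request(
    ["https://portal.example-ri.org/callback"], "Example RI Portal")
print(json.dumps(payload, indent=2))
```

Vetting such a request manually for each of potentially thousands of clients is exactly the scaling problem noted above; automating it without a policy layer registers the client but conveys no trust.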
In this session we invited our key stakeholders to share requirements and context: AARC, GN*, ELIXIR and the LS AAI, EGI and the EOSC-HUB AAI, CILogon and XSEDE, and the OIDC-Agent/WaTTS use cases. We are grateful to everyone for joining the session and sharing ideas! The following notes must be read in the context of the presentations linked from the agenda page.

* Davide Vaghetti (GARR, for AARC and GN*-*)

The OIDC Federation spec (by Roland Hedberg et al., see https://openid.net/specs/openid-connect-federation-1_0.html) provides the framework to distribute integrity-protected meta-data. The challenge now is how to create the chain of signing keys in an effective and responsive way. Ideas in GN* revolve around an on-line signing service where participating organisations can request (real-time) signing of meta-data - although that would necessitate client authentication to that service. [The alternative discussed at ACAMP is to merely require a trigger to the signing service, which can then retrieve the organisation's meta-data from a trusted URI source. This of course requires basic trust in https, but allows more flexibility, and there are standard tools to protect such transfers - JimB referred to this model as well. The standard allows a URI reference in all places where in-line meta-data is given, so it is fine with such an approach.]

There can also be many parallel federations. The standard itself does not impose any particular model or hierarchy. In the SAML world there is (now) such a de-facto single source (eduGAIN and its signing key), although that does not address groupings such as entity categories. Licia reminded us of the fact that the many parallel and meshed federations of the early days actually resulted in the realisation that a central thing like eduGAIN was needed (and that took time!).
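The chain-of-signing-keys idea can be illustrated with a small sketch: a federation operator signs an entity's metadata statement, and a consumer verifies the signature (and could check freshness via `exp`) before use. This uses a symmetric HMAC purely for brevity - the actual OIDCFed draft uses asymmetric JWS keys and multi-level chains - and the keys and entity names are invented:

```python
import base64
import hashlib
import hmac
import json

def sign_statement(statement, key):
    """Sign a metadata statement: a stand-in for a JWS entity
    statement as in the OIDC Federation draft (which uses
    asymmetric keys and signature chains, not a shared HMAC)."""
    payload = base64.urlsafe_b64encode(
        json.dumps(statement, sort_keys=True).encode())
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify(token, key):
    """Check the signature and return the embedded statement."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical federation operator signs an OP's metadata; the
# 'exp' lifetime is where the key-rotation interval trade-off bites:
fed_key = b"federation-operator-key"
op_metadata = {"iss": "https://op.example-ri.org",
               "metadata": {"op": {"issuer": "https://op.example-ri.org"}},
               "exp": 1735689600}
token = sign_statement(op_metadata, fed_key)
assert verify(token, fed_key)["iss"] == "https://op.example-ri.org"
```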
Yet for OIDCFed the 'federation' can also be seen as an analogue of the entity categories (Sirtfi, R&S, DPCoCo) - and there, too, a central eduGAIN is of no real help today, and a dedicated campaign is needed ... So the problem is inherently complex, and the structure does not much help (nor complicate) it. It should be noted that organisations (and Federation Operators) can be nested, but also 'stopped' and re-rooted at any level. In that sense, OIDC federations are a lot closer to the Communities of Interest that were pioneered by Moonshot.

The general model proposed here by Davide needs 'scalability tests': does it actually work at the intended size of the community? Some of that relates to policies and assessment (trust is more than enforcement of technicalities!), but also to the intended model of change management (maintenance of meta-data). If the target is to change keys every 15 minutes, this puts HA requirements on the signing service, puts continuity of service at greater risk (we know some federations at times fail to update their signatures ;), and pushes these requirements down onto the RPs as well. So a balance needs to be found in real-life tests. Maybe once a day is enough, but much longer and you would need revocation. There is no free lunch (apart from those offered by CESNET). Relying on resilient information (such as https URLs) can make it easier to scale.

* Michal Prochazka (MUNI, for ELIXIR and the LS AAI)

Although at the moment there are just a few clients that talk to the ELIXIR AAI (or the LS AAI), this is expected to grow as the AAI service extends beyond just ELIXIR and into other RIs (BBMRI, Instruct, &c&c). Also many of the cloud service providers will be using OIDC, and there is the potential that dynamically created cloud instances of services will all be OIDC clients - the current manual registration of clients (and issuing clientIDs and client secrets to all) will then definitely break down.
Actually, since the LS AAI will soon underpin a full production RI suite, this is 'urgent' (a time scale of a few months), although minor changes can of course be made later. The IGTF level of trust, based on organisational validation and compliance with at least Snctfi, will be sufficient to be useful. It is not expected that the IGTF would assert suitability for specific client purposes ("this client RP can be used for human data" is not expected as a statement from the IGTF :). Such details will be added by the RI on top of the other trust data.

* Jim Basney (NCSA, for CILogon and SciTokens)

For the use cases that CILogon sees (including XSEDE), the need is for a set of policies and practices that would support a 'trust anchor distribution'-like service, analogous to what there is today for PKIX, but targeting OIDC OPs and RPs. There are a couple of unique features in OIDC that make it better suited for the research use cases, such as the addition of an explicit scope to the OAuth2 tokens. This is also the distinguishing factor between CILogon (authN) and the SciTokens project (authZ). In principle, XSEDE is interested in RPs that are 'in the community' and a way to recognise those, i.e., the 'R&S' RPs. For those, dynamic client registration is especially important in the federation context.

The 'tags' like R&S (and CoCo, Sirtfi) could translate into (overlapping) federations - so we may end up with a set of IGTF federations, just like we have a set of Assurance Profiles today (ASPEN, BIRCH, &c). The standard then also allows federations to be 'meshed' at each end point, so that a relying party (or a consortium such as an EI or RI) can designate a set of trusted sources and "concatenate" them itself. Thus, a single root is NOT needed for federation, as long as the (group of) relying parties has a bit of clue about configuration: it can then flatten the mesh by itself. Adding tags to the JSON meta-data statements is possible but more complex.
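The 'meshing' described here can be as simple as a relying party taking the union of the entity lists of the federations it chooses to trust - no single root required. A toy sketch (federation names and entity URLs are invented; real lists would be signed metadata, not plain sets):

```python
# Invented tag-based 'federations', each a set of OP entity IDs:
federations = {
    "r-and-s": {"https://op.uni-a.example", "https://op.lab-b.example"},
    "sirtfi":  {"https://op.uni-a.example", "https://op.ri-c.example"},
    "coco":    {"https://op.ri-c.example"},
}

def trusted_ops(chosen):
    """An RP (or RP consortium) flattens its own trust list by
    concatenating the federations it has decided to rely on."""
    result = set()
    for name in chosen:
        result |= federations[name]
    return result

# This RP relies on the R&S-like and Sirtfi-like federations only:
print(sorted(trusted_ops(["r-and-s", "sirtfi"])))
```

The flattening is trivial; the hard part, as noted above, is the policy and assessment behind each federation's list.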
Traditionally, the IGTF has devolved decisions to the relying parties. For any kind of distribution (or signing service), it should be kept in mind that even today our major relying parties (EIs) actually re-bundle the IGTF distribution based on their own risk assessments, and add (and might remove) specific trust anchors at will. So 'who owns the policy set in a federation'? Yet it should be noted that choice results in complexity, and especially in the general use case choice should likely be avoided. And for the RIs and EIs: even there, sites look towards the central (EGI, PRACE, WLCG, ...) policy groups for definitive guidance. For the IGTF use case, however, there will be O(100) organisations, so it will be easier to manage. The expected number of OPs is < 100, the expected number of RP organisations O(100), but there may be thousands or millions of RP clients (microservices, dynamic cloud services, instances, &c).

* Nicolas Liampotis (GRNET, for the EOSC-HUB AAI, EGI and B2ACCESS)

The goal for EGI and EUDAT in EOSC-HUB is a scalable and trusted form of OIDC usage. Today there are still fewer than O(50) clients, but already next year that might be O(100-1000), and cloud-based services (containers, microservices) could push that to millions. The current client registration is heavyweight, as it depends on manual assessment and bilateral trust (via mail as well). And it does not address the maintenance of registration data (end of life, revocation). EOSC will also have a need for monitoring - the OIDC end-points are public and could be tested automatically, but then you need a testing credential! Today in EGI all clients are reviewed against, and must assert compliance with, the EGI service operations policy and top-level policy (see ). EUDAT relies on self-assertions only, with no checking. For a federation context, peer-reviewed self-audit could work, and it should be at the organisation (NOT the client) level.
Groups like "R&S" federations would make sense for this use case.

* Marcus Hardt (KIT, WaTTS & OIDC-Agent)

The most interesting tool from the suite presented by Marcus would (in his own words) be OIDC-Agent. Modelled on the analogous "ssh-agent" concept, it allows command-line forwarding of access tokens by loading a 'refresh token' into a persistent agent process that can be queried and used by CLI tools from various places. Here, the scale of the problem has the potential (this needs to be investigated!) for an explosion of clients: #users x #devices -> 1M+ clients? Adding one level of indirection (client-introduced clients) can help mitigate the explosion, but is out of scope of the OIDCFed work. There is a potential policy issue if clients cannot be limited to acting only as sources of action (initiators): in general they could also start acting as services, which would be subject to other policies. Limiting issuance to access tokens could solve this (but ask Marcus for details - this is very much a rough thought). If one can limit the client to act as an initiator only, the relevant policy is 'just' the AUP, and that could be implemented with a checkbox. The concept of OIDC-Agent is very appealing to CLI-based infrastructures and could be very useful for XSEDE!

Community Engagement
--------------------
Although relevance to the community is of course the best way to increase engagement from RIs and EIs, it would also be beneficial to actually make that relevance more visible. The OIDCFed effort is one thing, but it needs more publicising in various forums in order to attract new RIs to the IGTF as members. The realisation that federation is more than just some technology and needs policies and assessments usually comes after a few years - we can help out there. Membership being a prerequisite for getting into the (RP-oriented) OIDCFed will of course be a trigger in itself, but we should be better findable.
The other key value for RPs in joining is that they can also exchange ideas among themselves, and share and influence developments in Snctfi and the policy development kits. Potential target communities in the short term that should want to be in are the Life Sciences, LIGO-Virgo (for OIDC?), but also the SKA/AENEAS community, who are now taking their first steps towards federation in AENEAS WP6.1. There are also national communities to target (DAPHNE in the UK), and we can find communities based on the AARC surveys (like those for the AUP).

Now it should just be clearer to the world that the IGTF can help here, and for that we have both a working demonstrator (the AARC pilot on this, with which Mischa and Jouke at Nikhef and JimB at XSEDE can help) and a stronger engagement with the EOSC-HUB and AARC communities. There is also a role for WISE, where we can dedicate time to Snctfi and OIDC & policies based on the SCI framework. The other component is good material to explain what can be done. There are great experts at EGI (Sara) and at GEANT (Laura) whom we can ask. And as a result, it would also help ourselves in clarifying what we do, since it requires us to actually explain it in clear language! Also involved in this effort should be the TAGPMA - it might help get more funding to Latin America if the OIDC elements are made clearer.

Talking opportunities:
- WISE in Abingdon (and the SCI session)
- APGridPMA and ISGC (guerrilla poster)
- an AARC InfoShare
- I2GS and TNC (in an AARC session)
- GEANT Connect glossy
- PEARC newsletter

Snctfi development and the Policy Development Kit for AARC
----------------------------------------------------------
The Scalable Negotiator for a Community Trust Framework in Federated Infrastructures (Snctfi) model is an off-shoot of the SCIv1 framework, developed in AARC1 to support scalable policy negotiation and to group entities behind an IdP-SP proxy (in the AARC Blueprint Architecture) within a common policy set.
It is a partially normative framework that also requires compliance with Sirtfi (incident response) and puts certain requirements on community policies and the organisation.
  https://igtf.net/snctfi
With RPs becoming more important (e.g. in OIDCFed) this framework takes on a more important role, and it is being used e.g. in EGI (and likely EOSC-HUB) to structure some of the community-oriented policies. This then covers both generic services and the new Thematic Services in EOSC-HUB (which has many communities and services working together).

There are several options on which to base the policy development kit (PDK) that has been strongly requested by the user communities in AARC and also by the EOSC-HUB Thematic Services. A possible basis (which of course should be assessed and revised) is the current EGI SPG suite:
  https://wiki.egi.eu/wiki/SPG:Documents
The PDK should in itself be more than just Snctfi, and also address data protection (GDPR) guidelines; management of services, VMs, and containers in (cloud) infrastructures; and the role of 'value-added resellers' and community services that layer on top of these inside such containers. And these may depend on how the services are actually run: are they just 'user jobs' without privilege, or do they require sysadmin skills of the operators?

At the same time, the AARC NA2 training activity needs policy elements to support training for the communities, and an inventory of ideas was made (see the agenda page). For AARC the focus is (obviously) on the research communities now, which form the cornerstone of the AARC pilots and the training effort. Many of them have expressed the need for either a policy kit with templates or a set of policies they could adopt 'as-is' for their community. The complementary aspect to this is that the research communities - when using the generic EIs - will have to abide by both the generic EI policy set and any augmented policy requirements that are community specific.
Whenever the structure of policy sets and documents is considered, the result is a framework that essentially conforms to SCI, of which version 2 was recently adopted and endorsed (see the TNC17 pages for details). This suggests using SCIv2 as the basis for organising the PDK, taking additional inspiration from the EGI SPG policy suite already referenced above. Each project or Infrastructure can then take elements from this policy set and brand it appropriately - the development should preferably be joint between all projects (both in Europe and in the US, where CTSC has a similar initiative, re-using much of the foundational JSPG work). Besides this joint session between AARC and the IGTF, the WISE community workshop and the SCIv3 WG should also be involved.

There are several approaches then to the PDK suite:

1. a templating approach where communities can just fill in the blanks.
   This has the advantage that all policies will be the same, but it makes it harder to reach consensus, and in some cases projects do not want to take a template because they want their uniqueness emphasised (as OSG has done with the JSPG set).

2. a framework approach (like SCI), where each community develops a policy set but - as long as all elements are addressed - is free in content and wording.
   This can bring in many RIs and EIs, but the mapping exercise is expensive (we know that from Bridge PKIs), and moreover there is no common exchangeability, so (bilateral) protocols are needed to make the mapping complete and acceptable to both parties.

3. a 'baseline' approach, where some elements are common to everyone (the 'black-lined' text in a template), but the community is explicitly allowed and encouraged either to make additions to each policy statement, or to have a complementary policy that extends or augments the baseline.

Then the 'baseline' should be sufficient to allow most use cases (e.g. access to generic EIs), as well as put community-specific statements (e.g.
the requirement in BBMRI to forbid attempts to reverse-engineer pseudonymisation of data one has access to) in their logical place. Of course, this can also be done with a policy document hierarchy. The current line of thought favours using the grouping of the EGI SPG policy set as a basis, but initially using a wide consultation process to actually determine a baseline. Then model #3 is likely to be the most convenient for communities, as well as to retain effectiveness for interoperation across RIs and EIs.

Some policies in the PDK have higher priority than others. Having a joint AUP baseline is high on the community requirements list, as it is imperative to show such an AUP to new users. In both EOSC-HUB and AARC2 this was explicitly highlighted as an important need (and in AARC2 singled out in the DoW as well). The Community Policies (on which work was done jointly by AARC NA3 and EGI-Engage in the summer of 2017) are similarly high-ranked. For access to the existing EIs (EGI, EUDAT), meeting their policy requirements is of course necessary. Thus, before interoperation (or service use) can start, the communities should also agree on the common set. For RIs, this could be a statement such as "we have all SCIv2 policy elements in place, AND we comply with the Baseline version X". Smaller communities might just prefer to take the entire PDK suite as-is.

Short-term work, for the AARC2 review (May 2018):
- structure of the PDK defined (using SCI and the EGI SPG grouping)
- community-centric (Snctfi-compliant) policy templates in place
- AUP draft under discussion (see below)

Elements for data protection (GDPR) will be deferred until we know the results of the DPCoCo version 2 work and whether the DPB (WP29) will accept the idea - including the aspect of attribute authorities therein. The elements of the PDK can also be used for the initial training suite.
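The 'baseline' model (#3) can be sketched in a few lines: the baseline statements are taken verbatim, and a community may only layer additional statements on top, never alter the baseline. The policy texts below are paraphrased for illustration, not the actual PDK wording:

```python
# Illustrative baseline statements (paraphrased, not real PDK text):
BASELINE = [
    "You shall use the resources only for the purpose for which "
    "access has been granted.",
    "You shall comply with the incident response procedures.",
]

def community_policy(additions):
    """Compose a community policy: the common baseline verbatim,
    plus community-specific statements layered on top (as in the
    BBMRI anti-de-pseudonymisation example)."""
    return BASELINE + list(additions)

bbmri_like = community_policy(
    ["You shall not attempt to reverse-engineer pseudonymisation "
     "of data to which you have access."])
assert bbmri_like[:2] == BASELINE   # baseline is unchanged
assert len(bbmri_like) == 3         # community addition on top
```

Keeping the baseline immutable is what preserves interoperability: any two communities built this way automatically share the common set.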
An important element then would be a risk assessment by the community of how much policy they need, and of how statements (configurable variables in a policy template) should be chosen. Not all communities may need all policies, and e.g. for multi-factor authentication (MFA) the need will depend on an end-to-end risk assessment specific to the community. Potential sources of risk-assessment training and methodologies are also TF-DPR (Andrew Cormack) and the WISE RAW working group. There should be (and probably are) already flow charts and training on how to do the risk assessment, and these should be re-used as far as possible (maybe CTSC has something in this area). Application-specific data protection needs will always be out of scope for a generic PDK; that is clearly a matter for Infrastructure Operations. (Keep in mind that a risk assessment is needed here - e.g. for data breach notification, where there is a 72-hour cut-off limit for acting!)

AUP comparison and alignment study
----------------------------------
Ian Neilson, in the context of AARC NA3, did a comparison of Acceptable Use Policies for RIs and EIs, the results of which are being developed on the Wiki page:
  https://wiki.geant.org/display/AARC/NA3.3++e-Researcher-Centric+Policies
In quite a few cases, the joint original JSPG AUP (the "Taipei Accord") is still clearly visible: this is the case for EGI, EUDAT and WLCG. Others have added significant new elements (like PRACE, where particular permissible-use elements like "peaceful use" are singled out, which in the other versions are grouped under "purpose for which access has been granted" - and we would expect that to have been covered in the review of the project before access was granted). Other AUPs, such as those of BBMRI and ELIXIR, add community-specific (and very important!) elements such as the obligation not to de-pseudonymise. In general, such community-specific statements can be 'layered' on top of a generic AUP that would come from the EIs.
For PRACE, the 'community' typically equals the Home Site. Site (organisational) AUPs typically branch out into all kinds of other areas (such as permissible private use for employees) that are not relevant to the RI/EI AUPs since - cleverly :) - the JSPG AUP immediately limits use to professional work only (and can do that because it is not corporate or desktop IT).

At the moment, about a dozen AUPs have been analysed. The plan is to extract a common (minimum) baseline and then make a template model on which a hierarchy of policies (see the PDK discussion, option #3) can be built. A 'guide to the usage of the template' should accompany the text. The baseline should not be changed, as that would significantly reduce its value - which also means the baseline should really be the baseline (e.g. the set for generic EIs only, not more). And since this should preferably be global, wide consultation is needed: WISE, with XSEDE, and in Europe the AEGIS consultation group can be used to get adoption once it is near-complete. Hopefully OSG can follow XSEDE - the new OSG security officer, Susan, should be involved as well.

There are plenty of use and testing opportunities for an AUP template:
- EOSC-HUB
- the Helmholtz Data Federation (DE)
- DAPHNE (UK)

Timeline:
- before WISE: have a joint AARC-EOSC-HUB discussion (TIIME?), and develop a strawman AUP (maybe based on EGI/JSPG) WITH input from EUDAT and PRACE (ask Ralph for that one) - this would start with the structure
- at the WISE meeting: expose the work and involve more people
- March: formalise v1 - in time for the AARC2 review, please ;)

That version can also be used to develop some training (with Uros).

Assurance Profiles and Vectors
------------------------------
The new assurance profile guidance from NIST in SP800-63v3 is a significant deviation from previous models.
Now decomposing assurance into three different 'vectors' (identity, authenticator, federation), it should allow for more flexibility - for those of us able to actually parse such composite assurance (there are none, really). A summary is given in the slides by Scott at
  https://indico.nikhef.nl/event/1076/contribution/18/material/slides/0.pdf
The NIST specification itself consists of three documents (63A..63C), and there is one planned (63D) that will address international mapping (certainly to eIDAS, likely more). The international mapping was separated out due to time constraints only. Unlike, for example, the REFEDS Assurance Framework
  https://wiki.refeds.org/display/CON/Consultation%3A+REFEDS+Assurance+Framework
it does not group combinations together into profiles, but leaves it to the implementers to pick and combine (the idea was that vendors that specialise in identity would need to look only at 63A and no other documents; but since any assurance is always a combination of the three elements, that will not quite work). To make a risk-based decision for any use case, the combination is essential. The REFEDS RAF elements "identifier" and "association freshness" are absorbed into identity in 63v3.

There are now plenty of profiles and vectors. The IGTF levels are of course now in the IANA RFC6711 registry
  https://www.iana.org/assignments/loa-profiles/
and there are two REFEDS RAF levels (Cappuccino and Espresso) as in the above document (the final version awaits resolution of the REFEDS SFA and MFA profiles). And for comparing assurance between infrastructures, the AARC JRA1.1 document gives the two IGTF levels BIRCH and DOGWOOD as reference profiles, but additionally introduces two hybrid levels (Darjeeling and Assam) to address specific interop use cases.
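The 'pick and combine' nature of vectorised assurance can be made concrete with a small sketch: a relying party's risk-based decision needs all three components together, since no single number summarises them. The component names and level values here are illustrative, not the actual NIST or RAF vocabulary:

```python
# Illustrative only: order the component levels, then require a
# minimum in EACH of the three vectors (identity proofing,
# authenticator, federation) - a profile is just such a triple.
ORDER = {"low": 0, "medium": 1, "high": 2}

def meets(offered, required):
    """True if the offered assurance satisfies the required
    minimum in every one of the three components."""
    return all(ORDER[offered[c]] >= ORDER[required[c]]
               for c in ("identity", "authenticator", "federation"))

profile_needed = {"identity": "medium", "authenticator": "medium",
                  "federation": "low"}
idp_asserts = {"identity": "high", "authenticator": "medium",
               "federation": "low"}
assert meets(idp_asserts, profile_needed)
# A deficit in one component is not compensated by excess in another:
assert not meets({"identity": "low", "authenticator": "high",
                  "federation": "high"}, profile_needed)
```

Grouping such triples into named profiles (as REFEDS and the IGTF do) is precisely what spares relying parties from evaluating the raw combinations themselves.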
The draft is at
  https://docs.google.com/document/d/1Fi07J9lpUbqYTlPMINkbHl7xvA5tJ98L4jai6XNKbDM/view

Assurance Assessment Matrix
---------------------------
The (self-)audit assessment matrix (for the assurance profiles, not the technology profiles) was reviewed for sections 1-3. The methodology is valid, but - as always - clarification is needed for some of the text. Changes that would need revision of the Generalised Assurance Profile document itself were deferred (but recorded, so that they can be incorporated in version 1.2). The latest version of the assessment matrix can be found at
  http://wiki.eugridpma.org/Main/AssuranceAssessment

Sharing and re-use of ideas and code
------------------------------------
Jens's Soapbox was dedicated to the sharing of experiences. Until recently the CAOPS working group was the go-to venue to exchange ideas, and also code and code snippets, in the wider community. CAOPS attracted people from the IGTF circle proper (such as Mike Jones), and provided a place for technical discussions beyond the policy focus of the IGTF. What we see in practice is that many authorities reinvent parts of the wheel, and tools like OpenCA grow modifications whose sharing would benefit the community at large. With enough collaboration, one could even start considering "CA as a Service", given that copying (of CP/CPS documents and code) happens anyway. Attaching credential repositories to services like the Pathfinder AAI CA (UK) potentially shares a lot with credential repositories for MasterPortals, for example, and there too sharing ideas and code snippets might be helpful. (On that note: the Pathfinder AAI CA now uses per-assertion assurance information conveyed through Assent to determine eligibility for BIRCH and compliance with Sirtfi - given that such meta-data is not readily available otherwise.) Even the structure of an authority organisation could be shared.
And what in some cases is done with legal contracts (as for TCS) could be re-used in a non-legal but more practical context (why not re-use the model subscriber agreements as a guide to what an admin should consider when saying "yes, I do"?). As a single sharing forum is impractical, the idea is to deposit code and code snippets (even in a non-beautified form) in a public repository (say, GitHub), and have a common single place to refer to these repositories. That common place can then be the existing CAOPS Redmine project wiki (see https://redmine.ogf.org/dmsf/caops-wg). This should be announced as well, though, maybe on the caops-wg@ list.

PMA operational matters, review, and accreditations
---------------------------------------------------
- Of the pending self-audits, the one for UGrid was completed successfully; the others await either updates or reviewer comments.
- MaGrid presented an update (close to a self-audit, to be done soon), and MARWAN could consider TCS as well. There are options in TCS to issue without being part of eduGAIN. See for example eduPKI, which is for eduroam use only (and kindly managed by DFN-CERT for GEANT).
- Specific measures have been taken with respect to non-responsive authorities, which could result in suspension from the distribution. PLEASE look at the validUntil date of your Root and ICAs once in a while - when you get a warning from the Chair, it's too late ...
- The new Grid-FR structure is now in place, with the issuance systems operated by the French Ministry of Education. This makes it an on-line classic CA with HSM support, but using all the existing RA processes and practices.
TCS was "not the chosen solution" for internal reasons (but Renater of course happily does TCS for everything non-France Grilles). The profiles (v1.1 of the CP/CPS) were reviewed and commented on, and since they are very close to existing practice, the v1.1 CP/CPS and new trust anchors were APPROVED and will be DISTRIBUTED in the next release (end of Feb).
- RCauth.eu will be going through changes in governance, and additional operational partners (GRNET for EGI.eu, STFC for EUDAT) will be introduced. The PMA will have autonomy in the implementation of policy and practices, and an operational team will report to it and ensure trusted operation. Trusted people from the wider community (PKI experts who also have federation knowledge and are involved with the communities) will constitute the PMA. The Governance Board will be the org reps that put resources (people, money) on the table. The key could either be a single one shared by the three ops partners, or there could be three distinct ICAs for RCauth. The latter is kind-of ugly for the outside world, and since trust is needed anyway, the IGTF has no objections to having a single key for all three partners. When a partner withdraws from the operation, the ICA key may have to be revoked and re-issued, though! The key distribution ceremony must be described and auditably documented - in the new CP/CPS, therefore ;)
- The GridKa-CA self-audit will be reviewed again and the (minor) changes implemented in a new CP/CPS. IPv6 CRL end-points will be there in Feb 2018; a SHA-1 root is still fine, and if a subscriber organisation really wants to put something in a CAA record of their own, why not use "gridka.de" there.

Other updates
-------------
- The IGTF assurance profiles have been registered with IANA as per RFC6711.
Look in the registry at
  https://www.iana.org/assignments/loa-profiles/
- As referenced in the TAGPMA update, there is now a Grid Community Forum (GCF) with a Grid Community Toolkit (GCT) distribution, which will be the sustainable fork of the Globus(tm) Toolkit. The Globus open source toolkit will no longer be supported by the Argonne team after 2018 (and during 2018 there will only be critical security updates). Those using Globus can join the forum and contribute to the work on GitHub:

Attendance
----------
We would like to thank the following members for their in-person attendance: Jana Kejvalova, Jan Chvojka, Ingrid Schaeffner, Melanie Ernst, Marc Turpin, Scott Rea, Dave Kelsey, Bouchra Rahim, Samia El Haddouti, Ian Neilson, and DavidG;

the RI and EI OIDC session contributors for attending the meeting: Michal Prochazka, Davide Vaghetti, Marcus Hardt, Nicolas Liampotis, Mischa Salle, and of course also our members Christos Kanellopoulos and Licia Florio!

And, for their extensive presence in the videoconference: Miroslav, Vladimir, Roberto, Reimer, Mariam, Cosmin, Jens, Nuno, John Kewley, Uros Stevanovic, and Hannah Short.