Breakout session notes, CDISC UK Network Face to Face meeting, June 23rd 2015


Questions that came up during the breakout sessions are colour-coded:
Blue - questions that were answered during the session, or afterwards.
Red - questions that came up in the sessions and which we plan to feed back to CDISC.
Orange - open questions to anybody: if anybody reading through this document has an answer then let us know!

CDASH / Standards Governance

1. Biggest struggles encountered when dealing with CDASH implementation: CDASH only covers a few domains.
2. Challenge with governing standards: the protocol not being consistent, and getting the correct people to review the eCRF and for it to be a targeted review.
3. What is happening with the new CDASH working group for version 2.0?
4. CDASH terminology - why do we have this, and how does it work with SDTM terminology? Davy has researched this: the CDASH terminology is limited to a small number of domains (CM, EG, EX, SU, DA and VS).
   * The main difference from the SDTM terminology is that the SDTM terminology is much more generic, e.g. CDASH CMDOSFRQ vs SDTM FREQ.
   * The CDASH terminology documents only a limited set of values, whereas the SDTM terminology accommodates a far larger number. The CDASH terminology should therefore be seen as a limited, specialised subset of the SDTM terminology. The added value is mainly that these subsets can be readily integrated into a CRF without the burden of having to go through the enormous list of possibilities present in the SDTM terminology. (A small sketch illustrating the subset relationship follows this list.)
5. There is not enough guidance in general on how to map / standardise medical devices.
6. Has anyone done a cost analysis of how much it costs downstream to change / add a standard field?
7. Discussion about CDASH compliance and how Davy's tool works (see presentation).
8. SDTM CT - it is not always backward compatible, and it needs to be. It requires better versioning.
9. There needs to be a better explanation when a term is removed from SDTM (and other standards?).
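As a quick illustration of item 4, the sketch below checks that a CDASH picklist is a subset of the corresponding SDTM codelist before it is offered on a CRF. The codelist contents here are illustrative examples only, not the official CDISC controlled terminology.

```python
# Minimal sketch, assuming illustrative codelist values (not the official
# CDISC controlled terminology): a CDASH picklist (e.g. CMDOSFRQ) should be
# a specialised subset of the broader SDTM codelist (FREQ).
sdtm_freq = {"ONCE", "QD", "QOD", "BID", "TID", "QID", "PRN", "QH", "Q4H"}   # hypothetical excerpt
cdash_cmdosfrq = {"QD", "BID", "TID", "QID", "PRN"}                          # hypothetical CRF picklist

missing = cdash_cmdosfrq - sdtm_freq
if missing:
    print("CDASH terms not found in the SDTM codelist:", sorted(missing))
else:
    print("CMDOSFRQ picklist is a subset of FREQ; it can be offered directly on the CRF.")
```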

10. There was a discussion on "other, specify" questions and where to map the free text fields for these, as I think different companies have various rules on this. This is covered in the SDTMIG v3.2 section "Submitting Free Text from the CRF".

Specify Values for Non-Result Qualifier Variables
This section contains three different examples of how to handle "other, specify" values, depending upon the sponsor's wish to maintain terminology. Each method has its advantages and disadvantages (a short sketch of the three options follows this subsection).

1) Maintain terminology - store OTHER in the non-result qualifier and the "other" description in SUPP--.
E.g. EXLOC = OTHER, with EXLOCOTH in SUPPEX = UPPER RIGHT ABDOMEN
Summary:
- SDTM mapping can be done upfront.
- Codelists can be defined upfront in define.xml, so easy compliance checking is possible.
- Variable data that is spread out into 2 fields might require some more programming for analysis.
- The OTHER value might be flagged by the SDTM compliance checker; however this is not a real issue as it is perfectly compliant with the SDTM business rules.
- Note that the "Other race, specify" example included for the Demographics domain follows this approach.

2) Maintain terminology by mapping the verbatim to SDTM terminology - store the mapped CDISC-compliant terminology value in the non-result qualifier and at the same time store the (original) "other" description in SUPP--.
E.g. EXLOC = ABDOMEN, with EXLOCOTH in SUPPEX = UPPER RIGHT ABDOMEN
Summary:
- SDTM mapping can NOT be done upfront: each "other, specify" value needs to be (manually) mapped.
- Codelists can be defined upfront for define.xml; all values are known upfront, which allows for easy compliance checking.
- Variable data that is spread out over 2 fields might require some more programming for analysis; however, if each value is properly mapped to SDTM terminology there is a big chance that the original entry stored in SUPP-- will be ignored.

3) Not maintain terminology - store the verbatim in the non-result qualifier.
E.g. EXLOC = UPPER RIGHT ABDOMEN
Summary:
- SDTM mapping can be done upfront.
- Codelists cannot be defined upfront for define.xml.
- Variable data that is spread out into 2 fields might require some more programming for analysis.
- Lots of free text values will be flagged as non-compliant with the terminology; this might result in some real terminology issues being overlooked, as they tend to hide themselves inside the compliance reports.
- Identifying the "other, specify" values requires more programming effort and full CRF knowledge.
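The sketch below illustrates the three non-result-qualifier options using the EXLOC example above. It is a minimal sketch with hypothetical subject data (pandas assumed available), not a definitive SDTM implementation.

```python
import pandas as pd

verbatim = "UPPER RIGHT ABDOMEN"   # free text from the "Other, specify" field
mapped = "ABDOMEN"                 # hypothetical manual mapping to CDISC terminology

# Option 1: EXLOC = OTHER, verbatim carried in SUPPEX under QNAM = EXLOCOTH.
ex_1 = pd.DataFrame([{"USUBJID": "SUBJ-001", "EXTRT": "DRUG A", "EXLOC": "OTHER"}])
supp_1 = pd.DataFrame([{"USUBJID": "SUBJ-001", "RDOMAIN": "EX",
                        "QNAM": "EXLOCOTH", "QLABEL": "Other Location", "QVAL": verbatim}])

# Option 2: EXLOC = mapped terminology value, original verbatim still kept in SUPPEX.
ex_2 = ex_1.assign(EXLOC=mapped)
supp_2 = supp_1.copy()

# Option 3: verbatim stored directly in EXLOC, no SUPPEX record created.
ex_3 = ex_1.assign(EXLOC=verbatim)

for label, df in [("Option 1", ex_1), ("Option 2", ex_2), ("Option 3", ex_3)]:
    print(label, "-> EXLOC =", df.loc[0, "EXLOC"])
```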

Specify Values for Result Qualifier Variables
This section shows examples of how the collected data can be represented across --ORRES and --STRESC (a short sketch follows below):
1) --ORRES = verbatim as originally collected; --STRESC = OTHER
2) --ORRES = verbatim as originally collected; --STRESC = mapped value which is part of a codelist (CDISC / CRF / sponsor)
3) --ORRES = verbatim as originally collected; --STRESC = verbatim as originally collected
The advantages/disadvantages are similar to the ones listed for the non-result qualifiers.

Specify Values for Topic Variables
- Events/Interventions: the verbatim goes into --TRT / --TERM.
- Findings: "other, specify" tests would need to be mapped to existing terminology.
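A minimal sketch of the three result-qualifier options, with a hypothetical collected value; "--" stands for the two-character domain prefix.

```python
verbatim = "REDNESS AROUND INJECTION SITE"   # hypothetical collected result
mapped = "ERYTHEMA"                          # hypothetical mapping to a codelist term

options = {
    "Option 1": {"--ORRES": verbatim, "--STRESC": "OTHER"},    # flag standardised result as OTHER
    "Option 2": {"--ORRES": verbatim, "--STRESC": mapped},     # map verbatim to a codelist value
    "Option 3": {"--ORRES": verbatim, "--STRESC": verbatim},   # carry the verbatim through unchanged
}
for label, record in options.items():
    print(label, record)
```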

Regulatory requirements and submissions - questions on Dataset-XML and define.xml

1. How to select the version required for a submission: certain TAs require a later version of the standard, but the FDA is not yet accepting SDTM 3.2. The suggestion is to keep a close eye on the catalog on the FDA "for industry" page. This question was raised at the FDA reviewer session at the CDISC European Interchange May 2015, Module 1. The suggestion from FDA CDER (Center for Drug Evaluation and Research) team members in that session was to contact CDER at any point (ideally near the start of a study) with questions about specific domains/variables, to ask whether they can be used.
2. Dataset-XML - there is a strong business case for change: the limits of 8 characters for variable names, 40 for labels and 200 characters overall for values would go with Dataset-XML, as would the lack of foreign-character support (a short sketch of these limits follows this list). Dataset-XML - when is it coming at the FDA? Yes, in the future; the pilot project was really successful, with a 100% re-conversion rate. This also came up at the FDA reviewer session at the CDISC European Interchange May 2015, Module 3. The suggestion from the CDER team members at that time was that CDER have more work to do before deciding whether to support it, perhaps a phase 2 pilot.
3. eCTD - would SEND ever be included in the eCTD? SEND would fit well (Mike Harwood); Vada Perkins to follow up.
4. For listings, are the regulators going to use SDTM or ADaM? Can we remove fancy tables, or do we need to keep producing them? These are not requested by the EMA, but are required for the FDA.
5. Do we have to send listings for the EMA? No.
6. Do we need a standard for the audit trail? In Europe they want access to trial data and audit information. ODM can carry this audit information, so this could be a potential standard.
7. When to standardise data? Do you standardise data for submission consistently from the beginning, or only when the trial gets called for submission? As required by your business; there comes a point when it makes sense to realise your investment by converting lots of legacy data for cross-trial analysis.
8. Submitting programs for ADaM datasets - it is not clear whether these need to be prepared upfront. What we do know: the code needs to be discussed at the first meeting; the FDA want to see the programs, but they do not need to run in situ; the FDA meeting can be two years before a submission, but if the person changes they can require different information; the FDA can be more focused on the stats, i.e. efficacy rather than safety. Don't offer to send in executables, as the FDA is bound to say yes; some sponsors are providing this all the time.
9. Do tools exist that allow you to save out define.xml? Yes, there are free tools available from CDISC.org.
10. OpenCDISC Validator Enterprise - is anybody using this in the cloud? Some potential customers are reluctant.
11. What are the gains when submitting SDTM to the FDA? Faster responses? Is there any evidence?
12. ADR scales - are they going to be harmonised? Question for the Controlled Terminology group.
13. How is the FDA bringing departmental approaches together? Using the same software is one way. CBER and CDER are trying to work closer together. The JumpStart program (a small proportion of trials) is also helping.
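The transport-file limits mentioned in item 2 are the kind of checks a conversion pipeline typically runs before writing a SAS v5 transport (XPT) file. Below is a minimal sketch of such a pre-flight check with hypothetical metadata; moving to Dataset-XML would remove the need for these checks.

```python
def xpt_issues(name, label, values):
    """Report violations of the SAS v5 transport limits cited above:
    8-character variable names, 40-character labels, 200-character values,
    ASCII (no foreign-character) support only."""
    issues = []
    if len(name) > 8:
        issues.append("variable name '%s' exceeds 8 characters" % name)
    if len(label) > 40:
        issues.append("label for '%s' exceeds 40 characters" % name)
    for value in values:
        if len(value) > 200:
            issues.append("a value of '%s' exceeds 200 characters" % name)
        if not value.isascii():
            issues.append("a value of '%s' contains non-ASCII (foreign) characters" % name)
    return issues

# Hypothetical example: a long supplemental-qualifier value containing an accented character.
print(xpt_issues("QVAL", "Data Value", ["Crème applied to the upper right abdomen. " * 5]))
```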

SDTM

1. More implementation examples, based on real-world trials: the IG should contain guidance for the kinds of problems faced in practice. Is there any prospect of producing a comprehensive dummy trial exemplifying the CDISC standards, based on a real trial design?
2. When SDTM contains multiple values, only one of which is to be used in reports/analysis, there seem to be several ways this might be represented in SDTM, depending in part on the context. Some of the documentation about this is cryptic and little rationale is given. It would be useful to know what the author of these variables had in mind, and to have examples of use. E.g. qualifier variables for the Findings observation class (SDTM v1.3 & v1.4, section 2.2.3):
- --EXCLFL (exclude from statistics): the documentation says "not to be used with human clinical trials", without any explanation.
- --ACPTFL (accepted record flag): this appears beneath the variables --EVAL and --EVALID, so there was perhaps some intent that it should be related to these variables, but this is not obvious.
- --SPCUFL: specimen usability for the test.
Some people thought that decisions about which values end up being used in analysis should not be represented in SDTM unless this is known at the time of data collection. Is that correct? (There are some instances elsewhere in SDTM where information related only to analysis gets included in SDTM datasets, e.g. the MedDRA SOC used in analysis ends up in AE.AEBODSYS, but this is not necessarily known at the time of data collection.)
3. Representing the event adjudication process, and how events change through the adjudication process: this can be a complex process for which a carefully constructed data model is needed. SDTM does not currently cater for this (there are a few variables related to it, but it has not yet received any comprehensive treatment). (Simone Suriano, Ethical GmbH, talked about modelling the event adjudication process at the CDISC Europe Interchange 2015.) Does CDISC have any plans for a model for event adjudication?
4. Some Gold members would like access to e-SHARE without having to become Platinum members.
5. PDF for non-Adobe PDF viewers - is a non-portfolio version of the standards PDFs available? Yes: when downloading the SDTMIG v3.2 package from the CDISC website there is a separate link for downloading a non-portfolio version; the link says "Download the SDTMIG v3.2 as a single file" (verified).
6. There was some discussion about which tools are being used for deriving SDTM and ADaM datasets. Among the people attending this meeting, most seemed to use SAS for this, but there was also a significant minority (I estimate about one third) using other tools, e.g. bespoke or off-the-shelf ETL tools.

ADaM - Topic for discussion: How far can we go in standardising ADaM datasets and TFLs as an industry?
1. Do we want to?
2. Safety data - what is the situation?
3. Efficacy data - what is the situation?
4. Barriers / Drivers to Success

1. Do we want to?
The general consensus was yes, it is worth doing, but there are real difficulties in doing so. The ADaM standard is less mature than SDTM, so the situation is akin to where SDTM was some years ago: the building blocks are in place, the uptake is there, but as yet there is no common "this is how we do it".

2. Safety data - what is the situation?
For safety analysis reporting (demography; adverse events; medical history; physical examination; concomitant medication; labs; ECGs; vital signs; subject disposition) there could be far more agreement on what the standards are for datasets and for TFLs.

3. Efficacy data - what is the situation?
Efficacy is far more complicated, but it could be addressed. Building on the TA standards from SDTM, core sets of analyses could be defined. For example, the majority of the analyses that Pharma Company A performs on a diabetes trial will be the same as, or very similar to, those of Pharma Companies B, C and D. Undoubtedly there will be unique analyses for any study, but why not agree on the common analyses?

4. Barriers / Drivers to Success
Technology
o Barrier to Success - many companies are heavily invested in existing technology, corporate standards and reports.
o Driver to Success - the conversation needs to move on from "we like our tables to look like this" to "this is how we as an industry build datasets, and the summary tables that go with them look the same". Differences should not be about cosmetics, but about clinical information.

In the medium term, the availability of hosted services (MDRs and study reporting platforms) will likely drive standardisation in reporting. The availability of end-to-end platforms, supporting canned datasets and reports for safety and TAs, will be a large incentive for organisations looking to leverage the maximum value of Software as a Service. The cost and time savings from using pre-defined datasets and reports that are submission-ready and regulatory tried-and-tested, compared with the creation of bespoke reports, will see organisations move from in-house solutions to commoditised, industry-standardised reporting, where the majority of analysis is routine and predictable, and the focus and effort is placed on the unique and bespoke.

Political Debate
o Barrier to Success - getting companies to agree what is standard, and who decides, is a challenge.
o Driver to Success - part of the purpose of organisations such as CDISC and PhUSE is to achieve agreement and consensus across the industry. This should be an area for future exploration.

Communication and Trust
o Barrier to Success - companies may see their solutions as part of the competitive advantage they have in reducing the time for analysis and getting their drug through the submission process, so there may be limited willingness to share information and commit to being open and transparent about issues and solutions.
o Driver to Success - the development and usage of standards is ultimately enlightened self-interest. No company has the perfect solution; some are better at certain aspects than others. By participating in improving standards and driving them forward, both the individual companies and the industry benefit, and the burden is shared.

Enforcement of standards
o Barrier to Success - ADaM is a framework of rules by which to build datasets. If it becomes more prescriptive, who is to decide?

o Driver to Success - development of expert working groups along the model of the SDTM TAUGs. This needs buy-in from companies to work across TAs - this already happens - but key stakeholders (clinicians, statisticians, academics and TA SMEs) need to buy in to define and develop the standards.

Regulatory Requirement
There was a feeling that we need a definitive statement of requirement from the regulatory agencies to create the commitment. Whilst 65% of submissions contain ADaM datasets, the real push to standardise needs the "we would like" to change to "we require".