Kivo News

Webinar: In-Time Implementation of ICH E6 R3 for Good Clinical Practice

Written by Jianna Lieberman | Jul 1, 2025 12:35:35 AM

 

ICH E6 R3 was first released on January 6th, 2025. As of July 23rd, the latest revision of ICH E6 (revision 3) will be effective and widely adopted across ICH member nations. Revision 3 was restructured to give more applicable GCP guidance for the increasing digitization and decentralization of clinical trials. This webinar will cover what you need to know to prepare your organization, including:

 

  • What are the key changes from R2 to R3 that affect my org?

  • What do I need to do right now to align? How do I prioritize?

  • How do I order my operations to effectively implement these changes? 

  • How do I incorporate ICH E8 when applicable?

View the full session or read the transcript below!

 

Full Transcript

Kevin Tate: Thanks so much for joining, everyone. Super excited for today's topic and excited to have our own Sarah Ruiz here to talk us through it.

A few housekeeping notes before we dive in: we will be sharing a recording of the presentation, so keep an eye out for that just after the webinar. There's a lot of material here, and it's a very hot topic. We've got a lot of registrants in a big group here with us, so Sarah's gonna be going through it in some detail. We're gonna be taking a couple of breaks along the way for questions, so please don't be shy with questions. You can throw 'em in the Q&A. You can throw 'em in the chat. I'll see 'em either place, and we'll take a couple of breaks as we go along to answer those. And with that, I think it's time for me to hand it over to Sarah.

Sarah Ruiz: Awesome. Thanks Kevin. And by the way, if we don't get to your questions, we commit to answering them after this webinar as well. Okay, a bit about myself before we get started.

I'm Sarah. I have about 13 years in the clinical research industry. My degree is focused on interdisciplinary studies, which means I had a few areas of focus, all science or health science related. I have focused my career on clinical trial technology since late 2019, early 2020, with a focus on product operations as well as clinical operations. What you don't see on the slide here is that I am also a type one diabetic. That's how I got into clinical research: at the patient level. But that's a little bit about me.

So what are we talking about today? Mainly we're gonna talk about how to take some practical implementation techniques and apply them to your organization, but we're also going to talk about some of the reasons why we have revision three today. What are the key changes? How to align your operations to revision three, how to prioritize those alignment and implementation tasks, and how to order those operations efficiently and effectively. And we're also gonna talk just a little bit about ICH E8.

Okay. So, why revision three? Why this complete revision versus an addendum? Revision three has actually been in draft since late 2019 and was opened for public consultation in 2023. So it's been a long time in this sort of draft state. But it was realized that principles of good practice are not really designed to be one size fits all. They're designed to be flexible.

I have this little quote, literally from the guideline, that I love. It says that E6, along with ICH E8, "encourages thoughtful consideration" (we're gonna be talking a lot about that) "and planning to address specific and potentially unique aspects of an individual clinical trial." It really says a lot in one sentence. But beyond that, it's basically about evolving digital health technologies, DCTs, and new electronic formats that weren't addressed, or couldn't be addressed, at the revision two level. We needed proportionate application of these guidelines, whether in a public health emergency or something like that.

ICH E6 has really always been about this harmony of GCP, right? We want this protection of patient rights and uniformity of data globally, right? This doesn't really change from R2 to R3, but there is much more of an emphasis on participant safety and scientific validity. Those are your top priorities. Making sure that your trial is operationally feasible and scientifically sound. So another couple of quotes - this is actually from E8 - on how to design your trials with quality. Making sure that you're asking the important questions and answering them with the appropriate research and making sure that you have the right data from prior studies to inform your later studies. So it's really not an accident that E8 is emphasized so much in ICH E6 because they go hand in hand.

Between R2 and R3 there are a lot of changes, not just in words, but in philosophy. So R3 focuses on quality trial design, proportionate risk management, and oversight proportionate to your trial. That means sponsor oversight, investigator oversight, you name it.

This is directly from a presentation used to train individuals on R3. So revision 2 was really more of an addendum; we see that in the text itself. It included that addendum to lean into a more efficient approach to clinical trials, and it also updated the standards a little bit for electronic records.

Now, revision three is more grounded in that quality by design principle, involves a lot more critical thinking, which I love. And it utilizes more proportionate risk-based approaches, recognizes that one size does not fit all. And there's obviously that reference and emphasis on ICH E8.

Now, this is more of a subtle shift. These are things that I personally noticed from reading both documents, right? All 86 pages. In revision two, there were 13 principles of GCP. They didn't even span across two full pages. Revision three has 11 principles, which seems shorter, but those 11 principles span across five full pages.

If you don't have time to sit down and read all 86 pages in one sitting (who does?), I would at least focus on those five pages that outline the principles. Not only that, R2 didn't really acknowledge the use of electronic formats and electronic data capture specifically when it came to things like informed consent collection. But revision three does. So it allows you to adapt to new technologies a lot more simply and with more security in mind.

There was specific guidance for CROs in revision two, and in revision three there's a shift in language from CROs to service providers. I couldn't put the graphic in the slide for copyright reasons, but there's this really cool presentation from Barnett International where one of the presenters did a word count comparison, which I really enjoyed. There are several terms that come up more than twice as often in revision three as in revision two, and the top three were risk, metadata, and oversight. The word counts also show a lot more emphasis on things like proportionate, validation, reliable, and fit for purpose.
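
To make the word count idea concrete, here's a minimal Python sketch of a term frequency comparison between two plain-text copies of the guideline. The file names and term list are hypothetical, and this illustrates the technique; it is not the Barnett International analysis itself.

```python
# Minimal sketch: compare how often key terms appear in R2 vs. R3.
# Assumes plain-text copies saved locally (hypothetical file names).
from collections import Counter
import re

def term_counts(path: str) -> Counter:
    """Count lowercase word occurrences in a text file."""
    with open(path, encoding="utf-8") as f:
        return Counter(re.findall(r"[a-z]+", f.read().lower()))

r2 = term_counts("ich_e6_r2.txt")
r3 = term_counts("ich_e6_r3.txt")

# Flag terms that appear more than twice as often in R3 as in R2.
for term in ("risk", "metadata", "oversight", "proportionate", "reliable"):
    if r3[term] > 2 * max(r2[term], 1):
        print(f"{term}: R2={r2[term]}, R3={r3[term]}")
```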

Speaking of this service provider shift, I wanted to zoom in a little bit on this. This is a definition from ICH E6, and if you're a CRO on the call, it might feel a little bit "ouch" to not be included by name, but I actually think this shift in language is a really good thing. Revision three is all about thoughtfulness and critical thinking when it comes to conducting clinical trials. It's succeeded very well in outlining the responsibilities of sponsors, investigators, and IRBs, and in providing data governance guidelines for each. And it allows that shift of responsibility, that ownership of the data. Now, delegation of responsibilities is absolutely still appropriate, but as a site or a sponsor, it's important to ensure that that delegation is properly documented. There's proper management, proper oversight, all that stuff.

I think this is a good place to pause for questions. It gives me a little bit of time to breathe, and we can see if anything has popped up at this point that we can answer. We wanna make this collaborative.

Kevin Tate: Yeah, it's perfect. Thanks Sarah. And thanks for that intro and overview. And yes, don't hesitate to keep the questions coming.

It seems like part of what we're seeing with R3 is it casts a bit of a wider net in terms of who falls under the umbrella of the guidelines, and so whether you're a CRO or another of these service providers... question about oversight. How best to maintain sponsor oversight or oversight for others who may need that visibility?

Sarah Ruiz: It's a good question. I think there are some obvious techniques that a lot of you already use: weekly meetings, meeting minutes, decision trackers. But this idea of delegation is not a "okay, we did it, we documented it, we're done." Constant review of who is supposed to be doing what, where those responsibilities lie, and proper documentation behind the decision making is really key for oversight, because trials last a long time. It could be a year, it could be five years, it could be five-plus years, depending on the indication and the phase and all that good stuff. It can be really hard to reconstruct, even by diving into meeting minutes, why a decision was made, and when, and by whom. So it's all about proper documentation and having regular, fit for purpose meetings.

If you have a meeting template that's like, "all right, we're gonna go over this, and this", and only two of those things are relevant at that part of the study, focus on the relevant things. Have some sign-offs when big decisions are made, as far as vendors are concerned, or how data will be handled, things like that. I think it's a combination of bringing the right people together at the right times, documenting those things and then following up. That's really it.

Kevin Tate: Makes a lot of sense. And to your point around consistency, these trials do take a long time. People and even providers may come and go, and yet you need to show there was consistency, and oversight's a big piece of that. One question also came in around metadata. What's your opinion on the description or the use of metadata as reflected here in R3?

Sarah Ruiz: Yeah, that's a great question. I'm a nerd for metadata, if I'm being honest. There are so many real life uses for metadata. It's not just data to describe your data, right? It can be very useful as filter criteria, for a completeness overview, for targeting multiple different data points in one question.

I think there's a lot of room for metadata, whether it's in your TMF, your EDC, or your IRT. The idea behind metadata, personally, is that you have very hardcoded points throughout your study to be able to assess its progress. I've got all of my study milestones from the TMF reference model programmed into all these documents, so I can filter on those milestones and see what I'm missing. That's where I think metadata really holds the key to efficiency and compliance: when you can use it throughout the conduct of your study, not just at the end.
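
To picture that, here's a minimal sketch of milestone-driven metadata filtering, assuming each TMF document carries a milestone tag from the TMF reference model. The records and field names are hypothetical, not a real TMF schema.

```python
# Minimal sketch: use document metadata to spot gaps at a study milestone.
documents = [
    {"name": "Protocol v1.0",    "milestone": "Study Startup", "status": "final"},
    {"name": "Monitoring Plan",  "milestone": "Study Startup", "status": "draft"},
    {"name": "Interim Analysis", "milestone": "Study Conduct", "status": None},
]

def not_finalized(docs: list[dict], milestone: str) -> list[str]:
    """Return documents tagged to a milestone that aren't final yet."""
    return [d["name"] for d in docs
            if d["milestone"] == milestone and d["status"] != "final"]

print(not_finalized(documents, "Study Startup"))  # -> ['Monitoring Plan']
```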

Kevin Tate: Perfect. Thank you. All right, with that, let's keep cooking.

Sarah Ruiz: Moving right along. All right, how do you align your operations to R3? It's really about taking your one size fits all approach to your operations and making it a little bit more bespoke, making it more custom, applying that critical thinking that we all have and love.

In my terms, what does this really mean? I'm a huge fan of understanding the philosophy behind something, not just the words. I think it really gives the words weight. Above all, when you're starting a trial, be proactive. Don't assume things aren't going to go wrong. They will. We all know that; it's just part of life. So ask what could go wrong and how you're going to mitigate it, right? It's all about those critical to quality factors. Be proactive, not reactive.

What I'm trying to say here is don't rush your first patient, first visit. It's not worth it. You wanna make sure that all your ducks are in a row, your i's are dotted, and your t's are crossed before you rush into a first patient, first visit. If your service providers are saying, "yeah, we can get it done in X amount of time, no problem," and they haven't identified the risks? You're probably not going to hit that target date.

Recruitment is one of those big challenges that we face every day, all day. So ensure that everything is laid out and that your quality trial design focuses on the full data lifecycle. I call it A to Z thinking, not A to B: your next goal shouldn't be "what's next?" It should be "how do we have a successful trial?"

Understanding the scientific method, that just means making sure that you're asking the right questions, hypothesizing the right ways to solve those or answer those questions, and then laying out the right experiment to test that hypothesis, sharing your results, all that good stuff. Get enough input from your qualified parties, from your principal scientists that ran the preclinical studies, your biostatisticians.

I want to zoom in on biostats here. Beyond it literally being stated in the guideline, you should have some sort of statistical analysis involved.

I think everyone wants to add a lot of endpoints to their clinical trial. It's natural. A biostats person will be able to curb your enthusiasm there, because when you hear a statistician say, "okay, but there's really no way to power that endpoint with the patient population, with the visit schedules, with the procedures, you name it," then it's not worth having that endpoint, just to be frank. The reason it's not worth it is that you're not gonna prove anything, and you're only going to narrow down your inclusion/exclusion criteria, which will make recruitment harder, which will make your study go longer, etc., all the way to being two years out from your targeted end of recruitment date.

Just make sure that you involve the right people. Doctors and specialists, obviously, clinical operations specialists.

If I can zoom in a little bit on that as well: if someone says, "yeah, I've used that vendor before," or, "yeah, I've done this before, it didn't work out the way we expected, and here's why," those kinds of people are really good to have on your team.

Of course, patients in the community. And when I say patients, I mean the people that have the indication that you're testing, not just a random pool of people. And community: if you're targeting a specific demographic, or a specific country or state, or you name it, it's really good to get those people involved for those two top priorities, validity and protection.

And of course I'm biased, but bonus points if you find a generalist with a background in interdisciplinary studies. I like the phrase (you learn it in all your capstone and cornerstone courses) that multidisciplinary is different from interdisciplinary. A multidisciplinarian will take a bunch of fruit and make a fruit salad. An interdisciplinarian will take a bunch of fruit and make a smoothie. So we want a really nice, easy to swallow smoothie at the end of the day.

Let's talk about this not being one size fits all and how to operationalize that. I think we all think of SOPs as, for lack of a better term, your Bible, and that's not wrong. But really, SOPs should inform the approach to how you discuss, plan, and finalize your study specific strategies.

So I put an example here. I pulled these little blurbs of text directly from my brain, so it's flawed. But an example of a standard operating procedure is saying, prior to the conduct of the trial, these are the plans we make to address these GCP principles: this is when, this is how we review, this is who we involve, all that stuff. A study specific plan or strategy needs to be fit for purpose for that trial. You want to make sure that you're planning for the risks. You want to say, "Hey, we identified these critical to quality factors for this trial. These are the foreseen risks, and these are the mitigations of those risks." List them out, have everybody sign it. That's part of your study startup, or it should be. We'll dive a little bit deeper into this as we go.

As a person living in this space for the last several years, this was such a welcome section from my point of view. So just a couple of things I want to point out from this excerpt. I'm not gonna read it line for line, but it's important to grasp this.

There are eight defined data elements in revision three covering the full data lifecycle. Each element has at least some subtext, so it's a lot of pages, but I think it's a very good idea to make sure that your relevant SOPs, your data management plans, your data transfer agreements, and any other study specific documentation that addresses these elements in the data lifecycle are now aligned with R3. The good news here is that if all of the systems you use to manage the conduct of your clinical trial are already Part 11 compliant, you're probably not facing many changes in aligning with R3.

The big emphasis here is that "fit for purpose" phrase. In layman's terms, fit for purpose just means that the system was developed with a very specific purpose in mind. In technical terms, that means that there are documented user roles, there's documented requirements, quality tests are executed, validation is completed, and not just validation against the requirements, but also against the regulations and the compliance needs as well.
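
One way to picture that documentation trail is a simple traceability record tying requirements to user roles, quality tests, and validation evidence. This is a hypothetical sketch of the idea, not a prescribed format.

```python
# Minimal sketch: a requirements traceability record (hypothetical fields).
requirements = {
    "REQ-001": {
        "description": "Only the monitor role may lock a visit record",
        "user_roles": ["monitor"],           # documented user roles
        "quality_tests": ["TC-014"],         # executed quality tests
        "validated": True,                   # validated against the requirement
        "compliance": ["21 CFR Part 11"],    # and against the regulation
    },
}

def unvalidated(reqs: dict) -> list[str]:
    """Flag requirements that lack passing validation evidence."""
    return [rid for rid, r in reqs.items() if not r["validated"]]

print(unvalidated(requirements))  # -> []
```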

What I'm trying to say is that Excel, SharePoint, those kind of examples, they're not fit for purpose, unfortunately. They're cheap, they're easy. Everybody knows how to use them. But they don't have the necessary components to be considered fit for purpose.

Really, it's all about documenting the intended use of the system. So if you're using a vendor system, or you are a vendor, it's possible that the system serves several research functions and, as a user, you're only using one feature. It's good to document that. It's really about mitigating misuse of the system more than anything else, which will help you a lot when it comes to an audit.

Speaking to the vendors on the call, user research goes a really long way for good clinical practice. E6 specifically recommends that representatives of the intended patient populations and healthcare professionals are involved in the design of a system. So this obviously applies to things like ePRO and e-Consent, but it also applies to site- and sponsor-facing systems like EDC, IRT, TMF, ISF, those fun acronyms. And coming from the vendor side for the last several years, nothing is more disheartening than to hear "this isn't intuitive" or "I have no idea." You can mitigate that risk with user research.

Okay. I'm also gonna pause for questions, just so we don't get bogged down.

Kevin Tate: That's perfect. That's perfect. And a couple of other big themes there.

One that comes to mind is this idea of showing your work. As you come up with SOPs and study plans, document that, show that work, be ready to show what you considered, what you selected, and how it was codified. And then, to the slide just before this one, as you assign, delegate, and share responsibility... shared responsibility is always a tricky one when you're talking about compliance. So show that work and make sure that's captured as well.

The other big theme, and I wanted to ask you about this one, is fit for purpose. Obviously something we think about a lot at Kivo, especially as we often work with teams that are coming from using SharePoint or Excel or other systems to track things. So just to make sure I'm clear: if we have systems that we're using not for study data or records, but just for study tasks or study management, to what extent do those need to be fit for purpose and validated?

Sarah Ruiz: That's a great question, and it's a little bit of a gray area. You might disagree with my opinion here. Since there is such an emphasis on showing your work and on critical thinking, you shouldn't just focus on what's gonna end up in your clinical study report; it's important to track how your decisions are made. For that reason, I think a simple task tracker is fine if you're literally just drafting a plan; use whatever you want, right? But if you're using a shared system to run your meetings, to document your decisions, to do those things, and it doesn't have the key components of an audit trail and user permissions and security and things like that... I would be wary of that. I would consider a shift to something that's a little bit more fit for purpose. Because when you're talking about data governance, it's just as much about how you reached those decisions as anything else. So personally, yes: anything that shows you did something when you were supposed to do it, or shows collaboration, that says, "Hey, we had all of the involved parties contribute to this document," that decision making tool should be fit for purpose.
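
For illustration, here's a minimal sketch of an audit-trailed decision record: an append-only log capturing who decided, who contributed, and when. The fields are hypothetical; a real fit for purpose system would also enforce user permissions and tamper evidence.

```python
# Minimal sketch: an append-only decision log (hypothetical fields).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)          # frozen: records can't be edited after the fact
class DecisionRecord:
    decision: str
    rationale: str
    made_by: str
    contributors: tuple          # everyone who weighed in
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

log: list[DecisionRecord] = []   # append-only: never delete or rewrite entries

log.append(DecisionRecord(
    decision="Selected Vendor X for central lab services",
    rationale="Prior performance in similar Phase I studies",
    made_by="clin-ops-lead",
    contributors=("sponsor-pm", "biostatistician", "qa-manager"),
))
```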

Kevin Tate: Thank you. That makes a ton of sense.

Sarah Ruiz: All right let's keep cooking along.

So the next thing is prioritizing and ordering the steps and tasks, which I haven't even mentioned yet, to successfully implement R3. I really believe that in order to get to a point where you're super efficient and speedy at your work, you first have to have a process. You have to become proficient at said process, and then you can get faster as you go. This is also known as "slow is smooth, smooth is fast." I really think it applies, because it's most used in high-stakes situations that require a lot of juggling and a lot of collaboration.

So how do we get there? You take this step by step and you bring your entire organization with you. That's what I mean by stepping together. I also hope that very few changes are actually required to your current processes.

Let's say that you're just starting out, or you are very inspired, as I am, by the new revision and you decide to make some big changes. This is what I would recommend. And before I dive in here, unfortunately, you know, it's June 30th and the effective date of R3 is July 23rd.

Now, do I think you could blitz all of these changes and, from a document perspective, be ready for R3 by July 23rd? Totally. Do I think you'd be missing the point? Yes.

If you haven't started yet, it's not the end of the world, but I wouldn't rush it. It's understandable and it's expected that quality and updating those quality practices takes time. It takes intention. It takes a lot of change control. And I can hear the virtual applause from the quality people in the background, but when I say change control, I don't only mean the literal documentation of change. Help people adjust to the change. We're all human. A lot of us have years of experience under our belt, and that means years of doing something a very specific way. Incorporating change into a specific way that you do something isn't bad but it does take time, right? And energy and a lot of oversight.

Understand that as an organization, getting everything done and everybody on the same page is going to take time, and that's just the way it is. Also, a document trail or an audit trail that starts and ends in a matter of couple weeks doesn't really scream quality. What does scream quality is a very well-thought out, clearly and concisely documented plan on what sections of ICH E6 R3 apply to your org, not only E6, but E8. How you plan to update those practices, how you plan to train your staff, when you plan to do it, and all the follow-up documentation that goes along with documenting that training, documenting those steps. 

Very first thing, make sure you plan with the respective parties how you're going to implement R3 and document that planning. If you don't get anything else done by July 23rd, make sure you get that done. A quality improvement plan, just for lack of a better term, goes a really long way in the face of an audit. It really does.

What I recommend you include in that plan is steps for training your staff. Obviously, that should be your first step: train your staff on revision three. Then train your staff, or at a minimum the staff actually involved in study design, on E8 revision one. Whether you're a CRO that offers that kind of service, a sponsor, or even a site running an investigator initiated study, you never know, but getting some E8 revision one training documented is definitely key.

I would then run a comparison gap analysis on your current SOPs against the new revision. Do they still hit those principles? Are there gaps essentially? Then you can re-review and update your document templates.

So whether it's sponsors or CROs or some other entity in clinical research, you're gonna have templates that have already been approved by quality to address study specific requirements. Things like risk management plan, quality management plan, your monitoring, your data management, all of those things. I would just make sure that they are built to address study specific requirements. I think this is especially important for service providers or entities that run a lot of different indications and a lot of different phases. Your risk management plan for a phase one ophthalmology study is not gonna look like a phase three dermatology study, for example.

Once you've done that, you can ensure that all your SOPs actually support one another and don't contradict each other. That list gets kinda long, and contradictions are easy to introduce. And then of course you can train all your staff on the updated curriculum: your updated SOPs, your updated strategies, all that good stuff.

Next, I would review the use and compliance of all your computerized systems. If they're trial related or, as Kevin and I dived into during that last question, if they're used to track decisions based on your clinical trial, just make sure that they're fit for purpose and compliant in whatever regulation that they're touching.

My own sense of humor here. But here's a good example of what I mean by a gap analysis or using a matrix. This is more of a gap analysis than a decision matrix, but these are such good tools to use. They help you clearly and concisely document decision making, show your work, and remove bias. And there are so many ways to reorganize this and employ it for your purpose. Here's just one example: if you're wondering, "how do I even go about reviewing my SOPs?", I'd start here. And then, of course, corrective action plans to address and close those gaps.
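
As a hypothetical sketch of that kind of matrix, each row can tie an SOP to an R3 section, a finding, a corrective action, and a status. The SOP names and section references below are illustrative only, not real citations.

```python
# Minimal sketch: a gap analysis matrix for SOP review against R3.
gap_matrix = [
    {"sop": "SOP-DM-002 Data Management",
     "r3_section": "Data governance principle",
     "gap": "No documented data lifecycle responsibilities",
     "capa": "Revise SOP to assign ownership per lifecycle element",
     "status": "open"},
    {"sop": "SOP-QA-001 Quality Management",
     "r3_section": "Quality by design principle",
     "gap": None, "capa": None, "status": "closed"},
]

# Corrective action plans address and close the open gaps.
for row in (r for r in gap_matrix if r["status"] == "open"):
    print(f'{row["sop"]}: {row["gap"]} -> {row["capa"]}')
```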

I personally would start with those non-study specific processes. Make sure that they're documented in alignment with R3 and followed. You can then identify the risks of delayed ICH E6 implementation for your active studies, so that you can prioritize how you're going to put this into action for those studies. Prioritize the update of study specific strategies based on the risks you see and those dependencies.

Based on those risks that you identified, make sure that all your controlled documents and study specific documents are in some sort of compliant and secure fit for purpose quality management system, document management system. Those audit trails are very important. The way that we make sure the right people are touching the right things is very important.

Create a track for periodic internal audits that address those things, so it's this nice feedback loop, right? Are your study specific plans still fit for purpose? Do your SOPs need to be adjusted due to evolving technologies? And I haven't even mentioned Annex 2, which isn't out yet, but a lot of Annex 2 will build on what Annex 1 emphasizes, with a little bit more guidance. So we should all be very excited about that.

Then of course, computerized systems. Are there gaps in your validation process? Have new needs arisen out of your system that are not being met? Things like that. It's a lot to take in. I think starting from the beginning and trying to imagine what it looks like at the end will help fill in those gaps.

Okay. I think we're gonna move right along, and at the end we'll have our final little question block. There are gonna be so many words on the next few slides, and I apologize for that, so we're not gonna read every single slide. But ICH E8 really does support and provide so much clarity to revision three of ICH E6. Personally, I think it's actually wise to read this document before you tackle R3. It's super digestible, it's very well laid out, it's fewer pages: everything that you hope for, honestly. And it also defines all the different types of clinical studies very well. So it's helpful if you're new to clinical research or if you need guidance on valid next steps for your pipeline. For example, do you need more exploratory studies before you have enough data to inform the design of a confirmatory study? Things like that.

I'm gonna go through each of these actually and translate them a little bit. These are the literal defined approaches to identifying critical to quality factors in ICH E8, revision one.

Establishing a culture that supports dialogue: basically, transparency between responsible parties. Make sure that you aren't excluding people that could actually have a lot of important input into the critical to quality factors of your trial, whether that's in the design, in the conduct, anything like that. Focusing on activities essential to the study goes all the way back to the first couple of slides. Is it essential for maintaining the safety of your participants or the scientific validity and reliability of the results? If yes, it should be considered essential. I'm probably saying everything that you already know, but just in case.

Engaging stakeholders in study design. Very similar to the first point, but it also encourages input from outside subject matter experts. Maybe you have a key opinion leader that you desperately want to do your trial, but they're too busy. It's not feasible for them to run it. They can still give input to design. It's still a subject matter expert that has valid input.

Reviewing critical to quality factors. It's a little bit cyclical there, but the way I interpret this is not only an initial review, but a constant, holistic lessons learned, a very project management approach to adequately manage the risk of your trial on an ongoing basis. So don't just say, "Hey, these are our factors" at the beginning of the study and then forget about them, because that also misses the point of good clinical practice. And last but not least: are those critical to quality factors that you have identified feasible in operational practice? It's a little bit of a feasibility assessment. We've identified all these factors; is it actually feasible to operationalize them in real life? A 4:00 AM blood draw? Something to discuss, right?


So these are a direct copy and paste from ICH E8. If you don't gather anything else from that, I think it's important to gather this. A couple of things I take from this little bit: don't put the cart before the horse. It's understandable that, for cost effectiveness and efficiency and just teeing yourself up for success, you want to go ahead and start designing your phase two study while you're still in the preclinical stage. Maybe. But don't. You simply don't have enough information yet, and data should drive all of your decisions, one way or the other. I think it's very important that you do the steps as they're proven to work in the scientific method.

Making sure that your ethics are a priority is also very important to the things that are critical to quality, the quality design of the study, all of those things. Are you going to be collecting enough data, with minimal risk, to support whatever your objective is? Is it proving that something's more effective than a comparator? Is it proving that it's safe? Just remembering what your study objective is, in general, is very important.

Only two more slides and then we're done, obviously. I really have a bone to pick with eligibility criteria sometimes when it comes down to operational feasibility. If you're adding things that are not reflective of the study objective, you're shooting yourself in the foot a little bit. As long as you still have the right independent, dependent, and control variables, that should be enough. That's why input is so important from various groups, 'cause they're all gonna have something to say, and you need to be able to glean from that what is important.

Last but not least, the SAP, the statistical analysis plan, is pre-specified and defines the analysis. This goes back to what I was saying about not rushing your first patient, first visit. It's not going to do you justice. It's more important that by the end of the study you have really clear results that can't be misinterpreted than to say, "Oh yeah, we hit our target." Hot take, maybe, but I really believe in that. I believe in proper planning and taking your time where it's necessary.

Then monitoring: make sure it's tailored. We have this whole concept of risk-based monitoring, and we've had it for a really long time, and I'm still waiting for it to be applied more broadly. A hundred percent QC can be valid, but I don't think it usually is. And when I say a hundred percent QC, I mean of the data itself. Do you need to know every little thing? Possibly. But if it's not critical to the study data and the statistical analysis, then maybe it doesn't need to be that 100%. It should be risk-based, and you can make determinations on that through proper planning.

The last step of the scientific method is reporting: the reporting of the study results should be not only planned, comprehensive, accurate, and timely, but publicly accessible. Share your results, people. It's what makes the world go round.

Kevin Tate: That was great. Thank you so much, Sarah. So much great content and so many good takeaways there. I'm gonna share a picture for a minute to return to this theme of showing your work and a fit for purpose system, because it's something that we've really tried to align with for our customers.

A lot of times, our customers are coming from more of a do it yourself approach, and with these new guidelines, they're looking for ways to take those familiar activities and move them into a more fit for purpose system. So I'm gonna use Kivo here as the example, but really, whatever your fit for purpose approach, the way we look at it is starting with document control: making sure all the key documents for your study, your operations, and your team have a workflow, have an audit trail, and have Part 11 compliance where required. Then, on top of that document control and workflow, implement a process tracking capability that is gonna be consistent.

This is often the stuff that's living in maybe Smartsheet or Excel, or maybe post-it notes. But finding an easy way to track things means that, one, they get done, and two, you can show your work. Once you have those two, it becomes a lot easier to create projects and reports that make all of that more clear, back to your themes of oversight and being able to consistently show that the processes were followed over what might be a long study.

The way we put those building blocks together at Kivo is consistent across these different areas. So we have a way of doing that in quality, regulatory, and clinical. Many of you are using Kivo today, but for those that aren't, this is also, I think, a template you can take to other fit for purpose systems: think about those building blocks as a way to create that compliance and visibility.

Kevin Tate: We've had a few more questions come in that I wanna make sure we get to here, Sarah. A lot of material here. A lot maybe to train the team on. So what advice would you have for where training needs to take place for a team, and what are acceptable timelines for that training?

Sarah Ruiz: Yeah, that's a great question. I think training should happen as soon as possible. Ideally you want to have your team trained on at least the document itself of R3 before the effective date. I think it's part of bringing the organization with you as part of your planning. So I would say try to get your team trained in the next couple of weeks on R3.

There are a couple of different ways to do this. The ICH website itself has presentations specifically designed to train that are a lot more concise. I think CITI is getting their ducks in a row there too, if they haven't already, for updated good clinical practice training. I think a course, a presentation, and a group effort are a lot more effective than "hey, read this document and sign off that you read it," simply because we in the clinical research world are expert jugglers, and it's a lot easier to just sign off and say you read something than to actually take it to heart.

My big takeaway from literally reading it cover to cover was that critical thinking and that philosophy. So it's really important that people take this to heart. I would make it a priority. I would present it in your all-hands if you have one. I would present the plans, I would present why it's important, and then I would go about the training itself. I know that if I don't know why I'm doing something, I'm either not going to do it or I'm not gonna remember it. So yeah, I think presenting the philosophy first, training on the content second, and doing that before July 23rd is really important. And I'm hoping that answers the question.

Kevin Tate: Yeah, that makes a ton of sense. And it gets to the next question: July 23rd is when this goes into effect in the US. Other than the things you just mentioned around sharing the philosophy and the training and making sure it's a corporate wide initiative, is there anything specific that teams should do to document the fact that they are recognizing R3 and implementing it? Any documentation requirements there?

Sarah Ruiz: The good thing about ICH is that it's a guideline, not a regulation. It's meant to harmonize good clinical practice across the world, right? I don't think of the effective date as "you are going to be in trouble if you are not using ICH E6 R3." It's expected that it takes time. As far as it being implemented, obviously July 23rd is the effective date. But as long as you have a plan, and by a plan I mean something written down, something reviewed, something signed off on that says, "Hey, we acknowledge that there's a shift in R3, this is our first revision, and here's our interpretation and how it applies to our practice." Because you could be a vendor, a CRO, a sponsor, or a site, and it's gonna shift, right? That's the whole point: that shift to critical thinking versus a one size fits all approach. So as long as you have that plan, you can then go about executing it as scheduled tasks laid out in the plan, and then you follow up and keep documenting. I don't think you can go wrong there.

Kevin Tate: That's perfect. Thank you Sarah, so much. Thanks to everyone who joined us for the webinar. Thank you for the questions. As mentioned, we'll send out a recording as soon as that's ready and we'll see you at the next one. Bye!

Sarah Ruiz: Thank you.