Adaptive strategies for more efficient, data-rich and patient-friendly trials

Nate Akers:

Good morning, everybody. Good afternoon. Thank you so much for joining today. My name is Nate Akers, senior vice president, Parexel Biotech. Very much looking forward to our discussion today. Before we get into the discussion, just a few short housekeeping items. The audience is muted. To ask a question, please type it into the box labeled Submit a question. The format for today's webinar is that the panel will be going for about 30 to 40 minutes, and then we'll open up to questions at the end. The questions will be visible to me and the panel, so we'll field those throughout and then pose them during the Q&A.

To set the stage for today's discussion, in December of last year, we conducted a survey focused on innovation readiness in biotech. The survey focused on three areas of innovation: decentralized clinical trials, synthetic control arms, and adaptive design trials. We presented the findings at JP Morgan in January and subsequently launched a three-part webinar series to discuss the findings from each area. Today's webinar, which is the last in the series, focuses on adaptive trials.

To discuss the findings from the survey, we're thrilled to have two wonderful subject matter experts in the space with us today, Martin Roessner and Ned Wydysh. I will allow our panelists to introduce themselves. Martin.

Martin Roessner:

Thank you, Nate. This is Martin Roessner. I'm a corporate vice president, Biostatistics, and I work at Parexel. In my role as a subject matter expert, I provide input into trial designs, and that is what we want to talk about today: adaptive trials. I have a 40-year career in the pharmaceutical industry, across all phases of development, so I'm happy to share some of my experiences, particularly on adaptive designs, with you today.

Nate Akers:

Great, thanks, Martin. And Ned?

Ned Wydysh:

Hi, everyone. I'm Ned Wydysh, vice president at Health Advances. Health Advances is the healthcare strategy consulting arm within Parexel. So, really our goal is to work with clients to maximize the value of their technologies, their portfolios, and their organizations. I'm a scientist by training. I have a PhD in organic and medicinal chemistry from Johns Hopkins. I'm one of the leaders in our biopharma practice. I also co-lead our oncology practice, as well as our cell and gene therapy practice.

So much of my time is really spent working with early-stage companies thinking about how to maximize efficiency within their organization, hit near-term value inflection points, and optimize indication selection for their promising technologies.

Nate Akers:

Great. All right. Hi, gentlemen. Shall we dive in?

Ned Wydysh:

Absolutely. Let's just start and dive right into the data. As Nate indicated, this was a survey conducted with industry executives at the end of 2022. One of the first questions we asked them was really what motivates them and their organizations to enter into adaptive trial design. I think not surprisingly, we see accelerating clinical development timelines, reducing cost and reducing risk as the major motivators for engaging in adaptive trials. I think there is very widespread recognition of those benefits of adaptive designs, but we also do see some recognition of some of those other potential benefits, particularly around improving data quality and improving patient recruitment and retention.

This is great to see. There is widespread recognition of a lot of the potential benefits of adaptive trials.

Nate Akers:

Yeah, really interesting data for sure. Hard to believe, but we're already over halfway through May, and I know the survey was conducted in December. Just curious, in the past five and a half months, in your work with clients, have you seen any changing trends in terms of how clients' needs are aligning to this, or what's your perspective on that?

Ned Wydysh:

Yeah, great question, Nate. What we've seen in the last five and a half months is really a continuation of what we've seen for the last year, maybe year and a half: driven largely by the economic environment, there has been a greater emphasis placed on reducing costs and improving operating efficiencies. At the same time, we've seen fairly high-profile instances of pharma deprioritizing early-stage development, particularly early-stage and preclinical programs that have less supporting clinical data to de-risk subsequent investment.

I'd actually argue that in the current funding-constrained environment, it's more important to be able to efficiently produce promising data packages for a range of stakeholders, including investors and potential partners, than it was a year ago, or even six months ago. Early-stage companies in particular are put in the position where they have to maximize the probability of success of their initial shots on goal, really make sure they're focused on the right patient populations and the right trial design, but they also have to get there very quickly and inexpensively. Adaptive trials are great facilitators of those goals.

Nate Akers:

Yeah, that's a really interesting perspective. Something that stuck out to me from the survey, but wasn't particularly surprising, was that of the three areas, decentralized trials, synthetic control arms, and adaptive trials, respondents had the most experience with adaptive trials. The number was around 61%. But it raises a question. As a CRO, obviously, we see protocols that come in in a lot of different states. Martin, in your experience working with clients, would you say you're seeing protocols come in as adaptive designs in the first instance? Or are you seeing more traditional designs that then, upon further review, have an opportunity to be implemented in an adaptive manner?

Martin Roessner:

We probably see the same trend. I will say the majority still is not adaptive, but we are able to really communicate with our sponsors and potentially propose something in this direction. In some instances, it's not going to change; it will still be a fixed design, and the company needs to be comfortable with doing it. But in some instances, we do see a shift, and we are able to move this into a more optimal design and apply adaptive approaches.

Nate Akers:

Yeah, great. That's helpful. 

Ned Wydysh:

Great. So then, to keep moving through the survey data, we also asked executives in which therapeutic areas adaptive trials are well-positioned and where they think they are best suited. This was pretty telling in that almost 40% of our respondents actually said all therapeutic areas; these designs have value regardless of indication, regardless of therapeutic area. A high percentage of respondents, almost half, indicated oncology. But then, we do see a very long tail across essentially all other therapeutic areas: immunology, infectious disease, CNS, hematology, and all the way through other therapeutic areas.

This is pretty consistent with our belief that adaptive trials do have benefits regardless of the therapeutic area. I think it's encouraging to see that recognition by a lot of the survey respondents.

Nate Akers:

Just from a market dynamics perspective, it's not necessarily surprising that oncology stands out, but it is somewhat surprising to see how much oncology stands out against the other therapeutic areas. I don't think that's necessarily indicative of what the volumes and data look like in terms of trials that are being run. Why do you think oncology stands out so much?

Ned Wydysh:

I think there are a couple things at play here. The first is that the sample size for the survey, which was targeted toward senior executives and biopharma leaders, isn't massive: 33 executives. We know the overall pipeline is pretty heavily weighted toward oncology, and those experts are just more familiar with oncology assets and oncology trial designs; that's where they live their professional lives. So I think it is, to some extent, a reflection of the overall pipeline being weighted toward oncology.

I think another component at play here is that we do have highly validated surrogate endpoints, like response rate, in oncology that allow for greater confidence at an interim analysis that those initial signals will represent improvements in progression-free survival and overall survival. There's greater confidence with early decision-making on the basis of what is seen with surrogate endpoints in oncology relative to some of those other therapeutic areas.

I think the main reason, though, really might be that so many mechanisms being tested in oncology are potentially relevant in a wide range of malignancies. We see a lot of phase one/two studies in a handful of different solid tumor types. We see a lot of basket studies, where a targeted therapy is studied in a wide range of histologies, and also umbrella trials, where different combinations are studied in different molecularly defined subpopulations, just because those mechanisms are widely applicable across different tumor types and malignancies.

And then, I think the last piece is that oncology is so combinatorial in its treatment approaches. It's always about an add-on. It's rarely a situation where you are purely testing a monotherapy alone; it's typically an addition to existing standard of care. And that added complexity definitely lends oncology trials to adaptive trial designs maybe more than some of these other therapeutic areas.

Nate Akers:

Yeah, great points. A question for Martin, and Ned, feel free to tack on too. There's a large discrepancy here, obviously. I wonder, is there an opportunity here for these other therapeutic areas? Are adaptive designs underutilized in them? Martin, from the protocols you review, are there any specific therapeutic areas that stand out to you as having more opportunity for adaptive designs that may be underutilized?

Martin Roessner:

Absolutely. Before I answer that question in more detail, let me add one particular point on oncology. Why is oncology so large? One small reason, from my perspective, is that this is an area of such unmet medical need; patients die. That is one of the reasons why statisticians and trialists are getting creative in developing these adaptive approaches. That helps, obviously, particularly when there is such an unmet medical need.

But coming back to your question about what is happening in other therapeutic areas, I see quite a bit of uptake of adaptive designs in areas like the autoimmune space, infectious disease, and maybe even neurology and psychiatry. Now that we have a good armamentarium of adaptive approaches, we come to the conclusion that, for example, in autoimmune disease you have psoriasis, you have psoriatic arthritis, you have ankylosing spondylitis, and you see some compounds that span them. Maybe one of the best-selling compounds ever was Humira.

It took 15 years to develop 15 different indications. The mode of action is there. Because the mode of action can really treat several diseases, why not apply adaptive approaches right from the start? We see it branching out now. We are offering those approaches for other disease areas as well, and it is picking up. People are happy to really engage in those discussions about where it is appropriate to apply adaptations.

Nate Akers:

In the Humira example you just outlined, a good potential design would be a basket trial?

Martin Roessner:

Absolutely.

Nate Akers:

Interesting. It'll be really interesting to look at this data again in a year or two. I think the question asked of respondents was where people see value. I think there's an opportunity here in the market to help make people aware that it's not just oncology where you can leverage adaptive design.

Ned Wydysh:

Absolutely. Completely agree. What we have seen from the survey data is that there is widespread recognition of the potential value of adaptive trials in really getting to efficient answers around the best therapy-indication pairs, building broader clinical datasets, understanding the right indication, understanding the right therapy for that indication or that subpopulation. And of course, there are challenges associated with the trial design and operationalizing adaptive trials. Martin will speak much more eloquently about those than I can, but I do want to emphasize that that's only one piece of the challenge.

After you get the data from the interim readout or from an adaptive trial, you have to make a decision on the best way to proceed. This is actually where we see a lot of clients struggle, because the datasets usually aren't ultra-clear. It would be an amazing situation if we're running a study of one targeted therapy across, let's say, four solid tumors, we have a very high response rate in one of those tumor types and nothing in the other three, and we know, "Okay, that indication is identified, that's what we'll go forward with."

But of course, in reality the data packages fall in much grayer areas. Companies need to plan ahead and think about what the likely data packages and scenarios will mean for their decision-making. Really, what this means is doing some early research with the wide range of stakeholders that are going to interact with the data. This includes clinicians, it definitely includes regulators, and it includes potential partners if we're thinking about out-licensing or collaborating on subsequent development. It certainly includes patients.

Ultimately, if the sponsor doesn't really know how they're going to take those data forward, how they're going to be as efficient as possible in making the decisions that will come out of the adaptive trial data, that negates a lot of the advantages around efficiencies gained with adaptive trials.

We really want to encourage greater pre-work and thoughtfulness around the likely data packages, so companies can be as efficient as possible in the next steps of decision-making based on the data that comes out of those adaptive trials.

Nate Akers:

Just a question based on that: in your experience, are you seeing sponsors do that pre-work in an appropriate timeframe? Or maybe you can answer this in reverse. How early is early? Let's say it's a phase two/three adaptive trial. When do you feel sponsors should really be doing that pre-work?

Ned Wydysh:

Yeah, great question. We at Health Advances are seeing a pretty broad mix. We are seeing some of our clients plan very early, at the earliest clinical stages, designing their initial clinical trials around: what are the data packages that are going to make clinicians excited to enroll their patients in subsequent studies? Then eventually, what data packages, what PFS, what OS, what response rates are going to encourage those clinicians to use the therapy if it's approved? They're also engaging regulators at those early stages to understand what would constitute sufficient evidence to proceed into the next stage. So, some of our clients are thinking about this very early.

Some of them think a little bit about it at an early stage, but don't truly internalize what it means for decision-making: is this a partnership indication for us if we don't hit this bar? Is this an internal development program if we hit this response rate? They aren't really thinking through the strategic complexities and implications of those datasets. We definitely do see a range there, but we are starting to see our clients become more thoughtful and a little more conscious of how critical it is to plan early with a broader range of stakeholders.

But it's not just clinicians and it is not just regulators, especially as a lot of these companies are having to prioritize lead programs and deprioritize follow-on indications.

Nate Akers:

Plan early, plan often. That's the message.

Ned Wydysh:

That is the name of the game. Absolutely.

Nate Akers:

Maybe let's just pivot to talking a little bit about operationalizing these trials. Martin, I'm curious, from your perspective, what's the main mistake you see sponsors make when trying to operationalize adaptive design trials?

Martin Roessner:

I think it's probably what Ned already said. The planning piece is really critical, and planning means that you think through the details. You think through how the data are coming in to begin with, whether they're coming in in a timely fashion so that you can make decisions, and you also play through scenarios so that you are prepared. I'll give you one example, which was quite eye-opening. There was a trial where an adaptive design was planned with a so-called sample size re-estimation, a valid approach to look at the data and then determine whether the sample size should be enlarged.

Based on the data, the committee reviewed it and the company decided, "Let's increase by 20%." And guess what? They had not thought about the fact that this would then require a larger study medication supply for the trial. The sponsor really struggled to find drug in a short period of time and ultimately get the study medication to the sites where it was needed. It sounds trivial, but you have to think through these details, what you need in case of an adaptation, so that you are prepared from an operational perspective and don't have to say, "Oops, that was a mistake we made here."

You have to go through these scenarios, build them, understand them, and be prepared for every single direction the adaptation could go, so that you are operationally ready to execute it.
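
To make the sample size re-estimation idea a bit more concrete, here is a minimal, hypothetical Python sketch of a conditional-power ("promising zone") re-estimation and its knock-on effect on drug supply. The effect size, zone boundaries, cap, and kit arithmetic are illustrative assumptions, not figures from the trial Martin describes; a real design would pre-specify the rule (and, where needed, a combination test) so the type I error is preserved.

```python
# Hypothetical sketch of a promising-zone sample size re-estimation (SSR).
# All numbers (effect size, zone, cap, kit count) are illustrative assumptions.
from scipy.stats import norm

alpha = 0.025        # one-sided significance level
n_planned = 100      # planned patients per arm
n_interim = 50       # patients per arm at the interim look
delta_hat = 0.32     # observed standardized effect at the interim (assumed)
target_cp = 0.80     # conditional power to restore if the result is "promising"
n_cap = 200          # never exceed 2x the planned per-arm size

z_alpha = norm.ppf(1 - alpha)

def cond_power(n_final, delta):
    """Conditional power of rejecting at the final look, given the interim data."""
    i1, i2 = n_interim / 2.0, n_final / 2.0   # Fisher information per arm pair (sigma = 1)
    score1 = delta_hat * i1                   # interim score statistic
    mean = score1 + delta * (i2 - i1)         # conditional mean of the final score
    sd = (i2 - i1) ** 0.5
    return 1 - norm.cdf((z_alpha * i2 ** 0.5 - mean) / sd)

cp_now = cond_power(n_planned, delta_hat)     # conditional power under the current trend
print(f"Conditional power at planned n={n_planned}/arm: {cp_now:.2f}")

if 0.30 <= cp_now < target_cp:                # "promising zone" (assumed bounds)
    n_new = next((n for n in range(n_planned, n_cap + 1)
                  if cond_power(n, delta_hat) >= target_cp), n_cap)
    extra = n_new - n_planned
    print(f"Re-estimated size: {n_new}/arm (+{extra}/arm) "
          f"-> plan for roughly {2 * extra} additional treatment kits")
```

Whatever the actual numbers in a given trial, the last line is the operational point: any increase translates directly into additional kits that have to reach the sites, which is exactly the supply planning that caught the sponsor in Martin's example off guard.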

Nate Akers:

Great. Excellent.

Ned Wydysh:

Great point, Martin. We then asked our survey respondents about the greatest barriers to their organizations implementing adaptive trial designs, and four barriers really stood out in the data: a skepticism or perceived lack of acceptance of the resulting data packages among regulators or clinicians; concern around the risk and uncertainty of an adaptive trial design's impact on trial success; lack of awareness and familiarity at the organization; and lack of qualified or experienced partners. This is pretty interesting.

I would say lack of awareness and familiarity at the organization, lack of qualified, experienced partners, and lack of internal systems and software are really about operationalizing the study: do we have the processes in place to effectively execute it? The first two barriers, skepticism or lack of acceptance of the dataset from regulators or clinicians, and uncertainty around the impact on trial success, are really more about the perception of the value of the data package, a perceived concern that these data packages aren't going to be as strong as data packages from a fixed trial design.

And to me, they speak to a lack of organized early planning and outreach to those different stakeholders. What do regulators really need to see? What do clinicians need to see? How do the data packages impact your likely trial success under different scenarios? This can be successfully addressed with some of that early research, so we really know, "Okay, this represents a promising dataset, this will answer our question, and we'll know exactly how to proceed forward in a way that doesn't reduce our likelihood of trial success," really putting this asset and this program in the best position to succeed.

Nate Akers:

Yeah, it's interesting. The three of us have the benefit of reviewing hundreds of protocols a year and working with many different sponsors, and we've seen them through different phases of clinical development, oftentimes including submission and approval. To me, that 42% seems high based on my experience working with sponsors, but I'm curious, Martin and Ned, from your perspective, is that an accurate perception? And maybe just a follow-up to that: are you able to speak to how the agency views adaptive trials?

Ned Wydysh:

Sure, I can start, and maybe then Martin can speak more to how the agency views adaptive trials. I think the 42% is high, but it definitely speaks more to pharma's hesitancy to change how they've typically interacted with regulators and clinician stakeholders. There's a significant aversion to any risk around regulatory approaches. But I think what we can take from this dataset is that if you're willing to build these capabilities and engage with regulators and other stakeholders on novel approaches, you can actually gain a competitive advantage relative to the organizations that say, "Okay, we just don't believe the data packages are strong enough; we think it's going to limit and reduce our likelihood of success."

But if you're willing to do that early research, willing to engage regulatory bodies and a range of stakeholders early, that isn't necessarily the case.

Martin Roessner:

Yeah, I would agree, and you will see on one of my upcoming slides that one of the biggest items is getting regulatory buy-in. The perception issue arises when you have created a protocol with an adaptive design but have not really addressed all the pitfalls and the requirements the agencies have; then you will fall into that 42% and get a response of "No, that is not acceptable." You have to present the adaptation criteria. You have to ensure that the statistical requirements are met, that you maintain the integrity of the study, and really elaborate on the details of the adaptation: how the process works, how you ensure data integrity, who gets to see the data during the review.

Those aspects need to be laid out in the protocol. If they are not there, the agency will in fact not accept the design. But there are guidance documents out there, and we have seen enough feedback from the agencies to know what the critical aspects are and how to address them in protocols. You even hear top FDA folks presenting on the acceptability of adaptive designs. When you go back, one of the biggest initial meetings was in 2006, where the FDA brought together industry, academia, and FDA.

We are now 17 years later and we still talk about innovative designs. In principle, these adaptive approaches are there. They can be applied. You just have to do it right.

Nate Akers:

Yeah, I can't help but think of the opportunity on the therapeutic area slide to get more adaptive designs into therapeutic areas other than oncology, and then to look at that in a year or two and see how it changed. I am optimistic that if we looked at this same slide in a year or two, that perception of lack of acceptance among regulators, that 42%, would go down as well.

Ned Wydysh:

Absolutely. Great. With that, I would like to transition from talking through a lot of the interesting survey findings and hand it over to Martin to walk us through some of the critical elements of success for operationalizing adaptive trials and some of the additional barriers to successfully executing these studies.

Martin Roessner:

Okay, thank you, Ned. A little recap on what these adaptive designs represent. There are obviously now myriad opportunities to be adaptive. The simplest adaptive design is the group sequential design, but we also talked about seamless designs from phase one to phase two, or from phase two to phase three. That probably already answers a particular question I see in the chat about where we apply these designs. Is it for novel mechanisms? Is it for rare disease? I will say rare disease is a very prominent field: once we have the patients, we don't want to let them go. We are not really enrolling them into a phase one and then, when the phase one is completed, saying, "Okay, now you can go to standard of care."

No, once we have a patient identified, particularly in rare disease, we want to keep them in phase one, in phase two, and potentially even in phase three, so we sometimes think about seamless phase one/two/three designs; in rare disease, that is not rare. This is a design type we are applying particularly in early development. I think the most prominent is the seamless phase one/two design, with the SAD and MAD parts and an extension part, or in oncology, as we talked about, and I will give you an example in a minute. Those designs are helpful.

Then there is adaptation to the patient population. We talk about several disease indications in one study, in a basket type of design or a platform design. These are designs where you test several indications, maybe even several treatment modalities, be it a monotherapy and a combination therapy, or different doses of one monotherapy. That could lead to these types of basket and platform designs.

If you want, you can even include the idea of writing a master protocol, where the vast majority of the protocol is really a core protocol, and then you have different modules for different indications or treatment modalities. These approaches are there. I will also say that, as part of the adaptive design features, you could even include external control arms, using real-world data as controls. That is an option too, but there is a whole host of additional challenges when you talk about real-world data being part of randomized clinical trials.

The most important part I want to focus on a little bit today is the components of successful implementation. We talked about planning. What does the planning represent here? It's really, as I mentioned already in a response, the pre-definition of the adaptation criteria. What do we do in which situation? If we see a response rate of X percent in a particular cohort or in one of the arms, what do we do with that arm? Is it futile? Is it in a promising zone? Can we expand it further?

Those criteria need to be pre-specified, and they need to be defined accordingly in the statistical analysis. Often you really need to run simulations of those potential scenarios and figure out: if we increase the sample size, or if we continue the study on the basis of that observed treatment effect, what is the outcome? What is the likelihood of having a positive study? Those simulations help us understand that, and they may rule out some of the criteria.
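
As a rough illustration of the scenario simulations Martin refers to, the hypothetical sketch below uses Monte Carlo simulation to estimate how often a pre-specified interim futility rule fires and how often the trial ends in success under a few assumed true response rates. The sample sizes, response rates, futility margin, and final test are all assumptions chosen for illustration, not a design from the webinar.

```python
# A minimal Monte Carlo sketch of scenario simulations for an interim futility rule.
# All rates, thresholds, and sample sizes are hypothetical assumptions.
import numpy as np
from scipy.stats import fisher_exact  # final test; the choice is illustrative

rng = np.random.default_rng(2023)

def simulate_trial(p_ctrl, p_trt, n_stage1=30, n_stage2=30, futility_margin=0.0):
    """One simulated two-arm trial with a single interim futility look."""
    r_ctrl1 = rng.binomial(n_stage1, p_ctrl)   # stage 1 responders, control
    r_trt1 = rng.binomial(n_stage1, p_trt)     # stage 1 responders, treatment
    # Interim rule: stop if the observed difference is not above the margin
    if (r_trt1 - r_ctrl1) / n_stage1 <= futility_margin:
        return "stopped_futility"
    # Stage 2: enroll the remaining patients and run the final test
    r_ctrl = r_ctrl1 + rng.binomial(n_stage2, p_ctrl)
    r_trt = r_trt1 + rng.binomial(n_stage2, p_trt)
    n_total = n_stage1 + n_stage2
    table = [[r_trt, n_total - r_trt], [r_ctrl, n_total - r_ctrl]]
    _, p_value = fisher_exact(table, alternative="greater")
    return "success" if p_value < 0.025 else "failure"

def operating_characteristics(p_ctrl, p_trt, n_sims=5000):
    results = [simulate_trial(p_ctrl, p_trt) for _ in range(n_sims)]
    return {k: results.count(k) / n_sims
            for k in ("stopped_futility", "failure", "success")}

# Scenarios: null effect, modest effect, strong effect (assumed response rates)
for p_trt in (0.20, 0.35, 0.45):
    print(f"control 20% vs treatment {p_trt:.0%}:",
          operating_characteristics(0.20, p_trt))
```

Tables like the one this prints, run over a grid of plausible scenarios, are what let a team judge in advance whether their adaptation criteria behave the way they intend.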

But in order to run those simulations, you need to do what Ned was saying earlier. You need to make assumptions around what standard of care is doing, what the expectation is, what a realistic expectation on treatment effects would be. Can I play out those scenarios and really get a sense of what is promising and what is not? And I mentioned already the need to maintain the integrity of the study.

It is critical to control access to the results so that, after the adaptation, the trial is run with the same rigor and with the same intended population, so that you are not, so to say, changing an exclusion criterion and suddenly the second part of the trial is no longer comparable to the first part. Those considerations need to be met. And finally, my biggest point is usually to get regulatory buy-in on those adaptations when you present them, but that is, so to say, only at the beginning of the program.

Now let me talk a little bit about one of the examples, and hopefully we can wrap this up pretty quickly, although it is a complex design, I will tell you. On the left side, you have a typical dose escalation study, where we try to identify the dose. This can be done with a Bayesian design. It has a simple implementation, it is flexible in the cohort size, and it gives you good accuracy in identifying the MTD. Meanwhile, the FDA has issued guidance that after you have done your dose escalation, before you go into broader extension studies, you build in a dose optimization plan.
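
For the Bayesian dose escalation on the left-hand side of the design, a decision rule of the kind Martin mentions can be sketched with a simple Beta-Binomial posterior. The prior, target DLT rate, and probability thresholds below are illustrative assumptions, not the rules of any specific escalation method or trial.

```python
# Toy Beta-Binomial sketch of a Bayesian dose-escalation decision after a cohort.
# Prior, target DLT rate, and thresholds are assumptions for illustration only.
from scipy.stats import beta

TARGET_DLT = 0.30            # target dose-limiting-toxicity rate (assumed)
PRIOR_A, PRIOR_B = 0.5, 0.5  # weak Beta prior on the DLT probability

def escalation_decision(n_treated, n_dlt):
    """Posterior-probability-based decision after a cohort at one dose level."""
    post = beta(PRIOR_A + n_dlt, PRIOR_B + n_treated - n_dlt)
    p_over = 1 - post.cdf(TARGET_DLT)        # P(true DLT rate > target | data)
    if p_over > 0.80:
        return "de-escalate", p_over
    if p_over < 0.30:
        return "escalate", p_over
    return "stay", p_over

# Example: 1 DLT among 3 patients at the current dose
print(escalation_decision(3, 1))   # -> ('stay', ...) under these assumptions
```

In practice, designs such as BOIN or the CRM formalize this kind of rule with carefully calibrated boundaries; the sketch is only meant to show the shape of the calculation behind each escalate/stay/de-escalate decision.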

That dose optimization could be done by randomizing cohorts to two dose levels that may be interesting. You have a randomized comparison, but you apply a two-stage design. Then, once you have completed stage one, you can make a decision on which of these doses is really promising, take the more promising dose forward, and treat patients with it in an expansion cohort.

That could be one indication, and once you have that dose identified, you could then expand, so to say, from this cohort of lung cancer patients into other cohorts, breast cancer, gastric cancer, depending on the mode of action. This is just an example of how a seamless phase one/two design could work. It's quite complex. By the way, if you see really substantial benefit in any of these cohorts, you could potentially expand even further, talk to the regulators, and see whether that result means this drug is really a candidate for an accelerated approval.

An accelerated approval would be an ideal situation. I recognize that this design has a lot of complex considerations, but you can see that you go from first-in-human to potentially an approval stage within one study protocol. Now let me give you one example of where, from an operational perspective, it is really critical to look ahead and be prepared. Let's take a look at this dose escalation piece. How do you make the decision on the dose escalation? It's relatively simple, standard cohort management, but how long does it take to go from one dose to another?

We have a trigger for the data review while the patients are in that first cohort at the first dose level. The last patient who completes the review period triggers my day zero. On that same day, I want the data to be entered in the database; I want all the data collected for that last patient, and for the others it has of course already been done. Then, in the next two or three days, I extract these data, review them, and clean them as necessary. If other vendor data are required, like PK or ECG, or we want a scan, those data may have to be collected as well, maybe analyzed, and that may add a little time to the whole process.

But once you are there and you time this adequately, on days three and four you can generate the results. On days four and five, you give them to the safety review committee, and on day five they make the decision that this is all fine and we can now go to dose level two. Or they may decide, no, we stay with another cohort of three or more patients at the same dose level. And on day six, you can start the next cohort. That needs to be worked out in detail so that you can ultimately ensure, from an operational perspective, that you are prepared to execute this adaptive design.
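
As a back-of-the-envelope illustration of why this day-by-day choreography matters, the short sketch below adds up an assumed per-cohort decision cycle and shows how any slippage multiplies across dose levels. The durations, number of steps, and number of dose levels are all hypothetical.

```python
# Back-of-the-envelope cohort-to-cohort cycle time for serial dose escalation.
# All durations and the number of dose levels are assumptions for illustration.
DLT_WINDOW_DAYS = 28          # observation period per cohort (assumed)
DECISION_STEPS = {
    "data entry complete (day 0)": 0,
    "extract / clean / vendor data": 3,
    "generate interim results": 1,
    "safety review committee decision": 1,
    "open next cohort": 1,
}
cycle = DLT_WINDOW_DAYS + sum(DECISION_STEPS.values())
n_dose_levels = 5

print(f"decision cycle per cohort: {cycle} days")
print(f"escalation through {n_dose_levels} dose levels: ~{cycle * n_dose_levels} days")
print(f"each extra day of data cleaning adds ~{n_dose_levels} days to the escalation")
```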

That kind of detailed operational plan applies to all the milestones in between, whenever you want to make a decision: how to get the data, how to analyze the data, how to inform the decision committee, whether it's a DMC or simply a safety review committee, so that it is able to make those decisions. Planning that ahead of time is a prerequisite. One aspect from an operational perspective I want to mention is operational team readiness.

You have seen in this example how important it is that the different functions are prepared for these complex study designs. The project management setup needs tools to work through these complex designs, so the team knows what the milestones are and what is needed at each milestone to have all the adequate information available. The setup of data collection needs to be agreed: how the data are entered into the database, and how data from the different sources are integrated. And the analytics need to be prepared for the interim analysis, so that I'm not starting programming when the data come in.

The programs should be available and validated by the time I reach that milestone. Drug supply, as in the little example I mentioned: if there's a modification, if you want to eliminate one dose and continue with another, you need to be sure the drug supply is organized in a way that patients can be treated. Vendor qualification: if I have a vendor doing the labs or the images, I need to be sure they can deliver in time so that my sequence of events is not disturbed.

The statistical analysis needs to be ready to enable timely decision-making. And then, the predefined roles of IDMCs and safety committees: the charter should make very clear what their role will be in the decision and who should then get information about these data. Ultimately, I will say it is good practice to have continuous tracking, and project management will need to address any potential issues and risks early so that they don't become rate-limiting.

These are probably from an operational perspective, some of the critical pieces I would like to highlight. If you do that, I believe you can successfully implement adaptive designs in your development.

Nate Akers:

I have a question for you. Just based on your experience, which of these do you continually see not being properly managed?

Martin Roessner:

I would say, from my end, it's really the time you need to spend before you initiate the first patient: you go through not only the protocol, having the protocol written, but from an operational perspective you have your team ready. The team has to understand, and that goes down to the last CRA at a site, that they need to ensure the site is entering the data in time and preparing the information required for that decision-making process, and that this team is ready to work together, understands the deliverables at these milestones, and is familiar with the complexity of it.

It's not really a fixed study. We still hear things like, "Well, we are collecting the data and then there will be a bolus at the end before database lock." That is not what an adaptive design can tolerate. Adaptive designs need continuous review of data, almost real-time access to data, so that you can also make real-time decisions on those data. That is what I would say is really critical: have these teams in place, prepare them before you run this type of trial, educate them, and train them on the design and on the milestones to be achieved.

I think that's a prerequisite that we will be successful in the implementation of those trials.

Nate Akers:

Great. Thank you, Martin. Very helpful.

Ned Wydysh:

Perfect. Thank you, Martin. The last slide we'd like to wrap up on before the Q&A is the response to the question on expectations around the future use of adaptive trials. The vast majority of our respondents, 63%, anticipate a large increase or an increase in adaptive trials in the future, really as opportunities to accelerate timelines, reduce costs, and reduce risk. This is fantastic to see, but it does emphasize the need to properly plan around operationalizing, properly plan around decision-making, and ensure that all the tremendous potential benefits of adaptive trials are truly realized.

Nate Akers:

Great. All right. We have some time for Q&A. Let me see. There was a quick administrative question first: yes, we'll be sharing the slides after the meeting, and we'll also share, if you haven't read it yet, the report that covers decentralized trials and synthetic control arms as well. Several questions have come in from the audience, so great participation. Let me pull one that came up that I thought was interesting.

Do IRBs, ethics committees, or regulatory authorities expect to be updated when decisions to change the trial are undertaken during the course of the trial?

Martin Roessner:

I would say in general not, but that requires that you have very precisely defined your criteria, because then it's clear that whatever you do is following that path. If the agency can see what you are preparing and how the decision is made, then it would not be a requirement to inform the agency of those decisions along the way. I have seen pushback when those criteria are not totally clear. Then you see exactly the pushback of the 42% Ned mentioned earlier, where the agency says, "Well, we do not fully agree with your approach; come back when you have done that first stage and then inform us."

But that is a result of not being transparent enough and clear enough about what the adaptation criteria are. In general, as long as the adaptation criteria are very clearly formulated and it is clear that they are acceptable, you may be able to proceed without an interaction with the agency during the trial.

Nate Akers:

Thanks, Martin. Yeah, great response. Back to the regulatory piece, someone posed the question, what are the most common issues of non-acceptance by regulators?

Martin Roessner:

I would say we sometimes see it particularly in oncology, and I want to use that example. With new entities and new modes of action, the agency has some reluctance to go into larger trials; large meaning exposure of more patients than you might be comfortable with. I've seen some pushback in that area, where somebody said, "Well, I would like to expand that right away into 100 or 150 patients" in the phase one/two study, and you see pushback from the agency, particularly in oncology: "No, for solid tumors, we probably should not go over 40." To begin with, you can do a two-stage design there, which is adaptive if you want.

The first 15 or 20 patients will be assessed as to whether the drug is futile or not. Then you expand into a second stage, with a limit of up to 40. First you demonstrate safety in that 40-patient cohort, and only then would you be allowed to expand further. There are probably opportunities for adaptations there too. As I mentioned in my case study, if you do your two-stage design in the beginning and you see not only a small improvement in efficacy but a substantial benefit, I think every company is ready to go to the agency and say, "This is what we have. What would you recommend we do? Do we need to do a new trial, or can we expand this trial now beyond the 40?"

But in general, I think the guidance often is to finish that expansion cohort with a limited number of patients, then review the data and have a plan for the subsequent program, either single-arm or, as the agencies largely prefer, randomized controlled studies to see how the drug really works, adequately powered, whether that's a phase 2B or a phase three. That's another question you can discuss with the agency once you have the initial data, but the pushback is often not to go too high in exposure to begin with.
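
To illustrate the kind of two-stage rule Martin describes for a roughly 40-patient expansion cohort, here is a small sketch that computes exact operating characteristics for one assumed set of boundaries. The boundaries and response-rate scenarios are hypothetical, not an optimized Simon design for any particular trial.

```python
# Exact operating characteristics of a hypothetical two-stage futility rule
# for a single-arm expansion cohort. Boundaries and rates are assumptions.
from scipy.stats import binom

n1, r1 = 18, 2        # stage 1: stop for futility if <= 2 responders in 18
n_total, r = 40, 8    # overall: call "promising" if > 8 responders in 40

def operating_characteristics(p):
    """Early-stop and success probabilities for a true response rate p."""
    early_stop = binom.cdf(r1, n1, p)
    # P(success) = sum over stage-1 responders x > r1 of
    #              P(x responders in stage 1) * P(> r - x in the remaining patients)
    success = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n_total - n1, p)
                  for x in range(r1 + 1, n1 + 1))
    return early_stop, success

for p in (0.10, 0.30):   # uninteresting vs. hoped-for response rate (assumed)
    stop, win = operating_characteristics(p)
    print(f"true ORR {p:.0%}: P(stop early) = {stop:.2f}, P(promising) = {win:.2f}")
```

Under the uninteresting rate the design usually stops at the first stage, while under the hoped-for rate it usually continues and declares the signal promising; these are exactly the probabilities one would tune before committing to the boundaries.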

Nate Akers:

I can't help but think we've just spent 53 minutes talking about adaptive design trials, which is fantastic, but are they always the answer? Are they always better than fixed design trials? What are your thoughts on that?

Martin Roessner:

I will say there is a place for adaptive designs, and there is probably a place for fixed designs. It still tends to be a little bit challenging to define complex adaptive designs in a pivotal phase three setting. I believe there are situations where you can still do a pretty standard fixed design and that is well suited. Very early on, when we think about healthy subjects, or patients in a particular indication, you may still do a very simple fixed design in the early phase. And for pivotal trials, agencies usually do not like you to mess around too much; they want these trials executed to a standard, ensuring that ultimately the results are replicable and credible.

That means in some situations a fixed design for a phase three might still be the better choice, but I personally would always look for opportunities to optimize the protocol, even if it is just to perform an interim analysis and see whether you are on the right track, with not too much impact on your sample size from spending type one error there. That is something I would recommend doing in any case.
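
To show why a single interim look can be added at little cost, here is a small sketch of an alpha-spending calculation using one common choice, a Lan-DeMets O'Brien-Fleming-type spending function; the specific function and information fractions are assumptions for illustration rather than anything Martin specified. The point is that an early look spends only a sliver of the overall type I error, leaving the final analysis essentially untouched.

```python
# Cumulative type I error "spent" at interim looks under a Lan-DeMets
# O'Brien-Fleming-type spending function. Information fractions are assumed.
from scipy.stats import norm

ALPHA = 0.05   # overall two-sided significance level

def obf_spent(t, alpha=ALPHA):
    """Cumulative alpha spent by information fraction t (0 < t <= 1)."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1 - alpha / 2) / t ** 0.5))

for t in (0.25, 0.50, 0.75, 1.00):
    print(f"information fraction {t:.2f}: cumulative alpha spent = {obf_spent(t):.4f}")
```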

Ned Wydysh:

Yeah, I completely agree with Martin. I really view adaptive trials as a type of tool that makes sense or doesn't make sense depending on the manufacturer's or developer's strategic goals and understanding of the indication-therapy fit. If it's a mechanism that is only relevant for one indication, and it's very clear what patient subpopulation or line of therapy they're going to go into, that might not be a situation where an adaptive trial makes sense, particularly in later stages. But as we think about the need to maximize probability of success in earlier stages, where there are questions around the appropriate patient population, indication, or dosing combination, then adaptive trial designs have tremendous benefits.

Nate Akers:

Yeah, great perspective, gentlemen. Time for one more question. There are some questions in the chat that I don't think we've answered here, but we will grab those and work to provide responses. I just don't feel it would be appropriate to have a discussion like this without talking about patients. All of us do what we do to improve the lives of patients and families. What impact do adaptive designs have on patients?

Martin Roessner:

I think this is a great question to be aware of and take care of. Last but not least, the patient is really the center of the clinical trial. I believe you need to include the patient in the trial design and in the execution as well to make it successful. Obviously, keeping patients in the trial is important. When there are adaptations, you need to have appropriate language in your informed consent already, so that the patient is aware: if I'm on one dose level and there is a certain result, what is my next option? You cannot guarantee what dose the patient will be on, but you can prepare the patient for the eventual changes.

If you have an open-label extension, you tell the patient from the start: we go 16 weeks, and after that, we will move you to open-label treatment. If you have adaptations which allow the patient to then be treated with another dose, or randomized to another treatment arm, the patient needs to know that. You need to ensure that the patient gets the right information and continues to be part of the study, so they are not surprised, are well informed, and are ready to keep participating to the end of the trial.

It's critical that we really have the data from all patients to the end of the trial to be able to make really unbiased conclusions from those data. The patient is a big part of an adaptive trial design.

Ned Wydysh:

Just to very quickly add, I know we're about up on time: a successful adaptive trial design should be better for patients. It should result in a lower likelihood that they will be continued on a study arm that is failing to show a benefit. There should be, and I think there is, tremendous benefit for the most important stakeholders in the adaptive trial design process.

Nate Akers:

Perfect note to end on. Ned, Martin, always a pleasure. This does conclude our webinar for the day. Thank you so much to the audience for joining. We will be sharing the slides and a link to the report as well. Thank you, all, and have a wonderful day.

Ned Wydysh:

Thank you, all.

Martin Roessner:

Thank you.
