Good Tinnitus Science, Bad Tinnitus Science
Episode 23 • 13th April 2024 • Tinnitus Talk • Tinnitus Hub


Shownotes

We often talk about the lack of research and funding for tinnitus. But what about the quality of research? Do tinnitus sufferers benefit from the research that is conducted? In reality, many studies are conducted improperly, thus giving misleading results and false promises for patients.

During this episode, we dive deep into concepts like research design, patient selection, outcome measures, statistical analysis, and everything else required for high-quality studies. We focus on studies that assess tinnitus interventions; in other words, studies that measure the effectiveness of new treatments. How do we ensure that such studies generate valuable information for patients?

We discuss these topics with Inge Stegeman, an epidemiologist from the University of Utrecht, and Jorge Simões, Assistant Professor in data science and mental health at the University of Twente.

(00:00) Why Is This Topic Important?

(16:15) Things That Can Go Wrong in Designing a Study

(24:55) How Do We Measure the Success of Clinical Trials?

(35:54) Open Science and the Importance of Being Systematic

(39:24) Breaking Out of Research Silos

(46:04) The Importance of Negative Results

(49:43) Being Honest About Study Outcomes

(55:59) Reasons for Optimism

(58:39) Unifying Tinnitus Research

Become a Tinnitus Talk Podcast Patron at https://moretinnitustalk.com for bonus content, video interviews, Ask an Expert series, and more!

Transcripts

Hazel:

Hello everyone! Today's episode of the Tinnitus Talk Podcast focuses on a topic close to my heart, namely the quality of tinnitus research. How can you, as a layperson with an interest in following tinnitus research, or even just as a regular tinnitus sufferer who wants to know about any new treatments coming out, how can you judge the validity and reliability of tinnitus studies?

Hazel:

Because if you just read the media reports, or even if you go and read the original academic publications, the results often seem much more promising than they actually are. And we want to save you from spending a lot of time and money on something that might not work for you, not to mention the mental anguish that can come from putting your hopes on something only to find out it doesn't work.

Hazel:

There's nothing wrong with you. It's the scientists and the people who market new treatments that need to be honest and transparent about what a study does or does not prove. And we want to arm you with some basic knowledge to help you be more critical about this. Mind you, we're not talking about scams here.

Hazel:

That's a big problem as well for tinnitus sufferers. There's plenty of unscrupulous people out there looking to make a quick buck off of our suffering, but that's a topic for another day. We are talking about legitimate scientific research. But not all science is equal. If you're going to test a new treatment, for instance, best practice is to use a control group alongside the patient group that gets the actual treatment, so that you can correct for the placebo effect or test whether the new treatment is better than an existing treatment.

Hazel:

Yet, many clinical trials are still conducted without a control group. And that's just one of the many things to look out for when you read about an interventional study. Some of the other things include how patients are selected for the study, who is included, who is excluded, the sample size of the patient group, what kind of data is gathered from patients during the study, how the success of the study is defined and measured, how data is analyzed, and much more.

Hazel:

We often talk on this podcast about more funding being needed for research. But really, it's just as much about conducting research in the right way, not wasting the opportunity to learn something new, and communicating openly and honestly about the evidence for or against a treatment. Who does it work for?

Hazel:

Who does it not work for? How well does it work? And how sure are we really about the results? I'm talking about this topic today with two tinnitus researchers with deep knowledge of study methods and data science, Inge Stegeman and Jorge Simões. I really appreciated their receptiveness to critique and their commitment to open dialogue with patient communities like ours.

Hazel:

Before we start listening to my conversation with them, just a reminder that we always create transcripts for all of our episodes to cater to those of you who prefer to read rather than listen. You can find these on tinnitustalk.com/podcast/ by clicking the CC button next to the player. Also, please like and review our podcast through whichever podcast app you're using.

Hazel:

And if you want to support our volunteer efforts, go to moretinnitustalk.com. That's moretinnitustalk.com. As a paid subscriber, you get access to bonus content, live events, and at the higher tiers, you can even co-create content with us if you want. Our next episode is coming out soon, within a few weeks already, and will contain a big announcement about a new research project that we're involved in.

Hazel:

The project includes some of the biggest names in tinnitus research, it already has some funding behind it, and the aim is to significantly speed up the quest for a cure. It's very exciting, but that's all I'm going to say for now, so stay tuned, and now I invite you to listen to my conversation with Inge and Jorge.

Hazel:

Welcome to the Tinnitus Talk Podcast. I'm your host, Hazel, and today we're talking about good tinnitus science and bad tinnitus science. I probably shouldn't say bad, that maybe sounds a bit controversial, but I do think over the years I've seen some, let's say, substandard tinnitus studies come out, and we've become more and more critical of that, and we want to educate tinnitus sufferers on how to spot quality issues with tinnitus research. But of course, we also want to push the researchers to improve the quality of their research, because we want better treatments and hopefully a cure someday. So to discuss this very important topic with me today, I have Inge Stegeman and Jorge Simoes.

Hazel:

Welcome.

Inge:

Thanks. Thank you.

Hazel:

Inge, can I first ask you to introduce yourself and tell us a bit about your background, and why this topic is important to you? Because there's a reason I invited both of you. It's not because you're maybe the most prolific tinnitus researchers, and we're not actually going to talk so much about your own research.

Hazel:

We can do that some other day maybe. But I did specifically invite both of you because I know you're both quite critical about the field in general, and you're not afraid to voice it. Particularly you, Inge, I think you're not afraid to voice it because I've been to a few conferences where you started your presentation with something along the lines of I'm here to tell you that your research sucks.

Hazel:

You didn't use those words. You did not use those words, but it was something along those lines. So yeah, can you tell us a bit about your background and why this topic is important to you?

Inge:

Great. Thank you very much for inviting us. I'm very pleased that we can actually speak about this topic. I'll first introduce myself.

Inge:

I'm Inge Stegeman. I'm an epidemiologist at the Department of Otorhinolaryngology of the UMC Utrecht. Clinically, I'm doing research into tinnitus and hearing loss, but apart from that research, I'm doing research into what we call metascience. So that is trying to solve the issues that exist in current biomedical science, and that is problems with methods.

Inge:

So as long as we don't stick to the right methods and use the right statistics to actually evaluate our interventions, then we're probably not going to give patients the right answers and we're not going to help society any further. So your other question was why this subject is so important to me.

Inge:

So that's exactly what it is. So I think that we're using a lot of societal money to try to improve the lives of everybody living here. And in this case, today, we're speaking about tinnitus patients. And at least I think that the only way to actually change that and to find usable interventions is to do our research in a good manner.

Inge:

So according to the best methods and using the best statistics, and regrettably, currently that is not always what is done. There's a lot of good research out there and there are a lot of good researchers. So I think we have to distinguish between intentional and unintentional mistakes, and intentional mistakes are usually not the case.

Inge:

So that's not what we're here for. So indeed, I try to tell people how we can change it, but I also want to see the positive side of this. Most researchers are really trying to do the right thing, but I think we have to keep the methods and statistics more in sight in order to find a solution for tinnitus patients.

Hazel:

Okay. We'll ask you a lot more questions about that later, but Jorge, could you also introduce yourself, please?

Jorge:

Sure. So thank you very much for the invitation. It's a pleasure to be here with you today. My name is Jorge Simoes. I'm an assistant professor also in the Netherlands, not very far away from here in Utrecht, in Twente, and I have a position in responsible AI and mental well being.

Jorge:

So it's a position that also has a lot to do with methods and how they can be used to increase the quality of life of people suffering from all kinds of chronic conditions, including tinnitus. The reason I'm motivated by this theme of how to improve the evidence: I mentioned to you, I think a couple of weeks ago, that if you look at the literature, you'll see that the only thing that has some evidence behind it is CBT to manage tinnitus.

Jorge:

And if you start taking apart what CBT is, you have all those different components. And one of the strong components of CBT is so-called exposure. And in the case of tinnitus, it's exposure to silence, so you can hear your tinnitus. So in other words, what we are saying is that one of the things that works best for tinnitus is saying, just listen to it.

Jorge:

And I don't think that's a good enough answer, or that it's good enough to offer patients that. I have also suffered from tinnitus, since 2016. And I went through the same, let's say, patient journey as everybody else. You go to your GP and he says, you have tinnitus, get on with life. And that's why I decided to be part of this research line. And I think that the state we are in right now is not good enough.

Hazel:

And Jorge, so you are, I think we can say a fairly early stage researcher, right? You finished your PhD, when was this?

Jorge:

2021.

Hazel:

Yeah, so fairly recent. I imagine you come into this field kind of young and excited and you're happy to contribute to some amazing, what you think or hope are going to be amazing tinnitus research projects, and then you discover things along the way where you're like maybe this is not optimal.

Hazel:

Can you talk a little bit about that?

Jorge:

Yeah, I think that's a good description of how things are done. And there's always room for improvement. And I think the most difficult part now for young researchers is how to translate all those ambitions and wishes into practice. So now I'm very thankful for my new position.

Jorge:

I have some freedom to pursue my own research. I also had that during my PhD, more than, let's say, the average PhD student, but now I don't have a supervisor, right? I have someone above me in the faculty ranking who is head of the group, but in terms of coming up with my own research, I now have full freedom to come up with it.

Jorge:

And that's also why I like talking to the two of you because excuse me for my French, but you have a good bullshit detector. So it also keeps me in line that I can reach out to you and get some feedback on my own work and use this feedback to actually come up with something productive.

Hazel:

Yeah, and that's one of the reasons we've developed this open line of communication, because we felt you're open to these kinds of discussions. And not all researchers are, and that's a pity, like we were talking about earlier informally.

Hazel:

I do think people get very personally attached to their research, and they say, oh, we're always open to criticism, or we love hearing from the patient organizations. But when we actually voice concerns or even just ask critical questions, it's not always well received. So I think there's the ideal of science, which should of course always be open to criticism, and there's the reality of people just, I think, feeling very personally attacked when you actually do that.

Jorge:

I agree. There is even that one article that I published a few years ago and some members of the Tinnitus Talk didn't like it. And I was actually then justifying myself and then putting it in context.

Hazel:

Oh, because it was your own research.

Jorge:

Yeah. And I think it's part of the game, and some people will say, I don't like it, and they are free to do so. I try my best to give the reasons why I think that's relevant. And sometimes it's just like that, that some people will disagree.

Jorge:

I hope at least that, in terms of methodology, the study is sound and that I can support the claims made in that article.

Hazel:

Yeah. Inge, I'm curious to go back to this point of these kind of talks that you give at the Tinnitus Research Initiative conference and others where you're not like criticizing anyone in particular, but you're criticizing the field or you're wanting them to do better.

Hazel:

Like, why did you decide to go and do that? That's my first question. And the second one is, how is this typically received? Like, Do you have conversations afterwards with people where they say, oh yeah, you've got a point or, how does that go?

Inge:

I decided to do that because I'm an epidemiologist.

Inge:

So methods of clinical research are my job. And to be clear, first of all: the problems we are finding in tinnitus research are not specific to tinnitus research. I think that's very important to say. The challenges and the problems, and also the very good things, we have in tinnitus research are there all over the field of biomedical science.

Inge:

So problems with reproducibility, or problems with a high risk of bias in studies, or problems with the misuse of some statistical methods. That's not unique to tinnitus research. So I find it very important to, first of all, say that. Why do I choose to really voice that?

Inge:

Because I think it's very important that we as researchers try to be as critical as we can. And I think we need to voice whatever we mean or whatever we want to improve in a positive manner. So I hope that I'm always trying to say that it's not about the researchers themselves. And I really find it important to say that today on this podcast as well.

Inge:

It's not about the researchers. It's about how we do our research and that we need to find a way of doing better research and maybe slower research. To think about the protocols and the plans a bit longer and to really involve all different specialties. Because that's also what it is about. There's not a lot of statisticians and epidemiologists involved in clinical tinnitus research.

Inge:

And I think that what we need is really a combination of the different clinical knowledge. The knowledge of patients, but also the knowledge of methods and statistics.

Hazel:

Yeah.

Inge:

And then your other question was how people respond. Very positive. No, honestly. I think that what I'm trying to do is not make it about the person, but really make it about the research.

Inge:

And I think that the tinnitus research community is very enthusiastic about trying to improve the methods and the statistics and the way we are doing research. On the other hand, I think that it is a challenging thing sometimes, because I think that for me, and I think that Jorge also voiced that earlier, it's mainly about these collaborations, and then sometimes people find it difficult to find the right persons or to really collaborate and I think that's something we need to try to improve.

Inge:

And the other thing is that even if we have this collaboration, then we are also restricted sometimes to a four year time period for a certain grant, for getting a certain goal. And I think that we need to try to do a bit less research maybe within that time, a bit slower, think about what we can really aim for, what is more realistic with the data we have and with the possibilities we have, and then get more reliable results from which we can base our new studies again.

Jorge:

But also related to what Inge was saying, my impression is also that a lot of people then are willing to collaborate with you, because if you say I know something that perhaps you don't, and I have this skill set that perhaps is not immediately available to you, that generates a lot of possibilities for people.

Jorge:

Yeah, for collaboration. So it doesn't need to be something that's negative.

Hazel:

Yeah, no, totally agree. But so let's dive a bit more into, what you could call research design, because I think you started to talk a little bit, but let's take a bit more time to design the proper protocols and things like that.

Hazel:

And I have a little bit of direct experience with that and a kind of a frustration that we have as a patient organization is that we're usually not asked for any input at those early sort of design stages, the conceptual thinking about the research and to give an example, I'm a member of the advisory board of UNITI, which is one of the tinnitus research programs that is funded by the EU.

Hazel:

And I know, Jorge, you were involved, so if you don't want to speak too directly about that, that's also fine, but I will. So I was asked to be a member of the advisory board, and then they sent me the clinical trial protocol, and I had some very fundamental questions around sample size and patient selection. And I said, I feel like you should select more people on the severe end of the spectrum. And what UNITI was doing, just for the listeners' background, they were testing combinations of different treatments. So instead of only doing hearing aids or only CBT or only counseling, they were testing different combinations of these things.

Hazel:

We were anyway not too happy about that because it didn't sound like something very innovative to us, right? These are all very established treatments. But okay, maybe there's something new to find out regarding combinations of things. Okay, maybe. But then sample size, I thought , how much can we really then learn?

Hazel:

I'm not a statistician, I don't know, but it seemed insufficient because there are so many different possible combinations of things to test, right? So it seemed logical to me that you need a really large sample for that, and I was critical, like I said, about them not selecting patients, or not so many, on the more severe end of the spectrum, which to us is the patient group that you should most want to target or help. And so we gave this feedback, and it was not just me, I'd gotten some people from the Tinnitus Talk community to also look at it and give feedback, and it seemed nice that at least they were asking for some input, but then the response was, oh actually, we can't really change the protocol anymore.

Hazel:

I was like, why do you even ask then? Why even ask if you can't change the protocol? So apparently there had already been a lot of discussions and decisions were already made. So yeah, that's just one frustration I have. But it leads into this whole question of research design, like proper design.

Hazel:

So Inge, can you talk a bit more about that? Specifically to tinnitus studies, what are really the critical elements that you would look for to say, okay, this study is properly designed?

Inge:

I think that in tinnitus studies, but then again in clinical studies in general, it's very important, first of all, and your statement is correct, that we include a lot of different stakeholders from the beginning of designing the project.

Inge:

Then how you design a study depends fully on what you want to study. So if you want to study a new way of diagnosing tinnitus, obviously there are different methods than when you want to study an intervention, or when you want to study the prognosis of having tinnitus, or coping with tinnitus.

Inge:

So I think that if we look at interventions, then there are a couple of things that are very important. And I know that recently you wrote an interesting blog about that on your website. So it's basically about: if you want to study an intervention, then usually the best way of doing that is randomizing two groups of patients and giving one group of patients one treatment and the other group of patients the other treatment.

Inge:

Because that is usually the easiest way to get all sorts of factors that can influence the effect of the treatment balanced between those two groups. The other thing that is important is looking at sample size. So to look at how the research group defined the sample size. And in order to define a sample size, you need to make certain assumptions.

Inge:

And without making this a full lecture on research design, I do think that's important, and maybe we can put some stuff on the website about that as well. But sample size is fully dependent on the assumption of how large you think the effect is going to be. And I think it's very important to see how large they assume the effect to be, and whether that's a reasonable effect to calculate it with.

Hazel:

Can you, sorry, can you elaborate? On that sample size is dependent on how large you think the effect will be. Can you explain that for the layperson?

Inge:

I think the first thing we need to explain is that in research and statistics, assumptions are a big thing. In whatever analysis you do, you make assumptions about what your data looks like.

Inge:

We call that the distribution of the data, so we make assumptions about what the data looks like. But we also make assumptions before we start a trial on what we think a reasonable outcome would be, or what we think a beneficial outcome for the patient would be. So that is, for example, a difference of a couple of points on a questionnaire between a treatment group and another group.

Inge:

And I think that this assumption can easily be checked, because we all know that a one point difference in improvement is not really a large thing.

Hazel:

On a hundred point scale, that's nothing.

Inge:

On a hundred point scale, that's nothing.

Inge:

But a ten point difference might be more reasonable. But then again, would that be important for you as a patient? So you don't know.

Hazel:

Yeah. So 10 points reduction in TFI. So Tinnitus Functional Index. I think that's the most used scale. Probably people fluctuate more from one day to the next than 10 points.

Inge:

Thank you for saying that. It's like we rehearsed this, but we didn't. So that's the thing, what we need to know is that these are all assumptions. So even if you make it a clinical difference of ten points, then that is not all of a sudden going to change someone's life.

Inge:

So we're making all sorts of assumptions to see what we need in the research design. And that is as I said, that is about the design you use, like randomized controlled trial, that's also about the sample size. And maybe we can put a bit more information about that on the website to explain that a bit.

Inge:

And also about how you're going to analyze the data. So what groups are you going to compare, for example, or are you going to do subgroup analysis? But then there is the other problem, and that's that a lot of people who are listening to this podcast are now going to say, yeah, but Inge, a randomized controlled trial is not always possible.

Inge:

Right.

Hazel:

Yeah, I hear that actually a lot from researchers. They say it's difficult to come up with what they call a sham treatment. Let's say you're testing some kind of sound therapy and you have a specific combination of sounds and things that are applied in a certain way that you think will help, then you would have to come up with a different treatment that seems like it's, a legitimate treatment, but actually we're assuming that one doesn't work. And they're saying, yeah, that's too difficult. We don't know how to come up with that fake treatment for comparison purposes. Yeah.

Inge:

So that's fine.

Inge:

That's not a problem at all.

Inge:

We shouldn't only focus on randomized controlled trials, but if you're not doing a randomized controlled trial, you're doing what is called an observational study, which is one group of people to whom you're going to give the same treatment, and then you're going to see how they progress, in this case, in their tinnitus.

Inge:

And that can be a very reliable way of assessing a treatment, but it's not as good as a randomized controlled trial. And I think that we need to keep that in mind. So some treatments we cannot assess with randomized controlled trials, which is fine, but we need to keep in mind that if we're not doing that, the studies are more prone to systematic errors, and we need to either find statistical ways of dealing with that, or we need to be honest about that in our limitations and our conclusions, and see how we can actually solve this methodological problem.

Inge:

So the thing that we can most improve is to not, let's say, draw too large conclusions based on studies with either an inferior or suboptimal research design, or a suboptimal sample, or a suboptimal type of inclusion of patients. So we need to, I would say, limit ourselves in drawing too large conclusions.

Jorge:

Yeah. It's not as if all studies are the same, right? You have studies with different levels of evidence, and of course in biostatistics RCTs, randomized controlled trials, remain, let's say, the gold standard. Regarding the research design, you were mentioning those 10 points of the TFI. That's something that I spent too much time thinking about, because it's often the case that you see, let's say, a 13 or 15 point decrease in the TFI.

Jorge:

And then you ask the person, okay, how do you feel? And they go, ah, maybe it's a little bit better, but I'm not sure. And that's something that I don't have an answer for. Maybe there's some researcher that has been doing some work on it, and I haven't found it yet. But what does it mean?

Jorge:

Those, let's say, 13 or 15 points of increase or decrease in a questionnaire. What about quality of life, psychological well-being, positive or negative affect? So you have a whole universe of outcome measures, and you were part of the COMIT'ID study, in which people said, for this type of intervention, that would be interesting to see, and so on. But they would like to see what the correlation is of the improvement or the worsening of the TFI with other metrics of relevance for a patient's quality of life.

Hazel:

Yeah. And anecdotally, I know it's true what you say, if someone has a 10 or even 20 point reduction on TFI, but you ask them, how do you feel? Usually it's huh, I don't know. Maybe a little bit better or whatever, but so I don't know if it correlates very well with, because what you want people to say is yeah, I feel significantly better.

Hazel:

Like my quality of life has improved. And I don't know how well that correlates indeed with this 13 point reduction on the TFI, because that 13 point reduction comes back time and time again. I've by now read many clinical trial publications where the outcome is always an average 13 point reduction on the TFI, so Tinnitus Functional Index, and that is considered clinically significant.

Hazel:

And I used to think, okay, that must mean something. Someone must have really figured out what clinically significant means. And I used to think that 13 point reduction was very meaningful. Now, I don't really think so anymore. And I went back to the original paper where this 13 point threshold was defined and I actually found out that it means it correlates with people feeling, indeed, slightly better. Just slightly.

Jorge:

Yeah.

Jorge:

I don't understand why not to use that metric, right? Because this is from a questionnaire called the Clinical Global Impression; you could simply use that. But just to come back to the previous point, I don't have a full answer to that question. I have only a partial answer.

Jorge:

I investigated this, so from the clinic that I was previously working with, we just looked what happens to patients over the years. No intervention, just the good old effect of time. So some kind of habituation, I don't know. But just look at patients and see what's happening to them.

Jorge:

And we saw, not in the TFI, but in the THI, the Tinnitus Handicap Inventory, a clinically significant improvement. And then I also looked at the quality of life. And although, on average, I saw a clinically significant improvement, there was no difference whatsoever in the quality of life.

Jorge:

Of course, this would require that we do some kind of systematic analysis across all studies to see if this was a bias from my sample or if that's indeed the case that perhaps those metrics of distress do not correlate over time with other measurements that are relevant for patients and it should be considered when evaluating the efficacy of an intervention.

Jorge:

And one thing that you mentioned previously about your role as a patient organization in designing a trial: I'm not a biostatistician myself, but if you go on Twitter and follow real biostatisticians, they will say that they have the same problem, that someone comes to them and says, hey, I got the grant, I collected the data, and now I have to publish the paper.

Jorge:

So please, in two days, come up with the analysis for this RCT. And that's not how it works, because what Inge was describing is not only about mathematics, so to say. When you're thinking in terms of epidemiology and biostatistics, it's not only about crunching numbers, but also about how you design a study.

Jorge:

And if this is already done and you've already collected the data, if there are any kinds of biases in your data set, you may not be able to fix that. Perhaps you can, depending on the problem, but a lot of the issues you cannot solve anymore.

Hazel:

You can't fix it.

Hazel:

Can we make this a little bit more tangible for the listener?

Hazel:

Can you give an example of a type of mistake that could be made in, patient selection, for instance?

Jorge:

Sure.

Jorge:

If you didn't randomize your participants into intervention one or intervention two, for example. So let's say that you're running your RCT, but you didn't have a proper randomization.

Jorge:

You cannot be sure whether the effect that you're seeing is from the intervention itself or from some bias in how it was decided which group gets this treatment or that one.

Hazel:

Oh, so if it's not randomized, it could mean that people picked their own treatment or?

Jorge:

Could be that the groups are not matched.

Jorge:

It could be that there's some kind of bias, let's say what's called a moderator effect, just because of the way in which people were distributed. If the randomization is not done properly, let's say it's only sub-random, biases like that can slip in without leaving obvious traces.
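For readers curious what "done properly" can look like, here is a minimal sketch of permuted-block randomization, which keeps the two arms balanced in size while keeping individual assignments unpredictable. The arm labels and block size are hypothetical, not taken from any trial discussed here.

```python
import random

def block_randomize(n_participants, arms=("intervention", "control"), block_size=4, seed=42):
    """Assign participants to arms in shuffled blocks so group sizes stay balanced."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)    # in a real trial the allocation list is held by an independent party
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)       # order within each block is random
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(10))
# e.g. ['control', 'intervention', 'intervention', 'control', ...]
```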

Hazel:

Oh, that's very, then it gets very technical because I would never be able to spot something like that, I think.

Inge:

No, but you would be able to spot, for example, if there's no randomization at all and people are only included. Say there is a very large sample which might be eligible, let's say a thousand people, and in the end there was no randomization, and also only 30 people agreed to take part in the trial.

Inge:

In that case, probably the people who took part were the ones who were most enthusiastic about the type of treatment, or, there can also be different reasons. The ones who were most enthusiastic, or maybe the ones who were closest to the therapy, or maybe the ones who were most burdened by the tinnitus at that moment, or maybe the ones who were less burdened.

Inge:

I don't know. There can be all sorts of reasons, but the fact is that then in that case, we don't know and there is a self-selection in that.

Jorge:

Another one, sorry, that's a big one, is if you have, if you're delivering a treatment with a lot of side effects, maybe a lot of people then just quit the treatment, and then you're just analyzing the ones that had minor side effects.

Hazel:

But you're conveniently leaving out the fact that a lot of people actually had a negative effect from the treatment. Yeah, dropouts. So that's one thing. Yeah. So you mentioned this self-selection mechanism, and I was mentioning an example to you guys earlier, and I will repeat it now for the listeners.

Hazel:

There was a mindfulness study I read about recently, and it was actually the author, the researcher themselves who approached me, I will, generally speaking, I will not mention names because of what we talked about earlier. I will assume at least that people have good intentions, right?

Hazel:

But this person approached us and they were super excited about the results of their mindfulness for tinnitus study. And they wanted us to spread the word. And I read the paper and I find out that yes, the 43 people who finished the mindfulness study actually had like pretty impressive reductions in their TFI score, but then I found out that originally there were 670 or something participants who had originally applied for the study or just filled in a form that they might want to take part or something like that. So you're going from over 600 to 40 ish participants. So what happened along the way? People decided, okay, maybe I don't want to take part after all.

Hazel:

Or maybe they started, but they didn't finish. I don't know. I don't think they really even published that information, what happened to all those hundreds of other people. But you're ending up with 43 people who finish the actual mindfulness course. And of course those are gonna be the people who are like most open to mindfulness.

Hazel:

Maybe they've already tried it before. They like it, it works for them. So obviously there's a self-selection mechanism going on there. So what have we learned? Mindfulness can work well for tinnitus if you're really open and receptive to mindfulness. But what about all the people who are not?

Hazel:

It's important with all of these studies to realize that, yes, I'm not saying it doesn't work. You know, I tend to be always very skeptical and critical. I'm not saying it doesn't work for anyone, but let's be clear, it probably works for a specific subgroup of people.

Hazel:

And let's be clear about that when we market these treatments, and not oversell it.
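A small simulation can show why analyzing only the people who finish a study can mislead. The numbers below are made up purely for illustration and are not taken from the mindfulness study discussed above: scores fluctuate randomly with no true treatment effect, but people who happen to improve are assumed to be more likely to stay until the end.

```python
import random

random.seed(1)
applicants = 600                       # hypothetical number of people who sign up
completer_changes, all_changes = [], []

for _ in range(applicants):
    change = random.gauss(0, 12)       # change on a 0-100 questionnaire; negative = improvement
    all_changes.append(change)
    # assumption: people who happen to improve are far more likely to finish
    p_finish = 0.25 if change < 0 else 0.02
    if random.random() < p_finish:
        completer_changes.append(change)

print(f"Completers: {len(completer_changes)} of {applicants}")
print(f"Mean change, completers only: {sum(completer_changes) / len(completer_changes):+.1f}")
print(f"Mean change, everyone:        {sum(all_changes) / len(all_changes):+.1f}")
```

Under these assumptions, the completers appear to improve substantially even though nothing happened across everyone who applied, which is why reporting dropouts and analyzing everyone who started matters so much.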

Inge:

I think that is very important, to not oversell it. So we need to be honest about what the methods are, and that considering the methods we used, considering the patients we included, considering the assumptions we made in the statistical analysis, this and that is the conclusion.

Inge:

And I think that is the only way to help us forward.

Jorge:

I agree. And in the light of these findings, I think a very nice follow-up question would be: are there, let's say, factors that predispose someone to be more receptive to a mindfulness intervention? Because then it could be something that you could design into your study, and then find out if you can do some kind of subtyping.

Jorge:

People who are, for example, more prone to benefit from a mindfulness intervention versus others who perhaps will not. So there is this element of building up science, but it's really an effort and needs to be done well.

Hazel:

Then they would have had to have all the 600-plus people who originally applied fill in maybe a long survey to gather data on certain characteristics, and then really follow everyone along the way and figure out when they drop out and why. Then you have all that data, and you can analyze it and say something meaningful about the group of patients for whom this treatment might work well.

Jorge:

Indeed.

Hazel:

But yeah, that's a lot of effort.

Jorge:

It should be self-correcting too, right? Because I can imagine it's not something that a researcher expects, that only 10 percent, or even less than 10 percent, of your sample will finish the study. But if you know that, in the next round you can account for it and you can be prepared for it.

Jorge:

So I guess that it builds up. But indeed, this needs to be done systematically and thought through.

Inge:

I think that's the important word, systematically. And I think that also what you say like, it is a lot of work, but I think that is the thing we need to acknowledge.

Inge:

So we need science to slow down. And that's also something that is already happening, because of the transition to open science, which might sound like a sidetrack in this podcast, but it is a very important transition in which science is changing and focusing more on societal benefit.

Hazel:

What do you mean by open science?

Hazel:

Does that mean just like openly publishing all your data, having your papers like publicly accessible instead of behind a paywall, those kinds of things?

Inge:

Yeah, so Open Science is actually a collection of things that are happening to change the, maybe so to say, the future of science.

Inge:

We know we have all sorts of challenges at the moment with reproducibility and with sharing and with people not having access to the current literature. And what the open science movement is trying to do, and that is not even a movement anymore, it's something that's happening in science worldwide, is that we're trying to share more data.

Inge:

And we are getting the resources to actually share data. And we are making publications open access, so available for everybody in the world to read and to use. And I think that with this transition also comes the acknowledgement that science is a systematic effort of gathering new knowledge, and that therefore we need to slow down a bit and actually take the time to think about our protocols, and think about what we're going to do and what we are going to study as well.

Inge:

And I think that this transition might also help every type of research to bring more answers for patients.

Jorge:

Just to add to what Inge said, this movement also goes towards an open review process. Usually how this is done is: you submit a paper to a journal, and then the editor of the journal finds two researchers who are somewhat familiar with the scope of your paper.

Jorge:

And then they give some feedback, you improve on it and so on. And then you send it back. And if the reviewer says that's great, then the paper is peer reviewed and it is published in the journal. Nowadays we also have ways in which you can publish a paper as a preprint and you can get feedback from anybody in the world.

Jorge:

So, what you were just describing before: here you are, you don't have, let's say, an academic background, but your feedback on some of those papers you just mentioned is spot on. So in a way, the whole field would benefit if there were a platform for highly engaged patients who can give a completely different outlook.

Hazel:

Yeah.

Jorge:

Other than, let's say, the traditional academic, scientific one, yeah. So in a way, I think this particular aspect of open science is lagging behind what Inge described, but I actually believe that's going to be the future, that anybody will be able to review a paper. And if authors really engage with it and try to improve it, why not?

Inge:

Definitely and that's definitely the way forward and that is what is going to happen. But it will take some time before we are fully there. But there are already preprint servers which you can use and which more and more researchers post their research on.

Inge:

But I think that is not the only way we can improve science. Open science is just one of the tools we can use, but in the end we have to change the way we do science much more from "I'm doing it on my own and I'm trying to solve this problem" to a whole team of people trying to solve one problem or multiple problems.

Inge:

Because then we will increase the clinical knowledge as well as the patient view as well as the methodological and biostatistical knowledge. And I think that's what we need at the moment.

Jorge:

And I would add to that something that I usually talk about with Hazel and Markku. I'm a big fan of what's called PPI, public and patient involvement.

Jorge:

I don't think that's actually been done in the tinnitus field.

Hazel:

Not much. No, I think there's some other health research fields that are much more advanced in terms of involving patients or the general public.

Jorge:

But I think that's also the way forward. Bringing all those different stakeholders together and figuring out how can we move forward together.

Jorge:

It's not going to be an isolated researcher that will be able to do this.

Hazel:

No, exactly. That's where we're always pushing for please ask us for our input. And like I said, at the early stages, we'd love to discuss your research ideas with you.

Hazel:

And it would be wonderful. And some researchers have done that. Will Sedley, he's a neuroscientist studying tinnitus, he has asked us a couple of times to give feedback on a new research idea, even before he started writing the grant proposal. So that's the very early stages.

Hazel:

So that's wonderful. And we're just always pushing for more researchers to do that. But I'm also interested in this aspect of researchers looking at each other's work and getting out of their silos. How do you guys follow that? Okay, so there's a bunch of things going on in tinnitus research at the moment.

Hazel:

There's maybe some positive momentum, in the sense that there just seems to be more going on in general in the last few years, more different tracks of research, and in the Tinnitus Talk community, people are very excited to see Susan Shore's new trial results.

Hazel:

Lenire is still a big topic of discussion because Neuromod just got FDA approval to market their device in the US, a whole new market for them. I've always been very critical about how effective their device is. But, you know, I'm just mentioning a few things that are sort of hot in the patient community.

Hazel:

Do you guys look at those studies? Do you try to follow, in general, what's going on, and it doesn't have to be those specifically that I mentioned, but do you then ever also reach out to those researchers? Is there some cross-pollination or critical review or discussion going on?

Inge:

I think in multiple ways. So first of all, I think most researchers try to collaborate on a regular basis, and obviously the subjects sometimes differ a bit. I, for example, don't do a lot of research in neuromodulation, so that wouldn't be my topic. But I think we reach out. That's also what the two of us are already doing.

Inge:

I think researchers really collaborate, and I think there's something else going on, and I think that's also important to know, and that's that more and more researchers are trying to combine data sets. What we're trying to do is not only collaborate on the different subjects, or reach out to each other, or ask each other for help, or provide help either asked or unasked but also to, in a later phase, when the studies are done, collect the data.

Inge:

So, for example, we are now going to collect data on cochlear implants in tinnitus patients. And then we want to try to get all the data in the world about this subject. And we will combine this data into one large data set and then we will actually evaluate this data. And I think that these are things that are more and more being done.

Inge:

And I think that's the way forward because I think we also should be very critical about collecting new data each and every time.

Hazel:

Oh, yeah. Everyone always wants to collect their own data.

Inge:

Yeah. But quite often, not always obviously, but quite often the data is already out there, just in smaller data sets, and you can ask your new sub-question based on the data of someone else. It sounds strange, but reusing data is also something that we are seeing more and more. In order to do that properly, though, we also need to find better ways of actually logging that data and of having more uniform ways of how we collect the data, how you transfer the data, and how you can actually use data from another clinic.

Inge:

But I think that's something that's going on.

Jorge:

Just to add to what Inge said: I also have, let's say, my niche. I cannot evaluate, for example, genetic data; for that, there's some great research being done. I focus mostly on clinical research, that's, let's say, my expertise. I also really like studies that try to understand tinnitus as a principle, how can I say it?

Jorge:

To understand the mechanism behind it, not just throwing darts blindly at the wall and seeing if they hit the bullseye, but asking whether there is a kind of framework through which we can understand it. So I'm always looking forward to articles in which we can explain, for example, why a subset of people suffer from tinnitus.

Jorge:

The obvious answer is because it's very annoying. But that doesn't answer the question of why a subset of people can live with it and others can't. So there are all kinds of models that can explain that, but I still think the number of models that we have, and that we can put to scrutiny and say, well, I'll test the assumptions and see if it works, and if it doesn't, I have to tweak my model or not, is inadequate.

Jorge:

So you have a lot being done, let's say, from the neuroscience perspective, but from the psychological one, let's say, there's not so much going on. I know the fear-avoidance model, and now there is a new one by the group in Nottingham talking about executive functioning. So there are a few things that I try to keep up with, and that steers the direction that I would like to go in.

Jorge:

That, and of course clinical trials, because there's so much being offered. And if you were only to read those papers, well, problem solved, right? They all kind of work.

Hazel:

It's rare that you see someone publish clinical trial results with a conclusion of, oh so that didn't work at all.

Hazel:

At least now we know this doesn't work, that kind of thing.

Inge:

That's very strange, right? That in itself is very strange.

Jorge:

Because I know that was something that you wanted to ask us, recent publications that we found great. And there is one from a group in Belgium in which they found exactly that.

Jorge:

The conclusion is not that the treatment doesn't work. It's that it didn't work in this cohort, in this population, right? You cannot generalize it to other cohorts. But they were trying transcranial magnetic stimulation, and they did everything correctly with a sample size calculation. It was an RCT and so on and so forth.

Jorge:

And by the end, they say we found no evidence that rTMS in those - rTMS is the acronym for the treatment - in those specific locations where the brain was stimulated has any effect on distress. Brilliant. And it was actually published in a quite renowned journal just because it was a well done study.

Hazel:

Yeah.

Jorge:

Even though there was no finding.

Hazel:

Yeah. Isn't that supposed to be one of the basic tenets of science, that you try to disprove your theory?

Jorge:

Yeah.

Inge:

And the other thing is that each and every outcome should be published. So I think that this is a very good example, and it's actually worrisome that we are surprised whenever we find a null finding.

Inge:

Each and every finding should be published, because otherwise you get a skewed idea of what is a beneficial treatment and what is not. And not only because otherwise we don't find the evidence, but also because otherwise we keep on burdening patients with the same treatments that are not evidence based.

Inge:

That's also why it's, for example, so important for researchers to publish their protocol. I think that earlier in this hour we spoke about the design of the study, which is very important, and the assumptions you make, and the statistics, but it's also very important that protocols are published.

Inge:

Because if a research group publishes their protocol, then later on, when the study is done, you can actually look at the protocol and see if the researchers did what they beforehand thought they were going to do. And also because then you know that the study is actually being published independent of the outcome.

Hazel:

Yeah. In the US, they have ClinicalTrials.gov, right? So if a pharmaceutical or healthcare company knows they want to get FDA approval, they have to first publish the protocol. And then they conduct the study, and then they publish the outcomes, and you can always go back and look at the original protocol.

Inge:

And ClinicalTrials.gov is mainly focused on randomized controlled trials, but we should do this for each and every study, also for observational studies, also for smaller studies. We should make a protocol and publish it, even if it's only on the website of the university you're working at, or whatever place you can put it, GitHub or OSF or wherever you want to put it, but we need to publish the protocols in order to know what the researchers were planning on doing.

Jorge:

And it's not rare to see a paper that has a study design that's different from the protocol. You'd be surprised how often that happens. Then you really need to have a very keen eye on identifying it in those cases.

Hazel:

Yeah, exactly. Yeah. Yeah. So this is the problem also for lay people, right?

Hazel:

A tinnitus sufferer is trying to understand a bit about research and how do you identify those potential issues? Because that's, like you said, if you read the research papers, it's oh apparently there's so many things that work. And we know in reality, people have a very different experience.

Hazel:

And then, you know, I feel kind of sad when I see tinnitus sufferers, patients, get really excited about something new that was published, and I read the paper and I just know, no, it's actually not a good study. But then people are disappointed again, because if you just read the high-level results, it seems like something wonderful.

Jorge:

In a certain way, I think that's also our fault as researchers, that there's not a broader discussion with the lay audience on how to interpret those results. So we could think about how we could support people in interpreting those results, also because there's a lot of jargon and a lot of technicalities and so on.

Jorge:

But in the end, someone reading the paper wants a clear answer. Did this treatment work or not? And how can we make this conclusion as accessible as possible to the maximum number of people?

Inge:

I think there are some very good examples of that, like the Cochrane Collaboration who provides layman's summaries.

Hazel:

So Cochrane is the institute where the, I think one of the things they do is publish these meta studies, right? Where reviews, what do you call it? Systematic reviews? Where someone looks at like, all the studies ever conducted on treatment X and tries to draw some general conclusions.

Inge:

Yeah. And so they're doing very extensive work because gathering all this data and doing that is quite extensive work.

Inge:

The papers are like 30, 40 pages long, and they provide a layman's summary at the beginning of the paper in plain English, but they also translate these more and more into different languages, because I think that's also something we need to consider: this podcast is in English, which is perfect, but there are a lot of people who don't understand English, obviously.

Inge:

So it's also important to not only summarize this data in English, but also in as many other languages as we possibly can. And that's what the Cochrane Collaboration is doing quite well. But I think they are one of the only ones who are doing that.

Hazel:

I think it's a great initiative. Yeah, and I would encourage anyone who's interested in tinnitus research and really wants to understand the general quality of evidence for different treatments, look at the Cochrane Library, I think there's around a dozen or so systematic reviews that were conducted for different tinnitus treatments.

Hazel:

Unfortunately, and disappointingly of course, there's not a single one that has both high-quality evidence and is shown to be very effective. So even CBT, which I think, Jorge, you mentioned earlier has the highest quality evidence of all the tinnitus treatments, the Cochrane Review on that one was pretty mediocre.

Hazel:

I think they said, there's limited evidence of long-term effectiveness or something along those lines.

Inge:

While that's very disappointing, that's something that we should, as researchers, really take responsibility for. So one thing is that if there's no evidence for a certain treatment, that's not something that we can do really something about, but what we can do something about is make our studies more reliable.

Inge:

So that at least, whether there's a systematic review or no systematic review, or you just want to use the data, you can use it. So we really need more reliable studies. And that is also on the researchers' end, something we really need to work on. In an ideal world, patients shouldn't have to go through the hassle of checking whether the design was correct.

Hazel:

Yeah, that's true. Yeah.

Inge:

It sounds strange, right?

Hazel:

Yeah, true.

Inge:

It's like me trying to get a mortgage and then I have to understand each and every detail of all the different types of mortgages. Then you would say like, okay, but that's crazy because you're an epidemiologist, how on earth can you understand that?

Inge:

Correct. So while I encourage patients to understand what they are reading, and I think that Tinnitus Hub is doing a great job in helping patients with that, I also feel a bit ashamed as a researcher, honestly, that patients actually have to go through the hassle of finding out if what they're reading is reliable.

Hazel:

No, that's funny that you say that, but that's why we're trying to at least make that translation. That's why we do things like writing critical blog posts about Lenire, because their marketing material says it's 80 percent effective, or it works for more than 80 percent of people, and I read the paper and they literally mean 80 percent have at least a one point improvement on a hundred point scale.

Hazel:

That's ridiculous. How can you, in good conscience, make a claim that it helps 80 percent of people? Clearly it doesn't help 80 percent of people. But yeah, that's why we feel so strongly about providing that bit of translation for the lay audience.
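To see why a one-point "responder" threshold on a hundred-point scale says very little, here is a small sketch with made-up numbers: scores simply fluctuate between two measurements, with no treatment effect at all, and a large share of people still cross the one-point threshold.

```python
import random

random.seed(7)
n = 1000                         # hypothetical number of participants
fluctuation_sd = 8               # assumed measurement-to-measurement fluctuation on a 0-100 scale

def responder_rate(threshold):
    """Share of people whose score drops by at least `threshold` points from noise alone."""
    improved = sum(
        1 for _ in range(n)
        if random.gauss(0, fluctuation_sd) <= -threshold   # no true effect, just noise
    )
    return improved / n

print(f"'Responders' at a 1-point threshold:  {responder_rate(1):.0%}")
print(f"'Responders' at a 13-point threshold: {responder_rate(13):.0%}")
```

Under these assumptions, roughly half of the simulated people count as "responders" at the one-point cutoff purely from noise, so the threshold behind a headline percentage matters at least as much as the percentage itself.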

Inge:

That should have actually been my answer to your first question of this podcast.

Inge:

Why I'm so convinced that this is the way forward is that we as researchers really have to take responsibility for that, and we shouldn't let people go through the hassle of finding out themselves if those papers and those articles and the data are correct. That is just ridiculous, right?

Jorge:

Yeah. And it's not only going through the papers, but also going through the interventions, right? And the frustration, right? Because I can also imagine that for a lot of people, they think I'm trying all those different treatments. Nothing works. What's wrong with me?

Jorge:

Because I keep reading those papers, and some people are responding to these treatments. Why am I not responding to them? So I think there's also a lot of burden put on people who just go hopping from one intervention to the next.

Hazel:

Yeah.

Hazel:

Are you guys generally optimistic? How do you feel the field is developing right now, if you had to predict which direction we're moving in?

Inge:

I am generally very optimistic. Really, because science is changing, and not only tinnitus science but science in general: with the transition to open science, with more emphasis on collaboration between different disciplines and with patient organizations, and with more and more emphasis on the quality of research, I think we are really getting there.

Inge:

I think that for tinnitus research in general, we should keep on doing that. Obviously the collaborations are already there, but they're not always interdisciplinary, and I think that's something we can focus on to take it one step further. Another thing we can focus on, which would make me even more optimistic, is getting a more accurate idea of what we're actually doing and the data we're actually using.

Inge:

Because currently, at each and every conference, I hear about AI.

Hazel:

And that's going to solve all our problems, right?

Inge:

Yeah, that's the solution for everything. And I think it would be a good idea if we researchers also looked at ourselves and figured out what we actually need in order to answer a certain question.

Inge:

For example, in order to actually make AI work, whatever AI means, because that's also something with a lot of definitions, we need a lot of data, we need to actually know how to work with that data, and we need very reliable statistical methods to do it. And I think we as researchers should be a bit more realistic about what we can actually do, what we can promise people, and how we can actually help patients.

Inge:

And I think that by doing that, I am very optimistic about what is going on and about the future of tinnitus research.

Hazel:

Yeah. How about you, Jorge?

Jorge:

I'm also optimistic, for all the reasons I think I mentioned, and it's something that is awesome to see. In the past five or six years, at least here in Europe, a lot of EU-funded projects were awarded for tinnitus research, including two training networks.

Jorge:

That means there's a lot of fresh blood coming into the field: new ideas, people thinking outside the box. That also goes in the direction of what you're saying, that a lot of research in different directions is popping up; I think that's to a certain extent a reflection of it. And for that, I'm also very thankful to, let's say, the more established researchers who pursue those projects and open up the opportunity for the younger generation to hop in.

Jorge:

I don't have, let's say, a specific line of research that I say that's...

Hazel:

Okay, that was going to be my next question. I know this is the most difficult question, right? There are so many different potential avenues of research. And Jorge, you mentioned that we need to understand more about the basic mechanisms of tinnitus and the networks in the brain that are involved.

Hazel:

I agree; I think that's one of the keys. And Inge already mentioned aggregating data, connecting all these different databases, making it more easily shareable, and all of those things. Again, I think those are big points. But if you could shape the future course of tinnitus research, where do you think we should focus our energies?

Hazel:

Because it's also very easy to waste your energy on avenues that might not yield anything.

Inge:

I do have an answer to that question, but I'm not going to name one clinical subject or another, I'm sorry, because I would really not be able to do that. Actually, you summarized my answer already.

Inge:

I couldn't say we need to focus on treatment X or Y, or on pathophysiology, or diagnosis, or prognosis, or whatever; I'm not able to say that. What we need to focus on is this: whatever course we're taking, whatever subject we're studying, whatever domain we're looking at, we need to focus on the methods we use for doing that type of study at that moment.

Inge:

And the only way to do that is by collaborating. Period. I think that is what we need to focus on.

Jorge:

What I would add is that the way I have seen this, and it's not specific to tinnitus but applies to the open questions in all health research, is that regardless of what chronic condition you're talking about, you never know where the next breakthrough will come from.

Jorge:

So I would not say it's a smart strategy to say: we will put all the money and all the resources into this one type of research. I was really excited about FX-322 and...

Hazel:

Yeah, a lot of people were.

Jorge:

I was really expecting that it would work and that would be the end of it.

Hazel:

Yeah.

Jorge:

And what do you do afterwards, when the drug doesn't work? So I think the smarter strategy is to have, let's say, high-risk, high-reward research, really ambitious projects, but I also think it's very important to have research on how, for example, to support people who are already suffering from it right now.

Jorge:

So you need a combination of all of them, and I don't think that allocating resources is something that's done, let's say, absolutely centrally. Of course, there are funding agencies that are...

Hazel:

There's almost no centralization there, right? I think the only potential body that could maybe do this would be the Tinnitus Research Initiative, because that's the biggest network of tinnitus researchers.

Hazel:

But they haven't, you know, tried to say: let's all come together and define the top three or five questions that we want to answer in the next five or ten years. I haven't seen them take that role.

Inge:

No. And, to be the epidemiologist here again: when picking those different research topics, we also need to be very careful not to do that on a purely subjective basis.

Inge:

But there are very reliable ways, and I don't want to make everything more quantitative, but there are very reliable ways of assessing whether a certain topic, for example, has been studied enough and whether we already know the answer before a new study is done.

Hazel:

This is interesting. So, I would say CBT has been studied enough.

Hazel:

CBT for tinnitus. I think we know everything, you know, well not, you never know everything, but I think we know enough and we can let that be. I don't know if you agree.

Inge:

I'm not going to say if I agree or not, because I don't know. Because that's, my answer is usually I don't know, and in this case I really do not know.

Inge:

But this is a good example, because for me that would entail a quantitative analysis of all the data that's out there, to see whether a new study would actually change the currently available evidence. There are quantitative ways of doing that, and I think we need to incorporate them into how research agendas are set. Usually researchers, patient organizations, and governments try to set agendas for research, and usually that's quite subjective, right? People say what they think is important. But I think we also need some quantitative data in that input, to actually see whether the new subjects really need to be studied.
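
To sketch what such a quantitative check could look like, here is a toy example with invented numbers: a plain fixed-effect inverse-variance meta-analysis of some hypothetical trial results, followed by the same pooling with one additional hypothetical study, to ask how much that extra study would actually move the evidence. It only illustrates the idea; the real machinery Inge is alluding to (for example, trial sequential analysis) is more involved.

```python
import numpy as np

def pool(effects, ses):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                       # inverse-variance weights
    est = np.sum(w * effects) / np.sum(w)  # weighted average effect
    se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled effect
    return est, se

# Hypothetical existing trials: standardized mean differences and their standard errors
effects = [-0.45, -0.30, -0.50, -0.38, -0.42]
ses     = [ 0.12,  0.15,  0.10,  0.14,  0.11]

est, se = pool(effects, ses)
print(f"Current pooled effect: {est:.2f} (95% CI {est - 1.96*se:.2f} to {est + 1.96*se:.2f})")

# Add one more hypothetical study of typical size and see how much the picture changes
est2, se2 = pool(effects + [-0.40], ses + [0.13])
print(f"After one more study:  {est2:.2f} (95% CI {est2 - 1.96*se2:.2f} to {est2 + 1.96*se2:.2f})")
```

In this made-up example the extra study barely shifts the pooled effect or narrows its confidence interval, which is the kind of signal that a question may already be answered well enough that another small trial adds little.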

Hazel:

Good point. Yeah.

Jorge:

I agree, there are good ways to do that. And I think the last time that was done was a paper, almost 10 years old now, that defined, together with a patient organization, the top 10 priorities for tinnitus research.

Hazel:

Yeah, that was James Lind Alliance or something.

Hazel:

I think it's almost 15 years old now. So that's outdated.

Jorge:

Maybe something to consider. Maybe it's time to update it.

Hazel:

Yeah. Yeah.

Inge:

Agree.

Jorge:

And you can do that with all those different stakeholders and just figure out what should we do? What should we be doing for the next 15 years?

Hazel:

Yeah, that would be a great exercise.

Hazel:

Yeah, I think a bit more coordination, and agreeing together on which issues we're going to tackle now, would be good. But you get so many people who are just so excited about their own research idea. I was at the ARO conference in Florida in February; ARO stands for the Association for Research in Otolaryngology. It's a huge conference, and there was actually a lot of attention for tinnitus, which was interesting. There were two different symposiums on tinnitus, not one but two, and there were a lot of posters. So that was good.

Hazel:

In one of the symposiums, there were different researchers presenting, but they had coordinated and there was kind of a common theme. They were presenting different models for both tinnitus and hyperacusis, and it was really interesting because they were trying to make all the models fit together, to unify them.

Hazel:

So that was super interesting. The other symposium was, to me, just a bunch of random things. Someone had developed software that analyzes your facial expressions, and he'd shown that he could measure how distressed you are about your tinnitus through your facial expressions.

Hazel:

And I'm like, I don't know, am I missing the point here? What is this about? So sometimes there are these people just doing random things. That guy was just super excited about his facial analysis software, and that's why he wanted to use it.

Hazel:

I don't know.

Jorge:

I mentioned this previously; it's something I'm really interested in. The really difficult part is aggregating all this data that's being generated by all those different branches of academia, all those different researchers. How can you bring it all together into a model that makes sense, that's cohesive and makes predictions, so that you can say: according to my model, I expect the average tinnitus patient to have these characteristics and this trajectory, and therefore I can intervene with this person in a reliable manner for a certain outcome? I think that's what you're hinting at, that a lot of research is indeed a bit spread out.

Hazel:

Fragmented?

Jorge:

Fragmented. And I think it would be great if we had more and more models that are put to scrutiny. It's not about being right, of course we would like that, but you have to test: some models you can keep, improve, and tweak, and others you just discard.

Jorge:

That's part of the game, but I think that would be a good way forward.

Hazel:

Agreed. I think we're all in agreement on what we have to do, then. I feel like we've covered so much. Is there still anything you guys wanted to discuss?

Inge:

No, the only thing I might want to still say is that I hope that patients keep on being critical. First of all, I'm very ashamed of the fact that patients actually have to get through these articles and then find a way of understanding them.

Inge:

But the second thing is that I hope you only have positive experiences with researchers when you ask questions about their papers. As I said in the beginning, I think we need to assume that people are really trying to do the right thing. Sometimes in research you make certain decisions for certain reasons, and then people can ask you about those decisions and the assumptions you make, and I think that is what research is about.

Inge:

It's about getting other people involved, having them ask you questions, and thereby improving your research. So I'm really hoping that patients are still willing to do that, because I think that's the only way to go. And that we as researchers really try to learn from each other, from the more suboptimal decisions we've made as well as the optimal ones, and really speak with each other: not only discussing the outcomes of studies at conferences, but also really getting into what the right way of doing this is.

Hazel:

That's such a good point, because at all the conferences I've been to, there were maybe one or two panel-discussion-type sessions, but almost always it's just presentation after presentation: I did this study, these were my methods, these were my results.

Hazel:

Next person, I did this study, these were my methods, these were my results. Maybe there's time for one question. Oh, sorry, we don't have time for questions. Let's move on. But real discussion, no, not really.

Inge:

No, and I would like us to move from the concept of sharing the research you did and its outcomes to a kind of conference where there's also room for discussing general questions: why, for example, are we not able to gather all the data on a certain subject in tinnitus research? Why haven't we gotten that done yet? Or maybe even more content-related questions, like why are we getting stuck studying this particular pathway? So instead of only sharing results, which honestly we can also just read, it would be interesting to also share the more conceptual side of why we're doing this and how we can do it.

Inge:

Not that we have to move away from results and outcomes, but I think there should also be a place for that. And at the last TRI event, last year, close to Munich...

Hazel:

Yeah, near Munich.

Inge:

Near Munich, yeah. They actually made a very good effort to do that.

Inge:

So there were debates, and it was more about the general concepts of, for example, collaboration, data sharing, or large datasets.

Jorge:

I think that's also the theme for this year's conference, which is perhaps the one thing that I would add to it. If you think about research, you come up with a new intervention that you think might work.

Jorge:

That by definition means you're going to select a few hundred people and deliver the intervention to them; it's not, let's say, a mass-scale intervention. What's interesting to me, and I don't have an answer for this, is how we can integrate researchers who are, let's say, the spearheads trying to find new solutions with primary caregivers, people who reach out to patients every single day.

Jorge:

What can be done to improve the everyday experience of patients suffering from it? We have companies, for example hearing aid companies or cochlear implant companies, that reach thousands, even millions, of patients per year. How can all those different stakeholders work together, not forgetting, of course, the patient organizations? It's the kind of thing where you ask Hazel and you ask Markku: what's their opinion?

Jorge:

They will tell you directly: there's so much room for combining those efforts and getting the best of what each stakeholder can provide. I think the biggest problem is how to do that in a way that's viable and sustainable. Perhaps more attention should be given to that.

Jorge:

And I hope that's something that will be discussed in this upcoming TRI conference.

Inge:

Yeah, me too. I agree. I fully agree.

Hazel:

All right. Well, I have nothing more to add. I just want to thank you both so much for your participation today. As both of you have reiterated, these kinds of discussions are very necessary.

Hazel:

And we certainly as a patient organization intend to keep asking the difficult questions. Yeah.

Jorge:

Those are the best ones.

Inge:

Please do so. And also ask us if we can help with making data available or with translating evidence because I think that's what we're here for.

Inge:

Researchers are not only here for doing the research, but we're also here for communicating about research.

Hazel:

Yeah, that's a good point. And so what we do now often is we send a Skype message to Jorge like, have you seen this new paper? What do you think? But we could have, of course, like a broader network or panel of people that could help us with that.

Hazel:

Yeah.

Inge:

Please keep on doing that. Because that's what we're here for. That's also what researchers are here for.

Hazel:

Thank you both for being here today.

Inge:

Thank you.

Jorge:

Thank you very much.
