Handling Conflicts of Interest in Abstract Review Processes

If you’re a researcher, you’ve probably submitted abstracts to conferences and wondered how the selection process works. More importantly, you might have questioned, at some point in your journey, if the system is truly fair.

Conflicts of interest in abstract review are more common than the event management industry tends to acknowledge, and acknowledging that they exist is only the first step. These are not minor hiccups that can be brushed aside; they are serious issues that can compromise the quality and credibility of academic gatherings.

For conference organizers who have struggled with this challenge, often after several unsuccessful attempts at building a transparent system, implementing proper conflict of interest protocols can feel like a last resort. Add to that the complications of small research communities, where everyone seems to know everyone else.

Here is everything you need to know about handling conflicts of interest in abstract review processes, and why getting this right matters more than you think.

Understanding What Actually Constitutes a Conflict of Interest

When a reviewer’s judgment might be influenced by factors other than the scientific merit of the work, you’ve got a conflict of interest. Sounds fairly straightforward, right? But the reality is far more nuanced. The most obvious conflicts are easy to spot: you can’t review your own student’s work, your collaborator’s submission, or a paper from someone at your institution. These are clear-cut cases that most conference systems flag automatically.

But what about the gray areas?

The academic world runs on relationships, and those relationships carry the potential for conflicts that are not always obvious at first glance.

Don’t Take the Short Route with Conflict of Interest Disclosure

Many conferences rely heavily on self-disclosure, asking reviewers to flag submissions where they have a conflict. This sounds like a simple solution, and it is better than doing nothing at all. However, self-disclosure has significant limitations.

Your reviewers might not recognize subtle conflicts. They might forget about past collaborations. Or they might genuinely believe they can remain objective despite a conflict.

Some reviewers might even intentionally fail to disclose conflicts if they want to influence the outcome for or against certain submissions.

It is important to note that simple does not mean effective or sufficient. Conferences are consistently discovering that basic disclosure policies miss a substantial number of conflicts that automated systems can catch.

But here’s the thing: even the best automated system won’t catch everything. A reviewer might have a personal grudge against an author from a heated exchange at a previous conference. They might stand to benefit financially should the research not be published.

Humans know other humans best. No algorithm, however sophisticated, can detect and flag scenarios like these without human input.

Technology is Not Entirely Useless in Flagging Conflicts of Interest

Conference management systems have come a long way in the past decade. Modern platforms can automatically detect many types of conflicts by analyzing bibliographic databases, checking co-authorship relationships and even examining citation patterns. A reviewer who has cited an author’s work extensively might show favoritism. One who has never cited them might hold opposing theoretical views. These systems can process thousands of potential reviewer-submission pairs in minutes, flagging obvious conflicts and pointing out questionable cases for human review.

The conflict screening that once cost conferences countless hours of manual checking can now be handled by algorithms in minutes.
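
To make the idea concrete, here is a minimal sketch in Python of the kind of rule such a checker applies. The Person, Submission and detect_conflicts names are illustrative assumptions rather than the API of any particular conference platform, and a real system would pull co-authorship data from bibliographic databases instead of a hand-typed set.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Person:
        name: str
        institution: str

    @dataclass
    class Submission:
        title: str
        authors: list[Person]

    def detect_conflicts(reviewer: Person, reviewer_coauthors: set[str], submission: Submission) -> list[str]:
        """Return human-readable conflict flags for one reviewer-submission pair."""
        flags = []
        for author in submission.authors:
            # Co-authorship conflict: the author appears in the reviewer's co-author list.
            if author.name in reviewer_coauthors:
                flags.append(f"has co-authored with {author.name}")
            # Same-institution conflict: the reviewer shares an affiliation with an author.
            if author.institution == reviewer.institution:
                flags.append(f"shares an institution with {author.name}")
        return flags

    # Example: one reviewer screened against one submission.
    reviewer = Person("Dr. A. Rivera", "University of Somewhere")
    coauthors = {"Dr. B. Chen", "Dr. C. Okafor"}
    submission = Submission("A Study of Things", [
        Person("Dr. B. Chen", "Another University"),
        Person("Dr. D. Patel", "University of Somewhere"),
    ])
    print(detect_conflicts(reviewer, coauthors, submission))
    # -> ['has co-authored with Dr. B. Chen', 'shares an institution with Dr. D. Patel']

Real platforms run checks like this across every reviewer-submission pair and layer fuzzier signals, such as citation overlap, on top of them.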

Still, technology isn’t always foolproof.

Someone needs to configure these systems properly. Someone needs to decide what threshold of co-authorship constitutes a conflict. Someone needs to review the cases where the algorithm isn’t sure. And someone needs to update the system as new types of conflicts emerge. Technology makes the job easier, not obsolete.

Building a System That Actually Helps You Identify Conflicts of Interest

Conferences of every size, from small regional meetings to massive international gatherings, are now implementing formal conflict of interest policies, which shows how real the need is. Many conferences in STEM fields now use some form of automated conflict detection, and that has helped the academic community maintain high standards of review integrity despite growing submission numbers.

Manual checking alone catches the obvious conflicts, but its scope is limited to the most blatant cases. Meanwhile, as research communities have become more connected through social media and open science initiatives, the potential for subtle conflicts has climbed sharply. For anyone organizing a conference, this means putting multiple layers of conflict detection in place that go beyond basic disclosure forms.

So, this is your sign: If you are running a conference and looking to give your review process a complete overhaul, don’t forget the fundamentals.

    • Be specific: Don’t just say ‘disclose conflicts.’ Tell reviewers exactly what counts as a conflict. Give them examples. Show them the gray areas where they should err on the side of caution. Then layer your defenses.
    • Set up conference management software: Use a platform that automatically flags clear conflicts based on co-authorship, institutional affiliation and recent collaborations.

How recent is recent? That depends on the norms of your field. A five-year window is common, but some rapidly moving fields use three years and others extend to seven or more.
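
As a rough illustration of how that window can be applied in software, the sketch below treats a past collaboration as a conflict only if it falls inside a configurable lookback period. The five-year default and the function name are assumptions made for the example, not a standard.

    from datetime import date

    # Lookback window for co-authorship conflicts; tune this to your field's norms
    # (for example, three years for fast-moving fields, seven or more for slower ones).
    COLLABORATION_WINDOW_YEARS = 5

    def is_recent_collaboration(collab_year: int, window_years: int = COLLABORATION_WINDOW_YEARS) -> bool:
        """A past collaboration counts as a conflict only if it falls inside the window."""
        return date.today().year - collab_year <= window_years

    print(is_recent_collaboration(date.today().year - 2))   # True: two years ago is inside a five-year window
    print(is_recent_collaboration(date.today().year - 10))  # False: ten years ago is outside it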

Have program committee members or area chairs manually review flagged cases to make final decisions. Automated systems can both miss subtle conflicts and flag false positives, so human oversight matters.

Reviewers sometimes only discover a conflict after accepting a review assignment. Maybe they recognize the work after reading the abstract. Maybe they suddenly remember a past collaboration. Or maybe the writing style gives away the author despite anonymization. They need an easy way to recuse themselves at any point without penalty. Make the process simple and judgment-free. Don’t make them explain themselves or justify their decision. Just let them step back.
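
One way to support this in software is to make recusal a first-class, no-questions-asked action on the review assignment itself. The sketch below is an assumption about how such a workflow could be modeled, not a description of any specific platform; note that no reason is ever recorded.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Status(Enum):
        ASSIGNED = auto()
        FLAGGED = auto()    # automated check raised a possible conflict; routed to a chair
        RECUSED = auto()    # reviewer stepped back; no justification stored
        COMPLETED = auto()

    @dataclass
    class ReviewAssignment:
        reviewer: str
        submission_id: str
        status: Status = Status.ASSIGNED

        def recuse(self) -> None:
            # Available at any point before the review is completed, with no penalty flag.
            if self.status is not Status.COMPLETED:
                self.status = Status.RECUSED

    # The reviewer recognizes the work halfway through and steps back with a single action.
    assignment = ReviewAssignment("reviewer_42", "ABS-0117")
    assignment.recuse()
    print(assignment.status)  # Status.RECUSED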

Human Oversight Cannot Be Ignored

For any conference organizing committee, technology can only take you so far. Even the most sophisticated conflict management system in the world is not yet a replacement for human judgment. Machines and software do not understand professional promises or ethical commitments. Reviewers therefore need to understand why conflicts of interest matter in the first place; the fundamental step is helping them take personal responsibility for maintaining the integrity of their reviews.

Conference organizers should also provide clear guidance and training materials for reviewers.

What you can do: A simple video or written guide explaining the conflict of interest policy can significantly improve compliance.

You may also show them examples of different conflict types and walk through the disclosure process step by step. Make this training mandatory before reviewers can even access their assignments.

Some reviewers are sure to grumble about the extra time, but most will appreciate the guidance.

Conflicts of Interest in Modern Conferences

In the last decade alone, research has become both more competitive and more collaborative. Researchers and reviewers face pressure from every direction. Funding is scarce and highly selective, which pits competitors against each other more often than ever.

The advent of social media has made professional relationships more visible, and more complicated, than ever. All of this makes managing conflicts of interest harder than it has ever been. Yet the integrity of the peer review process depends on getting it right.

Conferences that invest in proper conflict of interest management protect their own reputation and the broader credibility of academic research. Most importantly, they honor and prioritize the work of their submitters and participants in accordance with the highest ethics.

It takes time.

It takes money.

And it takes sustained attention, year after year, to truly get conflicts of interest under control. But the alternative is a review process that gradually loses the trust of the research community. Your next conference submission might depend on whether you take these issues seriously.

The abstract your participants submit deserves a fair review based on its scientific merit alone. And if you’re organizing a conference, your attendees and authors are counting on you to build a system worthy of their trust. Don’t let conflicts of interest undermine the incredible work researchers are doing every day. Work with us at Dryfta to know how you can use your human insight and our automated intelligence to host better conferences in 2026. Sign up for a free demo today.