Most biological evaluation deficiency findings are preventable. The testing is done. The data exists. But the documentation fails to present it in a way that satisfies the reviewer.

The deficiency letter comes because the documentation failed to present that work clearly enough. A justification was written too generically to support the conclusion it claims, a risk assessment mentioned a hazard but never traced it back to the biological evaluation plan, or the BER covered cytotoxicity but never explained why hemocompatibility was excluded for a device with indirect blood contact.

These are documentation problems. And documentation problems are exactly what a structured pre-submission review is designed to catch. Whether you call it an ISO 10993 gap analysis, a biological evaluation gap analysis, or a biocompatibility gap analysis, the purpose is the same: a systematic check of your documentation before a reviewer sees it.

The first-pass problem

When a company sends a biological evaluation package to an external consultant for review, the consultant's first task is a documentation review. They read through the BEP, the BER, the supporting test reports, the TRA, and the chemical characterization data. They assess what is present, what is missing, and what is incomplete. They check whether the BEP's stated scope matches what the BER actually addresses, and verify whether every endpoint listed in the evaluation plan has a corresponding section in the report.

This first-pass work is necessary, but it is also systematic and repeatable. It involves checking structural completeness, cross-referencing sections for consistency, assessing whether justifications are specific enough to survive scrutiny, and verifying that conclusions are supported by the data they reference. It takes hours, and those hours cost $200 to $400 each depending on the consultant. By the time the consultant gets to the part that requires device-specific scientific expertise (evaluating whether a specific extraction solvent is chemically compatible with a specific polymer, whether the TRA methodology is appropriate for the material composition, whether the testing strategy will satisfy a specific notified body), a significant portion of the engagement has already been spent on work that does not require their specialized knowledge.

Internal teams face a different version of the same problem. The person who wrote the BER is often the person reviewing it. They know what they meant, so they read what they meant rather than what they actually wrote. Gaps that would be obvious to a fresh set of eyes are invisible to the author. For example, Section 4 says the device has prolonged mucosal contact, but the endpoint table in Section 7 evaluates for limited contact. The author knows the table is wrong and plans to fix it before submission, but six weeks later it ships as-is because no structured check caught the inconsistency.

What a pre-submission gap analysis actually catches

A gap analysis evaluates your documentation across two dimensions: structural completeness and substantive quality.

Structural completeness is whether required sections exist, whether every endpoint in your evaluation plan has a corresponding section in the report, and whether your documents reference each other consistently. This is the checklist layer.
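The plan-versus-report cross-check in that checklist layer is mechanical enough to sketch as a set comparison. This is an illustrative sketch, not the product's implementation; the endpoint names are invented, not drawn from any real package:

```python
# Hypothetical sketch: does every endpoint in the evaluation plan have a
# corresponding section in the report? Names below are illustrative only.
PLAN_ENDPOINTS = {"cytotoxicity", "sensitization", "irritation",
                  "acute systemic toxicity", "material-mediated pyrogenicity"}

REPORT_SECTIONS = {"cytotoxicity", "sensitization", "irritation",
                   "acute systemic toxicity"}

# In the plan but absent from the report: a structural gap.
missing = sorted(PLAN_ENDPOINTS - REPORT_SECTIONS)
# In the report but never planned: a scope inconsistency worth flagging too.
unplanned = sorted(REPORT_SECTIONS - PLAN_ENDPOINTS)

for endpoint in missing:
    print(f"GAP: endpoint '{endpoint}' listed in the BEP has no BER section")
```

Trivial as it looks, this is exactly the kind of check that an author re-reading their own document reliably skips.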

Substantive quality goes deeper. It evaluates whether your justifications are device-specific or generic; whether your conclusions are supported by the data they reference; whether your chemical characterization covers the relevant polarity range and documents the rationale for solvent selection; whether your TRA calculates a margin of safety individually for each identified constituent; and whether your risk management integration includes specific biological hazards with severity and probability estimation, or merely mentions ISO 14971 in the introduction. This is the layer that catches documentation that exists but would not survive a reviewer's scrutiny.

What a gap analysis does not do is make device-specific scientific determinations. It cannot tell you whether a specific extraction solvent is chemically compatible with your specific polymer at a given temperature, or determine whether a particular margin of safety threshold is adequately conservative for a specific clinical application. Those decisions require a qualified expert who understands your device, your materials, and your regulatory context. But the gap analysis evaluates whether your documentation presents and justifies those decisions in a way that a reviewer can assess. Specifically:

Scope gaps. For example, your BEP states the device has prolonged mucosal membrane contact and lists the corresponding endpoints from Table A.1, but your BER evaluates for limited contact duration without explaining the discrepancy. Or your BEP references biological equivalence as a strategy to reduce testing, but the BER never presents the equivalence argument or documents the comparison device. These are not missing sections. The sections exist, but the scope they evaluate does not match the scope the plan established.

Weak justifications. For example, your BER excludes genotoxicity testing with the statement "the device materials are well characterized and biocompatible." That is a conclusion presented as a justification. A gap analysis flags the circular reasoning and identifies what a defensible exclusion rationale needs to include: reference to chemical characterization data, identified extractable compounds, and a documented assessment of genotoxic potential.

Insufficient risk assessment methodology. For example, your TRA identifies six extractable compounds but only calculates margins of safety for four of them, addressing the other two with "these compounds are present at trace levels and are not expected to pose a toxicological concern." A gap analysis flags the missing MOS calculations and the generic safety claim, and identifies that every compound above the analytical evaluation threshold requires individual quantitative assessment with tolerable exposure values from recognized sources.
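The per-compound arithmetic behind those missing calculations is not complicated, which is what makes skipping it hard to defend. A minimal sketch, assuming the ISO 10993-17 convention of margin of safety as tolerable exposure divided by estimated exposure; the compound names and values are invented for illustration, and real tolerable exposure values must come from recognized sources:

```python
# Hedged sketch: per-compound margin of safety, MoS = TE / EE.
# All names and numbers below are illustrative, not real tox data.
compounds = {
    # name: (tolerable_exposure_ug_per_day, estimated_exposure_ug_per_day)
    "compound_A": (120.0, 4.0),
    "compound_B": (50.0, 60.0),   # exposure exceeds TE -> MoS < 1, a finding
}

for name, (te, ee) in compounds.items():
    mos = te / ee
    flag = "OK" if mos >= 1.0 else "REVIEW"
    print(f"{name}: MoS = {mos:.1f} [{flag}]")
```

The point the gap analysis enforces is coverage: one row per identified compound above the threshold, no compounds waved through with a generic sentence.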

Inadequate chemical characterization rationale. For example, your extractables study used a single polar solvent without documenting why non-polar extraction was omitted, or your AET is stated as a value without showing the calculation methodology. A gap analysis scores the completeness of your extraction rationale and flags where the documentation falls short of what ISO 10993-18 and FDA draft guidance expect.
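Showing the AET methodology means showing the inputs, not just the result. A hedged sketch of the general form in ISO 10993-18, where a dose-based threshold is scaled by the extraction and clinical-use parameters and optionally reduced by an uncertainty factor; every number below is invented for illustration:

```python
# Hedged sketch of AET derivation (general form per ISO 10993-18):
# AET (ug/mL) = DBT (ug/day) * devices_extracted
#               / (extract_volume_mL * devices_per_day * uncertainty_factor)
# All parameter values in the example call are illustrative only.
def aet_ug_per_ml(dbt_ug_per_day: float,
                  devices_extracted: int,
                  extract_volume_ml: float,
                  devices_per_day: float,
                  uncertainty_factor: float = 1.0) -> float:
    return (dbt_ug_per_day * devices_extracted) / (
        extract_volume_ml * devices_per_day * uncertainty_factor
    )

# e.g., DBT of 1.5 ug/day, 3 devices extracted in 100 mL,
# 1 device used per day, uncertainty factor of 2
print(round(aet_ug_per_ml(1.5, 3, 100.0, 1.0, 2.0), 4))
```

A BER that states only the final AET value leaves the reviewer unable to check any of these inputs, which is precisely the documentation gap being flagged.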

Unverifiable claims. For example, your TRA references "the extractable and leachable study in Appendix C," but there is no Appendix C, or the Appendix C that exists covers a different study. The reviewer cannot verify the claim, so the finding reads as insufficient supporting data. These errors are easy to introduce during revision and nearly impossible for the author to catch through their own re-reading.

Standard alignment. For example, your BER was written against ISO 10993-1:2018. The 2025 revision introduced new requirements around foreseeable misuse, biological equivalence, and risk integration. A gap analysis identifies where existing documentation meets 2018 requirements but falls short of 2025 expectations, including whether your terminology has been updated, whether your evaluation rationale is risk-based or still structured as a Table A.1 checklist, and whether foreseeable misuse is addressed at all.

Regulatory framework coverage. For example, your documentation was written primarily for an FDA submission. If you also need CE marking, the same package may need to address EU MDR-specific requirements that the FDA pathway does not emphasize: CMR substance thresholds under Annex I Section 10.4.1, GSPR traceability, and the three-phase equivalence framework from MDCG 2020-5. A gap analysis scores your documentation against both frameworks and flags where a justification satisfies one but falls short of the other.

Where the expert comes in

None of the above replaces a qualified regulatory professional or toxicologist. The gap analysis evaluates whether your documentation adequately presents and justifies the decisions you made, but it does not make those decisions. It flags that your equivalence argument does not address manufacturing differences or release characteristics, but it does not tell you whether the comparison device you chose is scientifically appropriate for your specific clinical application. It flags that your TRA uses a tolerable exposure value from a non-authoritative source, but it does not determine what the correct value should be for your specific compound in your specific exposure scenario. Those require someone who understands the science, the device, and the regulatory expectations of the body reviewing your submission.

The value of a pre-submission gap analysis is that it changes what the expert spends their time on. Instead of spending the first several hours reading through documentation and cataloging structural gaps, weak justifications, and cross-document inconsistencies, they receive a scored report that has already identified those issues and can go straight to the judgment calls: Is this equivalence argument defensible? Is this TRA methodology appropriate? Will this testing strategy hold up?

For consultants managing multiple clients, the gap analysis also provides consistency. Every package gets the same evaluation framework applied regardless of how many submissions the consultant is juggling that week. The structured first pass does not get less thorough because the consultant is busy.

Who benefits most

Internal regulatory teams get an independent check on their own work before it goes out the door. The gap analysis catches the blind spots that familiarity creates and functions as a second reviewer that does not share the author's assumptions about what the document says.

Regulatory consultants get a pre-scored baseline for every client package. Their expert hours go toward scientific judgment rather than the initial documentation review, and turnaround improves because the structured work has already been done.

Small and mid-size manufacturers who cannot justify $5,000 to $8,000 for a full consultant engagement on every submission get a meaningful quality check at a fraction of that cost. The gap analysis does not replace the consultant, but it tells the company whether they need one and what specifically to ask about.

Companies facing a deficiency response can run their revised documentation through a gap analysis before resubmission to verify that every deficiency has been addressed and that the revisions have not introduced new inconsistencies. This is where common deficiency patterns tend to repeat: the fix for one finding creates a new gap somewhere else in the package.

When to run a gap analysis

The highest-value timing is after your documentation is in near-final form but before submission. Running it too early, when the BER is still a rough draft, generates findings that will resolve themselves through normal revision. Running it after submission means the notified body finds the gaps instead of you.

The second highest-value timing is immediately after a deficiency response has been drafted, before resubmission. Deficiency responses are written under time pressure, and the focused effort on fixing specific findings sometimes introduces new problems elsewhere in the package.

For the ISO 10993-1:2018 to 2025 transition, running your existing documentation through a gap analysis now gives your team a prioritized list of what needs updating, rather than working from the full standard and trying to figure out where you stand.

Another high-value scenario is when an existing device file is coming up for notified body review and needs to be checked against the latest standard revision. Over time, biological evaluation files accumulate prior testing reports, outdated justifications, and references to superseded guidance. A gap analysis scrubs the documentation against current requirements and flags where prior testing or rationale no longer meets the standard, so your team can address those gaps before the reviewer opens the file.

What this means for the industry

Biological evaluation documentation has grown more complex with every revision of ISO 10993-1. The 2025 standard added new requirements around risk integration, foreseeable misuse, and biological equivalence. MDCG guidance documents layer EU-specific expectations on top of the standard. The FDA's 2023 guidance adds its own interpretive layer.

The result is that even experienced professionals miss things. Not because they lack the knowledge, but because the documentation surface area is large enough that structural gaps, inconsistencies, and unsupported claims slip through review. A structured pre-submission gap analysis does not replace the knowledge. It makes sure the knowledge actually made it onto the page.

If you are preparing a biological evaluation package for submission and want to see how your documentation holds up, request a gap analysis or get in touch to discuss how it fits into your review process.

Find the gaps before the reviewer does.

BioEvalPro evaluates your biological evaluation documentation for structural completeness, justification strength, cross-document consistency, and regulatory alignment. Every section is scored with specific findings and remediation guidance. Use it alongside your expert review or as your structured first pass.

Request Gap Analysis

Have questions? Get in touch