Since the release of our inaugural IPR Intelligence Report in 2017, we have continuously refined our ranking methodology to better capture the realities of PTAB practice. Each year, we revisit our approach with the dual goals of improving accuracy and enhancing the practical value of our rankings for the IP community. A key part of this evolution has been our ongoing dialogue with stakeholders, through direct outreach, surveys, and feedback from practitioners, ensuring that our methodology reflects both statistical rigor and real-world applicability.
The following section outlines the methodology used in the 2025 PTAB Intelligence Report. This year’s analysis covers a five-year window, from July 1, 2020, through June 30, 2025. Our approach integrates both activity- and performance-based measures, with adjustments designed to account for factors such as timing of cases, complexity of proceedings, and role-specific impact. By sharing this framework transparently, we aim to provide readers not only with clear rankings but also with an understanding of the logic underpinning them.
We view this as an iterative process and remain committed to further refinement in future editions. We welcome feedback from practitioners, academics, and other stakeholders, as it plays a crucial role in guiding our ongoing research and ensuring the continued relevance of our work.
General Considerations
We will proceed with the assumption that our readers possess a basic understanding of PTAB proceedings. At their core, these proceedings are trial processes conducted before the Patent Trial and Appeal Board (PTAB), designed to determine the patentability of one or more claims. The grounds for review differ depending on the type of proceeding, but the central goal remains the same: to provide a streamlined forum for testing the validity of issued patents.
Patent litigation shifted dramatically with the passage of the Leahy-Smith America Invents Act (AIA) in September 2011. This landmark legislation introduced a suite of new procedures that reshaped the way patents are challenged in the United States. Among its most significant innovations were Inter Partes Review (IPR) and Post-Grant Review (PGR), which quickly became essential tools for parties seeking a faster, more cost-effective alternative to traditional litigation in federal courts.
- Inter Partes Review (IPR): IPR provides a mechanism for third parties to challenge the validity of an existing patent by petitioning the PTAB to review specific claims. Unlike district court litigation, IPRs are adjudicated by panels of administrative patent judges with technical and legal expertise. This targeted and efficient process has made IPR the dominant form of PTAB proceeding since its introduction.
- Post-Grant Review (PGR): PGR was created to address newly issued patents (post-AIA) and permits challenges on a wider range of grounds, including subject matter eligibility, sufficiency of disclosure, and written description. A PGR petition must be filed within nine months of a patent’s grant, making it a front-loaded tool for early challenges. While less common than IPRs, PGRs play a critical role in shaping the scope and enforceability of recently issued patents.
Together, these proceedings reflect a broader trend toward specialized and administrative resolution of patent disputes, a trend that continues to define litigation strategies and influence the innovation ecosystem.
Numbers for Performance and Success Rankings
In earlier editions of this report, our methodology required the exclusion of attorneys and law firms with limited activity when compiling performance rankings. Over time, however, we have refined our approach to ensure broader coverage and inclusivity. In this year’s edition, we consider all individuals and entities who have participated in at least one concluded IPR or PGR proceeding.
For the calculation of Success Scores and Performance Scores, we limited our analysis to proceedings that had reached a final outcome, excluding pending cases. Specifically, we included cases that fell into one of the following four categories as of September 1, 2025: Terminated–Denied, Terminated–Settled, Terminated–Adverse Judgment, and Final Written Decision Entered. This ensures that the rankings are based on complete and scorable outcomes.
Any attorney, law firm, or company that meets this minimum case requirement qualifies for inclusion in our Success and Performance Rankings. These rankings are reported both overall and by role (petitioners and patent owners), providing a more nuanced view of activity and effectiveness.
As in previous years, we have capped our analysis at the 1,000 most active attorneys and included corresponding law firms and companies. Their scores and rankings across all categories are presented in this report, while the full dataset, with detailed rankings and metrics, is made available in the accompanying Excel spreadsheet for further exploration.
Success Scores for Petitioners, Patent Owners, and Overall
To establish the Performance Ranking of each stakeholder, our process begins with the calculation of their Success Score in every IPR or PGR proceeding. This assessment is performed as soon as a case reaches termination, whether through settlement, denial of institution, or the issuance of a Final Written Decision by a PTAB panel of judges.
Whenever available, we leverage claim-level data to enhance the precision of our evaluation. This thorough approach enables us to more accurately capture the degree of success achieved by each party, rather than relying solely on case-level outcomes.
The framework for point allocation, which serves as the foundation of our scoring and ranking methodology, is presented below in Table 1. This table details the specific manner in which credit is assigned to parties and their representatives, reflecting the outcomes of the proceedings in which they were involved.
| Status | PO Success Points | Petitioner Success Points |
| --- | --- | --- |
| Terminated-Denied | 1 | 0 |
| Terminated-Settled | 0.25 | 0.75 |
| FWD Entered | 1 − Petitioner Success Points | 0.5 + 0.5 × (Claims Invalidated / Claims Listed) |
| Terminated-Adverse Judgment | 0 | 1 |

Table 1 – Point Allocation for the Petitioner and the Patent Owner or their Representatives
In situations where not all claims are invalidated, we recognize the dynamics at play. From the patent owner’s perspective, surviving claims, particularly those tied to ongoing district court litigation, can represent a significant victory, as they preserve the ability to pursue damages for infringement.
At the same time, the petitioner’s representatives also achieve meaningful results by securing invalidation of some claims. Even partial invalidation can shift leverage in negotiations, influencing settlement terms or shaping litigation strategy. To capture this balance, we award petitioners a minimum of 0.5 points for invalidating at least one claim. For multiple invalidated claims, their credit is calculated proportionally using the formula in Table 1.
For example, if a petition challenges 10 claims and three are invalidated in the Final Written Decision, the petitioner’s representatives earn:
0.5 + (0.5 × 3/10) = 0.65 points
while the patent owner’s side receives the complement:
1 – 0.65 = 0.35 points.
This approach ensures both sides are fairly credited for their respective contributions.
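The point-allocation rules from Table 1, including the proportional formula applied when a Final Written Decision is entered, can be sketched in Python. The status labels and function shape below are illustrative, not the report's internal representation; following the text, an adverse judgment credits the petitioner's side in full.

```python
def success_points(status, claims_invalidated=0, claims_listed=0):
    """Return (patent_owner_points, petitioner_points) for one concluded case.

    A sketch of the Table 1 allocation rules; the status strings are
    illustrative labels, not the report's internal codes.
    """
    if status == "Terminated-Denied":
        return 1.0, 0.0
    if status == "Terminated-Settled":
        # Settlements tilt toward petitioners by construction.
        return 0.25, 0.75
    if status == "Terminated-Adverse Judgment":
        # Patent owner concedes invalidation of all claims;
        # full credit goes to the petitioner's side.
        return 0.0, 1.0
    if status == "FWD Entered":
        # Minimum 0.5 for the petitioner, plus proportional credit
        # for the share of challenged claims invalidated.
        pet = 0.5 + 0.5 * claims_invalidated / claims_listed
        return 1.0 - pet, pet
    raise ValueError(f"case is pending or unscorable: {status}")
```

For the worked example above, `success_points("FWD Entered", 3, 10)` yields 0.35 for the patent owner's side and 0.65 for the petitioner's.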
Settlements represent another complex scenario. While they may appear mutually beneficial, they often suggest that the patent owner, after weighing the prior art and petitioner’s arguments, chose to avoid the risk of full adjudication. Strategic considerations, such as parallel district court litigation, can also influence the decision. To ensure consistency, we assign 0.25 points to the patent owner and 0.75 points to the petitioner in all settlement cases, reflecting the general tilt in favor of petitioners in such outcomes.
The “Terminated – Adverse Judgment” outcome constitutes another important category. Here, the patent owner voluntarily concedes invalidation of all claims, and the credit is accordingly assigned entirely to the petitioner’s side, including their attorneys and law firm.
Once success points are calculated at the case level, we average them across all of an attorney’s, law firm’s, or company’s non-pending (concluded) cases—those falling into the categories Terminated-Denied, Terminated-Settled, Terminated-Adverse Judgment, or Final Written Decision Entered.
- Petitioner Success Score: average score across all concluded petitioner cases.
- Patent Owner Success Score: average score across all concluded patent owner cases.
- Overall Success Score: weighted average across all cases (petitioners + patent owners), with normalization to balance representation across both roles.
This system provides a comprehensive and balanced framework that accounts for the complexity of PTAB outcomes while ensuring comparability across practitioners and entities.
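The role-specific averaging described above can be sketched as follows. The record shape (dicts with `role`, `status`, and `points` keys) is an assumption for illustration, and the overall score's cross-role normalization, which the report does not specify, is omitted here.

```python
from statistics import mean

# Statuses that make a case scorable (concluded, non-pending).
CONCLUDED = {
    "Terminated-Denied",
    "Terminated-Settled",
    "Terminated-Adverse Judgment",
    "Final Written Decision Entered",
}

def role_success_score(cases, role):
    """Average success points over one participant's concluded cases
    in a given role ("petitioner" or "patent_owner").

    Returns None when the participant has no scorable cases in that role.
    """
    pts = [c["points"] for c in cases
           if c["status"] in CONCLUDED and c["role"] == role]
    return mean(pts) if pts else None
```

Pending cases are simply skipped, so a participant's score reflects only outcomes that have fully resolved.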
Normalization Using Machine Learning
In the realm of PTAB proceedings, evaluating success is a complex endeavor. A wide range of variables influence case outcomes, including the strength of the legal arguments, the decisions of the presiding judges, the nature of the patents and technologies at issue, and the legal strategies of the attorneys involved. Expert witness testimony and other contextual factors further impact the results.
Given this complexity, assessing the contribution of any single participant, be it an attorney, law firm, or petitioner, within one case can be highly challenging. The interconnected dynamics often obscure the extent to which an individual’s efforts drive the final resolution.
To address this challenge, we leverage large-scale datasets drawn from thousands of PTAB proceedings and apply advanced computational methods, including machine learning algorithms, to isolate and measure individual performance across diverse cases and contexts.
Our Success Score methodology is designed with two guiding principles: fairness and precision. The model minimizes the influence of external variables such as opposing counsel’s strategies, company resources, or the discretion of PTAB judges. By accounting for these external effects, we more accurately reflect the independent contributions of each participant.
In practice, our model applies statistical adjustments and recalibrations to normalize outcomes across different cases, ensuring that the calculated scores are not unduly swayed by factors outside of an individual’s control. Once adjusted scores are derived, we compute the average Success Score for each participant, reflecting their performance across all PTAB proceedings in which they were engaged.
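The report does not disclose its model, but a regression-adjustment approach of the kind described can be sketched as follows: external case factors (for example, one-hot-encoded judge and technology area) predict the raw success points, and the residual is read as the participant's context-adjusted contribution. Everything here, the ridge model, the feature encoding, and the residual interpretation, is an assumption for illustration only.

```python
import numpy as np

def adjusted_scores(X, y, lam=1.0):
    """Regression-adjust raw success points for external factors.

    X : (n_cases, n_features) matrix of external covariates
        (hypothetical encoding of judge, technology area, etc.).
    y : raw case-level success points.
    lam : ridge penalty, an assumed hyperparameter.

    Fits ridge regression w = (X'X + lam*I)^-1 X'y and returns the
    residuals y - Xw, i.e. each case's outcome relative to what the
    external factors alone would predict.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return y - X @ w
```

A positive residual indicates an outcome better than the external context would predict; averaging a participant's residuals then yields a context-adjusted score.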
The result is a transparent and robust evaluation framework that highlights the genuine impact of each attorney, law firm, or company. This methodology provides a fair and meaningful benchmark for comparing performance, offering the IP community actionable insights into the effectiveness of PTAB practitioners.
Activity Scores for Petitioners, Patent Owners, and Overall
We recognize that in-house counsel often prioritize candidates with recent and ongoing PTAB experience over those whose activity occurred years ago. To reflect this reality, our Activity Score methodology places greater weight on recent cases while gradually discounting older ones. For example, an attorney who handled 50 cases exclusively in 2020 would rank lower than an attorney who managed the same number of cases spread across the last five years.
In addition, to calculate the Activity Score, we apply a logarithmic function. This approach helps mitigate extreme differences between entities with very high or very low case counts. By smoothing out these disparities, the Activity Score produces more equitable comparisons and enables a fairer, more balanced assessment of activity levels across attorneys, law firms, and companies.
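One way to combine the two ingredients described above, recency weighting and logarithmic damping, is sketched below. The exponential half-life and the choice of `log1p` are assumptions for illustration; the report states only that recent cases weigh more and that a logarithm smooths extreme case-count differences.

```python
import math

def activity_score(case_years, current_year=2025, half_life=2.0):
    """Recency-weighted, log-damped activity score.

    Each case contributes a weight that halves every `half_life` years
    of age (assumed decay form); the log then compresses the gap
    between very high and very low weighted case counts.
    """
    weighted = sum(0.5 ** ((current_year - y) / half_life)
                   for y in case_years)
    return math.log1p(weighted)
```

Under this sketch, 50 cases concentrated in 2020 score below 50 cases spread evenly over 2021–2025, matching the ranking behavior described in the text.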
Performance Scores for Petitioners, Patent Owners, and Overall
To calculate the Performance Score, we integrate both the Activity Score and the Success Score into a single metric. This combined measure helps address a key challenge: attorneys or firms with very large caseloads may see their performance diluted over time, making direct comparisons against smaller but more selective practices less meaningful.
The Performance Score, therefore, acts as a balanced indicator. It is essentially a weighted average of success and activity, designed to highlight those who not only handle a substantial volume of cases but also deliver consistently strong outcomes. By merging both dimensions, this approach enables companies to identify the most active and most effective professionals in the PTAB arena, attorneys and law firms that bring both experience and proven results to the table.
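Since the text describes the Performance Score as essentially a weighted average of success and activity, it can be sketched in one line. The equal 0.5/0.5 weights below are an assumption; the report does not publish its actual weighting, and both inputs are presumed to be pre-normalized to a common scale.

```python
def performance_score(activity, success, w_activity=0.5):
    """Weighted average of a normalized Activity Score and Success
    Score. The default equal weighting is illustrative only."""
    return w_activity * activity + (1 - w_activity) * success
```

Shifting `w_activity` toward 1 rewards sheer volume; shifting it toward 0 rewards selective practices with strong outcomes.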
Performance and Activity Rankings for Petitioners, Patent Owners and Overall
We ranked all companies, law firms, and attorneys according to their Activity Scores. For the Performance Rankings, however, only those entities with at least one concluded and scorable case in the relevant category were included. As a result, some companies, law firms, and attorneys that were active during the study period do not appear in the performance rankings because their cases were either still pending or not eligible for scoring.
Performance of Judges
In our analysis of PTAB Administrative Judges, we focused on cases where a single judge was primarily responsible for authoring the Final Written Decision. We applied the same methodology described earlier, considering only cases with statuses of Terminated-Denied or Final Written Decision Entered. Cases that were settled were excluded, as judges do not influence settlements between parties.
Each judge was assigned a score out of 100, reflecting the extent to which their rulings tended to favor the petitioner. Our report covers the full roster of PTAB Administrative Judges for 2025, providing insights into both their activity (number of proceedings) and performance, particularly with respect to petitioners and patent owners. A Performance Score near 100 indicates that a judge more frequently ruled in favor of the petitioner.
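A judge's petitioner-favorability score can be sketched as the mean petitioner success points across that judge's scorable decisions, rescaled to 100. The simple-mean mapping to a 0–100 scale is an assumption; the report specifies only the scale and its interpretation.

```python
def judge_score(petitioner_points):
    """Score out of 100 measuring how often a judge's rulings favored
    the petitioner.

    `petitioner_points` holds the petitioner success points (per the
    Table 1 rules) from that judge's Terminated-Denied and Final
    Written Decision cases only; settlements are excluded upstream.
    """
    if not petitioner_points:
        return None
    return 100.0 * sum(petitioner_points) / len(petitioner_points)
```

A judge who denied institution in one case (0 points) and invalidated all challenged claims in another (1 point) would score 50 under this sketch.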