Drawing on our previous work covering CAFC, PTAB litigation, patent and trademark prosecution, ITC Section 337 investigations, and ANDA matters, we have continued to work closely with practitioners and other members of the IP community to develop practical and balanced approaches for evaluating activity and performance in district court patent litigation. Our methodology is not static. It is reviewed and adjusted on an ongoing basis, informed by our expanding data set, real-world experience, and feedback received from attorneys, firms, and other stakeholders across the patent ecosystem.
General Considerations
Before outlining the details of our ranking methodology, we assume readers are familiar with the basics of patent litigation in U.S. District Courts. Key strategic considerations in this area often include choosing the appropriate venue, addressing challenges before the Patent Trial and Appeal Board (PTAB), and deciding whether to coordinate International Trade Commission (ITC) Section 337 investigations with district court actions (see, for example, Patexia Insight 44). While our IP Insight Intelligence Reports have examined a wide range of topics, including PTAB proceedings, ITC Section 337 investigations, ANDA litigation, and appeals to the Court of Appeals for the Federal Circuit (CAFC), this report is focused solely on district court patent litigation. In such cases, a plaintiff typically initiates litigation seeking injunctive relief or damages, alleging that a defendant has manufactured, imported, sold, or used products that infringe the plaintiff’s patents. This report offers a detailed review of these proceedings, providing insight into the practical and procedural aspects of district court patent cases.
Scoring and Refining PACER Status Classifications
Patent cases that concluded during the study period are grouped according to the official case status categories used by PACER (Public Access to Court Electronic Records). These classifications help clarify how the cases reviewed in this report were resolved and provide important context for interpreting their outcomes (see Table 7.1). They also form the basis for analyzing and comparing the different ways patent litigation matters reached resolution throughout the study period.
Table 7.1 – PACER Outcome Classifications for patent cases
| Listed PACER Outcome |
| --- |
| Judgment – Judgment on Consent |
| Judgment – Motion Before Trial |
| Judgment – Court Trial |
| Judgment – Judgment on Default |
| Judgment – Other |
| Dismissed – Voluntarily |
| Dismissed – Settled |
| Dismissed – Other |
| Dismissed – Lack of Jurisdiction |
| Dismissed – Want of Prosecution |
| Transfer/Remand – MDL Transfer |
| Statistical Closing |
| Non-reportable Closing |
Although PACER assigns a wide range of status labels to patent cases filed between July 1, 2020, and June 30, 2025, many of these labels do not clearly indicate how a case ultimately concluded or which party prevailed. To overcome this shortcoming, we designed a unified scoring framework that allows outcomes to be assessed in a consistent and balanced manner across all participants. This framework was developed through:
- Manually reviewing case judgment decisions across various PACER categories.
- Reassessing and reclassifying PACER outcomes into a more practical framework and point system that better reflects the real-world results for parties, their legal representatives, and judges.
The finalized scoring system, defined in part by feedback from experienced practitioners, is presented in Table 7.2 along with a detailed explanation of how the scores are applied. Input from attorneys across our network helped ground the methodology in real-world litigation practice and highlighted considerations that are not always evident from docket data alone. We are grateful to the legal professionals who shared their time, insights, and practical experience in support of this work.
Table 7.2 – Patent Case Outcomes and Scores Applied
| Outcome | Plaintiff | Defendant | Plain. Atty/Firm | Def. Atty/Firm | Judge |
| --- | --- | --- | --- | --- | --- |
| Judgment – Defendant Wins | 0 | 1 | 0 | 1 | 0 |
| Judgment – Plaintiff Wins | 1 | 0 | 1 | 0 | 1 |
| Judgment – Settlement (Confidential) | 0.5 | 0.5 | 0.5 | 0.5 | – |
| Judgment – Partial Win for Both | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| Judgment – Voluntarily Dismissed by Party(ies) | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Voluntarily | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Settled (no IPR petition) | 0.5 | 0.5 | 0.5 | 0.5 | – |
| Dismissed – Settled (IPR denied) | 0.75 | 0.25 | 0.75 | 0.25 | – |
| Dismissed – Settled (IPR, settled pre-Institution) | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Settled (IPR, settled post-Institution, pre-trial) | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Settled (IPR, < 50% claims survive) | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Settled (IPR, > 50% claims survive) | 0.75 | 0.25 | 0.75 | 0.25 | – |
| Dismissed – Other | 0.25 | 0.75 | 0.25 | 0.75 | – |
| Dismissed – Lack of Jurisdiction | 0 | 1 | 0 | 1 | – |
| Dismissed – Want of Prosecution | 0 | 1 | 0 | 1 | – |
| Transfer/Remand – MDL Transfer | – | – | – | – | – |
| Statistical Closing | – | – | – | – | – |
| Non-reportable Closing | – | – | – | – | – |
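For readers who work with this data programmatically, the sketch below encodes the Table 7.2 rules as a simple lookup. It is purely illustrative rather than our production code: the outcome labels are simplified, attorney and firm scores mirror their clients' and are therefore omitted, and `None` marks roles that are not scored for a given outcome.

```python
# Illustrative encoding of the Table 7.2 scoring rules.
# Attorney/firm scores mirror their clients', so only the
# plaintiff, defendant, and judge values are stored here.
# None marks outcomes that are not scored for that role.
OUTCOME_SCORES = {
    # outcome label (simplified): (plaintiff, defendant, judge)
    "Judgment - Defendant Wins":                      (0.0, 1.0, 0.0),
    "Judgment - Plaintiff Wins":                      (1.0, 0.0, 1.0),
    "Judgment - Settlement (Confidential)":           (0.5, 0.5, None),
    "Judgment - Partial Win for Both":                (0.5, 0.5, 0.5),
    "Judgment - Voluntarily Dismissed":               (0.25, 0.75, None),
    "Dismissed - Voluntarily":                        (0.25, 0.75, None),
    "Dismissed - Settled (no IPR petition)":          (0.5, 0.5, None),
    "Dismissed - Settled (IPR denied)":               (0.75, 0.25, None),
    "Dismissed - Settled (IPR, pre-institution)":     (0.25, 0.75, None),
    "Dismissed - Settled (IPR, post-institution)":    (0.25, 0.75, None),
    "Dismissed - Settled (IPR, <50% claims survive)": (0.25, 0.75, None),
    "Dismissed - Settled (IPR, >50% claims survive)": (0.75, 0.25, None),
    "Dismissed - Other":                              (0.25, 0.75, None),
    "Dismissed - Lack of Jurisdiction":               (0.0, 1.0, None),
    "Dismissed - Want of Prosecution":                (0.0, 1.0, None),
    # Non-scorable administrative closures:
    "Transfer/Remand - MDL Transfer":                 (None, None, None),
    "Statistical Closing":                            (None, None, None),
    "Non-reportable Closing":                         (None, None, None),
}

def score(outcome: str, role: str):
    """Return the Table 7.2 score for a role, or None if not scorable."""
    plaintiff, defendant, judge = OUTCOME_SCORES[outcome]
    return {"plaintiff": plaintiff, "defendant": defendant, "judge": judge}[role]
```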
All five PACER “Judgment” categories (highlighted in yellow in Table 7.1) required manual examination, as the category labels alone often failed to clearly indicate whether the outcome favored the plaintiff or the defendant. To ensure accurate classification, we reviewed the underlying case documents for every matter in these categories that was filed between July 1, 2020, and June 30, 2025, and had a recorded decision in PACER as of December 1, 2025, when the review was conducted.
The remaining PACER classifications did not necessitate manual review for the following reasons:
- Fair and Consistent Scoring: Cases in these categories could be scored accurately using the available PACER designations.
- Confidential Settlements: Settlement terms in many cases are confidential, limiting the additional insights gained through manual review.
- High Volume: The sheer number of cases terminated in these other categories during the study period rendered manual review impractical.
Based on this manual review, we reassigned PACER “Judgment” cases into five new outcome categories that better reflect concrete results (highlighted in yellow in Table 7.2). These classifications were determined through close review of the underlying filings and were designed to capture outcomes that represent a meaningful result, or “win”, for either the plaintiff or the defendant.
Each reclassified outcome was then assigned a corresponding score based on our evaluation of the case records. In developing these scores, we also incorporated guidance from experienced practitioners in our legal network to better calculate relative success and involvement across companies, attorneys, law firms, and judges participating in the litigation.
Judgment – Plaintiff or Defendant Wins:
In adjudicated cases where the judgment or decision explicitly favored either the plaintiff (e.g., infringement or injunction) or the defendant (e.g., non-infringement or a successful motion to dismiss), scoring was straightforward:
- Plaintiff Victory: The plaintiff received 1 point (Judgment – Plaintiff Wins), while the defendant received 0 points.
- Defendant Victory: The defendant received 1 point (Judgment – Defendant Wins), while the plaintiff received 0 points.
The attorneys and law firms representing each party were assigned scores identical to their clients, as their efforts were instrumental in achieving these outcomes. Likewise, the judges presiding over these cases were also scored based on their decisions, with 1 point awarded for rulings favoring the plaintiff and 0 points for decisions favoring the defendant. This consistent scoring methodology ensured an equitable reflection of the contributions and outcomes for all involved parties.
Judgment – Settlement (Confidential):
Cases resolved through a confidential settlement agreement were assigned a neutral score of 0.5/0.5 for all involved parties (Judgment – Settlement (Confidential)). This equal scoring reflected the undisclosed nature of settlement terms, which made it impossible to determine a clear prevailing party. Judges were not assigned scores in these instances, as their involvement did not result in a definitive ruling on the merits of the case.
Judgment – Partial Win for Both:
In many cases, complaints included multiple counts. While most judgments favored one side across all counts, there were instances of mixed judgments where the defendant prevailed on some counts and the plaintiff on others. In such cases, where no clear winner could be determined, a neutral score of 0.5/0.5 was assigned to all parties involved.
Judgment – Voluntarily Dismissed by Party(ies):
Cases classified under the PACER Judgment category and concluded through a voluntary motion to dismiss by the plaintiff were scored as 0.25 for the Plaintiff and 0.75 for the Defendant (Judgment – Voluntarily Dismissed by Party(ies)). This scoring approach reflects the assumption that, in most instances, plaintiffs may gain some tangible yet typically undisclosed advantage from the dismissal. As a result, these cases were not penalized as severely as a loss on the merits, which would have been scored as 0 for the Plaintiff and 1 for the Defendant.
Dismissed – Voluntarily:
This dismissal status carries over directly from the original PACER categories and was retained, unchanged, as one of our reclassified categories (Table 7.2). To determine the scoring criteria for this category, we engaged with our community of patent attorneys. These cases often imply undisclosed settlements or agreements, suggesting that the plaintiff may have achieved some form of benefit. The majority of attorneys recommended assigning a score of 0.25 to Plaintiffs and 0.75 to Defendants for this category, as reflected in Table 7.2.
Dismissed – Other:
The “Dismissed – Other” category was another scorable classification that transitioned directly from the original PACER categories to our scoring system. Similar to the “Dismissed – Voluntarily” category discussed earlier, we assigned a score of 0.25/0.75 to Plaintiffs and Defendants, along with their legal representatives (highlighted in red in Tables 7.1 and 7.2). These cases were typically dismissed following a motion filed by the plaintiff but were not explicitly categorized by PACER as voluntary dismissals.
Additionally, three other categories transitioned directly from PACER but were deemed non-scorable, as they represented administrative closures or procedural transfers (highlighted in green in Tables 7.1 and 7.2):
- Transfer/Remand – MDL Transfer
- Statistical Closing
- Non-reportable Closing
Dismissed – Settled:
The final original PACER category requiring attention is “Dismissed – Settled” (highlighted in gray in Table 7.1). Following consultations and surveys conducted within our legal community, these cases were further analyzed, classified, and scored based on their association with related inter partes review (IPR) cases, as shown in Table 7.2 (highlighted in gray).
As previously discussed in Patexia Insight 44, over 80% of IPR filings are defensive in nature, initiated either directly by defendants or indirectly by third parties aiming to challenge the validity of the asserted patents (e.g., filings by Unified Patents or RPX Corporation). The outcomes of these IPR proceedings often play a pivotal role in shaping settlement negotiations between the parties, influencing the terms of resolution.
For this reason, all “Dismissed – Settled” cases were scored in alignment with the outcomes of their corresponding IPR proceedings, as detailed below:
Dismissed – Settled (no IPR petition):
In cases where none of the patents involved were addressed by an IPR petition, we assigned an equal score of 0.5/0.5 to both parties, reflecting the absence of any influence from an IPR proceeding on the settlement outcome.
Dismissed – Settled (IPR denied):
When an IPR petition is filed but institution is denied, it generally indicates that the PTAB panel found the petitioner had not shown a reasonable likelihood of prevailing on the challenged claims in view of the cited prior art. As a result, settlements in such patent litigation cases may suggest favorable terms for the plaintiff, given the patent’s survival through the IPR process. In these situations, we assigned points with a 0.75/0.25 split, favoring the plaintiff.
Dismissed – Settled (before the institution of IPR):
When cases settle before the institution phase, the implications can vary. It might suggest that the patent holder (plaintiff) recognized a limited chance of success and sought to minimize losses, or it could reflect early negotiations that resulted in a royalty agreement with the defendant. Given the lack of concrete evidence, it is difficult to draw definitive conclusions. Initially, we considered treating these cases like other settlement agreements, splitting the points equally (0.5/0.5) between the plaintiff and defendant. However, survey respondents expressed a clear preference for a 0.25/0.75 split in favor of the defendant, and we adopted that criterion for this category.
Dismissed – Settled (after the institution of IPR):
When a case settles after the IPR is instituted, the institution decision itself signals that the PTAB found a reasonable likelihood that at least one challenged claim is unpatentable, implying that the patent owner and their legal team might anticipate an unfavorable outcome if the IPR proceeds. In these situations, the settlement of the patent case often suggests more favorable terms for the defendant. As a result, we allocated points with a split of 0.25/0.75 in favor of the defendant.
Dismissed – Settled (less than 50% of the claims challenged in the IPR survived):
In cases where the majority of challenged claims are invalidated in the IPR (i.e., fewer than 50% survive), we infer that the settlement terms likely favored the defendant. Therefore, we allocate points with a 0.25/0.75 split in favor of the defendant.
Dismissed – Settled (more than 50% of the claims challenged in the IPR survived):
When an IPR concludes with a final written decision, we evaluate the survival rate of the challenged claims. If more than 50% of the claims survive, indicating a favorable outcome for the plaintiff, we allocate points with a 0.75/0.25 split in favor of the plaintiff. This distribution reflects the increased likelihood that the patents remain valid and enforceable.
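Taken together, these sub-categories amount to a small decision procedure over the posture of the related IPR. The sketch below shows one way to express that procedure; the `IPRRecord` fields are hypothetical simplifications of the underlying PTAB data, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IPRRecord:
    instituted: Optional[bool]   # None = case settled before the institution decision
    final_decision: bool         # True if a final written decision issued
    claims_challenged: int
    claims_surviving: int

def classify_settled_case(ipr: Optional[IPRRecord]) -> str:
    """Map a 'Dismissed - Settled' district court case to its Table 7.2
    sub-category based on the posture of the related IPR (if any)."""
    if ipr is None:
        return "Dismissed - Settled (no IPR petition)"
    if ipr.instituted is False:
        return "Dismissed - Settled (IPR denied)"
    if ipr.instituted is None:
        return "Dismissed - Settled (IPR, settled pre-institution)"
    if not ipr.final_decision:
        return "Dismissed - Settled (IPR, settled post-institution, pre-trial)"
    if ipr.claims_surviving * 2 > ipr.claims_challenged:
        return "Dismissed - Settled (IPR, > 50% claims survive)"
    return "Dismissed - Settled (IPR, < 50% claims survive)"
```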
Other Considerations
In the majority of patent cases involving a single plaintiff and a single defendant, outcomes are generally clear and easy to attribute. More complex situations arise in cases with multiple parties, where outcomes may be consistent across parties on the same side or, in some instances, administratively split into separate matters. When such divisions occur, we are able to evaluate and score each party individually.
Like any analytical framework, this scoring approach is subject to practical limits, driven by the level of detail available in court records and the inherent complexity of certain case resolutions. That said, the guiding principle across all of our Intelligence Reports, including this one, has been to apply a methodology that is both fair and consistent. We take care not to over-penalize or over-reward participants when their direct impact on a specific outcome cannot be clearly determined. Instead, our objective is to accurately reflect each participant’s involvement and allow them to attain the highest score reasonably supported by the data.
After extensive internal review and multiple rounds of feedback from practitioners in our expert community, gathered through surveys and direct discussions, the scoring framework presented in Table 7.2 was selected as the most balanced and reliable option. These metrics are applied uniformly to companies, attorneys and law firms, and judges, using the best information available from the case records. Once points were assigned in accordance with Table 7.2, we calculated the resulting Activity, Success, and Performance Scores and Rankings for all participants.
Automated Case Status Detection
In many instances, PACER records list a termination date for a case without providing a corresponding status that explains how the matter concluded. To resolve this gap, we developed an automated process that determines case outcomes by reviewing documents filed around the time of termination. Using natural language analysis and supporting analytical methods, the system evaluates the content and context of these filings to infer the most accurate case status.
This approach not only fills in missing status information but also identifies and corrects situations where the recorded PACER status conflicts with the substance of the underlying filings. The result is a more accurate and dependable record of case terminations. By improving the reliability of outcome data, this process strengthens the quality of our analysis and contributes to clearer, more trustworthy patent litigation records.
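As a rough illustration of the idea, the sketch below infers a status from simple keyword patterns in documents filed near the termination date. Our production system relies on substantially richer natural language analysis; the patterns and labels here are deliberately simplified examples, not the actual rules.

```python
import re

# Highly simplified sketch of terminal-document classification.
# Each pattern is matched against the combined text of filings
# entered near the case's termination date.
STATUS_PATTERNS = [
    (r"\bstipulat(?:ed|ion) of dismissal\b", "Dismissed - Voluntarily"),
    (r"\bsettle(?:d|ment)\b",                "Dismissed - Settled"),
    (r"\bdefault judgment\b",                "Judgment - Judgment on Default"),
    (r"\bsummary judgment\b",                "Judgment - Motion Before Trial"),
    (r"\bconsent judgment\b",                "Judgment - Judgment on Consent"),
    (r"\black of .{0,30}jurisdiction\b",     "Dismissed - Lack of Jurisdiction"),
    (r"\bfailure to prosecute\b",            "Dismissed - Want of Prosecution"),
]

def infer_status(terminal_docs: list[str]) -> str | None:
    """Infer a case status from documents filed around termination."""
    text = " ".join(terminal_docs).lower()
    for pattern, status in STATUS_PATTERNS:
        if re.search(pattern, text):
            return status
    return None  # leave the status unresolved for manual review
```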
Activity Score
In our analysis, we have observed that a single plaintiff with a limited number of patents can file lawsuits against hundreds of companies. Notably, while suing more defendants generally increases the plaintiff’s workload, that workload does not grow in proportion to the number of cases when the same patents are asserted in each one. In contrast, the number of patents involved correlates directly with the workload managed by each party and their representatives.
To assess an entity’s activity, we consider two key factors: the number of unique cases and the number of unique patents involved. Our Activity Score is therefore calculated from both the total number of unique cases filed within the report’s timeframe and the total number of unique patents asserted in them, counting matters in which the company, attorney, or firm acted as a Plaintiff, Defendant, or both. The score is computed as a weighted average of the case and patent counts. Our approach, endorsed by the expert community, is considered the fairest method to assess activity, taking into account the influence of both case count and patent count on workload.
We applied a higher weight to more recent cases by using a weighted scale. This accounts for the possibility that some firms may have been more active in past years than currently, ensuring a ranking system that reflects recent activity levels.
Rather than using a linear scale, we implemented a logarithmic scale for calculating the activity score. A linear scale assigns scores based on a direct proportion to the value being measured. For example, if one firm files twice as many cases as another, it would receive twice the score. However, in the patent litigation landscape, the number of cases filed can vary greatly, which can result in large disparities between firms. The logarithmic scale reduces these disparities by assigning scores based on the logarithm of the value being measured. This ensures that the difference in scores between firms decreases as the number of cases increases, providing a more accurate and fair representation of activity levels.
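The sketch below combines the three ingredients just described: recency weighting, logarithmic scaling, and a weighted average of case and patent counts. The specific parameters (the 0.6 case weight, the 2.5-year half-life, and the use of `log1p`) are hypothetical stand-ins; the report does not disclose the exact values.

```python
import math
from datetime import date

def activity_score(case_dates: list[date], patent_count: int,
                   period_end: date = date(2025, 6, 30),
                   case_weight: float = 0.6,
                   half_life_years: float = 2.5) -> float:
    """Illustrative Activity Score: recency-weighted case count and the
    unique-patent count, each log-scaled, combined as a weighted average.
    All parameters here are hypothetical, not the report's actual values."""
    # Recency weighting: a case's contribution decays exponentially
    # with its age relative to the end of the study period.
    weighted_cases = sum(
        0.5 ** ((period_end - d).days / 365.25 / half_life_years)
        for d in case_dates
    )
    # Logarithmic scaling compresses large disparities in volume.
    case_component = math.log1p(weighted_cases)
    patent_component = math.log1p(patent_count)
    return case_weight * case_component + (1 - case_weight) * patent_component
```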
Judges’ Activity Scores are calculated similarly, but they are based on all cases over which they presided. These scores are not separated by Plaintiff or Defendant cases and reflect the overall caseload they have handled.
All participants, including companies, attorneys, firms, and judges, received Activity Scores (“Activity Score” in the tables) and Activity Ranks (“Activity Rank” in the tables) based on this model, applied to the period from July 1, 2020, to June 30, 2025. This methodology ensures that activity levels are accurately measured, providing a consistent and equitable way to rank participants in patent litigation.
Success Score
Plaintiffs and defendants were scored based on the outcomes of cases they were involved in, as detailed in Table 7.2.
Plaintiff Success and Defendant Success Scores are calculated by aggregating the total points earned in cases where an entity acted as Plaintiff or Defendant, respectively. These totals are then divided by the number of scorable cases and expressed as percentages (multiplied by 100). The Overall Success Score is determined by considering all cases collectively, whether the entity acted as Plaintiff or Defendant. Rankings (“Success Rank” in the tables) were assigned based on these Success Scores.
To improve the accuracy of success rate calculations, we employed a refined method that considers the likelihood of winning all cases rather than merely averaging the success rates. This adjustment accounts for the higher difficulty faced by entities dealing with a larger volume of cases. By evaluating the total activity level of an entity, we not only consider the number of cases won but also estimate the probability of winning across all cases. This method offers a more nuanced and fair representation of an entity’s performance, especially for those managing higher case volumes.
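Because the report does not publish the exact adjustment, the sketch below illustrates one standard way to implement the idea: scoring an entity by a lower confidence bound on its win rate (a Wilson interval), which naturally rewards entities that sustain a high rate across many scorable cases. Treat it as a plausible stand-in, not our actual formula.

```python
import math

def success_score(points: float, scorable_cases: int, z: float = 1.96) -> float:
    """Illustrative volume-adjusted Success Score (0-100).
    The raw rate is total points / scorable cases; the Wilson lower
    bound shrinks it less for entities with large case volumes."""
    if scorable_cases == 0:
        return 0.0
    p = points / scorable_cases   # raw win rate in [0, 1]
    n = scorable_cases
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return 100 * (centre - margin) / denom
```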
Attorneys and firms were assessed based on their role in the cases (as Plaintiff or Defendant) and their influence on the case outcome. Scores and rankings were determined for scorable outcomes where they played a substantial role, as shown in Table 7.2.
For judges, scores were allocated based on case outcomes in which they directly participated. A judge’s score was assigned as follows:
- 0 for Defendant wins
- 1 for Plaintiff wins
Thus, a judge’s score closer to 1 indicates a higher frequency of rulings favoring the Plaintiff.
This comprehensive evaluation methodology ensures that both individual performance and the broader context of each case are fairly represented, leading to a more accurate and balanced ranking of participants in patent litigation.
Performance Score
To enhance the accuracy and fairness of our evaluations, we introduced a Performance Score as a weighted average of the Activity Score and Success Score (win percentage). This metric addresses the challenge that a very high win rate is harder to sustain across a large case volume, so comparing raw Success Scores between entities with very different caseloads can be misleading. By combining activity and success, the Performance Score offers a more balanced evaluation, ensuring that entities with high case volumes are not unfairly penalized while still accounting for their success in litigation. It highlights attorneys, law firms, and companies that excel not only in experience, reflected by their activity level, but also in achieving favorable outcomes. This comprehensive approach provides a more accurate and fair ranking, distinguishing top performers who demonstrate both significant involvement and successful results in patent litigation.
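In code, the combination itself is a one-liner. The 0.4/0.6 weighting below is a hypothetical placeholder, as the report does not disclose its weights, and both inputs are assumed to be normalized to a common 0–100 scale.

```python
def performance_score(activity: float, success: float,
                      activity_weight: float = 0.4) -> float:
    """Weighted average of Activity and Success Scores, both assumed to
    be on a common 0-100 scale. The 0.4/0.6 split is a hypothetical
    placeholder, not the report's published weighting."""
    return activity_weight * activity + (1 - activity_weight) * success
```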
Normalization Using Machine Learning
Measuring success in patent litigation is inherently difficult. Case outcomes are influenced by a wide range of variables, including the strength of legal arguments, judicial decisions, the patents at issue, technical subject matter, and the experience and strategies of the attorneys, firms, and companies involved. Isolating the contribution of any single participant within a given case is especially challenging, as these factors are often interdependent. To address this challenge, we rely on large, structured datasets drawn from multiple patent litigation sources and apply advanced analytical methods to evaluate performance with a high degree of precision.
Our methodology for calculating Success Scores is designed to prioritize consistency and objectivity. To reduce the impact of external influences, such as the actions of co-parties, opposing counsel, or case-specific judicial factors, we use an analytical framework that evaluates each participant independently. This framework adjusts results across all parties involved in a case, allowing us to account for differences in case complexity, posture, and outcome. By doing so, the resulting scores better reflect an individual company’s, attorney’s, or firm’s contribution, rather than the broader circumstances surrounding a particular dispute.
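The exact machine-learning adjustment is proprietary, but the underlying idea, evaluating each participant relative to comparable cases rather than in absolute terms, can be illustrated with a simple cohort z-score. Everything in the sketch below, including the cohort key (which might be a venue, case type, or posture), is a hypothetical simplification of the actual framework.

```python
from collections import defaultdict
from statistics import mean, pstdev

def cohort_normalize(records: list[tuple[str, str, float]]) -> dict[str, float]:
    """Illustrative normalization: each raw case score is converted to a
    z-score within its cohort of comparable cases, then averaged per
    participant. records: (cohort, participant, raw_score) tuples."""
    # Per-cohort baseline statistics.
    by_cohort: dict[str, list[float]] = defaultdict(list)
    for cohort, _, raw in records:
        by_cohort[cohort].append(raw)
    stats = {c: (mean(v), pstdev(v) or 1.0) for c, v in by_cohort.items()}
    # Average each participant's cohort-relative scores.
    totals: dict[str, list[float]] = defaultdict(list)
    for cohort, participant, raw in records:
        mu, sigma = stats[cohort]
        totals[participant].append((raw - mu) / sigma)
    return {p: mean(zs) for p, zs in totals.items()}
```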
These metrics are not intended to serve as judgments on the overall quality, reputation, or competence of any attorney, law firm, company, or judge. Even highly capable professionals may experience unfavorable outcomes due to factors outside their control, and attorneys frequently take on matters with uncertain prospects as part of their role. For this reason, we strongly advise readers to treat these rankings as one input among many and to conduct their own evaluation, through research, discussions, and interviews, when selecting legal counsel or assessing litigation strategy.
