As with our previous ANDA, PTAB, Patent Prosecution, Patent Litigation, Trademark, and ITC Intelligence reports, we have worked closely with the CAFC community to develop fair and effective methods for evaluating the performance of companies, attorneys, law firms, and judges in CAFC cases. Our ranking methodologies and formulas are continuously refined, drawing on feedback, surveys, and iterative improvements with each report. In the following section, we outline the rationale and methodology behind the scoring used in this report, while also highlighting updates and enhancements compared to our 2024 edition.
General Considerations
We’ll assume our readers are familiar with the CAFC landscape. In brief, the United States Court of Appeals for the Federal Circuit is a federal appellate court with jurisdiction over specific types of cases. It primarily handles appeals related to patents, as well as certain civil cases originating from the U.S. International Trade Commission, district courts, and the Patent Trial and Appeal Board. Beyond the CAFC, the ultimate point of appeal is the United States Supreme Court. The time for the Federal Circuit to issue a decision can vary, generally ranging from about one month for straightforward cases to three to four months for decisions that establish precedents, with occasional exceptions. The primary outcomes of the appeals process are as follows:
Affirmed – The appellate court upholds the lower court’s decision, meaning the verdict at trial stands.
Reversed – The appellate court decides that the lower court’s decision was wrong, and a new trial may be ordered.
Remanded – The case, or part of it, is sent back to the lower court to be reheard.
The CAFC cases analyzed in this report are represented by the following outcome classifications (Table 6.1).
Table 6.1 – CAFC Outcomes and Scores Applied
| Outcome | Appellant | Appellee | Appellant Atty/Firm | Appellee Atty/Firm | Judge |
| --- | --- | --- | --- | --- | --- |
| Affirmed | 0 | 1 | 0 | 1 | 0 |
| In-Part Outcomes (Affirmed-in-Part, Vacated-in-Part, Reversed-in-Part, etc.) | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| Vacated and/or Remanded Outcomes | 0.75 | 0.25 | 0.75 | 0.25 | 0.75 |
| Reversed | 1 | 0 | 1 | 0 | 1 |
| Dismissed/Withdrawn | N/A | N/A | N/A | N/A | N/A |
| Transferred/Consolidated | N/A | N/A | N/A | N/A | N/A |
Table Notes: As shown in the table, we examined the outcomes of CAFC cases, categorized them by the type of opinion or order, and assigned scores to each outcome category. Of the six case-closure types shown in Table 6.1, four are eligible for scoring. An “N/A” entry indicates that the corresponding participant was not evaluated for that outcome. Further explanation is provided in the accompanying text.
In analyzing CAFC cases terminated between January 1, 2020, and December 31, 2024, we encountered a subset of cases where it was difficult to determine a clear “win” for either the Appellant or Appellee. Assigning a traditional “score” or “point,” as used in prior reports, was challenging in such instances. These scenarios often involved technical closures, transfers, withdrawals, or dismissals.
Our scoring approach was guided by a simple question: Did the outcome leave the Appellant in a better position than under the original ruling? If yes, the Appellant received 1 point. If no, the point was awarded to the Appellee. By consensus from surveyed CAFC attorneys, affirmed decisions were treated as wins for the Appellee, while reversed decisions counted as wins for the Appellant.
Partial decisions, such as “Affirmed-in-Part, Vacated-in-Part, Remanded-in-Part,” were particularly complex. For many “in-part” judgments, we assumed that both parties contributed to the final result, so we divided the point evenly, allocating 0.5 points to each of the Appellant and Appellee sides (as shown in Table 6.1). Vacated and/or remanded outcomes were weighted 0.75/0.25 in favor of the Appellant: the adverse ruling is set aside for reconsideration, which improves the Appellant’s position without amounting to an outright win. Outcomes such as “Affirmed” and “Reversed” were straightforward and allowed a clear allocation of points. While the ultimate “winner” may depend on the specifics of the decision, this approach permits consistent numerical evaluation.
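The point allocation in Table 6.1 can be sketched as a simple lookup. The outcome labels and point splits below come directly from the table; the function name, dictionary keys, and data structure are our own illustrative choices, not part of the report’s tooling.

```python
# Point splits per Table 6.1, as (appellant points, appellee points).
OUTCOME_POINTS = {
    "affirmed": (0.0, 1.0),
    "in_part": (0.5, 0.5),            # Affirmed/Vacated/Reversed-in-Part, etc.
    "vacated_remanded": (0.75, 0.25),
    "reversed": (1.0, 0.0),
}

def score_case(outcome: str):
    """Return (appellant, appellee) points for a scorable outcome,
    or None for unscorable closures such as dismissed/withdrawn
    or transferred/consolidated."""
    return OUTCOME_POINTS.get(outcome)
```

Unscorable closures return `None` so they can be excluded from the denominator when success rates are computed later.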
It is important to recognize the inherent limitations of any scoring system. Available case information cannot capture all nuances, such as legal strategy or confidential settlement terms. Despite these constraints, our methodology aims to apply a fair and consistent scoring approach to all participants. We strive to avoid unfairly penalizing or rewarding participants when their influence on a specific outcome is unclear, and we provide opportunities for partial credit whenever appropriate.
In summary, these scoring metrics represent the most equitable approach to ensure fairness and consistency across companies, attorneys/firms, and judges, given the practical limits of available information.
Activity Score
Participation in CAFC cases by companies, their representing attorneys and law firms, and judges was measured using Activity Scores, which reflect involvement as appellants, appellees, or both. For companies, attorneys, and law firms, the Activity Score represents the total number of cases in which they acted as an Appellant (Appellant Cases), an Appellee (Appellee Cases), or both (Overall Cases) during the report’s covered period. Judges’ Activity Scores were calculated similarly, including all cases they presided over, without differentiating between Appellant and Appellee cases.
All participants were assigned an Activity Score (labeled “Cases” in the tables) and a corresponding rank (“Activity Rank”) based on the total number of CAFC cases filed between January 1, 2020, and December 31, 2024. Success and Performance scores, explained below, were also calculated for this period, using case outcomes as of September 1, 2025.
To better reflect recent engagement, we applied a weighted system that gives greater importance to more recent cases. This approach ensures that a participant with multiple recent cases (e.g., five cases in 2024) can achieve a higher rank than another participant with the same total number of cases spread across the entire study period.
Additionally, to address disparities between entities with different levels of activity, Activity Scores were calculated using a logarithmic function. This method allows for more meaningful comparisons across participants with widely varying caseloads.
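The recency weighting and logarithmic scaling described above can be combined in one small function. The report does not publish its exact weights or log base, so the per-year weights and the use of `log10` below are illustrative assumptions only.

```python
import math

# Hypothetical recency weights per filing year; the report's actual
# weighting scheme is not disclosed.
YEAR_WEIGHT = {2020: 0.6, 2021: 0.7, 2022: 0.8, 2023: 0.9, 2024: 1.0}

def activity_score(case_years):
    """Recency-weighted, log-scaled activity score for one participant.

    `case_years` is the filing year of each of the participant's cases.
    """
    weighted_count = sum(YEAR_WEIGHT.get(y, 0.0) for y in case_years)
    # Logarithmic scaling compresses large caseload gaps so that high-
    # and low-volume participants remain meaningfully comparable.
    return math.log10(1.0 + weighted_count)
```

Under these assumed weights, five cases filed in 2024 outrank five cases spread evenly across 2020–2024, matching the example in the text.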
Success Score
Success Scores for Appellants and Appellees are based on the outcomes of their CAFC cases, as detailed in Table 6.1. The Appellant Success score is calculated by summing all points earned for cases in which the entity acted as the Appellant, including partial points for partial victories, and then dividing this total by the number of scorable cases. The result is expressed as a percentage. The same approach is applied to compute the Appellee Success score, considering cases where the entity served as the Appellee. The Overall Success score combines all cases in which the entity acted as either Appellant or Appellee, using the same methodology.
Attorneys and law firms are similarly evaluated, as they directly influence case outcomes by representing clients and participating in proceedings. Their Success scores are determined using the same method applied to the entities they represent, ensuring consistency across clients and counsel.
Judges are scored based on the outcomes of cases over which they presided. For scoring purposes, an affirmance (an Appellee win) is assigned 0 points, a reversal (an Appellant win) 1 point, a partial win 0.5 points, and a vacated and/or remanded outcome 0.75 points (Table 6.1). Thus, a judge with a score closer to 1 has more frequently ruled in favor of the Appellant.
To ensure inclusivity, Success Scores were calculated for all companies, attorneys, and law firms with at least one terminated and scorable case, as defined in Table 6.1. This approach provides a fair assessment of performance, even for participants with a limited number of cases.
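The Success Score computation described above is a straightforward average over scorable cases, expressed as a percentage. A minimal sketch (the function name and `None`-for-unscorable convention are our own):

```python
def success_score(points):
    """Success Score: total points earned divided by the number of
    scorable cases, as a percentage. `None` marks unscorable closures
    (dismissed/withdrawn, transferred/consolidated), which are
    excluded from both numerator and denominator."""
    scorable = [p for p in points if p is not None]
    if not scorable:
        return None  # no terminated, scorable cases
    return 100.0 * sum(scorable) / len(scorable)
```

For example, an Appellant with one affirmance (0), one reversal (1), one in-part outcome (0.5), and one dismissal (unscorable) would score 1.5 / 3 = 50%.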
Normalization Using Machine Learning
Assessing individual success in CAFC appeals is inherently complex. Many factors influence case outcomes, including the legal arguments presented, the decisions of judges, the specific patents and technologies involved, and the expertise of the attorneys. Measuring the contribution of any single entity, whether a company, attorney, or law firm, in a particular case is challenging because these factors are highly interconnected.
To address this, we leverage extensive datasets from multiple CAFC cases and apply advanced computational methods, including machine learning algorithms, to analyze performance across diverse scenarios. Our methodology aims to provide a fair, data-driven measure of individual success while accounting for the complex interplay of factors in appellate litigation.
Specifically, we developed a robust analytical model to isolate the performance of each entity from external influences, such as the effectiveness of other participants or the discretionary decisions of CAFC judges. This approach ensures a more impartial assessment of performance, whether for companies, attorneys, or law firms.
In practice, we use these adjusted success scores to calculate an average performance measure for each participant, taking into account all CAFC cases in which they were involved. By incorporating these adjustments, our model offers a more consistent and equitable evaluation of individual performance, providing a comprehensive view of activity and success within the intricate CAFC landscape.
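The report does not disclose the internals of this model, so the following is not its actual method; it is only a toy illustration of the underlying idea, namely crediting results more when they were achieved against stronger opposition. All names, the deviation-shift formula, and the 0.5 field average are assumptions for the sketch.

```python
def adjusted_score(raw, opponent_avg, field_avg=0.5):
    """Toy opponent-strength adjustment (NOT the report's actual model):
    shift a raw per-case score by the opposing side's deviation from
    the field-average success rate, so wins over strong opponents count
    for more and wins over weak opponents for less."""
    adj = raw + (opponent_avg - field_avg)
    return min(1.0, max(0.0, adj))  # clamp back into [0, 1]
```

A production model would estimate these adjustments jointly across all cases rather than case by case, which is where the machine-learning machinery described above comes in.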
Performance Score
As with our other IP Insight reports, the Performance Score represents a weighted combination of Activity and Success Scores. This metric captures what clients often look for in an attorney or firm: both substantial experience, reflected in a high caseload or activity level, and a strong track record of successful outcomes. It is important to note that very high caseloads naturally pull success rates toward the mean over time, making direct comparisons between participants with vastly different case volumes less meaningful.
The Performance Score allows us to evaluate and rank companies, attorneys, and law firms based on both the quantity and quality of their appellate engagement. By considering Activity and Success together, it provides a holistic view of overall performance in CAFC cases.
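As a weighted combination, the Performance Score reduces to a simple blend once both inputs are on the same scale. The weights below are illustrative assumptions; the report does not publish its actual weighting, and both inputs are assumed to be normalized to [0, 1] beforehand.

```python
def performance_score(activity, success, w_activity=0.4, w_success=0.6):
    """Blend normalized Activity and Success scores into a single
    Performance Score. The 0.4/0.6 weights are hypothetical; the
    report's actual weights are not disclosed."""
    return w_activity * activity + w_success * success
```

With these assumed weights, a participant needs both a meaningful caseload and a solid win rate to rank well; neither alone is sufficient.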
Finally, it is crucial to emphasize that these metrics do not measure the inherent quality, experience, skill, or impartiality of any law firm, attorney, company, or judge. Just as even the most skilled doctors may experience unfavorable outcomes, attorneys can face cases they do not win due to factors beyond their control. In some instances, attorneys may accept cases with a higher likelihood of loss as part of their professional duties. For these reasons, we strongly encourage clients to conduct their own due diligence, analysis, and interviews when selecting legal representation, focusing on the firm or attorney best suited to the unique needs and circumstances of their case.
