As with our earlier intelligence reports on PTAB, Patent Prosecution, Patent Litigation, CAFC, Trademark Prosecution, and ITC, we have incorporated input from the ANDA community to refine our approach. Their feedback has helped us shape methods that aim to provide a fair and accurate evaluation of the performance of companies, attorneys, law firms, and judges involved in ANDA cases.
Consistent with past editions, we continue to review and adjust our methodology with each report. The scoring formulas and criteria are tested, updated, and improved over time to reflect both data-driven analysis and industry feedback. In the following section, we outline the methodology and scoring framework used for this year’s rankings, offering transparency into how the results were derived.
General Considerations
We assume readers possess a basic understanding of the ANDA litigation landscape. In its simplest form, ANDA litigation, governed by the Hatch-Waxman Act, arises when a generic drug manufacturer (“Generic”) submits a Paragraph IV certification in its ANDA. This certification asserts that the relevant patents listed in the Brand’s Orange Book entry are invalid or will not be infringed by the Generic’s proposed drug. Upon receiving notice of a Paragraph IV ANDA filing, the Brand has a 45-day window to file a lawsuit, which triggers a 30-month stay on the ANDA approval process. If no suit is filed within that window, the Food and Drug Administration (FDA) may proceed with the customary approval of the ANDA.
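As a concrete illustration, these timing rules can be sketched as a short date calculation. This is a simplification for illustration only: the stay is assumed to run from the notice date, and court-ordered extensions or reductions are not modeled.

```python
from datetime import date, timedelta

def anda_stay_status(notice: date, suit_filed: date | None, today: date) -> str:
    """Illustrative sketch of the Hatch-Waxman timing described above.

    Assumptions: the 30-month stay is approximated as 913 days measured
    from the notice date, and court-ordered adjustments are ignored.
    """
    if suit_filed is None or suit_filed > notice + timedelta(days=45):
        return "no timely suit: FDA may proceed with customary ANDA approval"
    stay_end = notice + timedelta(days=913)  # roughly 30 months
    if today < stay_end:
        return f"30-month stay in effect until approximately {stay_end}"
    return "stay expired"
```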
During our analysis of PACER categorizations for concluded ANDA cases, it became clear that practical, conclusive outcomes were relatively scarce. Only a limited number of decisions could be classified as clear “wins” for either the Plaintiff (Brand) or Defendant (Generic), making it challenging to assign straightforward scores or points, as was done in our earlier reports.
Many of the PACER outcome classifications instead reflected procedural or administrative closures rather than substantive rulings. These include case consolidations, venue transfers, or closures resulting from parallel litigation. For example, our case-by-case review and consultations with ANDA attorneys showed that case consolidations or duplicative parallel actions might be classified under several PACER statuses, including ‘Statistical Closing,’ ‘Dismissed – Voluntarily,’ or ‘Dismissed – Other.’ Other categories, such as ‘Transfer/Remand – MDL Transfer,’ ‘Non-reportable Closing,’ or ‘Dismissed – Settled,’ were generally self-explanatory and are directly represented in Table 1.
To resolve these overlaps and provide more meaningful insights, we reclassified the 12 official PACER statuses observed in ANDA litigation into 10 consolidated outcome categories. This reclassification captures the true resolution of each case, whether the Brand prevailed, the Generic prevailed, or the matter was resolved through settlement, providing a clearer and more consistent framework for analysis (Table 1).
Table 1 – ANDA Outcomes and Scores Applied
| Outcome | Plaintiff | Defendant | Plaintiff Atty/Firm | Defendant Atty/Firm | Judge |
| --- | --- | --- | --- | --- | --- |
| Judgment – Defendant Wins | 0 | 1 | 0 | 1 | 0 |
| Judgment – Plaintiff Wins | 1 | 0 | 1 | 0 | 1 |
| Judgment – Settled | 0.5 | 0.5 | 0.5 | 0.5 | – |
| Dismissed – Settled | 0.5 | 0.5 | 0.5 | 0.5 | – |
| Dismissed – Voluntarily | – | – | – | – | – |
| Dismissed – Other | – | – | – | – | – |
| Transfer/Remand – MDL Transfer | – | – | – | – | – |
| Transfer/Remand – Transfer | – | – | – | – | – |
| Statistical Closing | – | – | – | – | – |
| Non-reportable Closing | – | – | – | – | – |
Table 1 Notes:
The analyzed cases are represented by 12 official PACER outcome classifications:
1. Dismissed – Other
2. Dismissed – Settled
3. Dismissed – Voluntarily
4. Statistical Closing
5. Transfer/Remand – MDL Transfer
6. Transfer/Remand – Transfer
7. Non-reportable Closing
8. Judgment – Judgment on Consent
9. Judgment – Other
10. Judgment – Court Trial
11. Judgment – Motion Before Trial
12. Judgment – Judgment on Default
Seven of the PACER classifications (categories 1–7) are presented directly in Table 1. The remaining five classifications (categories 8–12) required closer review, as their labels did not always reflect the true outcome of a case. After a case-by-case analysis, we distilled these into three practical outcome categories: Judgment – Defendant Wins, Judgment – Plaintiff Wins, and Judgment – Settled.
Of all the closure categories listed in Table 1, only four could be scored. Blank entries (dashes) indicate instances where a participant was not scored for a particular outcome. To establish a consistent framework for scoring, we asked one core question: Did the judgment, whether through trial or consent order, place the Defendant (Generic) in a more favorable position than they would have been under the Hatch-Waxman Act (HWA) alone?
- If the answer was yes, 1 point was awarded to the Defendant (Generic).
- If the answer was no, 1 point was awarded to the Plaintiff (Brand).
This framework allowed us to evaluate the performance of companies, attorneys, firms, and judges on an objective basis.
A common example is the consent order. In most cases, consent orders impose two basic terms: (A) the Generic is barred from selling the relevant product until the expiration of the patents in suit, and (B) the Generic retains the right to maintain or file Paragraph IV certifications without restricting the FDA’s authority to approve them. Occasionally, however, consent orders require the Generic to amend its filing to Paragraph III. In practice, if the outcome left the Generic in no better position than under the HWA’s default provisions, we classified it as Judgment – Plaintiff Wins and awarded 1 point to the Brand side. By contrast, if a judgment—whether trial-based or by consent order—resulted in a finding of invalidity, non-infringement, or unenforceability, or otherwise allowed the Generic to enter the market before patent expiry, it was deemed a “win” for the Defendant, and 1 point was assigned accordingly.
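Stated as pseudocode, the core question collapses to a single branch. The sketch below is schematic only; the `generic_better_off` flag stands in for the case-by-case legal review described above.

```python
def classify_judgment(generic_better_off: bool) -> str:
    """Core scoring question applied to a judgment or consent order.

    generic_better_off is True for findings of invalidity,
    non-infringement, or unenforceability, or any term permitting market
    entry before patent expiry; it is False when the Generic is left no
    better off than under the HWA's default provisions (for example, a
    typical consent order barring sales until the patents in suit expire).
    """
    if generic_better_off:
        return "Judgment – Defendant Wins"  # 1 point to the Generic side
    return "Judgment – Plaintiff Wins"      # 1 point to the Brand side
```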
Settlements presented a different challenge. Because their terms are confidential and only limited information appears on the docket, we adopted a uniform rule: all settlements (whether labeled in PACER as Judgment – Settled or Dismissed – Settled) were scored 0.5/0.5 for both Brand and Generic. While we acknowledge that not all settlements are truly equal, this approach ensured consistency and avoided speculative scoring. Attorneys and firms received settlement scores, but judges were excluded since settlements involve no adjudication.
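Taken together, these rules reduce Table 1 to a fixed lookup. The sketch below simply encodes the table, with `None` standing in for an unscored dash.

```python
# Encoding of Table 1: each outcome maps to the points awarded to
# (Plaintiff, Defendant, Plaintiff Atty/Firm, Defendant Atty/Firm, Judge).
# None marks an unscored participant (a dash in Table 1).
OUTCOME_SCORES: dict[str, tuple] = {
    "Judgment – Defendant Wins":      (0, 1, 0, 1, 0),
    "Judgment – Plaintiff Wins":      (1, 0, 1, 0, 1),
    "Judgment – Settled":             (0.5, 0.5, 0.5, 0.5, None),
    "Dismissed – Settled":            (0.5, 0.5, 0.5, 0.5, None),
    "Dismissed – Voluntarily":        (None,) * 5,
    "Dismissed – Other":              (None,) * 5,
    "Transfer/Remand – MDL Transfer": (None,) * 5,
    "Transfer/Remand – Transfer":     (None,) * 5,
    "Statistical Closing":            (None,) * 5,
    "Non-reportable Closing":         (None,) * 5,
}
```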
Unlike ITC Section 337 investigations or broader commercial litigation, most ANDA cases involve only a single plaintiff and a single defendant. Where multiple parties were present, outcomes were usually aligned on each side, or cases were administratively divided into separate actions. This reduced complexity and allowed us to score each party’s role with accuracy.
We recognize that no scoring system can fully capture the strategic nuances of litigation, particularly in the absence of detailed settlement terms or insight into behind-the-scenes negotiations. Nevertheless, our objective was to create a framework that is consistent, transparent, and fair across all participants. Where the available record left ambiguity, we erred on the side of balance, granting partial credit rather than risk unfairly penalizing or rewarding participants.
In short, the cases eligible for scoring fell into two categories: (1) those with a rendered judgment or trial verdict, where points were awarded to all parties involved, and (2) those that were settled, where points were awarded equally to both sides but not to judges. This methodology ensures a comprehensive assessment of the ANDA landscape, while recognizing the limits imposed by the available case information.
Activity Score
The number of cases in which companies, their attorneys, firms, and judges were involved, whether as plaintiffs (Brands), defendants (Generics), or both, formed the basis of their Activity Scores. The Activity Score is simply the count of ANDA cases linked to each entity. For companies, attorneys, and firms, this includes cases handled as Plaintiff, Defendant, or in other roles such as counterclaimant (Overall Cases). For judges, the Activity Score is measured by the total number of cases presided over, without separating Brand and Generic matters. This approach captures every instance of participation in ANDA litigation.
Each participant in the study was assigned an Activity Score (“Cases” in the tables) and a corresponding rank (“Activity Rank” in the tables), reflecting the number of ANDA cases filed between July 1, 2020, and June 30, 2025. Success and Performance Scores (see below) were based on the subset of these cases that had an outcome or decision recorded in PACER as of July 7, 2025.
Because recent activity is widely viewed as a stronger indicator of current involvement, we down-weighted older cases. As a result, an entity with five cases filed in 2025 could rank higher than another with the same number of cases spread across the full five-year period. In addition, to narrow the gap between entities with very high caseloads and those with moderate ones, the Activity Score was calculated using a logarithmic function. This makes comparisons across different activity levels more balanced and meaningful.
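As an illustration only, the calculation can be sketched as follows; the half-life and logarithm shown here are assumptions, not the exact parameters used in our analysis.

```python
import math
from datetime import date

def activity_score(filing_dates: list[date],
                   as_of: date = date(2025, 7, 7),
                   half_life_years: float = 2.5) -> float:
    """Recency-weighted, log-compressed activity score (illustrative).

    Each case's weight halves every half_life_years, so five cases filed
    in 2025 outweigh five cases spread across 2020-2025, and the log1p
    compression narrows the gap between very high-volume and
    moderate-volume entities.
    """
    weighted = sum(
        0.5 ** ((as_of - d).days / 365.25 / half_life_years)
        for d in filing_dates
    )
    return math.log1p(weighted)
```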
Success Score
Plaintiffs (Brands) and Defendants (Generics) were scored for various ANDA outcomes, as outlined above and summarized in Table 1.
The Plaintiff Success Score is calculated as the sum of all points earned by an entity when acting as the Plaintiff (Brand). Entities receive full points for wins and half-points for settlements. This total is then divided by the number of scorable cases, multiplied by 100, and expressed as a percentage. The same calculation is applied across companies, attorneys, and law firms.
The Defendant Success Score is determined in the same manner but reflects outcomes in which the entity acted as the Defendant (Generic). The Overall Success Score combines results from both Plaintiff and Defendant cases. Since attorneys and law firms play a direct role in case outcomes on behalf of their clients, they were scored and ranked using the same methodology as their clients for all scorable outcomes, including settlements.
Judges, however, were evaluated only for adjudicated case outcomes, where they had a direct role in determining the result. Accordingly, all categories in Table 1 except settlements were considered. Each adjudicated case contributed a binary score: 0 for a Defendant win and 1 for a Plaintiff win. An average score closer to 1 indicates that the judge more frequently ruled in favor of the Brand.
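In code form, all of these Success Scores are the same averaging formula applied to different subsets of cases; the point lists below are illustrative.

```python
def success_score(points: list[float]) -> float | None:
    """Average Table 1 points over an entity's scorable cases, as a percent.

    Each entry is the point earned in one scorable case: 1 for a win,
    0.5 for a settlement, 0 for a loss. Unscored outcomes (dashes in
    Table 1) are excluded before calling this function.
    """
    if not points:
        return None
    return 100 * sum(points) / len(points)

# Illustrative data: the Overall Success Score pools both roles.
plaintiff_points = [1, 0.5, 0]   # a win, a settlement, a loss as Brand
defendant_points = [1, 0.5]      # a win and a settlement as Generic
overall = success_score(plaintiff_points + defendant_points)  # 60.0

# Judges are scored only on adjudicated outcomes (no settlements); an
# average nearer 1 means the judge more often ruled for the Brand.
judge_ratio = success_score([1, 0, 1]) / 100  # ~0.67
```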
To ensure fairness in the evaluation, we introduced the concept of a confidence interval, developed based on feedback from the ANDA community. Recognizing that entities with higher activity levels face greater challenges in winning every case, we adjusted the performance scores to balance both success rate and activity. This allows for a more meaningful comparison across entities with different levels of involvement.
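For readers interested in the mechanics, one standard construction with exactly this property is the lower bound of the Wilson score interval, sketched below as an illustration; it is not necessarily the precise interval used in our analysis.

```python
import math

def wilson_lower_bound(points: float, n: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval on a success rate.

    Entities with few cases are pulled further below their raw rate, so
    a perfect record over 3 cases no longer outranks a strong record
    over 50 cases.
    """
    if n == 0:
        return 0.0
    p = points / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# A 3-for-3 record (~0.44 adjusted) ranks below 45-of-50 (~0.79 adjusted).
print(wilson_lower_bound(3, 3), wilson_lower_bound(45, 50))
```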
Finally, to promote inclusivity, we extended the calculation of the Success Score to all attorneys, law firms, and companies involved in at least one terminated case. This ensures that even entities with limited activity are acknowledged, providing a comprehensive view of performance across the entire ANDA landscape.
Normalization Using Machine Learning
The outcome of an ANDA litigation case is shaped by a variety of factors, including the arguments and strategies of the parties involved, the decisions of the presiding judge, the specific ANDA products and patents at issue, and the expertise of the attorneys representing each side. Expert witness testimony and other case-specific elements can also play a critical role in influencing the final result.
Given this complexity, assessing the success of an individual, whether an attorney or an expert, within a single case is inherently challenging. It is difficult to separate their impact from the combined effect of other factors influencing the case. To address this, we leverage a large dataset spanning multiple cases and apply advanced computational methods and machine learning algorithms to evaluate performance across a variety of scenarios.
Our approach to calculating the Success Score is designed for both fairness and accuracy. By using a robust analytical model, we isolate the performance of each entity (company, attorney, or law firm) while minimizing the influence of external factors, such as the actions of other participants or the decisions of judges. This allows us to provide a more precise and unbiased assessment of individual contributions to case outcomes.
In practice, we implement sophisticated adjustments to recalibrate the scores of all parties and their representatives, taking into account the various factors that may affect performance. Once these adjustments are applied, we calculate the average success score for each participant by aggregating their performance across all relevant cases. This method provides a reliable, data-driven view of success, offering a fair comparison of individual and organizational performance in the complex landscape of ANDA litigation.
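As a simplified illustration of the idea (not our production model), a regularized regression over participant indicator features estimates each participant's contribution while holding co-participants constant. The data, feature set, and model choice below are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import Ridge

# Hypothetical case records: participants per side plus the scored
# outcome (1 = Plaintiff win, 0 = Defendant win).
cases = pd.DataFrame([
    {"p_firm": "FirmA", "d_firm": "FirmX", "judge": "J1", "outcome": 1},
    {"p_firm": "FirmA", "d_firm": "FirmY", "judge": "J2", "outcome": 1},
    {"p_firm": "FirmB", "d_firm": "FirmX", "judge": "J1", "outcome": 0},
    {"p_firm": "FirmB", "d_firm": "FirmY", "judge": "J2", "outcome": 0},
])

# One indicator column per participant; a regularized fit against the
# outcome then attributes the result across everyone involved.
X = pd.get_dummies(cases[["p_firm", "d_firm", "judge"]])
model = Ridge(alpha=1.0).fit(X, cases["outcome"])

# Each coefficient approximates a participant's isolated effect on the
# Plaintiff-win outcome, net of the co-participants in their cases.
effects = pd.Series(model.coef_, index=X.columns).sort_values()
```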
Performance Score
This additional scoring metric addresses a key consideration for clients evaluating an attorney’s performance: not just extensive experience, as reflected by high activity or caseload, but also a strong success rate. Because handling more cases can sometimes dilute measured performance due to the law of averages, we developed the Performance Score to account for both volume and effectiveness. By combining Activity and Success into a single metric, the Performance Score allows for fair comparisons across companies, law firms, and attorneys with different caseloads, providing a more complete and informative view of the ANDA landscape.
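As an illustration only, one simple way to blend the two components is a geometric mean of normalized activity and the success rate; the formula below is a sketch, not the exact combination used in our rankings.

```python
import math

def performance_score(activity: float, peak_activity: float,
                      success_pct: float) -> float:
    """Illustrative Performance Score: geometric mean of normalized
    activity and the success rate, scaled to 0-100, so that neither a
    large caseload nor a high win rate alone dominates the ranking."""
    if peak_activity <= 0:
        return 0.0
    norm_activity = min(activity / peak_activity, 1.0)
    return 100 * math.sqrt(norm_activity * success_pct / 100)
```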
It is important to emphasize that these scores are not a definitive measure of overall quality, skill, or impartiality of any firm, attorney, company, or judge. Just as even highly skilled doctors may face outcomes beyond their control, attorneys can experience losses due to factors outside their influence. Many may also take on complex cases that carry inherent risks. As such, we strongly encourage clients to perform their own due diligence, conduct careful analysis, and engage in interviews when selecting legal representation. Making decisions based on the specific needs of each case will always provide the best chance for successful outcomes.