Anticipat’s focus is simple: complete and accurate annotation of PTAB ex parte appeals decisions

Despite recent strides, the USPTO does not make it easy to extract all its data. This is especially true for ex parte appeals decisions from the Patent Trial and Appeal Board (PTAB), even though these decisions establish key data points about general patent prosecution. We discuss seven shortcomings of the PTO websites, along with Anticipat’s solution to each.

1) No centralized repository – If you are looking for a decision without knowing its authority (i.e., precedential, informative, or final), you will likely have to search three different databases on different web pages, because the different types of PTAB decisions are scattered across separate pages depending on their authority.

Anticipat houses all decisions in a single repository and labels each decision with its respective authority. To date, Anticipat has every publicly available PTAB appeals decision in its database.

2) Non-uniform and sporadic decision postings – The USPTO does not post every decision to the Final Decisions FOIA Reading Room webpage on its issue date. For example, of 100 decisions dated July 29, five may show up on July 29, fifteen on July 30 (still dated July 29 in the database), twenty on July 31, fifty on August 1, and the remaining ten trickling in over the following week. For anyone monitoring recent decisions, keeping track of which ones have already been reviewed takes time.

To fix this, Anticipat runs multiple redundant scrapers that check for backfilled decisions, making sure that every decision posted to the e-FOIA webpage is picked up. It also emails a recap of these annotated decisions on the 10th day to make sure that the complete set has been included.

3) Unreliable connection – Whether you’re just trying to load the main USPTO page or searching for a particular decision, the PTO site (especially the FOIA Reading Room) can be slow or even unresponsive.

Anticipat solves this problem by being hosted on a scalable cloud server. The site should never be down, even during peak traffic.

4) Limited search functionality – The Final Decisions page allows limited search (e.g., date range, Appeal No., Application No., text search). But none of these search capabilities is available for the 21 precedential and 180 informative decisions.

Even though the Final Decisions page allows for some search functionality, the searchable data underwhelms. First, the input fields can be extremely picky. For example, if you input an Application No. with a slash (“/”) or a comma (“,”), you get a “no results found” message. But for this particular input, the real problem is not that there are no results for the value entered; it is that you included a character not recognized by the program. The misleading message does not distinguish between a value that does not exist and a query whose format simply does not match the website’s expectations. Further, there is no search capability for some of the most useful types of data: art unit, examiner, judge, type of rejection, outcome of the decision, class, etc.

To overcome this, Anticipat permits loose input so that you unambiguously get the results you need without having to guess the required format. And it does this for decisions of every type of authority. Anticipat has also taken the time to supplement decisions with their respective application information, such as art unit, examiner, judges, grounds of rejection, and outcomes. Only Anticipat’s database allows you to find all those cases using the data most useful for your analysis.

5) Unorganized data display – In addition to not gathering the data into one repository, as discussed in 1), the organization within the Final Decisions page is lacking. In its defense, the PTO does provide some organization to the various decisions. It organizes Final Decisions by (D) – Decision, (J) – Judgment, (M) – Decision on Motion, (O) – Order, (R) – Rehearing, and (S) – Subsequent Decision. However, the page does not allow you to display decisions by each type. Indeed, this organization of decision types feels more like an afterthought than a way for users to effectively organize the data. Further, the organization does not go far enough. For example, within (D) – Decision are reexaminations, reissues, inter partes reviews, covered business methods, decisions on remand from the Federal Circuit, and regular appealed decisions. There is no way to filter these different types of decisions from each other without manually screening every decision in the results list.

To fix this, the Anticipat database tracks the various types of decisions so that one can easily filter by certain subsets of decisions or search within specified subsets. Each sortable column can be sorted in ascending or descending order. Columns of other information can be added by selecting the checkboxed fields.

6) Downtime from 1:00 AM – 5:00 AM EST – Every morning, the PTO takes the FOIA Reading Room website offline for maintenance. This may not be a big deal to some people, but for someone in another time zone or just in night-owl mode, this four-hour window can keep you from the decision or data you need.

Being hosted on a cloud server, Anticipat has no regularly scheduled maintenance downtime. You are free to use it at all hours of the day.

7) Errors – Coming from a federal government website, it’s understandable that some of the decision data contain errors. Some errors are minor, such as a decision’s name being cut off because it includes an apostrophe. Others are more consequential, like matching a decision to the wrong application number or merging two decisions into a single entry. Because every decision in the Anticipat database is verified using our proprietary systems, we work hard to catch and resolve the errors in the source data of every decision.


In conclusion, because of the above-discussed deficiencies, ex parte PTAB data have been consistently overlooked: they simply cannot be effectively retrieved and analyzed by practitioners. While you may not realize it yet, this may be costing you time and money. Anticipat alleviates these deficiencies. Access the Research Database here.


Introducing Rejection Tags: A Way to Use Rationales and Types of Rejections for Patent Prosecution

In 1753, Swedish botanist Carl Linnaeus introduced a system for classifying plants. His two-term classification system assigned each organism a generic name followed by a more specific name (e.g., Homo sapiens for humans). The system itself was different from previous classifications, but not extraordinary. Its elegance and simplicity, however, truly were groundbreaking, paving the way for all living organisms to be systematically and uniformly classified.

As the father of modern taxonomy, Linnaeus would be stunned to see how far technology has taken classification. Entire industries have been revolutionized through improved classifying of big data. And patent prosecution is no exception. We at Anticipat are excited to introduce “tags” on the Research Database for all grounds of rejections as a way of classifying patent prosecution rationales.

If the Board decision equivalent of a genus is a ground of rejection, then the species is the tag. The Anticipat Research Database has long included the more generic grounds of rejection for each appealed decision (e.g., §101, §102, §103, §112, OTDP, etc.). But because of the wide-ranging categories of common reasons why a rejection is reversed or affirmed, it is helpful to dig deeper than cataloging the ground of rejection. This deeper level represents the various possible points of contention. Identifying these specific categories allows you to find the decisions relevant to a certain issue without drowning in the ocean of Board decisions.

In short, a tag is a brief summary of a more granular point of contention regarding the ground of rejection. Assume that an applicant and Examiner are at odds over a ground of rejection. Depending on the rejection, this disagreement can turn on a finite number of points. Now with tags, you can easily look up the various points of contention for each ground of rejection. In other words, if you believe that a particular obviousness argument is worth pursuing, you can find decisions where the Board reversed an Examiner on that very argument. Or if you don’t know which argument is worth pursuing, you can quickly find the arguments that have been most successful in reversing Examiners. Here are examples of the ground of rejection/tag classification system.

  • 101 – nonstatutory subject matter

Some of the tags for the ground of rejection “§101 – nonstatutory subject matter” include:

  • Software/Data per se
  • Abstract Idea (prima facie case, step one, step two)
  • Law of Nature (prima facie case, step one, step two)
  • Naturally-Occurring Phenomenon (step one, step two)

Finding all the decisions with a particular ground of rejection is just the first step. Even more useful is weeding out less relevant decisions that fall within the same ground of rejection category. Take the abstract idea rejection. There are many points of contention within §101 nonstatutory rejections that are less relevant to abstract ideas: computer readable medium comprising a signal, software per se, combining multiple classes, law of nature, naturally-occurring phenomenon, claiming a human, etc. Even within abstract idea, there are multiple points of contention, such as 1) prima facie case (that the Examiner did not even do the minimum job in identifying and/or rejecting the claim as an abstract idea), 2) step 1, and 3) step 2 of the Mayo/Alice framework. Since we at Anticipat track all of these subcategories for you, you can find decisions with your desired point of contention in seconds.
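To make the idea concrete, here is a minimal sketch in Python, using entirely hypothetical records and field names (not Anticipat’s actual schema or API), of how a ground-of-rejection/tag lookup narrows a search to one point of contention:

```python
# Hypothetical decision records: each has a ground of rejection, one or
# more tags (points of contention), and the Board's outcome for that ground.
decisions = [
    {"appeal": "2016-000001", "ground": "101 nonstatutory",
     "tags": ["Abstract Idea (step two)"], "outcome": "reversed"},
    {"appeal": "2016-000002", "ground": "101 nonstatutory",
     "tags": ["Software/Data per se"], "outcome": "affirmed"},
    {"appeal": "2016-000003", "ground": "101 nonstatutory",
     "tags": ["Abstract Idea (prima facie case)"], "outcome": "reversed"},
]

# Pull only the reversals that turned on an abstract idea point of contention.
hits = [d for d in decisions
        if d["outcome"] == "reversed"
        and any(t.startswith("Abstract Idea") for t in d["tags"])]

print([d["appeal"] for d in hits])  # ['2016-000001', '2016-000003']
```

The filter skips the Software/Data per se affirmance entirely, which is the point of tagging: the same §101 ground splits into separately searchable sub-issues.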


  • 103 – obviousness

The ground of obviousness under 35 U.S.C. 103(a) includes our most advanced set of tags. This makes sense, as obviousness is one of the most nuanced and developed grounds of rejection. We keep track of over 20 points of contention within obviousness, such as the following:

  • Scope and Content of Prior Art – Broadest Reasonable Interpretation
  • Examiner Bears Initial Burden (Prima Facie Case)
  • Clear and Factually-Supported Articulation of Reasons for Obviousness (Prima Facie Case)
  • Hindsight Reasoning (Prima Facie Case)
  • Secondary Considerations
  • Combining/Substituting prior art elements according to known methods to yield predictable results
  • Use of known technique to improve similar devices (methods, or products) in the same way
  • Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results
  • “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success
  • Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one
  • Proposed modification cannot render the prior art unsatisfactory for its intended purpose
  • Teaching away
  • Non-Analogous Art

The challenge with trying to find relevant arguments to overcome an obviousness rejection, for example, is that any case decided by the Board could include one or more of these obviousness points. Given the volume of obviousness decisions, finding relevant decisions has been impractical.

The point of tags

You can do a lot of interesting things with tags. Suppose, for example, you are stuck on a particular rationale used by an Examiner that you think is unreasonable. You can easily match your issue with cases decided at the Board and use Anticipat to quickly pull the decisions where the Board agreed with you. If there are very few decisions on your side, that is a valuable reality check. If there are many decisions on your side, you can review them to double-check that your facts correspond with those in the decisions. You can also use the legal authority relied on by the Board for this particular point of contention in persuading your Examiner.

Tags will be incorporated into our soon-to-be-released Practitioner Analytics page to guide prosecution strategy. Using the Practitioner Analytics interface, tags can be ranked by how frequently they overturn an Examiner’s particular rejection. You can thus find better arguments faster and with more confidence. Sign up for an invitation to Practitioner Analytics.

Board decisions show that independent judges have agreed with the applicant’s position in a related case.  They are a powerful way to check and augment your existing experience.


You don’t have to be a famous botanist to appreciate how identifying Board decisions using a rejection/tag relationship is a simple but powerful way of describing how the Board decides cases today. In the aggregate, this structure provides targeted information to inform your patent prosecution strategy. At about $1 per day, the Anticipat Research Database is not only incredibly affordable; at current hourly billing rates it pays for itself in just seconds a day. Try it now for free.

Too Simplistic: How the USPTO measures outcomes for ex parte PTAB appeals


A patent applicant usually decides to appeal a rejection as a last resort because of the substantial cost and time. When the applicant decides to overlook the substantial cost and time, it is because she believes independent judges will objectively overturn at least one (but hopefully all) of the rejections. These administrative patent judges (APJs) have experience, technical backgrounds, and independence from Examiners. So if this body of judges were to sustain Examiners’ rejections most of the time, you would think that the Examiners are doing a good job of examining applications. And if the Examiners are doing well, it would appear that the U.S. Patent & Trademark Office (USPTO) is doing well. But it’s not.

Currently, the USPTO measures decision outcomes of ex parte appeals in three ways: affirmed, affirmed-in-part, or reversed. This is highlighted by the USPTO’s recently released statistics on outcomes of ex parte appeals for FY2017. These stats show that the Patent Trial and Appeal Board (PTAB) very frequently upholds Examiners on appeal, with a 55% affirmance rate, consistent with previous years. These affirmance rates suggest a job “well done” by the USPTO. However, the way the USPTO counts affirmances yields counterintuitive and misleading results, especially with cases involving multiple grounds of rejection. Indeed, for accountability purposes, this way of measuring appeals cloaks the failures of the USPTO’s Examining Corps.



The USPTO currently measures ex parte appeals in relation to the total appealed claims—not the total pending rejections. If all of the appealed claims stand rejected under at least one ground of rejection, the decision is affirmed. Thus, a single affirmed ground of rejection covering the appealed claims is enough for a decision to be marked affirmed by the USPTO. Under this measuring system, assuming an Examiner rejects all claims under five different grounds, the decision is marked affirmed even if the Board reverses four of the five grounds.
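The counting rule described above can be sketched in a few lines of Python (a hypothetical illustration, not the USPTO’s actual tooling). Note how reversing four of five grounds still yields “affirmed”:

```python
# A decision is "affirmed" if every appealed claim still stands rejected
# under at least one affirmed ground, no matter how many grounds fell.
def uspto_outcome(claims):
    """claims: dict of claim number -> list of (ground, affirmed?) pairs."""
    still_rejected = [
        any(affirmed for _, affirmed in grounds)  # one surviving ground suffices
        for grounds in claims.values()
    ]
    if all(still_rejected):
        return "affirmed"
    if any(still_rejected):
        return "affirmed-in-part"
    return "reversed"

# Five grounds against the claim; the Board reverses four of the five.
decision = {1: [("103", False), ("101", False), ("112(a)", False),
                ("112(b)", False), ("double patenting", True)]}
print(uspto_outcome(decision))  # prints "affirmed"
```

Under a per-ground count, the same decision would read as four reversals and one affirmance, which is the discrepancy this section describes.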

The way the USPTO measures appeals undermines the use of ex parte appeals data for accurate accountability in two ways. First, the data do not show which grounds of rejection get overturned on appeal. As we have previously pointed out on this blog, several individual grounds are currently being completely reversed at rates over 50%. This means that for certain legal grounds of rejection, an Examiner’s rejections are bad over half the time. This is obviously not very favorable to the USPTO. But when bad rejections get overlooked because of one affirmed rejection, any accountability for an Examiner’s bad rejections is lost.

The way the USPTO measures affirmances also skews how often Examiners are truly upheld, because not all grounds of rejection are equally critical for the application to move forward. In fact, some rejections require only minor claim amendments or Terminal Disclaimers that barely affect the patent protection. Any system of measuring outcomes should take into account the actual effect of the specific grounds of rejection to provide true accountability. It becomes difficult to measure accountability of examiners, art units, tech centers, etc., when trivial affirmances by the Board mask substantive reversals.

For example, a recent decision, Ex parte Lee et al., had two grounds of rejection on appeal: obviousness and double patenting for the same claims. The Board reversed the rejection on obviousness, but because the appellant did not argue the double patenting rejection, the Board summarily affirmed the double patenting rejection. The appellant did not fight the double patenting rejection because of an intention to file a Terminal Disclaimer, which would have rendered the rejection moot. However, because of the non-substantive affirmance of double patenting, the entire decision is marked as affirmed.   This, when all three Examiners involved with this case got, in the Board’s view, the substantive legal issue of obviousness completely wrong.

The outcome for Ex parte Lee is far from what one might expect. One would expect if the appellant won on the only issue it actually argued, then that outcome would be marked as “reversed.” Even being generous, you could permit an outcome “affirmed-in-part,” considering the Examiner did get affirmed on one issue (even if the affirmed ground was not on the merits). But you would certainly never consider this decision as affirmed—the application is going to issue as a patent. However, the bizarre outcome of “affirmed” is exactly how this decision is counted and reported to the public by the USPTO.

The second way the USPTO’s measuring system is deficient is that it does not show how many of the rejections get overturned. If the USPTO needs only one ground of rejection to qualify an appealed decision as affirmed, the tracking system effectively ignores the outcome on appeal of all remaining rejections. This greatly skews the data in favor of counting affirmances. In fact, because most decisions involve more than one ground of rejection, mixing accurate one-ground decisions with inaccurate multiple-ground decisions makes the USPTO’s affirmance statistics almost meaningless. It certainly does not accurately reflect the accountability of an Examiner’s rejections.

This is also true for the other senior Examiners involved in the appeal process. Before every appealed case, an appeal conference takes place consisting of the Examiner, the Examiner’s supervisor, and another primary examiner. For the appeal to proceed to the judge panel, the conferees must sign off that they agree the Examiner’s current rejections would likely be affirmed by the APJs on the Board. In other words, before the judges even hear the case, the appeal conference has the authority not to take the case to the panel. The conferees can instead disagree with the pending rejections by issuing a Notice of Allowance or by reopening prosecution with a new Office Action. So for a decision that makes it to the judge panel, one might assume that the supervisor and another primary examiner fully agree with every ground of rejection as it stands.

But with the current way of measuring appeal decisions involving multiple issues, if only one of those grounds of rejection sticks, the Examiner and the appeal conference did a “good job”: they were affirmed! Thus, the appeal conference examiners really only need one of the rejections to be “good enough” for the appeal to proceed. Given the variety of rejections, the conferees can pick cases they are sure have a “good enough” rejection, without adversely affecting their reputation.

The USPTO’s practice of measuring outcomes would not necessarily be skewed if there were only one ground of rejection per decision. Nor would it be skewed if decisions with multiple grounds of rejection were properly designated as “affirmed-in-part” when the Board reverses on one ground and affirms on another. However, since most decisions involve multiple issues, the outcomes data count one affirmed ground as a full affirmance, overshadowing the remaining grounds.

From the way the USPTO currently advertises its appeals statistics, the agency seems proud of its affirmance rates. This, because the USPTO’s way of measuring affirmances happens to be favorable to the USPTO. But if you accept the USPTO’s affirmance rates, you get counterintuitive and misleading results, especially with cases involving multiple grounds of rejection. The current measuring system lacks the necessary granularity, and the public only sees a roll-up of the flawed affirmance data.

Since certain rejections are reversed more often than others, and since there is wide variability across tech centers and art units, having additional granularity on appeals is critical to drawing meaning from the publicly available data. Without a more comprehensive way of measuring outcomes based on what substantively happened in each appeal, the USPTO Examining Corps is not truly held accountable.

A more accurate way of measuring appeals is keeping track of the outcome for each ground of rejection. This is exactly what Anticipat Research Database does. An important part of Anticipat’s mission is extracting value from appeals decisions by devising an intuitive way of processing decisions.

The Anticipat research database keeps track of the outcome for each ground of rejection in ex parte appeals, for greater precision. You can see which specific rejections are being reversed across various art units, tech centers, etc. This more accurate data may not make for a neat pie chart, but it is far more useful for accurately holding the USPTO accountable. It is also helpful for setting expectations for patent prosecution strategies and evaluating the strength of rejections. With the data, you can even see, for the first time, how often the Board agrees with the Examiner’s supervisor.
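As a minimal sketch (hypothetical records, not Anticipat’s actual data model), per-ground tracking amounts to tallying an outcome for every ground rather than assigning one label per decision:

```python
# Tally affirmed/reversed outcomes per ground of rejection.
from collections import defaultdict

records = [  # (ground of rejection, Board outcome for that ground)
    ("103 obviousness", "reversed"),
    ("double patenting", "affirmed"),
    ("103 obviousness", "affirmed"),
    ("101 nonstatutory", "reversed"),
]

tallies = defaultdict(lambda: {"affirmed": 0, "reversed": 0})
for ground, outcome in records:
    tallies[ground][outcome] += 1

for ground, t in sorted(tallies.items()):
    total = t["affirmed"] + t["reversed"]
    print(f"{ground}: {t['reversed'] / total:.0%} reversed")
```

With this structure, a decision like Ex parte Lee contributes one reversal (obviousness) and one affirmance (double patenting) instead of a single misleading “affirmed.”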

Click here for a free seven-day trial.

Looking at Abstract Idea Appeals by Tech Center

Anticipat Beta Research Database recently released the tech center search filter and the tech center column in the table display. Appeals data can now be examined for trends using tech center groupings. In this post, we will showcase one example of how this might be useful using the “abstract idea” ground of rejection.

In the past seven months, 123 decisions have been decided on the “abstract idea” ground of rejection. Here is the breakdown of the “abstract idea” nonstatutory subject matter rejection across all tech centers.


Tech center   1600  1700  2100  2400  2600  2800  3600  3700  3900
Decisions        6     1     7     9     7     2    75    13     1

Tech center 3600 is software- and business method-heavy. In the immediate aftermath of the Supreme Court’s Alice decision, many suspected that these types of patents and patent applications would be on the chopping block. And many examiners have sought to enforce such an approach. Small wonder, then, that this tech center leads in abstract idea appeals.
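As a quick spot-check, tech center 3600’s share of decided abstract idea appeals can be computed from the counts in the table above (transcribed here verbatim):

```python
# Decided "abstract idea" appeals per tech center, from the table above.
abstract_idea = {"1600": 6, "1700": 1, "2100": 7, "2400": 9, "2600": 7,
                 "2800": 2, "3600": 75, "3700": 13, "3900": 1}

share_3600 = abstract_idea["3600"] / sum(abstract_idea.values())
print(f"Tech center 3600: {share_3600:.0%} of decided abstract idea appeals")
# prints "Tech center 3600: 62% of decided abstract idea appeals"
```

Roughly three of every five decided abstract idea appeals come out of this single tech center.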

To put these numbers in context, we looked at the intake number of appeals per tech center. FY17 intake is only a rough comparison point for FY17 decided appeals, since it takes many months for a newly filed appeal to be decided.


Comparing appeal intake with decided abstract idea appeals, tech center 3600 is very much overrepresented. Setting aside the two tech centers with the lowest counts, tech center 3600’s appeal intake is anywhere from two to four times that of the other tech centers. But its decided abstract idea appeals are a much different story: anywhere from 5 to 36 times those of the other tech centers.

An obvious reason for this overrepresentation is that the abstract idea doctrine does not affect all tech centers equally. Tech centers 1600 and 1700, for example, focus primarily on the biotech and chemical arts, where other judicial exceptions to statutory subject matter are more applicable (e.g., law of nature, naturally-occurring phenomenon). Other tech centers focus on physical inventions, where the abstract idea ground of rejection is theoretically less applicable. For example, tech center 2800 includes inventions in semiconductors, memory, circuits, optics, and printing. And as can be seen, the ratio of abstract idea appeals in tech center 3600 to those in tech center 2800 is far greater than the ratio of their corresponding intakes.

Another reason for the overrepresentation stems from the large number of abstract idea rejections introduced as “new.” For the relevant time period, there were 16 new rejections in this tech center alone. That number exceeds any other tech center’s total number of abstract idea decisions of any disposition. This shows that the abstract idea doctrine is top of mind for judges deciding these types of cases, even when the Examiner did not apply an abstract idea rejection in the appeal.

A final reason for the overrepresentation may stem from the high volatility of the abstract idea doctrine. We generally assume, based on anecdotal evidence, that filing an appeal is a (comparatively) arduous alternative to working with the Examiner through argument, interviews, and amendments to overcome rejections. Thus, appealing a case requires a level of dedication born of reaching an impasse with the Examiner. Such an impasse often occurs when the law is in flux: the Examiner holds one position and the applicant holds a contrasting one. And with software-focused inventions, such as those in tech center 3600, the impasse is often the Examiner asserting that there is nothing patent-eligible in the application. Such a position leaves little room for any procedure but appeal.

It would be discouraging if the judges in tech center 3600 supported the abstract idea chopping-block attitude reflected by some. But the outcomes data show that they do not. We previously reported that the rates at which abstract idea rejections are reversed are in line with other Section 101 nonstatutory rejections: 25% wholly reversed and 27% at least partially reversed. Drilling into the abstract idea rates at the tech center level, we found that the same reversal rates hold for tech center 3600. That is, 19/75 decisions (25%) in tech center 3600 were wholly reversed and 20/75 decisions (27%) were at least partially reversed. This shows that even in the face of significant abstract idea rejections in tech center 3600, judges reverse at the same rate as in other tech centers.

The above discussion shows that, using tech center-focused appeals data, a quantitative assessment of the strength of an Examiner’s position on a particular ground of rejection can be made by comparing that ground across technology centers. While we have done this for “abstract idea” grounds in this post, the same analysis can be done for the other grounds of rejection reviewed by the PTAB. With this information, the odds of success on appeal can be better calculated based on which technology center is examining a particular case.


Section 101 – nonstatutory subject matter decisions: different categories = different reversal rates



One of the sexiest topics in all of patent law has become §101, specifically patent-eligible subject matter. Part of the recent appeal stems from the high volatility and uncertainty in the law. But not all categories of patent-eligibility grounds are in such flux. Some §101 nonstatutory grounds of rejection (e.g., reciting a propagated signal) are relatively predictable and stable, while the so-called judicial exceptions are more unpredictable. So we drilled deeper into the types of §101 rejections to get a more complete picture of reversal rates. We found big differences in the observed reversal rates across particular categories.

The following chart shows a breakdown of the past seven months of decisions on grounds of §101 – nonstatutory subject matter. Data for this chart was pulled from the past seven months using Anticipat’s research database. Anticipat keeps track of issue-specific tags to allow for better identification of sub-issues within issues. So while the Examiner and PTAB may decide a particular issue on §101 – nonstatutory subject matter grounds, Anticipat goes a step further to delineate the specific type of §101 – nonstatutory subject matter ground.


  1. Statutory Classes

Section 101 nonstatutory rejections include the statutory-class variety (e.g., does the claimed invention fall within the statutory classes? recite more than one class? claim a human?). This is otherwise known as step 1 of the Mayo/Alice framework. Of the 186 substantively decided §101 decisions since July 25, 2016, these step 1 types accounted for 38, or about 20%. Twelve were wholly reversed, a reversal rate of 32%.

This higher reversal rate for statutory classes makes intuitive sense. Administrative patent judges understand that, statutorily, patent-eligibility is broad. A process, machine, manufacture, or composition of matter covered almost any human innovation at the time of the enactment of the 1952 Patent Act. As technology has since changed, not all inventions fit into this framework, such as propagated signals and software per se. But for the most part, the courts have fit inventions into these categories, from non-transitory computer-readable media to engineered bacteria. The observed reversal rate indicates that judges may reverse Examiners in an attempt to be more faithful to the statute and the case law than the Examiners are.

  2. Judicial Exceptions

The judicial exceptions to patent-eligibility, such as abstract ideas, laws of nature, and natural phenomena, have surged in popularity in recent years. And the appealed decisions show it. Of the 186 decisions within the past seven months, 146 involved judicial exceptions. The most popular of the exceptions is the abstract idea.

Of the 119 abstract idea cases, 30 were wholly reversed and two were reversed in part, for a complete reversal rate of 25% and an at-least-partial reversal rate of 27%. This falls squarely within the overall §101 rates we previously reported. The natural phenomenon/product of nature reversal rate is slightly higher at 31%, while the law of nature reversal rate is markedly lower at 7%.
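The percentages above follow directly from the counts; as a quick arithmetic check:

```python
# Abstract idea reversal rates from the counts in the text.
wholly_reversed, partly_reversed, total = 30, 2, 119

complete_rate = wholly_reversed / total
at_least_partial_rate = (wholly_reversed + partly_reversed) / total

print(f"complete reversal: {complete_rate:.0%}")                  # prints "complete reversal: 25%"
print(f"at least partial reversal: {at_least_partial_rate:.0%}")  # prints "at least partial reversal: 27%"
```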

  3. Analysis

The rates for some of these categories will become more reliable as the number of decisions increases, but some take-home lessons are already clear. A judicial exception rejection has a lower chance of being reversed than a statutory-class rejection. The PTAB judges are likely averse to overruling an Examiner’s finding of a judicial exception, especially when there is a great deal of uncertainty in the courts.

Furthermore, law of nature rejections are very infrequently reversed. Part of this may be a lack of positive case law specifically showing law of nature rejections to be erroneous. By contrast, several Federal Circuit decisions within the past year have been favorable to the patentee/applicant in showing that claims are not directed to an abstract idea. Because of this, judges have more material to work with in finding that a particular claimed invention passes the Alice framework.


Rates for New Rejections on Appeal


Opening up a can of worms is good practice on the lake, not in front of a PTAB judge panel. But when appealing a twice-rejected patent application, a can of worms can very well be opened when the Board introduces new rejections. These new rejections are not especially frequent, but understanding their risk is an important part of deciding whether to take a case on appeal.

In deciding rejections on appeal, the Board has discretion to introduce, sua sponte, a new rejection of the pending claims. In other words, an appeal is made to the Board seeking to overturn one rejection, and in return the Board slaps on a different, additional rejection. The Board can also designate an existing rejection as new by relying on a rationale different from the Examiner’s, but this article focuses on the purely new type of rejection.

These entirely new rejections are introduced somewhat unpredictably. Thankfully, our appeals decisions data can help predict the risks of these new rejections.

A sample of 12,376 decisions over the past four years shows that purely new rejections are relatively rarely applied (1.7%). Data was gathered using Anticipat’s beta research database. It turns out that the Board overwhelmingly prefers to introduce some grounds of purely new rejection over others. Here is the breakdown:


As can be seen, §112(b) is the most frequent purely new rejection, with 78 decisions. Second place goes to §101 nonstatutory subject matter rejections with 74. Next are the §112(a) rejections with 29, then §103 obviousness with 16, followed closely by §102 anticipation with 11, and §112(d) with 3. Finally, there is one obviousness-type double patenting new rejection.

Some of these numbers are intuitive. As seen from a previous post on the frequency of appealed rejections, §103 rejections are at issue in 92% of appealed decisions. It is thus unsurprising that the Board applies few new §103 rejections to cases that do not already have one.

Section 101 is an increasingly frequent new rejection. Many of the recently issued appeals decisions are for cases whose appeals were filed before the Supreme Court decided Alice v. CLS Bank. The Board appears to be thinking about patent-eligibility much more proactively. In addition to formally issuing new §101 rejections, in a number of cases we have seen the Board suggest in its decision that the Examiner consider §101 as a possible new rejection.

Also important to note: even the most frequent of these purely new rejections is fairly rare. The most frequent, §112(b), is newly introduced in only 0.63% of cases. Still, even at this low probability, receiving a new ground of rejection from the Board is a risk practitioners should consider when taking a case on appeal.
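The overall 1.7% figure and the per-ground 0.63% figure can both be reproduced from the counts in the breakdown above. A minimal sketch, using the stated sample size and counts:

```python
# Purely new rejections by ground, counts as stated in the text
new_rejections = {
    "112(b)": 78,
    "101 nonstatutory": 74,
    "112(a)": 29,
    "103 obviousness": 16,
    "102 anticipation": 11,
    "112(d)": 3,
    "OTDP": 1,
}
sample_size = 12_376  # decisions in the four-year sample

total_new = sum(new_rejections.values())  # 212 decisions with a purely new rejection
print(f"overall new-rejection rate: {total_new / sample_size:.1%}")                # 1.7%
print(f"112(b) new-rejection rate: {new_rejections['112(b)'] / sample_size:.2%}")  # 0.63%
```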

Which rejections are most frequently appealed to the Board?

Examiners typically have a favorite arsenal of grounds of rejection to apply to an application; in other words, not all grounds of rejection are applied evenly. It is therefore small wonder that the types of rejections appealed to the Board are also non-uniform. Knowing which types of rejections are currently being appealed (and seeing the results of those appeals) arms the patent prosecutor with an additional weapon against the Examiner. Here is a breakdown of the types of rejections appealed to the Board.


The big winner is obviousness. In a sample, 92.31% of the cases on appeal included an obviousness rejection. This means that almost every decision on appeal had an obviousness rejection.

Second place is anticipation. In the same sample, 28.08% had a §102 rejection. Since multiple issues can be appealed simultaneously, almost every case was appealing either a §102 or a §103 rejection.

Third place is §112(b), which includes indefiniteness and omission of essential elements. 7.05% of decisions had a §112(b) rejection.

Fourth place is §112(a), which includes written description, enablement, best mode and new matter. In the sample, 6.35% had a §112(a) rejection.

Fifth place is §101 – nonstatutory subject matter. In the sample, 3.57% had a nonstatutory subject matter rejection.

Next is obviousness-type double patenting. While technically 3.32% were on appeal, many of these were not argued by the appellant, suggesting that the appellant was in fact not appealing this particular rejection. Thus, obviousness-type double patenting may be even lower on the list in terms of issues actually being appealed.

Next is §112(d), which was included in 0.24% of the decisions.

Finally are the other §101 grounds, such as lack of utility and statutory double patenting; 0.22% of decisions involved one of these rejections.

Data was gathered using Anticipat’s beta research database. The sample consisted of 12,358 decisions from December 26, 2012 to January 9, 2017, filtered by the various rejection types. Decisions whose relevant issues were introduced only by the Board (new rejections) were then identified and removed from the data set, leaving only rejections that had actually been on appeal.

Knowing which rejections get appealed shows three things. First, the types of appealed rejections show which rejections applicants are willing to bet significant time and money on the Examiner being wrong about. Second, the data correlate with the frequency of rejections applied in Office Actions: the more often a rejection is applied, the more often it is appealed. Third, where the appeal data diverge from Office Action data, that divergence can signal that the area of law is in flux. As we have previously discussed, not all rejections are appealed with equal success. Indeed, knowing the results of the different types of rejections on appeal gives a practitioner predictive analytics for the likelihood of succeeding on appeal.

Non-statutory Obviousness-Type Double Patenting Decisions by the PTAB

Non-statutory obviousness-type double patenting.  It is quite a mouthful, just as difficult to say three times fast as it is to understand when slogging through the relevant sections of the Manual of Patent Examining Procedure (MPEP).  While the name and the analysis are complicated, the principle is not: the courts have held that a patent applicant should not be allowed to patent the same invention twice merely by filing a second patent application on it later in time.

Reviewing approximately the last six months of ex parte appeals decisions by the Patent Trial and Appeal Board (PTAB), the Board considered this issue in 197 decisions.  In 37 cases, the Board reversed the Examiner’s determination for at least one claim; in 29 of the decisions, the Board’s reversal was for all claims.

The total reversal rate (14.7%) for this issue is lower than that observed over the same period even in non-statutory subject matter cases (see our most recent posts).  We suspect there are many reasons for this result.  Here are some possibilities:

  1. Appellants may not be contesting this issue on appeal as it can be relatively easily handled later through filing a terminal disclaimer. This is supported by the large number of summarily affirmed dispositions, meaning the Board affirmed the rejection because the applicant failed to raise any arguments on the issue.
  2. The issue is a narrow, technical one that Examiners and the Board find more black and white.
  3. For many industries (other than biotech or pharmaceuticals), extra patent term is not as commercially important, so many applicants are choosing not to incur additional cost seriously contesting this issue on appeal. Instead, they are fine with filing a Terminal Disclaimer to render the rejection moot.

Current PTAB Decisions of §101 – Nonstatutory Subject Matter

Over the past six months, 144 ex parte appeals decisions involving 35 U.S.C. §101 have been handed down that specifically address whether the claims at issue are drawn to statutory subject matter. This is the most rapidly evolving area of IP law today, and in our experience one of the most ambiguous, given what seems to be an “I’ll know it when I see it” application of the Supreme Court’s two-part test set forth in Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. __, 134 S. Ct. 2347 (2014).

In the 144 decisions, the Board reversed the Examiner’s finding that the claims were directed to non-statutory subject matter in 37 cases–25.7% of the time.  Of these 37 cases, 36 involved a reversal as to all claims on appeal.

Compared to the reversal rates for obviousness and novelty (see our other posts), the Board upholds non-statutory subject matter rejections much more often.  However, the roughly 25% total reversal rate indicates that, even in the unsettled world of section 101, the Board disagrees with the Examiner in one out of four cases.

The data indicate that the patent-eligibility provision is not being treated by the PTAB as the exception that swallows the rule, which has been the fear of many practitioners observing the steady stream of invalidations on section 101 grounds flowing out of the courts.  Instead, the data indicate that a substantial number of claims are, in the Board’s judgment, passing the two-part test.