“Particular Machine” Relied on in Overturning an Abstract Idea Rejection for NLP Invention



In its heyday, the machine-or-transformation test required that all process claims be implemented by a particular machine in a non-conventional and non-trivial manner or transform an article from one state to another. While the Supreme Court in Bilski v. Kappos overruled the Federal Circuit’s reliance on the test as the exclusive test for patent-eligibility, it left the test open as an important clue. In a recent decision, the PTAB shows that analysis of this test can be helpful in overturning an abstract idea rejection under step two of the Alice/Mayo framework.

In Ex Parte Milman, decided August 22, 2017, the PTAB overturned a § 101 nonstatutory rejection. The claimed invention applied natural language processing to free-text descriptions to generate an anatomical path for display on a graphical user interface. In step one, the claims were found to be directed to an abstract idea.

However, in step two of the analysis, the Board disagreed with the Examiner’s finding that the claims did not recite significantly more than the asserted abstract idea. The Examiner had found that the method is deployed on generic hardware and “the computer appears to perform only generic functionality that is well known in the art, e.g. processing data.” The Board found that the Examiner did not adequately show that the reliance on natural language processing capability involves a general purpose computer performing well-known generic functionality, rather than being a “particular machine” that is the result of implementing specific, non-generic computer functions. See Bilski v. Kappos, 561 U.S. 593, 601 (2010).

Ever since Bilski, the machine-or-transformation test has taken a backseat, even though the Supreme Court emphasized that it could serve as an important clue for patent-eligibility. In the meantime, the Supreme Court in Mayo and Alice cemented a two-step test for patent-eligibility that uses a different analysis than the machine-or-transformation test. But Ex Parte Milman makes clear that the machine-or-transformation analysis is still applicable. At least for claims that recite natural language processing, an Examiner must show that the claims do not recite a “particular machine” that is the result of implementing specific, non-generic computer functions.


Guide your patent prosecution strategy with Anticipat

We at Anticipat are excited to announce a new product called Practitioner Analytics. The tool helps practitioners use what is found to be successful on appeal at the Board in all aspects of routine patent prosecution. But before we explain the tool, we touch on some present realities of a patent practitioner responding to an Office Action.

Status Quo
As a patent professional, you may spend a lot of time reviewing Office Actions and determining response strategies. You may wade through each Office Action rejection-by-rejection. The complexities of patent law make this process difficult and time-consuming.

The gut feeling is a powerful way for the practitioner to approach each rejection. Maybe for one rejection, based on your experience and/or knowledge of the patent laws, your gut feeling tells you that the Examiner brings up a good point and you consider amending the claims. For another rejection, based on this same experience and knowledge, you see that a rejection is unreasonable so you consider traversing the rejection without amending the claims. For other rejections, you may initially not know how to proceed due to a lack of experience or up-to-date knowledge of the rejection.

So a practitioner’s gut feeling can guide the strategy in responding to the Office Action only so far, especially with constant developments in the law. In addition to being inefficient, there’s always a chance that the practitioner’s own experience is incomplete. Plus, throughout this whole process it can be difficult to gauge the strength of your strategy.

Furthermore, the client’s preferences can make the strategy even more complex, necessitating diving into seldom-explored areas of patent law. For example, the client may be intent on maintaining a certain claim breadth to guard against entrants into the market or to cover a competitor product, which makes the patent prosecution strategy more difficult. Hence you may have to rely on a less persuasive strategy in overcoming a particular rejection.

With all the complexities that go into patent law, do you ever feel like there must be a better way to keep current on response strategies in a more efficient, fact-based way?

Luckily, there is a large body of appeals decisions at the PTAB where judges routinely overturn Examiner rejections. The judges apply the rules and laws using the same arguments and legal support that Applicants can use to overcome rejections in responding to Office Actions. If an argument works before the Board, that argument has high odds of ultimately winning out. So in a way the Board weeds through much of the possible argumentation and distills the arguments effective in overcoming all kinds of rejections. And because of the sheer volume of appeals decisions, these decisions include rationales for overcoming practically every ground of rejection. Plus, because the decisions are authored by independent judges at the PTO, they are an accurate reflection of the standards and arguments used to scrutinize both Examiner and Appellant arguments.

The only problem is that these decisions are posted in bulk form with minimal search capabilities, the content of each decision is disorganized, and manually wading through the decisions is horrific information overload.

Also, the USPTO overly simplifies decision outcomes, which does not tell you very much about what happened in any given appeal decision. So how do you make use of the data in the thousands of appeals decisions that issue every year?

Solution: Anticipat Practitioner Analytics

Anticipat Practitioner Analytics provides more than statistics. It is a PTAB legal research tool that can quickly get you helpful fact-based information about arguments and strategy you can use for a specific application. How does it do this?

Practitioner Analytics powerfully and efficiently guides prosecution strategy. By inputting an application number into the Analytics search engine, the page returns lists of decisions where the Board reversed for various possible rejections.

This can help practitioners in three important areas:

Area 1: Organize persuasive arguments
Practitioner Analytics organizes rationales that the Board uses in reversing an Examiner’s ground of rejection. It does so by aggregating reversal rationales at the Board by each organizational level in the Office (Examiner, art unit, tech center). The specific legal rationales argued before the Board at each of these organizational levels are listed underneath a bar chart showing real reversal rates at each level. At the click of a mouse, the practitioner can select the legal issue in their specific case and see how it was treated in Board decisions coming from the Examiner involved, the Examiner’s Art Unit, the Tech Center, and then across the entire USPTO. The practitioner can then compare the facts in their case to those in a list of decided appeals where this issue was involved to further predict the outcome before the PTAB.

Practitioner Analytics improves the caliber of argumentation and saves time in legal research by organizing and ranking persuasive reversal rationales at the Examiner, art unit, tech center, and global USPTO levels for each ground of rejection.

Area 2: Assess strength of rejections
Appellants typically won’t spend the time and money on a full appeal if they’re not sure of their position. Similarly, weak Examiner positions tend to get weeded out by the pre-appeal conference and appeal conference. So the appeal decision is actually a good objective data point for the kinds of rejections the Examiner corps is not incentivized to back down from but will still lose at the Board. This information is invaluable when deciding whether to pursue an appeal.

Anticipat provides you with the percentage of reversed decisions at each level (Examiner, art unit, tech center, USPTO). The higher the reversal rate, the less reasonable the Examiner’s rejection.

This reversal rate information enhances a professional’s anecdotal experience by identifying anomalies in how a particular ground of rejection’s reversal rate at the Board compares across groups. This can guide a practitioner’s strategy in responding to Office Action rejections. That is, knowing how this particular Examiner’s or art unit’s reversal rate compares with other groups can suggest when to hold firm to a position. For practitioners with relatively little appeals experience in a particular technology, this data instantly tells you what is working and what is not, without having had to spend years learning in the School of Hard Knocks.

Area 3: Get favorable case law straight from the Board
Practitioner Analytics also stores the legal support cited by the Board in each particular decision for each legal issue (tag) identified.

This means that, in the aggregate, Practitioner Analytics provides the case law/MPEP/guidelines relied on to reverse or affirm the Examiner for each particular rationale at a mouse click. This allows you to keep current on relevant case law now being used by the Board and to identify trends in the persuasive legal authority used for the rejections in a specific case.

  • With Practitioner Analytics, you can use successful approaches at the Board in your own practice without having to wait decades to gain experience.
  • Practitioner Analytics empowers you with knowledge about the strength of rejections at the Examiner, art unit, and tech center levels.
  • Practitioner Analytics provides a simple and intuitive interface so that you can quickly identify successful reversal rationales for Examiner-, art unit-, and tech center-specific information.
  • Practitioner Analytics keeps you up to date on specifically tagged legal issues, referencing the case law the Board itself uses on each issue.
Anticipat Analytics enhances your ability to provide quality and cost-effective advocacy, saving you countless hours in legal research. Right now, try it with unlimited access for free for two weeks.

Update on ex parte PTAB Appeals Reversal Rates: High Reversal Rates Maintained Except for 101 – Nonstatutory Rejections


About six months ago, the AIPLA ex parte subcommittee published a paper that showed the reversal rates across various grounds of rejection. Some of the findings were very surprising, including over-50% reversal rates for Section 102 and 112 rejections. Here, we provide an update to this paper, which doubles the data set from the time of the AIPLA publication. We find that the reversal rates have not budged from the initial rates, outside of a downtick for Section 101 nonstatutory rejections. This signals that the surprising results were not a sample-size anomaly.


Section 101 – Nonstatutory

Of the 629 decisions, 130 were reversed and 7 affirmed-in-part. This translates into 21% pure reversals and 22% at least partial reversals.

Section 102 – Anticipation 

Of the 2187 Section 102 decisions, 1065 were reversed and 177 affirmed-in-part. This translates into 49% pure reversals and 57% at least partially reversed.

Section 112(a)


Of 203 decisions, 104 were reversed and 8 affirmed-in-part. This translates into 51% reversed and 55% at least partially reversed.

New Matter

Of 27 decisions, 13 were reversed. This translates into a 48% reversal rate.

Written Description

Of 531 decisions, 276 were reversed and 19 were partially reversed. This translates into a 52% reversal rate and 56% at least partially reversed.

In total for Section 112(a), out of 761 decisions, 393 (52%) were reversed and 55% were at least partially reversed.

Section 112(b) – indefiniteness

Of 806 decisions, 390 were reversed and 34 were partially reversed. This translates into a 48% reversal rate and 53% at least partially reversed.

Section 112(d)

Of 38 decisions, 16 were reversed and 1 was partially reversed. This translates into a reversal rate of 42% and 45% at least partially reversed.

Section 103 Obviousness

Of 9329 decisions, 3139 were reversed and 907 were partially reversed. This translates into a reversal rate of 34% and an at least partial reversal rate of 43%.

Obviousness type double patenting

Of the 418 decisions, 67 were reversed and 13 were partially reversed. This translates into a 16% reversal rate and a 19% at least partial reversal rate.
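The percentages above all follow the same simple computation: pure reversals divide full reversals by total decisions, and at-least-partial reversals add in the affirmed-in-part (or partially reversed) count. As a sanity check, here is a minimal sketch of that arithmetic using a few of the counts reported above (the function name is illustrative, not part of any Anticipat tooling):

```python
# Sketch of the per-ground rate computation used throughout this update.
# Counts are taken directly from the statistics reported above.

def reversal_rates(total, fully_reversed, partially_reversed):
    """Return (pure reversal %, at-least-partial reversal %), rounded."""
    pure = round(100 * fully_reversed / total)
    at_least_partial = round(100 * (fully_reversed + partially_reversed) / total)
    return pure, at_least_partial

grounds = {
    "101 nonstatutory":       (629, 130, 7),
    "102 anticipation":       (2187, 1065, 177),
    "112(b) indefiniteness":  (806, 390, 34),
    "103 obviousness":        (9329, 3139, 907),
}

for ground, counts in grounds.items():
    pure, partial = reversal_rates(*counts)
    print(f"{ground}: {pure}% reversed, {partial}% at least partially reversed")
```

Running this reproduces the figures in the corresponding sections (e.g., 21%/22% for Section 101 and 34%/43% for obviousness).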

Data Set

The above data was pulled using Anticipat Research in the range of 7/25/2016 to 7/25/2017. You can perform legal research for these grounds of rejection and others on Anticipat Research. Click here for a free trial to give it a try.


The past six months have shown that the high reversal rates for Sections 102 and 112 rejections reported previously are here to stay. While Section 102 reversal rates dropped some, 49% is still very high. Given the large number of decisions, especially for obviousness, it is interesting to note that the reversal rates are as stable as they are.

Meanwhile, the past six months have seen notably fewer reversals of Section 101 nonstatutory rejections. Specifically, a reversal-rate drop of 4% based on six months of additional decisions seems significant.


Anticipat’s focus is simple: complete and accurate annotation of PTAB ex parte appeals decisions

Despite recent strides, the USPTO does not make it easy to extract all its data. This is especially true for ex parte appeals decisions from the Patent Trial and Appeal Board (PTAB), even though these appeals decisions establish key data points about general patent prosecution. We discuss seven shortcomings of the PTO websites as well as Anticipat’s solution to each.

1) No centralized repository – If you are looking for a decision without knowing the authority (i.e., precedential, informative, or final), you will likely have to search through three different databases on different web pages. This is because the different types of PTAB decisions are scattered across different web pages depending on the authority of the decision.

Anticipat houses all decisions in a single repository and it labels each decision with the respective authority. To date, Anticipat has all publicly available PTAB appeals decisions in its database.

2) Non-uniform and sporadic decision postings – The USPTO does not post every decision to the Final Decisions FOIA Reading Room webpage on its issue date. For example, of 100 decisions dated July 29, five may show up on July 29, fifteen on July 30, twenty on July 31, fifty on August 1, and the remainder may trickle in over the following week, all still dated July 29 in the database. This makes it time-consuming to keep track of which recent decisions you have already reviewed.

To fix this, Anticipat has multiple redundant scrapers to check for any backfilled decisions, making sure that every decision posted to the e-foia webpage is picked up. And it emails a recap of these annotated decisions on the 10th day to make sure that the complete set has been included.

3) Unreliable connection – Whether you’re just trying to load the main USPTO page or whether you’re searching for a particular decision, the PTO site (especially the FOIA Reading Room) can be slow or even unresponsive in letting you access data.

Anticipat solves this problem by being hosted on a scalable cloud server. The site should never be down, even during peak traffic.

4) Search functionality limited – The Final Decisions page allows limited search (e.g., date range, Appeal No., Application No., text search, etc.). But none of these searching capabilities are actually available for the 21 precedential and 180 informative decisions.

Even though the Final Decisions page allows for some search functionality, the type of searchable data underwhelms. First, the input fields can be extremely picky. For example, if you input an Application No. with a slash (“/”) or a comma (“,”), you get a “no results found” message. But for this particular input, the real problem is not that there are no results for the value input. Rather, it is that you included a character not recognized by their program. This misleading message does not distinguish between whether no matching records exist and whether the format of your query simply does not match the website’s expectations. Further, there is no search capability for some of the most useful types of data: art unit, examiner, judge, type of rejection, outcome of the decision, the class, etc.

To overcome this, Anticipat permits loose input so that you unambiguously get the results you need without having to guess the required format. And it does this for decisions of each type of authority. Anticipat has also taken the time to supplement decisions with their respective application information, such as art unit, examiner, judges, grounds of rejection, outcomes, etc. Only Anticipat’s database allows you to find all those cases using the most useful data for your analysis.

5) Unorganized data display – In addition to the lack of a single repository, as discussed in 1), the organization within the Final Decisions page is lacking. To its credit, the PTO does provide some organization to the various decisions. It organizes Final Decisions by (D) – Decision, (J) – Judgment, (M) – Decision on Motion, (O) – Order, (R) – Rehearing, and (S) – Subsequent Decision. However, the page does not allow you to display decisions by each type. Indeed, this organization of the types of decisions feels like more of an afterthought than a way for users to effectively organize the data. Further, the organization does not go far enough. For example, within (D) – Decision are reexaminations, reissues, inter partes reviews, covered business methods, decisions on remand from the Federal Circuit, and regular appealed decisions. There is no way to filter these different types of decisions from each other without manually screening all the decisions in the results list.

To fix this, the Anticipat database tracks the various types of decisions so that one can easily filter by certain subsets of decisions or search within specified subsets. Each sortable column can be sorted in ascending or descending order. Other columns of information can be added by selecting the checkboxed fields.

6) Downtime from 1:00 AM – 5:00 AM EST – Every morning, the PTO takes the FOIA Reading Room website offline and performs maintenance. This may not be a big deal to some people, but for someone in another time zone or just in night-owl mode, this four-hour window can cost you a lot of time in accessing your desired decision or data.

Being hosted on a cloud server, Anticipat has no regularly scheduled maintenance downtime. You are free to use it at all hours of the day.

7) Errors – Coming from a federal government website, it’s understandable that some of the decision data contain errors. Some errors are minor, such as the name of a decision being cut off because it includes an apostrophe. Others are more consequential, like matching a decision with the wrong application number or merging two decisions into one. Because every decision in the Anticipat database is verified using our proprietary systems, we work hard to catch and resolve the errors in the source data of every decision.


In conclusion, because of the above-discussed deficiencies, ex parte PTAB data have been consistently overlooked because they simply cannot be effectively retrieved and analyzed by practitioners. While you may not realize it yet, this may be costing you your time and your money. Anticipat.com alleviates these deficiencies. Access the Research Database here.


Introducing Rejection Tags: A Way to Use Rationales and Types of Rejections for Patent Prosecution

In 1753, Swedish botanist Carl Linnaeus introduced a system for classifying plants. His two-term classification system assigned each organism a first generic name and a second, more specific name (e.g., Homo sapiens for humans). The system itself was different from previous classifications, but not extraordinary. Its elegance and simplicity, however, truly were groundbreaking, paving the way for all living organisms to be systematically and uniformly classified.

As the father of modern taxonomy, Linnaeus would be stunned to see how far technology has taken classification. Entire industries have been revolutionized through improved classifying of big data. And patent prosecution is no exception. We at Anticipat are excited to introduce “tags” on the Research Database for all grounds of rejections as a way of classifying patent prosecution rationales.

If the Board decision equivalent of a genus is a ground of rejection, then the species is the tag. The Anticipat Research Database has long included the more generic grounds of rejection for each appealed decision (e.g., §101, §102, §103, §112, OTDP, etc.). But because of the wide-ranging categories of common reasons why a rejection is reversed or affirmed, it becomes helpful to dig deeper than cataloging the ground of rejection. This deeper level represents the various possible points of contention. Identifying these specific categories allows you to find those specific decisions that are relevant to a certain issue without drowning in the ocean of Board decisions.

In short, a tag is a brief summary of a more granular point of contention regarding the ground of rejection. Assume that an applicant and Examiner are at odds on a ground of rejection. Depending on the rejection, this disagreement can take place over a number of finite points. Now with tags, you can easily look up the various points of contention for each ground of rejection. In other words, if you believe that a particular obviousness argument is worth pursuing, you can find decisions where the Board reversed an Examiner using that very same argument. Or if you don’t know which argument is worth pursuing, you can quickly find those arguments that have been most successful in reversing the Examiner. Here are examples of the ground of rejection/tag classification system.

Some of the tags for the ground of rejection “§101 – nonstatutory subject matter” include:

  • Software/Data per se
  • Abstract Idea (prima facie case, step one, step two)
  • Law of Nature (prima facie case, step one, step two)
  • Naturally-Occurring Phenomenon (step one, step two)

Finding all the decisions with a particular ground of rejection is just the first step. Even more useful is weeding out less relevant decisions that fall within the same ground-of-rejection category. Take the abstract idea rejection. There are many points of contention within 101 nonstatutory rejections that are less relevant to abstract ideas: computer readable medium comprising a signal, software per se, combining multiple statutory classes, law of nature, naturally occurring phenomenon, claiming a human, etc. Even within abstract idea, there are multiple points of contention, such as 1) prima facie case (that the Examiner did not even do the minimum job in identifying and/or rejecting the claim as an abstract idea), 2) step one, and 3) step two of the Mayo/Alice framework. Since we at Anticipat track all of these subcategories for you, you can find decisions with your desired point of contention in seconds.
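The genus/species relationship described above amounts to a two-level classification: each decision carries a ground of rejection and, beneath it, one or more tags for the specific points of contention. The following is an illustrative sketch of that structure; the appeal numbers and field names are hypothetical, not Anticipat's actual schema:

```python
# Hypothetical two-level classification: ground of rejection -> tags.
# Tag names follow the examples in this post; appeal numbers are invented.

decisions = [
    {"appeal": "2016-000001", "ground": "101 - nonstatutory",
     "tag": "abstract idea: step two", "outcome": "reversed"},
    {"appeal": "2016-000002", "ground": "101 - nonstatutory",
     "tag": "software/data per se", "outcome": "affirmed"},
]

def find_reversals(decisions, tag):
    """Filter to reversed decisions on one specific point of contention,
    weeding out same-ground decisions that argued a different point."""
    return [d for d in decisions
            if d["tag"] == tag and d["outcome"] == "reversed"]

print([d["appeal"] for d in find_reversals(decisions, "abstract idea: step two")])
# → ['2016-000001']
```

The point of the second level is visible even in this toy example: both decisions share the §101 ground, but filtering by tag surfaces only the one that turned on the step-two argument.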


The ground of obviousness under 35 U.S.C. 103(a) includes our most advanced set of tags. This makes sense, as obviousness is one of the most nuanced and developed grounds of rejection. We keep track of over 20 points of contention within obviousness, such as the following:

  • Scope and Content of Prior Art – Broadest Reasonable Interpretation
  • Examiner Bears Initial Burden (Prima Facie Case)
  • Clear and Factually-Supported Articulation of Reasons for Obviousness (Prima Facie Case)
  • Hindsight Reasoning (Prima Facie Case)
  • Secondary Considerations
  • Combining/Substituting prior art elements according to known methods to yield predictable results
  • Use of known technique to improve similar devices (methods, or products) in the same way
  • Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results
  • “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success
  • Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one
  • Proposed modification cannot render the prior art unsatisfactory for its intended purpose
  • Teaching away
  • Non-Analogous Art

The challenge with trying to find relevant arguments to overcome an obviousness rejection, for example, is that any case decided by the Board could include one or more of these obviousness points. With the sheer volume of obviousness decisions, finding the relevant ones has been impractical.

The point of tags

You can do a lot of interesting things with tags. Take, for example, being stuck on a particular rationale used by an Examiner that you think is unreasonable. You can easily match your issue with decided cases at the Board and use Anticipat to quickly pull the decisions where the Board agreed with you. If there are very few decisions on your side, that is a valuable reality check. If there are many decisions on your side, you can review them to double-check that your facts correspond with those in the decisions. You can also use the legal authority relied on by the Board for this particular point of contention in persuading your Examiner.

Tags will be incorporated into our soon-to-be-released Practitioner Analytics page to guide prosecution strategy. Using the Practitioner Analytics interface, tags can be ranked in the order of frequency in overturning an Examiner’s particular rejection. You can thus find better arguments faster and with more confidence. Sign up for an invitation to the soon-to-be-released Practitioner Analytics page.

Board decisions show that independent judges have agreed with the applicant’s position in a related case.  They are a powerful way to check and augment your existing experience.


You don’t have to be a famous botanist to appreciate how identifying board decisions using a rejection/tag relationship is a simple but powerful way of describing how the Board decides cases today. In the aggregate, this structure provides targeted information to inform your patent prosecution strategy. At about $1 per day, Anticipat Research Database is not only incredibly affordable, at current hourly billing rates it pays for itself in just seconds a day. Try it now for free.

Too Simplistic: How the USPTO measures outcomes for ex parte PTAB appeals


A patent applicant usually decides to appeal a rejection as a last resort because of the substantial cost and time. When the applicant decides to overlook the substantial cost and time, it is because she believes independent judges will objectively overturn at least one of (but hopefully all) the rejections. These administrative patent judges (APJs) have experience, technical backgrounds, and are independent from Examiners. So if this body of judges were to sustain Examiners’ rejections most of the time, you would think that the Examiners are doing a good job of examining applications. And if the Examiners are doing well, it would appear that the U.S. Patent & Trademark Office (USPTO) is doing well. But it’s not.

Currently, the USPTO measures decision outcomes of ex parte appeals in three different ways: affirmed, affirmed-in-part, or reversed. This is highlighted by the USPTO’s recently released statistics on outcomes of ex parte appeals for FY2017. These stats show that the Patent Trial and Appeal Board (PTAB) very frequently upholds Examiners on appeal, with a 55% affirmance rate. This rate is consistent with previous years’ affirmance rates. These affirmed rates suggest a job “well done” by the USPTO. However, the way the USPTO counts affirmances yields counterintuitive and misleading results, especially with cases involving multiple grounds of rejection. Indeed for accountability purposes, this way of measuring appeals cloaks the USPTO’s Examining Corps failures.



The USPTO currently measures ex parte appeals in relation to the total appealed claims—not the total pending rejections. If all of the appealed claims stand rejected under at least one ground of rejection, the decision is affirmed. Thus, only one ground of rejection affirmed for the appealed claims is required for a decision to be marked affirmed by the USPTO. Under this measuring system, assuming an Examiner rejects all claims under five different grounds, the decision is marked affirmed even if the Board reverses four of the five grounds.
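Under this rule, the reported outcome collapses to a single question: does any ground still reject all the appealed claims? A minimal sketch of that counting rule, assuming (as in the hypothetical above) that every ground covers all appealed claims, makes the distortion concrete. The function name is illustrative, not an actual USPTO system:

```python
# Hypothetical sketch of the USPTO's outcome-labeling rule described above.
# Assumes every ground rejects all appealed claims, as in the five-ground
# hypothetical; "affirmed-in-part" (grounds covering different subsets of
# claims) is omitted from this sketch.

def uspto_outcome(ground_affirmed):
    """ground_affirmed maps ground name -> True if the Board upheld it.
    One surviving ground is enough to mark the whole decision affirmed."""
    return "affirmed" if any(ground_affirmed.values()) else "reversed"

# Examiner rejected all claims under five grounds; the Board reversed four.
five_grounds = {"102": False, "103": False, "112(a)": False,
                "112(b)": False, "double patenting": True}
print(uspto_outcome(five_grounds))  # → affirmed, despite 4 of 5 reversals
```

Note what the single label throws away: the four reversed grounds leave no trace in the reported statistic.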

This way that the USPTO measures appeals undermines use of ex parte appeals data for accurate accountability of the USPTO in two ways. First, the data do not show which grounds of rejections get overturned on appeal. As we have previously pointed out in this blog, several of the individual grounds are currently being completely reversed at rates over 50%. This means that for certain legal grounds of rejection, an Examiner’s rejections are bad over half the time. This is obviously not very favorable to the USPTO. But when bad rejections get overlooked because of one affirmed rejection, any accountability for an Examiner’s bad rejections is lost.

This way that the USPTO measures affirmances of ex parte appeals skews how often Examiners are truly upheld because not all grounds of rejection are equally critical for the application to move forward. In fact, some rejections require very minor claim amendments or Terminal Disclaimers that insignificantly affect the patent protection. So any system of measuring outcomes should take into consideration this actual effect of the specific grounds of rejection to provide true accountability. However, it becomes difficult to measure accountability of examiners, art units, tech centers, etc., when trivial affirmances by the Board mask substantive affirmances.

For example, a recent decision, Ex parte Lee et al., had two grounds of rejection on appeal: obviousness and double patenting for the same claims. The Board reversed the obviousness rejection, but because the appellant did not argue the double patenting rejection, the Board summarily affirmed it. The appellant did not fight the double patenting rejection because of an intention to file a Terminal Disclaimer, which would render the rejection moot. However, because of the non-substantive affirmance of double patenting, the entire decision is marked as affirmed. This, even though all three Examiners involved with this case got, in the Board’s view, the substantive legal issue of obviousness completely wrong.

The outcome for Ex parte Lee is far from what one might expect. One would expect that if the appellant won on the only issue it actually argued, the outcome would be marked as “reversed.” Even being generous, you could permit an outcome of “affirmed-in-part,” considering the Examiner did get affirmed on one issue (even if the affirmed ground was not on the merits). But you would certainly never consider this decision affirmed: the application is going to issue as a patent. Yet the bizarre outcome of “affirmed” is exactly how this decision is counted and reported to the public by the USPTO.

The second way that the USPTO’s measuring system is deficient is that it does not show how many of the rejections get overturned. If the USPTO needs only one ground of rejection to stick in order to count an appealed decision as affirmed, the tracking system effectively ignores the outcome of all remaining rejections. This greatly skews the data toward counting affirmances. In fact, because most decisions involve more than one ground of rejection, lumping accurate one-ground decisions together with inaccurate multiple-ground decisions makes the USPTO’s affirmance statistics almost meaningless. They certainly do not accurately reflect the accountability of an Examiner’s rejections.
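To make the skew concrete, here is a minimal Python sketch comparing the two counting methods. The decisions and their grounds below are invented for illustration; only the counting logic matters.

```python
# Hypothetical appealed decisions; each maps every appealed ground of
# rejection to its outcome at the Board.
decisions = [
    {"obviousness": "reversed", "double_patenting": "affirmed"},  # Lee-like case
    {"obviousness": "affirmed"},
    {"abstract_idea": "reversed", "obviousness": "reversed"},
]

# Decision-level counting (the USPTO's approach): a decision counts as
# "affirmed" if any single ground of rejection sticks.
decision_level = sum(
    1 for d in decisions if any(o == "affirmed" for o in d.values())
) / len(decisions)

# Ground-level counting: every ground's outcome is tallied separately.
grounds = [outcome for d in decisions for outcome in d.values()]
ground_level = grounds.count("affirmed") / len(grounds)

print(f"decision-level affirmance rate: {decision_level:.0%}")  # 67%
print(f"ground-level affirmance rate:   {ground_level:.0%}")    # 40%
```

Even in this tiny example, counting a decision as affirmed whenever one ground survives nearly doubles the apparent affirmance rate relative to counting each ground on its own.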

This is also true for the other senior Examiners involved in the appeal process. Before every appealed case, an appeal conference takes place consisting of the Examiner, the Examiner’s supervisor, and another primary examiner. For the appeal to proceed to the judge panel, the conference must sign off that the Examiner’s current rejections would likely be affirmed by the APJs on the Board. In other words, before the judges even hear the case, the appeal conference has the authority to keep the case from reaching the panel: the conferees can instead disagree with the pending rejections by issuing a Notice of Allowance or by reopening prosecution with a new Office Action. So for a decision that makes it to the judge panel, one might assume that the supervisor and the other primary examiner fully agree with every ground of rejection as it stands.

But under the current way of measuring appeal decisions involving multiple issues, if only one of those grounds of rejection sticks, the Examiner and the appeal conference did a “good job”—they were affirmed! Thus, the appeal conference examiners really only need one of the rejections to be “good enough” for the appeal to proceed. Given the variety of rejections, the conferees can pick cases they are confident have at least one “good enough” rejection, which will not adversely affect their reputation.

The USPTO’s practice of measuring outcomes would not necessarily be skewed if there were only one ground of rejection per decision. Nor would it be skewed if decisions with multiple grounds of rejection were properly designated as “affirmed-in-part” when the Board reverses on one ground and affirms on another. However, since most decisions involve multiple issues, the outcomes data counts a single affirmed ground as a full affirmance, overshadowing the remaining grounds.

Judging from the way the USPTO currently advertises its appeals statistics, the agency seems proud of its affirmance rates. That is unsurprising, since the USPTO’s way of measuring affirmances happens to be favorable to the USPTO. But if you accept those affirmance rates at face value, you get counterintuitive and misleading results, especially in cases involving multiple grounds of rejection. The current measuring system lacks the necessary granularity, and the public sees only a roll-up of the flawed affirmance data.

Since certain rejections are reversed more often than others, and since there is wide variability across tech centers and art units, having additional granularity on appeals is critical to drawing meaning from the publicly available data. Without a more comprehensive way of measuring outcomes based on what substantively happened in each appeal, the USPTO Examining Corps is not truly held accountable.

A more accurate way of measuring appeals is to track the outcome of each ground of rejection. This is exactly what the Anticipat Research Database does. An important part of Anticipat’s mission is extracting value from appeals decisions by devising an intuitive way of processing them.

The Anticipat research database tracks the outcome of every ground of rejection in ex parte appeals, for greater precision. You can see which specific rejections are being reversed across various art units, tech centers, and so on. This more accurate data may not fit into a neat pie chart, but it is far more useful for holding the USPTO accountable. It is also helpful for setting expectations in patent prosecution strategy and for evaluating the strength of rejections. With the data, you can even see, for the first time, how often the Board agrees with the Examiner’s supervisor.

Click here for a free seven-day trial.

Looking at Abstract Idea Appeals by Tech Center

Anticipat Beta Research Database recently released the tech center search filter and the tech center column in the table display. Appeals data can now be examined for trends using tech center groupings. In this post, we will showcase one example of how this might be useful using the “abstract idea” ground of rejection.

In the past 7 months, 123 decisions have been decided on the “abstract idea” ground of rejection. Here is the breakdown for the “abstract idea” statutory subject matter rejection across all tech centers.


Tech Center: 1600  1700  2100  2400  2600  2800  3600  3700  3900
Decisions:      6     1     7     9     7     2    75    13     1
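For readers who want to work with these numbers directly, the table above can be tallied with a short sketch (the counts are transcribed from the table):

```python
# Abstract idea decisions per tech center, from the table above.
abstract_idea_decisions = {
    "1600": 6, "1700": 1, "2100": 7, "2400": 9, "2600": 7,
    "2800": 2, "3600": 75, "3700": 13, "3900": 1,
}

total = sum(abstract_idea_decisions.values())
share_3600 = abstract_idea_decisions["3600"] / total

print(f"total tabulated decisions: {total}")
print(f"TC 3600 share of abstract idea decisions: {share_3600:.0%}")  # 62%
```

Tech center 3600 alone accounts for well over half of the tabulated abstract idea decisions.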

Tech center 3600 is software and business method-heavy. In the immediate aftermath of the Supreme Court deciding Alice, many suspected that these types of patents/patent applications would be on the chopping block. And many examiners have sought to enforce such an approach. Small wonder, then, that this tech center leads in abstract idea appeals.

To put these numbers in context, we looked at the intake of appeals per tech center. The FY17 intake provides only a rough comparison against FY17 decided appeals, since it takes many months for a newly filed appeal to be decided.


Comparing the intake of appeals with the decided abstract idea appeals, tech center 3600 is heavily overrepresented among the decided abstract idea appeals relative to its intake. Excluding the two lowest tech centers, the appeal intake of tech center 3600 is anywhere from 2-4 times that of the other tech centers. But its decided abstract idea appeals are a much different story: anywhere from 5-36 times those of the other tech centers.

An obvious reason for this overrepresentation is that the abstract idea doctrine does not affect all tech centers equally. Tech centers 1600 and 1700, for example, focus primarily on the biotech and chemical arts, where other judicial exceptions to statutory subject matter are more applicable (e.g., law of nature, naturally occurring phenomenon). Other tech centers focus on physical inventions where the abstract idea ground of rejection is theoretically less applicable. For example, tech center 2800 includes inventions in semiconductors, memory, circuits, optics, and printing. As can be seen, the ratio of abstract idea appeals in tech center 3600 to those in tech center 2800 is far greater than the ratio of the corresponding intakes.

Another reason for the overrepresentation stems from the large number of abstract idea rejections introduced as “new.” For the relevant time period, there were 16 new rejections in this tech center alone. That number exceeds any other tech center’s total number of abstract idea decisions of any disposition. This shows that the abstract idea doctrine is top of mind for judges deciding these types of cases, even when the Examiner did not apply an abstract idea rejection in the appeal.

A final reason for the overrepresentation may stem from the high volatility of the abstract idea doctrine. We generally assume, based on anecdotal evidence, that filing an appeal is a (comparatively) arduous alternative to working with the Examiner through argument, interviews, and amendments to overcome rejections. Appealing a case thus requires a level of dedication born of reaching an impasse with the Examiner. Such an impasse often arises when the law is in flux: the Examiner can hold to one position while the applicant holds to a contrasting one. And with software-focused inventions, such as those in tech center 3600, the impasse is often the Examiner asserting that there is nothing patent-eligible in the application. Such a position leaves little room for any procedure but appeal.

It would be discouraging if the judges in tech center 3600 supported the abstract idea chopping-block attitude reflected by some. But the outcomes data show that they do not. We previously reported that the rate at which abstract idea rejections are reversed is in line with other Section 101 nonstatutory rejections: 25% wholly reversed and 27% at least partially reversed. Drilling into the abstract idea rates at the tech center level, we found that the same reversal rates hold true for tech center 3600. That is, 19/75 decisions (25%) in tech center 3600 were wholly reversed and 20/75 decisions (27%) were at least partially reversed. This shows that even in the face of a large volume of abstract idea rejections, the judges in tech center 3600 reverse at the same rate as those in other tech centers.
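The reversal-rate arithmetic for tech center 3600 can be checked in a couple of lines (counts taken from the figures above):

```python
# TC 3600 abstract idea decisions over the period discussed above.
total = 75
wholly_reversed = 19
at_least_partially_reversed = 20  # wholly reversed plus reversed-in-part

wholly_rate = wholly_reversed / total
partial_rate = at_least_partially_reversed / total

print(f"wholly reversed:            {wholly_rate:.0%}")   # 25%
print(f"at least partially reversed: {partial_rate:.0%}")  # 27%
```

Note that the at-least-partial figure includes the wholly reversed decisions, which is why it is only one decision higher than the wholly reversed count.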

The above discussion shows that, using tech center-focused appeals data, a quantitative assessment of the strength of an Examiner’s position on a particular ground of rejection can be made by comparing that ground across technology centers. While we have done this for the “abstract idea” ground in this post, the same analysis can be done for the other grounds of rejection reviewed by the PTAB. With this information, the odds of success on appeal can be better estimated based on which technology center is examining a particular case.


Section 101 – nonstatutory subject matter decisions: different categories = different reversal rates



One of the sexiest topics in all of patent law has become §101, specifically, patent-eligible subject matter. Part of the recent appeal stems from high volatility and uncertainty in the law. But not all categories of patent-eligibility grounds are in such flux. Some §101 nonstatutory grounds of rejection (e.g., reciting a propagated signal) are relatively predictable and stable while the so-called judicial exceptions are more unpredictable. So we drilled deeper into the types of §101 rejection to get a more complete picture of reversal rates. We found a big difference in the observed reversal rates of particular categories.

The following chart shows a breakdown of the past seven months of decisions on grounds of §101 – nonstatutory subject matter. Data for this chart was pulled from the past seven months using Anticipat’s research database. Anticipat keeps track of issue-specific tags to allow for better identification of sub-issues within issues. So while the Examiner and PTAB may decide a particular issue on §101 – nonstatutory subject matter grounds, Anticipat goes a step further to delineate the specific type of §101 – nonstatutory subject matter ground.


  1. Statutory Classes

Section 101 nonstatutory rejections include the statutory class variety (e.g., does the claimed invention fall within the statutory classes? recite more than one class? claim a human?). This is otherwise known as step 1 of the Mayo/Alice framework. Of the 186 substantively decided §101 decisions since July 25, 2016, these step 1 types accounted for 38, or about 20%. Twelve were wholly reversed, a reversal rate of 32%.

This higher reversal rate for statutory classes makes intuitive sense. Administrative patent judges understand that, statutorily, patent-eligibility is broad. The categories of process, machine, manufacture, and composition of matter allowed almost any human innovation at the time of the enactment of the 1952 Patent Act to be patent-eligible. As technology has since changed, not all inventions fit neatly into this framework, such as propagated signals and software per se. But for the most part, the courts have fit inventions into these categories—from non-transitory computer readable media to engineered bacteria. The observed reversal rate suggests that judges reverse Examiners in an effort to be more faithful to the statute and to case law than the Examiners are.

  2. Judicial Exceptions

The judicial exceptions to patent-eligibility, such as abstract ideas, laws of nature, and natural phenomena, have surged in popularity in recent years. And the appealed decisions show it. Of the 186 decisions within the past seven months, 146 involved judicial exceptions. The most popular of the exceptions is the abstract idea.

Of the 119 abstract idea cases, 30 were wholly reversed and two were reversed in part, for a complete reversal rate of 25% and an at-least-partial reversal rate of 27%. This falls squarely within the overall §101 rates that we previously reported. The natural phenomenon/product of nature rate is slightly higher at 31%, while the law of nature reversal rate is markedly lower at 7%.

  3. Analysis

The statistics for some of these categories will become more reliable as the number of decisions increases, but some take-home lessons are already clear. A judicial exception rejection has a lower chance of being reversed than a statutory class rejection. The PTAB judges are likely averse to overruling an Examiner’s finding of a judicial exception, especially while there is a great deal of uncertainty in the courts.

Furthermore, law of nature rejections are very infrequently reversed. Part of this may be a lack of positive case law specifically supporting the position that a law of nature rejection is erroneous. By contrast, several Federal Circuit decisions within the past year have been positive for the patentee/applicant in showing that claims are not directed to an abstract idea. Because of this, the judges have more material to work with in finding that a particular claimed invention passes the Alice framework.