Board Reverses Abstract Idea Rejection for Improper Oversimplification; Board Should Evaluate Examiner Rejections, Not Conduct Own Examination



In continuing to monitor high interest areas at the PTAB, today’s Anticipat Recap email included a recently issued decision that reversed the Examiner’s abstract idea rejection. We go over why the Board reversed and discuss an interesting point raised related to the scope of Board activism. 

In Ex Parte Grokop et al., Appeal No. 2016-003047, the Board disagreed with the Examiner's step-one conclusion that the claims were directed to an abstract idea. The Board cited Enfish and McRO to establish two important points that bear on judicial exception rejections: the claims cannot be oversimplified, and the specific requirements of the claims must be considered.

According to the Board, the Examiner’s characterization of the claim as directed to “generic audio analysis” oversimplified the claims and failed to account for the specific requirements of the claim. This included claim features of capturing only a single frame from each block, analyzing the collection of captured frames, and determining a characteristic of an ambient environment based on that analysis.

Interestingly, at the end of this analysis, the Board made a point to establish that the Board need not examine the claims. Instead, the Board’s job is to evaluate whether the Examiner’s rejection was proper. To support this, the Board cited to the MPEP (“The Board’s primary role is to review the adverse decision as presented by the Examiner, and not to conduct its own separate examination of the claims.” MPEP § 1213.02). It also cited to the statutory code (“An applicant for a patent, any of whose claims has been twice rejected, may appeal from the decision of the primary examiner to the Patent Trial and Appeal Board.” 35 U.S.C. § 134(a); and “The Patent Trial and Appeal Board shall. . . review adverse decisions of examiners upon applications for patents pursuant to section 134(a).” 35 U.S.C. § 6(b)).

This last point is important because rejections (including abstract idea rejections) often get affirmed under a new theory or analysis, requiring that the rejection be designated as new. Or claims that were previously unrejected under Section 101 are found patent-ineligible by the Board as an entirely new ground of rejection. It seems that this panel would focus entirely on whether the Examiner's pending rejections are proper rather than examine the claims itself.

There may be advantages and disadvantages to appellants under this approach. Advantageously, the appellant need not fear an unfriendly Board panel that seemingly arbitrarily introduces a new Section 101 rejection. Additionally, the appellant need not fear a weak Examiner rejection being strengthened by the Board in an affirmed-as-new outcome. The downside is that if the Board purely evaluates the Examiner's rejections, the Examiner could simply turn around a better-articulated rejection once the application makes it back. On the flip side, under the approach where the Board more actively evaluates the claims, if the appellant survives without a new rejection, the Examiner may be less likely to respond with a better-articulated rejection.


Presentation Recap on Abstract Idea Developments at the PTAB



Trent Ostler did a deep dive on abstract idea developments at the PTAB yesterday at the AIPLA Joint Committee Hot Topics Presentation (Patent Law Committee and ECLC). He used data from Anticipat for all his results. In case you missed it, here it is:

I’m going to talk about Section 101 developments at the PTAB of ex parte appeal decisions. As many are aware, ex parte appeal decisions involve those applications that have been twice rejected, appealed, and gone all the way to a written decision by a panel of judges at the PTAB.


Now, the umbrella of Section 101 nonstatutory subject matter includes a variety of rejections. But as Theresa indicated, the most activity is in the abstract idea space. So here, we’re going to exclusively focus on developments of abstract ideas at the Board.

Before we get in too deep, I’m going to lay a foundation for an important point on appeals.


Typically when we think about outcomes for these decisions, we think of the following pie chart put out by the PTO. It shows that most of the time the Examiner gets upheld (labeled affirmed, at 56%). A much smaller percentage of the time the Examiner gets overturned (labeled reversed, at 29%), and the remaining chunk is a mix of the two (labeled affirmed-in-part).

A big problem with this chart is that this treats every appealed application the same. In reality, some grounds of rejection are much more likely to be overturned by the PTAB than others, as has been shown by the ex parte PTAB subcommittee of AIPLA.


Here is an illustration of reversal rates for the past year and a half taken from Anticipat, a relatively new website that keeps track of all grounds of rejection and outcomes for ex parte appeals. Plus, it offers free academic use and steeply discounted Examiner use. The graphs show reversal rates, with blue being a rejection wholly reversed and orange being a rejection reversed in part.

Some of these results are surprising. Section 102 anticipation rejections and Section 112 rejections are entirely reversed about 50% of the time. We’ve found these rates to be remarkably consistent even with multiple grounds of rejection being decided.

Section 101 rejections are reversed about 21% of the time. If we drill down into abstract ideas, the rate is even lower: about 17%. This is one of the lowest reversal rates of any ground of rejection. But at the same time, it also goes to show that appealing is not a completely futile endeavor: almost a fifth of the time, the Examiner's abstract idea rejection gets overturned.


Within the past year and a half, abstract ideas rejections have been reversed throughout each tech center. Some tech centers have higher reversal rates than others. The rate is especially low in the business method art units. In the biotech tech center 1600, the rate is higher.


Over the course of the last year and a half, there have been about 100 reversed abstract idea rejections. Some time periods see higher reversal rates than others. This may be due to the Board correcting an overapplication of abstract idea rejections directly after Alice. It may also result from Federal Circuit decisions that are favorable or unfavorable to patent-eligibility, depending on the period.

These PTAB decisions follow three general lines of argument for reversal. Each argument can stand alone in reversing a rejection or be used in combination with the others.

  • Prima Facie Case (17 decisions) – The Examiner did not provide sufficient articulation
  • Step 1 (76 decisions) – Not “Directed To” Abstract Idea
  • Step 2 (44 decisions) – Claim Elements Alone or in Combination Transform Abstract Idea into Something More

We’ll briefly step through what these different abstract idea arguments look like in practice.

First, the prima facie case. 


Many of us practitioners, especially those who work in the computer arts, have seen this: a rejection that doesn't meet the minimum threshold required for a prima facie case. This decision shows the Board overturning the Examiner's rejection for not making that case. The rejection can't be conclusory; it has to analogize to a case with an abstract idea, and it has to explain why the claim is not more than the asserted abstract idea. If the Examiner doesn't do this, the rejection is reversed.

Next, step 1.

This is the most frequent category for overturning abstract idea rejections. This is in part due to recent decisions holding that technological improvements are relevant in step 1, even if the guidelines suggest otherwise. Here, the Board breaks down the Examiner's asserted analogous abstract idea. The Board then recharacterizes the claimed invention as an improved device rather than an abstract idea. Importantly, the Board supports its conclusion using the specification of the application, including the background.

Finally, step 2. 

Step 2 is often used, as shown in the following example, in conjunction with step 1. Here, the Board deconstructs the difference, or delta, between the Examiner's asserted abstract idea and what is actually in the claims. As is often the case, there is more to the claims than how the Examiner characterizes them. Here the Board recognizes that the Examiner failed to show that the claim elements do not amount to significantly more or add meaningful limitations. This step can bleed somewhat into the prima facie analysis: the Board can either disagree with the Examiner's assertion or rule that the assertion does not provide the necessary analysis.

Anticipat has a lookup tool where you can put in the specific argument (e.g., step 1, step 2, prima facie case) and you can retrieve all the relevant decisions, mapped to your particular art unit or Examiner. Having relevant decisions can guide your strategy in responding to Office Actions or in your appeal brief strategy for including the most successful arguments. 

Next, which are the best legal authorities for each type of argument? Here we discuss what judges rely on in reversing the various steps of the abstract idea rejection. These are not just legal authorities that appear somewhere in the decision; rather, these are cases where the PTAB either explicitly analogized to them or cited them in deriving its holding. Anticipat Analytics allows for looking up the legal authority for each type of argument used.

For step 1, the clear leading cases cited when reversing are DDR Holdings and Enfish.


For step 2, the clear leading legal authority used in reversing rejections is Bascom.


The PTAB decisions show similar volatility as the courts in deciding abstract idea rejections. Here are some of the more contentious areas that are being decided both in reversing and in affirming.

First, what is a technological improvement? To what extent does the Examiner need to provide evidence for assertions of routine/conventional activity? How closely does the Examiner need to analogize to a similar case to show the claim is "directed to" an abstract idea? To what extent must the Examiner look to the specification to interpret the abstractness of the claims? These questions do not have clear answers, but the PTAB at least has more answers than the Federal Circuit, if only out of sheer volume of decisions.

Another consideration: even if the application currently does not have an abstract idea rejection, the judges may introduce one sua sponte on appeal.

It is relatively rare, but it does happen.


It can happen in one of three ways:

First, the panel formally introduces a previously unapplied abstract idea rejection. Second, the panel can strengthen an existing rejection with additional analysis and designate the rejection as new. Third, the Board sometimes suggests that the Examiner consider Section 101 without issuing a formal new 101 rejection. Keep this in mind as you consider an appeal. You don't want to open up a can of worms if you don't have to.


In sum, you can see the reversal rates for 101 rejections directly related to your area of interest. You can also see the arguments used in overcoming other rejections (including the legal authorities relied upon) and incorporate them into your own practice.

PTAB Mocks Alice Supreme Court in Reversing 101 Rejection — Claims Include "Talismanic" Inventive Concept When Conventional Computer Components Are Arranged to Provide Specific Advantages to Users



In Ex Parte Lynch, Appeal No. 2016-002985, the Board reversed a Section 101 rejection, holding that the claimed invention provides an improvement in the functioning of the computer. Specifically, the claimed invention allows a user to register for new websites without entering all of their information each time, but with the option of modifying the information if necessary. The Board seemed to acknowledge that the claims were directed to an abstract idea under step 1. But the Board held that the claimed conventional computer components, when considered as an ordered combination, do include an inventive concept sufficient to render the claims eligible for patenting. Finally, in an apparent mocking of the Supreme Court, the Board concluded that the claims include the talismanic inventive concept.

By way of background, the Court in Alice v. CLS Bank analogized the claims at issue to the claims in Bilski. The Alice petitioner had argued that one of the claims recited a formula, which should have brought the claim outside the realm of abstract ideas. However, the Court disagreed by explaining that Bilski belies this argument. The Court explained that the Bilski “Court did not assign special significance to that fact, much less the sort of talismanic significance petitioner claims.” Id. at 10.

Understanding how claims are and are not abstract ideas remains elusive, as suggested by the Board in this case. But this case illustrates that claims reciting general purpose components can be patent-eligible when the components are arranged to provide specific advantages to the users. To support this, the Board referenced three cases.

First, the Board analogized to Bascom for the proposition that conventional computer components can be patent-eligible when the claims carve out a specific location for the filtering system (a remote ISP server) and require the filtering system to give users the ability to customize filtering for their individual network accounts.

Second, the Board referenced Amdocs where a claim that required arguably generic components “necessarily requires that these generic components operate in an unconventional manner to achieve an improvement in computer functionality.”

Third, the Board referenced Trading Technologies, which held, "Abstraction is avoided or overcome when a proposed new application or computer-implemented function is not simply the generalized use of a computer as a tool to conduct a known or obvious process, but instead is an improvement to the capability of the system as a whole."

This case shows that abstract idea rejections are still tricky. But as shown here, it is not fatal to patent-eligibility for claims to recite conventional computer components, as long as the ordered combination provides a specific improvement. It also shows that understanding which legal authority the Board relies on can be important in knowing how to reverse Examiners in other applications. Anticipat Practitioner Analytics does just this. Click here for a free trial.

“Particular Machine” Relied on in Overturning an Abstract Idea Rejection for NLP Invention



In its heyday, the machine-or-transformation test required that a process claim be implemented by a particular machine in a non-conventional and non-trivial manner or transform an article from one state to another. While the Supreme Court in Bilski v. Kappos overruled the Federal Circuit's reliance on the test as the exclusive test for patent-eligibility, it left the test open as an important clue. In a recent decision, the PTAB shows that analysis under this test can be helpful in overturning an abstract idea rejection at step two of the Alice/Mayo framework.

In Ex Parte Milman, decided August 22, 2017, the PTAB overturned a Section 101 nonstatutory rejection. The claimed invention generated free text descriptions using natural language processing to generate an anatomical path for display on a graphical user interface. This was found to be directed to an abstract idea.

However, in step two of the analysis, the Board disagreed with the Examiner’s finding that the claims did not recite significantly more than the asserted abstract idea. The Examiner had found that the method is deployed on generic hardware and “the computer appears to perform only generic functionality that is well known in the art, e.g. processing data.” The Board found that the Examiner did not adequately show that the reliance on natural language processing capability involves a general purpose computer performing well-known generic functionality, rather than being a “particular machine” that is the result of implementing specific, non-generic computer functions. See Bilski v. Kappos, 561 U.S. 593, 601 (2010).

Ever since Bilski, the machine-or-transformation test has taken a backseat, even though the Supreme Court emphasized that the test could serve as an important clue for patent-eligibility. In the meantime, the Supreme Court in Mayo and Alice cemented a two-step test for patent-eligibility that uses a different analysis than machine-or-transformation. But Ex Parte Milman makes clear that machine-or-transformation analysis is still applicable. At least for claims that recite natural language processing, an Examiner must show that the claims involve no more than a general purpose computer performing well-known generic functionality, rather than a "particular machine" that results from implementing specific, non-generic computer functions.

Guide your patent prosecution strategy with Anticipat

We at Anticipat are excited to announce a new product called Practitioner Analytics. The tool helps practitioners use what is found to be successful on appeal at the Board in all aspects of routine patent prosecution. But before we explain the tool, we touch on some present realities of a patent practitioner responding to an Office Action.

Status Quo
As a patent professional, you may spend a lot of time reviewing Office Actions and determining response strategies. You may wade through each Office Action rejection-by-rejection. The complexities of patent law make this process difficult and time-consuming.

The gut feeling is a powerful way for the practitioner to approach each rejection. Maybe for one rejection, based on your experience and/or knowledge of the patent laws, your gut feeling tells you that the Examiner brings up a good point and you consider amending the claims. For another rejection, based on this same experience and knowledge, you see that a rejection is unreasonable so you consider traversing the rejection without amending the claims. For other rejections, you may initially not know how to proceed due to a lack of experience or up-to-date knowledge of the rejection.

So a practitioner's gut feeling can guide the strategy in responding to the Office Action only so far, especially with constant developments in the law. In addition to being inefficient, there is always a chance that the practitioner's own experience is incomplete. Plus, throughout this whole process it can be difficult to gauge the strength of your strategy.

Furthermore, the client's preferences can make the strategy even more complex, necessitating dives into seldom-explored areas of patent law. For example, the client may be intent on maintaining a certain claim breadth to guard against entrants into the market or to cover a competitor's product, which makes the patent prosecution strategy more difficult. Hence you may have to rely on a less persuasive strategy in overcoming a particular rejection.

With all the complexities that go into patent law, do you ever feel like there must be a better way to keep current on response strategies in a more efficient, fact-based way?

Luckily, there is a large body of appeals decisions at the PTAB where judges routinely overturn Examiner rejections. The judges apply the rules and laws using the same arguments and legal support that Applicants can use to overcome rejections in responding to Office Actions. If an argument works before the Board, that argument has high odds of ultimately winning out. So in a way the Board weeds through much of the possible argumentation and distills the arguments effective in overcoming all kinds of rejections. And because of the sheer volume of appeals decisions, these decisions include rationales for overcoming practically every ground of rejection. Plus, because the decisions are authored by independent judges at the PTO, they are an accurate reflection of the standards and arguments used to scrutinize both Examiner and Appellant arguments.

The only problem is that these decisions are posted in bulk form with minimal search capabilities, the content of each decision is disorganized, and manually wading through the decisions is horrific information overload.

Also, the USPTO overly simplifies decision outcomes, which does not tell you very much about what happened in any given appeal decision. So how do you make use of the data in the thousands of appeals decisions that issue every year?

Solution: Anticipat Practitioner Analytics

Anticipat Practitioner Analytics provides more than statistics. It is a PTAB legal research tool that can quickly get you helpful fact-based information about arguments and strategy you can use for a specific application. How does it do this?

Practitioner Analytics powerfully and efficiently guides prosecution strategy. By inputting an application number into the Analytics search engine, the page returns lists of decisions where the Board reversed for various possible rejections.

This can help practitioners in three important areas:

Area 1: Organize persuasive arguments
Practitioner Analytics organizes rationales that the Board uses in reversing an Examiner's ground of rejection. It does so by aggregating reversal rationales at each organizational level in the Office (Examiner, art unit, tech center). The specific legal rationales argued before the Board at each of these levels are listed underneath a bar chart showing real reversal rates at each level. At the click of a mouse, the practitioner can select the legal issue in their specific case and see how it was treated in Board decisions coming from the Examiner involved, the Examiner's art unit, the tech center, and the entire USPTO. The practitioner can then compare the facts of their case to those in a list of decided appeals where this issue was involved to further predict the outcome before the PTAB.

Practitioner Analytics improves the caliber of argumentation and saves legal research time by organizing and ranking persuasive reversal rationales at the Examiner, art unit, tech center, and global USPTO levels for each ground of rejection.

Area 2: Assess strength of rejections
Appellants typically won't spend the time and money on a full appeal if they're not sure of their position. Similarly, weak Examiner positions tend to get weeded out by the pre-appeal conference and appeal conference. So the appeal decision is actually a good objective data point for the kinds of rejections the Examiner corps is not incentivized to back down from but still loses at the Board. This information is invaluable when deciding whether to pursue an appeal.

Anticipat provides you with the percentage of reversed decisions at each level (Examiner, art unit, tech center, USPTO). The higher the reversal rate, the less reasonable the Examiner’s rejection.

This reversal rate information enhances a professional's anecdotal experience by identifying anomalies in how a particular ground of rejection's reversal rate at the Board compares across groups. This can guide a practitioner's strategy in responding to Office Action rejections. That is, knowing how a particular Examiner's or art unit's reversal rate compares with other groups can suggest when to hold firm to a position. For practitioners with relatively little appeals experience in a particular technology, this data instantly tells you what is working and what is not, without having to spend years learning in the School of Hard Knocks.

Area 3: Get favorable case law straight from the Board
Practitioner Analytics also stores the legal support cited by the Board in each particular decision for each legal issue (tag) identified.

This means that, in the aggregate, Practitioner Analytics provides the case law, MPEP sections, and guidelines relied on to reverse or affirm the Examiner for each particular rationale at a mouse click. This allows you to keep current on relevant case law now being used by the Board and to identify trends in the persuasive legal authority used for the specific rejections in a specific case.

  • With Practitioner Analytics, you can use successful approaches at the Board in your own practice without having to wait decades to gain experience.
  • Practitioner Analytics empowers you with knowledge about the strength of rejections at the Examiner, art unit, and tech center levels.
  • Practitioner Analytics provides a simple and intuitive interface so that you can quickly identify successful reversal rationales with Examiner, art unit, and tech center specific information.
  • Practitioner Analytics keeps you up to date on specifically tagged legal issues, referencing the case law the Board itself uses on each issue.
Anticipat Analytics enhances your ability to provide quality, cost-effective advocacy, saving you countless hours in legal research. Try it now with unlimited access, free for two weeks.

Update on ex parte PTAB Appeals Reversal Rates: High Reversal Rates Maintained Except for 101 – Nonstatutory Rejections


About six months ago, the AIPLA ex parte subcommittee published a paper showing reversal rates across various grounds of rejection. Some of the findings were very surprising, including over 50% reversal rates for Section 102 and 112 rejections. Here, we provide an update to that paper, which doubles the data set from the time of the AIPLA publication. We find that the reversal rates have not budged from the initial rates, outside of a downtick in the reversal rate for Section 101 nonstatutory rejections. This signals that the surprising results were not a sample-size anomaly.


Section 101 – Nonstatutory

Of the 629 decisions, 130 were reversed and 7 affirmed-in-part. This translates into 21% pure reversals and 22% at least partial reversals.

Section 102 – Anticipation 

Of the 2187 Section 102 decisions, 1065 were reversed and 177 affirmed-in-part. This translates into 49% pure reversals and 57% at least partially reversed.

Section 112(a)


Of 203 decisions, 104 were reversed and 8 affirmed-in-part. This translates into 51% reversed and 55% at least partially reversed.

New Matter

Of 27 decisions, 13 were reversed. This translates into 48% reversal rate.

Written Description

Of 531 decisions, 276 were reversed and 19 were partially reversed. This translates into 52% reversal rate and 56% at least partially reversed.

In total for Section 112(a), out of 761 decisions, 393 (52%) were reversed and 55% were at least partially reversed.

Section 112(b) – indefiniteness

Of 806 decisions, 390 were reversed and 34 were partially reversed. This translates into a 48% reversal rate and 53% at least partially reversed.

Section 112(d)

Of 38 decisions, 16 were reversed and 1 was partially reversed. This translates into a reversal rate of 42% and 45% at least partially reversed.

Section 103 Obviousness

Of 9329 decisions, 3139 were reversed and 907 were partially reversed. This translates into a reversal rate of 34% and an at least partial reversal rate of 43%.

Obviousness type double patenting

Of the 418 decisions, 67 were reversed and 13 were partially reversed. This translates into a 16% reversal rate and a 19% at least partial reversal rate.
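The percentages above all follow the same arithmetic: pure reversals are full reversals divided by total decisions, and at-least-partial reversals add in the affirmed-in-part decisions. A minimal sketch of that calculation (the function and variable names are our own illustration, not from any Anticipat tool):

```python
def reversal_rates(total, reversed_full, affirmed_in_part):
    """Return (pure %, at-least-partial %) reversal rates as whole percents."""
    pure = round(100 * reversed_full / total)
    at_least_partial = round(100 * (reversed_full + affirmed_in_part) / total)
    return pure, at_least_partial

# Section 101 nonstatutory: 629 decisions, 130 reversed, 7 affirmed-in-part
print(reversal_rates(629, 130, 7))      # → (21, 22)

# Section 102 anticipation: 2187 decisions, 1065 reversed, 177 affirmed-in-part
print(reversal_rates(2187, 1065, 177))  # → (49, 57)
```

Running the same function on the other counts above reproduces each reported pair of percentages.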

Data Set

The above data was pulled using Anticipat Research in the range of 7/25/2016 to 7/25/2017. You can perform legal research for these grounds of rejection and others on Anticipat Research. Click here for a free trial to give it a try.


The past six months have shown that the high reversal rates for Sections 102 and 112 rejections reported previously are here to stay. While Section 102 reversal rates dropped some, 49% is still very high. Given the large number of decisions, especially for obviousness, it is interesting to note that the reversal rates are as stable as they are.

Meanwhile, the past six months have seen a noticeably lower reversal rate for Section 101 nonstatutory rejections. A 4% drop in the reversal rate based on six months of additional decisions seems significant.


Anticipat’s focus is simple: complete and accurate annotation of PTAB ex parte appeals decisions

Despite recent strides, the USPTO does not make it easy to extract all of its data. This is especially true for ex parte appeals decisions from the Patent Trial and Appeal Board (PTAB), even though these decisions establish key data points about general patent prosecution. We discuss seven shortcomings of the PTO websites, along with Anticipat's solution to each.

1) No centralized repository – If you are looking for a decision without knowing the authority (i.e., precedential, informative, or final), you will likely have to search through three different databases on different web pages. This is because the different types of PTAB decisions are scattered across different web pages depending on the authority of the decision.

Anticipat houses all decisions in a single repository and it labels each decision with the respective authority. To date, Anticipat has all publicly available PTAB appeals decisions in its database.

2) Non-uniform and sporadic decision postings – The USPTO does not post every decision to the Final Decisions FOIA Reading Room webpage on its issue date. For example, if there are 100 decisions dated July 29, five may show up on July 29 itself. Fifteen may show up on July 30, even though they are still dated July 29 in the database. Twenty may show up on July 31, fifty on August 1, five on August 4, three on August 5, another on August 6, and another on August 7. Anyone monitoring recent decisions therefore has to keep track of which ones they have already reviewed.

To fix this, Anticipat runs multiple redundant scrapers to check for backfilled decisions, making sure that every decision posted to the e-foia webpage is picked up. It then emails a recap of these annotated decisions on the 10th day to make sure the complete set has been included.
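The backfill problem reduces to a simple set difference: re-fetch the listing for an already-scraped issue date and compare it against the decision IDs recorded earlier. A minimal sketch of that check (this is our own illustration, not Anticipat's actual scraper; the function name and the appeal numbers are hypothetical):

```python
def find_backfilled(previously_seen, current_listing):
    """Return decision IDs that appear in the current listing for a given
    issue date but were absent when that date was last scraped."""
    return sorted(set(current_listing) - set(previously_seen))

# Scraped on July 29: two decisions posted for that issue date.
seen_july_29 = ["2016-001111", "2016-002222"]

# Re-fetched on July 31: the same issue date now lists a third,
# backfilled decision that must still be annotated.
listing_july_31 = ["2016-001111", "2016-002222", "2016-003333"]

print(find_backfilled(seen_july_29, listing_july_31))  # → ['2016-003333']
```

Re-running this diff over a trailing window of dates is what makes the repeated checks "redundant": any decision the PTO posts late still shows up exactly once as new.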

3) Unreliable connection – Whether you’re just trying to load the main USPTO page or whether you’re searching for a particular decision, the PTO site (especially the FOIA Reading Room) can be slow or even unresponsive in letting you access data.

Anticipat solves this problem by being hosted on a scalable cloud server. The site should never be down, even during peak traffic.

4) Search functionality limited – The Final Decisions page allows limited search (e.g., date range, Appeal No., Application No., text search, etc.). But none of these searching capabilities are actually available for the 21 precedential and 180 informative decisions.

Even though the Final Decisions page allows some search functionality, the type of searchable data underwhelms. First, the input fields can be extremely picky. For example, if you input an Application No. with a slash ("/") or a comma (","), you get a "no results found" message. But for this input, the real problem is not that there are no results for the value entered; rather, it is that you included a character not recognized by the program. The misleading message does not distinguish between no matching values and a query format the website simply does not expect. Further, there is no search capability for some of the most useful types of data: art unit, examiner, judge, type of rejection, outcome of the decision, class, etc.

To overcome this, Anticipat permits loose input so that you unambiguously get the results you need without having to guess the required format. And it does this for decisions of every type of authority. Anticipat has also taken the time to supplement decisions with their respective application information, such as art unit, examiner, judges, grounds of rejection, and outcomes. Only Anticipat's database lets you find all those cases using the data most useful for your analysis.

5) Unorganized data display – In addition to the lack of a single consolidated repository, discussed above, the organization within the Final Decisions page itself is lacking. In its defense, the PTO does provide some organization: it labels Final Decisions as (D) – Decision, (J) – Judgment, (M) – Decision on Motion, (O) – Order, (R) – Rehearing, and (S) – Subsequent Decision. However, the page does not let you display decisions by each type, so this labeling feels more like an afterthought than a way for users to effectively organize the data. Further, the organization does not go far enough. For example, (D) – Decision lumps together reexaminations, reissues, inter partes reviews, covered business methods, decisions on remand from the Federal Circuit, and regular appealed decisions. There is no way to separate these different types of decisions from each other without manually screening all the decisions in the results list.

To fix this, Anticipat’s database tracks the various types of decisions so that one can easily filter by certain subsets of decisions or search within specified subsets. Each sortable column can be sorted in ascending or descending order, and columns of additional information can be added by selecting the corresponding checkboxes.
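As a toy sketch of what tracking decision types makes possible (the field names and records here are hypothetical, not Anticipat’s schema), filtering and sorting become one-liners once each record carries a type label:

```python
# Hypothetical decision records with an explicit type label per decision.
decisions = [
    {"appeal_no": "2016-003047", "type": "appeal", "outcome": "reversed"},
    {"appeal_no": "2015-001234", "type": "reexamination", "outcome": "affirmed"},
    {"appeal_no": "2016-002222", "type": "appeal", "outcome": "affirmed"},
]

# Filter down to regular appealed decisions, then sort by appeal number.
appeals = sorted(
    (d for d in decisions if d["type"] == "appeal"),
    key=lambda d: d["appeal_no"],
)
assert [d["appeal_no"] for d in appeals] == ["2016-002222", "2016-003047"]
```

The same pattern extends to any tracked field (outcome, art unit, ground of rejection), which is what makes subsets searchable rather than requiring manual screening.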

6) Downtime from 1:00 AM – 5:00 AM EST – Every morning, the PTO takes the FOIA Reading Room website offline to perform maintenance. This may not be a big deal to some people, but for someone in another time zone or just in night-owl mode, this four-hour window can significantly delay access to your desired decision or data.

Being hosted on a cloud server, Anticipat has no regularly scheduled maintenance downtime. You are free to use it at all hours of the day.

7) Errors – Coming from a federal government website, it’s understandable that some of the decisions data contain errors. Some errors are minor, such as a decision’s name being cut off because it includes an apostrophe. Others are more consequential, such as a decision being matched to the wrong application number or two decisions being merged into one. Because every decision in the Anticipat database is verified using our proprietary systems, we work hard to catch and resolve the errors in the source data of every decision.
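As a simplified illustration of the kind of automated sanity check that can flag such source errors (a hypothetical sketch, not Anticipat’s actual verification system), malformed identifiers can be caught by validating each scraped record against the expected formats:

```python
import re

APPEAL_NO = re.compile(r"^\d{4}-\d{6}$")     # e.g., 2016-003047
APP_NO = re.compile(r"^\d{2}/\d{3},\d{3}$")  # e.g., 13/651,280

def flag_suspect(record):
    """Return a list of problems found in a scraped decision record."""
    problems = []
    if not APPEAL_NO.match(record.get("appeal_no", "")):
        problems.append("malformed appeal number")
    if not APP_NO.match(record.get("application_no", "")):
        problems.append("malformed application number")
    return problems

# A well-formed record passes; a garbled one is flagged for review.
assert flag_suspect({"appeal_no": "2016-003047",
                     "application_no": "13/651,280"}) == []
assert "malformed appeal number" in flag_suspect(
    {"appeal_no": "2016_3047", "application_no": "13/651,280"})
```

Records that fail a check like this can be queued for human review instead of entering the database with a wrong application number attached.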


In conclusion, because of the deficiencies discussed above, ex parte PTAB data have been consistently overlooked: they simply cannot be effectively retrieved and analyzed by practitioners. While you may not realize it yet, this may be costing you both time and money. Anticipat alleviates these deficiencies. Access the Research Database here.


Introducing Rejection Tags: A Way to Use Rationales and Types of Rejections for Patent Prosecution

In 1753, Swedish botanist Carl Linnaeus introduced a system for classifying plants. His two-term classification system assigned each organism a generic name followed by a more specific name (e.g., Homo sapiens for humans). The system was different from previous classifications, but not extraordinary in itself. What truly was groundbreaking was its elegance and simplicity, which paved the way for all living organisms to be systematically and uniformly classified.

As the father of modern taxonomy, Linnaeus would be stunned to see how far technology has taken classification. Entire industries have been revolutionized through improved classifying of big data. And patent prosecution is no exception. We at Anticipat are excited to introduce “tags” on the Research Database for all grounds of rejections as a way of classifying patent prosecution rationales.

If the Board-decision equivalent of a genus is a ground of rejection, then the species is the tag. The Anticipat Research Database has long included the more generic grounds of rejection for each appealed decision (e.g., §101, §102, §103, §112, OTDP, etc.). But because of the wide-ranging categories of common reasons why a rejection is reversed or affirmed, it helps to dig deeper than cataloging the ground of rejection. This deeper level represents the various possible points of contention. Identifying these specific categories allows you to find the decisions relevant to a certain issue without drowning in the ocean of Board decisions.

In short, a tag is a brief summary of a more granular point of contention within a ground of rejection. Assume that an applicant and Examiner are at odds over a ground of rejection. Depending on the rejection, this disagreement can take place over a finite number of points. Now, with tags, you can easily look up the various points of contention for each ground of rejection. In other words, if you believe that a particular obviousness argument is worth pursuing, you can find decisions where the Board reversed an Examiner on that very argument. Or, if you don’t know which argument is worth pursuing, you can quickly find the arguments that have been most successful in reversing the Examiner. Here are examples of the ground-of-rejection/tag classification system.

  • 101 – nonstatutory subject matter

Some of the tags for the ground of rejection “§101 – nonstatutory subject matter” include:

  • Software/Data per se
  • Abstract Idea (prima facie case, step one, step two)
  • Law of Nature (prima facie case, step one, step two)
  • Naturally-Occurring Phenomenon (step one, step two)

Finding all the decisions with a particular ground of rejection is just the first step. Even more useful is weeding out less relevant decisions that fall within the same ground-of-rejection category. Take the abstract idea rejection. Many points of contention within §101 nonstatutory rejections have little to do with abstract ideas: a computer-readable medium that comprises a signal, software per se, combining multiple statutory classes, law of nature, naturally occurring phenomenon, claiming a human, etc. Even within abstract idea, there are multiple points of contention: 1) prima facie case (that the Examiner did not do even the minimum job of identifying and rejecting the claim as an abstract idea), 2) step one, and 3) step two of the Mayo/Alice framework. Because Anticipat tracks all of these subcategories for you, you can find decisions with your desired point of contention in seconds.
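The genus/species relationship described above is, at bottom, a nested mapping from each ground of rejection to its tags, with decisions looked up by both levels at once. A minimal sketch, with hypothetical field names and sample records:

```python
# Hypothetical genus -> species mapping; tag names follow the examples above.
rejection_tags = {
    "101 - nonstatutory subject matter": [
        "Software/Data per se",
        "Abstract Idea (prima facie case)",
        "Abstract Idea (step one)",
        "Abstract Idea (step two)",
    ],
    "103(a) - obviousness": [
        "Hindsight Reasoning (Prima Facie Case)",
        "Teaching Away",
        "Non-Analogous Art",
    ],
}

def decisions_with_tag(decisions, ground, tag):
    """Return decisions that involve `ground` and carry the species `tag`."""
    return [d for d in decisions
            if ground in d["grounds"] and tag in d["tags"]]

sample = [
    {"appeal_no": "2016-003047",
     "grounds": ["101 - nonstatutory subject matter"],
     "tags": ["Abstract Idea (step one)"]},
    {"appeal_no": "2015-000001",
     "grounds": ["103(a) - obviousness"],
     "tags": ["Teaching Away"]},
]

hits = decisions_with_tag(sample, "101 - nonstatutory subject matter",
                          "Abstract Idea (step one)")
assert [d["appeal_no"] for d in hits] == ["2016-003047"]
```

Querying on the species level is what lets you skip the many §101 decisions that share the genus but turn on an unrelated point of contention.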


  • 103(a) – obviousness

The ground of obviousness under 35 U.S.C. § 103(a) includes our most advanced set of tags. This makes sense, as obviousness is one of the most nuanced and developed grounds of rejection. We keep track of over 20 points of contention within obviousness, such as the following:

  • Scope and Content of Prior Art – Broadest Reasonable Interpretation
  • Examiner Bears Initial Burden (Prima Facie Case)
  • Clear and Factually-Supported Articulation of Reasons for Obviousness (Prima Facie Case)
  • Hindsight Reasoning (Prima Facie Case)
  • Secondary Considerations
  • Combining/Substituting prior art elements according to known methods to yield predictable results
  • Use of known technique to improve similar devices (methods, or products) in the same way
  • Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results
  • “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success
  • Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one
  • Proposed modification cannot render the prior art unsatisfactory for its intended purpose
  • Teaching away
  • Non-Analogous Art

The challenge with trying to find relevant arguments to overcome an obviousness rejection, for example, is that any case decided by the Board could include one or more of these obviousness points. Given the volume of obviousness decisions, it has been impractical to find the relevant ones.

The point of tags

You can do a lot of interesting things with tags. Suppose, for example, that you are stuck on a particular rationale used by an Examiner that you think is unreasonable. You can easily match your issue with cases decided at the Board and use Anticipat to quickly pull the decisions where the Board agreed with you. If very few decisions are on your side, that is a valuable reality check. If many are, you can review them to double-check that your facts correspond with those in the decisions. You can also use the legal authority relied on by the Board for that particular point of contention in persuading your Examiner.

Tags will be incorporated into our soon-to-be-released Practitioner Analytics page to guide prosecution strategy. Using the Practitioner Analytics interface, tags can be ranked by how frequently they succeed in overturning a particular type of Examiner rejection. You can thus find better arguments faster and with more confidence. Sign up for an invitation to Practitioner Analytics.
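Ranking tags by how often they have carried the day is conceptually just a frequency count over reversed decisions. A hedged sketch of that computation, using hypothetical records and tag names rather than Anticipat’s actual data:

```python
from collections import Counter

# Hypothetical reversed decisions, each listing the tags the Board relied on.
reversals = [
    {"tags": ["Hindsight Reasoning", "Teaching Away"]},
    {"tags": ["Hindsight Reasoning"]},
    {"tags": ["Non-Analogous Art", "Hindsight Reasoning"]},
]

# Count how many reversals involve each tag, then rank by frequency.
tag_counts = Counter(tag for d in reversals for tag in d["tags"])
ranked = [tag for tag, _ in tag_counts.most_common()]

assert ranked[0] == "Hindsight Reasoning"  # appears in 3 of 3 reversals
```

A real analytics page would scope the count to the Examiner’s particular rejection type, but the ordering principle is the same: arguments that have reversed most often surface first.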

Board decisions show that independent judges have agreed with the applicant’s position in a related case.  They are a powerful way to check and augment your existing experience.


You don’t have to be a famous botanist to appreciate that identifying Board decisions through a rejection/tag relationship is a simple but powerful way of describing how the Board decides cases today. In the aggregate, this structure provides targeted information to inform your patent prosecution strategy. At about $1 per day, the Anticipat Research Database is not only incredibly affordable; at current hourly billing rates, it pays for itself in just seconds a day. Try it now for free.