In an ideal world, your chance of getting a patent allowed is based on the merits of your patent application and independent of the largely random assignment of the patent examiner. As any patent attorney knows, however, this is not the case. Some examiners allow patents too easily and others seem predisposed against allowing any patents at all.
This ideal can be described as outcome consistency. The outcome of a patent application should be largely the same regardless of the assigned patent examiner. Outcome consistency is needed to ensure fairness. It is unfair for an applicant to be denied a patent for a worthy invention because the application was assigned to a strict examiner, and it is unfair to the public for a patent to be granted for an unworthy invention because the application was assigned to a lenient examiner. The lack of outcome consistency among patent examiners is a known issue that the USPTO is working on improving, and this article presents visualizations to help diagnose areas for improvement.
The patent application grant rate across the USPTO is 66% (computed as described here). One would expect that a distribution of examiner grant rates would follow a bell-like curve with (i) the average examiner having a grant rate of 66% and (ii) a reasonably small standard deviation such that most examiners are close to the average.
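A per-examiner grant rate can be sketched as grants divided by total disposals. The record layout below is a hypothetical stand-in for illustration, not the USPTO's actual data format or methodology:

```python
# Sketch: per-examiner grant rates from hypothetical disposal records.
# Each record is (examiner_id, outcome) -- an illustrative assumption.
from collections import defaultdict

def grant_rates(disposals):
    """Return {examiner_id: grants / total disposals} for each examiner."""
    counts = defaultdict(lambda: [0, 0])  # examiner -> [grants, total]
    for examiner, outcome in disposals:
        counts[examiner][1] += 1
        if outcome == "granted":
            counts[examiner][0] += 1
    return {e: g / t for e, (g, t) in counts.items()}

records = [
    ("A", "granted"), ("A", "granted"), ("A", "abandoned"),
    ("B", "granted"), ("B", "abandoned"),
]
rates = grant_rates(records)  # A granted 2 of 3; B granted 1 of 2
```

The same idea, applied over all disposals in a time window, yields the office-wide 66% figure and the per-examiner rates analyzed below.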
Here is the actual distribution of examiner grant rates across the USPTO (this is a weighted histogram according to the number of cases handled by an examiner with SPEs excluded):
The distribution here is clearly far from ideal in that examiner grant rates run the full gamut from 0% to 100%. The standard deviation is 22%, and the dashed line is the closest bell-like curve (a beta distribution).
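A caseload-weighted histogram with a fitted beta curve can be produced roughly as follows. The data here is synthetic, and the method-of-moments beta fit is my assumption about how such a curve could be fitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for examiner grant rates and caseloads.
rates = rng.beta(2.0, 1.2, size=500)       # grant rates in [0, 1]
cases = rng.integers(50, 500, size=500)    # cases handled per examiner

# Weighted histogram: each examiner contributes proportionally to caseload.
hist, edges = np.histogram(rates, bins=20, range=(0, 1), weights=cases)

# Weighted mean and standard deviation of grant rates.
mean = np.average(rates, weights=cases)
var = np.average((rates - mean) ** 2, weights=cases)
std = np.sqrt(var)

# Method-of-moments beta fit: match the weighted mean and variance.
common = mean * (1 - mean) / var - 1
alpha, beta = mean * common, (1 - mean) * common
```

With real data, `mean` would land near 0.66 and `std` near 0.22; the fitted `alpha` and `beta` parameterize the dashed curve overlaid on the histogram.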
Possible reasons for the wide spread of grant rates include the following:
- Technology areas may have inherently different grant rates based on the difficulty of discovering an invention in a technology area.
- Individual examiners may apply standards of differing strictness.
Because of the hierarchical structure of the USPTO, the first possible reason is quite easy to investigate. The USPTO comprises three organizational levels: (i) 8 tech centers, (ii) 59 groups, and (iii) 568 art units. As you go down the hierarchy, the technology addressed becomes more specific. Accordingly, one would expect the examiner distributions for individual groups and art units to more closely resemble a bell-like curve with a smaller standard deviation.
Outcome Consistency Within Groups
I computed grant rate distributions for examiners of each of the 59 groups (I did not do so for art units because there is not enough data at only 10-20 examiners per art unit). Most groups had better outcome consistency than the USPTO as a whole but some were worse.
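The per-group spread reduces to a group-by computation over examiner grant rates. The data layout below is a hypothetical stand-in:

```python
import numpy as np

# Hypothetical (group, grant_rate) pairs for individual examiners.
examiners = [
    ("2630", 0.70), ("2630", 0.66), ("2630", 0.72), ("2630", 0.68),
    ("2620", 0.15), ("2620", 0.90), ("2620", 0.45), ("2620", 0.75),
]

def group_std(records):
    """Return {group: std dev of examiner grant rates in that group}."""
    by_group = {}
    for group, rate in records:
        by_group.setdefault(group, []).append(rate)
    return {g: float(np.std(vals)) for g, vals in by_group.items()}

spreads = group_std(examiners)
# A tightly clustered group (like 2630 in the article) shows a much
# smaller standard deviation than a widely spread one (like 2620).
```

Ranking groups by this statistic is what surfaces the best and worst performers discussed below.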
For comparison, here are distributions for Group 2630 (Digital and Optical Communications) with a standard deviation of 10% and Group 2620 (Selective Visual Display Systems) with a standard deviation of 23%:
Because the distribution for Group 2630 is much closer to a bell curve and has a much smaller variance, we can conclude that Group 2630 is doing a much better job of achieving outcome consistency across examiners than Group 2620.
Why does Group 2620 have such poor outcome consistency? Since all of the patents examined by Group 2620 relate to Selective Visual Display Systems, diversity of technology seems an unlikely culprit. From my experience as a patent attorney who has worked with many different examiners, I conclude that the poor outcome consistency is due to the differing strictness of individual examiners.
Outcome Consistency Across Groups
Because different groups address different technologies, one would expect some variation of grant rates between different groups. One would also expect that groups that address similar technologies (e.g., groups in the same tech center) would have similar grant rates. The example above with groups 2620 and 2630, however, shows that this is not the case.
To view outcome consistency across groups, I computed the following distribution:
Each blue dot corresponds to an examiner, and the size of the dot corresponds to the number of cases handled by the examiner. The horizontal axis is grant rate, and the vertical axis is the group. Examiners in the same group are aligned horizontally so you can view the distribution of examiners in a group by scanning any horizontal line in the above distribution. Divisions between tech centers are indicated by the yellow lines. You can access an interactive version of the grant rate by group distribution, which allows you to click on a blue dot to see the details of an individual examiner.
As expected, the above distribution shows that some technology areas have, on average, higher grant rates than other technology areas. What I find most striking about the above distribution, however, is the large variation of examiner grant rates within nearly every technology area. While some groups (such as group 2630 discussed above) have low variation, adjacent groups addressing similar technology (such as group 2620) do not, so the lack of outcome consistency extends across all technology areas of the USPTO.
If a patent applicant is fortunate enough to have his or her patent assigned to a group with low variation (such as group 2630) then, from that point forward, there is higher outcome consistency. If the applicant had used slightly different words in the claims, however, then the application could have been assigned to a different group with a very different grant rate and/or a much higher variability of grant rates.
In my practice, I have applications with similar technology where some applications get assigned to one of the business method art units with very low grant rates (in Tech Center 3600) and others get assigned to more technical art units with much higher grant rates (in different tech centers, such as Tech Centers 2100, 2400, and 2600). Accordingly, to get better results for my clients, I draft my claims in a manner that tries to avoid classification into one of the business method art units.
Outcome consistency thus relates not only to the variability of examiners within a group, but also to the variability of examiners across the USPTO since applications with similar technology can be assigned to different groups and even to different tech centers.
Outcome Consistency and Patent Quality
The lack of outcome consistency is caused by at least two factors. First, the inconsistency in application classification can cause applications with similar technology to be assigned to different groups with very different grant rates. Second, in most groups, the variability of examiner grant rates is high. As a result, your chance of getting a patent granted depends in large part on the mostly random assignment of a patent examiner to your application.
Patent quality is paramount to the mission of the USPTO, and patent quality presumably includes making sure (i) that unworthy patents are not granted and (ii) that worthy patents are granted. When worthy patents are not granted, applicants are unfairly denied their patent rights. When unworthy patents are granted, the public may be harmed by unfair competitive advantages.
The data presented here may help address the second cause of outcome inconsistency identified above. This downloadable table shows the standard deviation of grant rates for each group in the USPTO. The groups from this table with the largest standard deviations of examiner grant rates may be candidates for review to find ways to decrease variability.
Image Source: Deposit Photos.
Charts provided by Author.