Drilling Down on Criticism of Top-Down Approach to Determining Essentiality

By Tim Pohlmann
July 21, 2021

“While both the Concur IP report and the Cooper report indeed take two different approaches to conducting patent essentiality determination based on subject matter experts’ claim standard section mapping, the cited 2017 IPlytics EU report did not conduct or claim to conduct any essentiality determination.”

Last week, an article was published on the Social Science Research Network (SSRN) website by Matteo Sabattini, the director of IP Policy at Ericsson. SSRN is an open research-paper repository that does not peer review the articles uploaded to it. Sabattini's new article, a summary of which was also recently published on IPWatchdog, is titled "When is a portfolio probably standard-essential?" and cites several studies that determined the overall essentiality rate for 4G and 5G. Here, Sabattini cites the Concur IP study that was part of the expert report of Dr. Zhi Ding in the TCL v. Ericsson litigation, as well as several studies published by David Edward Cooper of Hillebrand Consulting, an Ericsson-commissioned subject-matter expert who has also testified in court, e.g. in the Unwired Planet v. Huawei case. Finally, Sabattini also mentions the 2017 EU Commission study conducted by IPlytics:

For example, a number of specialized consulting firms have published reports that try to estimate what share of families that were declared as potentially essential to the 4G communication standard were truly essential. Among those, Concur IP, IPLytics, PA Consulting and David Cooper [Coo2019].

While the Concur IP report and the Cooper report do indeed take two different approaches to conducting patent essentiality determination based on subject-matter experts' mapping of claims to standard sections, the cited 2017 IPlytics EU report did not conduct, or claim to conduct, any essentiality determination, even though Sabattini lists it as one of the SEP essentiality determination reports. The 2017 EU report was commissioned from IPlytics by the EU Commission's Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (DG GROW). The study was also peer-reviewed by the EU Commission's expert economists (among others, Fabio Domanico) as well as by external economists. This EU report, however, was not intended to perform any SEP essentiality determination, and it communicates this explicitly:

The data collection of disclosed SEPs relies on the information that is published by the SSOs and makes no attempt at verifying the accuracy of the essentiality declaration itself. By no means, individual SEP declarations can be understood as evidence of actual essentiality of the declared patents. (page 10, Chapter III)

What is more, a whole chapter of the 2017 EU study (page 48) discusses methods of SEP essentiality determination:

Since SSOs do not verify the essentiality of patents to standards, disputes whether or not a patent really claims an invention reading on a particular standard have to be solved during bilateral negotiations (where the parties may typically produce and argue over claim charts) and may eventually lead [to] a trial. Ultimately, only a court may decide whether a patent is essential or not for a particular implementation of a standard and for a particular application of this standard in a specific product.

The study states even more explicitly that several studies exist showing that only a share of the self-declared patents is actually essential, citing the Myers 2005 study:

A number [of] studies indicate that only 20-28% of patent families declared essential were actually essential for key technologies.

Sabattini, in his research paper, however, claims that, in the 2017 EU study, IPlytics "simply uses the declarations with minor nominal adjustments, i.e. a de facto essentiality rate of 100%."

He further assumes that:

…IPLytics in [IPL2017] does not even attempt to estimate the overdeclaration rate, and simply uses the raw declaration data. Therefore, e(declared) = 1.

Sabattini’s claim that IPlytics assumes a 100% essentiality rate for all self-declared patents is untrue, as anyone who reads the 2017 EU study will quickly see. In fact, the opposite is true: IPlytics is one of the companies regularly publishing reports and blog posts making the public aware of the limitations of self-declared patent data. A recent IPWatchdog article, just to name one out of many, devotes a whole chapter to the “Limitations of SEP Declaration Data”.

As an economist, I have, over the past 12 years, published numerous research articles that make use of self-declared patent data. Many of these articles are peer-reviewed and have been accepted and published in well-known economics journals, receiving worldwide citations from leading economists. I was among the first to publish statistics on self-declared patents, e.g. in the 2011 EU fact-finding study, and I have always made explicit, in every study and report, that these patent declaration databases were created to document the FRAND obligation and should not be misinterpreted as an accurate picture of verified SEPs. What is more, as the CEO and founder of IPlytics, I have organized numerous IPlytics webinars to educate the industry about what SEPs are and what the data limitations of self-declared patents are, discussing these issues with well-known industry experts at conferences and in online panel discussions, e.g. in the recent IPWatchdog webinar, “Determining Essentiality for SEPs. Webinar Slides, Live Poll Results, Recording”. A few months ago, I took part in a panel discussion in the course of the EU transparency webinars, where I presented slides on the limitations of patent declaration databases and how these could be improved.


It is very difficult to understand why, given all of these publications communicating the data limitations of self-declared patents, the Sabattini article wrongly cites the 2017 EU study and concludes that IPlytics assumes every self-declared patent to be 100% essential.

Addressing Sabattini’s Conclusions

The Sabattini article first makes the case that a minimum number of rigorous claim charts is needed for smaller patent portfolios to provide enough evidence that a portfolio is standard essential. This is important to reduce the risk that stems from uncertainty about a portfolio’s essentiality rate. His model and conclusion make a lot of sense to me, and I believe his model can, in this case, provide a good answer. Sabattini also compares the results of the Concur IP analysis, which was commissioned by TCL in its court case against Ericsson, to the Cooper report, which was sponsored by Ericsson. Without going into the details of how Sabattini applies these differences in his model, both the Cooper report and the Concur IP report have one major shortcoming: both are highly biased. Concur IP was paid by TCL and encouraged (explicitly, or implicitly, having in mind who pays them) to estimate the self-declared patent essentiality rate as high as possible in order to dilute the Ericsson portfolio. Cooper, on the other hand, was hired and paid by Ericsson to publish multiple reports claiming that the overall essentiality rate is as low as possible (according to Cooper, only 8%; here, the Ericsson portfolio was left out of the analysis).

With a low average overall essentiality rate, the Ericsson share of 4G and 5G patents, a portfolio which is indeed actively licensed worldwide and rigorously claim-charted, looks much bigger, as Ericsson has evidence of much higher essentiality rates for its own portfolio. I do not want to promote any particular essentiality rate, nor do I have an opinion about whether the rate is closer to 8% or 30%. But there is no doubt that the essentiality rate of the Cooper report, as well as the essentiality rate determined in the TCL v. Ericsson case by Concur IP, are both subject to bias. And here it does not matter whether you spend several hours on each of 200 declared patents (as Cooper did) or a few minutes per patent on 2,600 declared patents (as Concur IP did): both reports, no matter the approach, are biased. Indeed, different reports conclude different essentiality rates when they are commissioned and paid for by opposing parties. Concluding from these differing report outcomes that a top-down approach is biased against patent owners and favors infringers does not seem logical.

Also, Sabattini’s report does not consider the law of large numbers, which states that as a sample grows, its mean gets closer to the mean of the whole population. In other words, the more self-declared patents are charted, the more accurate the estimate and the lower the risk and margin of error. So, while the Cooper report may be more accurate per patent, a sample of e.g. 200 5G self-declared patents is too small to provide any reliable result on the overall essentiality rate of the whole 5G population.
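To illustrate the point with back-of-the-envelope statistics (the 30% rate below is a purely hypothetical figure for illustration, not a claim about any actual study or portfolio), the standard 95% margin of error for an estimated proportion shrinks with the square root of the sample size:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for an estimated proportion p_hat
    from a simple random sample of size n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical 30% essentiality rate at the two sample sizes discussed above
for n in (200, 2600):
    moe = margin_of_error(0.30, n)
    print(f"n={n}: estimated rate 30% +/- {moe:.1%}")
# -> n=200: estimated rate 30% +/- 6.4%
# -> n=2600: estimated rate 30% +/- 1.8%
```

Under these assumptions, a 200-patent sample leaves the true rate anywhere between roughly 24% and 36%, while the 2,600-patent sample narrows it to within about two percentage points; and note that this calculation assumes an unbiased random sample, so it understates the error of a biased one.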

Top-Down Can Work

To summarize, I agree with Sabattini that the top-down approach needs a rigorous methodology for mapping self-declared patents to standards, but it also needs a certain sample size, and that sample size must be bigger than 200. If a rigorous methodology is used and the sample size is big enough, a top-down approach does work and should be considered as one approach in the FRAND determination process, as long as the SEP essentiality determination is neutral and unbiased.
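As a rough sanity check on the "bigger than 200" threshold (again assuming a hypothetical 30% essentiality rate and a target precision of plus or minus five percentage points, neither figure taken from the studies discussed above), the textbook sample-size formula for estimating a proportion gives:

```python
import math

def required_sample_size(p_hat, moe, z=1.96):
    """Smallest simple random sample for which a proportion estimate
    near p_hat has at most the given 95% margin of error (moe)."""
    return math.ceil(z**2 * p_hat * (1 - p_hat) / moe**2)

# Hypothetical inputs: 30% essentiality rate, +/- 5 percentage points
print(required_sample_size(0.30, 0.05))  # -> 323
```

Even under these generous illustrative assumptions, more than 300 charted patents are needed, and tightening the target to plus or minus two points pushes the requirement past 2,000. No sample size, however, can compensate for a sample that was selected with a bias in mind.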

Image Source: Deposit Photos
Image ID: 158530450
Copyright: iqoncept

The Author

Tim Pohlmann

Tim Pohlmann is the CEO and founder of IPlytics. He earned his doctoral degree with the highest distinction from the Berlin Institute of Technology, with a dissertation on patenting and coordination in standardisation. He then went on to work as a post-doctoral researcher and consultant for the Law and Economics of Patents Group at CERNA, MINES ParisTech.

In his work as an economist and consultant, Dr Pohlmann was confronted with the challenge that standards databases such as those of the European Telecommunications Standards Institute and the Institute of Electrical and Electronics Engineers have no real, meaningful connection with comprehensive global patent databases. He realised that if we are to keep pace with the next technology revolution, then as IP professionals, we need to rethink – even revolutionise – how we approach both patent and standards data, to provide business-ready knowledge for actionable decision making across our organisations.

Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com. Read more.
