“Not all declared patents are essential and not all essential patents are declared. Both described scenarios show that patent declaration data needs refinement, filtering, extrapolation and a neutral and objective SEP determination and valuation metric.”
One of the major challenges when licensing, transacting, or managing Standard Essential Patents (SEPs) is that there is no public database that provides information about verified SEPs. Standard-setting organizations (SSOs) such as ETSI (4G/5G), IEEE (Wi-Fi), or ITU-T (HEVC/VVC) maintain databases of so-called self-declared patents to document the fair, reasonable and non-discriminatory (FRAND) obligation. However, SSOs do not determine whether any of the declared patents are essential, nor are the declarants required to provide any proof or updates. As a result, in the course of licensing negotiations, patent acquisitions, or litigation, the question of which patents are essential and which are not is one of the most debated when negotiating SEP portfolio value, royalties, or infringement claims. Artificial Intelligence (AI) solutions have started to support the process of understanding how patent claims relate to standards, allowing larger SEP portfolios to be assessed without spending weeks or months, and significant sums, on manual reviews by technical subject matter experts and counsel.
Limitations of SEP Declaration Data
As Justice Birss concluded in Unwired Planet v. Huawei, “…in assessing a FRAND rate counting patents is inevitable”. However, SEP declaration counting is typically subject to two limitations:
- Maximal declaration situation
SSOs such as ETSI (the organization that specifies the 4G/5G standards) encourage standards developers to declare any patent that could potentially be essential to a standard. A few declaring companies create claim charts before declaring patents; most others declare any potentially essential patent without any in-depth analysis. Also, companies often submit patent declarations while the patents are still pending and the standard is still evolving. Thus, patent claims as well as standards specifications are likely to change after the initial declaration. By design of the declaration practice, some of these declared patents end up being not essential. Publicly self-declaring all potentially essential patents for a given standard is an important part of the FRAND obligation and should not be called “over-declaration”. Still, such patent declarations must not be confused with verified SEPs, as a good share of the declared patents are not essential.
- Minimal declaration situation
Other declaration databases, such as those of IEEE (the organization that specifies Wi-Fi) and ITU (the organization that specifies HEVC/VVC), allow patent owners to submit so-called blanket declarations, where declaring companies need not declare specific patent numbers but may submit only a blanket statement without any further details about potentially essential patents. By design of the blanket declaration practice, these databases provide no information about the magnitude of SEP ownership across companies. In other words, there is no transparency about whether a declaring company owns, e.g., just a single SEP or several thousand SEPs.
To summarize: there are two big problems. Not all declared patents are essential, and not all essential patents are declared. Both scenarios show that patent declaration data needs refinement, filtering, extrapolation, and a neutral and objective SEP determination and valuation metric. In the past, SEP essentiality determination was conducted solely by subject matter experts (SMEs) who mapped and charted patent claims against standards sections. However, there is no practical way for humans to determine patent essentiality for large populations of declared patents. Not only are there too many patents (IPlytics counts over 300,000 declared patents worldwide), but it is rare for two different experts to agree on each other’s approach to mapping patents to standardized technologies, and any claim chart is biased towards the company commissioning the claim-charting work.
Limitations of Human SEP Determination
The TCL v. Ericsson case is a good example of the limitations of employing human experts to count, value, and determine the overall essentiality rate. In this litigation, Ericsson and TCL argued about the quality and essentiality rate of the Ericsson SEP portfolio compared to the overall number of 2G, 3G and 4G SEPs. TCL commissioned subject matter experts to conduct a study of a random sample of 2,600 ETSI-declared 2G, 3G and 4G patents to determine the essentiality rate. The essentiality assessment procedure received several criticisms. It was calculated that the commissioned experts must have spent on average only about 20 minutes per patent and charged on average $100 per patent for their assessment. The time spent and amount paid for SEP determination in this litigation differed greatly from the fees charged to verify SEPs in, e.g., the course of determining patent pool licenses. Most experts would thus agree that it is indeed reasonable to question whether a human can map a patent against complex technical specifications that may run up to 600 pages and hundreds of sections in just 20 minutes. Another, even more prevalent, criticism was the bias of the experts who conducted the patent mapping: the experts retained by TCL knew which side they were on. This case shows that human SEP determination is subject to two main drawbacks:
- The budget and time needed to thoroughly map and chart tens of thousands of declared SEPs to complex standards such as 2G–5G, Wi-Fi or HEVC/VVC are often economically not feasible.
- Human experts are biased towards the party that sponsors the analysis.
The latest technical advances in AI-based algorithms allow machines to assist the work of subject matter experts by extrapolating given claim charts to larger samples of data. While AI-based SEP determination may not be as accurate as an expert spending hours or days on every patent, it has the advantages of repeatability, scalability and objectivity. A sophisticated AI algorithm can determine essentiality in milliseconds, whereas an expert will require days, or sometimes weeks or months, to come to the same decision.
The Complexity of SEP Data
One reason why human SEP determination is both costly and time-consuming is the complexity of the standardized technology. Standards such as 5G consist of over a thousand so-called technical specifications (TS). These TS may run up to 600 pages and hundreds of so-called sections. To identify whether a declared patent relates to a standard, experts must study and understand all patent claims and map identified claim elements against all possible standards sections. Moreover, one patent may be declared to several standards documents, all of which must be considered when mapping the patent claims. The following data example illustrates the complexity of the data:
Figure 1: SEP declaration to multiple standards (ETSI SEP database example):
Figure 2: Combination of declared SEPs and standards (ETSI SEP database example):
Figures 1 and 2 illustrate that the number of patent declarations submitted to multiple standards documents creates almost two million combinations when considering only the ETSI declaration data. And as each ETSI standards specification has on average 212 so-called standard sections per document, and each declared patent on average 20 claims, the number of claim–section combinations (1,778,400 × 212 × 20) exceeds 7.5 billion. Such amounts of data are impossible for humans to analyze and not economically feasible to put in the hands of expert teams that would have to work for months or even years to determine the essentiality of the patents. This data example shows that even if policy makers now suggest involving patent offices in the claim-charting exercise, there will never be enough budget and human capacity to chart all worldwide declared patents, especially since the number of patent declarations is sharply increasing by tens of thousands of newly declared patent families every year.
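The back-of-the-envelope arithmetic behind the 7.5 billion figure can be checked directly (using the averages stated above):

```python
# Combination count from the ETSI declaration data example above
# (assumed averages: 212 sections per specification, 20 claims per patent).
declaration_combinations = 1_778_400  # declared patent / standards-document pairs
sections_per_spec = 212               # average standard sections per specification
claims_per_patent = 20                # average claims per declared patent

claim_section_pairs = declaration_combinations * sections_per_spec * claims_per_patent
print(f"{claim_section_pairs:,}")  # 7,540,416,000 — over 7.5 billion combinations
```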
Computer-Based Patent Essentiality Scoring
In computer science, an inverted index is a database index storing a mapping from content, such as words, to its locations in a set of documents. The purpose of an inverted index is to allow fast full-text searches and text comparison, at the cost of increased processing when a document is added to the database. Inverted indexing is the most popular data structure implemented in document retrieval systems used on a large scale, for example, in Internet search engines. Indexing, searching and comparing even billions of data points, such as patent claim and standard section data, can be conducted in milliseconds when the index is deployed on highly scalable cloud computers. State-of-the-art semantic algorithms use techniques where documents are represented as vectors in term spaces, allowing comparison of the actual content of a patent claim and a standard section rather than the overlap of keywords (figure 3).
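A minimal sketch of the inverted index idea, mapping terms to the documents that contain them (the document snippets below are invented for illustration; a production system would add tokenization, stemming, and positional information):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

# Hypothetical snippets standing in for a patent claim and two standard sections
docs = {
    "claim_1":   "a method for transmitting control information on an uplink channel",
    "ts_36_213": "the uplink control information is transmitted on pucch",
    "ts_38_331": "rrc connection establishment procedure",
}
index = build_inverted_index(docs)

# Looking up a term instantly yields every document containing it,
# without scanning the full text of each document.
print(sorted(index["uplink"]))  # ['claim_1', 'ts_36_213']
```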
Claim language and the language in standard specifications are often very different: patent claims are drafted by patent attorneys using broad terminology so that the claims apply to as many applications as possible, while standard specifications are written by the technical engineers who develop the standard and use very specific language. To overcome this, semantic models are trained on human-created claim chart samples to understand the context of claims and standards, so that the algorithms learn to recognize different expressions for certain concepts of patent claim elements. In machine learning, semantic analysis of a corpus is the task of building structures that approximate concepts from a large set of documents, where the index is trained on a smaller set of training data.
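The mechanics of vector-space comparison can be sketched with a simple bag-of-words cosine similarity. This is a crude stand-in: the trained semantic models described above map different wordings of the same concept to nearby vectors, whereas raw term counts only capture shared vocabulary. All example texts are invented:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of bag-of-words term vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# A broadly worded claim element vs. a specific standard section vs. an unrelated one
claim = "transmitting a scheduling request on a physical uplink control channel"
section = "the ue transmits the scheduling request on pucch resources"
unrelated = "rrc connection release procedure"

# The related section scores higher than the unrelated one
print(cosine_similarity(claim, section) > cosine_similarity(claim, unrelated))  # True
```

Note that "transmitting"/"transmits" and "physical uplink control channel"/"pucch" do not match at the term level; closing exactly this gap is what the claim-chart-trained semantic models are for.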
Figure 3: Semantic claim sections analysis
In addition to the semantic comparison of patent claims and standards sections, computer-based algorithms can extend the patent and standard data correlation by mapping the patent’s listed inventors (name, surname, affiliation) to participation at corresponding standards meetings, or by mapping the patent applicant/assignee’s accepted standards contributions that relate to the declared standard. A peer-reviewed article written by economists provides evidence that the patenting intensity (of later-declared patents) in related pre-standards-meeting periods is 2.6 times higher than in the idle period between meetings. The researchers find that this effect is highest for participating firms when the inventor was present at the meeting. Figure 4 further provides evidence of the cross-correlation of patent inventors and standards meeting participation. In figure 4, the IPlytics Platform was used to cross-correlate inventor participation at 3GPP (3rd Generation Partnership Project) meetings for 5G declared patent portfolios. The analysis shows that for, on average, 72% of all 5G declared patents, an inventor (first name, last name, entity) participated in the relevant 5G 3GPP standards meeting where the declared TS was discussed.
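This cross-referencing step amounts to joining two datasets: declared patents with their inventor lists, and attendance rosters of the meetings where each declared TS was discussed. A minimal sketch with invented records (real data would need name disambiguation across spellings and affiliations):

```python
# Illustrative records only: a declared patent lists inventors; each relevant
# standards meeting has an attendance roster keyed by (name, affiliation).
patents = [
    {"patent": "EP1111111", "declared_ts": "TS 38.213",
     "inventors": [("alice", "example corp"), ("bob", "example corp")]},
    {"patent": "EP2222222", "declared_ts": "TS 38.331",
     "inventors": [("carol", "other corp")]},
]
meeting_attendance = {
    "TS 38.213": {("alice", "example corp")},  # attendees where this TS was discussed
    "TS 38.331": set(),
}

def inventor_attended(patent):
    """True if any listed inventor appears on the relevant meeting roster."""
    attendees = meeting_attendance.get(patent["declared_ts"], set())
    return any(inv in attendees for inv in patent["inventors"])

# Share of declared patents with at least one inventor at the relevant meeting
share = sum(inventor_attended(p) for p in patents) / len(patents)
print(f"{share:.0%}")  # 50%
```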
Figure 4: Top SEP-declaring companies by share of declared patents with at least one listed inventor attending the relevant working group
AI-Based SEP Determination to Support Decision Making
AI-based semantic claim–section comparisons and the cross-correlation of inventor participation and accepted technical contributions at standards meetings are strong indicators of a patent being relevant to a given standard, and they can be integrated as features in AI-based SEP prediction models that score patents as to their likelihood of being standard essential. Making use of verified SEP training data from expert claim charts allows extrapolating information about essentiality to a much larger set of patents. This allows valuating and determining large patent portfolios that are economically not feasible to map manually by experts. Furthermore, AI-based SEP prediction models allow estimating the likelihood of essentiality for patents that have not even been declared due to blanket declaration statements. While AI-based SEP determination may not replace the work of experts, it supports valuating and determining the essentiality of SEPs for various use cases:
- Patent portfolio managers use AI-based SEP determination to valuate their own portfolio in comparison to competitor portfolios with regard to essential assets.
- Patent licensing managers use AI-based SEP determination to understand the value and relevance of a licensed patent portfolio with regard to standards.
- Patent transaction managers use AI-based SEP determination to identify and valuate SEP portfolios for patent acquisition purposes – to understand what can likely be commercialized and what cannot.
- Economists use AI-based SEP determination to valuate a potential SEP portfolio share in the course of a top-down analysis – to calculate the numerator and denominator.
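How the indicators discussed above might combine into a single essentiality score can be sketched as a logistic model. The feature weights below are entirely invented for illustration; a real prediction model would learn them from expert-verified claim chart training data:

```python
import math

def essentiality_score(semantic_similarity, inventor_attended, contribution_accepted):
    """Logistic combination of three illustrative features into a 0-1 score.

    Weights are hypothetical; in practice they would be fitted on
    expert-verified claim charts.
    """
    z = (4.0 * semantic_similarity        # claim-section semantic match (0-1)
         + 1.5 * inventor_attended        # inventor at the relevant meeting (0/1)
         + 1.0 * contribution_accepted    # accepted standards contribution (0/1)
         - 3.0)                           # intercept
    return 1 / (1 + math.exp(-z))

# Strong semantic match plus standards participation yields a high score
high = essentiality_score(0.8, True, True)
# Weak match with no participation yields a low score
low = essentiality_score(0.2, False, False)
print(round(high, 2), round(low, 2))
```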
An upcoming IPWatchdog webinar, Determining Essentiality for Standard Essential Patents: Challenges, Benefits & Solutions, will discuss how to make use of AI-based assessment when managing, licensing, transacting, or litigating SEPs. Joining the conversation will be Mang Zhu, Chief IP Strategy Officer at ZTE; David Yurkerwich, Senior Managing Director at Ankura; David Barkan, Litigation Principal at Fish & Richardson; Daniel Weinger, a Member at Mintz; Gene Quinn, President & CEO of IPWatchdog; and Tim Pohlmann, CEO of IPlytics.