“While it is true that the software may be abstract, if it is considered to be a computer-implemented invention, it is bound to produce a technical effect, or as we call it in our guidelines, a “further technical effect,” than just the currents flowing in the computer processor.” – Anton Versluis
I recently had the opportunity to speak on the record with three examiners at the European Patent Office (EPO) about their advice, pet peeves, and approaches to examining computer-implemented inventions, particularly in the field of artificial intelligence (AI), and how the EPO system compares with the U.S. patent examination system. Part I of this interview is available here. In this installment we discuss the meaning of, and differentiation between, artificial intelligence, machine learning and blockchain technologies. We also discuss what applicants should be doing when preparing patent applications relating to artificial intelligence technologies for filing at the EPO.
The examiners, in the order in which they first speak in the interview that follows, are:
Jean-Marc Deltorn (JMD) holds a PhD in theoretical physics and a Master of Laws, and is currently finishing a PhD in law on the interplay between AI and privacy. Deltorn has been a patent examiner at the EPO in the Information and Communication Technologies (ICT) sector since 2003, working in a variety of fields, from image analysis to pattern recognition and machine learning, and focusing currently on speech recognition. He is also a member of the EPO Guidelines Working Group on computer-implemented inventions (CIIs), and has co-authored the course material on “patentability of artificial intelligence”.
Abderrahim Moumen (AM) holds a master’s degree and a Ph.D. in the field of Telecommunications and Radars. He joined the EPO in 2000 as an examiner in the ICT sector. In 2017, he was appointed team manager leading a team of examiners dealing with patent applications in the field of antennas and microwaves. He specializes in the area of antennas (including smart antennas and antenna optimization), telecommunications, and the Internet of Things (IoT).
Anton Versluis (AV) studied Applied Physics with a specialization in Artificial Intelligence. He worked on Smart Robotics for the handicapped at the Netherlands Research Institution TNO and joined the EPO in 2006 as examiner in Pattern Recognition (G06K9), a technical area with a large role for AI, for instance in Self Driving Vehicles.
Without further ado, here is Part II of my interview with Examiners Deltorn, Moumen, and Versluis.
GQ: I would like to talk specifically now about artificial intelligence (AI); I know that’s a big word that encompasses a lot of things. Sometimes I wonder whether AI is more of a marketing word, because there are an awful lot of things that people will call AI that are really not. And then there’s machine learning and, on some level, the future of so much of the innovation that we’re looking at that could be paradigm shifting, like autonomous driving, for example, really is going to require massive computing power and either quasi-AI or AI in order to make sure that everybody is safe and secure. And when you talk about security and transitioning information, now you are talking about blockchain. So, there are all these different buzzwords and different fields and subsets, and AI is sort of the umbrella. How would you explain how the EPO categorizes AI?
AM: At the EPO, we consider AI to refer to the simulation or mimicking of human intelligence or human behavior by machines. It is a broad definition, and discussing human intelligence is very complicated. But I believe a large part of AI is based on machine learning and refers to the ability of systems to learn from the data provided, i.e. the input data, and to continuously improve by doing experiments. So, there is a notion of improvement over time without additional input from the user. Blockchain is something completely different. It is a technology used for the exchange of information in networks without having a central system or authority controlling that process. It’s basically a distributed ledger. AI could be used together with blockchain, but they are not directly related to each other.
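Moumen’s description of blockchain as a distributed ledger without a central authority can be sketched, very schematically, as a chain of hash-linked blocks. The function and field names below are illustrative only and do not reflect any particular blockchain implementation:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a new block that references the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def is_valid(chain):
    """Verify that every block references the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"from": "A", "to": "B", "amount": 5})
append_block(ledger, {"from": "B", "to": "C", "amount": 2})
assert is_valid(ledger)
ledger[0]["data"]["amount"] = 500   # tampering breaks the chain
assert not is_valid(ledger)
```

Because each block embeds the hash of its predecessor, any party holding a copy of the ledger can independently detect tampering, which is what makes a central controlling authority unnecessary.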
AV: I would agree with that. You can use AI in blockchain and you can use blockchain in AI. They are separate entities that can work together. After so many years in AI, I would say that AI is a system that learns from data and is able to make decisions from using that data, like a classification or an action to be taken, or the separation of an object in an image from its background. Blockchain is a way of finding a common truth among various players that everybody agrees on.
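Versluis’s notion of a system that learns from data and then makes a decision, such as a classification, can be illustrated with a deliberately tiny, hypothetical sketch: a nearest-centroid classifier fit on labelled 2-D points (the labels and data here are invented for illustration):

```python
def fit(samples):
    """Learn from data: compute the mean (centroid) of each class."""
    centroids = {}
    for features, label in samples:
        sums, count = centroids.setdefault(label, ([0.0, 0.0], 0))
        centroids[label] = ([s + f for s, f in zip(sums, features)], count + 1)
    return {label: [s / n for s in sums] for label, (sums, n) in centroids.items()}

def predict(centroids, point):
    """Make a decision: classify a point by its closest class centroid."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(point, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented training data: separating an "object" from its "background".
training = [((0.0, 0.1), "background"), ((0.2, 0.0), "background"),
            ((1.0, 0.9), "object"), ((0.8, 1.1), "object")]
model = fit(training)
assert predict(model, (0.1, 0.1)) == "background"
assert predict(model, (0.9, 1.0)) == "object"
```

The two-step shape, learn parameters from data, then use them to decide, is the common core of the machine learning systems the examiners describe, however much more elaborate real models are.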
JMD: In fact, the notion of AI is not exactly novel. The term was first coined in 1955 by John McCarthy, and it has been a long road towards the current hype. Today, machine learning, and in particular deep learning, has become basically the Swiss Army knife of AI. It is used in a broad range of industrial applications. This is not only due to the availability of greater CPU (and other specialized hardware) capacity and to the development of new algorithmic solutions to train large neural networks, but also to the availability of vast corpora of training data. There is indeed now enough data available to build those “models” that Abder and Anton have just mentioned. But even if deep learning dominates the scene at the moment, we are looking at a very dynamic technical landscape. We may see a resurgence of different approaches, such as symbolic AI, or graphical models, for example. Such approaches may help alleviate or circumvent some of the limitations of the current systems, for example by improving the explanatory features of deep learning models. We may then see the development of new architectures.
There is a large ecosystem of AI algorithms and applications of AI that we examiners are encountering in our current practice, not just machine learning per se. In fact, there is a whole gamut of algorithmic processes that fall under the rather broad umbrella of AI.
AM: Just to simplify, I would say that AI or machine learning is the use of algorithms and mathematical models that are implemented by using software and running on machines to solve a particular technical problem. For us, they therefore are computer-implemented inventions (CII) and may be patentable under the EPC if certain conditions are met.
For Computer-Implemented Invention and AI, Clarity and Technical Contribution are Key
GQ: Okay, that’s a great pivot, because I was going to start to ask you all about how applicants should treat AI-related applications. Is there anything different that should be done? I know you were just talking about algorithms, and they come up in any kind of CII situation. Do you find them more important or need more explanation or more depth when you are dealing with an AI invention, for example?
JMD: At the EPO, AI applications are treated in the same manner as all other CIIs. One key task for examiners is to identify where the technical contribution of the invention lies, even when the claim includes algorithmic features. Such a technical contribution can consist either in using a particular algorithm for a technical purpose, or it can refer to an algorithm adapted to a specific hardware platform.
AV: I would say that AI algorithms tend to be more abstract than other algorithms. Conceptually, you are dealing with data and then derivatives of that data and then derivatives of those derivatives. In claim language, too, that can make reading pretty complicated. When you have a first variable referring to a second variable, which is a mean of a third variable and so on, it becomes a challenge to keep track of all the different variables. That goes not only for claim language, but also for language in the description, where it can lead to errors. That is why we would recommend making the invention as clear as possible with diagrams and formulas.
AM: I think it is very important that the description states where the invention lies: the structure of the AI mathematical model itself, or the input data and/or the training procedure in the case of machine learning, for example.
AV: Or the technical application.
AM: Or the technical application, or the post-processing of the data, for instance, once the data is generated by AI, and how this data is being used in a particular application. The invention will more often relate to one of these aspects, while the rest, most of the time, are standard, well-known algorithms or concepts. As long as the invention is clear from the description, it makes life much easier for the patent examiner.
GQ: I want to go back to that. One attorney I know, who I would consider a very, very good attorney and particularly good at prosecuting, who really understands the technology and works in the AI area a lot, was just lamenting about dealing with an examiner here in the U.S. on a rejection that she had received dealing with some AI technology. The examiner kept telling her it was abstract and her argument was, well, of course, it’s abstract, it’s AI. I caught her at one of those moments when she was just going over it and was kind of frustrated, but I want to go back to that because AI is an abstraction on an abstraction layered with another abstraction. That is, in fact, where the benefit of some of these really sophisticated AI types of inventions lie. Because that is what gives you the appearance of human intelligence, in the learning that doesn’t require additional human input. Is there anything else that applicants can do or that you’ve seen be successful, or are there things that you’ve seen that have been mistakes or that were specifically not helpful to you in trying to have them explain the invention?
AV: What would be very helpful is to explain the technical effect of the invention and also of the fallback positions: While it is true that the software may be abstract, if it is considered to be a computer-implemented invention, it is bound to produce a technical effect, or as we call it in our guidelines, a “further technical effect,” than just the currents flowing in the computer processor. Here’s an example to make clear what I mean: Take a self-driving vehicle, for instance, and you have the AI collecting all the data from the sensors and it then determines that the steering wheel should turn to the right. That obviously is a technical effect because the AI is making decisions. Then you can have a piece of software that calibrates the sensors. It reads the sensors, it takes other real-life measurements, and it then determines the calibration numbers that are to be applied to the values in order for the sensors to produce proper readouts. That’s already a little more abstract, but can be patented, because the technical effect is to cause the AI to function better. Going a step further, you could even have a memory manager that does nothing but move memory around in the AI’s processing unit to make sure it functions optimally. Then, for that piece of software that does nothing but move data around, you need to indicate why this moving around produces a technical effect on the whole system.
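As a rough, hypothetical sketch (not taken from any actual application) of the calibration example Versluis describes, the software could fit gain and offset numbers from raw readouts against real-life reference measurements, then apply them so downstream decision logic receives proper values:

```python
def calibrate(raw_readings, reference_values):
    """Fit gain and offset by least squares: reference ≈ gain * raw + offset."""
    n = len(raw_readings)
    mean_raw = sum(raw_readings) / n
    mean_ref = sum(reference_values) / n
    cov = sum((r - mean_raw) * (t - mean_ref)
              for r, t in zip(raw_readings, reference_values))
    var = sum((r - mean_raw) ** 2 for r in raw_readings)
    gain = cov / var
    offset = mean_ref - gain * mean_raw
    return gain, offset

def correct(reading, gain, offset):
    """Apply the calibration numbers to a raw readout."""
    return gain * reading + offset

# Invented example: the sensor reports half the true value, shifted by 1.0.
raw = [1.0, 2.0, 3.0]        # what the sensor reports
truth = [0.0, 2.0, 4.0]      # real-life reference measurements
gain, offset = calibrate(raw, truth)
assert abs(correct(2.5, gain, offset) - 3.0) < 1e-9
```

The point of the example in patent terms is that even though this code never turns a steering wheel, its technical effect, producing proper sensor readouts so the AI functions better, is what the application would need to spell out.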
JMD: It is quite important to understand what is deemed “technical”, namely, how an otherwise algorithmic feature may contribute to the technical character of an invention. This is essentially the nexus of the reasoning that we employ at the EPO to assess the requirements of novelty and inventive step in mixed-type inventions. How do examiners approach such “mixed-type” inventions that contain both algorithmic and hardware features? How are these mathematical or algorithmic features evaluated? The examiners follow a well-established protocol that consists in evaluating whether these features (which, taken in isolation, may be considered abstract) have technical character in the context of the claimed invention. In our practice, an algorithmic feature may possess such a technical character either by its technical purpose or by being adapted to a specific technical implementation. The first of these two dimensions relates to the application of the AI or machine learning step to solving a technical problem in a technical field. The second dimension evaluates whether the algorithmic design is motivated by technical considerations of the internal functioning of a hardware implementation, e.g. a computer. A technical character may only be conferred on an algorithmic feature through one or the other of these two dimensions, either by its technical application or by its specific implementation. If either one of these conditions is met, the feature is considered to contribute to the technical character of the invention and must therefore be taken into account in the evaluation of inventive step. Making that relationship clear in the application is very useful for us examiners.
AM: In addition to what Jean-Marc just said, I think it is very helpful if the applicant himself cites the closest prior art. This forms a good starting point. Then he should discuss the problem that the invention is trying to solve, and most importantly, how this problem is being solved using AI or ML. I think this enables examiners to identify in which part of the AI solution the invention lies and effectively reach a decision on patentability.
Weighing the Risks
GQ: So, let me ask you this. If an applicant does that, how much at risk are they of an EPO patent examiner then coming back and saying, oh, well you defined what the problem was, so therefore your solution is obvious?
AV: That is called hindsight and we guard against it, as laid down in our Guidelines.
GQ: Because maybe you don’t know, but in the U.S., that’s what they do. So, if you are dealing with U.S. first-filed applications, you probably don’t see that level of information quite as much as you would like, and it is frustrating because that information is really quite helpful. The way that patent eligibility law in the U.S. has developed is that the judges are saying, tell us what the improvement is so that we can know whether this thing is really eligible to start with. It makes all the sense in the world, right? Because if you have an innovation, it’s got to be an improvement on some level. The problem in the U.S. is that, because of our KSR decision from the Supreme Court, if you do a really good job of explaining what’s wrong with the prior art and why you’ve solved that, examiners use that and say, oh, well, what you did was obvious. And I think that that’s just wrong.
AM: At the EPO, we do not simply state that something is not inventive. First, the examiner has to apply the problem-solution approach in the assessment of the inventiveness of the claims. He or she needs to identify the differences between the claimed subject-matter and the closest prior art, identify the problem solved and the technical effect on which the invention is based, and produce a reasoned assessment on why the claimed solution is considered to be inventive (or not). Finally, all three members of the Examining Division have to endorse the decision.