House Subcommittees Hold Hearing on Artificial Intelligence Challenges and Opportunities

Congresswoman Barbara Comstock (R-VA), House Research and Technology Subcommittee Chairwoman

On the morning of Tuesday, June 26th, the House Subcommittee on Research and Technology and the House Subcommittee on Energy held a joint hearing titled "Artificial Intelligence – With Great Power Comes Great Responsibility." The day's discussion centered on issues surrounding the nascent technological field of artificial intelligence (AI), including both the positive and negative impacts that improved AI technologies could have on the U.S. workforce and society in general.

As the hearing charter published by the House subcommittees notes, the purpose of the day's hearing was to understand the state of AI technology, including the difference between narrow and general intelligence, as well as the types of research currently being conducted to advance general AI technology. The panel witnesses included Dr. Tim Persons, chief scientist at the U.S. Government Accountability Office (GAO); Greg Brockman, co-founder and chief technology officer of the nonprofit AI tech organization OpenAI; and Dr. Fei-Fei Li, board chairperson and co-founder of the academic collaborative organization AI4ALL. Dr. Jaime Carbonell, director of the Language Technologies Institute at Carnegie Mellon University, was scheduled to appear on the panel but was unable to attend due to a medical emergency.

Although it’s not readily apparent, readers may be interested to note the ties that members of the witness panel have to Google and Elon Musk. Before co-founding AI4ALL, Dr. Li was the chief scientist of the AI and machine learning department of Google Cloud. For OpenAI, Greg Brockman’s written testimony notes that the organization has collaborated with Alphabet’s DeepMind AI development subsidiary on the design of AI systems. OpenAI was also founded in part by Elon Musk, who contributed to the $1 billion in initial funding for the organization. OpenAI has a further tie to Google through its research director, Ilya Sutskever, who joined the organization after working on Google Brain, the deep learning AI research team supported by the tech giant.


Opening remarks offered by members of the joint subcommittees were indicative of the tension many feel between the potential of AI to improve the quality of human life and the potential negatives that could arise through job displacement or the use of AI tech with malicious intent. “Depending on who you ask, AI is the stuff of dreams or nightmares,” said Congressman Dan Lipinski (D-IL). “I believe it is definitely the former, and I strongly fear that it could also be the latter.” Lipinski cited both a national AI research strategic plan created under the administration of former President Barack Obama and a GAO report issued this March on emerging opportunities and challenges posed by AI, pointing to the significant potential for AI in finance, cybersecurity and other areas, as well as the need to develop computational ethics and to determine the long-term impact of this tech sector on the job market. In his opening remarks, Congressman Marc Veasey (D-TX) cited job market research published last year by Gartner, which forecast the creation of 2.3 million AI-related jobs by 2020, outpacing the expected 1.8 million jobs that would be eliminated by AI tech.

Dr. Fei-Fei Li, AI4ALL Co-Founder and Board Chairperson

Congresswoman Barbara Comstock (R-VA), the chairperson of the joint subcommittee hearing, questioned Li as to how education could be transformed by the development of AI tech. Li’s answer spoke to the need to democratize science and technology education related to AI and to reach underrepresented populations, such as ethnic minorities and female students, in order to ensure that AI tech developments serve the widest possible swath of American society. “Humanity has never created a technology so similar or trying to resemble who we are,” Li said, “and we need AI technologists and leaders of tomorrow to represent this technology.” Li also noted that AI could augment educators in the classroom and improve possibilities for lifelong learning, even after individuals graduate from higher education institutions.

Greg Brockman, OpenAI Co-Founder and CTO

Rep. Lipinski questioned Brockman about an apparent discrepancy between his testimony and the GAO report on AI tech emergence, namely that Brockman seemed to believe that AI tech development was occurring at a more rapid pace than the GAO indicated. Brockman cited a study published by OpenAI this May which found that the computing power used in the largest AI training runs has been doubling every 3.5 months over the past six years, amounting to a 300,000-fold increase over that period. As Brockman noted, this represents a massive increase in computational power over what Moore’s Law would typically predict; Moore’s Law alone would have accounted for only a 12-fold increase during the study period. “We’re talking about a technological revolution on the scale of the agricultural revolution,” Brockman said. “If we aren’t careful in terms of thinking ahead and trying to be prepared, we really could be caught unaware.”
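
For readers who want to see how those figures fit together, the short sketch below is a back-of-the-envelope calculation, not part of the testimony; it simply takes the reported 300,000-fold growth and 3.5-month doubling time as given and converts them into an implied number of doublings and the period those doublings would span.

```python
import math

# Figures as reported above (assumed inputs, not independent data):
growth_factor = 300_000        # total increase in compute for the largest training runs
doubling_time_months = 3.5     # reported doubling time

# Number of doublings implied by a 300,000-fold increase
doublings = math.log2(growth_factor)                           # ~18.2 doublings

# Time those doublings would take at one doubling every 3.5 months
implied_period_years = doublings * doubling_time_months / 12   # ~5.3 years

print(f"Implied doublings: {doublings:.1f}")
print(f"Implied period at a 3.5-month doubling time: {implied_period_years:.1f} years")
```

Roughly 18 doublings at 3.5 months apiece works out to a little over five years, broadly in line with the six-year window Brockman described.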

Dr. Tim Persons, GAO Chief Scientist

GAO’s Persons pushed back slightly on Brockman’s assumptions by arguing that, despite exponential increases in computing power, many people in the tech development community were mildly skeptical as to the rate at which general AI would appear. Near the beginning of the hearing, subcommittee members noted a difference between general AI, the type of artificial intelligence which mimics human intelligence to complete many types of tasks, and narrow AI, which is highly application-specific. For example, the narrow AI behind the Siri digital voice assistant can perform natural language processing but isn’t optimized for navigating self-driving cars, and the inverse is true for the narrow AI tech which steers autonomous vehicles. “I think a lot of the driving force here is the concern about general AI and taking over the world,” Persons said. “It’s just much harder to mimic human intelligence, especially in an environment where intelligence isn’t even really defined or understood.”

The issue of AI replacing human workers and putting the U.S. workforce out of jobs came up periodically throughout the hearing. Persons noted that the federal government already had agencies, like the Bureau of Labor Statistics (BLS), that could collect data specific to this issue, though new types of data might need to be collected, or employment metrics updated, in response to the challenges AI presents to the workforce. Li spoke to the fact that AI technologies wouldn’t replace human workers so much as augment their current work, especially routine tasks. She used the example of a nurse in an intensive care unit (ICU), which she said was personal to her because her mother had recently been an ICU patient. AI tech could assist nurses with charting patient data and computer typing tasks, reducing the time spent away from patient care. “No matter how rapidly we develop the technology, in the most optimistic assessment, it’s very hard to imagine that entire profession of nursing would be replaced,” Li said. “Yet within nursing jobs, there are many opportunities that certain tasks can be assisted by AI technology.” Elsewhere, Brockman added that the augmentation of work with AI technology would likely resemble the advent of personal computers a few decades ago, when a wide swath of American workers had to learn how to augment their tasks using PCs.

The specter of increased Chinese investment in AI tech development was also discussed during the day’s hearing. During his opening remarks, Congressman Randy Weber (R-TX), chair of the House Energy Subcommittee, spoke to concerns over increased Chinese investment in AI programs and how that threatens U.S. dominance in the field. Congresswoman Debbie Lesko (R-AZ) inquired as to what steps the United States was taking to guard against espionage from China, a hot-button issue given the Trump Administration’s actions against China in response to deceptive trade practices. Persons noted that the protection of intellectual property in the AI sphere was critical to reducing the risks of IP theft by foreign actors, especially at a time when Internet access has made it easier to gain unauthorized access to IP. Brockman added that a dialogue about which types of AI-related information can be shared, and which are too valuable to share, is already underway among stakeholders in the field of AI development. Such a dialogue was important, he noted, because a great deal of early AI development has taken place in academic institutions, which have an overarching tendency to publish and disseminate their research findings.




12 comments so far.

  • Mark Nowotarski
    July 2, 2018 06:12 pm

    Nightwriter @10,

    Very interesting. Thanks for the feedback. Are there any cases in particular that might have gotten the AI 101 ball rolling (downhill, that is)?

  • Anon
    June 30, 2018 03:33 pm

    Night Writer,

    I, for one, appreciate your 101 battles.

  • Night Writer
    June 30, 2018 09:32 am

    @9 Mark

    101’s are going up in many AUs. This is well-known. For example, in 24xx AUs the number of 101s is through the roof. Right now they can be overcome, but they do put more restraints on the claims. I’ve spoken to many examiners who say they know they are supposed to evaluate all claims for 101. The goal is for this to happen in all the AUs. What is probably going to happen is that randomness will kill any claim if you get stuck with an examiner who decides to use 101 to reject your claims and won’t remove the rejection.

    Plus, for AI, there are many cases at the CAFC that have held AI ineligible. In fact, Taranto held in a case that was nonprecedential (probably had to be, or it would have wiped out half of AI, and its reasoning was absurd) that any computer that simulated human thought was per se obvious. This isn’t 101 directly, but it implicates 101.

    Anyway, I am at a large law firm. We have seminars on 101 constantly and constantly appeal the 101 decisions. I’d say our law firm is at the cutting edge of fighting 101 rejections.

  • Mark Nowotarski
    June 29, 2018 02:48 pm

    Nightwriter@?

    Got your response but don’t see your post here. If you wouldn’t mind sharing the art units you are seeing more 101’s in, I’d be happy to look into it deeper. You can contact me through my web site.

  • Night Writer
    June 29, 2018 01:16 pm

    @5 Mark Nowotarski

    It happens when the AI is tied into what the USPTO thinks is a business method. And the 101 rejections are ramping up in other art units, so I get a lot more 101 rejections now than I used to.

  • angry dude
    June 29, 2018 11:52 am

    Forget about patents on AI

    AI is all about very very complex algorithms which can be compiled and kept as trade secrets for many many years

    You gotta be an idiot to disclose any of that deep know-how publicly – e.g. as detailed flowcharts or actual code in patent applications

    Copyright still applies to (criminal) copying of binary executables and such

    Plus, those algos can run in the cloud to be completely inaccessible for decompiling and reverse engineering

  • Herbert L Roitblat
    June 29, 2018 09:40 am

    Part of the confusion stems from the variety of ways that the term “algorithm” is used. I have an article here: https://www.information-management.com/opinion/is-artificial-intelligence-patentable-should-it-be that speaks to this issue. I argue that computers are the tools that we use to implement modern technological innovation, that artificial intelligence depends on multiple algorithms, but is not an algorithm itself, and that AI should definitely be patentable.

  • Mark Nowotarski
    June 29, 2018 07:55 am

    Nightwriter@1

    Have you been getting 101 rejections on AI patents? Has anyone else?

    Just curious.

  • Bemused
    June 28, 2018 12:59 pm

    “So have we really gotten to the place where the creation of a replacement for man is not inventive?”

    Gene: I submit that the answer to that question depends upon the man being replaced.

    There are a handful of judges that come to mind (*cough* *cough* CAFC/SCOTUS) that could reasonably be replaced with say a cucumber or an eggplant with no real loss of intellect or reasoning.

    Just sayin’….

  • Jianqing Wu
    June 28, 2018 12:07 pm

    AI should be encouraged if it does not degrade the quality of products and services and will not pose a threat to human lives. If their argument is focused on job opportunities, the stone age would be the best model. However, humans throughout history have sought to increase productivity.

    I’d rather have a 2-hour work day and have the rest of the time for doing things I want. So, the political system must be prepared to change work hours so that all people can have suitable roles in a high-productivity society. Unfortunately, the U.S. government is unable to do that. Then, we will see a HUGE problem in employment. Some may earn zillions of dollars and others may earn nothing.

    Maybe the government will be able to do the obvious thing like that. This has to be done right now, when AI is replacing people.

  • Gene Quinn
    June 28, 2018 10:41 am

    Night-

    Apparently because it is an abstract idea without any particular inventive concept.

    So have we really gotten to the place where the creation of a replacement for man is not inventive? WOW.

  • Night Writer
    June 28, 2018 10:12 am

    Part of this discussion should be how AI can be ineligible under 101 when it is going to replace tens of millions of jobs. How can a machine that replaces a person not be eligible for patentability?