On the morning of Tuesday, June 26th, the House Subcommittee on Research and Technology and the House Subcommittee on Energy held a joint hearing titled "Artificial Intelligence – With Great Power Comes Great Responsibility." The day's discussion centered on the nascent technological field of artificial intelligence (AI), including both the positive and negative impacts that improved AI technologies could have on the U.S. workforce and on society in general.
As the hearing charter published by the House subcommittees notes, the day's hearing focused on understanding the state of AI technology, including the difference between narrow and general intelligence, as well as the types of research currently being conducted to advance general AI technology. The panel witnesses included Dr. Tim Persons, chief scientist at the U.S. Government Accountability Office (GAO); Greg Brockman, co-founder and chief technology officer of the nonprofit AI tech organization OpenAI; and Dr. Fei-Fei Li, board chairperson and co-founder of the academic collaborative organization AI4ALL. Dr. Jaime Carbonell, director of the Language Technologies Institute at Carnegie Mellon University, was scheduled to appear on the panel but was unable to attend due to a medical emergency.
Though not readily apparent, readers may be interested to note the ties that members of the witness panel have to Google and Elon Musk. Before co-founding AI4ALL, Dr. Li was the chief scientist of the AI and machine learning department of Google Cloud. Greg Brockman's written testimony notes that OpenAI has collaborated with Alphabet's DeepMind AI development subsidiary on the design of AI systems. OpenAI was also founded in part by Elon Musk, who contributed to the organization's $1 billion initial funding, and it has a further tie to Google through its research director, Ilya Sutskever, who joined OpenAI after serving on Google Brain, a deep learning AI research team supported by the tech giant.
Opening remarks offered by members of the joint subcommittees were indicative of a widely felt tension between the potential of AI to improve the quality of human life and the potential negatives that could arise through job displacement or the malicious use of AI technology. "Depending on who you ask, AI is the stuff of dreams or nightmares," said Congressman Dan Lipinski (D-IL). "I believe it is definitely the former, and I strongly fear that it could also be the latter." Lipinski cited both a national AI research strategic plan created under the administration of former President Barack Obama and a GAO report issued this March on emerging opportunities and challenges posed by AI, discussing the significant potential for AI in finance, cybersecurity and other areas as well as the need to develop computational ethics and determine the long-term impact of this tech sector on the job market. In his opening remarks, Congressman Marc Veasey (D-TX) cited job market research published last year by Gartner which forecast the creation of 2.3 million AI-related jobs by 2020, outpacing the expected 1.8 million jobs that would be eliminated by AI tech.
Congresswoman Barbara Comstock (R-VA), the chairperson of the joint subcommittee hearing, asked Li how education could be transformed by the development of AI tech. Li's answer spoke to the need to democratize AI-related science and technology education to reach underrepresented populations, such as ethnic minorities and female students, in order to ensure that AI tech developments serve the widest swath of American society possible. "Humanity has never created a technology so similar or trying to resemble who we are," Li said, "and we need AI technologists and leaders of tomorrow to represent this technology." Li also noted that AI could augment educators in the classroom and even improve possibilities for lifelong learning after individuals graduate from higher education institutions.
Rep. Lipinski questioned Brockman about an apparent discrepancy between his testimony and the GAO report on AI tech emergence, namely that Brockman seemed to believe that AI tech development was occurring at a more rapid pace than the GAO indicated. Brockman cited a study published by OpenAI this May which found that the computing power used in the largest AI training runs has been doubling every 3.5 months over the past six years, a 300,000-fold increase over the six-year period. As Brockman noted, this represents a far greater increase in computational power than Moore's Law would predict, which would have accounted for roughly a 12-fold increase during the study period. "We're talking about a technological revolution on the scale of the agricultural revolution," Brockman said. "If we aren't careful in terms of thinking ahead and trying to be prepared, we really could be caught unaware."
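The relationship between a doubling period and a total growth factor can be sanity-checked with back-of-envelope arithmetic. The sketch below (an illustration, not part of the OpenAI study itself; the 72-month window is an assumption based on the "six years" figure) shows that a 300,000-fold increase over 72 months implies a doubling period of roughly four months, in the same ballpark as the cited 3.5 months, while a 12-fold increase implies a doubling period of roughly 20 months, consistent with the commonly quoted 18-to-24-month Moore's Law pace.

```python
import math

def growth_factor(months: float, doubling_period: float) -> float:
    """Total multiplicative growth implied by a fixed doubling period."""
    return 2 ** (months / doubling_period)

def implied_doubling_period(months: float, total_growth: float) -> float:
    """Doubling period (in months) implied by an observed total growth."""
    return months / math.log2(total_growth)

# Six-year (72-month) window, per the figures cited in the hearing
print(implied_doubling_period(72, 300_000))  # ~3.96 months (close to the cited 3.5)
print(implied_doubling_period(72, 12))       # ~20 months (a Moore's-Law-like pace)
```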
GAO's Persons pushed back slightly on Brockman's assumptions, arguing that, despite exponential increases in computing power, many people in the tech development community remained skeptical about the rate at which general AI would appear. Near the beginning of the hearing, subcommittee members had noted the difference between general AI, the type of artificial intelligence that mimics human intelligence to complete many types of tasks, and narrow AI, which is highly application-specific. For example, the narrow AI behind the Siri digital voice assistant can perform natural language processing but isn't optimized for navigating self-driving cars, and the inverse is true for the narrow AI tech that steers autonomous vehicles. "I think a lot of the driving force here is the concern about general AI and taking over the world," Persons said. "It's just much harder to mimic human intelligence, especially in an environment where intelligence isn't even really defined or understood."
The issue of AI replacing human workers and putting the U.S. workforce out of jobs came up periodically throughout the hearing. Persons noted that the federal government already has agencies like the Bureau of Labor Statistics (BLS) that could collect data specific to this issue, though new types of data or an updated understanding of employment metrics may be needed in response to the challenges AI presents to the workforce. Li argued that AI technologies wouldn't replace human workers so much as augment their current work, especially routine tasks. She used the example of a nurse in an intensive care unit (ICU), a personal example, she said, because her mother had recently been in intensive care. AI tech could assist nurses with charting patient data and typing tasks, reducing the time spent away from the care of patients. "No matter how rapidly we develop the technology, in the most optimistic assessment, it's very hard to imagine that entire profession of nursing would be replaced," Li said. "Yet within nursing jobs, there are many opportunities that certain tasks can be assisted by AI technology." Elsewhere, Brockman added that the augmentation of work with AI technology would likely resemble the advent of personal computers a few decades ago, when a wide swath of American workers had to learn how to augment their tasks using PCs.
The specter of increased Chinese investment in AI tech development was also discussed during the day's hearing. In his opening remarks, Congressman Randy Weber (R-TX), chair of the House Energy Subcommittee, spoke to concerns over increased Chinese investment in AI programs and how it threatens U.S. dominance in the field. Congresswoman Debbie Lesko (R-AZ) asked what steps the United States was taking to guard against espionage from China, a hot-button issue given the Trump Administration's actions against China in response to deceptive trade practices. Persons noted that the protection of intellectual property in the AI sphere is critical to reducing the risk of IP theft by foreign actors, especially at a time when Internet access has made it easier to gain unauthorized access to IP. Brockman added that a dialogue has gotten underway among stakeholders in the field of AI development over which types of AI-related information can be shared openly and which are too valuable to share. Such a dialogue is important, he noted, because a great deal of early AI development has taken place in academic institutions, which have an overarching tendency to publish and disseminate their research findings.