As a teenager, I immersed myself in science fiction. While the visions of many films and novels haven’t come to pass, I’m still amazed by legendary writer Isaac Asimov’s ability to imagine a future of artificial intelligence and robotics. Now, amid all the hype around generative AI and other AI tools, it’s time for us to follow Asimov’s lead and write a new set of rules.
Of course, AI rules for the 21st century won’t be quite as simple as Asimov’s Three Laws of Robotics (popularized in I, Robot). But amid anxiety around the rise of AI tools and a misguided push for a moratorium on advanced AI research, industry can and should push for rules for responsible AI development. Certainly, the past century’s advances in technology have given us plenty of experience in weighing the benefits of technological progress against its potential pitfalls.
Technology itself is neutral. It’s how we use it – and the guardrails we set up around it – that dictate its impact. Harnessing fire allowed humans to stay warm and preserve food longer. But fire can still be destructive.
Think of how the recent wildfires in Canada threatened lives and property and damaged air quality across the U.S. The atomic bombs dropped on Japan in World War II killed hundreds of thousands, yet nuclear energy lights up much of France and powers U.S. aircraft carriers.
In the case of AI, new tools and platforms can solve big global problems and create valuable knowledge. At a recent meeting of Detroit-area Chief Information Officers, attendees shared how generative AI is already speeding up time-to-market and making their companies more competitive.
Generative AI will help us “listen” to different animal species. AI will improve our health by supporting drug discovery and disease diagnosis. Similar tools are providing everything from personalized care for elders to better security for our homes. AI will also boost our productivity: a new McKinsey study estimates that generative AI could add $4.4 trillion to the global economy annually.
With all this promise, can such an amazing technology also do harm? Some of the concerns around AI platforms are legitimate. We should be concerned about the risk of deepfakes, political manipulation, and fraud aimed at vulnerable populations, but we can also use AI to recognize, intercept, and block harmful cyber intrusions. Both the problems and their solutions may be difficult and complex, and we need to work on them now.
Some solutions may also be simple: schools are already experimenting with oral exams to test students’ knowledge. Addressing these issues head-on, rather than sticking our heads in the sand with a research pause that would be impossible to enforce and ripe for exploitation by bad actors, will position the United States as a leader on the world stage.
While the U.S. approach to AI has been mixed, other countries seem locked into a hyper-regulatory stampede. The EU is on the verge of passing a sweeping AI Act that would require companies to ask permission to innovate. In practice, that would mean only governments or huge companies with the finances and capacity to navigate the certification labyrinth covering privacy, IP, and a host of social protection requirements could develop new AI tools.
A recent study from Stanford University also found that the EU’s AI Act would bar all of today’s major large language models, including OpenAI’s GPT-4 and Google’s Bard. Canadian lawmakers are advancing an overly broad AI bill that could similarly stifle innovation. Most concerning, China is rapidly pursuing civil and military AI dominance through massive government support. Moreover, it holds a view of human rights and privacy protection that may help its AI efforts but is antithetical to our values. The U.S. must act to protect citizens and advance AI innovation, or we will be left behind.
What would that look like? To start, the U.S. needs a preemptive federal privacy bill. Today’s patchwork of state-by-state rules means that data is treated differently each time it ‘crosses’ an invisible border – causing confusion and compliance hurdles for small businesses. We need a national privacy law with clear guidelines and standards for how companies collect, use, and share data. Such a law would also create transparency for consumers and help companies foster trust as the digital economy grows.
We also need a set of principles around responsible AI use. While I prefer less regulation, managing emerging technologies like AI requires clear rules that set out how the technology can be developed and deployed. With new innovations in AI unveiled almost daily, legislators should focus on guardrails and outcomes rather than attempting to rein in specific technologies.
Rules should also be calibrated to risk, focusing on AI systems that could meaningfully harm Americans’ fundamental rights or access to critical services. As our government determines what ‘good policy’ looks like, industry will have a vital role to play. The Consumer Technology Association is working closely with industry and policymakers to develop unified principles for AI use.
We’re at a pivotal moment for the future of an amazing, complex and consequential technology. We can’t afford to let other countries take the lead.