In Silicon Valley, some of the brightest minds believe a universal basic income (UBI) that guarantees people unrestricted cash payments will help them survive and thrive as advanced technologies eliminate careers as we know them, from white-collar and creative roles (lawyers, journalists, artists, software engineers) to labor jobs. The idea has gained enough traction that dozens of guaranteed income programs have been launched in U.S. cities since 2020.
Yet even Sam Altman, the CEO of OpenAI and one of the highest-profile proponents of UBI, doesn’t believe that it’s a complete solution. As he said during a sit-down earlier this year, “I think it is a little part of the solution. I think it’s great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources much more than we have and that will be important over time. But I don’t think that’s going to solve the problem. I don’t think that’s going to give people meaning, I don’t think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society.”
The obvious question, then, is what a plan for society might look like, and computer scientist Jaron Lanier, a pioneer in the field of virtual reality, writes in this week’s New Yorker that “data dignity” could be one solution, if not the answer.
Here’s the basic premise: Right now, we mostly give our data for free in exchange for free services. Lanier argues that it will become more important than ever that we stop doing this, that the “digital stuff” on which we rely — social networks in part but also increasingly AI models like OpenAI’s GPT-4 — instead “be connected with the humans” who give them so much to ingest in the first place.
The idea is for people to “get paid for what they create, even when it is filtered and recombined through big models.”
The concept isn’t brand new; Lanier first introduced the notion of data dignity in a 2018 Harvard Business Review piece titled “A Blueprint for a Better Digital Society.” As he wrote at the time with co-author and economist Glen Weyl, “[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation” and a “future in which people are increasingly treated as valueless and devoid of economic agency.”
But the “rhetoric” of universal basic income advocates “leaves room for only two outcomes,” they observed, and both are extreme. “Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income.” Both paths, the two wrote, “hyper-concentrate power and undermine or ignore the value of data creators.”
Of course, assigning people the right amount of credit for their countless contributions to everything that exists in the world is not a minor challenge (even as one can imagine AI auditing startups promising to tackle the issue). Lanier acknowledges that even data-dignity researchers can’t agree on how to disentangle everything that AI models have absorbed or how detailed an accounting should be attempted.
But he thinks — perhaps optimistically — that it could be done gradually. “The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models—those who have added to a model’s simulated competence with grammar, for example. [It] might attend only to the small number of special contributors who emerge in a given situation.” Over time, however, “more people might be included, as intermediate rights organizations—unions, guilds, professional groups, and so on—start to play a role.”
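Lanier’s essay doesn’t specify mechanics, but a toy sketch helps make the idea concrete. The Python below is entirely hypothetical: it assumes the hard part (per-output attribution scores) is somehow solved and simply splits a royalty pool among the “special contributors” who clear a relevance threshold, while ambient contributors are ignored, as Lanier suggests early versions of such a system would do.

```python
# Toy illustration of Lanier's "special contributors" idea: a hypothetical
# ledger that splits a royalty pool among the few contributors whose data
# is judged most relevant to a given model output. The attribution scores
# are assumed as inputs; producing them is the hard, unsolved part.
from dataclasses import dataclass


@dataclass
class Attribution:
    contributor: str  # a person or a rights organization (union, guild, etc.)
    score: float      # relevance of their data to this output, in [0, 1]


def split_royalties(pool_dollars: float,
                    attributions: list[Attribution],
                    threshold: float = 0.1) -> dict[str, float]:
    """Pay only contributors above a relevance threshold, pro rata by score."""
    special = [a for a in attributions if a.score >= threshold]
    total = sum(a.score for a in special)
    if total == 0:
        return {}
    return {a.contributor: pool_dollars * a.score / total for a in special}


# Example: $100 earned by a model output traced mostly to two creators.
print(split_royalties(100.0, [
    Attribution("illustrator_guild", 0.6),
    Attribution("freelance_writer", 0.3),
    Attribution("ambient_web_text", 0.02),  # below threshold: unpaid
]))
```

The threshold stands in for Lanier’s gradualism: early systems would pay only a few standout contributors, and intermediate rights organizations could later bring more people into the pool.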
Of course, the more immediate challenge is the black-box nature of current AI tools, says Lanier, who believes that “systems must be made more transparent. We need to get better at saying what is going on inside them and why.”
While OpenAI had at least released some of its training data in previous years, it has since closed the kimono completely. Indeed, Greg Brockman told TechCrunch (toptechtrends.com/2023/03/15/interview-with-openais-greg-brockman-gpt-4-isnt-perfect-but-neither-are-you/) last month that the training data for GPT-4, its latest and most powerful large language model to date, came from a “variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but he declined to offer anything more specific.
As OpenAI stated upon GPT-4’s release, there is too much downside to revealing too much. “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
The same is true of essentially every large language model today. Google’s Bard chatbot, for example, is based on the LaMDA language model, which is trained on Infiniset, a dataset of internet content about which little is known, though a year ago Google’s research team wrote that it incorporated 2.97 billion documents and 1.12 billion dialogs containing 13.39 billion utterances.
OpenAI, whose technology in particular is spreading like wildfire, is already in the crosshairs of regulators over its aversion to greater transparency. Italy’s data protection authority has blocked the use of ChatGPT, and French, German, Irish, and Canadian data regulators are also investigating how it collects and uses data.
But as Margaret Mitchell, an AI researcher and chief ethics scientist at the startup Hugging Face who was formerly Google’s AI ethics co-lead, told MIT Technology Review, it might be nearly impossible at this point for OpenAI to identify individuals’ data and remove it from its models.
As explained by the outlet: “The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos. These methods, and the sheer size of the data set, mean tech companies tend to have a very limited understanding of what has gone into training their models.”
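Mitchell’s point about record-keeping lends itself to a small sketch. Nothing below reflects any real company’s pipeline; the cleaning step and all names are stand-ins. The idea is simply that if each document carries its source through the dedup-and-filter pass, later questions such as “whose data is in here, and can it be removed?” remain answerable.

```python
# Hypothetical sketch of provenance-preserving dataset construction. In the
# indiscriminate-scraping pipeline Mitchell describes, source information is
# typically discarded during cleaning; keeping one source record per document
# from the start is cheap by comparison.
import hashlib


def clean_text(text: str) -> str:
    """Stand-in for the filtering/typo-fixing pass (here: trivial)."""
    return " ".join(text.split())


def build_dataset(documents: list[tuple[str, str]]) -> list[dict]:
    """documents: (url, raw_text) pairs. Dedup by content hash, but keep the
    source URL alongside each record so removal requests stay answerable."""
    seen: set[str] = set()
    dataset = []
    for url, raw in documents:
        text = clean_text(raw)
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:  # duplicate content: drop it
            continue
        seen.add(digest)
        dataset.append({"text": text, "source": url, "sha256": digest})
    return dataset


records = build_dataset([
    ("https://example.com/a", "the  quick brown fox"),
    ("https://example.org/b", "the quick brown fox"),  # duplicate, dropped
    ("https://example.com/c", "an unrelated document"),
])
print([r["source"] for r in records])  # provenance survives the cleaning
```

Retrofitting that kind of record onto a model already trained on unattributed text is the “giant headache” Mitchell describes.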
That’s an obvious challenge to the proposal of Lanier, who calls Altman a “colleague and friend” in his New Yorker piece.
Whether that opacity renders the idea impossible, only time will tell.
Certainly, there is merit in wanting to give people ownership over their work; whether or not OpenAI and others had the right to scrape the entire internet to feed their algorithms is already at the heart of numerous, wide-ranging copyright infringement lawsuits against them.
So-called data dignity could also go a long way toward preserving humans’ sanity over time, Lanier suggests.
Whereas universal basic income “amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence,” ending the “black box nature of our current AI models” would make it easier to account for people’s contributions, and so more likely that they will continue making them.
Importantly, Lanier adds, it could also help to “establish a new creative class instead of a new dependent class.”
As AI eliminates jobs, a way to keep people afloat financially (that’s not UBI) by Connie Loizos originally published on TechCrunch