Technology · June 24, 2023

This week in AI: Big tech bets billions on machine learning tools

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

If it wasn’t obvious already, the competitive landscape in AI — particularly the subfield known as generative AI — is red-hot. And it’s getting hotter. This week, Dropbox launched its first corporate venture fund, Dropbox Ventures, which the company said would focus on startups building AI-powered products that “shape the future of work.” Not to be outdone, AWS debuted a $100 million program to fund generative AI initiatives spearheaded by its partners and customers.

There’s a lot of money being thrown around in the AI space, to be sure. Salesforce Ventures, Salesforce’s VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. And Accenture and PwC have announced that they plan to invest $3 billion and $1 billion, respectively, in AI.

But one wonders whether money is the solution to the AI field’s outstanding challenges.

In an enlightening panel during a Bloomberg conference in San Francisco this week, Meredith Whittaker, the president of secure messaging app Signal, made the case that the tech underpinning some of today’s buzziest AI apps is becoming dangerously opaque. She gave an example of someone who walks into a bank and asks for a loan.

That person can be denied for the loan and have “no idea that there’s a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn’t creditworthy,” Whittaker said. “I’m never going to know [because] there’s no mechanism for me to know this.”

It’s not capital that’s the issue. Rather, it’s the current power hierarchy, Whittaker says.

“I’ve been at the table for like, 15 years, 20 years. I’ve been at the table. Being at the table with no power is nothing,” she continued.

Of course, achieving structural change is far tougher than scrounging around for cash — particularly when the structural change won’t necessarily favor the powers that be. And Whittaker warns what might happen if there isn’t enough pushback.

As progress in AI accelerates, the societal impacts also accelerate, and we’ll continue heading down a “hype-filled road toward AI,” she said, “where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives.”

That should give the industry pause. Whether it actually will is another matter. That’s probably something that we’ll hear discussed when she takes the stage at Disrupt in September.


Here are the other AI headlines of note from the past few days:

  • DeepMind’s AI controls robots: DeepMind says that it has developed an AI model, called RoboCat, that can perform a range of tasks across different models of robotic arms. That alone isn’t especially novel. But DeepMind claims that the model is the first to be able to solve and adapt to multiple tasks and do so using different, real-world robots.
  • Robots learn from YouTube: Speaking of robots, CMU Robotics Institute assistant professor Deepak Pathak this week showcased VRB (Vision-Robotics Bridge), an AI system designed to train robotic systems by watching a recording of a human. The robot watches for a few key pieces of information, including contact points and trajectory, and then attempts to execute the task.
  • Otter gets into the chatbot game: Automatic transcription service Otter announced a new AI-powered chatbot this week that’ll let participants ask questions during and after a meeting and help them collaborate with teammates.
  • EU calls for AI regulation: European regulators are at a crossroads over how AI will be regulated — and ultimately used commercially and noncommercially — in the region. This week, the EU’s largest consumer group, the European Consumer Organisation (BEUC), weighed in with its own position: Stop dragging your feet, and “launch urgent investigations into the risks of generative AI” now, it said.
  • Vimeo launches AI-powered features: This week, Vimeo announced a suite of AI-powered tools designed to help users create scripts, record footage using a built-in teleprompter and remove long pauses and unwanted disfluencies like “ahs” and “ums” from the recordings.
  • Capital for synthetic voices: ElevenLabs, the viral AI-powered platform for creating synthetic voices, has raised $19 million in a new funding round. ElevenLabs picked up steam rather quickly after its launch in late January. But the publicity hasn’t always been positive — particularly once bad actors began to exploit the platform for their own ends.
  • Turning audio into text: Gladia, a French AI startup, has launched a platform that leverages OpenAI’s Whisper transcription model, exposed via an API, to turn any audio into text in near real time. Gladia promises that it can transcribe an hour of audio for $0.61, with the transcription process taking roughly 60 seconds. (A rough sketch of Whisper-based transcription follows this list.)
  • Harness embraces generative AI: Harness, a startup creating a toolkit to help developers operate more efficiently, this week injected its platform with a little AI. Now, Harness can automatically resolve build and deployment failures, find and fix security vulnerabilities and make suggestions to bring cloud costs under control.
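Since Gladia’s own API isn’t documented here, this is a minimal, hedged sketch of the same underlying idea using the open-source Whisper package instead. The model name and audio filename are placeholders, and pricing and latency will obviously differ from Gladia’s hosted service.

```python
# Minimal sketch of Whisper-based transcription (the model Gladia builds on).
# This uses the open-source `openai-whisper` package, not Gladia's API.
# pip install openai-whisper
import whisper

model = whisper.load_model("base")        # "base" is a placeholder; larger models are more accurate
result = model.transcribe("meeting.mp3")  # hypothetical local audio file
print(result["text"])                     # full transcript as plain text
```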

Other machine learnings

This week was CVPR up in Vancouver, Canada, and I wish I could have gone because the talks and papers look super interesting. If you can only watch one, check out Yejin Choi’s keynote about the possibilities, impossibilities, and paradoxes of AI.

Image Credits: CVPR/YouTube

The UW professor and MacArthur Genius grant recipient first addressed a few unexpected limitations of today’s most capable models. In particular, GPT-4 is really bad at multiplication. It fails to find the product of two three-digit numbers correctly at a surprising rate, though with a little coaxing it can get it right 95% of the time. Why does it matter that a language model can’t do math, you ask? Because the entire AI market right now is predicated on the idea that language models generalize well to lots of interesting tasks, including stuff like doing your taxes or accounting. Choi’s point was that we should be looking for the limitations of AI and working inward, not vice versa, since that tells us more about what these models can actually do.
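To make the multiplication point concrete, here is a rough, hedged sketch of the kind of check Choi describes: sample random three-digit pairs, ask a model for the product and score it against exact arithmetic. The `query_model` function is a hypothetical stand-in for whatever chat API you would actually call; the accuracy figures above come from Choi’s talk, not from this snippet.

```python
# Illustrative sketch: measure how often a language model gets three-digit
# multiplication right. `query_model` is a hypothetical placeholder.
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call (e.g., GPT-4)."""
    raise NotImplementedError

def multiplication_accuracy(trials: int = 100) -> float:
    correct = 0
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        answer = query_model(f"What is {a} * {b}? Reply with only the number.")
        try:
            correct += int(answer.strip().replace(",", "")) == a * b
        except ValueError:
            pass  # non-numeric replies count as wrong
    return correct / trials
```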

The other parts of her talk were equally interesting and thought-provoking. You can watch the whole thing here.

Rod Brooks, introduced as a “slayer of hype,” gave an interesting history of some of the core concepts of machine learning — concepts that only seem new because most people applying them weren’t around when they were invented! Going back through the decades, he touches on McCulloch, Minsky, even Hebb — and shows how the ideas stayed relevant well beyond their time. It’s a helpful reminder that machine learning is a field standing on the shoulders of giants going back to the postwar era.

Many, many papers were submitted to and presented at CVPR, and it’s reductive to only look at the award winners, but this is a news roundup, not a comprehensive literature review. So here’s what the judges at the conference thought was the most interesting:

Image Credits: AI2

VISPROG, from researchers at AI2, is a sort of meta-model that performs complex visual manipulation tasks using a multi-purpose code toolbox. Say you have a picture of a grizzly bear on some grass (as pictured) — you can tell it to just “replace the bear with a polar bear on snow” and it starts working. It identifies the parts of the image, separates them visually, searches for or generates a suitable replacement, and stitches the whole thing back together intelligently, with no further prompting needed on the user’s part. The Blade Runner “enhance” interface is starting to look downright pedestrian. And that’s just one of its many capabilities.
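For a sense of how such a “program over tools” system hangs together, here is a hedged, purely illustrative sketch. None of the function names or signatures below come from VISPROG itself; they just mirror the segment/generate/composite flow described above.

```python
# Rough illustration of the visual-programming idea: a planner (in practice,
# a language model) turns the instruction into a short program of tool calls,
# which are then run in order. All names and signatures here are hypothetical.

def segment(image, target: str):
    """Return a mask for the named object (placeholder)."""
    ...

def generate(prompt: str, mask):
    """Synthesize a replacement patch for the masked region (placeholder)."""
    ...

def composite(image, mask, patch):
    """Paste the patch back into the image under the mask (placeholder)."""
    ...

def edit(image, instruction: str):
    # A real planner would derive these steps from the instruction itself;
    # here the "program" for the bear example is written out by hand.
    mask = segment(image, "bear")
    patch = generate("a polar bear on snow", mask)
    return composite(image, mask, patch)
```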

“Planning-oriented autonomous driving,” from a multi-institutional Chinese research group, attempts to unify the various pieces of the rather piecemeal approach we’ve taken to self-driving cars. Ordinarily there’s a stepwise process of “perception, prediction, and planning,” each of which might have a number of sub-tasks (like segmenting people, identifying obstacles, etc.). Their approach attempts to fold all of these into a single model, kind of like the multimodal models we’ve seen that can take text, audio or images as input and output. In doing so, it simplifies some of the complex interdependencies of a modern autonomous driving stack.
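As a rough illustration of that “one model, several heads” framing (and not the paper’s actual architecture), a shared backbone feeding separate perception, prediction and planning heads might look something like this, with all module names and sizes invented for the sketch:

```python
# Hedged sketch of a unified driving model: perception, prediction and
# planning share one backbone and are trained together, rather than being
# separate hand-wired stages. Dimensions and heads are placeholders.
import torch
import torch.nn as nn

class UnifiedDrivingModel(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Linear(1024, feat_dim)        # stand-in for a sensor encoder
        self.perception_head = nn.Linear(feat_dim, 64)   # e.g. detection/segmentation outputs
        self.prediction_head = nn.Linear(feat_dim, 32)   # e.g. future agent trajectories
        self.planning_head = nn.Linear(feat_dim, 6)      # e.g. ego trajectory waypoints

    def forward(self, sensors: torch.Tensor):
        feats = self.backbone(sensors)
        return {
            "perception": self.perception_head(feats),
            "prediction": self.prediction_head(feats),
            "plan": self.planning_head(feats),
        }
```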

DynIBaR shows a high-quality and robust method of interacting with video using “dynamic Neural Radiance Fields,” or NeRFs. A deep understanding of the objects in the video allows for things like stabilization, dolly movements and other effects you generally don’t expect to be possible once the video has already been recorded. Again… “enhance.” This is definitely the kind of thing that Apple hires you for, and then takes credit for at the next WWDC.

You may remember DreamBooth from a little earlier this year, when the project’s page went live. It’s the best system yet for, there’s no way around saying it, making deepfakes. Of course it’s valuable and powerful to do these kinds of image operations, not to mention fun, and researchers like those at Google are working to make it more seamless and realistic. Consequences… later, maybe.

The best student paper award goes to a method for comparing and matching meshes, or 3D point clouds — frankly it’s too technical for me to try to explain, but this is an important capability for real-world perception, and improvements are welcome. Check out the paper here for examples and more info.

Just two more nuggets: Intel showed off this interesting model, LDM3D, for generating 3D, 360-degree imagery like virtual environments. So when you’re in the metaverse and you say “put us in an overgrown ruin in the jungle,” it just creates a fresh one on demand.

And Meta released a voice synthesis tool called Voicebox that’s super good at extracting features of voices and replicating them, even when the input isn’t clean. Usually for voice replication you need a good amount and variety of clean voice recordings, but Voicebox does it better than many others with less data (think around two seconds’ worth). Fortunately they’re keeping this genie in the bottle for now. For those who think they might need their voice cloned, check out Acapela.

