Seattle crosswalk signals with deepfake Bezos audio may have been hacked with just a cellphone

Recently, there’s been a string of protests against tech billionaires in several cities on the West Coast, including Seattle.
But they haven't come in the form of snappy signs or marches. Rather, crosswalks have been hacked to play satirical impressions of billionaires when pedestrians hit the buttons to cross.
Instead of the little robotic voice that tells you to wait until it's safe to cross, at least five intersections in Seattle last week played something like this: "Hi, I'm Jeff Bezos. This crosswalk is sponsored by Amazon Prime with an important message. Please don't tax the rich, otherwise all the other billionaires will move to Florida, too."
The voice sounds like Amazon founder Jeff Bezos, but tech experts say it was likely created with generative artificial intelligence.
In Seattle, the Bezos recordings ended with a song by comedian Bo Burnham.
According to the Seattle Department of Transportation (SDOT), the crosswalk signals were hacked to play these messages. Seattle wasn’t the only place this happened.
In Silicon Valley, some crosswalk signals reportedly played recordings that sounded like Meta's Mark Zuckerberg and Tesla's Elon Musk. Amazon, Meta, and Tesla did not respond to requests for comment.
Ava Pakzad, a University of Washington student, said the stunt gave her a laugh.
"It's really funny," Pakzad said. "And it's nice to see that some people are doing what they can."
Three of the tampered signals were in the University District. Another was close to Amazon’s headquarters in South Lake Union.
Many people who work for the company didn’t want to talk about the fake messages. But Maeceon Mace, who works at a restaurant close to Amazon HQ, said he’s not a fan of the crosswalk prank.
"[It's] really weird and out of the ordinary," Mace said, "because anything could be hacked now. If our cross signs could be hacked, anything could be hacked now."
It isn't clear who's responsible for the hacks.
SDOT hasn’t said exactly how this happened or who might have done it. In an email to KUOW, a spokesperson said they temporarily turned off communications at some crosswalks that appeared to be hacked wirelessly. SDOT is working with the crosswalk button vendor to strengthen security, the spokesperson added.
David Kohlbrenner, who co-directs the Security and Privacy Research Lab at the University of Washington, said this probably wasn't very difficult to do.
"They're not very secure. That's on purpose," Kohlbrenner said. "The intent of them is that they're quick to use. They're usable by people out in the field, and so they don't want them to have a lot of complexity with interacting with them."
He said each crosswalk signal can be accessed over Bluetooth with a phone app. That makes it easy for the city to quickly fix a signal or update the audio at a specific crosswalk.
All you need is the password to log in.
Kohlbrenner said the signals in this case may have still been using the manufacturer's default password, which is typically simple and easy to guess.
"So, if you don't change the initial password, then you would just be able to walk up, connect to the device, and then upload a sound file that you would like," he said.
In December, something similar happened in South Lake Union when a construction road sign was hacked to display an anti-CEO message.
"There's a lot of infrastructure that works like this that is not designed for somebody malicious to come after it," Kohlbrenner said. "We rely on people being kind of reasonable citizens to not do that."
As for the AI tech that likely made this stunt possible, UW researcher Cecilia Aragon said it’s also pretty simple.
"All they need to do is have recorded samples of this person's voice," said Aragon, whose research includes AI-generated audio. "Basically, anybody who's been recorded and is a semi-public figure is vulnerable to this type of fakery."
Aragon said it's called voice cloning. With just a few audio samples, AI can learn a person's speech patterns, accent, and inflection.
Then, you can get it to say whatever you want.
"So it's really kind of scary," she said.
There aren’t strong regulations yet on this kind of voice cloning, Aragon said. But in the short term, the lesson is this: Update your passwords.