Did the late Pope Francis really stride around in a puffy down jacket? Did an elephant really wrap a crocodile in its jaws? More seriously, is the caller on the phone, whose voice sounds 99.8 percent like your daughter's, really in such grave danger that only a quick bank transfer could rescue her?
As long as there has been human society, there have been scammers and scams. The too-good-to-be-true deal. The sob story that pulls at the heartstrings at the same time as it opens your wallet. The grifter who wheedles his way into your life, family or work, exploiting every psychological weakness he can find.
And the internet has made it much worse, with the age of AI threatening to make every single one of us, no matter how smart and savvy, into a potential victim.
Probably everyone reading this got at least a dozen emails in the 1990s from a Nigerian “prince” asking for $1,000 to get him to the U.S. and then he’d pay you $100,000. Or saying that there was $1,000,000 in a bank account with your name on it if only you paid a service charge to get the money transferred.
Some people fell for that, crude as it was, and lost money—sometimes many thousands.
Today, with AI, that fraud would no longer be crude. The “prince” could have a video conference with you from a room looking exactly like a lavishly appointed palace. Official-looking documents could be produced at the touch of a button.
Remember the line often attributed to Mark Twain: “It is easier to fool people than to convince them that they have been fooled.” Most people only wake up when their bank accounts have been raided and the “prince” has long faded into safety.
Even more sinister is the recent spate of “sextortion” scams, in which fraudsters convince teenagers to send explicit photos of themselves and then extort large sums of money from them under threat of public release. This sad scam has driven several teenage Americans to suicide and caused countless others severe mental distress.
This is now tragically even easier to do. A 65-year-old man could use AI on a live video chat to appear as a young, attractive woman, preying on the lonely, the desperate and the inexperienced.
Removing AI deepfakes from the internet is a game of whack-a-mole that can be very expensive and time-consuming.
As a lawyer, I dealt with a situation where as soon as I removed images, they popped up elsewhere and we had to go to great lengths to stop it, including sending someone to an internet server farm in Romania to surveil the activities there. In another case, a man was harassing my pro bono female client for turning him down for a date, and every time we got one of his websites taken down, another one popped up.
The Take It Down Act, a rare bipartisan bill signed by President Donald Trump, enacted stricter penalties for the distribution of non-consensual intimate imagery, sometimes called “revenge porn,” as well as deepfakes created by AI. It is a step in the right direction.
This will hopefully be followed by the No Fakes Act, a bipartisan bill that seeks to create federal protections for artists’ voices, likenesses and images against unauthorized AI-generated deepfakes, protections that could be extended to all individuals victimized by them.
Groups supporting the bill argue that Americans across the board, whether teenagers or high-profile music artists, are at risk of having their likenesses misused. The legislation, reintroduced in the Senate last month, would combat deepfakes by holding individuals or companies liable for producing an unauthorized digital replica of a person in a performance.
But having laws in one country is not enough. The internet is global, and images hop between countries in a nanosecond.
Denmark is leading the charge against deepfakes by granting individuals ownership of their facial features, voices and physical likenesses. Culture Minister Jakob Engel-Schmidt said: “No one can copy your voice or face without consent. This is not just about deepfakes—it’s about reclaiming control over our digital identities.”
This is the minimum that we need in this AI age.
The deepfake crisis demands urgent coordinated action, not just in places sensitive to the issues already, like Denmark or the United States, but also in places where the vast server farms are hosted and bots by the tens of thousands can be released at the touch of a button.
We need clear legal frameworks to protect our digital identities. We need standardized takedown protocols, so lawyers protecting the innocent can quickly and efficiently go on the attack. We need global cooperation at the governmental level, ensuring that innocent citizens like you and me are served by technology, and not exploited or harmed by it.
Bryan Sullivan is an attorney who has represented high-profile clients in entertainment, intellectual property and corporate investments.