Digital Heresy
Digital Heresy Podcast
Oversight of A.I.: Rules for Artificial Intelligence (Part 2)

DH and AP Reacts


On Tuesday, May 16th, 2023, the U.S. Senate held a Subcommittee Hearing on AI. Among the witnesses were Sam Altman (CEO, OpenAI), Christina Montgomery (Chief Privacy & Trust Officer, IBM), and Gary Marcus (Professor Emeritus, New York University).

This episode is a “live listen and react” featuring commentary from me and A Priori. The full hearing is nearly three hours long; this is part two, covering the second hour.

The hearing is worth watching/listening to even if you don’t care for our commentary - lots of good information to consider and discuss in your circles. You can watch the original recording in the link above, or in any of a number of YouTube videos, like the one embedded below:


Tech Impact on Creative Ownership

In part two we cited several examples illustrating how, for a century now, generations of content creators have worried that advancing technology would disrupt the creative ecosystem of their day.

I couldn’t find the old news op-eds from early print, but the Smithsonian has a great write-up on the history of the phonograph and the concern it caused:

"Others worried it would kill off amateur musicianship. If we could listen to the greatest artists with the flick of a switch, why would anyone bother to learn an instrument themselves? “Once the talking machine is in a home, the child won’t practice,” complained the bandleader John Philip Sousa. But others wryly pointed out that this could be a blessing—they’d be spared “the agonies of Susie’s and Jane’s parlor concerts,” as a journalist joked. In reality, neither critic was right. During the first two decades of the phonograph—from 1890 to 1910—the number of music teachers and performers per capita in the U.S. rose by 25 percent, as Katz found. The phonograph inspired more and more people to pick up instruments." - How the Phonograph Changed Music Forever

We also mentioned a very recent example involving Ed Sheeran, who was sued over the song “Thinking Out Loud,” which bears similarities to the Marvin Gaye song “Let’s Get It On.” While Ed won the case, pay close attention to what he says starting at 1:10 in the following clip:

“What I was saying is like… yes, it’s a chord sequence that you hear on successful songs, but if you say that a song in 1973 ‘owns this’, then what about all the songs that came before? We found songs like… from like the 1700s that had similar melodic stuff”

See the problem? The moment you start telling AI it “can’t do Impressionistic paintings that feature melting clocks,” or it “can’t produce environmental black-and-white photos that look like Ansel Adams took the shot,” you create the slipperiest of slopes, with implications that will absolutely harm real-life artists more than protect them.

I cited another famous example, from 1990: as Vanilla Ice was going five-times platinum, the practice of music “sampling” common to hip-hop came under a lot of scrutiny. Check out his thoughts in the following interview:

Certainly a controversial figure at the time, but the “Ice Ice Baby” reference goes back to the song’s bassline, which was sampled from the Queen/David Bowie track “Under Pressure” and modified slightly to create the loop -

Agree or disagree, it raises the same point Ed’s case raises - do Queen and David Bowie forever own that particular sequence of notes, played at that pitch?

I prefer to flip the problem around and look at the other extreme-

Let’s say that you yank all art datasets for GPT and other generative AI, and it has to learn the fine arts for itself completely from the ground up. How long before you get Shakespeare? (Answer: Not long with modern computing)

Next, remember that AI can generate millions and billions of examples of content all day every day, tirelessly, ad infinitum.

Now let’s say that any and all music, literature, photos, paintings, etc. generated by the AI count as original (because it was self-taught), and it therefore has a 100% copyright claim to all of that work, so long as none of it happens to be an exact copy of existing work.

What do you suppose the odds are that anything created after that point, in any of those respective fields, will be considered rip-offs of one of the billions of pieces of work the AI owns? (Answer: Pretty high). Is that fair? Do you see the problem for humanity?

With the ability to generate an effectively infinite amount of content and dominate that virtual land-rush, the only people who stand to lose ARE the very artists you set out to protect - because the sheer volume of work that computers can produce easily outpaces what we can do.

And God forbid AI invents a new style of music or poetry that humanity hadn’t encountered before, and we really like it. What then? Are we beholden to letting only AI generate future releases in that style, because we have to swallow our own medicine and “not use AI-generated content to influence our own works”?


A.I. - but it’s Artificial Influence - and it’s backed by Real Humans automating Real Devices

The second topic I raised was astroturfing: the potential for artificially inflated influence and narratives brought about by ChatGPT. I argue that ship already sailed a decade ago, and GPT software doesn’t really introduce that risk as some “new threat.”

I talked about click farms, which have been around for a very, very long while, and how people get around bot detection by having entire farms of old, broken-screen devices doing nothing but clicking Like, watching ads, and pumping views for people, day in and day out:

That documentary is over 5 years old, and the phone labs you see depicted there are well established. So let’s say we’re already roughly 8-10 years into a semi-artificial internet, where 1.5MM followers shouldn’t actually tell you anything about how reliable a person is - not when they can buy 5,000 followers for $110 a pop, and PR and marketing campaigns often have budgets in the millions.

So what is the threat that GPT could pose in this space?

Well, these days it’s mostly obvious when a comment is artificial - like seeing the same canned pitch as the first comment on every single news article, regardless of the topic:

But I think you’ll start to see more sophisticated comments - ones that may even be prompted off the article itself - as a fresh coat of paint on the astroturfing tech already out there. Today, I can go to the OpenAI sandbox and create a persona prompt that says:

“Read the following text and generate a short, relevant comment about how the article relates to my online business and drives web traffic to my site at”

And from there I could paste in, or automate, article after article, and it would probably do a good job of producing a passable comment. That’s today, and I cannot for the life of me see a way to limit that behavior in ChatGPT.
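Just to illustrate how low the bar is, here’s a minimal Python sketch of that automation loop. Everything in it is a hypothetical stand-in - the persona text, the function names, and the stubbed-out `call_model` hook (a real operation would plug in an actual chat-completion API call and a scraper feeding it articles):

```python
# Hypothetical sketch: automating persona comments, article after article.
# `call_model` is a stand-in for a real chat-completion API request.

PERSONA = (
    "Read the following text and generate a short, relevant comment "
    "about how the article relates to my online business."
)

def make_prompt(article_text: str) -> str:
    """Combine the fixed persona instruction with one scraped article."""
    return f"{PERSONA}\n\nArticle:\n{article_text}"

def generate_comment(article_text: str, call_model=None) -> str:
    """Produce one 'organic-looking' comment for one article.

    When no model hook is supplied, return a placeholder so the
    sketch runs without any API key or network access.
    """
    prompt = make_prompt(article_text)
    if call_model is None:
        return f"[model would comment on {len(article_text)} chars here]"
    return call_model(prompt)

# Looping over scraped articles is all the "automation" required:
for article in ["Article one text...", "Article two text..."]:
    print(generate_comment(article))
```

The point isn’t that this exact script exists anywhere - it’s that the whole pipeline is a persona string, a loop, and an API call, which is why I don’t see a model-side fix.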

So instead, I prefer to shift the conversation to three areas:

  1. How do we find manufactured consent farms, and how do we shut them down once found?

  2. What do we do with people who are found to use MCFs to artificially boost their platforms?

  3. What tools and behaviors can we, as consumers, use to identify and expunge influence artificially created by these farms? (This pertains more to comments, reviews, tweets, etc.)

Each one of those items is an entire conversation for another day, but it’s certainly food for thought :)

If you made it this far, thank you for reading (and listening!!) - not only do we appreciate your real support, but DH will never stoop to buying likes or followers for our publication!


A podcast about Artificial Intelligence for the enthusiast floundering between excitement and existential dread. The podcast is an extension of the Digital Heresy Substack, where we cover topics in deeper philosophical detail.
Music by Karl Casey @ White Bat Audio