
Oversight of A.I.: Rules for Artificial Intelligence (Part 3)

DH Reacts
(Music by Karl Casey @ White Bat Audio)

TL;DL

On Tuesday, May 16th, 2023, the U.S. Senate held a Subcommittee Hearing on AI. Among the witnesses were Sam Altman (CEO - OpenAI), Christina Montgomery (Chief Privacy & Trust Officer - IBM), and Gary Marcus (Professor Emeritus - New York University).

This episode is a “live listen and react” featuring commentary from myself and A Priori. The overall hearing is nearly 3 hours long; this is the final segment, covering roughly the last 40-45 minutes of the hearing.

The hearing is worth watching/listening to even if you don’t care for our commentary - lots of good information to consider and discuss in your circles. You can watch the original recording via the link above, or in any of a number of YouTube videos like the one embedded below:

References

Not many references to cite in this final segment - by this point in the hearing we’d pretty much covered the gamut of situations and approaches.

I did make the assertion that the scientific community is fraught with its own issues when it comes to being captured, even in the realm of publishing research papers, which is supposed to be one of the more robust oversight processes in that business.

“But the beauty of science”, you might say, “is that eventually these things get caught and corrected, which is better than nothing”.

I agree - we support the fundamental principles of science - but that doesn’t stop this particular flavor of dis/misinformation from having wide-reaching effects long after it is called out as false.

The most recent, and widely damaging, example came in the aftermath of the pandemic. From science.org:

In June 2020, in the biggest research scandal of the pandemic so far, two of the most important medical journals each retracted a high-profile study of COVID-19 patients. Thousands of news articles, tweets, and scholarly commentaries highlighted the scandal, yet many researchers apparently failed to notice. In an examination of the most recent 200 academic articles published in 2020 that cite those papers, Science found that more than half—including many in leading journals—used the disgraced papers to support scientific findings and failed to note the retractions.

There are a handful of other scientific shenanigans we could point out re: the pandemic alone, but the point is: when the stakes are high, and a particular set of agencies operating in that domain are given a mandate to push a particular narrative, science is rarely allowed to operate in its purest form.

General Data Protection Regulation

I also asserted that perhaps Europe has a better grasp on policy - this comes from my understanding of their GDPR and Data Protection Acts:

I think many of the topics covered in the GDPR and DPA offer plenty of framework that could be applied to the AI space. The good news is that many global companies have already implemented aspects like the Right to be Forgotten even outside of Europe, so hopefully it will be a shoo-in for individuals concerned with data protection.
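To make that concrete, here is a minimal sketch of what a GDPR Article 17 (Right to be Forgotten) erasure flow might look like. Everything in it is hypothetical - the store names, the `erase_user` function, and the audit log are illustrative stand-ins, and a real implementation would also have to handle backups, third-party processors, and the Article 17(3) exceptions:

```python
# Hypothetical sketch of a GDPR Article 17 ("Right to be Forgotten") flow.
# All names here (ErasureRequest, erase_user, the stores) are illustrative.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ErasureRequest:
    user_id: str
    requested_at: datetime

def erase_user(req: ErasureRequest, user_store, analytics_store, audit_log) -> None:
    """Delete or anonymize one user's personal data across data stores."""
    # 1. Remove the primary profile record.
    user_store.delete(req.user_id)
    # 2. Anonymize rows kept for aggregate statistics rather than deleting them.
    analytics_store.anonymize(user_id=req.user_id)
    # 3. Log that erasure happened, keeping only an irreversible reference
    #    so the log itself holds no personal data.
    audit_log.append({
        "event": "erasure_completed",
        "user_ref": hashlib.sha256(req.user_id.encode()).hexdigest(),
        "completed_at": datetime.now(timezone.utc).isoformat(),
    })
```

The design point worth stealing from GDPR is the separation of duties: deletion, anonymization, and auditable proof of erasure are distinct obligations, and an AI-focused regulation could demand the same three for training data.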

Correction on ChatGPT Session Log Storage

At some point during these episodes, I incorrectly stated that ChatGPT sessions are stored locally. Double-checking this, it is not the case. OpenAI’s documentation is somewhat ambiguous:

Looking at both the Privacy Policy and the Terms of Use, nothing explicitly calls out where your sessions are stored, but I was able to essentially prove they are stored server-side by logging into ChatGPT on my personal phone (which I’d never done before) and finding that the UI still let me load my recent chats.

I’m glad I did check the FAQ, however, as I also found this nugget: conversations may be reviewed by AI trainers to improve their systems.

That detail flies under the radar of what Mr. Altman stated at several points in the hearing: “we don’t train on your data”. While he is correct that you can opt out and delete your chat sessions, it’s interesting he didn’t mention that humans have access to your data to “improve our systems”.

Why is this nuance important? Because it indirectly affects model behavior, especially if “AI Trainers” go searching for conversations on loaded topics like Elections, Jan 6th, COVID, etc., and then perform sentiment analysis on those conversations to decide how to fine-tune future releases.
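To illustrate the concern, here is a hypothetical sketch of the kind of pipeline I’m imagining - filter stored conversations for a loaded topic, score their sentiment, and hand the skewed sample to whoever assembles the next fine-tuning dataset. The keyword lists and the toy sentiment scorer are pure stand-ins; nothing here reflects OpenAI’s actual tooling:

```python
# Hypothetical sketch only: not based on any real OpenAI pipeline.
LOADED_TOPICS = ("election", "jan 6", "covid")

def toy_sentiment(text: str) -> float:
    """Toy stand-in for a real sentiment model: positive minus negative hits."""
    lowered = text.lower()
    positives = sum(word in lowered for word in ("good", "safe", "trust"))
    negatives = sum(word in lowered for word in ("bad", "fraud", "hoax"))
    return float(positives - negatives)

def flag_conversations(conversations: list[str]) -> list[tuple[str, float]]:
    """Return (conversation, sentiment) pairs that mention a loaded topic."""
    flagged = []
    for convo in conversations:
        if any(topic in convo.lower() for topic in LOADED_TOPICS):
            flagged.append((convo, toy_sentiment(convo)))
    return flagged

# A trainer could then oversample one side of the sentiment distribution
# when curating data for the next fine-tune - a human selection step that
# never touches the model weights directly, yet still shapes its behavior.
```

The point is not that this exact code exists anywhere; it’s that “we don’t train on your data” can be technically true while a human-in-the-loop curation step like this still lets your conversations steer the model.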

So perhaps the more pointed question for future hearings is “Do you use people’s conversations to influence how your AI Trainers fine-tune your model?” - it would be interesting to see what the claim would be.

Anyhow, there you go folks - nuke those chats if you don’t need ‘em anymore, and stay vigilant :)

Thank You

If you stuck with this series - thank you!!! We love discussing AI Ethics and we hope our conversation and commentary was helpful, or at the very least entertaining. Catch you in the next one!

-DH

