This week, we have three related AI articles, all dealing with trust, or the lack thereof, in the results AI so confidently gives you. The problems range from poisoning of training data to confident hallucinations to faked interviews. It's a brave new world out there.
Article 1 - The poisoning of ChatGPT
Article 2 - Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented”
Supporting Articles:
Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers
Article 3 - GitHub - Ecoute
If you found this interesting or useful, please follow us on Twitter @serengetisec and subscribe and review on your favorite podcast app!