aninjury2all@awful.systems to TechTakes@awful.systems · English · 2 months ago

  • cross-posted to:
  • podcasts@hexbear.net
Citations Needed: Episode 217: A.I. Mysticism as Responsibility-Evasion PR Tactic
citationsneeded.libsyn.com
“Israel built an ‘AI factory’ for war. It unleashed it in Gaza,” laments the Washington Post. “Hospitals Are Reporting More Insurance Denials. Is AI Driving Them?,” reports Newsweek. “AI Raising the Rent? San Francisco Could Be the First City to Ban the Practice,” announces San Francisco’s KQED.

Within the last few years, and particularly the last few months, we’ve heard this refrain: AI is the reason for an abuse committed by a corporation, military, or other powerful entity. All of a sudden, the argument goes, the adoption of “faulty” or “overly simplified” AI caused a breakdown of normal operations: spikes in health insurance claims denials, skyrocketing consumer prices, the deaths of tens of thousands of civilians. If not for AI, it follows, these industries and militaries would, in all likelihood, implement fairer policies and better killing protocols.

We’ll admit: the narrative seems compelling at first glance. There are major dangers in incorporating AI into corporate and military procedures. But in these cases, the AI isn’t the culprit; the people making the decisions are. UnitedHealthcare would deny claims regardless of the tools at its disposal. Landlords would raise rents with or without automated software. The IDF would kill civilians no matter what technology was, or wasn’t, available to do so. So why do we keep hearing that AI is the problem? What’s the point of this frame, and why is it becoming such a common way to dodge responsibility?

On today’s episode, we’ll dissect the genre of “investigative” reporting on the dangers of AI, examining how it serves as a limited hangout, offering controlled criticism while ultimately shifting responsibility toward faceless technologies and away from powerful people.

Later on the show, we’ll be speaking with Steven Renderos, Executive Director of MediaJustice, a national racial justice organization that advances the media and technology rights of people of color. He is the creator and co-host, with the great Brandi Collins-Dexter, of Bring Receipts, a politics and pop culture podcast, and is executive producer of Revolutionary Spirits, a four-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero.

TechTakes@awful.systems

Big brain tech dude got yet another clueless take over at HackerNews etc? Here’s the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
