Flite reflects on AI hope vs. AI hype


Quick blog post today, before I get to the rest of my daily tasks. I re-subscribed to EFF (the Electronic Frontier Foundation) - I forget why I cancelled in the first place. Following Cory Doctorow’s blog Pluralistic and the Blood in the Machine blog lately has kept the general enshittification of the internet, late stage capitalism, etc. etc. at the front of my mind.

Anyways, EFF has a podcast series called “How to Fix the Internet,” which is nice. It’s not like every other series on the internet screaming about how broken everything is, or at the other end about how AI is going to fix everything in our lives (while the tech oligarchy dominates our national politics, legal frameworks, and policy making, sucks up all the water and electricity in rural municipalities and beyond (hey Jersey), and lays off workers en masse in favor of relatively automated, shittier versions of their services). It’s something more measured and constructive. Feels good, and like better medicine for my ears than a lot of other content these days.

I’ve only gotten halfway through the episode, but a few passages stood out to me. BTW - this episode interviews Arvind Narayanan, CS professor at Princeton, author of the “AI Snake Oil” newsletter and the upcoming book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference”.

The episode is titled “Separating AI Hope from AI Hype”.

Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way.

Can’t agree with this more.

Uh, one kind of example of uh, if a user adaptation to it is to always be in a critical mode where you know that out of 10 things that AI is telling you, one is probably going to be wrong.

Arvind is talking about using LLMs as a critical user here, which I definitely agree with in principle, but I had to share it because I think that out of 10 things AI tells you, more like 5, or even 8, will be wrong imo - which probably speaks to my level of cynicism (I still find LLMs useful and reach for them once or twice a day).

So there’s a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It’s addressing the question of the wrong level of abstraction.

I agree. Here Arvind is talking about education’s knee-jerk reaction of banning ChatGPT from classrooms entirely. Tbh, the better approach is probably to add supplemental material to the syllabus that teaches students to be critical thinkers and critical users of LLMs instead.

it’s just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future. It’s something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to somehow suspend common sense and somehow believe in the future as actually accurately predictable.

Arvind is speaking here about attempts (or thought exercises) to integrate AI into our justice system in order to predict who might commit a crime and prejudge them based on that prediction (woo boy - hello, Minority Report precogs).

but when the words AI or machine learning are used, some of these decision makers seem to somehow suspend common sense and somehow believe in the future as actually accurately predictable

Highlighting this again because I think this sentence very accurately summarizes our current hype culture around AI.

I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They’re not able to manually go through all of them. So they want to try to automate the process. But that’s not actually addressing what is broken about the system, and when they’re doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it’s only escalating the arms race, right? I think the reason this is broken is that we fundamentally don’t have good ways of knowing who’s going to be a good fit for which position, and so by pretending that we can predict it with AI, we’re just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well. Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way. So in our view, the only way to get away from this is to make necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I’m not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

I love this line:

AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions.

Never thought of it in those terms, but that’s a very interesting generalization.

we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI

Shouting this from the mountaintops.