Generators Of Disagreement With AI Alignment

George Hosu
11 min read · Sep 7, 2022

I often find myself disagreeing with most of what I read about AI alignment. The closest I get to accepting a Berkeley-rationalism or Bostrom-inspired take on AI is something like Nintil's essay on the subject. But even that seems rather extreme to me, and I suspect most people who treat AI alignment as their job would view it as too unconcerned a take.

This might boil down to a reasoning error on my end, but:

I know a lot of people who seem unconcerned about the subject. These include people working in ML whose understanding of the field is much better than mine, people whose ability to reason conceptually is much better than mine, and people at the intersection of those two groups. They include some of my favorite authors and researchers.

And I know a lot of people who seem scared to death about the subject. These include people working in ML whose understanding of the field is much better than mine, people whose ability to reason conceptually is much better than mine, and people at the intersection of those two groups. They include some of my favorite authors and researchers.

So I have come to think that there may be generators of disagreement around the subject that are more fundamental than simple engineering questions about efficiency and scaling. After…

Written by George Hosu

You can find my more recent thoughts at https://www.epistem.ink | I cross-post some of my articles to Medium.