Generators Of Disagreement With AI Alignment

George Hosu
11 min read · Sep 7, 2022

I often find myself disagreeing with most of what I read about AI alignment. The closest I get to accepting a Berkeley-rationalist or Bostrom-inspired take on AI is something like Nintil’s essay on the subject. But even that seems rather extreme to me, and I suspect most people who treat AI alignment as a job would view it as too unconcerned a take on the subject.

You can find my more recent thoughts at https://www.epistem.ink | I cross-post some articles to Medium.