Munk AI debate: confusions and possible cruxes — LessWrong
Highlights
Treating efforts to solve the problem as exogenous or not ⤴️
Ambiguously changing the subject to “timelines to x-risk-level AI”, or to “whether large language models (LLMs) will scale to x-risk-level AI” ⤴️
Vibes-based “meaningless arguments” ⤴️
Ambiguously changing the subject to policy ⤴️
Ambiguously changing the subject to Cause Prioritization ⤴️
Immediate AI problems are not an entirely different problem from possible future AI x-risk. Some people think they’re extremely related ⤴️
Yann thinks he knows, at least in broad outline, how to make a subservient human-level AI. And I think his proposed approach would not actually work, but would instead lead to human-level AIs that are pursuing their own interests with callous disregard for humanity. ⤴️
And all that is happening right now—who knows what the AI research community is going to be doing in 2040? ⤴️
High-functioning human sociopaths are an excellent example of how it is possible for there to be an intelligent agent who is good at the “is” aspects of common sense, and is aware of the “ought” aspects of common sense, but is not motivated by the “ought” aspects. ⤴️
Well anyway, forecasting the future is very hard (though not impossible). But to do it, we need to piece together whatever scraps of evidence and reason we have. We can’t restrict ourselves to one category of evidence, e.g. “neat mathematical models that have already been validated by empirical data”. I like those kinds of models as much as anyone! But sometimes we just don’t have one, and we need to do the best we can to figure things out anyway. ⤴️
Debates are a terrible way to arrive at the truth, since they put people into a soldier mindset. ⤴️
Bad actors use the trick of focusing on unimportant but controversial issues to keep everyone from noticing how they are being exploited routinely. ⤴️