It seems to me that a lot of people are blasting opinions without taking the time to evaluate anything. – Stop with the emotional reactions!
Some reasons why I personally think AI is not an SEO-killer:
AI/LLM generations are expensive!
I think real-time generation for every search makes absolutely no sense due to cost. Even if costs go down, they are unlikely to come anywhere near the cheapness of serving a traditional SERP.
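A back-of-envelope calculation shows why the gap matters even if it narrows. Both per-query costs below are hypothetical placeholders I made up for illustration, not real figures from Google or any LLM provider:

```python
# Back-of-envelope comparison: per-query LLM generation vs. serving a SERP.
# Both unit costs are HYPOTHETICAL placeholders, chosen only to show scale.

LLM_COST_PER_QUERY = 0.002     # assumed: fractions of a cent per generation
SERP_COST_PER_QUERY = 0.00001  # assumed: serving a result from a prebuilt index

QUERIES = 1_000_000

llm_total = LLM_COST_PER_QUERY * QUERIES
serp_total = SERP_COST_PER_QUERY * QUERIES

print(f"LLM:  ${llm_total:,.2f} per million queries")
print(f"SERP: ${serp_total:,.2f} per million queries")
print(f"Ratio: {llm_total / serp_total:.0f}x")
```

Even if the assumed ratio is off by an order of magnitude, generating every answer live stays far more expensive than serving cached index results.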
Questions are a minority of searches
If you search “wool socks”, you likely want to buy socks, not learn what wool socks are. Now, if we are talking about NLP for the sake of filtering, then that’s something else.
People do like to browse around
If you are really interested in a topic, you are likely to be interested in various opinions etc. You are likely interested in more than just short answers.
It makes no sense that Google etc would start producing massive articles in real time.
Generations are NOT instant. They also seem to be slower the more complex the question is.
Google can estimate “quality” by checking rather simple things like bounce rate etc (I know there are tons of metrics but I am trying to make a simple point).
How would it test its AI responses? Mass-produce them? (This goes back to cost.) And how would it determine which generation is best?
Will the average dumbness of the masses dictate the quality of the responses – meaning, we will all get 4th-grade-level replies? Really?
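The bounce-rate heuristic mentioned above is trivially simple, which is the point: today's quality signals are cheap arithmetic over logs. A minimal sketch, with made-up session data:

```python
# Minimal sketch of the bounce-rate heuristic: the share of sessions
# that viewed only a single page. The session log below is invented.

def bounce_rate(sessions):
    """sessions: list of page-view counts, one entry per session."""
    if not sessions:
        return 0.0
    bounces = sum(1 for pages in sessions if pages == 1)
    return bounces / len(sessions)

# Hypothetical log: 3 of 5 visitors left after a single page.
print(bounce_rate([1, 4, 1, 2, 1]))  # 0.6
```

There is no comparably cheap, well-understood signal for ranking one AI generation against another.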
Today the SERP can serve links to various opinions; Google does not have to make any judgements. But with AI responses, it would effectively be doing exactly that.
Biases and opinions are part of the totality of "truth".
If you want to cache AI responses, you will have freshness issues. Compare to an article linked to from SERP that can be refreshed by the publisher – it can be kept fresh and relevant even if Google’s cache/indexing is old.
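The freshness problem can be sketched as a plain TTL cache, which is presumably the simplest way such answers would be stored (the class and its behavior here are my own illustration, not any search engine's actual design):

```python
# Minimal sketch of the freshness problem with cached AI answers:
# a TTL cache keeps serving the old generation until it expires,
# while a publisher-hosted article can be updated the moment facts change.

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # query -> (answer, timestamp)

    def put(self, query, answer):
        self.store[query] = (answer, time.time())

    def get(self, query):
        entry = self.store.get(query)
        if entry is None:
            return None
        answer, ts = entry
        if time.time() - ts > self.ttl:  # expired: force a fresh generation
            del self.store[query]
            return None
        return answer  # still "fresh" by TTL, even if reality has changed
```

Everything cached before a real-world change keeps being served until its TTL runs out; a linked article is corrected the instant the publisher edits it, regardless of how old Google's copy of the link is.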
Does Google or Bing really want to get in trouble for wrong data? – Even if it omits YMYL topics, it will still be prone to misclassification.
Systematic corruption of AI replies
Since AI makes no actual judgement, it will be open to misinformation attacks, where vast amounts of manufactured misinformation skew the AI's replies.
Allowing AI to serve answers is asking for manipulations.