No, AI-content does not “suck”

I would consider myself among the heavier users of AI generation software. (Actually, I’m building my own AI software which offers me deeper insights, but maybe I am biased.)

There’s a lot of negative buzz about AI and AI-assisted content creation.

I am on both sides of this. I’m neither 100% for nor against AI content creation. It’s far more nuanced than black and white.

My perspective is that using AI allows you to comprehend a larger body of information. AI does not care about the user’s perspective or bias, so you can (and often do) get insights you had not considered before.

It seems to me that people judge AI content from their own perspectives and opinions. AI can’t read minds.

That information could of course be partially wrong, but that also reflects the fact that a lot of content made by humans is wrong too. I’d say that the vast majority of content online contains plenty of errors, yet we somehow turn a blind eye to that while demanding that AI be 100% accurate. – Even articles verified by [insert any doctor] are erroneous.

I have seen several cases where a client holds one of several ACCEPTED views but calls AI trash because it did not respond the way they wanted. That’s not inaccuracy; that’s stupidity and arrogance born of bias, often followed by the client feeling offended if you suggest that different opinions exist.

I’ve generated about 15 million words with software in the last few months. Many client tests have been intentional benchmarks designed to make the AI fail. Nobody does that with humans.

However, if you use AI in the ways it excels at, you get better content too. – If you buy a safe car, is your first instinct to crash it? Or do you accept that the safety features are useful but not perfect?

I understand the scepticism about AI content. AI does rely on an existing body of text. However, it absolutely CAN and DOES create “new” connections between topics – connections the user had not made before, even when they know the topics involved well. I have plenty of examples of that.

In the end, AI is a tool like any other. By itself it’s not wildly impressive, but in the right hands it is. And even in the right hands there is a learning curve.

I think there’s too much discussion about proving that AI sucks and too little about learning to utilize it for something good, such as compounding lots of data into something more compact and usable. – Maybe because it feels good knowing we are better than AI.

AI is not black or white. It’s exactly grey.

Personally, I focus only on reinforcing the positive parts of AI usage. No tool is perfect or foolproof, especially in the “wrong” hands. – The single biggest issue I see is that people expect AI to read their mind.

I think the best strategy is to see what it can do for YOU, not for someone else or for your industry. – Be careful about getting too “inspired” by others.

And most of all, do absolutely NOT judge AI by how others use it or their results. They are most likely NOT experts, but you might be without yet knowing it. (Personally, I believe I have started to take the first steps towards expertise. But maybe I am arrogant and biased.)

Finally, how do we define what is created by “us”? Our language and culture are a regurgitation of other people. Most of the facts we know come not from our own experience but from others. – I’ve made multiple software tools, but none of them are completely stand-alone; they all rely on tools made by others. Apple’s iOS is based on UNIX, which they did not invent.

We all copy from each other, and now we have software that can do it faster. For better or worse, more or less successfully… As with music, food recipes, fashion and … SEO 😉

Then we have AI used with deliberate, nefarious intent. In my opinion, that’s a different topic.

Do I think I am surely right and everyone else wrong? No!

Do I think there are other, equally valid views and opinions? Yes!

We are all still learning.