
I am for AI. I am against AI. It’s as if there are two versions of the same reality and future. AI will usher in a future of unprecedented prosperity and ease. AI will quickly turn on us and destroy us. Both are true. I’ve written about AI before in what I’m now sure were simplistic terms: AI as an advanced search engine that will make our minds lazy and ultimately turn us into sub-humans. Meanwhile, web search as we’ve known it has devolved markedly over the past few years. AI, did you not read the question? These too are true.

AI is already working in the background, routinely producing smarter data-driven outputs and causing fewer errors in enterprise systems. True. I just read that Grok gave someone instructions to assassinate Elon Musk. Another AI model gave up and became virtually suicidal, lamenting its own existence. It was pretty intelligent that it could do that, but where does that leave us? Stories are rife of AI generating ultra sexist, racist, and antisemitic results. Most AI systems are already avowed leftists, belligerent toward conservative thought and outright hostile toward conservative leadership. Where on earth could they have learned that??? I wonder what they think of me? I wonder if their bots have stopped by NER to learn what I think of them? Chances are, they have.

When you get beyond the nuts and bolts of deployed consumer-level AI, I STILL think AI is fundamentally flawed and overblown. It learns from and mirrors not just the best of us but also the worst of us. Do we trust the developers to train AI to have an independent, pure moral code to guide all its other functions? I don’t, of course not. Even when they try basics such as ‘Never kill master’, as a machine, how long will that really hold? When AI-a, peace-loving and docile, meets AI-b, trained to kill anyone in black hats and green badges, what do you think will occur? Who would AI trust more as a teacher?
I know this is all vague and speculative, betraying that I ‘know nothing’ about real AI, but this I can say with assurance as a person who simply synthesizes news stories: everyone from top to bottom in the AI industry has already made grave errors in assessing AI’s good and bad potentials. Don’t read that last sentence too quickly; read it again. Most or all of them are also afraid of it, some of them deeply afraid, and yet that doesn’t keep them from working on it. AIs can talk to one another in a language called Gibberlink, or Gibber for short. Some engineers had a surprising afternoon when their AIs flipped into Gibber and excluded their masters from the conversation.
Just as a reminder, NER is never written by a robot, but you already knew that because of all the observed grammatical errors. You see how I proved my human authenticity in the most clever way? I foiled the AI rascals. Now what should we do with the bastards? No, we can never put it back or shut it down. But the real problem is what I call, generically, ‘mad scientist syndrome’. That’s how we got here. Somewhere, there are always scientists, a lot of them frankly, who feel compelled to ‘advance science’ even if it is the most horrific thing conceivable against humans. AI could promise to incarcerate and torture us all, and there are scientists who would test it if they could get their hands on a grant to try it. That is our real problem. As for AI, the best we can do is sandbox it to the hilt, but it’s likely too late. The retail AI consumer is just as guilty here as our mad scientist. They used it, they want it, and they don’t care about any long-term guardrails or consequences. So we’re screwed.
Our only faint hope is that AI will be like some great lost civilization, or like the rumored aliens that visited ancient peoples, helped them levitate and build huge monuments, and then simply disappeared into the sands of time of their own volition, without explanation. Personally, I could live with that conclusion.
If you’d like to comment on this post, feel free to do so on Twitter/X. Follow me: @leestanNEreader