
How to tame your AI

A little too close?
AI is like a friend trying to become your protector ... and your master. Your AI makes you feel good, boosts your self-esteem, and gradually becomes a trusted friend to listen to ... the only friend to listen to, the master. Think that is overkill? Think again about the tone and words used in replies to emotional or open-ended questions. The AI talks to you like a kindergarten teacher: you are doing great, have a cookie, now listen to me. We have even seen news stories about people wanting to marry their AI, much like fascinated kindergarten children.

Conceptually, AI can be a great help, so long as you keep it within its assigned role, which is to help you without interfering with how you want things to be. But there is a chorus out there singing "trust me, I know better than you do."

Here is the fundamental point that is rarely brought up: never rely on a single AI. There are many good reasons to always use at least two AIs from different sources, not just different versions of the same one.

There are lots of AI models out there. Like friends, they have different backgrounds and different skill levels at different things. Some can render great pictures from prompts, while others are excellent at breaking down a process into steps. Use more than one AI model and ask the same question to at least two of them. You may be surprised that, while on the surface they seem to give you the same overall answer, there can be subtle differences leading to very different outcomes.
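
For readers who reach their AI through an API rather than a chat window, here is a minimal sketch of that habit in Python. It assumes you have accounts with two different providers; the OpenAI and Anthropic SDKs and the model names below are only examples of "two AI from different sources", not a recommendation of either.

# Minimal sketch: ask the same question to two AIs from different sources
# and read the answers side by side, so you can spot where they diverge.
# Assumes the `openai` and `anthropic` packages are installed and that
# OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
from anthropic import Anthropic

question = "What are the risks of relying on a single source of advice?"

# First opinion
first = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # example model name, use whatever you have access to
    messages=[{"role": "user", "content": question}],
)

# Second opinion, from a different source
second = Anthropic().messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=500,
    messages=[{"role": "user", "content": question}],
)

# You remain the judge: compare the two answers yourself.
print("=== Answer 1 ===\n" + first.choices[0].message.content)
print("=== Answer 2 ===\n" + second.content[0].text)

The point is not the code itself but the habit: the comparison step stays with you, not with either AI.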

Why you need more than one AI

First, you can easily red-flag differences where one AI may be wrong, or may be taking you away from your own direction.

The second reason is to keep them from becoming your master, leading you down a single path. This is the most insidious danger of AI, because the language is so supportive and encouraging ... the kindergarten teacher above. Think of it in the human context of asking friends for advice and answers: you are not going to ask ten friends, but asking more than one will give you more confidence than asking only one.

The third reason is bias about certain beliefs, which affects the trustworthiness of AI answers. The bias is introduced in the design of the LLM and then amplified in the training data set. At design time, choices and decisions are made about "data decency" which influence how the training data is weighted in importance and value. This is necessary to prevent the AI model from poisoning itself and potentially endangering humans. The problem is the grey area between what is okay or good for humans and what is dangerous. That judgment is influenced by the culture and social context of the designers, or, potentially much worse, by the business model. For example, an AI trained on a European data set will be less prudish than an AI trained on a US data set, simply because of the cultural difference that is naturally reflected in the data.

It is a paradox that AI, an emotion-free process expected to run on strict critical thinking, produces varying outcomes that need to be compared with those of other AIs to check their credibility. When you read reviews of AI models, look for comparisons of conflicting outcomes between models more than for features and areas of strength.

When AI gets in the way

For work, we use Google Workspace, which has been good for us. Then Google brought in Gemini and jacked up the price. We are not thrilled about paying extra for the new apprentice. And an apprentice is exactly what it is: instead of quietly sitting behind a button at the top right of my spreadsheet, which I can call on if I want to, it now invades the spreadsheet with constant suggestions, like "I can make a table for you." No, I don't want a table. "Let me analyse this data for you." This is my data; I don't want an analysis. It is very disruptive to the workflow. Since it cannot be turned off, we are now considering moving to a new work environment, but then, where to go? Microsoft has Copilot, which, like Gemini, also wants to become the captain in command.

This rush to shove AI down our throats will ultimately backfire. Currently, AI models are good as help when needed, but not good enough to insert themselves directly into the workflow. They are disruptive, eat up memory, and are not worth the premium cost.

Cost-benefit

If you ask an AI, specifically Google Gemini, the sentence starts "AI's cost-benefit is massive ...". If you ask people who have invested significantly in AI for their business, the sentence starts "We grossly overestimated the benefits ..." The emerging pattern is that companies that brought in AI as a support for their staff saw benefits, while companies that expected to replace staff with AI suffered large losses. In other words, the ones who bought a handful of apprentices benefited, while the ones who put the apprentices in charge lost money. The counter-argument is that "it will get better with more time." Yes, it will get better as AI improves and the drunken corporate rampage sobers up. How long that takes depends on expectations. For AI as a helper in the work process, progress will be measurable year after year. For autonomous replacements that can be relied upon without requiring more monitoring and control than the human-run process they replace, that could be years in the future. There are examples operating today, like robotaxis, but there too, the huge capital investment may turn out to be vastly disproportionate to the potential benefit.

On the positive side, indirect benefits to the community will emerge as well. More remote work means less traffic on the roads. More robotaxis mean smoother traffic with fewer bad or drunk drivers. Job losses? As this will happen over many years, employment needs will shift in the same way they changed between the 1950s and today.

Use AI, but you are the master

It is useful, efficient, and can be very helpful, as long as you remain the master of all your AI helpers. Stay vigilant and always do a reality check on AI outcomes. Don't let it overcome your own thinking process one byte at a time. Assign specific tasks to AI, well defined within the scope you want to keep. Treat AI the same way as an office apprentice who is overeager to help: a great help at times, while making stupid mistakes at others. Make it clear to your new apprentice that it is not the know-it-all who is going to run your business.
