On the surface, it sounds like a great get-out-of-jail-free card: “Oh, I’m so sorry, the AI said this, and I just went with what it said.” Not so fast!
While it would be nice to have a default scapegoat like that, it didn’t work when you blamed Rover for eating your homework, and it won’t work now. Let’s discuss why AI makes mistakes, how these mistakes can trip you up, and how to avoid these pitfalls.
First Off… Why Does AI Make Mistakes At All?
It all boils down to how AI works.
In the case of Large Language Models (LLMs), it's basically because the AI is more akin to autocomplete than to an encyclopedia. An LLM functions as a probability engine that builds sentences statistically.
The LLM is fed trillions of pieces of text—books, articles, code, and entire websites—which are broken down and standardized into tokens. From that point forward, everything it produces is simply a chain of tokens arranged by probability.
There’s no inquiry into the truthfulness of the resulting statement; it’s just that a sentence that starts with “My favorite food is” is more likely to end with “pizza” than something like “liver and onions” or “mahogany” or “brave.” A hallucination—the term for a mistake made by AI—is simply the result of the math pointing in the factually incorrect direction.
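The "favorite food" example above can be sketched as a toy next-token lookup. This is not how a real LLM is implemented (actual models use neural networks over huge vocabularies); the frequency table below is entirely made up for illustration, showing only that the most statistically common continuation wins regardless of truth:

```python
from collections import Counter

# Hypothetical next-token frequencies for the context "My favorite food is",
# as if counted from some imaginary training text. A real LLM learns these
# probabilities with a neural network, but the principle is the same.
next_token_counts = Counter({
    "pizza": 120,
    "sushi": 45,
    "liver and onions": 3,
    "mahogany": 1,  # even nonsense continuations get some nonzero count
})

total = sum(next_token_counts.values())

# Turn counts into probabilities. The model "prefers" whatever is most
# common in its training data, with no notion of whether it is true.
probabilities = {tok: n / total for tok, n in next_token_counts.items()}

most_likely = max(probabilities, key=probabilities.get)
print(most_likely)  # "pizza" wins because it is frequent, not because it is true
```

The point of the sketch: nothing in this process checks facts. Swap in skewed or mistaken training data and the "most likely" answer is confidently wrong, which is exactly what a hallucination is.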
This is why AI errors can get you and your business in hot water if you aren’t careful. All the AI is doing is solving a math problem… you’re still the one in charge.
The Three Major Ways You Could Be Liable for AI’s Actions
Defamation
Let’s say Brand A and Brand B both operate in the same industry, competing with one another to produce and sell similar widgets. If Brand A has an AI tool come up with marketing materials for their widgets, and that AI includes language that falsely claims that Brand B uses some illegal process or substance to produce their products, Brand A could be in some very hot water if those marketing materials were to be shared.
The AI doesn’t know it is being libelous; it’s just crunching the numbers. It is your responsibility to confirm the accuracy of your AI’s outputs and edit them to avoid these kinds of statements from being shared.
Contractual Language
AI commonly appears in chatbots for preliminary customer service interactions, reducing friction between a troubled client and the business by giving the human representative more information to work with. Even here, however, you need to keep the AI on a short leash. In its programmed eagerness to please, the AI may start making up return policies, price points, and other details.
The thing is, some jurisdictions will now hold you to whatever the AI promises as a binding agreement. It is acting as your company's representative, after all.
Copyright Violations
As we said, LLMs operate by examining existing content and predicting the best-match word that comes next. Unfortunately, that prediction can closely reproduce what the original creator of a book, article, or other creative work wrote, since the model learned its patterns from that very work.
This ultimately leaves you inadvertently plagiarizing via AI and effectively stealing copyright-protected materials.
We’ll Help You Avoid These Issues
By working with National Technologies Group, you can enjoy the benefits of modern technology, including AI, without the data privacy and security risks, all while knowing that your business IT in general is protected and optimized. AI isn't inherently bad; it just needs to be used mindfully and appropriately. Find out how to get started by giving us a call at +61295186000.