What Type of World Do We Want To Live In?
Perry C. Douglas | August 16, 2024
I recently read an article in The New Yorker titled In the Age of A.I., What Makes People Unique? — More than ever, we're challenged to define what's valuable about being human, by Joshua Rothman. The gist of the article was that we should see "A.I." for what it is: an advancement in automation technologies for decision-making, learning, and problem-solving. However, as powerful and useful as AI may be, it still doesn't come close to real humanlike intelligence.
The human condition is authentic and holistically intelligent, relying on our human senses and experiences to relate to the real world. The author highlights how authentic human intelligence is unique and indispensable; we shouldn't underestimate it, diminish its value, or try to replace it. AI, more particularly generative AI (GenAI), just gives us the appearance of intelligence, but it's a constructed illusion, crafted by those whose interests are best served by us buying into the hyped narratives. Particularly the ones put forth by the Tech-Bros in California, who are telling us that AI can be humanlike intelligence.
The author also leans on the book A.I. Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference by two computer scientists, Arvind Narayanan and Sayash Kapoor. The book provides a critical analysis of AI capabilities and separates the hype from the true and measured advances in AI technology. It examines the societal impacts of artificial intelligence, with a focus on reproducibility, transparency, and accountability in AI systems.
Narayanan and Kapoor urge us to be skeptical about the blanket term "A.I.", which is dishonest and being used to keep us in awe; it's really a "smoke screen for underperforming technologies." Additionally, they explain how "A.I. [promoters]" have massively overshot with the "Everything AI" hype and have separated from reality and basic commonsense.
Therefore, the authors ask that we take a minute and use our a priori knowledge and commonsense to imagine an alternate universe run by machines that don't know the meaning of anything they do. What would that look like? Most likely something highly unrealistic and undesirable; unnatural and counterproductive to human existence. A life where you give up your humanity and surrender your mind to machines is not a life worth living.
From an applied intelligence (ai) perspective, it is a foregone conclusion that AI is not intelligent, so we don’t waste any precious time chasing rainbows.
We understand human existence, and we understand that authentic intelligence requires the consciousness and contextualization of "knowing"; AI has neither. So it can't ever know the physical and metaphysical world. AI may have syntax, but it does not know meaning. Meaning requires consciousness and context: experience-based empirical knowledge. This is what forms authentic human intelligence; don't be fooled by clever AI narratives.
Narayanan and Kapoor say they are deeply skeptical about today's AI; all the hype and false narratives needed to prop it up, supplied by the Silicon Valley Tech-Bros and their acolyte VCs, only reflect the sheer inadequacies of AI.
Model Collapse… AI Models Are Just Spewing Gibberish
Additional recent research is featured in the Globe and Mail article titled "AI models 'collapse' and spout gibberish over time, research finds." The main theme is how text data is "unsuitably" used to train LLMs, which power chatbots such as ChatGPT and other AI applications. The researchers conclude that the process itself is confusing and just seems to be going around in circles. Researcher Ilia Shumailov is a former fellow at the Vector Institute in Toronto and a junior research fellow at the University of Oxford. He says that things are just not adding up in AI, and it's getting harder and harder to see AI's usefulness and adoption in the enterprise under its current misguided direction.
The startling conclusion reached by the group of researchers is that:
Training AI models on AI-generated data renders them useless. Text models spout gibberish, and image models barf garbage. They dubbed the phenomenon "model collapse." Generative AI models need massive amounts of data to find patterns, build associations, and output coherent results.
Tell us how you really feel!
Abeba Birhane, a senior fellow in trustworthy AI at the Mozilla Foundation, wrote on X that model collapse is the “Achilles’ heel that’ll bring the gen AI industry down.” Ed Zitron, who pens a popular Substack often expounding on the shortcomings of generative AI, wrote, “It’s tough to express how deeply dangerous this is for AI.”
The researchers contend that "model collapse" is a real problem as more and more AI-generated content finds its way online. The more such data is added, the worse the problem seems to get. And the high costs of building LLMs push profitability expectations "out at least 15 years or so," said Microsoft's CFO on a recent quarterly earnings call.
LLMs are not passing quality control either — just piling garbage on top of garbage, and nothing seems to come out right, relative to the AI hype promotion game being played. The research also provides a slew of technical reasons for these models collapsing, but observable logic tells us it's because LLMs make way too many silly and incomprehensible mistakes. These errors get encoded and become compounded over time. It's like a virus that continues to infect the system no matter what new fixes you install. It's a monster that continues to grow.
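To make that compounding concrete, here is a minimal toy sketch in Python (my own illustration, not the researchers' actual experimental setup): a trivial "model" that merely estimates a Gaussian's mean and spread is retrained, generation after generation, on samples from its own previous fit, and the estimation error baked in at each pass never washes out.

```python
# Toy illustration of "model collapse": each generation trains only on
# the previous generation's synthetic output, so estimation errors
# compound instead of averaging away. (Hypothetical example; a Gaussian
# estimator stands in for an LLM.)
import numpy as np

rng = np.random.default_rng(42)

# Generation 0 trains on real, human-generated data: N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=200)

for gen in range(10):
    mu, sigma = data.mean(), data.std()  # "train": fit mean and spread
    print(f"generation {gen}: mean={mu:+.3f}, spread={sigma:.3f}")
    # The next generation never sees real data, only samples drawn from
    # the last fitted model, so each pass inherits the prior pass's error.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

Run it and the estimated spread drifts away from the true value of 1.0 as the generations pass; substitute an LLM for the Gaussian and web-scraped synthetic text for the samples, and you have the intuition behind model collapse.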
The fixes put forward just create more problems. Adding more and more data and building bigger models is nonsensical, more an act of desperation. Julia Kempe, professor of computer science at New York University, says that "with scaling laws, when we double the amount of data, error rates should go down," but instead they just seem to rise, adding more errors and complexity. And "if the data is generated by some other model and you want to scale that model up, it just won't work."
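Kempe's point about scaling laws can be stated compactly. In the scaling-laws literature, test loss falls as a power law in the amount of (human-generated) training data; the approximate form below, with an exponent taken from the Kaplan et al. (2020) estimates purely for illustration, shows why doubling the data should reliably shave error, and that is exactly the behaviour the researchers say breaks down once the data is model-generated.

```latex
% Approximate data-scaling law; the exponent is the Kaplan et al. (2020)
% estimate for language models, used here purely for illustration.
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
% Doubling D multiplies the loss by 2^{-\alpha_D} \approx 0.94,
% i.e., roughly a 6% drop in error; with synthetic D, this no longer holds.
```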
Technology is supposed to deliver efficiency gains, be reliable, and make things easier. But with all the errors and hallucinations in AI, reliability is elusive, and who needs the hassle of double-checking all the time? How is that gaining any efficiency? Again, technology is supposed to make life easier, not increasingly frustrating.
A great amount of the data on the internet is now AI-generated: synthetic and corrupted, which in turn corrupts future models. It's like building on quicksand — failing all around!
Is this the end of the road for generative AI? It doesn't have to be; applied intelligence (ai) believes that AI is here to stay. But its optimal use will not come from the squandering of the technology we see now. We are not trying to predict the future, nor are we here to crusade about the wrong trajectory of GenAI. Instead, we are here to offer a different way of thinking, and applied intelligence technology in support of that.
The problem for LLMs, when we come right down to it, is rules: mathematics, games, and computer coding have clearly defined sets of rules that must be followed for them to work. That's not the case with LLMs. There are no rules. An LLM is essentially an invented pure-mathematics construct with a not-so-elegant, impossible language that only the machine understands.
AI doesn't know the world, and in its current state it doesn't know how to make its way into the enterprise either, relative to the highly specific and tailored demands/tasks that companies want to use LLMs for: dealing with customer service inquiries, answering specific and nuanced questions, and making proper sense of data to be utilized in building effective growth strategies.
6ai is redefining how strategy is crafted in the age of AI. Empowering strategy for human progress, the six-step (6ai) process identifies the whitespaces of market opportunity so industry leaders can execute with confidence.
So applied intelligence applies scientific wisdom, a mix of mathematics and philosophy, of formal and informal deduction — a process of logic that creates a wholesome, human-centric solution: a six-step (6ai) decisioning process to build winning strategies to thrive in the 21st century.
And 6ai Technologies believes that humans are naturally at the centre of human existence, and it defies commonsense to try and replace them. “We will continue to need humans to be arbiters of our data,” says researcher Dr. Kempe, and “We should appreciate our data labellers and pay them well.”
Therefore, 6ai takes things to the next level, drawing on the most useful elements of AI along with human general intelligence to offer a simple software solution to lead growth. 6ai's software solution doesn't require data scientists, either, to tell us what we should think and know. Running 6ai will assist staff in finding out all they need to know themselves. The software empowers ordinary people to build effective strategies within the organization, without the need for any specialized training, skills, or technical expertise, or for expensive outside consultants, advisors, or specialists.
Challenging the Tech Bros
Silicon Valley Tech Bros must be challenged because they continue to dictate what technology gets built, how, and who gets to benefit from it. They are not representative of broader society and, therefore, don't function in the best interest of society, only the interests of the Silicon Valley few.
For example, Elon Musk and Sam Altman believe that because they've contributed technology to the world, and because they're rich (and filled with hubris), they are the rightful governors of technology in the universe.
Because they feel so much smarter than everybody else, they believe they should be able to do whatever they want, including being the arbiters of our lives. They are part of the Silicon Valley New Tech Aristocracy (NTA): a dangerous over-concentration of power in the hands of the very few, dominated by a non-diverse, white-male Tech-Bros culture.
AI is more about chasing power than anything else, and the NTA has a clear path to retrieving and controlling our data without obtaining our consent and without compensation. Not adhering to copyright laws, their objective is to have us in a cloud-feudalism system, paying ever-higher cloud rents.
Tech-Bros must not be allowed to run free with AI; they skew the capitalist system and stifle innovation and creativity. Commonsense tells us that if everyone is chasing the same AI development, then where will the future incumbent-challenging entrepreneurship, differentiation, and innovation come from? Where will the new technology discoveries that are needed to advance our world come from?
AI is not the be-all and end-all of technology, and it is short-sighted, dangerous, and just plain stupid to believe that AI can solve all of society's problems.
Furthermore, "AI-Tech" isn't the only sector in the world that needs investment. Investing in human capacity is a major investment that must continually be made. Without investment in human ingenuity, we won't get the new tech discoveries we can't even imagine now. So applied intelligence focuses on technology for insight and discovery, investing in human capacity and capabilities.
Another area of concern is the sheer lack of enforceable regulations and safety standards. Currently, there are more regulations and safety standards at your local sandwich shop than in AI.
A world where the few Tech Bros write all the (biased) algorithms that shape our world is not the right path, nor is it safe for humanity. A path where the NTA has all the power and gets to decide which "problems" are the most important to solve just leads to even greater inequality and societal conflict.
We need to produce technology that is accessible and democratized so anyone who wants to compete in the capitalist system can do so. We strive to level the playing field so winning requires good strategy, not privilege.
A world in which the NTA controls the media and content is problematic for democracy, i.e., Elon Musk buying Twitter/X for his own right-wing nut-job political agenda. I've never been one to sit there and take it from anyone and not fight back. Life is just too damn short to allow it to be dictated by others through mediums and technology ecosystems.
Recently, Sam Altman has been peddling a plan to raise 7 trillion dollars so he can have OpenAI "solve all physics" and, he seems to believe, all the problems in the world too. Never mind the massive environmental impact, measured in terms of electricity usage, emissions, and water usage; as Bloomberg recently put it: "AI is already wreaking havoc on global power systems."
We also must not be wilfully ignorant about those AI "experts," including Geoffrey Hinton, the so-called "godfather of A.I.," with his disingenuous, hubris-filled neural-networks theorizing, gently telling us "how it might already make sense to talk about A.I. having emotions or subjective points of view." These are the real snake oil salesmen you have to watch out for, because they come cloaked in a Ph.D!
Gary Marcus is a scientist, best-selling author, and serial entrepreneur (founder of Robust.AI and Geometric.AI, acquired by Uber). He is well known for his challenges to contemporary AI, having anticipated many of its current limitations decades ago through his research in human language development and cognitive neuroscience. Marcus recently wrote that GenAI is looking like a "bubble": billions upon billions have been spent to make GenAI more powerful, but it is "increasingly more likely that GenAI will end up being a dud!"
There are two fundamental approaches to AI
The first is exclusionary and impractical in its pursuit of making AI competitive with, surpassing, and replacing human intelligence. This is the AGI (artificial general intelligence) route: science fiction, but pursued nonetheless by the likes of OpenAI/ChatGPT and their acolytes.
The second approach, the road less travelled and the one pursued by applied intelligence/6ai Technologies, is inclusive, human-centric, and based firmly on reality and an understanding of AI's limitations. This approach does not pursue any alternative to human intelligence; instead, it takes an augmenting approach that utilizes GenAI properly to amplify human intelligence, acting purposefully and responsibly and seeking to build human capacity, capabilities, and ingenuity.
So what type of world do we want to live in?
Do we want to live in a fake-intelligence world, putting our energy each day into deciphering deepfakes? A world that values replacing humans instead of building them up? Or are we willing to think more critically and pursue the applied intelligence world, where the technology we build and use is there to serve our humanity best and not the other way around?
Why do we need consultants when we have ai? | Nike Case Study
There are limits to relying predominantly on data in running a business. The data-driven wave is catching many; it sounds easy enough to just be data-driven, but that is proving to be a serious, self-generated business problem.
Data rarely paints the whole picture. Data must be seen as the paint; it requires a human to take the brush and apply insight, vision, and creativity to paint the masterpiece.
Making critical business decisions based on the easiest path to gathering data and deciding quickly has, for Nike, turned out to be costly.
Nike's 25-billion-dollar blunder is a classic case of arrogance and ignorance. It is a case study showing that data alone is not enough to provide good, real-world, empirically based answers for building enterprise growth strategies.
“Nike invested billions into something less effective but easier to be measured vs something more effective but less easy to be measured.”
On the advice of consulting giant McKinsey & Co. (taking it was the first big mistake Nike made), Nike's new CEO John Donahoe decided to move the business to a more "data-driven" approach, reorganizing the company toward digital direct-to-consumer sales and pivoting away from the former distinct-categories model: the more "human-driven" model. The allure was that data-driven was "easier," with the ability to recognize patterns and make decisions quicker.
It is also important to note that many other big companies, like Boeing, also fell for the same McKinsey boilerplate consulting sales pitch.
Believing that a business could simply base most of its sales decisions on data alone is too simplistic and willfully ignorant. Coming up with new ideas is difficult enough and requires ground-level information gathering to validate the ideas. It is difficult to determine your customers' preferences and patterns through data alone.
Nevertheless, using data is fine and necessary, but the trick is to understand what the data is and isn't telling you. Data without the support of "colour" can lead to many wrong decisions.
As typical McKinsey consulting goes, it sells the same disingenuous, one-dimensional advice to everyone: slash costs, eliminate duplicate processes, streamline operations, and improve efficiency. The advice to move to a data-driven decisioning model was consistent with these typical cost-cutting measures and management techniques, long the hallmark of the consulting industry.
How did that consulting advice work out for Nike?
A $25B loss in market cap
and a 32% drop in its stock price.
Nike may have learned the hard way that data can tell you what happened in the past statistically, but finding relevant and useful insights in the data requires a more refined and sophisticated applied intelligence process.
Data doesn't know meaning, nor does it provide you with any empirically based knowledge to go on. It's just ignorant data. However, Nike built its business bottom-up and had a "Why" underpinning its brand. Understanding "Why" consumers would buy Nike products, and directing marketing and sales accordingly, is how Nike came to dominate the athletic sales space.
Connecting the personas of its pro athletes to young athletes was a big reason why Nike became Nike; e.g., the first Air Jordan shoe was responsible for the company taking off. Did the newly installed Nike CEO not see the movie "AIR"?
Pivoting to a primarily data-driven business decisioning model and relying on online digital consumer data to make critical strategic business decisions has proven counterintuitive, unproductive, and devoid of logic and commonsense. It's the opposite of how Nike grew. Data could never pick up how the many category athletes' sponsorship and custom shoe deals made potential customers feel. Data/AI can't pick up the authentic observations, feelings, or moods that are essential for building relevant and winning strategies.
Nike's decision to eliminate individual product categories was a big mistake. Nike built its business by having an ear to the playing field and being right in the mix of sports, relying on many well-coordinated human feedback loops whose findings it could confidently apply to its marketing and sales strategy.
So it's tempting to take the easy road and just reach for the data, but what may seem easy is often not good for business. A $25B market-cap loss in shareholder value is a hard lesson. Therefore, applied intelligence combines data analytics with human knowledge and experience for optimal results.
We must stop buying snake oil
The New Yorker article also poses an existential question through a barbershop story:
Recently, I got a haircut, and my barber and I started talking about A.I. “It’s incredible,” he told me. “I just used it to write a poem for my girl’s birthday. I told it what to say, but I couldn’t rhyme that well, so it did all the writing. When she read the poem, she cried! Then she showed it to her friend, who’s really smart, and I thought, Uh-oh, she’ll figure it out for sure.” Snip, snip, snip, snip. “She didn’t.”
Everyone in the barbershop laughed, a little darkly. Writing poems that make your girl cry — add that to the list of abilities that used to make (some) humans unique but no longer do.
I don't know about you, but I have no desire to live in a phony world, where we don't know what is real and what is fake. A world where GenAI is better known for fake porn, fake voices, fake people, the circumvention of democracy, and fake poems written by well-intentioned but ill-informed barbers.
Today's AI systems may be able to generate acceptable poetry and stories, but Shakespeare they will never be. They might be able to write acceptable music, but Mozart they can never be, nor Bob Marley or Tupac for that matter. GenAI might be able to generate "art," but it will never be able to produce the meaning and passion of "Starry Night" by Post-Impressionist painter Vincent van Gogh.
There is nothing more demeaning to the human spirit than surrendering one’s mind to be controlled by others; and even worse, by a machine.
Therefore, applied intelligence puts humanity first and believes in the expressive superpower of human intelligence. It respects people and builds technology for people. Not to replace them.
The applied intelligence six-step (6ai) system empowers strategy for human progress, offering an easy-to-use solution that guides users through an insight-generating process anchored in logic and empathy and underpinned by scientific wisdom.
6ai is a higher-dimensional level of relevant and useful information retrieval, from reliable sources that can be taken as empirically warrantable, setting users up to discover useful insights and create a winning strategy.
6ai is for those top performers who are entrepreneurial, who refuse to submit to mediocrity and the status quo, and who want to develop strategy with scientific wisdom and rigour. For those who want to create and innovate with real things and in the real world. For those who don't want to just participate but who want to help make our world. And for those among us wanting to live life to the fullest and on their own terms! 6ai is for leaders! Leaders who want to stand out and never settle in the relentless pursuit of their own dreams.