AI Ain’t Coming for Your Job, It’s Coming for Your Liberty

The good news is that AI is not yet coming for your job. A couple of years back, Aleksandra Przegalinska and I, with help from the insightful work of Sergei Nirenburg, argued in an article and book that Artificial Intelligence (AI) was not an imminent threat to most people’s jobs.

The consensus at a bleeding-edge AI symposium recently held at Central Michigan University remains that AI still isn’t a viable job candidate, a view that macroeconomist Scott Sumner reinforced by showing that Claude 3’s grasp of economics is B-level at best. It can spit back definitions, in other words, but it cannot apply economic concepts to novel situations. Some other carbon flesh bag who knows how to leverage AI effectively might take your job, but the computer itself still isn’t ready.

The bad news is that at least two AI-related threats to your liberty persist.

Although the US economy has spluttered since the dreadfully daft policies implemented four years ago, unemployment has remained mercifully low, with most of the pain being spread via lower real wages. When unemployment spikes at some point in the future, however, sundry statists and central planners will raise the specter of an AI-induced employment crisis to try to dupe Americans into accepting untried socialist nostrums like Universal Basic Income. Faux evidence of the “effectiveness” of so-called UBI experiments has been planted in the media for years to prepare people for such a push.

Another threat to liberty looms even larger. The large language models (LLMs) that most people associate with AI—the ChatGPTs, Geminis, Claudes, and such—have become instruments of the Radical Left due to the same heady stew of cash carrots and deep state sticks that drove universities, think tanks, legacy media, and search engine companies to abandon all semblance of positive science for Woke drivel.

The self-anointed leaders of Western Civilization are not burning their old books, as envisaged in Fahrenheit 451; they are simply leaving them out of the LLM training canon, thus essentially de-platforming them and denying their truths to all except an ever-shrinking cadre of book nerds.

Unlike the Metaverse, AI is not going away. LLMs are rapidly replacing search engines by providing “answers” to queries rather than just pointing users to websites. This renders them much more convenient for users but also incredibly powerful tools for shaping public perceptions. They cost really big bucks to program, train, and operate, but rake in billions despite their obvious shortcomings.

The knowledge bases of these LLMs are broad but shallow, and they still haven’t figured out that many users do not want “an” answer; they want “the” answer. I personally gave up on ChatGPT when it fed me a fake Alexander Hamilton quotation and then concocted a source when I challenged it on the quotation’s authenticity. (Typical liberal move to double down on a convenient falsehood like that.)

When a company recently offered me a nice load of bread to help train a next-gen AI in economic history, though, I couldn’t resist. But after a month, the company exiled me because I was teaching the AI not to be a Woke moron. (I wish I could expose the company, but I honestly do not know which it was, as it worked in cloak-and-dagger fashion through a maze of shell subcontractors.)

A ray of hope, though, recently poked through the gloom. SLMs, or small language models, have begun to appear. Instead of being broad and shallow, SLMs are narrow but their knowledge runs deep. Moreover, unlike LLMs, which draw on vast cloud resources, SLMs can run on a laptop with the aid of a neural processing unit (NPU), a specialized accelerator for neural-network inference that typically sits on the same chip as the CPU (central processing unit), GPU (graphics processing unit), memory, and a dedicated security processor. NPUs can run trillions of operations per second while generating far less heat than traditional CPUs and GPUs. They are pricier than conventional chips, but they can run an SLM at a fraction of the cost of any LLM.

The hope is that due to lower entry costs, competitive markets in AI can still prevail. Big corporations will continue to provide their anodyne products but little guys can still enter the AI market and offer alternatives, directly to users or through a common interface that routes user queries to appropriate SLMs that then compete for the user’s approbation and the associated fee.
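The common-interface idea above can be illustrated with a toy sketch. Everything here is hypothetical: the model names, the keyword-overlap scoring rule, and the stand-in answer functions are illustrations of the routing concept, not any real product or API.

```python
# Toy sketch of a query router that dispatches a user's question to
# whichever registered small language model (SLM) claims the most
# relevant specialty. All names and the scoring rule are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SLM:
    name: str
    keywords: set[str]            # topics this model specializes in
    answer: Callable[[str], str]  # stand-in for local NPU inference

    def relevance(self, query: str) -> int:
        # Crude relevance score: count of query words in this
        # model's keyword set. A real router would use embeddings.
        return len(set(query.lower().split()) & self.keywords)

def route(query: str, models: list[SLM]) -> str:
    """Send the query to the model with the highest relevance score."""
    best = max(models, key=lambda m: m.relevance(query))
    if best.relevance(query) == 0:
        return "No specialist model claims this query."
    return f"[{best.name}] {best.answer(query)}"

# Two hypothetical specialist models competing for queries.
econ = SLM("econ-history", {"tariff", "inflation", "banking", "taxes"},
           lambda q: "Answer drawn from an economic-history corpus.")
health = SLM("health", {"vaccine", "ivermectin", "dosage", "safety"},
             lambda q: "Answer drawn from a medical corpus.")

print(route("Were higher taxes the cause of inflation?", [econ, health]))
```

In a real marketplace the scoring would be done with semantic embeddings rather than keyword overlap, and the winning model would collect the user’s fee, but the competitive structure is the same: specialist models bid for the query, and the most relevant one answers.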

I firmly believe that most users will prefer well-reasoned, well-sourced, and positive responses over polemical, normative ones, especially in matters of health and well-being, economics and finance, and public policy. Users who ask whether ivermectin or some novel vaccine is “safe and effective” will pick the most comprehensive and reasonable response, not obvious claptrap, providing the incentive against “misinformation” that purportedly everyone wants.

Ditto users who wonder whether higher taxes and a more powerful government are really the answer to all of society’s ills. Right now, they will only hear from LLMs what that powerful government wants them to hear. But liberty lovers could begin training an SLM on the works of Bastiat, Hayek, Mises, Rothbard, Smith, and other classical liberals, so that their ideas shall not perish from this earth, lost in the great knowledge purge currently taking place at Alphasoft and such at the behest of the same players who brought new meaning to the term ‘March madness’ four Marches ago.

Free the People publishes opinion-based articles from contributing writers. The opinions and ideas expressed do not always reflect the opinions and ideas that Free the People endorses. We believe in free speech, and in providing a platform for open dialog. Feel free to leave a comment!

Robert E. Wright

Robert E. Wright is the (co)author of two dozen books, including Fearless: Wilma Soss and America's Forgotten Investor Movement. All views his own.
