For a term that moves capital markets more than the invention of electricity or the Internet did, AI exists on a broad, ambiguous spectrum of performance, theory, and application. It is a product that only very recently left the realm of theory and speculative fiction, with the debut of products like GPT-4 and Claude in 2023 and 2024.
Let’s pick a date in the near future. Say, 2054, which I had Siri pick. In 2054, the only thing I can safely predict about the abilities of synthetic intelligence is this: the probability of its existence will be infinitely more real than it is today, when it is not real at all.
We are creating a hyperreal AI Image that blurs the lines between reality and simulation, echoing Baudrillard’s concept of the hyperreal. This abstract entity holds immense power over our perceptions and actions, much like a deity in our collective consciousness. As we continue to develop and interact with this AI Image, it will inevitably influence the trajectory of our society, reflecting our deepest desires and fears in a digital realm.
These are prototypes, the prelude to something grand: ubiquitous, perfect knowledge of a reality that we increasingly dispose of, or at least transform, in order to manifest our euphoric dream.
For all the buzzy editorializing about “AI hallucinations,” the coverage itself contributes to a general effort on the part of mankind to maintain or, generously, explore a collective hallucination of our own: the belief that a superior, omniscient mentor is being created to change civilization forever (how, exactly, depends on who you’re talking to). History offers ample precedent for what happens when a collective hallucination runs its course:
- The Spanish Inquisition, initiated by the Catholic Church in the 15th century, led to the torture and execution of thousands of individuals accused of heresy.
- The Salem Witch Trials in colonial America, influenced by Puritan beliefs, resulted in the wrongful execution of 20 individuals accused of witchcraft.
- The Jonestown Massacre in 1978, led by cult leader Jim Jones who claimed to be a prophet, resulted in the mass suicide of over 900 followers in Guyana.
- The Heaven’s Gate cult, led by Marshall Applewhite, believed in UFOs and committed mass suicide in 1997 in order to reach an alien spacecraft they believed was following the Hale-Bopp comet.
Yet this euphoric reverie stands in stark contrast to the practical ramifications of AI advancement. The infrastructure essential to AI, such as data centers and chip production, is rapidly expanding. This growth doesn’t just showcase human ingenuity; it also carries substantial environmental, economic, and social repercussions. The resources needed to train AI models, for instance, have driven a surge in data center construction, and these vast repositories of digital data consume enormous amounts of energy, contributing to climate change at a pace that is unsustainable in the long run.
Moreover, the production of chips and other crucial AI components has strained international relationships, sparking geopolitical tensions. While advancing AI capabilities, these industries exacerbate wealth inequality, consolidating power and resources among the already privileged. The affluent, who stand to gain the most from AI progress, evade job displacement risks and can invest in these technologies, further solidifying their status and influence.
This concentration of power and resources has tangible effects on the wider society. The data required for training AI models—our personal data—becomes a commodity traded among the wealthy, leaving the general public exposed. This dynamic not only raises ethical concerns about privacy and consent but also hints at a future where technology access—and consequently, power—is dictated by economic and social standing.
The diversion and misallocation of attention and resources toward realizing an AI dream come with a high price. While we brace for potential scarcities of essentials like food and shelter, we overlook an impending bandwidth and access crisis. In our haste to advance AI, we hurtle toward a future where vital resources must be rationed, not just physical goods, but also information and technology access.
It’s vital, then, to scrutinize our current path. The allure of AI, however captivating, mustn’t blind us to the immediate challenges and repercussions of its pursuit. History warns us that unchecked optimism in technological progress can yield disastrous results. As we near the potential realization of the AI dream, we must prepare for the chance that this dream could transform into a nightmare. The true test of our era might not be how we wield AI’s power, but how we confront the ethical, environmental, and social consequences of its rise. Our AI dream, molded in our likeness, could usher in profound changes, but at what price? The urgent need for a critical reevaluation of our priorities has never been more pressing, as we contemplate whether the utopia we seek with AI might inadvertently lead to a different kind of turmoil—a self-inflicted one.
- SGD – Stochastic Gradient Descent: A popular optimization method for training AI models, especially in neural networks.
- Adam – Adaptive Moment Estimation: Another optimization algorithm that is commonly used for training deep learning models.
- RMSprop – Root Mean Square Propagation: An optimizer designed to adapt the learning rate for each parameter.
- BPTT – Backpropagation Through Time: A method used specifically for training Recurrent Neural Networks (RNNs).
- TD – Temporal Difference: A class of model-free reinforcement learning methods.
- DQN – Deep Q-Network: An algorithm that combines Q-learning with deep neural networks to let AI agents solve complex problems.
- PPO – Proximal Policy Optimization: A policy gradient method for reinforcement learning.
- A3C – Asynchronous Advantage Actor-Critic: A deep reinforcement learning algorithm that runs multiple actor-learners in parallel.
- GAN Training – Generative Adversarial Network Training: A method in which a generator and a discriminator are trained against each other to produce a generative model.
- Transfer Learning – Not an acronym, but a crucial method in training AI where a model developed for a task is reused as the starting point for a model on a second task.
- Fine-Tuning – Similar to transfer learning, it involves adjusting a pre-trained model to better fit a specific, often more limited, dataset.
- FL – Federated Learning: A machine learning technique that trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging them.
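The first two entries above, SGD and Adam, reduce to short update rules. As a minimal sketch (the function names and the `state` dictionary layout here are illustrative choices of mine, not the API of any particular library):

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """One vanilla SGD update: theta <- theta - lr * gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

def adam_step(params, grads, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update using bias-corrected first and second moment estimates.

    `state` holds the step counter `t` and per-parameter moment lists `m`, `v`,
    all initialized to zero before the first call.
    """
    state["t"] += 1
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        # Exponential moving averages of the gradient and its square.
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g
        # Bias correction compensates for the zero initialization.
        m_hat = state["m"][i] / (1 - b1 ** state["t"])
        v_hat = state["v"][i] / (1 - b2 ** state["t"])
        new_params.append(p - lr * m_hat / (np.sqrt(v_hat) + eps))
    return new_params
```

Adam’s per-parameter scaling by the second-moment estimate is what lets it use one global learning rate across parameters with very different gradient magnitudes, which is why it is a common default for deep learning; plain SGD remains the conceptual baseline the other optimizers in this list refine.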