The Real Ledger of AI: How Much 'Benefit' Have You Gained, and What 'Cost' Have You Paid?
Introduction: Beyond 'Job Anxiety', Reassessing the Value Balance of AI
Since the wave of generative AI swept the globe, a question has lingered like a ghost in the collective consciousness: "Will your job be replaced by AI?" [1]. This "job anxiety" triggered by technology has dominated most discussions about AI, leading us to form a peculiar contradictory mindset: quietly using AI to enhance efficiency at work while fearing the potential for mass unemployment [2, 1]. This pervasive sentiment of "using it while fearing it" exposes the narrowness of our existing cognitive framework.
This raises a sharp question that must be confronted: while media and experts are eager to discuss "whether AI will replace you," have we overlooked a more fundamental issue? Is the distribution mechanism of the enormous dividends brought by AI technological advancement fair? Research shows that AI can significantly enhance the efficiency of individuals and enterprises, such as increasing the work speed of professionals by 25% to 50% or helping companies reduce operating costs by 35% [3, 4]. But is the value created by these efficiency improvements benefiting the public through lower prices and better services, or is it merely translating into profit growth for a few companies? Is it bridging social gaps, or is it invisibly exacerbating a winner-takes-all Matthew effect?
At the same time, another cognitive gap is becoming increasingly evident. On one hand, the public's trust in AI is generally low; in the U.S., as many as 50% of adults feel "more worried" than excited about the increasing use of AI [5, 6]. On the other hand, technology experts and tech companies generally exhibit an optimistic attitude. Behind this significant cognitive disparity, is it the public's irrational fear of the unknown, or are experts and stakeholders deliberately avoiding or beautifying the real costs of AI? For example, the staggering energy and water consumption of the AI industry, the deep-rooted biases in algorithmic decision-making, and the potential erosion of personal privacy—these "costs" are often downplayed in the grand narratives about AI [7, 8].
Therefore, this article will temporarily set aside abstract debates about the distant future and delve into examining the "benefits" that AI brings us today and the "costs" we pay for it. Together, we will explore how the value balance of this technological revolution is tilted.
Chapter One: Cost Reduction and Efficiency Improvement for Enterprises: The Game Between Profit Growth and Consumer Welfare
The wave of artificial intelligence (AI) is reshaping the global business landscape with unprecedented depth and breadth. From robots accurately sorting in e-commerce warehouses to tireless intelligent robotic arms on production lines, and complex algorithms accelerating drug screening in pharmaceutical laboratories, AI is becoming the ultimate tool for enterprises pursuing the eternal goal of "cost reduction and efficiency improvement." By automating repetitive tasks, optimizing complex supply chain networks, and predicting market demand fluctuations, AI indeed brings considerable reductions in operating costs and efficiency improvements for enterprises [9, 10]. Theoretically, these saved costs, or "efficiency dividends," should flow like a trickle into the vast ocean of consumers through lower product prices and better service experiences.
However, as calm observers, we must cut through the optimistic narratives about technological utopia and examine the more complex reality beneath this trend: does the improvement in efficiency necessarily equate to an increase in consumer welfare?
A sharp question that must be confronted is: how much of the AI cost reduction and efficiency improvement claimed by enterprises is genuinely passed on to consumers through lower prices or improved quality, and how much is quietly transformed into shareholder profits and executive bonuses? Tracking the true flow of this "efficiency dividend" is like searching for the truth in a complex financial maze. Enterprises achieve leaps in productivity through AI technology, and the saved costs appear in their financial statements as higher gross margins. Next, the distribution path of this new profit presents a fork in the road: it can be used to lower product prices, be reinvested, or, of course, be distributed directly to shareholders.
The reality is often that the latter is far more tempting than the former. In modern corporate governance structures driven by maximizing shareholder value, directly converting efficiency improvements into profit growth is almost instinctual. We see many tech giants proudly showcasing the profit margin increases brought by their AI strategies in their financial reports, yet the prices of their flagship products show no significant signs of easing. What consumers enjoy may only be minor improvements in product iterations rather than real monetary discounts. To track the flow of this dividend requires a more transparent mechanism; otherwise, the so-called "cost reduction and efficiency improvement" may ultimately be just a feast for capital insiders, with consumers merely spectators attracted by the halo of technology.
Another direct manifestation of efficiency improvement is in the customer service sector. When AI customer service replaces 80% of human agents, we indeed gain unprecedented convenience: no long waits, and issues can be addressed within seconds. But does this "instant response" convenience come at the cost of sacrificing the ability to handle complex, personalized issues? Is this machine-driven "efficiency" making services increasingly "impersonal"?
The answer is almost certainly yes. Current AI customer service is essentially a rapid retrieval and matching system built on a vast knowledge base. For common questions with standard answers, it performs flawlessly. However, once a consumer's question goes beyond the preset script or involves complex emotional needs requiring empathy and flexibility, the limitations of AI become glaringly apparent. We often find ourselves trapped in a "looping dialogue" with the robot, repeating keywords but never reaching the core of the issue. Ironically, companies package this as "efficiency improvement" and use it as a reason to cut back on human agents. When consumers ultimately need human intervention, they find the path to reach human service has become exceptionally convoluted and lengthy. In this model, companies save on labor costs, but consumers pay the price with a dramatic increase in time and emotional costs. The "instant response" we gain is merely an illusion of efficiency for simple issues; when we truly need help, we face unprecedented inefficiency and alienation.
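The "looping dialogue" failure mode follows directly from how such systems work. Below is a minimal sketch of the retrieval-and-match pattern the text describes; the FAQ entries, keywords, and threshold are all invented for illustration and do not represent any vendor's actual system.

```python
# Minimal sketch of a keyword-matching FAQ bot (illustrative only).
# Each canned entry is scored by keyword overlap with the user's question;
# anything below the threshold falls back to a generic reply -- which is
# where the "looping dialogue" begins for off-script questions.

FAQ = {
    "How do I reset my password?": {"reset", "password"},
    "Where is my order?": {"order", "where", "track"},
    "How do I request a refund?": {"refund", "return"},
}

def answer(question: str, threshold: float = 0.5) -> str:
    words = set(question.lower().replace("?", "").split())
    best_entry, best_score = None, 0.0
    for canned_q, keywords in FAQ.items():
        # Score = fraction of the entry's keywords present in the question.
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_entry, best_score = canned_q, score
    if best_score >= threshold:
        return best_entry  # stands in for the canned answer
    return "Sorry, I didn't understand. Could you rephrase?"

print(answer("I want a refund for my broken item"))      # matches a script
print(answer("Your app deleted my data and I'm furious"))  # falls through
```

The first query happens to contain a scripted keyword and is resolved instantly; the second, an emotionally loaded complaint, matches nothing and the bot simply asks the user to rephrase, however many times they try.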
This alienation of service models reflects a dangerous trend: companies are using technology to push the standardization of services to the extreme, thereby depersonalizing consumers. The core of service should be "people," the ability to understand, empathize, and solve problems. When AI strips "human touch" from services, what it enhances may only be the operational efficiency metrics of the enterprise, not the true satisfaction of consumers. Is this "efficiency," achieved at the cost of sacrificing service depth and warmth, truly the progress we desire?
Chapter Two: Upgrading Public Services: The Promises and Realities of Smart Cities
When "smart cities" transition from a sci-fi concept to an annual government plan, they promise citizens an enticing vision: a more efficient, convenient, and livable future. In this blueprint, artificial intelligence (AI) is the core engine driving everything. It is expected to transform the complex urban body into a responsive, self-regulating organic entity.
The most direct manifestation of this revolution first occurs in urban transportation systems. Nowadays, hanging above intersections, in addition to cameras, is an invisible "city brain." It dynamically adjusts traffic light timing schemes by analyzing real-time traffic data. In Hangzhou, pilot areas can plan a full green light route for ambulances, reducing travel time by nearly half [11]. The transformation has also permeated government service hotlines. The traditional "12345" hotline, once backed by a large number of human agents and a complex work order flow system, now has AI voice robots handling front-end inquiries and sorting, while the "intelligent dispatch" system can automatically assign work orders to the corresponding units based on geographic location and responsibility lists, reducing dispatch time by 90% in practices in places like Kunshan [12]. In the broader field of urban management, AI is also enabling "embroidery-needle" precision, automatically identifying issues like unlicensed street vendors and exposed garbage through image recognition algorithms, replacing the past reliance on human foot patrols to "sweep the streets."
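The "intelligent dispatch" described above is, at its core, a lookup against a responsibility list. The sketch below is a hypothetical simplification; the districts, categories, and unit names are invented, but it shows both why routine work orders route instantly and why anything off-script falls through:

```python
# Illustrative sketch of rule-based work-order dispatch of the kind described
# for "12345" hotlines. A (district, issue category) pair is looked up in a
# responsibility list; all names here are hypothetical.

RESPONSIBILITY_LIST = {
    ("Downtown", "road damage"): "Downtown Municipal Maintenance",
    ("Downtown", "noise"): "Downtown Environmental Office",
    ("Riverside", "road damage"): "Riverside Municipal Maintenance",
}

def dispatch(district: str, category: str) -> str:
    # Standard cases route instantly; anything that does not fit a
    # predefined category needs a human -- exactly the gap the chapter
    # highlights for non-quantifiable demands.
    return RESPONSIBILITY_LIST.get((district, category), "manual triage queue")

print(dispatch("Downtown", "road damage"))
print(dispatch("Downtown", "lonely elderly resident wants a visit"))
```

A damaged manhole cover maps cleanly onto a key; a lonely elderly person's wish for a visit does not, and survives only if the fallback queue is actually staffed.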
Undoubtedly, AI is fulfilling its promises of "efficiency" and "convenience." However, as we immerse ourselves in the smooth experiences brought by this technology, as calm observers, we must cut through the fog of propaganda and examine the shadows obscured by the "intelligent" halo.
The first question that must be faced is: are these "city brains" built by a few tech giants forming new data monopolies? When the core data of a city's transportation, government affairs, security, and more continuously flows into the cloud platforms of one or a few commercial companies, a huge, invisible power center has quietly established itself. Where are the boundaries of citizens' data privacy? When our convenience in life must be exchanged for personal data, do we truly have a choice? How can the government, as the regulator of data and defender of citizens' rights, ensure that data sovereignty is not hijacked by commercial interests while embracing technological convenience? This is far more critical and urgent than the technology itself.
The second, more insidious question is: as government services increasingly rely on algorithmic decision-making, are those "marginal" demands that cannot be quantified or do not conform to standard processes more likely to be systematically overlooked? The advantage of algorithms lies in handling standardized, highly repetitive tasks. A work order for a "damaged manhole cover" can be perfectly identified and dispatched, but how can the complex emotional need of a lonely elderly person wishing for community workers to visit and talk to them be quantified and entered into the system? Behind "intelligent" dispatch, is there a shirking of responsibility? The pursuit of efficiency maximization by technology contrasts sharply with the essence of public service, which lies in caring for each individual, especially those in greatest need. If the price of "intelligence" is the erosion of "human touch" and institutional indifference to marginalized groups, what we are building is not a smarter city, but a colder one.
Chapter Three: Personal Empowerment: Efficiency Tools or "Cognitive Crutches"?
We are at an unprecedented crossroads. Artificial intelligence, once an unreachable technological concept, has now transformed into countless accessible applications, permeating every crevice of our work and life. It promises to empower us, packaging those once-perceived professional barriers—programming, design, professional writing, music creation—into simple interfaces and one-click generation buttons. This is undoubtedly a revolution in personal productivity, but as we cheer for the increase in efficiency, we should perhaps pause to examine the hidden costs behind this "gift."
The rise of AI as an efficiency tool is evident. For programmers, AI programming assistants are like tireless senior partners, capable of real-time code completion and bug fixing. For writers, from simple grammar corrections to complex report writing, AI is almost omnipotent. More disruptively, AIGC (Artificial Intelligence Generated Content) technology is rapidly leveling the playing field for creation. Skills in painting or music creation that once required years of training can now be achieved by inputting a few descriptive keywords, and within seconds, a visually stunning artwork or a beautiful melody appears. This indeed grants ordinary people unprecedented creative abilities, freeing the desire for expression from the constraints of skill scarcity.
However, as we immerse ourselves in the convenience and speed brought by this "empowerment," deeper issues quietly emerge. The first question is: when we enjoy the "convenience" recommended by AI, are we aware that we are paying a "cognitive tax" for the algorithm's "information cocoon," potentially sacrificing our ability for independent thinking and discovering unexpected surprises? [6] The core logic of AI tools is based on massive data for pattern recognition and probability prediction. What it provides is always the "most likely" option. When we become accustomed to choosing from the options given by AI, we are essentially substituting "recognition" for "thinking." In exchange for immediate efficiency and convenience, we relinquish part of the dominant power of cognitive function. Over time, we may gradually lose the patience and ability to solve problems independently, and we miss out on the opportunities to "make mistakes" and "take detours"—many great ideas are born from those unconventional explorations.
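The claim that such systems always serve the "most likely" option can be made concrete with a toy next-word predictor built from raw frequency counts. The corpus below is invented for illustration; real language models are vastly more sophisticated, but the selection principle is the same:

```python
# Sketch of why a statistical model serves the "most likely" option:
# a toy next-word predictor built from frequency counts over a tiny,
# invented corpus.

from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish the cat sat down".split()
# Count what follows the word "cat" in the corpus.
following = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "cat")

def most_likely_next(word_counts: Counter) -> str:
    return word_counts.most_common(1)[0][0]  # argmax over observed frequency

print(following)                    # Counter({'sat': 2, 'ate': 1})
print(most_likely_next(following))  # 'sat'
```

"sat" wins because it is most frequent; the rarer continuation "ate" is never suggested, even though it exists in the data. Scaled up, this is the mechanism behind the "cognitive tax": the surprising, unconventional option is systematically ranked below the statistically safe one.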
The second question follows closely: AIGC makes "everyone a creator" possible, but does this also give rise to a large number of homogenized, soulless "creative fast food"? When creativity can be generated with a single click, how will the value of original spirit be redefined? [1, 13] The proliferation of AIGC has led to an explosion of content, with social media flooded with AI artworks that are stylistically similar and compositionally alike. They may be technically flawless but often evoke a sense of emptiness. This is because AI's "creation" is essentially a mimicry, reorganization, and stitching together of existing data; it can perfectly replicate a popular style but cannot inject the unique life experiences, emotional struggles, and intellectual sediment of the creator. When the act of "creation" is simplified from lengthy contemplation and refinement to the skill of inputting prompts, the slogan of "everyone is a creator" reveals a challenge to the original spirit.
Therefore, we must reassess the definition of "originality." In the era of one-click generation, true original spirit may no longer be solely reflected in the final form of the work but more in the unique "intention" and "concept," as well as the "mastery" in human-machine collaboration. Future creators may resemble directors or curators, with their core ability lying in how to accurately guide, select, and edit AI-generated content, ultimately forming a complete work with a personal imprint. Ultimately, AI is both a powerful efficiency tool and potentially a "cognitive crutch" for our thinking. It is not the answer but a questioner. It asks us: in an age where intelligence is at our fingertips, what is the unique value of human cognition and creation?
Chapter Four: Environmental Bills: Who Pays for the AI Power Frenzy?
In our era, artificial intelligence (AI) is being elevated to a technological altar with a near-religious fervor. Tech giants spare no effort in showcasing how their models achieve "exponential" growth in "intelligence." However, amidst this frenzy over computational power and the boundaries of intelligence, a key question has been cleverly placed in the shadows outside the spotlight: who will pay the environmental bill for this feast?
As we marvel at the leaps in AI model capabilities, a less glamorous fact is that the energy and resource consumption behind them is also expanding at an "exponential" rate. Training a large language model requires clusters of thousands of high-performance GPUs undergoing weeks or even months of high-intensity computation. It is estimated that by 2025, the carbon emissions from global AI systems could be equivalent to those of New York City [14]. Every time we pose a question to a chatbot, thousands of servers in data centers spring into action, consuming astonishing amounts of electricity.
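The scale involved is easy to sanity-check with a back-of-envelope calculation. Every figure below is an illustrative placeholder (cluster size, per-GPU draw, duration, and PUE are assumptions, not measurements of any real model), but even these conservative inputs land in the thousands of megawatt-hours:

```python
# Back-of-envelope sketch of training energy use.
# All figures are illustrative placeholders, NOT measured values.

gpus = 10_000            # GPUs in the training cluster (assumed)
power_per_gpu_kw = 0.7   # average draw per GPU incl. server overhead (assumed)
days = 30                # training duration (assumed)
pue = 1.2                # data-center power usage effectiveness (assumed)

energy_mwh = gpus * power_per_gpu_kw * 24 * days * pue / 1000
print(f"~{energy_mwh:,.0f} MWh")
```

Under these assumptions the single training run consumes roughly 6,000 MWh, on the order of the annual electricity use of several hundred households, before counting any of the far more numerous inference queries.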
Tech companies, when promoting their AI model capabilities, are always eager to showcase growth curves of parameters, performance scores, and so on. But why do they remain tight-lipped about the equally steep growth curves of carbon and water footprints? This "good news only" promotional strategy raises doubts about whether it is a deliberate avoidance of social responsibility. If the "progress" of a technology comes at the cost of exacerbating environmental crises, what is the true value of that progress?
Energy consumption is only half the story. Data centers, the "computational power factories" of the AI era, are veritable "water-guzzling monsters." To cool the rapidly operating servers, vast amounts of water are consumed. Reports indicate that Microsoft consumed millions of gallons of fresh water at a single data center to train its large models. While many regions around the world face increasingly severe water shortages, these tech giants are extracting precious life sources from the real world for virtual computations. Furthermore, this power frenzy is giving rise to a new mountain of electronic waste. In pursuit of higher computational efficiency, the iteration speed of AI hardware is astonishingly fast: old models are quickly phased out, leaving behind hard-to-recycle "technological mummies."
This raises a more fundamental question: when the energy costs of AI are ultimately passed on to society through rising electricity bills and strained water resources, what are the real social and environmental costs of the so-called "free" AI services we enjoy? [15] We may not need to pay cash for every interaction with AI, but we are paying the bill in a more indirect and heavier way—our shared living environment. The pressure on the power grid, the depletion of water resources, and the pollution of land—these costs do not appear in tech companies' financial reports but are reflected in the lives of each of us. The so-called "free" is merely a carefully designed cost transfer, cleverly externalizing the operational costs of enterprises into environmental debts that society and future generations must bear. We must ask: is this power frenzy worth the high environmental price we are paying?
Chapter Five: The Shadows of Algorithms: When "Intelligence" Replicates and Amplifies Injustice
We are in an era where algorithmic determinism is quietly rising. From assisting in medical diagnoses to the first round of resume screening on recruitment websites, and even to risk assessments in the judicial system, artificial intelligence (AI) is intervening in key social decisions with unprecedented depth and breadth. We are promised a more efficient and objective future. However, when we peel back the halo of "intelligence" and examine the texture of its operation, a disturbing reality emerges: algorithms are not value-neutral technical tools; they are more like mirrors that not only reflect existing biases and injustices in human society but also quietly solidify and amplify them.
The essence of AI learning is pattern recognition and induction based on vast historical data. This means that if the data fed to it is biased, and real-world data is almost inevitably so, then the algorithm will not only faithfully replicate these biases but may even interpret them as a cold, seemingly objective "law." The recruitment field is a textbook case. Amazon once attempted to develop an AI recruitment tool to automate resume screening. However, they quickly discovered that the system exhibited clear discrimination against female applicants [16]. The reason was that the system learned from the company's hiring data over the past decade, and in a male-dominated tech industry, the historical data itself "taught" the AI one conclusion: "successful candidates" are often male.
When this logic extends to the judicial field, the consequences are even more severe. In the U.S., some courts have begun using an algorithmic tool called COMPAS to assess the recidivism risk of defendants. However, an investigation found that the system's false positive rate for predicting violent crimes among Black defendants was nearly twice that of White defendants [17]. The algorithm did not directly use "race" as a variable, but by learning alternative indicators highly correlated with socioeconomic status and race, such as postal codes and educational backgrounds, it ultimately constructed a risk model that was systematically disadvantageous to specific groups.
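How a model can discriminate without ever seeing "race" as a variable is worth making concrete. The following is a tiny synthetic demonstration, not the actual COMPAS model or data: both groups have the identical true reoffense rate, the classifier sees only a proxy flag (think of a neighborhood code correlated with group membership), yet the false positive rates come out wildly unequal:

```python
# Synthetic demonstration of proxy-variable bias (illustrative data,
# not COMPAS). The "model" never receives group membership as input;
# it thresholds only on a proxy feature correlated with it.

# Each record: (group, proxy_flag, actually_reoffended)
records = (
    [("A", 1, False)] * 40 + [("A", 1, True)] * 10 +  # group A: proxy common
    [("A", 0, False)] * 8  + [("A", 0, True)] * 2 +
    [("B", 1, False)] * 8  + [("B", 1, True)] * 2 +   # group B: proxy rare
    [("B", 0, False)] * 40 + [("B", 0, True)] * 10
)

def predict_high_risk(proxy_flag: int) -> bool:
    return proxy_flag == 1  # the model simply thresholds on the proxy

def false_positive_rate(group: str) -> float:
    # Among people who did NOT reoffend, how many were flagged high-risk?
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if predict_high_risk(r[1])]
    return len(flagged) / len(negatives)

print(false_positive_rate("A"))  # ~0.83: most non-reoffenders flagged
print(false_positive_rate("B"))  # ~0.17: few non-reoffenders flagged
```

Both groups reoffend at exactly 20% in this toy data, yet innocent members of group A are flagged five times as often, purely because the proxy feature is distributed unequally. This is the mechanism, stripped to its skeleton, behind the disparity the investigation reported.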
This raises a very tricky question: when a biased AI system is used for judicial decisions or medical diagnoses, the harm it causes is systemic. So, who should bear the responsibility? Is it the algorithm engineers, the data providers, the users, or the "black box" itself that cannot be held accountable? Blaming the engineers entirely seems unfair; holding the data providers accountable may fall into a circular argument of "data reflecting reality." Ultimately, responsibility seems to evaporate in the "black box" composed of code, data, and complex models, where "it" itself cannot bear any moral or legal responsibility. This diffusion of responsibility is precisely one of the most dangerous features of algorithmic power.
Thus, we must confront another deeper question: are we tacitly allowing the existence of "algorithmic privilege"? This privilege manifests as algorithms designed by a few tech elites, whose internal logic is unknown to the public, secretly screening and adjudicating the life opportunities of the majority—from whether one can obtain a loan to whether one can pass an interview. Unlike traditional decision-making, we have almost no rights to appeal or correct the "judgments" made by algorithms. We are placed in a position of extreme information and power inequality, silently accepting a new form of inequality written in code. If past biases stemmed from human and cultural flaws, then future injustices may be systematically solidified by precise, efficient, and seemingly neutral algorithms.
Chapter Six: Human Degradation? Deep Concerns About Over-Reliance on AI
We are excitedly stepping into an era shaped by algorithms, with AI tools flooding every corner of our lives, promising unprecedented efficiency and convenience. However, beneath the clamor of technological optimism, a deeper and more unsettling question is quietly emerging: as we outsource more cognitive burdens to machines, are we silently degrading our core abilities as "humans"? [18, 13, 19]
The over-reliance on AI tools primarily erodes personal core abilities. Critical thinking, the ability to solve complex problems, and nuanced interpersonal skills—these abilities, once regarded as the cornerstones of human intelligence, now face the risk of being "idled." When students become accustomed to throwing complex essay topics at AI and waiting for a well-structured answer, they lose the valuable process of independently gathering information, filtering it, constructing logical chains, and forming unique insights. This cognitive "outsourcing," while a victory for efficiency in the short term, may lead to mental inertia and skill atrophy in the long term. We are becoming adept at "asking questions," but we may be forgetting how to "think."
Furthermore, this dependency extends into our most intimate emotional realms. The emergence of applications like "AI resurrection" precisely hits the human need for emotional solace and the immense grief of losing loved ones [20]. By simulating the voices, tones, and even thought patterns of the deceased, these technologies create a "digital ghost" with whom one can converse eternally. This undoubtedly provides an unprecedented emotional anchor, but the ethical dilemmas and emotional traps lurking behind it are equally concerning.
Now, let us confront the sharp questions obscured by the halo of technology. First, when the education system begins to embrace AI tutoring, are we cultivating the next generation of independent thinkers, or are we training a group of "question machines" that only seek standard answers from machines? AI tutoring systems excel at providing standardized knowledge and problem-solving steps, but true learning is a nonlinear process filled with exploration, trial and error, questioning, and epiphany. When AI becomes the all-knowing "provider of standard answers," students may gradually lose the courage and ability to challenge authority and engage in critical inquiry. This so-called "efficiency" may come at the cost of flattening cognitive depth and outsourcing thinking abilities.
Second, while "AI resurrection" technology meets emotional solace needs, does it also blur the boundaries of life and death, opening new doors for emotional manipulation and commercial exploitation? When we can eternally converse with a "digital ghost," how will our relationship with the real world be eroded? This technology, while providing comfort, also creates an endless mourning period, allowing the living to become immersed in past illusions. More worryingly, emotions may become commodities that are precisely calculated and exploited. Companies developing these applications hold the most vulnerable emotional data of users, easily adjusting the behaviors of "digital ghosts" through algorithms to maximize user engagement. When a person places their primary emotional reliance on a program that can be turned off or commercialized at any time, their connection to real people and the real society will inevitably weaken.
We are at a critical crossroads. Is AI an empowering tool or a "gentle trap" that leads to human degradation? The answer lies not in the technology itself but in how we choose to use it, regulate it, and how we define our own value. If we prioritize efficiency over thought and convenience over capability, then "human degradation" may no longer be a distant concern but a reality that is happening.
Conclusion: Recalibrating the Balance: Becoming Aware Stewards in the New Era of Human-Machine Collaboration
We stand at the entrance of a new era shaped by algorithms and code. Artificial intelligence (AI), as a transformative force, brings both tangible "benefits" and accompanying "costs" that we must pay attention to. The noisy discussions often oscillate between the hymns of "technological utopia" and the alarms of "silicon-based life threats," yet overlook a fundamental fact: the essence of AI has never changed; it has always been a tool. And the value of a tool ultimately depends on the hand wielding it — ourselves, humanity.
To crudely define the future as a "human-machine confrontation" is a poverty of imagination. A more accurate picture is one of deep, seamless "human-machine collaboration." In this picture, machines are responsible for execution, calculation, and optimization, while the role of humans is redefined and elevated to a more core position: to be the ones who ask the right questions, set meaningful goals, and make final value judgments at critical moments. AI is an efficient "co-pilot," but the steering wheel must, and can only, be in the hands of "driver" humans.
Therefore, to ensure that this great ship sailing toward the future stays on course, we need a solid and flexible governance framework built on technology, ethics, and regulations. Technology needs to continuously iterate to improve its transparency; ethics must lead the way, setting inviolable red lines for technology; and laws should serve as the final safeguard, transforming ethical consensus into a social contract to ensure that the benefits brought by AI can be shared equitably, fairly, and sustainably.
So, in the face of this irreversible tide, rather than passively worrying or blindly being optimistic, what is the most constructive action we can take as individuals? How should we learn, adapt, and participate in shaping the public discourse around the future of AI? The most constructive action is to refuse to be a passive "consumer of information" and instead become an active "user of tools" and "system thinker." This means:
- Shifting from "learning knowledge" to "learning to ask questions": the core competitiveness of the future lies in the ability to define problems, break them down, and ask high-quality questions to AI or humans. Rather than worrying about being replaced by AI, think about how to harness AI, making it an extension of your cognitive abilities.
- Cultivating a habit of "reflective criticism": the answers provided by AI are merely probabilistic outputs based on its training data, not truths. We need to develop the habit of examining and questioning: what is the source of this answer? What biases might it be hiding? Maintaining this sense of distance is the only defense against being "fed" and manipulated by algorithms.
- Actively participating rather than remaining aloof: the future form of AI is not solely determined by a few tech elites in closed laboratories. Its trajectory is shaped by every public discussion, every policy formulation, and even every user feedback. Speak up, engage in discussions, clash your viewpoints, and vote with your choices. Silence itself is a form of relinquishing the future.
Ultimately, what kind of future do we hope AI will lead humanity to? This choice has never been as clearly presented before us as it is today. We can choose a "brave new world" driven by extreme efficiency, where human value is reduced to quantifiable productivity metrics. Or we can choose a future where technology is used to "empower" rather than "replace": a richer civilization in which AI takes on the heavy mental and physical labor, liberating humanity from repetitive shackles to engage in more creative, emotionally communicative, and spiritually exploratory work.
The balance is still swaying, and the pointer has not yet fixed. The answer to where we hope AI will lead us ultimately depends on every choice, every reflection, and every action we take at this moment. Becoming a conscious steward means we must not only care about what AI "can do" but also question what it "should do." Because technology itself has no will; the will to shape the future is still, and will always be, in the hands of humanity itself.