CHAPTER 14

DATE 02.12.2025

Society as the Other-One: Understanding Weiser’s Warning and the Rise of Instrumentarian Power

Introduction: What This Passage Is Really Saying

The passage argues that Mark Weiser—the pioneer of “ubiquitous computing”—saw something far deeper than just a future filled with many computers. He sensed a new form of power emerging from the presence of countless interconnected devices around us. This power is not classical totalitarianism (like the Stalinist or fascist models of the 20th century) but something far more subtle and pervasive, and historically unprecedented.
This new power, later called instrumentarian power by Shoshana Zuboff, has the capacity to reshape society because it works through data, prediction, and behaviour shaping rather than fear or force.


Ubiquitous Computing: What Weiser Really Envisioned

Key idea: Weiser imagined a world where computers are everywhere—in every object, every room, every space—working silently in the background.

What does this mean in simple terms?

Weiser predicted that computers would become so common and so embedded in daily life that we would stop noticing them.
Not one computer on your desk—
but hundreds around you in your home, workplace, car, city.

Real-world examples today

  • Your phone tracks your steps, sleep, screen time.

  • Smart speakers (Alexa, Google Home) listen for commands—and sometimes more.

  • CCTV cameras with facial recognition track bodies and movements.

  • Smart TVs report what you watch and for how long.

  • GPS in cars tracks driving behaviour.

  • Shopping apps monitor what you browse and what you buy.

These devices quietly observe you and send data to powerful institutions.

What Weiser sensed

If all these devices can sense your presence, collect your data, and communicate with each other, then they create an infrastructure that is more powerful than any surveillance in the past.


Making Old Totalitarianism Look Like “Anarchy”

Key idea: Classical totalitarianism controlled your public behaviour.
Instrumentarian power can shape your internal choices.

Why did Weiser say totalitarianism would look like "sheer anarchy"?

Because older dictatorships used crude methods:

  • police,

  • spies,

  • censorship,

  • propaganda.

But imagine a system that knows:

  • what you think,

  • what you fear,

  • what you desire,

  • how likely you are to act in a certain way.

And it learns this from your devices—without needing fear, force, or prisons.

Practical manifestations

  • TikTok and Instagram algorithms shape what people find attractive, desirable, or dangerous—without users realizing the manipulation.

  • Political micro-targeting (e.g., Cambridge Analytica) influences voters based on psychological profiles.

  • E-commerce nudges push users into impulsive purchases using behavioural data.

  • Insurance apps adjust premiums based on your driving or health data, pressuring you to behave in certain ways.

  • City surveillance systems automatically detect “unusual behaviour,” influencing how people move and act.

Such systems influence behaviour softly and continuously, not violently and openly.


Instrumentarian Power: A New, Unprecedented Force

Key idea: Instrumentarian power does not want to conquer your mind or ideology.
It wants to predict and modify your behaviour for profit or control.

What makes it new?

Unlike totalitarian regimes that imposed political ideology, instrumentarian power operates through:

  • massive data collection,

  • algorithmic predictions,

  • behavioural nudges,

  • automated environments.

It doesn’t need to make you fear the government.
It just needs to shape your choices without you noticing.

Real-world examples

  • Google Maps predicts your route and nudges you to certain paths (influencing city traffic).

  • Netflix predicts what you will watch next and designs screens to keep you watching.

  • Online shopping sites personalize prices and offers based on your psychological weaknesses.

  • Smart city sensors predict crowd behaviour and send automated policing alerts.

In each case, your future behaviour becomes something to be measured, predicted, and steered.


Society as the “Other-One”: How This Power Redefines Social Life

Key idea: When machines observe everything, society becomes a mirror that constantly watches and shapes individuals.

What this means

The passage suggests that society itself becomes the “other”—
not a community of people, but an instrumental environment that reacts to your behaviour and influences it.

Practical manifestations

  • You modify your behaviour online because you know platforms are “watching.”

  • Young people shape their identities based on algorithmic trends instead of community norms.

  • Social life becomes performance: the self curated for the machine, not for human relationships.

  • Public spaces become behavioural laboratories—where sensors track your footfall, speed, gestures.

Society no longer simply consists of people interacting.
It becomes a data-driven system that classifies, predicts, and adjusts human behaviour.


What Might This Power Have in Store for Us?

Key idea: If this power is stronger than past totalitarianism, its future effects could be enormous.

Possible future manifestations

  1. Behavioural futures markets
    Companies may trade predictions of your future behaviour like stocks.

  2. Automated social control
    Algorithms may determine:

    • who gets loans,

    • who gets jobs,

    • who is flagged as a “risk,”

    • who is denied entry to places.

  3. Loss of human agency
    People may outsource choices to machines—
    what to eat,
    what to watch,
    what to believe,
    whom to vote for.

  4. Normalization of surveillance
    Constant tracking becomes an accepted part of life.

  5. Invisible governance
    The real power lies not with elected governments but with data systems that control behaviour subtly.


Conclusion: Why This Passage Matters Today

The passage warns that the computational environment around us is not merely technology—it is a new form of social power.
Unlike totalitarianism, which tried to dominate through coercion, instrumentarian power works silently through:

  • prediction,

  • personalisation,

  • nudging,

  • data-driven shaping of choices.

This makes it harder to detect, harder to resist, and more deeply embedded in everyday life.

Reasoned conclusion

To protect human freedom, societies must recognize that:

  • surveillance is no longer a political tool—it is an economic engine;

  • threats to autonomy don’t come from dictators—they come from systems that track and modify behaviour;

  • democracy must evolve to regulate not just governments, but the algorithmic infrastructures that shape human experience.

Unless we understand this shift, we risk entering a future where the greatest power over society is not political authority but a data-driven, algorithmic force that we neither see nor control.

   

The Normalization of Surveillance Capitalism: How Society Becomes a Laboratory for Behavioural Control

Introduction: What This Passage Is Warning Us About

The passage argues that ideas once considered disturbing and unacceptable—especially the behavioural conditioning vision imagined by B.F. Skinner—are now becoming normal, even inspiring for today’s technology giants.
Surveillance capitalism, which depends on predicting and controlling human behaviour, has moved beyond the digital world and is now turning society itself into a field for data extraction and behavioural modification.


Skinner’s “Walden Two”: From Rejected Utopia to Tech Industry Blueprint

Key idea: Skinner imagined a perfect society engineered by controlling human behaviour through positive reinforcement.
People in the 1950s were horrified by this idea.

Why did “Walden Two” provoke revulsion?

Because it treated human beings as objects to be shaped, not free individuals making choices.
It assumed that if behaviour can be predicted and controlled, society becomes more efficient.

What the passage says

Today, surveillance capitalism is doing exactly what Skinner imagined—except now it is real, and it is marketed as innovation, efficiency, and personalization.

Real-world manifestations

  • Reward systems in apps (likes, hearts, streaks) condition behaviour like Skinner’s pigeons in a box.

  • Social media feeds reinforce addictive scrolling.

  • Gamified workplaces (Amazon warehouses, delivery platforms) reward employees to push for more speed and compliance.

  • Fitness apps use badges and milestones to condition daily routines.

Skinner’s thought experiment has become a commercial strategy.
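
To see the mechanism itself (not any platform’s actual code), here is a minimal illustrative sketch of a variable-ratio reward schedule: the operant-conditioning pattern Skinner found most habit-forming, and the one that likes, hearts, and streaks loosely resemble. All names and numbers below are invented for illustration.

```python
import random

def next_reward_gap(mean_ratio=4):
    """Variable-ratio schedule: a reward arrives on average every
    `mean_ratio` responses, but at an unpredictable moment."""
    return random.randint(1, 2 * mean_ratio - 1)

gap = next_reward_gap()
responses = 0
for check in range(20):  # the user keeps opening the app
    responses += 1
    if responses >= gap:
        print(f"check {check}: reward (a like, a badge, a streak)")
        responses = 0
        gap = next_reward_gap()
    else:
        print(f"check {check}: nothing -- check again soon")
```

Because the reward is unpredictable, the checking itself becomes compulsive; a fixed, predictable reward would extinguish the habit far faster.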


Normalization and Habituation: When the Unacceptable Becomes Ordinary

Key idea: Practices once considered invasive now feel normal because we have become used to them.

How does normalization happen?

Gradually.
What begins as discomfort becomes convenience.
What begins as unusual becomes routine.

Practical examples

  • Fifteen years ago, people were shocked that Google scanned emails for advertising. Today it feels normal.

  • Smart speakers at home quietly listening were once frightening; many now see them as harmless helpers.

  • Sharing location data felt unsafe; now millions leave location tracking on permanently.

  • Facial recognition in public spaces seemed dystopian; today it is used in airports, malls, even schools.

Normalization is the silent lubricant of surveillance capitalism.


The “Prediction Imperative”: Why Big Tech Wants Total Information

Key idea: Surveillance capitalism needs to predict human behaviour with certainty to make profits.

Why?

Because companies sell “behavioural futures”—predictions about what people will do next—to advertisers, insurers, political campaigns, and others.

More data → better prediction

Thus the industry constantly pushes toward total information:

  • where you are,

  • what you buy,

  • what you say,

  • how you move,

  • what you feel,

  • who you talk to,

  • how long you pause on a screen.

Practical manifestations

  • Google’s smart city plans (e.g., Sidewalk Labs in Toronto) seek total data on urban life.

  • Meta’s metaverse vision hopes to capture body language, micro-expressions, and emotional cues.

  • AI wearables track heart rate, stress, and even mental states.

The closer they get to total information, the more accurate their behavioural predictions become.


From the Virtual World to the Real World: Expanding the Data Frontier

Key idea: Data extraction is no longer limited to your online activity.
Surveillance capitalism wants to monitor your offline life as well.

How this expansion works

The aim is to turn the entire real world into a data source.

Real-world manifestations

  • Smart homes (lights, ACs, fridges, locks) constantly report user behaviour.

  • Smart cars record driving patterns, routes, and conversations.

  • Retail stores track customer movements with cameras and sensors.

  • Wearables collect health, fitness, and emotional data.

  • Public transport apps record travel histories.

Your entire physical existence becomes data.


The “Reality Business”: Turning Everything Into a Computational Object

Key idea: Surveillance capitalism transforms all aspects of life—people, objects, activities—into things that can be measured and manipulated.

What does “equivalence without equality” mean?

Everything is treated as data—
but not with equal dignity.
Your value is not as a human being, but as a predictable data point.

Examples

  • A person’s worth is reduced to a “credit score.”

  • Job applicants are ranked by algorithms, not by human judgment.

  • Police categorize neighbourhoods by “crime risk scores.”

  • Insurers adjust premiums based on your digital footprint.

  • Social media ranks users by engagement potential, not humanity.

Everyone becomes equivalent as data, but no one is equal as a person.


Annexing Society Itself: The Final Frontier of Data Extraction

Key idea: Once data extraction enters personal behaviours, the next target is society’s fundamental structures.

What gets annexed?

  • friendships,

  • family relations,

  • community ties,

  • civic life,

  • political behaviour,

  • social norms.

Practical manifestations

  • WhatsApp and Facebook influence political polarization and voting patterns.

  • Social media shapes how children form identities and friendships.

  • Dating apps shape relationship trends and marriage patterns.

  • Workplace algorithms influence collaboration and teamwork.

  • Consumer data profiles shape how lenders treat entire communities.

  • Predictive policing shapes how society sees “dangerous” neighbourhoods.

Society becomes a computational field: every relation is rendered, measured, predicted, and nudged.


Conclusion: What This Passage Ultimately Warns Us About

The passage warns that we are entering a stage where surveillance capitalism no longer just observes individuals—it tries to shape society itself.

Reasoned conclusion

  1. Ideas once rejected as manipulative (like Skinner’s behavioural conditioning) are now embedded in our technologies.

  2. We have become accustomed to constant surveillance, mistaking it for convenience.

  3. Surveillance capitalism’s hunger for total information pushes it to extract data from every corner of real life.

  4. This transforms society into a predictable, controllable environment—an engineered reality, not a free one.

  5. If society becomes a computational object, human agency, dignity, and democratic choice weaken.

To protect freedom, we must:

  • recognize how behaviour is being shaped,

  • challenge the normalization of surveillance,

  • demand accountability for data use,

  • and reassert human agency over machine prediction.

Only then can society remain a space for self-determination rather than a behavioural laboratory built for profit.

 

DATE 02.12.2025 / PAGE 251–52

The Rise of an Instrumentarian Society: How Big Other Seeks Total Coordination and Control

(A clear, simple, nuanced explanation with practical manifestations and reasoned conclusions)


Big Other’s All-Seeing Presence: From Inevitable to All-Controlling

Key idea:
People now accept the presence of “Big Other”—the vast digital surveillance infrastructure—as something unavoidable.
But acceptance is not the final goal.

What is the real aim?

To achieve full visibility and control over social behaviour, interactions, and collective processes so that large corporations can operate at massive scale and influence.

Practical manifestations

  • Social media platforms track not only individuals but how entire communities behave.

  • Navigation apps track city-wide movement to influence traffic flows.

  • Delivery platforms track labour patterns to control supply chains.

  • Smart cities monitor energy use, human movement, and crowd density to shape urban behaviour.

This is not just watching individuals—
it is synchronizing society like a giant algorithm.


Totalitarianism vs. Instrumentarianism: Two Paths to Totality

Key idea:
Both totalitarianism (like 20th-century dictatorships) and instrumentarianism (data-driven behavioural control) aim for “totality,” but for different reasons.

Totalitarianism

  • Goal: political control

  • Method: fear, violence, coercion

  • Objective: obedience to ideology or leader

Instrumentarianism

  • Goal: market dominance

  • Method: data extraction, behavioural prediction, algorithmic control

  • Objective: certainty and profit

Practical distinctions

  • North Korea uses fear to control citizens.

  • Big Tech uses data and algorithms to shape citizen choices invisibly.

  • A dictator threatens punishment;
    a platform manipulates through “nudges,” personalized feeds, and predictive models.

Instrumentarianism does not want your soul—
it wants your behavioural patterns.


The Division of Learning: How Corporations Take Control of Knowledge

Key idea:
Instrumentarian power works by controlling who has access to large-scale learning.

What does this mean?

Only Big Tech companies (Google, Meta, Amazon, Baidu) have the:

  • computational power

  • datasets

  • algorithms

  • behavioural models

They learn about society at a depth no government or community can match.

Practical manifestations

  • Google knows global movement patterns via Maps.

  • Meta knows social emotions and political polarizations via Facebook.

  • TikTok understands human attention flows with precision.

  • Amazon predicts consumer desires better than consumers themselves.

This knowledge imbalance becomes a new form of power.


Optimizing Society for Market Goals: A New Kind of Social Engineering

Key idea:
Instrumentarianism reshapes society not for political ideology, but to serve market needs.

What does “societal optimization” mean?

It means designing society—its habits, emotions, behaviours—to produce:

  • more predictability,

  • more engagement,

  • more consumption,

  • more profitable behaviour.

Real-world manifestations

  • Uber directs drivers using algorithmic instructions, optimizing supply for profit.

  • Social media algorithms amplify emotions that keep users scrolling.

  • Smart home devices guide consumption patterns (electricity, entertainment).

  • Health apps nudge people into routines that feed commercial ecosystems.

Society becomes an optimized market machine, not a community of free human beings.


China vs. Surveillance Capitalists: Different Motivations, Similar Tools

Key idea:
The instrumentarian vision looks similar to China’s political surveillance system, but motivations differ.

China’s political elite

  • Uses data for political control and stability.

  • Surveillance is a state project aimed at obedience.

Surveillance capitalists

  • Use data for profit, not political domination.

  • The aim is a market opportunity, not state ideology.

Common features

  • Predictive policing models

  • Real-time surveillance

  • Social behaviour scoring

  • Algorithmic guidance

  • Datafication of daily life

Example

China uses its Social Credit System to enforce obedience.
Silicon Valley uses scoring systems (credit, reputation, engagement) to drive profitable behaviours.

Both reshape society—but toward different ends.


Society Becomes Data: “Equivalence Without Equality”

Key idea:
To surveillance capitalists, every person and every relationship is reduced to behavioural metrics.

Meaning

You and your neighbour become identical data points, even though your lived experiences are profoundly unequal.

Practical manifestations

  • Social media ranking systems don’t distinguish by dignity—just engagement.

  • Predictive policing treats neighbourhoods as risk profiles, ignoring historical injustices.

  • Credit scoring ignores context but determines life opportunities.

  • Employers use algorithmic filters that reduce human beings to data attributes.

Society becomes a predictive spreadsheet, not a human community.


The New Vision: A Society Designed Like a Machine Learning System

Key idea:
Just as industrial society copied the factory, today’s emerging society is copying machine learning models.

What does that mean?

A machine learning system needs:

  • constant data inputs,

  • continuous behavioural feedback,

  • calculated predictions,

  • refined control loops.

Surveillance capitalists want society to work the same way:
humans become the “data,”
algorithms become the “managers,”
and social life becomes a “predictive environment.”
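
To make the “control loop” idea concrete, here is a minimal, purely illustrative sketch of the predict–nudge–observe–update cycle described above. Everything here (the `predict`, `nudge`, and `update` functions, the toy numbers) is a hypothetical simplification, not a description of any real system.

```python
import random

def predict(model):
    """Calculated prediction: the behaviour the model expects next."""
    return max(model, key=model.get)

def nudge(true_prefs, predicted):
    """Behavioural steering: weight the predicted option more heavily
    (e.g., by reordering a feed) and observe what the user does."""
    options = list(true_prefs)
    weights = [true_prefs[o] * (3 if o == predicted else 1) for o in options]
    return random.choices(options, weights=weights)[0]

def update(model, observed, rate=0.1):
    """Continuous feedback: fold the observed behaviour back into the model."""
    for option in model:
        target = 1.0 if option == observed else 0.0
        model[option] += rate * (target - model[option])

# A user's true preferences, and the platform's (initially vague) model.
user = {"news": 0.5, "videos": 0.3, "shopping": 0.2}
model = {"news": 0.34, "videos": 0.33, "shopping": 0.33}

for step in range(5):
    guess = predict(model)        # constant data inputs -> prediction
    action = nudge(user, guess)   # steer, then observe
    update(model, action)         # refine the control loop
    print(step, guess, action)
```

The loop never asks what the user wants; it only narrows the gap between prediction and observation, which is exactly what the passage means by “certainty.”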

Real-world manifestations

  • Smart cities functioning like autonomous control systems.

  • Supply chains run by predictive models instead of managers.

  • Automated workplaces where algorithms assign tasks.

  • Dating, friendships, and even political opinions shaped by algorithmic feedback loops.

The world becomes a giant, self-adjusting machine.


The Collapse of Social Trust and Social Relations

Key idea:
When machine predictions replace human relationships, society loses its human core.

Big Other replaces trust

Instead of trusting people, society starts trusting data:

  • algorithms judge risk, not communities;

  • machine predictions substitute for human understanding;

  • metrics substitute for relationships.

Practical manifestations

  • Ride-sharing relies on rating systems, not human trust.

  • Online sellers trust algorithmic fraud detection, not customer honesty.

  • Schools use surveillance proctoring instead of trusting students.

  • Employers use keystroke monitoring instead of trusting workers.

Society becomes a place where trust is automated, not earned.


Conclusion: What This Transformation Ultimately Means

The passage delivers a profound warning:
we are moving toward a society where human behaviour, relationships, and even the meaning of social life are reorganized around the needs of machine learning systems and market forces.

Reasoned conclusion

  1. Instrumentarianism is not violent, but it is deeply transformative.

  2. It reshapes society by data extraction, prediction, and behavioural steering, not by terror.

  3. Corporations imagine a future where society functions like a giant algorithm—predictable, coordinated, and profitable.

  4. Social trust, values, and relationships are replaced by computational certainty.

  5. Society, as we understand it—messy, emotional, unpredictable, deeply human—is at risk of becoming obsolete.

If we fail to recognize this transformation, the future will not be shaped by democratic choice or moral reasoning but by the cold logic of machine learning and market optimization.

The question is no longer whether society will change, but whether humans will remain central to it—or become inputs to a grand computational design. 

DATE 03.12.2025 / PAGE 252

  

Totality Includes Society: How Instrumentarian Power Counts, Claims, and Converts Our Social World

1. Quick summary — what the passage is saying

In plain terms: leaders of surveillance-capitalist companies loudly announce how big and powerful their systems are — the devices, users, and data they control — like generals bragging about an army. That bragging isn’t just PR. It signals a wider aim: to fold society itself into a single, measurable, controllable domain. “Totality” here means not only collecting more data, but treating social life as an object to be rendered, calculated, and governed by technology.


2. The “inventory” metaphor — why companies count their troops

What it means: When CEOs list numbers (devices, users, cloud customers), they are showing scale — how many points of contact they have into everyday life. Scale is power: the more endpoints and people you reach, the more behaviour you can observe, predict, and influence.

Why it matters: Numbers aren’t neutral. A million devices is not just a statistic — it’s a standing army for data collection, experiment, and influence.

Practical manifestations

  • A mobile OS installed on hundreds of millions of phones can push updates, collect telemetry, and shape defaults that steer users’ behaviour.

  • A widely used office suite collects metadata about work patterns and collaboration flows that can be monetized or used to design “productivity” nudges.

  • Cloud services that host business apps give the provider deep visibility into organizational decisions and workflows.


3. Velocity and volume: why “how fast” and “how much” matter

What the passage says: The tech leaders stress not just how many devices exist, but how quickly data grows and how much is generated. Rapid, enormous flows of data make prediction models more powerful.

Why it matters: Fast, huge datasets allow companies to train better AI models. The models learn more quickly, catch subtle trends, and become more confident in predicting and shaping human behaviour.

Practical manifestations

  • Autonomous vehicles generating 100 GB/s feed models that improve driving decisions — and also create continuous records of what people do in public spaces.

  • Social platforms generate trillions of interactions; those interactions train recommendation engines that tightly shape what users see next.

  • Smart sensors in cities collect traffic, pollution, and movement data that enable automated management — and also automated classification of neighbourhoods and activities.


4. From pieces to totality: what “including society” actually means

What it means: It’s one thing to log clicks. It’s another to treat neighborhoods, workplaces, friendships, elections, and public rituals as measurable systems to be optimized. “Including society” means the instruments are not just attached to people — they are attached to the social processes that make collective life possible.

Why it matters: When social processes are rendered data-first, decisions about communities, rights, and resource distribution may be made by models and market logic rather than public debate or democratic institutions.

Practical manifestations

  • City planning informed primarily by sensor data rather than community input.

  • Hiring and credit decisions automated by algorithms trained on digital traces of communities.

  • Political campaigning optimized by micro-targeting that treats constituencies as clusters of predicted behaviours rather than citizens with dignity.


5. The rhetorical function of “shock and awe” in developer conferences

What the passage describes: CEOs like Satya Nadella publicly recite these figures to inspire developers and reassure investors. This spectacle both recruits collaborators and normalizes the idea that tech can — and should — “change the world.”

Why it matters: The rhetoric turns technical capacity into moral license. If you can do something at scale, the rhetorical thrust becomes: you should. That pressure encourages urgent deployment over careful ethical debate.

Practical manifestations

  • Developers build features framed as “broad impact” without full consideration of social harms.

  • Investors back ambitious projects that prioritize rapid scaling over rights or fairness.

  • Policy-makers feel pressure to “enable innovation” rather than regulate it.


6. The political and social consequence: normalization of power asymmetry

What happens next: As these companies grow their “troops,” they accumulate asymmetric knowledge and control. Citizens, governments, and small organizations cannot match that scale of learning.

Why it matters: Power concentrated in entities with superior data and compute changes who governs social outcomes. Democracy and local accountability can be sidelined by platforms that operate transnationally and at machine speed.

Practical manifestations

  • A company can change defaults or add features that alter civic discourse (e.g., amplification algorithms) faster than laws adapt.

  • Municipal decisions built on opaque vendor analytics create dependence on private firms for essential public functions.

  • Cross-border platforms can influence elections or markets in multiple countries simultaneously.


7. Reasoned conclusion — what the passage asks us to notice and do

The passage urges us to see beyond the spectacle of big numbers and marketing bravado. Those figures are a declaration of intent: to make social life visible, coordinated, and manipulable at scale. When corporations treat society as another dataset to be rendered and optimized, the consequences are political and moral as much as technical.

Key takeaways

  • Counting devices and users isn’t just business: it’s an assertion of capability to shape lives.

  • Velocity of data amplifies predictive power and the speed of social intervention.

  • Including “society” in the instrumentarian project means social norms, institutions, and communities become targets for optimization.

  • The rhetorical celebration of scale pressures faster rollout, often at the expense of democratic scrutiny and human dignity.

What to do next (practical steps)

  • Demand transparency about what data is collected and how models affect social processes.

  • Insist on democratic oversight for systems used in public and civic domains (smart cities, policing, elections).

  • Support regulations that constrain monopoly control over the “division of learning” (who owns training data and models).

  • Foster public alternatives — community-owned data infrastructures, open-source civic platforms, and data trusts.

If scale is power, then public oversight and social responsibility must match that scale. Otherwise, the “totals” tech leaders present onstage will quietly become totals that define how we live together.

DATE 03.12.2025 / PAGE 253


Society as the Other-One — Google’s March Toward Total Knowledge and the Instrumentarian Project

Quick opening summary

This passage shows how Google’s leaders — Sundar Pichai, Eric Schmidt, and Larry Page — publicly describe the company’s reach and ambitions as if announcing a peaceful empire. They transform device counts, user numbers, and product rollouts into evidence of a moral and technical mission: to know everything and then reorganize the world on that knowledge. The stated goal is benevolent (“solve problems,” “bring abundance”), but the method is instrumentarian: folding people, places, purchases, and decisions into a data pipeline so algorithms can pre-empt, predict, and manage human life.


1. Counting the troops: why big numbers matter

What the passage says, simply: Pichai and Nadella both list huge numbers (devices, users, uploads) to show how many “touchpoints” their companies have into daily life.

Why this matters:

  • Each device or account is a sensor and a channel for influence.

  • Scale = leverage: the more users and devices you control, the more you can learn about and steer behaviour.

  • Publicly declaring the scale also motivates employees and normalizes the company’s role in society.

Practical manifestations:

  • A billion Gmail users means Google sees how people communicate and what topics matter to them.

  • Two billion Android devices let Google shape defaults, privacy settings, and which apps people install.

  • Millions of Photos uploads create massive datasets about places, faces, and routines.


2. “Assistant everywhere”: recasting devices as social infrastructure

What the passage says, simply: Google Assistant is framed as the interface that will be “everywhere” and help people throughout their lives.

Why this matters:

  • When assistants live in phones, cars, TVs, fridges, and speakers, they become the always-on medium through which everyday decisions are nudged.

  • The assistant is positioned not just as a tool but as a social actor that anticipates needs and supplies answers — potentially before the user even asks.

Practical manifestations:

  • Your phone suggests a calendar change because it detected a flight delay.

  • A smart speaker reminds you to buy groceries because it knows you’re low on staples.

  • A car assistant routes you differently based on traffic predictions that also benefit a partner company.


3. Utopian rhetoric: solving human problems or centralizing decision-power?

What the passage says, simply: Executives promise machine learning will cure disease, save species, and free people from toil — a classic utopian claim.

Why this matters:

  • Utopian goals make powerful technological projects morally attractive and politically easier to justify.

  • But framing “total knowledge” as benevolent masks the transfer of decision-rights from publics and institutions to private algorithms.

Practical manifestations:

  • A health AI that diagnoses illnesses faster — great — but if the dataset and control reside in one corporation, treatment priorities reflect corporate values, not democratic choice.

  • An agricultural AI that optimizes crop yields might favor monocultures beneficial to platform partners, harming local biodiversity.


4. Total knowledge as a requirement for “preemptive” services

What the passage says, simply: Google frames knowing “everything” as necessary for providing services that preempt problems — answering before questions are asked.

Why this matters:

  • Preemptive help trades reactive human choice for predictive automation.

  • To preempt well, systems must integrate private details of people’s lives: purchases, health, travel, relationships.

Practical manifestations:

  • Recommendations that appear as helpful (e.g., “We noticed you might run out of insulin”) but draw on sensitive medical, purchase, and activity data.

  • “Proactive” travel rebooking that defaults to a flight that benefits a partner carrier without clearly presenting alternatives.


5. People as “first class objects” in search: objectification of human life

What the passage says, simply: Google intends to treat people like items in a database — searchable, classifiable, and ranked.

Why this matters:

  • When persons are modelled as data objects, their human complexity is flattened into attributes useful for prediction (interests, likelihoods, credit, influence).

  • This objectification makes it technically easier to route, rank, monetize, or gate access to people.

Practical manifestations:

  • Job searches that prioritize candidates by algorithmic fit, not community context.

  • Social or reputation signals used to decide who sees which services or ads.

  • Dating or hiring filtered by predictive markers that exclude, not explain.


6. The moral logic: “we must go deeper” and subordinate decision rights

What the passage says, simply: Page and colleagues argue deeper integration is justified because their software can “solve” problems if society defers decision-making to it.

Why this matters:

  • The claim asks citizens to trust corporations with authority normally distributed across democratic and social institutions.

  • It presumes corporations’ definitions of “important problems” and their metrics for success.

Practical manifestations:

  • Municipal contracts to run urban services on private platforms that define safety and convenience on corporate terms.

  • Health or welfare automation that replaces human caseworkers with algorithmic triage.


7. The danger of “answers before you ask”: erosion of agency and scrutiny

What the passage says, simply: Preemptive answers sound convenient but risk removing deliberation and contestation.

Why this matters:

  • If systems surface a “best” answer and bury alternatives, users may stop seeking second opinions or collective deliberation.

  • Hidden ranking criteria and opaque models can enforce biases without public knowledge.

Practical manifestations:

  • A search result presented as “the answer” that downplays minority perspectives or local community knowledge.

  • An assistant’s default recommendation that becomes de facto policy in everyday life (e.g., health, finance), crowding out pluralism.


8. Reasoned conclusion — what to notice and how to respond

Key lesson: Google’s public boasting about reach and ambition is not mere corporate pride. It is an argument that technical capacity justifies broader control over social life. The rhetoric of benevolence (“solve big human problems”) is paired with a technical plan: gather total data, model people as objects, and then use algorithmic systems to preempt choices.

Why this should concern us:

  • Concentration of “division of learning” in private hands undermines democratic oversight.

  • Preemptive systems can erode personal agency, privacy, and plural deliberation.

  • The framing of people as data objects risks normalizing decisions made by opaque machine logic.

Practical steps forward:

  • Demand transparency: who trains models, with what data, and how are decisions made?

  • Insist on human-in-the-loop safeguards for domains that affect rights, welfare, and justice.

  • Support regulation that prevents monopolistic control over the infrastructure of social learning (data, models, delivery systems).

  • Encourage public, open alternatives (community data trusts, public assistants) that prioritize democratic values over profit.


DATE 05.12.2025 / PAGE 253–255

Zuckerberg’s Promise and the Price of Totality: How Social Graphs, Cheap Internet, and Utopian Rhetoric Fold Society Into Big Other

1) Quick summary — what the passage is saying

In simple terms: Mark Zuckerberg argues Facebook can—and should—map, connect, and serve the whole world. He treats people’s shared data as a growing resource (“behavioral surplus”) that will keep multiplying, and he frames Facebook’s mission as building the next stage of human community: a global social infrastructure that supplies meaning, comfort, and moral validation. But this utopian promise hides a cost: to deliver these miracles, Big Other must expand toward “totality,” erasing frictions, boundaries, and independent sources of social authority.


2) Behavioral surplus grows—and why that matters

What it means: Zuckerberg says people will keep sharing more data at an exponential rate. That extra data—what Zuboff calls “behavioral surplus”—is what platforms use to predict, influence, and monetize behaviour.

Why it matters: More shared data = better prediction models = more power to shape what people do, think, buy, and feel.

Practical manifestations

  • A messaging app becomes a source of political targeting because every forwarded post shows who shares which ideas.

  • Photo uploads let algorithms learn places, faces, and habits, enabling ads and recommendations that feel eerily tailored.

  • In markets from the USA to India, platforms monetize this surplus by selling attention or targeted services.


3) The social graph as the new map of the web

What it means: Zuckerberg claims Facebook’s map of social connections will be more useful than old hyperlink structures for guiding people’s journeys online.

Why it matters: If social relations determine visibility and access, then platforms can shape what knowledge spreads and which voices are amplified.

Practical manifestations

  • News that fits your network’s tastes is amplified; minority viewpoints get buried.

  • Recommendation engines push content favored by your friends or similar groups, reinforcing echo chambers.

  • Small businesses reliant on social visibility can be boosted or throttled by platform algorithms.


4) Affordable internet and the argument for inclusion — with strings attached

What it means: Offering cheap or free internet expands the pool of users and therefore the behavioral surplus. It’s framed as benevolence—bringing connection to “every person in the world.”

Why it matters: Access is vital, but the business model often ties access to a platform’s data capture and influence.

Practical manifestations

  • Free or subsidized access (zero-rated services) helps people in low-income regions (parts of Africa, India), but often only to curated content and services controlled by the platform.

  • In return for connectivity, users may give up privacy and choice: the network becomes a funnel feeding Big Other.


5) The utopian pitch: meaning, purpose, and “global community”

What it means: Zuckerberg promises that technology can heal modern anxieties by supplying community, purpose, and comfort at scale—replacing old stabilizing institutions.

Why it matters: This is powerful rhetoric: people want meaning and belonging. But if a private company supplies those needs, it gains extraordinary influence over cultural values and social norms.

Practical manifestations

  • People treat platform interactions as primary forms of friendship and validation.

  • Political mobilization and community support become mediated predominantly through corporate platforms.

  • Cultural trends and moral signals are shaped by platform affordances (likes, shares, amplified posts).


6) The cost of the “magical age”: Big Other must expand to totality

What it means: To realize these promises, the surveillance system must reduce friction and expand data capture everywhere—social relations, purchases, movements, conversations.

Why it matters: Removing boundaries means fewer checks and less independent social authority. Where democratic institutions, laws, civil society, or community norms once constrained power, now algorithmic design and corporate incentives can dominate.

Practical manifestations

  • Algorithmic moderation replaces public debate about speech norms.

  • Municipal services integrated with private platforms hand governance tasks to corporations.

  • Personal data used for commercial ends can determine credit, employment, and reputation without public recourse.


7) Power yearns for totality — authority is the counterweight

What it means: The passage reminds us of Goethe’s sorcerer’s apprentice: unchecked power runs wild. Corporations push for total reach; only human institutions and moral authority can stop runaway effects.

Why it matters: Without active authority—laws, regulation, civic norms, and individual moral judgment—the platform’s broom (data-power) will keep multiplying consequences beyond control.

Practical manifestations

  • Regulations (privacy laws, antitrust actions) can slow or redirect expansion.

  • Civil society watchdogs and journalism can expose harms.

  • Strong public infrastructure (open standards, public media) gives citizens alternatives.


8) Reasoned conclusion — what the passage asks us to notice and what to do

Key insight: Zuckerberg’s rhetoric mixes genuine gains (connection, tools, knowledge) with an implicit claim: society should let a private platform become the organizer of collective life. That claim converts social goods into corporate design problems and treats human relations as inputs for prediction and profit. The danger is not only surveillance but the privatization of social authority.

Practical steps to protect democratic life

  1. Demand transparency: Platforms must reveal how social graphs and recommendation systems work and how data is used.

  2. Preserve friction where needed: Not all convenience is virtuous—frictions (deliberation, consent steps, human review) protect rights.

  3. Invest in public alternatives: Support public communication infrastructures, local platforms, and community data trusts.

  4. Regulate purposefully: Laws should limit total capture (data minimization), ensure portability, and enforce human oversight for consequential decisions.

  5. Rebuild social authority: Strengthen schools, unions, community groups, and public media so meaning and moral validation are not solely platform-provided.

Final thought: Technology can indeed widen what is possible. But when the promise of universal connection and comfort is used to justify the annexation of social life into corporate systems, democracy and human dignity must be the metrics that decide what we accept—not the applause of a developer conference.

   

  "V. Confluence as Society
Microsoft scientists have been working for years on how to take the same logic of automated preemptive
control at the network’s edge and transpose it to social relations. As Nadella observed in 2017, if “we”
can do this in a “physical place,” it can be done “everywhere” and “anywhere.” He advised his audience
of applied utopianists, “You could start reasoning about people, their relationship with other people, the
things in the place....”
29

The imaginative range of this new thinking is demonstrated in a 2013 Microsoft patent application
updated and republished in 2016 and titled “User Behavior Monitoring on a Computerized Device.”
30
With conspicuously thin theory complemented by thick practice, the patented device is designed to
monitor user behavior in order to preemptively detect “any deviation from normal or acceptable behavior
that is likely to affect the user’s mental state. A prediction model corresponding to features of one or more
mental states may be compared with features based upon current user behavior.”    

   

V. Confluence as Society

Microsoft’s new direction: “preemptive control” now reaches directly into social relations


1. A simple summary of this passage

This passage explains that Microsoft’s scientists have spent years working out how the logic by which machines on a network automatically recognize and control things in advance can be applied to social relations as well.
In other words:
if machines can understand human behaviour in a physical space and respond preemptively, they can do the same everywhere and anywhere.

The same idea surfaces in Microsoft’s 2013 patent, in which machines try to distinguish “normal” from “abnormal” behaviour and even to infer the user’s mental state.


2. The real meaning: machines are no longer mere devices but systems that “reason” about human relationships

Satya Nadella says:

“If we can do this in a physical place, we can do it anywhere... and we can even begin to ‘reason’ about who people are, what their relationships with one another are, and how they relate to the things in that place.”

Meaning:

  • Machines will no longer be merely things we use.

  • They will try to understand whom you are talking to, how you are feeling, what your relationship with someone is like, and which mental patterns your activities produce.

  • In other words, technology now wants to render social relations themselves as data and to predict them.


3. The 2013–2016 Microsoft patent: “User Behavior Monitoring”

Microsoft filed a patent titled:
“User Behavior Monitoring on a Computerized Device.”

What does the patent say?

The device continuously monitors your activities so that it can estimate:

  • what your “normal behaviour” is,

  • when you are behaving “abnormally” or “unacceptably,”

  • which behaviour is likely to affect your mental state negatively,

  • which mental states your current activity points toward: sadness? anger? stress? distraction? fear?

How will it do this?

  • your typing patterns,

  • your movements on the screen,

  • the way you use apps,

  • cues from your face, voice, and emotions,

  • your social media activity,

  • a database of your past behaviour

All of these are matched against a “prediction model” to determine what your mental state is or is about to become.


4. “Thin theory, thick practice”: what does that mean?

The passage says that the theory behind this patent is very thin: it rests on no solid scientific understanding of what “normal” or “abnormal” behaviour actually is.
Its practice, however, is thick and deep: the machines comb through your entire behaviour and begin predicting your future.

What is the problem?

  • The machine will decide whose behaviour counts as “normal.”

  • Its estimate of your mental state may be wrong.

  • On the basis of a wrong judgment, machines can affect a person’s job, relationships, and opportunities.

  • Pressure can build to change one’s behaviour to match the “prediction.”


5. Predicting behaviour and mental states: some real-world examples

1. Monitoring in office software

AI may read the speed of your email writing, how much you speak in meetings, or slow replies as signs of “stress.”

2. Mood prediction on smartphones

Your phone may infer that you are sad because you changed your music, spoke to fewer people, or lingered on the screen late into the night.

3. Behaviour analysis on social media

Instagram or Facebook may read a drop in your posting as a sign of anxiety or depression.

4. AI monitoring in schools

AI watches the direction of children’s eyes and decides that a student’s “attention is wandering.”

5. Policing and security systems

“Unusual movements” in a place are read as a sign of danger—even if it is normal for that community.


6. The broader ambition hidden behind this technology

This is not merely assistance for the user.
It is a step toward understanding society, converting social relations into data, and controlling society.

The aims are:

  • to recognize behaviour in advance,

  • to understand mental states,

  • and then to take “influencing” steps: suggestions, warnings, limits, recommendations.

This is “instrumentarian power”: its goal is certainty about behaviour, not control over thought.


7. Reasoned conclusion: what direction is this giving society?

The passage makes clear that:

  1. Tech companies no longer merely build devices; they are trying to model and understand society’s relationships.

  2. The goal of behavioural monitoring is to predict behaviour and then to change it.

  3. Machines will decide what is “normal” and what is “abnormal,” and that is dangerous.

  4. By converting mental and social life into data, companies seek to establish “preemptive control” over society.

  5. This deeply affects democracy, freedom, and privacy.

Lessons for society

  • Adopt technology, but with open eyes.

  • Rules, ethics, and democratic control are essential.

  • Deciding what counts as “abnormal behaviour” cannot be left to a machine.

  • Human diversity must be protected, so that machines do not press us all into a single mould.

Ultimately, the passage shows that the coming struggle is not only over data or privacy:
it is over who holds the right to define society and to decide what counts as “normal behaviour.”

The scientists propose an application that can sit in an operating system, server, browser, phone, or wearable device continuously monitoring a person’s behavioral data: interactions with other people or computers, social media posts, search queries, and online activities. The app may activate sensors to record voice and speech, videos and images, and movement, such as detecting “when the user engages in excessive shouting by examining the user’s phone calls and comparing related features with the prediction model.”

All these behavioral data are stored for future historical analyses in order to improve the prediction model. If the user normally restrains the volume of his or her voice, then sudden excessive shouting may indicate a “psychosocial event.” Alternatively, the behavior could be assessed in relation to a “feature distribution representing normal and/or acceptable behavior for an average member of a population... a statistically significant deviation from that behavior baseline indicates a number of possible psychological events.” The initial proposition is that in the event of an anomaly, the device would alert “trusted individuals” such as family members, doctors, and caregivers. But the circle widens as the patent specifications unfold. The scientists note the utility of alerts for health care providers, insurance companies, and law-enforcement personnel. Here is a new surveillance-as-a-service opportunity geared to preempt whatever behavior clients choose.


Continuous Behavioural Surveillance: Turning Society Into Data in the Name of a New “Service”


1. What do the scientists propose?

The scientists propose an app/system that runs in:

  • the operating system

  • servers

  • the browser

  • the mobile phone

  • wearable devices

and there continuously monitors a person’s behavioural activity.

Think of it this way:
your phone, your watch, your laptop—all together “reading” you 24×7.


2. What exactly will this app monitor?

(1) The person’s conversations and online interactions

  • your calls

  • emails

  • chats

  • social media likes, posts, shares

  • your online conversations with others

(2) Your search activity

  • What did you search for?

  • When did you search?

  • How often did you search?

(3) The device’s sensors

The app can switch on your phone’s or device’s sensors to:

  • record your voice

  • capture video and photos of you

  • record even your gait, your speed, the movements of your hands and feet

Example:

One point notes that the app can detect “excessive shouting”:
it will measure the voice level in your calls and compare the related features with the prediction model.


3. Where does this data go?

Every activity, every voice level, every search:
everything is stored so that:

  • it can be analysed historically in the future,

  • and the “prediction model” can be made faster and more accurate.

In other words, you will not know it, but the machine keeps building your complete digital “history.”


4. Who decides what counts as “normal” versus “abnormal” behaviour?

What does the model do?

The model first learns:

What are your normal habits?
For example:

  • you usually speak in a low voice

  • you usually do not answer the phone late at night

  • you go to the gym five times a week

  • you wake up at 7 a.m. every day

Then the model checks:

if your activity has suddenly changed,
does that signal a mental or emotional occurrence (a “psychosocial event”)?


5. Comparison with the “average population”

A dangerous element is that the model will also compare you with:

the behaviour of the “average” population.

That is:
if a large population’s normal pattern is one thing and your behaviour differs, the machine will treat this as a “deviation” and may file it under “possible psychological events.”

The problem

  • diversity can be mistaken for illness

  • individual behaviour is judged against a machine-made “average”

  • the machine itself ends up defining what is “normal”
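
To make the patent’s comparison logic concrete, here is a minimal illustrative sketch of how a “statistically significant deviation” from a personal baseline or from a population feature distribution might be flagged. The patent specifies no formula; the z-score test, the `assess_call_volume` function, the 3-sigma threshold, and all numbers below are assumptions for illustration only.

```python
import statistics

def zscore(value, history):
    """Standard score of `value` against a behavioural history."""
    mean = statistics.mean(history)
    spread = statistics.pstdev(history)
    return 0.0 if spread == 0 else (value - mean) / spread

def assess_call_volume(volume_db, personal_history, population_sample,
                       threshold=3.0):
    """Flag a 'psychosocial event' if today's call volume deviates sharply
    from the user's own baseline OR from the population distribution
    (the patent describes both comparisons)."""
    if (abs(zscore(volume_db, personal_history)) > threshold
            or abs(zscore(volume_db, population_sample)) > threshold):
        return "alert"  # patent: notify 'trusted individuals' -- a circle
                        # that later widens to insurers and law enforcement
    return "normal"

# Invented data: a habitually quiet speaker (~55 dB) suddenly shouting.
personal = [54, 56, 55, 53, 57, 55, 54]
population = [60, 62, 58, 61, 59, 63, 60]
print(assess_call_volume(78, personal, population))  # -> "alert"
```

The sketch also shows why the population comparison is the more dangerous one: a person who is simply louder than the average would trip the alarm without any change in their own behaviour.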


6. At first: “trusted individuals”

The initial idea is that if some “abnormal” activity is detected, an alert is sent to:

  • family members

  • doctors

  • caregivers

That sounds benign,
as if it were for safety or for mental health.


7. But the circle widens rapidly

In the later lines of the patent, the circle of “trusted individuals” suddenly expands:

  • health care providers

  • insurance companies

  • law enforcement

  • other commercial entities

Why is this serious?

Because it means that:

  1. your private mental state,

  2. your “abnormal” behaviour,

  3. analyses of your private calls and your voice

can reach corporations, insurance agencies, and the police
without your knowledge.


8. “Surveillance-as-a-service”: surveillance becomes a service

This model turns surveillance into a new kind of service:

  • insurance companies can classify you as “risky” or “at risk” and raise your premium

  • employers can use it for workplace monitoring

  • police can treat someone as a “potential threat” on the basis of “abnormal behaviour”

  • psychological data can accumulate at scale in the hands of companies

In plain words:

your personality, your mood, your mental state—all of it becomes data that can be sold in the market.


Reasoned conclusion

This passage exposes the fact that technology is now moving:

  • not merely toward our convenience,

  • but toward measuring and controlling our behaviour, our emotions, and our mental states.

Its dangerous consequences

  1. Machines will decide what is “normal” and what is not.

  2. Personal freedom and diversity are put at risk.

  3. Private mental data can become an instrument of corporate and state control.

  4. Misuse by law enforcement and insurance companies becomes possible.

  5. Surveillance becomes an industry: surveillance-as-a-service.

A warning for society

Without setting limits on technology,
we are handing over our most private dimensions—our emotions and our behaviour—
to companies.

This is not merely a question of privacy:
it is a question of human dignity and freedom.

Microsoft’s patent returns us to Planck, Meyer, and Skinner and the viewpoint of the Other-One. In their physics-based representation of human behavior, anomalies are the “accidents” that are called freedom but actually denote ignorance; they simply cannot yet be explained by the facts.

Planck/Meyer/Skinner believed that the forfeit of this freedom was the necessary price to be paid for the “safety” and “harmony” of an anomaly-free society in which all processes are optimized for the greater good. Skinner imagined that with the correct technology of behavior, knowledge could preemptively eliminate anomalies, driving all behavior toward preestablished parameters that align with social norms and objectives. “If we could show that our members preferred life in Walden Two,” says Frazier-Skinner, “it would be the best possible evidence that we had reached a safe and productive social structure.”

Anomaly = the illusion of freedom?

The deep connection between the Planck–Meyer–Skinner viewpoint and Microsoft's patent


1. What this passage is saying, in simple terms

This passage explains that the thinking behind Microsoft's new "behaviour-prediction" system actually descends from the older assumptions of scientists such as Planck, Meyer, and B.F. Skinner.

Their view was this:

Whatever deviation (anomaly) appears in human behaviour is not really 'freedom' but 'ignorance'.
That is:
we fail to understand it only because science has not yet advanced far enough.

This view treats the human being like a machine and claims:
if all the rules and variables were known, every human behaviour could be predicted in advance.


2. Their viewpoint: "an anomaly is no mystery, just something not yet understood"

Planck, Meyer, and Skinner held that:

  • A person behaves "differently" only when we lack complete knowledge of the mental or social causes behind the behaviour.

  • The day that knowledge arrives, nothing like freedom will remain, because all behaviour will have become predictable.

The viewpoint's boldest claim:

Human freedom is an illusion born of our scientific limitations.


3. Behind it, the promise of "safety" and "harmony"

They believed we ought to pay the price of surrendered freedom so that:

  • society can become "anomaly-free",

  • no unexpected behaviour occurs in it,

  • everything stays stable, safe, and "predictable",

  • and the whole of society can run like an "optimized machine".

Surveillance capitalism makes this very same promise of "safety" and "harmony".


4. Skinner's model society: Walden Two

In his novel Walden Two, Skinner imagined a society in which:

  • human behaviour is controlled completely through the correct technology,

  • people are not "free"; they run on "predetermined habits",

  • the whole society is like a scientific laboratory,

  • and the purpose of each person's actions is set in "the greater interest of society".

Skinner's argument:

If the members of the society say that they prefer this life,
it proves that behaviour control has succeeded
and the society has become "safe" and "productive".


5. How Microsoft's patent carries this thinking forward

The scientists above reasoned as follows:

  • treat behavioural deviation as "error",

  • treat freedom as a "misunderstanding",

  • "optimize" behaviour,

  • and control the social order.

Microsoft's patent does the same (a minimal sketch of the detection step follows below):

  • it searches for "abnormal behaviour",

  • labels it an "event" or a "threat",

  • applies techniques to remove or correct it,

  • and gradually tries to fit society into "algorithmic norms".

In other words: a digital edition of Walden Two.
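The patent describes its detection step in prose, not code, so the following is only a rough sketch of the underlying idea that "abnormal" means "statistically deviant". The signal, the 2-sigma threshold, and the function flag_anomalies are invented for exposition and are not drawn from the patent itself.

```python
import statistics

# Illustrative sketch: "anomaly" as deviation from the statistical norm.
# The data (messages sent per day) and the 2-sigma rule are assumptions.

def flag_anomalies(history: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of values more than `threshold` std-devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [i for i, x in enumerate(history)
            if stdev > 0 and abs(x - mean) / stdev > threshold]

# Whoever picks `threshold` is deciding what counts as "normal":
week = [40, 42, 38, 41, 39, 43, 160]  # one unusual day of activity
print(flag_anomalies(week))  # -> [6]: the outlier day is flagged as an "event"
```

Note what the sketch makes visible: the machine never asks why the seventh day was different; deviation alone is enough to produce an "event".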


6. Why is this viewpoint dangerous?

(1) Human freedom is treated as an "error"

This view says that
a person's free, unpredictable behaviour,
which is to say their humanity,
is a fault to be corrected.

(2) The space for diversity and dissent disappears

Everyone comes under pressure to fit an averaged machine model.

(3) Society starts becoming "predictable" rather than "thoughtful"

Democracy stays alive when people are unexpected, free, and critical.
When all behaviour is settled in advance, the core of democracy is extinguished.

(4) Centralization of power

Whoever defines behaviour defines society,
and this power passes not to governments
but to private companies.


7. Reasoned conclusion: what does this passage teach us?

This passage explains that:

  • Microsoft's technology is not mere "monitoring"
    but an attempt to redefine the very concept of human freedom.

  • The old scientific thinking of Planck, Meyer, and Skinner
    is being applied to digital society today through Big Tech.

  • In this thinking, freedom = error,
    deviation = threat,
    and human behaviour = a machine process fit to be controlled.

Ultimately:

If society is made "anomaly-free" and "predictable",
the human being becomes a component of the machine,
and human freedom, diversity, and creativity slowly fade away.

This is the deepest danger of surveillance capitalism:
it turns society into a machine,
and the human being into an algorithmic atom.

In this template of social relations, behavioral modification operates just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behavior consistent with “correct policies.” Thus, Facebook learns that it can predictably move the societal dial on voting patterns, emotional states, or anything else that it chooses. Niantic Labs and Google learn that they can predictably enrich McDonald’s bottom line or that of any other customer. In each case, corporate objectives define the “policies” toward which confluent behavior harmoniously streams.

The machine hive—the confluent mind created by machine learning—is the material means to the final elimination of the chaotic elements that interfere with guaranteed outcomes. Eric Schmidt and Sebastian Thrun, the machine intelligence guru who once directed Google’s X Lab and helped lead the development of Street View and Google’s self-driving car, make this point in championing Alphabet’s autonomous vehicles. “Let’s stop freaking out about artificial intelligence,” they write.

The new template of behaviour modification: where machines steer society in the "right direction"


1. Behavioural modification now runs below the level of awareness

The passage says that today's technology works just below the threshold of human consciousness.
This means:

  • Your behaviour is being changed without your even noticing.

  • You are given "nudges" so that you do precisely what the "correct policies" require.

  • You receive rewards at one moment, mild punishments at another, prompts and pressure at others,
    without ever feeling any of it.

Examples (a toy version of the underlying reward loop is sketched after this list):

  • A fitness app "rewards" you with a colour change when your step goal is met.

  • Social media uses "likes" to train you to post certain kinds of content.

  • Platforms surface content that gradually shifts your mood, your politics, and your purchases.
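To make the loop concrete, here is a hedged toy sketch of the reward mechanism as a simple epsilon-greedy bandit. PROMPTS, TRUE_RESPONSE_RATE, and every number are invented; no platform's actual system is being described, only the general shape of reinforcement-driven nudging.

```python
import random

# Toy epsilon-greedy "nudge" loop: try prompts, observe engagement,
# and steadily favour whichever prompt shapes behaviour best.
# Prompt names and simulated response rates are invented for this sketch.

PROMPTS = ["streak_reminder", "friend_activity", "limited_offer"]
TRUE_RESPONSE_RATE = {"streak_reminder": 0.10,
                      "friend_activity": 0.25,
                      "limited_offer": 0.15}

counts = {p: 0 for p in PROMPTS}
value = {p: 0.0 for p in PROMPTS}  # running estimate of each prompt's pull

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-known prompt; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(PROMPTS)
    return max(PROMPTS, key=lambda p: value[p])

for _ in range(10_000):
    p = choose()
    engaged = random.random() < TRUE_RESPONSE_RATE[p]  # simulated user reaction
    counts[p] += 1
    value[p] += (engaged - value[p]) / counts[p]  # incremental mean update

# The loop converges on the most behaviour-shaping prompt with no model
# of the user at all, only observed compliance:
print(max(PROMPTS, key=lambda p: value[p]))  # almost always "friend_activity"
```

The design point is that nothing here needs to understand you; iterated trial, observation, and reinforcement are sufficient to steer behaviour.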


2. What has Facebook already learned?

Facebook's algorithm is now so data-rich that it knows:

  • how people's voting patterns can be shifted,

  • which content can make them angry, happy, or afraid,

  • which posts can push them toward agitation,

  • and what to show them so that they change their opinion on a given issue.

All of it "predictably", that is, estimated in advance.

Example:

Cambridge Analytica showed how Facebook's data and algorithms
were used to influence the emotions and voting of millions of American voters.


3. How do Niantic Labs and Google enrich other companies?

Niantic (the maker of Pokémon Go) and Google found that:

  • if a shop is turned into a "gym" or a "PokéStop" inside the game,
    foot traffic there rises 10 to 20 times.

This means:
they can also predict how much the sales of a customer like McDonald's will grow.

And it is not only a matter of prediction; they can steer it as well.

This is "confluence": machine learning "aligns" society's behaviour so that:

  • the customer's sales rise,

  • the brand's visibility grows,

  • and the company's objectives are met.


4. Corporate objectives = "policies"

Who decides the "correct policies" in this model?

Answer: the companies.

These companies decide:

  • which kinds of behaviour are "proper",

  • which consumption is "profitable",

  • which political sentiment is convenient for them,

  • and on which issues society should be made more sensitive or more indifferent.

The algorithms then gradually drift public behaviour in that direction,
exactly like the current of a river.


5. The machine hive: the collective intelligence of machines

The passage uses the term "machine hive".
It means:

  • millions of algorithms, sensors, apps, devices, and data centres
    working together as a single collective mind.

Its purpose:

to eliminate "chaos".

What is this chaos?

  • unpredictable human behaviour

  • independent opinion

  • unplanned decisions

  • dissent

  • sudden change

The machine hive's goal is to make everything
predictable, stable, and controllable.


6. Schmidt and Thrun: "stop freaking out about AI"

Eric Schmidt and Sebastian Thrun, two of Google's leading technical minds, write:

"Let's stop freaking out about artificial intelligence."

But this reassuring sentence conceals a larger problem:

  • They are speaking of a form of AI in which machines will "fix" the uncontrolled aspects of life.

  • It is an argument that leads toward total control.

In plain terms:

What they want to say is:
let AI manage society,
because human decisions are chaotic, emotional, and error-prone.

Yet this very argument becomes the intellectual foundation for seating the machine hive above society.


Reasoned conclusion: whose hands now hold the direction of society?

This passage shows that:

1. Behaviour modification now targets whole societies, not individuals.

Facebook, Google, Niantic: all of them know they can steer social emotions, elections, and consumer habits.

2. Corporate objectives take on the form of "correct policies".

These are not decided democratically;
they are decided by the companies that hold the data.

3. The machine hive takes the place of human freedom.

Uncertainty, independent decisions, diversity:
all of it is treated as a "problem".

4. Saying there is "no need to fear" AI is an ideological strategy,

so that people accept the control and do not question it.

5. This future is not merely technological; it is also a political and moral crisis.

Because in it:

  • machines will take the place of human judgment,

  • predictability will take the place of freedom,

  • and uniformity will take the place of democratic diversity.

Schmidt and Thrun emphasize the “crucial insight that differentiates AI from the way people learn.”[32]
Instead of the typical assurances that machines can be designed to be more like human beings and
therefore less threatening, Schmidt and Thrun argue just the opposite: it is necessary for people to become
more machine-like. Machine intelligence is enthroned as the apotheosis of collective action in which all
the machines in a networked system move seamlessly toward confluence, all sharing the same
understanding and thus operating in unison with maximum efficiency to achieve the same outcomes. The
jackhammers do not independently appraise their situation; they each learn what they all learn. They each
respond the same way to uncredentialed hands, their brains operating as one in service to the “policy.”
The machines stand or fall together, right or wrong together. As Schmidt and Thrun lament,
When driving, people mostly learn from their own mistakes, but they rarely learn from the mistakes of others. People collectively
make the same mistakes over and over again. As a result, hundreds of thousands of people die worldwide every year in traffic
collisions. AI evolves differently. When one of the self-driving cars makes an error, all of the self-driving cars learn from it. In fact,
new self-driving cars are “born” with the complete skill set of their ancestors and peers. So collectively, these cars can learn faster
than people. With this insight, in a short time self-driving cars safely blended onto our roads alongside human drivers, as they kept
learning from each other’s mistakes.... Sophisticated AI-powered tools will empower us to better learn from the experiences of
others.... The lesson with self-driving cars is that we can learn and do more collectively.[33]

This is a succinct but extraordinary statement of the machine template for the social relations of an
instrumentarian society. The essence of these facts is that first, machines are not individuals, and second,
we should be more like machines. The machines mimic each other, and so must we. The machines move
in confluence, not many rivers but one, and so must we. The machines are each structured by the same
reasoning and flowing toward the same objective, and so must we be structured.
The instrumentarian future integrates this symbiotic vision in which the machine world and social world operate in harmony within and across “species” as humans emulate the superior learning processes of the smart machines. This emulation is not intended as a throwback to mass production’s Taylorism or Chaplin’s hapless worker swallowed by the mechanical order. Instead, this prescription for symbiosis takes a different road on which human interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another, just like the self-driving cars and the policy-worshipping jackhammers.

In this way, the machine hive becomes the role model for a new human hive in which we march in
peaceful unison toward the same direction based on the same “correct” understanding in order to
construct a world free of mistakes, accidents, and random messes. In this world the “correct” outcomes
are known in advance and guaranteed in action. The same ubiquitous instrumentation and transparency that
define the machine system must also define the social system, which in the end is simply another way of
describing the ground truth of instrumentarian society.
In this human hive, individual freedom is forfeit to collective knowledge and action. Nonharmonious
elements are preemptively targeted with high doses of tuning, herding, and conditioning, including the full
seductive force of social persuasion and influence. We march in certainty, like the smart machines. We
learn to sacrifice our freedom to collective knowledge imposed by others and for the sake of their
guaranteed outcomes. This is the signature of the third modernity offered up by surveillance capital as its
answer to our quest for effective life together.

Become like the machines: the blueprint for pushing human society toward the machine hive


1. Schmidt and Thrun's "crucial insight": the fundamental difference between AI and human learning

Schmidt and Thrun say that the real key to understanding AI is not that "machines should become more human",
but that humans will have to become more machine-like.

They argue:

  • A person learns mostly from their own experience alone.

  • One person's learning does not pass directly to another person.

  • So society repeats the same mistakes over and over.

Whereas:

  • when one machine errs, every machine learns from that error at once;

  • when one self-driving car learns something, the entire machine population has learned it;

  • the machines operate like "one brain".

This model does not make AI more human; it proposes recasting the human in the "machine mould".


2. The principle of the machine hive: all machines together, in one direction, under one logic

On their account:

  • Every machine in the network understands one and the same policy.

  • Every machine acts "in unison".

  • There is no separate opinion, no separate reasoning, no separate experience.

  • All make identical decisions and give identical responses.

For example:

  • A hundred jackhammers do not think; all of them run on the same "protocol".

  • If one machine falls into the wrong hands, all of them give the same response.

  • There is no individuality and no diversity; all are alike.

Schmidt and Thrun want human society to become the same.


3. Their argument: human learning is slow and machine learning fast, so our good lies in becoming machines

They say:

  • Humans learn from their own mistakes, but rarely from the mistakes of others.

  • That is why hundreds of thousands of people die in traffic collisions worldwide every year.

  • Machines, however, learn from one another's mistakes immediately.

  • A new self-driving car is "born" carrying the complete experience of every car before it.

  • Machines are therefore a "collective intelligence".

In plain terms (the sharing mechanism is sketched just below):

If humans learned from one another's mistakes as quickly as AI does, society would become as error-free as a machine.

This argument treats human freedom and human distinctiveness as forms of "error".
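Mechanically, the claim that a new car is "born" with the complete skill set of its ancestors amounts to every agent referencing a single shared policy. Here is a minimal sketch of that sharing under invented names (SharedPolicy, black_ice_on_bridge); real fleet-learning pipelines are far more elaborate, but the structural point is the same.

```python
# Illustrative sketch of fleet learning: every car references ONE shared
# policy, so a correction learned from one car's error applies to all.
# The scenario and names are invented for exposition.

class SharedPolicy:
    """One policy for the whole fleet; no car holds a private variant."""
    def __init__(self) -> None:
        self.known_hazards: set[str] = set()

    def learn_from_error(self, hazard: str) -> None:
        self.known_hazards.add(hazard)  # one mistake updates everyone

class Car:
    def __init__(self, policy: SharedPolicy) -> None:
        self.policy = policy  # a reference to the shared policy, not a copy

    def handles(self, hazard: str) -> bool:
        return hazard in self.policy.known_hazards

policy = SharedPolicy()
fleet = [Car(policy) for _ in range(1000)]

fleet[0].policy.learn_from_error("black_ice_on_bridge")  # one car's mistake

# Every existing car now handles the hazard...
print(all(car.handles("black_ice_on_bridge") for car in fleet))  # True
# ...and so does a car "born" afterwards, with its ancestors' full skill set:
print(Car(policy).handles("black_ice_on_bridge"))  # True
```

The cars "each learn what they all learn" precisely because there is no individual learner; this is the property the chapter argues is now proposed as a template for people.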


4. What does this idea actually propose? A human society structured like the machines

The passage states that this whole idea is the foundation of "instrumentarian society".

What holds among the machines?

  • no individual identity

  • no separate opinion

  • collective decisions

  • a single policy

  • zero opposition

  • zero contingency

And the same is proposed for humans:

  • our diversity is to be erased

  • our independent opinion becomes an "error"

  • our behaviour is to be "harmonized"

  • all move in one direction

  • all think toward one objective

  • one uniform policy applies to all

This is the machine template being readied for application to human beings.


5. The "human hive": humans begin to imitate the machines

The passage says this new society will be one where:

  • people think in "identical" ways

  • flow in the same direction

  • accept the same motivations

  • hold the same goals

  • and mistakes, opposition, friction, and uncertainty are all removed

In other words, humans will imitate the collective coordination of the machines.

This is not Taylorism (the old factory discipline)

It is something new:

  • It is a "machine-like collectivity" that reaches into human thought, learning, reaction, and emotional life.

  • It is a model that does not trap the human inside a machine; it remakes the human as a machine.


6. The collective march in the "right direction": the promise of an error-free society

In this new model:

  • the "correct decisions" are fixed in advance

  • the "correct direction" is determined in advance

  • society marches collectively in that direction

  • and mistakes, accidents, opposition, and dissent all come to an end

But what is the price of this "perfection"?


7. The greatest price: the end of individual freedom

The passage says plainly:

  • In this "human-machine hive", freedom must be surrendered.

  • "Nonharmonious" people are corrected preemptively, before any conflict can arise.

  • Dissent and disagreement are declared "abnormal".

  • Algorithms control people through "tuning, herding, and conditioning".

  • Society runs like a "predictable machine".

In plain terms:

Collective certainty in place of freedom.
Uniformity in place of diversity.
Algorithmic order in place of democratic complexity.


Final reasoned conclusion: the third modernity = the human imitating the machine

This passage shows that the "third modernity" offered by surveillance capitalism is, at bottom, a machine-modelled human society,

in which:

  • humans move like machines,

  • surrender freedom in order to avoid mistakes,

  • hold collective learning supreme,

  • find that individual thought and dissent have become "risks",

  • and find that the "correct" decisions are made not by any institution or person but by algorithms.

Its ultimate danger:

Society may become supremely efficient,
yet desperately poor in humanity.

The perfection the machines promise
comes at the price of human freedom, diversity, creativity, and moral autonomy.

That is the deepest warning of this entire passage.

