CHAPTER 15
I. The Priests of Instrumentarian Power
Applied utopian executives such as Page, Nadella, and Zuckerberg say very little about their theories, and what information is available tends to be scattered and superficial. But a contingent of data scientists and “computational social scientists” has stepped forward to fill this vacuum, producing detailed experimental and theoretical studies of the gathering momentum of instrumentarian power. Their work offers invaluable insight into the social principles of an instrumentarian society.
An outstanding example is the work of Alex Pentland, director of the Human Dynamics Lab at the MIT Media Lab. Pentland is among the rare applied utopian thinkers who, together with his students and collaborators, has not only explicitly articulated a theory of instrumentarian society but researched it and disseminated it widely, all in parallel with his prolific technical innovations and practical applications. The studies produced by this group are a contemporary signal of a worldview fast becoming commonplace among data scientists, whose computational theories and innovations exist in dynamic interaction with the progress of surveillance capitalism, as seen in the cases of Picard’s “affective computing” and Paradiso’s “digital omniscience.”
Few, however, reflect on the social consequences of their methods with Pentland’s insight and tenacity. His work gives us a precious opportunity to examine critically the governance assumptions, social principles, and social processes that define an instrumentarian society. My aim is to grasp the theory behind the practice, for surveillance capitalists are subsuming “society” as a “first-class object” for rendition, computation, modification, monetization, and control.
Pentland is the prolific author or co-author of hundreds of articles and research studies in data science and a prominent institutional figure who advises many organizations, among them the World Economic Forum, the Data-Pop Alliance, Google, Nissan, Telefonica, and the Office of the UN Secretary-General. Pentland’s research lab is funded by leading corporations, consultancies, and governments from around the world: Google, Cisco, IBM, Deloitte, Twitter, Verizon, the European Commission, the US government, the Chinese government, “and various other entities concerned about why we do not know what is going on in the world....”
1
Although Pentland is not alone in this field, he is regarded as a kind of high priest within this select group. Unlike Hal Varian, Pentland does not speak of Google as “we,” but his work is showcased prominently in surveillance-capitalist precincts, where it supplies substance and intellectual grounding for the legitimation of instrumentarian practices.
During one presentation at Google, where Pentland serves on the advisory board of the Advanced Technology and Projects group, former Pentland doctoral student and senior Google executive Brad Horowitz introduced his mentor as an “inspirational educator” with deep expertise across many disciplines, whose former students now lead both the theory and the practice of computational science.
2
Pentland is often called “the godfather of wearables,” especially of Google Glass. In 1998 he predicted that wearable devices “can extend one’s senses, improve memory, aid the wearer’s social life, and even help him or her stay calm and collected.”
3
Thad Starner, one of Pentland’s doctoral students, developed an early “wearable” device while at MIT. In 2010 Sergey Brin hired him to continue that research at Google, a project that ultimately produced Google Glass. More than fifty of Pentland’s doctoral students have gone on to spread the instrumentarian vision through top universities, industry research groups, and the thirty companies in which Pentland participates as co-founder, sponsor, or advisor. Each has applied some aspect of Pentland’s theory, analytics, and inventions to real people in organizations and cities.
4
Pentland’s academic stature and forceful intellectual persona lend legitimacy to a social vision that only a few decades ago alarmed intellectuals, government officials, and the general public. Most remarkable of all, Pentland has “completed” the social imagination that Skinner could only glimpse, now backed by big data, ubiquitous digital devices, advanced mathematics, an expansive theory, prestigious co-authors, institutional recognition, lavish funding, and high-placed corporate friends, and he has done so without provoking the worldwide resistance, moral outrage, and bitter backlash that once met Harvard’s outspoken behaviorist. That fact alone shows how deep the psychic numbing runs and how gravely we have lost our collective bearings.
Like Skinner, Pentland is a designer of utopias, a thinker quick to generalize from observations of creatures to the whole of humanity. He is also an able architect of the practical structure and computational challenges of the instrumentarian order. Pentland calls his social theory “social physics,” a notion that marks him, by way of Planck, Meyer, and MacKay, as this century’s new B. F. Skinner.
5
And although Pentland never names his old behaviorist forebear, his book Social Physics summons Skinner’s social vision back into the twenty-first century, now equipped with the instruments that Skinner could not obtain in his lifetime. Pentland legitimates the instrumentarian impulse with research and theory that rest boldly on Skinner’s moral reasoning and epistemology, that is, on the viewpoint of the “Other-One.”
Professor Pentland also began his intellectual journey where Skinner began his, in the study of animal behavior. Where Skinner fixed on the minute behavior of hapless individual organisms, Pentland focused on the aggregate behavior of animal populations. As a part-time researcher at NASA’s Environmental Research Institute while still an undergraduate, he devised a method for estimating Canada’s beaver population from space by counting beaver ponds: “You’re looking at their lifestyle, and you get an indirect measure.”
6
The experience drew Pentland toward the remote and detached gaze that he would later call the “God’s eye view.” You have probably felt it yourself, looking down at a city from an airplane window: all the joys, sorrows, struggles, and activity suddenly resemble the bustle of ants. From above, the sense of “we” dissolves, replaced by the viewpoint of the “Other-One.” This perspective became the foundation of Pentland’s science as he learned to apply MacKay’s principles of remote observation and “telestimulation” to human beings: “If you watch people talking across a room, you can learn a great deal... It’s like watching beavers from space, or like Jane Goodall watching gorillas. You observe from a distance.”
7
(The remark is a gratuitous swipe at Goodall, whose genius lay precisely in understanding the gorillas not as “other” but as part of “us.”)
The God’s eye view was to become an indispensable element in the imagination of instrumentarian society, but the full picture emerged slowly over years of many small experiments. In the next section we follow that journey: how Pentland and his students learned to render, measure, and compute social behavior. On that foundation we turn to Pentland’s Social Physics, which seeks to redefine society as an instrumentarian “hive mind,” like Nadella’s machines, but now elaborately theorized and suffused with echoes of Skinner’s original concepts, values, worldview, and vision of the human future.
II. When the “Big Other” Swallows Society: The “Rendition” of Social Relations
Skinner bitterly lamented that the study of human behavior lacked the “instruments and methods” available to physicists in the physical sciences. As if in answer to that complaint, Pentland and his students have devoted the past two decades to inventing instruments and methods capable of converting the whole of human behavior, and social behavior above all, into highly predictive math.
An early and important milestone on this path came in 2002, when Pentland published a study with his then doctoral student Tanzeem Choudhury. In it they wrote:
“As far as we know, there are currently no available methods for automatically modeling face-to-face human interactions. The reason is probably that it is extremely difficult to obtain reliable measurements from real-life interactions within a community.... We believe that sensing and modeling physical interactions among people is an as yet untapped resource.”
8
In other words, despite the ubiquity of data and computers, the “social” remained a slippery domain that still eluded capture.
In answer to this challenge the researchers introduced the “sociometer,” a wearable sensor that combined a microphone, an accelerometer, a Bluetooth connection, analytical software, and machine learning techniques. Its purpose was to infer “the structure and dynamic relationships” within human groups.
9
(Choudhury later went on to lead the People Aware Computing group at Cornell University.)
From that point on, Pentland and his teams worked persistently to crack and master the code of the instrumentation and instrumentalization of social processes, in the name of an all-devouring social vision whose very foundation is comprehensive, fine-grained behavior modification.
Here the “Big Other” emerges: a power that views society from the outside, measures it, converts it into computation, and ultimately reorganizes it as an object fit for control.
In a 2005 collaboration with his doctoral student Nathan Eagle, Pentland once again underlined the problem of inadequate data about human society. The study pointed to the “bias, sparsity of data, and lack of continuity” in social science’s approaches to human behavior. The consequence was the absence of “dense and continuous data,” which also kept the machine learning and agent-based modeling communities from building more comprehensive and accurate predictive models of human dynamics.
10
Pentland insisted that even the comparatively new field of “data mining” could not capture the “actual dynamics” of the conversations and face-to-face interactions required for a deep understanding of social behavior.
11
At the same time, he recognized that a rapidly growing share of human activity, from transactions to communication, was coming under computer mediation, above all because of the ubiquity of the mobile phone.
The team saw that this expanding “ubiquitous infrastructure” of mobile phones could be put to use, and that the data it yielded could be combined with the new data flows from their wearable behavioral monitoring devices. The result was a radical solution that Pentland and Eagle named “reality mining.”
Mentor and student demonstrated that mobile phone data could be used “to uncover regular rules and structure in the behavior of both individuals and organizations.” This pushed the collection and analysis of behavioral surplus further still and pointed toward a major shift in the character of behavioral dispossession, which would no longer be confined to virtual experience but would extend into real, and ultimately social, experience.
12
As a technical and cultural milestone, the researchers’ declaration that “reality” itself had become a legitimate and practical field for the collection, discovery, extraction, rendition, datafication, analysis, and prediction of surplus, and for intervention, prepared the way for the new practices that would later take hold as the “reality business.”
Pentland and Eagle began their experiment with 100 students and faculty members affiliated with the MIT Media Lab, who were given 100 Nokia mobile phones loaded with special software. The project went on to become the basis of Eagle’s doctoral dissertation. The two researchers demonstrated the revelatory power of continuously collected behavioral data, which they validated against survey data gathered directly from each participant.
Their analyses yielded extraordinarily fine-grained and detailed portraits of individual and collective life, what the authors called “the social system.” They could identify regular patterns of time and place: where a person goes, what activities he or she pursues, which modes of communication he or she uses. On this basis they succeeded in predicting where someone would be and what he or she would be doing in the next hour with accuracy approaching 90 percent. They also produced highly accurate inferences about a person’s co-workers, casual acquaintances, and close relationships.
The team identified patterns of communication and interaction within workgroups, along with the Media Lab’s broader “organizational rhythms” and “network dynamics.”
(Eagle later became CEO of Jana, a mobile advertising company that offers free internet in emerging markets in exchange for behavioral surplus.)
As the theory and practice of reality mining matured in Pentland’s lab through his projects and doctrines, MIT Technology Review named “reality mining” one of its “10 Breakthrough Technologies” of 2008.
As Pentland put it:
“My students and I have developed two behavior-measurement platforms to accelerate the development of this new science. Today these platforms are generating vast quantities of quantitative data for hundreds of research groups around the world.”
13
This allegiance to velocity is no incidental detail; as we know, it is a central element of the art and science of applied utopianism. Pentland understands the rapid spread of the “Big Other” and instrumentarian power in terms of a “light-speed hyperconnected world” in which virtual crowds of millions of people from any corner of the globe can form “in just a few minutes.”
He regards the MIT community as the avant-garde of this future: gifted pioneers who have already adapted to this extreme velocity and who therefore offer a model for the rest of society. Reflecting on his students and colleagues, Pentland writes:
“I have also been able to watch how creative cultures must change in order to thrive in the hyperconnected, warp-speed world that MIT inhabits, an environment that the rest of the world is now entering.”
14
Pentland argues that his group’s adaptation to MIT’s norms of rapid deployment is only a foretaste of what is coming for all of us.
MIT Technology Review’s enthusiastic 2008 tribute to “reality mining” called attention to the then still new and unsettling facts of behavioral surplus. The magazine wrote:
“Some people are nervous about leaving digital breadcrumbs behind. But Sandy Pentland revels in it.”
Pentland wants mobile phones to gather “even more information” about their users. In his words:
“It’s an interesting God’s-eye view.”
15
Indeed, Pentland’s writings regularly celebrate “the predictive power of digital breadcrumbs.” In doing so he leans on the same euphemisms and thin chains of reasoning that have become the standard language of surveillance capitalism’s champions, language that helps make the dispossession of human experience appear normal and acceptable. For example, he writes:
“As we go about our daily lives, we leave behind virtual breadcrumbs: digital records of whom we call, where we go, what we eat, and which products we buy. These breadcrumbs tell a far more accurate story of our lives than anything we choose to reveal about ourselves.... Digital breadcrumbs record our behavior as it actually happened.”
16
Pentland was among the first to recognize the commercial significance of behavioral surplus. Although he does not discuss it openly, he appears to accept the realpolitik of surveillance capitalism as a necessary condition of an instrumentarian society. Pentland’s own companies are extensions of his applied utopianism: laboratories in which instrumentarian techniques are tried out and populations are habituated to pervasive rendition, monitoring, and modification, so that surveillance revenues can be earned.
From the beginning, Pentland saw reality mining as the gateway to a new universe of commercial opportunity. In 2004 he claimed that mobile phones and other wearable devices equipped with “computational horsepower” would provide the “foundation” for reality mining as an “exciting suite of new business applications.”
The core idea was that businesses could exploit their privileged grasp of “reality” to shape behavior in ways that maximize their commercial objectives. Pentland describes new experiments in which speech-recognition technology generated “profiles of individuals based on the words they use,” enabling a manager to assemble “a team of employees with harmonious social behavior and skills.”
17
In this way even the most fine-grained level of social life becomes an object of management, optimization, and control, precisely the direction in which instrumentarian power moves as it proceeds to swallow society whole.
In a 2006 article, Pentland and Eagle explained that the data they collected held “considerable value in the workplace.” In the same vein, the two jointly filed a patent, “Combined Short Range Radio Network and Cellular Telephone Network for Interpersonal Communications,” that further enlarged the toolkit of reality mining available to business.
18
That same year Eagle told Wired that the reality-mining study represented “an unprecedented data set about continuous human behavior” that could revolutionize the study of groups and open the door to new business applications. According to the report, they were already “in talks” with a large company that wanted to apply their instruments and methods.
19
Pentland argued that the information collected by his sociometers, “unobtrusive wearable sensors that measure communication, vocal inflection, and body language,” could “help managers understand who is working with whom and infer the relationships between colleagues,” and that it would be “an efficient way to identify people who can work well together.”
20
In a 2009 collaboration with several graduate students, Pentland presented the results of designing and deploying a “wearable computing platform” based on the sociometric badge and its machine analytics. According to the authors, the aim was to develop machines that could “monitor social communication and provide real-time intervention.”
To this end, twenty-two office workers were “instrumented” with the badges for one month in order to
“automatically measure patterns of individual and collective behavior, predict human behavior from unconscious social signals, identify social affinity among individuals working in the same team, and enhance social interactions by providing feedback to the users of our system.”
The research produced credible results, revealing patterns of communication and behavior that, the authors concluded, “would not be available without a device such as the sociometric badge.” In their words,
“our results... argue strongly for the use of automatic sensing data collection tools to understand social systems.”
They cautioned that organizations would become “truly sensible” only when they deploy
“hundreds or thousands of wireless environmental and wearable sensors capable of monitoring human behavior, extracting meaningful information, and providing managers with group performance metrics and employees with self-performance evaluations and recommendations.”
21
Here the practical form of instrumentarian power emerges with full clarity: the workplace, relationships, collaboration, and human behavior itself are converted into a continuously monitored and modifiable system.
The 2002 invention was developed continuously and at last carried from the laboratory to the market. In 2010 Pentland and his 2009 co-authors founded Sociometric Solutions to bring the “instruments and methods” that Skinner had so long coveted directly to market. It was one of many companies that Pentland established to apply the rigors of his social physics to captive populations such as office workers.
22
Ben Waber, the CEO of Sociometric Solutions and another of Pentland’s doctoral students, calls the enterprise “people analytics.” In his book of the same name he imagines a future driven by “connection, collaboration, and data,” in which the badge, or some device much like it, will be
“deployed to millions of people at companies in different countries around the world, not for minutes but for years or decades.... Imagine how much we could learn about how to help people collaborate more effectively....”
23
Here the instrumentarian project appears in its fullest form: the workplace ceases to be merely a site of production and becomes a permanent laboratory, and the human being becomes a data point in the experiment.
How Science Quietly Turned Against Human Freedom: From Elite Labs to Aadhaar, Workplaces, and Platforms
In earlier times, power showed itself openly. Kings ruled by force. States ruled by law. People knew who governed them and how. Today, a new kind of power has emerged—quiet, technical, and hidden behind the language of science and efficiency. It does not announce itself. It does not seek consent. It simply measures, predicts, and reshapes human life.
This power did not come from revolutions or public debate.
It came from elite laboratories, elite institutions, and elite markets.
The Old Desire: Making Humans Predictable
Long before digital technology, thinkers like B. F. Skinner believed that human behaviour should be engineered like machines. Freedom, moral choice, and unpredictability were not virtues to him; they were obstacles. Order required control. Control required measurement.
What frustrated Skinner most was the lack of tools. Human life was too messy. Social relations were too complex. People could not be constantly observed.
That problem is now solved.
Mobile phones, platforms, biometric systems, and permanent connectivity have made human life measurable in ways Skinner could only dream of. People now carry tracking devices voluntarily—not because they are coerced, but because modern life makes refusal almost impossible.
Science finally gained access to everyday human reality.
When Society Became a Target
Once this access existed, elite researchers began to speak of social life as something “under-measured.” Conversations, friendships, teamwork, movement, cooperation—these were no longer ethical or cultural spaces. They became resources.
This thinking produced a dangerous idea:
reality itself could be mined.
Not forests.
Not oil.
But human life.
Society was reframed as raw material. People became sources of behavioural data. Reality became something to extract, analyse, predict, and intervene in.
This shift happened without democratic discussion. Ordinary people were never asked whether they wanted their lives turned into datasets. Decisions were made inside elite institutions—far from public scrutiny.
No Consent, Only Silent Capture
The most serious problem here is not technology.
It is consent.
People were never asked whether their social relations should be continuously monitored. Even when participation existed, the full purpose was never honestly stated. Data collected “for research” quietly moved into markets. Data collected “to understand” became data used to manage, rank, and control.
Human behaviour was taken first.
Justification came later.
This is not informed consent.
This is behavioural capture.
India: Aadhaar and the Normalisation of Surveillance
India provides one of the clearest examples of how this logic spreads from elite science into everyday governance.
Aadhaar was introduced as a tool for welfare efficiency. But it slowly became something else—a system that made biometric identification a condition for existence. Food, pensions, wages, healthcare, schooling—basic life functions became dependent on biometric verification.
People were told: If you have nothing to hide, you have nothing to fear.
This is the language of control, not democracy.
Aadhaar turned citizens into entries in a database. Errors became punishments. Exclusion became “technical failure.” Human dignity was replaced by system compliance.
The logic is the same:
measurement first, rights later.
Beyond India: China, Africa, Russia, and Latin America
What is unfolding in India is not unique. Variations of the same instrumentarian logic are spreading across the world, adapted to different political systems but driven by the same idea: governance through data, not consent.
China represents the most explicit version.
Here, surveillance is not hidden behind market language. Facial recognition, digital IDs, location tracking, and behavioural scoring are openly integrated into governance. The Social Credit ecosystem does not merely observe citizens—it shapes behaviour by rewarding conformity and punishing deviation. Travel, employment, education, and social participation become conditional. The system does not argue with citizens; it conditions them.
Africa presents a quieter but equally troubling story.
Across several countries, biometric ID systems, welfare databases, and mobile-money platforms are rolled out through partnerships between governments, global tech firms, and international financial institutions. These systems are often introduced as tools for inclusion. In practice, they create dependency: access to food aid, healthcare, or cash transfers becomes conditional on digital compliance. People with the least power are turned into experimental populations, with little legal protection and no meaningful choice.
Russia combines surveillance with authoritarian state power.
Digital monitoring, facial recognition in public spaces, workplace surveillance, and platform regulation are increasingly used to suppress dissent and discipline society. Behavioural data strengthens political control rather than markets, but the logic remains the same: citizens are observed continuously, and deviation carries consequences. Technology replaces persuasion. Visibility replaces legitimacy.
Latin America reveals how these systems thrive amid inequality.
In countries like Brazil, Mexico, and Colombia, predictive policing tools, welfare databases, and platform-driven labour systems disproportionately target the poor. Informal workers, migrants, and marginalised communities are tracked, scored, and excluded by opaque systems they cannot challenge. Surveillance deepens inequality rather than resolving it.
Across these regions, the pattern is unmistakable:
different governments
different ideologies
same method
Human life becomes legible to power.
And once legible, it becomes governable.
Workplaces as Laboratories
This same logic now dominates workplaces—globally and in India.
Employee monitoring software tracks keystrokes, screen time, speech patterns, movements, even emotions. Workers are ranked, scored, and optimised. Social relations are analysed to improve “team performance.”
Trust is replaced by metrics.
Judgment is replaced by prediction.
Human beings become management problems.
Efficiency rises. Freedom collapses.
Platforms and Everyday Life
Digital platforms complete the circle.
What you watch, read, buy, like, pause, skip, or ignore is tracked. Platforms do not just predict your behaviour; they shape it. Attention is nudged. Emotions are managed. Choices are guided.
This is not persuasion.
It is conditioning.
People believe they are free because no one orders them. But their options are silently structured in advance.
Markets Enter the Picture
Once behaviour could be predicted, it could be sold.
Markets love predictability.
Freedom creates uncertainty.
Uncertainty threatens profit.
So science was redirected. The goal was no longer understanding society; it was managing it. Human unpredictability became a defect to be corrected. Resistance became inefficiency.
People were told this would help them live better lives. In reality, it helped companies gain unprecedented power over behaviour.
Elite Institutions and Democratic Failure
That institutions like MIT lead these developments matters deeply.
These are not neutral spaces. They shape governments, markets, and global norms. When elite institutions design systems that transform society without consent, democracy is bypassed.
This is how modern technological domination works:
no violence
no coup
no dictator
Just expertise, authority, and inevitability.
A small elite designs systems.
The rest of society is expected to adapt.
The Rise of the Big Other
This system does not rule by command. It rules by observation.
It watches silently.
It predicts quietly.
It shapes behaviour gently.
This is not Orwell’s Big Brother.
This is something more subtle and more dangerous.
This is the Big Other—a power that governs through data rather than law. Under it, humans are no longer citizens with rights. They are inputs in a system.
A Philosophical Reckoning
At its core, this project fails at the deepest philosophical level.
Immanuel Kant insisted that human beings must never be treated merely as means, but always as ends in themselves. Instrumentarian systems violate this principle by turning human behaviour into a tool for optimisation, management, and profit.
Hannah Arendt warned that the greatest dangers to freedom often arise not from cruelty, but from thoughtlessness—from systems in which individuals stop judging and simply follow procedures. Algorithmic governance replaces judgment with automation and responsibility with metrics.
Karl Marx showed how capitalism turns human life into commodities. Behavioural data is simply the newest commodity—extracted not from labour alone, but from life itself, from movement, speech, emotion, and sociality.
Amartya Sen reminds us that development is not efficiency or growth, but freedom—the real capability to choose, to act, and to live with dignity. A society that expands systems while shrinking human agency is not developing. It is impoverishing itself.
Together, these thinkers converge on a single truth:
A society that knows everything about people but allows them no meaningful choice is not advanced. It is diminished.
Why This Matters
This is not an argument against technology.
It is an argument against rule without consent.
When science is captured by markets, states, and elite institutions, it stops serving humanity. When surveillance becomes normal, resistance appears irrational. That is how freedom disappears—not through force, but through design.
The greatest danger is not that these systems are brutal.
It is that they feel inevitable.
And once control feels inevitable, freedom becomes unthinkable.
From Surveillance to Constitutional Breakdown: How Instrumentarian Power Undermines Democracy
Modern surveillance systems are not merely technological innovations. They represent a structural challenge to constitutional democracy itself.
Constitutions were written to restrain power—to ensure that authority flows from the people, operates through law, and remains accountable. Instrumentarian systems reverse this logic. They accumulate power silently, operate through infrastructure rather than legislation, and escape democratic scrutiny.
India offers a stark illustration of this shift.
India: Aadhaar and the Quiet Erosion of Constitutional Guarantees
Aadhaar was introduced as an administrative tool to improve welfare delivery. Over time, it became something far more serious: biometric identification was transformed into a precondition for citizenship in practice, if not in law.
Food, pensions, wages, healthcare, schooling—access to basic constitutional entitlements became dependent on successful authentication. A system error could mean starvation. A database mismatch could mean non-existence.
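To see the mechanism at its barest, here is a minimal sketch, assuming a hypothetical authentication-gated ration system; the function names, data shapes, and denial messages are invented for illustration and do not describe Aadhaar's actual interfaces.

```python
# Illustrative sketch only: a hypothetical authentication-gated welfare check.
# None of these names correspond to Aadhaar's real interfaces.

def biometric_match(scan: bytes, stored_template: bytes) -> bool:
    """Stand-in for a biometric matcher; real systems return probabilistic scores."""
    return scan == stored_template  # deliberate simplification

def release_ration(citizen_id: str, scan: bytes, registry: dict) -> str:
    record = registry.get(citizen_id)
    if record is None:
        # A database mismatch: the person legally exists, the system says otherwise.
        return "DENIED: no record"
    if not biometric_match(scan, record["template"]):
        # A worn fingerprint or a sensor fault produces the same outcome as fraud.
        return "DENIED: authentication failed"
    return "RELEASED"

registry = {"C-1001": {"template": b"\x01\x02"}}
print(release_ration("C-1001", b"\x01\x02", registry))  # RELEASED
print(release_ration("C-1001", b"\xff\xff", registry))  # DENIED: authentication failed
print(release_ration("C-9999", b"\x01\x02", registry))  # DENIED: no record
```

The entitlement is unconditional on paper, but the code path is not: every denial branch converts a right into a system outcome, with no appeal step anywhere in the flow.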
From a constitutional perspective, this is a fundamental inversion.
Rights are no longer guaranteed first and administered second.
Administration now determines whether rights exist at all.
Citizens were told: If you have nothing to hide, you have nothing to fear.
This statement is incompatible with constitutional democracy.
Constitutions exist precisely because individuals do have something to fear—unchecked power.
Aadhaar converted citizens into data subjects. Accountability was displaced from officials to machines. When harm occurred, responsibility dissolved into “technical failure.” Due process was replaced by system compliance.
This is not a technological flaw.
It is a constitutional rupture.
And it is not uniquely Indian.
A Global Constitutional Pattern
Across political systems, the same logic is visible.
In China, surveillance is explicit rather than concealed: digital IDs, facial recognition, and behavioural scoring are integrated directly into governance. Rights are conditional, not inherent. Behavioural conformity replaces legal protection. Law does not restrain power; it operationalises it.
In Africa, surveillance arrives through development programs. Biometric IDs and digital welfare systems are introduced as tools of inclusion, often under international pressure. But constitutional safeguards are weak. Refusal is not meaningful when access to food, healthcare, or cash transfers depends on compliance. The right to dignity is subordinated to administrative convenience.
In Latin America, predictive policing and algorithmic welfare systems undermine equality before law. Marginalised communities are disproportionately surveilled, flagged, and excluded. Decisions affecting life chances are made by opaque systems with no right to explanation, appeal, or remedy—violating core principles of due process.
In Europe and the UK, constitutional erosion occurs under the language of legality. GDPR promises consent, but consent is hollow when digital systems are unavoidable. Algorithmic decision-making in welfare, migration, policing, and employment expands faster than democratic oversight. Rights survive as formal text, while real power migrates into technical systems beyond public control.
Across contexts, the constitutional pattern is clear:
Power no longer waits for law.
Infrastructure becomes authority.
Democracy Without Choice Is Not Democracy
Constitutional democracy rests on a simple premise: the consent of the governed.
But consent loses meaning when refusal carries punishment.
Philosophers of democracy have long warned against this.
Immanuel Kant argued that legitimate authority requires treating individuals as autonomous agents, not objects of management. Systems that predict and nudge behaviour undermine autonomy by pre-structuring choice.
Jürgen Habermas insisted that democratic legitimacy arises from public reasoning and deliberation. Instrumentarian systems bypass deliberation entirely. They are deployed as technical necessities, not political choices. Citizens adapt because they must, not because they agreed.
Hannah Arendt warned that freedom dies when action is replaced by administration. When systems decide automatically, political responsibility disappears. No one governs; no one answers.
What remains is governance without governors—and accountability without actors.
Rule of Law Replaced by Rule of Systems
Constitutions require:
identifiable decision-makers
reasons for decisions
rights to appeal
proportional use of power
Instrumentarian governance undermines each of these.
Decisions are made by models.
Reasons are hidden behind algorithms.
Appeals are meaningless against systems.
Proportionality is replaced by optimisation.
This is not rule of law.
It is rule of system.
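A small, hypothetical sketch makes the contrast concrete. The model, weights, and threshold below are invented; they stand in for any proprietary scoring system whose subject receives an outcome but none of the elements constitutionalism requires.

```python
# Hypothetical sketch of "rule of system": the decision object exposes an outcome
# but no decision-maker, no reasons, and no appeal path.

class OpaqueRiskModel:
    def __init__(self, weights):
        self.weights = weights  # proprietary; never disclosed to the subject

    def score(self, features: dict) -> float:
        return sum(self.weights.get(k, 0.0) * v for k, v in features.items())

    def decide(self, features: dict) -> str:
        return "flag" if self.score(features) > 1.0 else "pass"

model = OpaqueRiskModel(weights={"missed_payments": 0.9, "postcode_risk": 0.6})
decision = model.decide({"missed_payments": 1.0, "postcode_risk": 0.5})

print(decision)  # "flag" -- the subject receives only the label
# There is no author to identify, no reasons() method, and no appeal() path;
# proportionality is whatever the threshold happens to encode.
```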
Michel Foucault helps us understand this shift. Power today does not operate primarily through commands or punishment, but through continuous assessment and normalisation. Law fades into background infrastructure. Surveillance becomes permanent, not exceptional.
Dignity, Capability, and Constitutional Purpose
Constitutions are not efficiency documents.
They are dignity documents.
Amartya Sen reminds us that freedom is not merely the absence of restraint, but the presence of real capabilities. Surveillance systems shrink capability by narrowing the space of meaningful choice.
Martha Nussbaum grounds dignity in concrete human functions—bodily integrity, emotional expression, affiliation, practical reason. Constant monitoring chills all of these. People behave cautiously, not freely. Safely, not truthfully.
A society that conditions access to rights on behavioural compliance violates the constitutional promise of equal citizenship.
From Knowledge to Domination
Michael Polanyi warned that when knowledge is stripped of moral responsibility, it becomes authoritarian. Instrumentarian systems convert understanding into control and prediction into domination.
Science ceases to ask constitutional questions:
Who authorises this?
Who is accountable?
Who can refuse?
Who can appeal?
When these questions vanish, democracy follows.
The Constitutional Question We Can No Longer Avoid
The crisis before us is not technological.
It is constitutional.
Can democracy survive when systems decide faster than law?
Can consent exist when refusal means exclusion?
Can dignity endure when citizens are treated as data points?
Can accountability survive when power hides in infrastructure?
Instrumentarian power does not overthrow constitutions.
It bypasses them.
That is why it is so dangerous.
The greatest threat to democracy today is not dictatorship.
It is governance that feels administrative, inevitable, and neutral.
When power no longer needs justification,
freedom no longer has protection.
Surveillance Without Consent: A Constitutional–Democratic Reckoning
Modern surveillance systems are often presented as neutral instruments of efficiency, security, or inclusion. In reality, they represent a profound constitutional challenge. They reorganise power not through law, but through infrastructure; not through public consent, but through silent dependence. What is at stake is not privacy alone, but the very architecture of democratic self-government.
Constitutional democracy rests on a simple but demanding premise: power must be authorised, limited, and accountable to those over whom it is exercised. Instrumentarian systems—those that continuously measure, predict, and modify human behaviour—violate this premise at every level.
From Rights to Conditional Access
Across jurisdictions, a common pattern has emerged. Fundamental entitlements—welfare, work, mobility, education, security—are no longer accessed as rights guaranteed by law, but as services unlocked by successful system compliance.
India’s Aadhaar regime illustrates this shift with particular clarity. Introduced as an administrative aid, biometric identification gradually became a condition for accessing food rations, pensions, healthcare, wages, and schooling. When authentication fails, rights are not merely delayed; they disappear. Exclusion is reframed as “technical error,” and responsibility dissolves into systems.
This is a constitutional inversion.
In a democracy, administration exists to serve rights.
Here, rights exist only if administration permits them.
Comparable dynamics are visible elsewhere, adapted to different political cultures. China integrates surveillance directly into governance, conditioning social participation through continuous behavioural assessment. In parts of Africa, biometric IDs and digital welfare systems are rolled out through development programs where refusal is practically impossible. In Latin America, predictive policing and algorithmic welfare deepen inequality by targeting marginalised populations without transparency or remedy. In Europe and the UK, algorithmic decision-making expands under formal legality, while meaningful consent erodes under necessity.
Different systems, same structure: power migrates from law to systems.
Democracy Without Consent
Consent is the moral foundation of democratic authority. Yet consent loses meaning when participation is compulsory and alternatives do not exist.
Modern surveillance regimes claim legitimacy through user agreements, opt-ins, or legal compliance. But when refusing a system means losing access to food, work, housing, or mobility, consent is a fiction. Democratic legitimacy cannot be derived from acquiescence under dependency.
This concern was central to Immanuel Kant, for whom freedom meant autonomy—the capacity to act as a self-legislating moral agent. Systems that predict and pre-structure behaviour treat individuals not as authors of action, but as objects of management. Such governance violates the foundational democratic idea that citizens are ends in themselves.
Jürgen Habermas argued that legitimacy arises from public reasoning—free, informed, and equal participation in decisions that bind all. Instrumentarian governance bypasses this entirely. Systems are introduced as technical necessities, not political choices. Debate follows deployment, if at all. Citizens adjust because they must, not because they have agreed.
What emerges is democracy in form, but not in substance.
The Rule of Law Replaced by the Rule of Systems
Constitutionalism demands identifiable decision-makers, reasons for decisions, rights of appeal, and proportional use of power. Instrumentarian systems undermine each requirement.
Decisions are made by models rather than officials.
Reasons are obscured behind proprietary algorithms.
Appeal becomes meaningless when no human judgment is acknowledged.
Proportionality is replaced by optimisation.
This is not the rule of law. It is the rule of system.
Hannah Arendt warned that the most dangerous forms of domination arise not from overt tyranny, but from thoughtless administration—when no one feels responsible because “the system” decided. Instrumentarian governance realises this danger at scale. Power operates continuously, automatically, and without visible authorship.
Michel Foucault helps explain why resistance is so difficult. Surveillance today does not merely punish; it normalises. Individuals internalise expectations, adjust behaviour in advance, and learn to govern themselves according to invisible standards. Power becomes productive, not prohibitive—and therefore harder to contest.
Dignity, Capability, and Democratic Purpose
Constitutions are not efficiency charters. They are dignity documents.
Amartya Sen reminds us that freedom is not merely non-interference, but the presence of real capabilities—the ability to choose, act, and participate meaningfully in social life. Systems that predict behaviour shrink the horizon of choice. They transform moral agency into behavioural compliance.
Martha Nussbaum grounds dignity in concrete human capabilities: bodily integrity, emotional expression, affiliation, practical reason. Continuous surveillance chills all of these. People act cautiously rather than freely, safely rather than truthfully. A society that requires constant legibility to power undermines the very conditions of democratic citizenship.
Karl Polanyi warned that when markets disembed from social limits, they destroy the fabric that sustains them. Instrumentarian surveillance represents a new form of disembedding—where human life itself becomes a resource for extraction, prediction, and control. Behavioural data is not just another commodity; it is the commodification of agency.
A Philosophical Conclusion: Freedom Beyond Efficiency
At its core, the rise of instrumentarian governance reflects a philosophical failure. It mistakes prediction for understanding, efficiency for legitimacy, and compliance for consent.
Democracy is not defined by how accurately a system can anticipate behaviour, but by how fully it respects human agency. Freedom is not the absence of friction, but the presence of choice. Dignity is not system compatibility, but the right to appear before power as a reasoning, contesting subject.
When science is severed from moral responsibility, it becomes administration. When administration escapes constitutional restraint, it becomes domination.
The most serious danger is not that these systems are cruel.
It is that they appear neutral, reasonable, and inevitable.
But constitutional democracy was never meant to be inevitable.
It was meant to be chosen, argued for, and defended—again and again.
The question we now face is unavoidable:
Can democracy survive when power no longer needs to ask?
When Measuring People Becomes Normal: How Workplaces Turned Into Living Laboratories
1. From Experiment to Industry: Normalising Human Instrumentation
By 2013, Pentland’s sociometer had moved far beyond academic curiosity. What began as a research device was now being used by dozens of research groups and private companies, including some among the Fortune 1000. This is a crucial shift.
When a technology leaves the laboratory and enters corporations, its purpose changes. It is no longer about understanding human behaviour. It becomes about using that understanding to manage, shape, and optimise people.
In practical terms, this means workplaces began treating employees not just as workers, but as sources of behavioural data—to be measured continuously and compared systematically.
2. Declaring the Breakthrough: Instrumenting Human Behaviour
A 2014 study openly declared something unprecedented:
that it was now possible to actively instrument human behaviour.
This statement is important. It means that human interaction—how people talk, listen, cooperate, interrupt, move, and respond—was no longer considered private, spontaneous, or context-dependent. It became measurable input.
In everyday terms, this means:
- how long you speak in meetings
- who you speak to
- how close you stand
- whether your voice sounds engaged or flat
- whether you “lean in” while listening
—all could be captured, quantified, and evaluated.
Human presence itself became data.
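A minimal sketch, under invented assumptions about the badge event format, shows how easily that presence is converted into metrics; real sociometric pipelines are far more elaborate than this.

```python
# Assumption-laden sketch of sociometric feature extraction. The event format
# and feature definitions are invented for illustration.

from collections import defaultdict

# Each event: (timestamp_sec, badge_id, kind), kind in {"speak_start", "speak_end"}
events = [
    (0, "A", "speak_start"), (12, "A", "speak_end"),
    (10, "B", "speak_start"), (25, "B", "speak_end"),
]

def speaking_stats(events):
    totals = defaultdict(float)       # seconds of speech per badge
    interruptions = defaultdict(int)  # times a badge started while another spoke
    active, start = set(), {}
    for t, badge, kind in sorted(events):
        if kind == "speak_start":
            if active:                 # someone else is already talking
                interruptions[badge] += 1
            active.add(badge)
            start[badge] = t
        else:
            active.discard(badge)
            totals[badge] += t - start.pop(badge, t)
    return dict(totals), dict(interruptions)

print(speaking_stats(events))
# ({'A': 12.0, 'B': 15.0}, {'B': 1}) -- an ordinary conversation, rendered as metrics
```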
3. The Strategy of Invisibility: Why You Should Not Notice
The researchers explicitly embraced a disturbing principle:
monitoring works best when people do not know it is happening.
They argued that data collection must stay outside human awareness to avoid resistance. If people feel watched, they may change behaviour, resist, or object. The solution was simple: make surveillance smaller, quieter, and psychologically invisible.
This is why the technology focused on:
- tiny sensors
- wearable badges
- passive collection
- “unobtrusive” monitoring
In real life, this looks like:
- badges that seem harmless
- tracking presented as “team improvement”
- monitoring framed as neutral or helpful
Resistance disappears not because people agree—but because they stop noticing.
4. From Surveillance to ‘Natural Settings’
The researchers claimed this would allow data collection in “naturalistic settings.” This phrase sounds innocent, but it hides a serious shift.
A “natural setting” is supposed to mean people acting freely. But once behaviour is measured continuously, the setting is no longer natural—it is managed.
In offices, this means:
- employees behave knowing they are always being evaluated
- social interaction becomes performance
- spontaneity gives way to safety
People adjust themselves quietly, not because they are ordered to—but because they are scored.
5. Rebranding Power: From Sociometric Solutions to ‘Humanyze’
In 2015, the company renamed itself Humanyze. This change matters.
“Sociometric Solutions” sounded technical and intrusive.
“Humanyze” sounds friendly, human-centred, and positive.
This is a classic move: when a technology becomes controversial, change the language, not the function.
The company described its product as a “smart employee badge” designed to “improve business performance.” What it did in practice was collect continuous behavioural data and convert it into performance metrics.
The message to employees:
“This helps you work better.”
The reality:
“This helps management control behaviour more precisely.”
6. Turning Workers into Data Profiles
Humanyze’s system measured up to forty different behavioural indicators, including:
- movement patterns
- social connections
- tone of voice
- engagement signals
- listening posture
- network position inside the organisation
All this information was fed into a business dashboard—a single screen where managers could view human behaviour like financial statistics.
In simple terms:
- people became charts
- relationships became metrics
- cooperation became a variable
Work was no longer judged by judgment or trust—but by data.
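The dashboard step can be sketched too, assuming hypothetical indicator names and weights; the point is only the shape of the reduction, not any vendor's actual method.

```python
# Illustrative only: heterogeneous behavioural indicators collapsed into a
# single manager-facing number. All indicator names and weights are invented.

def dashboard_score(indicators: dict, weights: dict) -> float:
    """Weighted composite: relationships and tone go in, judgment and context drop out."""
    return round(sum(weights[k] * indicators.get(k, 0.0) for k in weights), 2)

weights = {"speaking_share": 0.3, "network_centrality": 0.4, "tone_energy": 0.3}
team = {
    "emp_017": {"speaking_share": 0.6, "network_centrality": 0.8, "tone_energy": 0.5},
    "emp_042": {"speaking_share": 0.2, "network_centrality": 0.3, "tone_energy": 0.9},
}

ranking = sorted(team, key=lambda e: dashboard_score(team[e], weights), reverse=True)
print(ranking)  # ['emp_017', 'emp_042'] -- people sorted like financial line items
```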
7. Real-World Deployment: Bank of America Example
One of the clearest examples came from Bank of America’s call centres.
Using sociometric data, Pentland convinced managers to synchronise coffee breaks. The idea was that if workers interacted more informally, cooperation and morale would improve.
It worked—productivity increased significantly.
This example is often celebrated as proof that behavioural data “helps people.”
But the deeper issue is not the coffee break.
The issue is this:
- workers were observed continuously
- behaviour was manipulated experimentally
- changes were made without democratic participation
- success was defined purely by productivity
Human wellbeing mattered only insofar as it increased output.
8. Why This Matters Beyond the Workplace
What happened in offices is a model for society.
If behaviour can be monitored, predicted, and adjusted at work, the same logic applies to:
- schools
- universities
- welfare systems
- cities
- public spaces
The workplace becomes a testing ground for a broader social order—one where freedom is quietly replaced by optimisation.
9. The Ethical Turning Point: When Help Becomes Control
Supporters argue these systems are beneficial. They claim:
- productivity increases
- collaboration improves
- decisions become data-driven
But this argument ignores a crucial question:
Who decides what “better behaviour” means?
When behaviour is measured constantly, people stop acting freely. They act safely. They conform. They avoid risk.
Consent becomes meaningless when refusal is impossible.
10. Reasoned Conclusion: The Hidden Cost of Efficiency
This entire passage reveals a silent transformation.
Human beings are no longer trusted to organise themselves.
They are managed through continuous measurement.
What is lost is not just privacy, but:
- dignity
- agency
- moral choice
- the right to act without being evaluated
The danger is not that these systems are openly cruel.
The danger is that they appear helpful, neutral, and efficient.
But a society that treats human behaviour as something to be instrumented does not become more humane.
It becomes better at control.
And control, once normalised, is very hard to undo.
When Predicting Humans Becomes a Business: The Commercial Turn of Surveillance Science
1. From Productivity Gains to a New Business Model
The passage begins with a striking claim: a simple change in workplace behaviour—synchronising coffee breaks—produced a productivity increase of $15 million a year.
At first glance, this sounds harmless, even positive. Who would oppose better teamwork or improved productivity?
But this example is not about coffee breaks.
It is about how the change was decided.
The change was not proposed through dialogue with workers or democratic discussion. It was derived from continuous behavioural surveillance. Workers’ movements, interactions, and patterns were measured first. Decisions came later.
This sets the template:
observe → predict → intervene → profit.
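That template is simple enough to write down. Every function below is a placeholder assumption rather than any company's API; what matters is the shape of the loop.

```python
# A deliberately skeletal sketch of the observe -> predict -> intervene template.

def observe(population):      # continuous measurement, not episodic consultation
    return [p["behaviour"] for p in population]

def predict(signals):         # any model will do; prediction precedes dialogue
    return sum(signals) / len(signals) < 0.5   # e.g. "cohesion too low"

def intervene(population):    # the change is applied to people, not proposed to them
    for p in population:
        p["break_time"] = "synchronised"

population = [{"behaviour": 0.3}, {"behaviour": 0.4}]
if predict(observe(population)):
    intervene(population)     # profit is measured afterwards; consent never is
```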
2. The Bigger Picture: Surveillance as a Service
Pentland’s MIT biography lists nineteen commercial ventures, many of which fall into a single category: surveillance-as-a-service.
This means surveillance is no longer a by-product of technology.
It is a business offering.
Companies no longer sell products alone. They sell:
- predictions about human behaviour
- risk assessments of people
- forecasts of what individuals or groups will do next
Human behaviour itself becomes the commodity.
3. Endor: Selling Prediction as Power
One such company is Endor, co-founded by Pentland.
Endor markets itself as a solution to what it calls the “prediction imperative.” This phrase is revealing. It suggests that in modern business—and governance—prediction is no longer optional. It is treated as a necessity.
Endor claims its roots lie in the “revolutionary new science” of social physics. In plain language, this means:
- human behaviour is assumed to follow mathematical patterns
- those patterns can be detected using massive data
- once detected, behaviour can be predicted
Endor boldly claims its system can “explain and predict any sort of human behavior.”
This is an extraordinary claim.
In practical terms, it means that:
- phone calls
- credit card purchases
- taxi rides
- web activity
are treated not as personal actions, but as signals revealing hidden behavioural rules.
You do not need to say what you intend to do.
Your past behaviour is assumed to speak for you.
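A hedged sketch of this premise, using an invented trace dataset and a deliberately crude frequency model, shows how past behaviour is made to "speak".

```python
# Sketch of the "social physics" premise: given enough behavioural traces, even a
# trivial frequency model predicts the next action. Data and labels are invented.

from collections import Counter, defaultdict

# Past traces: sequences of coarse actions mined from calls, payments, movement.
traces = [
    ["cafe", "office", "gym"],
    ["cafe", "office", "bar"],
    ["cafe", "office", "gym"],
]

# First-order transition counts: what usually follows each action?
transitions = defaultdict(Counter)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        transitions[current][nxt] += 1

def predict_next(action: str) -> str:
    # No stated intention is consulted; the past is assumed to speak for you.
    return transitions[action].most_common(1)[0][0]

print(predict_next("office"))  # 'gym' -- two of three traces went there next
```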
4. Detecting Behaviour Before Humans Can See It
Endor’s website boasts that it can detect “emerging behavioural patterns” before humans can observe them.
This is crucial.
It means decisions can be made about people:
- before they are aware of changes in themselves
- before they act consciously
- before they have a chance to choose differently
In real life, this could mean:
- predicting when customers are likely to switch brands
- anticipating unrest or dissatisfaction
- identifying “undesirable” behaviour early
Prediction moves ahead of judgment.
People are managed not for what they have done, but for what they might do.
5. Working with “Top Consumer Brands”: What This Really Means
Endor says it works with “some of the world’s top consumer brands.”
Translated into everyday terms, this means:
- corporations use behavioural prediction to influence buying decisions
- advertising becomes more targeted, more timely, more persuasive
- consumers are guided, nudged, and steered without real awareness
The relationship shifts:
- companies know people in detail
- people know almost nothing about how they are being analysed
This is not an equal market.
It is an asymmetric one.
6. Sense Networks: Turning Location into Advertising Control
Another Pentland company, Sense Networks, shows how this logic operates in physical space.
Sense Networks specialised in location data—where people go, when, how often, and in what patterns.
In 2014, it was acquired by YP (formerly Yellow Pages), now a major advertising and local search company.
YP described Sense Networks as a platform that could:
- identify mobile audiences at scale
- track shoppers and potential customers
- target them with ads when they are near a store, at home, or at work
In simple terms:
Your movement becomes an advertising signal.
Walking past a shop is no longer just walking.
It is interpreted as interest.
It triggers commercial intervention.
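A minimal geofence sketch, with invented coordinates, radius, and ad copy, shows how little machinery this requires; real location-advertising platforms are vastly more elaborate.

```python
# Hypothetical geofence-retargeting trigger. Shop location, fence radius, and
# the ad payload are all invented for illustration.

import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

STORE = (51.5145, -0.1444)   # hypothetical shop
FENCE_RADIUS_M = 150

def on_location_ping(user_id, lat, lon):
    # Walking past the shop is read as interest and triggers intervention.
    if distance_m(lat, lon, *STORE) <= FENCE_RADIUS_M:
        return f"push_ad(user={user_id}, offer='10% off, 100m ahead')"
    return None

print(on_location_ping("u1", 51.5151, -0.1440))  # within the fence -> the ad fires
```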
7. Retargeting: When Space Stops Being Neutral
The idea of “retargeting” is central here.
If a system knows:
- where you go
- where you linger
- where you return
it can decide:
- when to show you an ad
- what to offer
- how to influence your next move
Public space, home space, and work space all become commercial zones.
There is no clear boundary anymore between:
- living
- moving
- being marketed to
Life itself becomes a sales funnel.
8. The Hidden Assumption: Humans as Predictable Objects
All these ventures rest on one shared assumption:
Human beings are predictable systems.
Once enough data is collected, behaviour is assumed to follow rules that machines can uncover better than humans themselves.
This reduces people to:
- patterns
- probabilities
- risk profiles
Choice, reflection, and moral judgment fade into the background.
9. Why This Matters Beyond Business
What starts in marketing and productivity does not stay there.
The same logic easily extends to:
- policing
- insurance
- credit scoring
- welfare eligibility
- employment decisions
Once prediction becomes profitable, it becomes powerful.
And once it becomes powerful, it reshapes institutions.
10. Reasoned Conclusion: The Cost of Turning Life into Data
This passage reveals a quiet but profound shift.
Science moved from understanding humans
to predicting humans
to selling those predictions.
The promise is efficiency and profit.
The cost is agency and dignity.
People are no longer treated as:
- decision-makers
- moral agents
- participants in shaping their lives
They are treated as datasets whose future can be optimised.
The danger is not that these systems always fail.
The danger is that when they succeed, they succeed at control.
A society that builds its future on prediction rather than consent may become efficient—but it will not remain free.
And once prediction replaces choice,
freedom becomes an afterthought.
From Offices to Society: How Human Behaviour Was Reimagined as a System to Be Corrected
1. Workplaces as “Living Laboratories”
Pentland does not see his workplace experiments as limited to offices. He treats them as models for society itself.
In his view, offices are convenient testing grounds. Workers are already organized, monitored, and dependent on systems. When their behaviour is measured, analysed, and corrected successfully, the same methods can later be applied to cities, public services, and entire populations.
In simple terms:
- office workers become test subjects
- workplaces become laboratories
- society becomes the next target
This is the intended path:
from economic management → to social management.
2. The Problem, According to Pentland: “People”
At a 2016 conference organised by Singularity University—a Silicon Valley hub funded partly by Larry Page—Pentland’s thinking was made very clear.
An interviewer summarised his view bluntly:
companies fail not because of systems, but because of people.
This framing is critical.
People are treated as the source of disorder.
Human behaviour—conversation, disagreement, hesitation, unpredictability—is seen not as democratic life, but as a technical problem that needs fixing.
This is a major shift:
- human judgment becomes a flaw
- social friction becomes inefficiency
- disagreement becomes “bad behaviour”
3. Making Humans Work Like Machines
Pentland openly compares human systems to machine systems.
Machines work well because:
- actions are standardised
- errors are detected
- corrections are applied automatically
Pentland wants social systems to work the same way.
Using behavioural data, systems would:
- judge whether actions are “correct” or “incorrect”
- intervene when behaviour deviates
- push people back toward “proper” patterns
In practice, this means:
- deciding how people should interact
- deciding how information should spread
- deciding what counts as good or bad behaviour
The computer becomes the referee of social life.
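Treated as code, this is an ordinary feedback controller pointed at people. The target rate, tolerance, and nudges below are hypothetical, not any deployed system's parameters.

```python
# Sketch of behaviour as a control problem: measure deviation from a target
# pattern, nudge back toward it. All thresholds and nudges are invented.

TARGET_INTERACTION_RATE = 0.7   # the system's definition of "proper" behaviour
TOLERANCE = 0.1

def correct(worker_id: str, observed_rate: float):
    error = observed_rate - TARGET_INTERACTION_RATE
    if abs(error) <= TOLERANCE:
        return None                          # behaviour judged "correct"
    if error < 0:
        return f"nudge({worker_id}, 'join the team huddle')"
    return f"nudge({worker_id}, 'reduce social time')"

# Disagreement, quiet days, or simple difference all register as error signals:
for wid, rate in [("w1", 0.72), ("w2", 0.40), ("w3", 0.95)]:
    print(wid, correct(wid, rate))
# w1 passes; w2 is nudged toward the huddle; w3 is nudged to socialise less
```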
4. The Human–Machine “Symbiote”
Pentland describes his goal as creating a human–machine symbiosis.
He argues:
- computers should understand human behaviour better
- humans should see themselves through data
The promise sounds positive: better awareness, better decisions.
But in reality, this means:
- humans learn to adjust themselves to system expectations
- computers gain authority over what behaviour is acceptable
People do not become freer.
They become more legible and more correctable.
5. “Broken Behaviour” and the Language of Correction
The interviewer notes that sociometric data helps organisations fix “broken behaviours.”
This phrase matters.
Who decides behaviour is broken?
Who defines what “normal” or “correct” looks like?
Once behaviour is labelled broken:
- correction becomes justified
- intervention becomes moral
- resistance becomes irrational
Social life is medicalised and engineered.
6. From Workplace Control to Global Ambition
As Pentland’s tools became more powerful, his vision expanded.
He no longer spoke only of offices or companies. He began talking about society itself.
Between 2011 and 2014, he published papers outlining a new ambition:
to build systems that could manage government, energy, public health, transportation, and security using continuous data.
One 2011 paper stands out:
“Society’s Nervous System.”
The title itself reveals the idea:
society should function like a body, centrally sensed and regulated.
7. Claiming Authority: Elite Collaboration
Pentland begins this work by listing his collaborators:
- major IT firms
- wireless companies
- financial institutions
- health corporations
- US and EU regulators
- global NGOs like the World Economic Forum
This signals something important:
this is not fringe thinking.
It is elite consensus-building.
Decisions about how society should function are being discussed among corporations, regulators, and global institutions—not citizens.
8. Declaring Existing Systems “Obsolete”
Pentland argues that traditional systems—water, food, transport, policing, healthcare, education—are:
- old
- centralized
- unsustainable
This sounds reasonable at first. Many systems do need improvement.
But his solution is not democratic reform.
It is technological replacement.
He calls for systems that are:
- integrated
- responsive
- dynamic
- self-regulating
In other words:
systems that run themselves.
9. The Idea of a “Global Nervous System”
Pentland proposes building a nervous system for humanity.
Just as a nervous system senses, reacts, and stabilises the body, this global system would:
- sense society continuously
- react automatically
- maintain stability
Sensors, networks, and mobile devices become:
- the eyes
- the ears
- the reflexes
Even by 2011, Pentland claimed this system was already emerging:
traffic sensors, security systems, mobile phones—all merging into one reactive organism.
Society becomes something that is managed reflexively, not governed democratically.
10. The Missing Piece: Understanding Human Behaviour
Pentland identifies one remaining problem:
the system does not yet fully understand humans.
Machines are sensed.
Infrastructure is sensed.
But human behaviour remains unpredictable.
So the solution is obvious to him:
measure people more deeply.
He argues that:
- safety
- stability
- efficiency
depend on understanding human demand and reaction.
This means:
- observing individual behaviour
- modelling it
- predicting it
Human life becomes the final dataset.
11. “People” as a First-Class Object
This thinking aligns with statements like Microsoft’s claim that:
“People and their relationships are now a first-class thing in the cloud.”
In simple terms:
people are no longer external to systems.
They are inside them.
If people behave “incorrectly,” the system must intervene.
Human freedom becomes a risk factor.
12. Reality Mining as the Final Tool
Pentland concludes that the tools already exist.
Mobile phones and wireless networks connect most of humanity. Every call, movement, message, and transaction leaves digital traces.
These “digital breadcrumbs” can be mined to:
- monitor environments
- plan society
- model group behaviour second by second
For the first time, he says, humanity can be understood in real time.
What he calls understanding is, in practice:
continuous behavioural surveillance.
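A sketch with invented pings shows how breadcrumbs become a second-by-second group model; no real network data or carrier API is used here.

```python
# Hedged sketch of "reality mining": digital breadcrumbs aggregated into a
# per-second picture of group behaviour. The ping data is invented.

from collections import defaultdict

# Each breadcrumb: (unix_second, user_id, cell_id) from phones on a network.
pings = [
    (100, "u1", "cell_A"), (100, "u2", "cell_A"), (100, "u3", "cell_B"),
    (101, "u1", "cell_A"), (101, "u2", "cell_B"), (101, "u3", "cell_B"),
]

def crowd_model(pings):
    """Per-second head-count per cell: the 'nervous system' view of a city."""
    counts = defaultdict(lambda: defaultdict(int))
    for t, _user, cell in pings:
        counts[t][cell] += 1
    return {t: dict(c) for t, c in counts.items()}

print(crowd_model(pings))
# {100: {'cell_A': 2, 'cell_B': 1}, 101: {'cell_A': 1, 'cell_B': 2}}
# No one consented to being a sensor; presence alone produced the model.
```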
Reasoned Conclusion: When Stability Replaces Freedom
This passage reveals a coherent vision.
Not a conspiracy.
Not chaos.
But a carefully reasoned project.
In this vision:
- society is a system
- people are variables
- behaviour is correctable
- stability outranks freedom
Democracy is slow.
Consent is messy.
Human judgment is inefficient.
So systems take over.
The danger is not that this vision is openly authoritarian.
It is that it presents itself as reasonable, scientific, and inevitable.
But a society that treats human behaviour as something to be corrected rather than debated may become stable—
and lose its freedom at the same time.
That is the trade-off this passage asks us to accept.
Whether we should accept it
is the real question.