Smart Cities might offer one set of technologically determined solutions to human and social problems but smartification may also come at a great cost to one’s subjectivity as a citizen.
The list is out. Now we know which cities will turn ‘smart’ in the next decade. In June 2015, the Ministry of Urban Development published ‘Smart Cities: Mission Statement and Guidelines’ with the stated objective of promoting ‘cities that provide core infrastructure and give a decent quality of life to its citizens, a clean and sustainable environment and application of “Smart” Solutions’.
The Mission Statement underscores safety – for women, children and the old. It has plans for waste collection and management. It insists that electricity and safe drinking water be supplied adequately and assuredly. It notes that there has to be housing for the poor. Even in terms of processes, the Mission Statement gets one thing right: citizen participation and a consultative procedure that involves feedback and opinion-seeking from stakeholders like residents and users. Nobody would quarrel with this set of infrastructural objectives, given the rapidly deteriorating quality of life in India’s cities and towns. In fact, there is much to be praised in this vision of the 21st century Indian city.
Yet some sense of disquiet creeps in when you look at the document.
I am not even addressing here the anxieties of self-propagating technologies possessing what Susan Blackmore has characterised as ‘temes’ (the technological equivalent of ‘memes’), where machines collect, process, respond to, and replicate information, and take decisions, all without human intervention. The replication and sharing of information collected by sensors and intelligent machines, Blackmore pointed out in her 2008 TED talk, makes humans redundant. Temes are grounded in the machine’s evolution… This stuff of sci-fi I shall leave right here.
The future of the past
Before addressing the future smart city, I shall first address its past. Does the retrofitting of cities as smart cities affect their heritage in any way? Antiquated buildings and streets dot the cityscape across India. Market areas and places of worship have extraordinary local histories and linkages that, while not databased, have been a part of the collective and cultural memory of the place for decades in most cases. These do not have sensors to measure footfalls and usage, but what they do possess is organic connections with the cultural space of the locality – unique, distinctive, and with their own stories often spread across generations. Affective memories of a place, constructed across generations of users, are a form of monitoring and social bonding that information will seek to replace in the smart city.
The construction of Metro transport systems has run into trouble over its deleterious effects on heritage. The Mission Statement has nothing to say about the protection of heritage spaces, though it is heritage that renders cities distinctive. Would smartification account for the historical differences and lineages of the city or, through technological smartification, render them all fungible as soft cities, interchangeable with each other, like the airports across India today? Whether smartification would entail the invention of a new genealogy for the city is a concern heritage specialists might have.
Striking is the Mission Statement’s technological determinism cast as social-spatial policy. Every square mile of the city will have smart solutions in the form of sensors and ICT structures that will continuously record usage and problems – water use, air pollution, amount of garbage accumulated – and offer solutions already ‘set’ in the databanks.
Writing about urban surveillance after 9/11, the sociologist David Lyon remarked: ‘Technological fixes are the common currency of crisis in late modern societies. They take precedence over other, more people-centered, policy solutions’. Smart City policy suggests a reliance on machinic processes and logic rather than human ones, thereby further eroding the capacities of humans to take decisions. Human decisions are taken on the basis of a set of values assigned to social problems, and smart solutions do away with this step, instead proposing that the data generated will offer a set of pre-determined solutions founded on algorithmic rather than value- or even ethics-based logic. In a recent piece on smart cities, MA Siraj quotes Rahul Mehrotra, chair of the School of Design and Urban Designing at Harvard, who says: ‘they [smart cities] do not consider the human as a part of this equation … people say a smart city is one that uses technology to create connectivity and efficiency’. This technological determinism also governs notions of safety.
The Mission Statement speaks of safety components that include ‘replacing overhead electric wiring with underground wiring, encroachment-free public areas’. Given the high rates of pollution, the overcrowding of public transport and streets, and the frequency of road accidents, none of which are connected to the above list in the Mission Statement, would safety be left to technological devices and feedback systems to determine – for instance, crowd control devices and sensors? If so, do we have any information about the nature and magnitude of the risks involved in the very components of smart cities, like radiation from ambient sensors, surveillance mechanisms and transmission towers? Let us not forget that the debate around cellphone towers has not been closed, and the smart city appears to amplify radiation, of various kinds, around us through more devices and transmissions.
Assuming that humans have always co-evolved with technology and that we can no longer, in the posthuman age, see technology as prosthesis to the body, there still remains the question of adaptation and self-reinvention of the human with advances such as the ones the smart city project envisions. This translates into the divide between digital natives and digital migrants, the former being those born into the digital era and for whom smart phones, sensors and haptic technologies are routine, while the latter are members of an earlier generation who need to learn the use of these technologies. What is the individual and social cost of such a reinvention? And what if people who are routinely, structurally helpless, such as the infirm and the old, find it difficult to adapt to the ambient surveillance/sensor climate? Would their inability to navigate smart meters, smart waste disposal mechanisms and tele-health services render them more vulnerable and their actions open to interpretation as ‘suspicious behaviour’ by the machines monitoring them? This is, effectively, to ask whether smart cities empower people or force them into predetermined models of behaviour and response.
Would technologically smart cities and spaces be discriminatory or just? Are efficiency and productivity the sole social concerns? Or can technology be utilized to make society more inclusive? These questions are not essentially about technology per se so much as about responsible and inclusive innovation. If, as the Mission Statement claims, ‘smartification’ would be participatory, then there is a possibility that inputs about socially just and desirable outcomes of innovation might be taken seriously. This is where the next key concern arises.
From function creep to surveillance
Research on smart cities in Europe and elsewhere indicates that these are data-intensive spaces, and therefore surveillance structures dominate the cityscape. With every aspect of everyday life being monitored by ambient sensors and data-gathering devices, the anxiety over any smart city’s information architecture is two-fold: what privacy policies, if any, are to be drawn up before the execution of the project and who owns and/or has access to the data collected?
Data gathered in any one domain might be used by powers and authorities in another domain.
‘Function creep’ is the appropriation of data ostensibly gathered for one purpose but employed for entirely different purposes. For instance, if the smart meter records water usage in a household and then the company sells this data to suppliers of purified water, this is function creep – a consequence of the intensified surveillance culture of smart cities.
The intensified surveillance put in place by smart machines redefines the use of space and has consequences for the human users of that space.
Surveillance is a part of the late 20th century’s ‘splintering urbanism’. It generates interdictory spaces where the mobility of people is controlled through legal, state or corporate provisions that determine the use of space. Interdictory spaces are produced by surveillance in which an individual’s use of space is mediated through multiple processes of monitoring and identification. With sensors, barriers demanding RFID tags and CCTVs, they have to be experienced and negotiated by individuals in particular, legitimate ways – or else they run the risk of being reported for suspicious behaviour. In other words, these spaces are modes of social sorting and of governance in the guise of efficiency, safety and productivity. One’s subjectivity as a citizen is contingent upon a practised or naïve, seamless or bumbling navigation of a host of devices, paths and processes in that space. With smart machines and surveillance mechanisms in place, behaviour that is not identical or predictable will likely be seen as suspect.
It should be evident, then, that Smart Cities might offer one set of technologically determined solutions to human and social problems. But numerous questions need to be answered and policy guidelines put in place as safeguards, unless we have decided that we can all be posthuman now.
Pramod K. Nayar is a Professor at the University of Hyderabad
Categories: Cities & Architecture