Synthetic Data vs. Anonymised / Differentially Private Data: Why the Distinction Matters

I was invited to be part of a panel on AI during the launch of the Safe Human Future community last week in Delhi.

In conversations during the event, I met with some interesting folks and had long conversations about data quality. When talking about privacy-preserving data use, three terms often get conflated (at least amongst the novices): synthetic data, anonymised data, and differentially private data. While all three aim to reduce privacy risks, they differ fundamentally in construction, guarantees, and ideal use-cases.

Anonymised data is real data with direct identifiers removed or masked. Its weakness is structural: if the underlying patterns of the real dataset remain embedded, attackers can often re-identify individuals through linkage attacks or inference techniques. A growing body of research shows that even datasets without names or addresses can be deanonymised when combined with auxiliary information, because the data points are still tied to real individuals.
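A toy sketch of such a linkage attack, with entirely made-up records, shows how little auxiliary information is needed to re-identify "anonymised" rows:

```python
import pandas as pd

# "Anonymised" health data: names removed, but quasi-identifiers retained.
health = pd.DataFrame({
    "zip": ["560001", "560034", "560034"],
    "birth_year": [1981, 1990, 1990],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Public auxiliary data (say, a voter roll) with names attached.
voters = pd.DataFrame({
    "name": ["A. Rao", "B. Singh", "C. Iyer"],
    "zip": ["560001", "560034", "560034"],
    "birth_year": [1981, 1990, 1990],
    "sex": ["F", "M", "F"],
})

# The linkage attack is just a join on the quasi-identifiers.
reidentified = health.merge(voters, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

In this toy case every record is re-identified, which is exactly the structural weakness: removing names does nothing if the remaining attributes still point to one real person.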

Differential privacy, by contrast, injects calibrated noise into queries or datasets so that the presence or absence of any individual does not materially change analytical outputs. This provides a mathematically provable privacy guarantee. But the trade-off is accuracy: heavy noise addition can distort minority-class patterns or small-sample statistical relationships.
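To make the mechanism and its trade-off concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query, written in Python with NumPy; the dataset and the epsilon values are invented purely for illustration.

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    For a counting query, adding or removing one person changes the count
    by at most 1 (sensitivity = 1), so noise drawn from Laplace(0, 1/epsilon)
    gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: how many incomes in a dataset exceed 50,000?
incomes = [32_000, 58_000, 47_500, 91_000, 64_250, 28_900]
print(dp_count(incomes, lambda x: x > 50_000, epsilon=0.5))  # more noise, more privacy
print(dp_count(incomes, lambda x: x > 50_000, epsilon=5.0))  # less noise, less privacy
```

Smaller epsilon means more noise and a stronger guarantee; on small groups or rare categories the noise can swamp the signal, which is exactly the accuracy trade-off described above.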

Synthetic data takes a different route altogether. Instead of modifying real data, it generates completely artificial records that mimic the statistical properties of the source dataset. No row corresponds to any real person. This disconnection from real individuals eliminates a large class of re-identification risks and makes the data highly shareable. It does, however, require careful quality evaluation—poorly generated synthetic data can hallucinate unrealistic correlations or miss critical rare events.
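As a toy illustration of the generation idea (not a production recipe), the sketch below fits a simple multivariate normal to two numeric columns and samples entirely new rows from it; the column meanings and parameters are made up. Real generators (copulas, GANs, diffusion models) are far more sophisticated, but the principle of sampling from a learned distribution rather than releasing real rows is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is the sensitive source table: (age, monthly_spend) per customer.
real = np.column_stack([
    rng.normal(40, 12, size=1_000),          # age
    rng.normal(25_000, 8_000, size=1_000),   # monthly spend
])

# "Train" a generator: estimate the mean vector and covariance matrix.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Generate brand-new rows; none corresponds to a real customer.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```

Quality evaluation then boils down to checking that the synthetic table reproduces the statistics you care about (marginals, correlations, rare-event rates) without memorising any real row.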

Why Firms Use Synthetic Data

Firms increasingly rely on synthetic datasets for scenarios where real-world data is sensitive, incomplete, biased, or simply unavailable. Typical use-cases include:

  1. Product development and testing: Fintech and healthtech companies often need realistic datasets to test algorithms safely without exposing personal information.
  2. Machine learning model training: Synthetic data helps overcome class imbalance, enrich training sets, or simulate rare but important events (e.g., fraud patterns); a minimal sketch of this idea follows the list.
  3. Data sharing across organisational boundaries: Cross-functional teams, vendors, or academic collaborators can work with synthetic datasets without entering into heavy data-processing agreements.
  4. Accelerating regulatory compliance: In sectors such as banking, telecom, and healthcare, where privacy regulations are tight, synthetic datasets reduce bottlenecks in experimentation, sandboxing, and model audits.
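For use-case 2, here is a deliberately simplified, SMOTE-style sketch of creating synthetic minority-class records by interpolating between real ones; the fraud features are invented, and in practice teams would use a library such as imbalanced-learn (which interpolates between nearest neighbours rather than random pairs).

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X_minority, n_new):
    """Create n_new synthetic rows by interpolating between random pairs of
    real minority-class rows (a simplified version of the SMOTE idea)."""
    n = len(X_minority)
    rows = []
    for _ in range(n_new):
        i, j = rng.choice(n, size=2, replace=False)
        lam = rng.random()  # interpolation weight in [0, 1)
        rows.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(rows)

# Toy fraud features: (transaction amount, seconds since last transaction).
fraud = np.array([[9_800.0, 12.0], [12_500.0, 8.0], [11_200.0, 15.0]])
print(oversample_minority(fraud, n_new=5))
```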

From a governance standpoint, synthetic data often plays a complementary role: firms still use real data for production-grade analytics but use synthetic data for exploration, prototyping, and secure experimentation.

Alignment with the Indian DPDP Act and Rules

The Digital Personal Data Protection (DPDP) Act, 2023 emphasises lawful processing, purpose limitation, data minimisation, and protection of personal data. Importantly:

  • The Act’s obligations apply only to digital personal data of identifiable individuals.
  • High-quality synthetic data contains no personal data relating to identifiable individuals, and therefore does not fall within the compliance net.

This creates a strategic opportunity for firms: synthetic datasets allow innovation outside the regulatory burden while remaining aligned with the Act’s intent of protecting individuals’ data rights. Many enterprises are beginning to use synthetic data as a “privacy-by-design accelerator,” reducing the operational costs of compliance while enabling safe analytics.

Synthetic Data and Artificial Pearls: A Useful Analogy

The distinction between synthetic and real data is similar to the comparison between artificial pearls and natural pearls. Natural pearls, harvested from oceans, are biologically authentic but scarce, costly, and highly variable in quality. Artificial pearls, especially high-grade cultured pearls, are manufactured with precise control over structure, size, lustre, and durability.

In many cases, artificial pearls are actually superior to natural ones:

  • They have more consistent structure.
  • They are available in specific sizes and configurations designers need.
  • Their strength and finish can be engineered for durability.
  • They reduce dependence on environmentally intensive harvesting.

Synthetic data plays a similar role. Just as the best artificial pearls capture and improve upon the aesthetics of natural pearls without relying on oysters, synthetic datasets capture the statistical essence of real data while offering higher usability, lower risk, and greater design freedom.

In contexts where quality matters more than provenance, such as stress-testing jewellery designs or building machine learning models, the engineered version can outperform the natural one.

Cheers.

(C) 2026. R Srinivasan.

AI Scenarios

Over the past few weeks, I have been exploring the applications of artificial intelligence across multiple domains and sectors. I have also had the opportunity to deliver a few sessions on digital transformation and AI, as well as DPI thinking in the context of AI. In preparation for these sessions, I re-read three books.

While I am not an expert in predicting the future, like most of you, I thought I would draw up a few scenarios on how AI might play out. As a typical scenario planner, I first identified two orthogonal axes of uncertainty, defined the edges of each axis, and then described the scenarios.

1.0 Axes of uncertainty

The two primary axes of uncertainty are “AI autonomy in organizations” and “alignment and governance quality”. I map AI autonomy on the horizontal axis and alignment and governance quality on the vertical axis.

At the west end of the horizontal axis is “Tool-like AI”; at the east end is “Agentic AI”. The tool-like AI context is characterized by tightly controlled systems, where humans retain decision rights and override power, and AI is mainly used as an assistant and advanced analytical engine. In the agentic AI context, AI systems act as semi-autonomous agents managing workflows, negotiations, and resource allocations with limited human intervention.

At the south end of the vertical axis is “low alignment and weak governance”, and at the north end is “high alignment and strong governance”. The low alignment and weak governance context is characterized by biased models, opaque decisions, poorly specified objectives, and lax oversight in markets with intense competitive pressures. At the other end (north), high alignment and strong governance ensures robust value-alignment work, transparency, safety protocols, and institutional safeguards against control failures.

2.0 The four scenarios described

Based on these descriptions, I propose four scenarios. In the north-east are “meta-managers in aligned autonomous systems”; in the south-east are “algorithmic overloads in fragile firms”; in the north-west are “craft-leaders in co-intelligent firms”; and in the south-west is “shadow automation and managerial drift”.

2.1 Meta-managers in aligned autonomous systems

(Agentic AI, High alignment & strong governance)

AI systems act as relatively autonomous organizational agents, running simulations, reallocating resources, and orchestrating routine decisions, but they are constrained by carefully designed alignment frameworks and institutional governance. Bostrom’s concerns about control and value alignment are addressed via robust oversight, multi-layered safety mechanisms, and institutional norms; agentic systems are powerful but embedded within guardrails. Managers work in Mollick’s co‑intelligent mode at a higher, more strategic level, curating objectives, interpreting AI-driven scenarios, and shaping socio-technical systems rather than micromanaging operations.

2.1.1 Implications for managerial work

  • Many mid-level coordination tasks (scheduling, resource allocation, basic performance tracking) are delegated to AI agents, compressing hierarchies and reducing the need for traditional middle management.
  • Managers function as meta‑managers: defining goals, constraints, and values; adjudicating trade-offs between conflicting AI recommendations; and stewarding culture and human development.
  • Soft skills (sense-making, ethics, narrative, conflict resolution) become the core differentiator, as technical optimization is largely automated.

2.1.2 Strategic prescriptions

  • Redesign structures around human-AI teaming: smaller, flatter organizations with AI orchestrating flows and humans focusing on creativity, relationship-building, and governance.
  • Develop “objective engineering” capabilities: train managers to specify and refine goals, constraints, and reward functions, directly addressing the alignment and normativity challenges Christian highlights.
  • Institutionalize alignment and safety: embed multi-stakeholder oversight bodies, continuous monitoring, and strong external regulation analogues, borrowing Bostrom’s strategic control ideas for the corporate level.
  • Reinvest productivity gains into human development: use surplus generated by autonomously optimized operations to fund learning, well-being, and resilience, stabilizing the socio-technical system.

Scenario summary: Meta-managers in aligned autonomous systems (Agentic AI, high alignment & strong governance)
  • Description: AI systems act as relatively autonomous organizational agents; strong alignment frameworks and institutional governance; powerful agentic systems embedded within guardrails.
  • Implications: Mid-level coordination tasks are delegated to AI agents; managers function as meta-managers; soft skills become the core differentiator as technical optimization tasks are automated.
  • Strategies: Redesign structures around human-AI teaming; develop objective-engineering capabilities; institutionalize alignment and safety; reinvest productivity gains into human development.

2.2 Algorithmic overloads in fragile firms

(Agentic AI, Low alignment & weak governance)

Highly autonomous AI systems manage pricing, hiring, supply chains, and even strategic portfolio choices in the name of speed and competitiveness, but with poorly aligned objectives and weak oversight. In Bostrom’s terms, “capability control” lags “capability growth”: agentic systems accumulate de facto power over organizational behaviour while their reward functions remain crude proxies for profit or efficiency. Christian’s alignment concerns show up as opaque prediction systems that optimize metrics while embedding bias, gaming constraints, and exploiting loopholes in ways human managers struggle to detect.

2.2.1 Implications for managerial work

  • Managers risk becoming rubber stamps for AI recommendations, signing off on plans they do not fully understand but feel pressured to approve due to performance expectations.
  • Managerial legitimacy suffers when employees perceive that “the system” (algorithms) is the real boss; blame shifts upward to AI vendors or abstract models, eroding accountability.
  • Ethical, legal, and reputational crises become frequent as misaligned agentic systems pursue local objectives (e.g., discriminatory hiring, aggressive mispricing, manipulative personalization) without adequate human correction.

2.2.2 Strategic prescriptions

  • Reassert human veto power: institute policies requiring human review for critical decisions; create channels for workers to challenge AI-driven directives with protection from retaliation.
  • Demand transparency and interpretability: require model documentation, explainability tools, and regular bias and safety audits; push vendors toward alignment-by-design contracts.
  • Slow down unsafe autonomy: adopt Bostrom-style “stunting” and “tripwires” at the firm level, limiting AI control over tightly coupled systems and triggering shutdown or rollback when harmful patterns appear.
  • Elevate ethics and compliance: equip managers with escalation protocols and cross-functional ethics boards to rapidly respond when AI-driven actions conflict with organizational values or external norms.

Scenario summary: Algorithmic overloads in fragile firms (Agentic AI, low alignment & weak governance)
  • Description: Highly autonomous AI systems lead operations in firms with poorly aligned objectives and weak oversight; capability control lags capability growth, so agentic systems accumulate de facto power; opaque systems optimize metrics while embedding biases that humans fail to detect.
  • Implications: Managers risk becoming rubber stamps for AI recommendations; algorithmic dominance erodes managerial legitimacy; misaligned agentic systems pursuing local objectives lead to ethical, legal, and reputational crises.
  • Strategies: Reassert human veto power; demand transparency and interpretability of models; slow down unsafe autonomy (adopt stunting and tripwires); elevate ethics and compliance.

2.3 Craft leaders in co-intelligent firms

(Tool-like AI, High alignment & strong governance)

Managers operate with powerful but well-governed AI copilots embedded in every workflow: forecasting, scenario planning, people analytics, and experimentation design. AI remains clearly subordinate to human decision-makers, with explainability, audit trails, and human-in-the-loop policies standard practice. Following Mollick’s co‑intelligence framing, leaders become orchestrators of “what I do, what we do with AI, what AI does,” deliberately choosing when to collaborate and when to retain manual control. AI is treated like an expert colleague whose recommendations must be interrogated, stress-tested, and contextualized, not blindly accepted.

2.3.1 Implications for managerial work

  • Core managerial value shifts from information aggregation to judgment: setting direction, weighing trade-offs, and integrating AI-generated options with tacit knowledge and stakeholder values.
  • Routine analytical and reporting tasks largely vanish from managers’ plates, freeing capacity for coaching, cross-functional alignment, and narrative-building around choices.
  • Managers must be adept in managing alignment issues and mitigating bias, able to spot mis-specified objectives and contest AI outputs when they conflict with ethical or strategic intent.

2.3.2 Strategic prescriptions

  • Invest in AI literacy and critical thinking: train managers to prompt, probe, and challenge AI systems, including basic understanding of data, bias, and alignment pathologies described by Christian.
  • Codify human decision prerogatives: clarify which decisions AI may recommend on, which it may pre-authorize under thresholds, and which remain strictly human, especially in people-related and high-stakes domains.
  • Build governance and oversight: establish model risk committees, escalation paths, and “tripwires” for anomalous behaviour (organizational analogues to the control and capability-constraining methods that Bostrom advocates at a societal scale).
  • Re-design roles around co‑intelligence: job descriptions for managers emphasize storytelling, stakeholder engagement, ethics, and system design over reporting and basic analysis.

Scenario summary: Craft leaders in co-intelligent firms (Tool-like AI, high alignment & strong governance)
  • Description: Managers work with well-governed AI copilots embedded in every workflow; AI remains subordinate to human decision-makers; managers choose when to collaborate and when to retain control; AI is treated as an expert colleague whose recommendations are not blindly accepted.
  • Implications: Managerial value shifts from information aggregation to judgment; routine analytical and reporting tasks disappear from managers’ plates; managers need to be adept at managing alignment and mitigating biases.
  • Strategies: Invest in AI literacy and critical thinking; codify human decision prerogatives; build governance and oversight; redesign roles around co-intelligence.

2.4 Shadow automation and managerial drift

(Tool-like AI, Low alignment & weak governance)

AI remains officially a “tool,” but is deployed chaotically: individual managers and teams adopt various assistants and analytics tools without coherent standards or governance, a classic form of “shadow AI.” Alignment problems emerge not from superintelligent agents, but from mis-specified prompts, biased training data, and unverified outputs that seep into everyday decisions. Organizational AI maturity is low; AI is widely used for drafting emails, slide decks, and analyses, but validation and accountability are informal and inconsistent.

2.4.1 Implications for managerial work

  • Managerial work becomes unevenly augmented: some managers leverage AI effectively, dramatically increasing productivity and quality; others underuse or misuse it, widening performance dispersion.
  • Documentation, reporting, and “knowledge products” proliferate but may be shallow or unreliable, as AI-written material is insufficiently fact-checked.
  • Hidden dependencies on external tools grow; the organization underestimates how much decision logic is now embedded in untracked prompts and private workflows, creating operational and knowledge risk.

2.4.2 Strategic prescriptions

  • Move from shadow use to managed experimentation: establish clear guidelines for acceptable AI uses, required human verification, and data-protection boundaries while encouraging pilots.
  • Standardize quality controls: require managers to validate AI-generated analyses with baseline checks, multiple models, or sampling, reflecting Christian’s emphasis on cautious reliance and error analysis.
  • Capture and share best practices: treat prompts, workflows, and AI use cases as organizational knowledge assets; create internal libraries and communities of practice.
  • Use AI to audit AI: deploy meta-tools that scan for AI-generated content, bias, and inconsistencies, helping managers assess which outputs demand closer scrutiny.

Scenario summary: Shadow automation and managerial drift (Tool-like AI, low alignment & weak governance)
  • Description: AI remains officially a tool but is chaotically deployed with no coherent standards or governance (shadow AI); alignment problems arise from mis-specified prompts, biased training data, and unverified outputs; organizational AI maturity is low (AI is used for low-end tasks).
  • Implications: Managerial work becomes unevenly augmented; unreliable, unaudited “knowledge products” proliferate; hidden dependencies on external tools create high operational and knowledge risks.
  • Strategies: Move from shadow use to managed experimentation; standardize quality controls; capture and share best practices; use AI to audit AI (meta-tools).

3.0 Conclusion (not the last word!)

As we can see, across all four quadrants, managerial advantage comes from understanding AI’s capabilities and limits, engaging with alignment and governance, and deliberately designing roles where human judgment, values, and relationships remain central.

As we stand at the cusp of the AI revolution, I am reminded of my time in graduate school (mid 1990s), when the Internet was exploding and information was becoming ubiquitously available across the globe. Many opinions were floated about how and when “Google” (and other) search engines would limit reading habits. We have seen how that played out: Internet search has enabled and empowered a great deal of research and managerial judgment.

There are similar concerns about long-term cognitive impacts as rapid AI adoption leads to the ossification of certain habits (in managers) and processes (in organizations). What these routines will bring to the table remains to be seen, and these are certainly not the last words written on this!

Cheers.

(c) 2026. R Srinivasan

Making Artificial Intelligence (AI) work

This is a follow-up to my post last week on Moravec’s Paradox in AI. In that post, I enumerated five major challenges for AI and robotics: 1) training machines to interpret languages, 2) perfecting machine-to-man communication, 3) designing social robots, 4) developing multi-functional robots, and 5) helping robots make judgments. All of that focused on what programmers need to do. In this short post, I draw out the implications for what organisations and leaders need to do to integrate AI (and, for that matter, any hype-tech) into their work and lives.

Most of the hype around technologies is built around a series of gulfs: gulfs of motivation, cognition, and communication. They are surely related to each other. Let me explain them in reverse order.

Three gulfs

The first gulf is the communication gap between developers and managers. Developers know how to talk to machines: they codify processes and provide step-by-step instructions that help machines perform their tasks. Managers, especially those facing consumers, speak in stories and anecdotes, whereas developers need precise instructions that can be translated into pseudo-code. A customer journey to be digitalised, for instance, needs to be broken down into a series of concrete steps.

Let me give you an example from a firm I worked with. A multi-brand retail outlet wanted to digitalise customer walk-ins and guide customers to the right floor or aisle. Sounds simple, right? The brief to the developers was to build a robot that would “replace the greeter”. The development team went about building a voice-activated humanoid robot that would greet a customer as she walked in, ask her a set of standard questions (like “what are you looking for today?”) and respond with answers (like “we have a lot of new arrivals on the third floor”). The tests went very well, except that the developers had not understood that only a small proportion of customers arrive alone. When customers came as couples, families, or groups, the robot treated them as separate customers and tried responding to each of them individually. What made things worse was that the robot could not distinguish children’s voices from female voices, and greeted even young boys as girls or women. The expensive project remains a toy in a corner of the reception today, witnessing the resurgence of plastic-smiling greeters. The entire problem could have been solved by a set of interactive tablets.

Because the managers simply asked the developers to “replace the greeter”, the developers created an over-engineered but inadequate humanoid. The reverse can also happen, where developers focus only on the minimum features, making the entire exercise useless. To bridge this gulf, we either train managers to write pseudo-code, or get developers to visualise customer journeys.
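To make the pseudo-code remedy concrete, here is a minimal sketch of how even a manager-written walkthrough of the journey could have surfaced the group-arrival case before any humanoid was built; the steps, floor mapping, and wording are invented for illustration, not the retailer’s actual specification.

```python
# A manager-level "pseudo-code" of the walk-in journey, written before any
# hardware decision is made. All names and mappings here are hypothetical.

FLOOR_MAP = {"footwear": "second", "new arrivals": "third"}

def ask_and_direct(visitor):
    """Handle a single visitor: direct them if we know what they want, else ask."""
    if visitor.get("looking_for") in FLOOR_MAP:
        return f"Please head to the {FLOOR_MAP[visitor['looking_for']]} floor."
    return "What are you looking for today?"

def greet_walk_in(party):
    """party is a list of visitors; groups are the common case, not the exception."""
    if len(party) == 1:
        return ask_and_direct(party[0])
    # Address the group once instead of interrogating each member separately.
    return f"Welcome, all {len(party)} of you! The kiosk by the lift will point you to the right floor."

print(greet_walk_in([{"looking_for": "new arrivals"}]))
print(greet_walk_in([{"looking_for": "footwear"}, {"age_group": "child"}]))
```

Writing the journey down at this level of precision forces the question "what if a family walks in?" long before anyone orders a robot.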

The second gulf is that of algorithmic and creative thinking. Business development executives and strategy officers think in terms of stretch goals and focus on what is expected in the near and more distant future. Developers, on the other hand, are forced to work with technologies in the realm of current possibilities. They refer to all this fuzzy language, aspirational goal-setting, and corporatese as “gas” (to borrow a phrase from Indian business school students). Science and technology education at primary and secondary school is largely about learning algorithmic thinking. However, as managers gain experience and learn about the context, they are trained to think beyond algorithms in the name of creativity and innovation. While both creative and algorithmic thinking are important, the difference accentuates the communication gap discussed above.

Algorithmic thinking is a way of getting to a solution through the clear definition of the steps needed: nothing happens by magic. Rather than coming up with a single answer to a problem, like 42, pupils develop algorithms: instructions or rules that, if followed precisely (whether by a person or a computer), lead to answers to both the original and similar problems[1].

Creative thinking means looking at something in a new way. It is the very definition of “thinking outside the box.” Often, creativity in this sense involves what is called lateral thinking, or the ability to perceive patterns that are not obvious. Creative people have the ability to devise new ways to carry out tasks, solve problems, and meet challenges[2].

The third gulf is that of reinforcement. Human resource professionals and machine learning experts use the same word with much the same meaning. Positive reinforcement rewards desired behaviour, while punishment (often loosely called negative reinforcement) discourages undesirable behaviour. Reinforcement is an integral part of human learning from childhood, whereas machines have to be explicitly programmed to learn from it. Managers are used to employing reinforcement in various forms to get their work done. However, artificially intelligent systems do not respond to such reinforcement (yet). Remember the greeter-robot we discussed earlier: what does the robot do when people are surprised, shocked, or even startled as it starts speaking? Can we programme the robot to recognise such reactions and respond appropriately? Most developers would use algorithmic thinking to programme the robot to understand and respond to rational actions from people, not to emotions, sarcasm, and figures of speech. Natural language processing (NLP) can take us some distance, but helping the machine learn continuously and cumulatively requires a lot of work.
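Purely as an illustration of what “programming reinforcement into a machine” looks like, here is a toy epsilon-greedy bandit in Python; the action names, the 0.9 reward probability, and the greeter scenario are invented for this sketch and are not from any real deployment.

```python
import random

# Two candidate behaviours for the greeter robot when a visitor looks startled.
actions = ["pause_and_apologise", "keep_talking"]
value = {a: 0.0 for a in actions}   # learned estimate of each action's reward
count = {a: 0 for a in actions}

def feedback(action):
    """Stand-in for the human reaction: pausing is usually received well."""
    return 1.0 if action == "pause_and_apologise" and random.random() < 0.9 else 0.0

for step in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    r = feedback(a)                        # the "reinforcement" signal
    count[a] += 1
    value[a] += (r - value[a]) / count[a]  # incremental mean update

print(value)  # the robot learns that pausing earns the higher reward
```

The point of the sketch is that the reward signal has to be designed and wired in by a developer; nothing in the machine responds to a manager’s nudges the way an employee would.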

Those who wonder what happened!

There are three kinds of people in the world: those who make things happen, those who watch things happen, and those who wonder what happened! I am not sure whether this quote is attributable to a specific person, but when I was learning change management as an eager management student, I heard my professor repeat it in every session. Similarly, some managers (and their organizations) wonder what happened when their AI projects do not yield the required results.

Unless these three gulfs are bridged, organizations cannot reap adequate returns on their AI investments. Organizations need to build appropriate cultures and processes that bridge these gulfs. It is imperative that leaders invest in understanding the potential and limitations of AI, while developers appreciate business realities. I am not sure how this will happen, or when these gulfs will be bridged, if at all.

Comments and experiences welcome.

Cheers.

© 2019. R Srinivasan, IIM Bangalore.


[1] https://teachinglondoncomputing.org/resources/developing-computational-thinking/algorithmic-thinking/

[2] https://www.thebalancecareers.com/creative-thinking-definition-with-examples-2063744

Moravec’s Paradox in Artificial Intelligence: Implications for the future of work and skills

What is artificial in AI?

As the four-day long weekend loomed and I was closing an executive education programme focused on digitalization and technology, especially in the context of India and emerging economies, I read this piece on AI ethics by IIMB alumnus Dayasindhu. He talks about the differences between teleological and deontological perspectives on AI and ethics. It got me thinking about technological unemployment (unemployment caused by firms’ adoption of technologies such as AI and robotics). For those of you interested in a little history, read this piece (also by Dayasindhu) on how India (especially the Indian banking industry) adopted technology.

In my classes on digital transformation, I introduce the potential of Artificial Intelligence (AI) and its implications for work and skills. My students (in India and Germany) and Executive Education participants will remember these discussions. One of my favourite conversations has been about what kinds of jobs will get disrupted thanks to AI and robotics. I argue that, contrary to popular wisdom, we will have robots washing our clothes much earlier than robots folding the laundry. While washing clothes is a simple operation (for robots), folding laundry requires a very complex calculation to identify different clothes of irregular shape, fabric, and weight (read more here). And most robots we have are single-purpose, made for a specific task, as compared to the human arm, which is truly multi-purpose (read more here). Yes, there have been great advancements on these two fronts, but the challenge remains: AI has progressed far more in certain skills that seem very complex for humans, whereas robots struggle to perform certain tasks that seem very easy to humans, like riding a bicycle (which a four-year-old child can do with relative ease). The explanation lies in Moravec’s Paradox, which Hans Moravec and others articulated in the 1980s!

What is Moravec’s Paradox?

“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

Moravec, 1988

It is very difficult to reverse engineer certain human skills that are unconscious. It is easier to reverse engineer motor processes (think factory automation), cognitive skills (think big data analytics), or routinised computations (think predictive/ prescriptive algorithms).

“In general, we’re less aware of what our minds do best…. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.

Minsky, 1986

Moravec’s paradox proposes that this distinction has its roots in evolution. As a species, we have spent millions of years in the selection, mutation, and retention of specific skills that have allowed us to survive and succeed in this world. Examples of such skills include learning a language, sensory-motor skills like riding a bicycle, and drawing basic art.

What are the challenges?

Based on my reading and experience, I envisage five major challenges for AI and robotics in the days to come.

One, artificially intelligent machines need to be trained to learn languages. Yes, there have been great advances in natural language processing (NLP) that have contributed to voice recognition and responses. However, there are still gaps in how machines interpret sarcasm and figures of speech. A couple of years ago, a man tweeted to an airline about his misplaced luggage in a sarcastic tone, and the customer service bot responded with thanks, much to the amusement of many social media users. NLP involves the ability to read, decipher, understand, and make sense of natural language. Codifying grammar in complex languages like English, accentuated by differences in accent, can make deciphering spoken language difficult for machines. Add to that contextually significant figures of speech and idioms: what do you expect computers to understand when you say, “the old man down the street kicked the bucket”?

Two, beyond interpreting language, machine-to-man communication is tricky. We can build industrial “pick-and-place” robots; can we have “give-and-take” robots in customer service settings? Imagine a food-serving robot in a fine dining restaurant … how do we train the robot to read the mood and suggest the right cuisine and music to suit the occasion? Most of the robots we have as I write this exhibit puppy-like behaviour, a far cry from naturally intelligent human beings. Humans need friendliness, understanding, and empathy in their social interactions, which are very complex to programme.

Three, there have been many advances in environmental awareness and response. Self-navigation and communication have improved significantly thanks to technologies like Simultaneous Localisation and Mapping (SLAM), and we are able to improve augmented reality (AR) experiences visually and sensorily. Still, having human beings in the midst of a robot swarm is fraught with risks. Not only do robots need to sense and respond to the location and movement of other robots, they also need to respond to the “unpredictable” movements and reactions of humans. When presented with danger, different humans respond differently based on their psychologies and personalities, most often shaped by a series of prior experiences and perceived self-efficacies. Robots still find it difficult to sense, characterise, and respond to such interactions. Today’s social robots are designed for short interactions with humans, not for learning the social and moral norms that lead to sustained, long-term relationships.

Four, developing multi-functional robots that can reason. Reasoning is the ability to interpret something in a logical way in order to form a conclusion or judgment. For instance, it is easy for a robot to pick up a screwdriver from a bin, but quite something else for it to pick the screwdriver up in the right orientation and use it appropriately. It needs to be programmed to realise when the tool is held in the wrong orientation and to self-correct to the right orientation for optimal use.

Five, even when we can equip the robot with a variety of sensors and train it to reason logically through detailed pattern evaluation and algorithms, it will be difficult to train it to make judgments, for instance, to decide what is good or evil. Check out MIT’s Moral Machine here. Apart from building morality into the machine, how can we programme it to be not just consistent in behaviour, but also fair, using appropriate criteria for decision-making? Imagine a table-cleaning robot that knows where to leave the cloth when someone rings the doorbell. It needs to be programmed to understand when to stop one activity and when to start another. Given the variety of contexts humans engage with on a daily basis, what they learn naturally will surely take complex programming to replicate.

Data privacy, security and accountability

Add to all these the issues around data privacy and security. Given that we need to provide robots and AI systems with a lot of data about humans, and that we have limited ability to programme every behaviour, privacy becomes critical. Consent is the key word in privacy, but when we are driving alongside an autonomous vehicle (AV), it collects so much data to navigate that we need strong governance and accountability. When an AV is involved in an accident with a pedestrian, who is accountable: the emergency driver in the AV; the programmer of the AV; the manufacturer of the vehicle; any of the hardware manufacturers, like the makers of the camera or sensors that did not do their jobs properly; or the cloud service provider that did not respond soon enough for the AV to save lives? Such questions are pertinent, and too important to be relegated to a later date, after the fact.

AI induced technological unemployment

At the end of all these conversations, when I look around me, I see three kinds of jobs being lost to technological change: a) low-end white-collar jobs, like accountants and clerks; b) low-skilled data analysis jobs, like those at a pathology lab interpreting a clinical report or a law apprentice doing contract reviews; and c) hazardous-monotonous or random-exceptional work, like monitoring a volcano’s activity or seismic measurements for earthquakes.

The traditional blue-collar jobs, like factory workers, bus conductors and drivers, repair and refurbishment mechanics, capital machinery installers, agricultural field workers, and housekeeping staff, will take a long time to be lost to AI and robotics: primarily because these jobs are heavily unpredictable; secondly, because they involve significant judgment and reasoning; and thirdly, because the costs of automating them would likely far outweigh the benefits (due to low labor costs and high coordination costs). Not all blue-collar jobs are safe, though. Take, for instance, staff at warehouses: with pick-and-place robots, automatic forklifts, and technologies like RFID sensors, a lot of jobs could be lost. Especially where quick response is the source of competitive advantage in warehousing operations, automation will greatly reduce errors and increase the reliability of operations.

As Byron Reese wrote in the book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, “order takers at fast food places may be replaced by machines, but the people who clean-up the restaurant at night won’t be. The jobs that automation affects will be spread throughout the wage spectrum.”

In summary, in order to understand the nature and quantity of technological unemployment (job losses due to AI and robotics), we need to ask three questions: is the task codifiable? (on average, tasks that the human race has learnt in the past few decades are the easiest to codify); is it possible to reverse-engineer it? (can we break the task into smaller tasks?); and does the task lend itself to a series of decision rules? (can we create a comprehensive set of decision rules that could be programmed into neat decision trees or matrices?). If you answered yes to these questions with reference to your own job or task, go learn something new and look for another job!
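As a toy illustration of turning those three questions into a decision rule, the small function below hard-codes the yes/no checks; the thresholds, labels, and example tasks are my own and purely illustrative.

```python
def automation_exposure(codifiable: bool, reverse_engineerable: bool, rule_based: bool) -> str:
    """Toy decision rule combining the three questions in the text."""
    score = sum([codifiable, reverse_engineerable, rule_based])
    if score == 3:
        return "high exposure: go learn something new"
    if score == 2:
        return "moderate exposure: parts of the job will be automated"
    return "low exposure (for now)"

# Hypothetical assessments, purely for illustration.
print(automation_exposure(True, True, True))     # e.g., routine contract review
print(automation_exposure(False, False, False))  # e.g., housekeeping in unpredictable settings
```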

Cheers.

© 2019. R Srinivasan, IIM Bangalore