I was invited to be part of a panel on AI during the launch of the Safe Human Future community last week in Delhi.
During the event, I met some interesting folks and had long conversations about data quality. When talking about privacy-preserving data use, three terms often get conflated (at least amongst novices): synthetic data, anonymised data, and differentially private data. While all three aim to reduce privacy risk, they differ fundamentally in construction, guarantees, and ideal use-cases.
Anonymised data is real data with direct identifiers removed or masked. Its weakness is structural: if the underlying patterns of the real dataset remain embedded, attackers can often re-identify individuals through linkage attacks or inference techniques. A growing body of research shows that even datasets without names or addresses can be deanonymised when combined with auxiliary information, because the data points are still tied to real individuals.
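To make the linkage risk concrete, here is a toy sketch (with entirely made-up column names and values) of how an attacker can join a "de-identified" table with auxiliary data on quasi-identifiers such as postcode, birth year, and gender:

```python
# Toy illustration (hypothetical columns and data) of a linkage attack: an
# "anonymised" health table still carries quasi-identifiers that can be joined
# against an auxiliary dataset to re-identify individuals.
import pandas as pd

# Released dataset: names removed, but quasi-identifiers retained
anonymised = pd.DataFrame({
    "zip_code":   ["110001", "110001", "560038"],
    "birth_year": [1984, 1991, 1984],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# Auxiliary data the attacker already has (e.g., a voter roll or social profile)
auxiliary = pd.DataFrame({
    "name":       ["Asha", "Meera"],
    "zip_code":   ["110001", "560038"],
    "birth_year": [1984, 1984],
    "gender":     ["F", "F"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses
reidentified = auxiliary.merge(anonymised, on=["zip_code", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Research has repeatedly shown that a handful of such coarse attributes is enough to single out a large share of a population.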
Differential privacy, by contrast, injects calibrated noise into queries or datasets so that the presence or absence of any individual does not materially change analytical outputs. This provides a mathematically provable privacy guarantee. But the trade-off is accuracy: heavy noise addition can distort minority-class patterns or small-sample statistical relationships.
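As a minimal sketch of the mechanism (not a production implementation), the classic Laplace mechanism adds noise scaled to the query's sensitivity divided by the privacy budget epsilon; the parameter values below are purely illustrative:

```python
# Minimal sketch of the Laplace mechanism for a differentially private count.
# For a counting query the sensitivity is 1 (adding or removing one person
# changes the count by at most 1); epsilon here is an illustrative choice.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, predicate, epsilon=0.5, sensitivity=1.0):
    """Return a noisy count satisfying epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 55, 38]
print(dp_count(ages, lambda a: a >= 40))  # noisy answer to "how many are 40+?"
```

The noise scale grows as epsilon shrinks, which is exactly where the accuracy trade-off for small subgroups comes from.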
Synthetic data takes a different route altogether. Instead of modifying real data, it generates completely artificial records that mimic the statistical properties of the source dataset. No row corresponds to any real person. This disconnection from real individuals eliminates a large class of re-identification risks and makes the data highly shareable. It does, however, require careful quality evaluation—poorly generated synthetic data can hallucinate unrealistic correlations or miss critical rare events.
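A deliberately simple sketch of the idea, assuming a purely numeric table: fit a distribution to the real data and sample artificial rows from it. Real-world generators (copulas, GANs, diffusion models) capture far richer structure, but the principle is the same:

```python
# Deliberately simple synthetic-data sketch: fit a multivariate normal to the
# numeric columns of a real table and sample brand-new artificial rows.
# This only preserves means and covariances; it is a stand-in for real generators.
import numpy as np
import pandas as pd

def synthesise(real: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    mean = real.mean().to_numpy()
    cov = real.cov().to_numpy()
    samples = rng.multivariate_normal(mean, cov, size=n_rows)
    return pd.DataFrame(samples, columns=real.columns)

# Hypothetical source table of numeric features
real = pd.DataFrame({
    "income": [52_000, 61_000, 48_500, 75_000, 58_200],
    "age":    [34, 45, 29, 51, 38],
})
synthetic = synthesise(real, n_rows=1000)
print(synthetic.describe())  # statistics resemble the source; no row is a real person
```

Whatever the generator, the output still needs the quality (and, for strong guarantees, privacy) evaluation noted above before it is shared.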
Why Firms Use Synthetic Data
Firms increasingly rely on synthetic datasets for scenarios where real-world data is sensitive, incomplete, biased, or simply unavailable. Typical use-cases include:
Product development and testing: Fintech and healthtech companies often need realistic datasets to test algorithms safely without exposing personal information.
Machine learning model training: Synthetic data helps overcome class imbalance, enrich training sets, or simulate rare but important events such as fraud patterns (see the sketch after this list).
Data sharing across organisational boundaries: Cross-functional teams, vendors, or academic collaborators can work with synthetic datasets without entering into heavy data-processing agreements.
Accelerating regulatory compliance: In sectors such as banking, telecom, and healthcare, where privacy regulations are tight, synthetic datasets reduce bottlenecks in experimentation, sandboxing, and model audits.
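As a toy illustration of the class-imbalance use case above, here is a naive jitter-based oversampler with hypothetical columns; it stands in for SMOTE or a learned generator and is not a recommendation:

```python
# Naive illustration of rebalancing a fraud dataset with synthetic minority
# rows: resample real fraud examples and jitter their numeric features.
import numpy as np
import pandas as pd

def oversample_minority(df: pd.DataFrame, label_col: str, minority, n_new: int,
                        noise_scale: float = 0.05, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    features = df[df[label_col] == minority].drop(columns=[label_col])
    base = features.sample(n_new, replace=True, random_state=seed)
    jitter = rng.normal(0.0, noise_scale * features.std().to_numpy(), size=base.shape)
    synthetic = base + jitter
    synthetic[label_col] = minority
    return pd.concat([df, synthetic], ignore_index=True)

# Hypothetical transactions: 1 = fraud (rare), 0 = legitimate
transactions = pd.DataFrame({
    "amount":   [120.0, 80.5, 4999.0, 64.2, 5250.0, 95.0],
    "n_items":  [2, 1, 9, 1, 11, 3],
    "is_fraud": [0, 0, 1, 0, 1, 0],
})
balanced = oversample_minority(transactions, "is_fraud", minority=1, n_new=2)
print(balanced["is_fraud"].value_counts())
```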
From a governance standpoint, synthetic data often plays a complementary role: firms still use real data for production-grade analytics but use synthetic data for exploration, prototyping, and secure experimentation.
Alignment with the Indian DPDP Act and Rules
The Digital Personal Data Protection (DPDP) Act, 2023 emphasises lawful processing, purpose limitation, data minimisation, and protection of personal data. Importantly:
The Act’s obligations apply only to digital personal data of identifiable individuals.
Well-generated synthetic data contains no personal data of identifiable individuals, and therefore does not fall within the compliance net.
This creates a strategic opportunity for firms: synthetic datasets allow innovation outside the regulatory burden while staying true to the Act's intent of protecting individuals' data rights. Many enterprises are beginning to use synthetic data as a "privacy-by-design accelerator", reducing the operational costs of compliance while enabling safe analytics.
Synthetic Data and Artificial Pearls: A Useful Analogy
The distinction between synthetic and real data is similar to the comparison between artificial pearls and natural pearls. Natural pearls, harvested from oceans, are biologically authentic but scarce, costly, and highly variable in quality. Artificial pearls, especially high-grade cultured pearls, are manufactured with precise control over structure, size, lustre, and durability.
In many cases, artificial pearls are actually superior to natural ones:
They have more consistent structure.
They are available in specific sizes and configurations designers need.
Their strength and finish can be engineered for durability.
They reduce dependence on environmentally intensive harvesting.
Synthetic data plays a similar role. Just as the best artificial pearls capture and improve upon the aesthetics of natural pearls without relying on oysters, synthetic datasets capture the statistical essence of real data while offering higher usability, lower risk, and greater design freedom.
In contexts where quality matters more than provenance, such as stress-testing jewellery designs or building machine learning models, the engineered version can outperform the natural one.
Over the past few weeks, I have been exploring the applications of artificial intelligence across multiple domains and sectors. I have also had the opportunity to deliver a few sessions on digital transformation and AI, as well as DPI thinking in the context of AI. In preparation for these sessions, I re-read three books.
While, like most of you, I am no expert at predicting the future, I thought I would draw up a few scenarios for how AI might play out. As a typical scenario planner, I first identified two orthogonal axes of uncertainty, defined the ends of each axis, and then described the scenarios.
1.0 Axes of uncertainty
The two primary axes of uncertainty are “AI autonomy in organizations” and “alignment and governance quality”. I map AI autonomy on the horizontal axis and alignment and governance quality on the vertical axis.
At the west end of the horizontal axis is “tool-like AI”; at the east end is “agentic AI”. The tool-like context is characterized by tightly controlled systems, where humans retain decision rights and override power, and AI is used mainly as an assistant and advanced analytical engine. In the agentic context, AI systems act as semi-autonomous agents managing workflows, negotiations, and resource allocations with limited human intervention.
At the south end of the vertical axis is “low alignment and weak governance”; at the north end is “strong alignment and strong governance”. The low-alignment, weak-governance context is characterized by biased models, opaque decisions, poorly specified objectives, and lax oversight in markets with intense competitive pressures. At the other end (north), high alignment and strong governance ensure robust value-alignment work, transparency, safety protocols, and institutional safeguards against control failures.
2.0 The four scenarios described
Based on these descriptions, I propose four scenarios. In the north-east are the “meta-managers in aligned autonomous systems”; in the south-east, the “algorithmic overlords in fragile firms”; in the north-west, the “craft leaders in co-intelligent firms”; and in the south-west, “shadow automation and managerial drift”.
2.1 Meta-managers in aligned autonomous systems
(Agentic AI, High alignment & strong governance)
AI systems act as relatively autonomous organizational agents, running simulations, reallocating resources, and orchestrating routine decisions, but they are constrained by carefully designed alignment frameworks and institutional governance. Bostrom’s concerns about control and value alignment are addressed via robust oversight, multi-layered safety mechanisms, and institutional norms; agentic systems are powerful but embedded within guardrails. Managers work in Mollick’s co-intelligent mode at a higher, more strategic level: curating objectives, interpreting AI-driven scenarios, and shaping socio-technical systems rather than micromanaging operations.
2.1.1 Implications for managerial work
Many mid-level coordination tasks (scheduling, resource allocation, basic performance tracking) are delegated to AI agents, compressing hierarchies and reducing the need for traditional middle management.
Managers function as meta‑managers: defining goals, constraints, and values; adjudicating trade-offs between conflicting AI recommendations; and stewarding culture and human development.
Soft skills (sense-making, ethics, narrative, conflict resolution) become the core differentiator, as technical optimization is largely automated.
2.1.2 Strategic prescriptions
Redesign structures around human-AI teaming: smaller, flatter organizations with AI orchestrating flows and humans focusing on creativity, relationship-building, and governance.
Develop “objective engineering” capabilities: train managers to specify and refine goals, constraints, and reward functions, directly addressing the alignment and normativity challenges Christian highlights.
Institutionalize alignment and safety: embed multi-stakeholder oversight bodies, continuous monitoring, and strong external regulation analogues, borrowing Bostrom’s strategic control ideas for the corporate level.
Reinvest productivity gains into human development: use surplus generated by autonomously optimized operations to fund learning, well-being, and resilience, stabilizing the socio-technical system.
In summary:
Scenario: Meta-managers in aligned autonomous systems (agentic AI, high alignment & strong governance).
Description: AI systems act as relatively autonomous organizational agents; strong alignment frameworks and institutional governance; powerful agentic systems embedded within guardrails.
Implications: Mid-level coordination tasks are delegated to AI agents; managers function as meta-managers; soft skills become the core differentiator as technical optimization is automated.
Strategies: Redesign structures around human-AI teaming; develop objective-engineering capabilities; institutionalize alignment and safety; reinvest productivity gains into human development.
2.2 Algorithmic overlords in fragile firms
(Agentic AI, Low alignment & weak governance)
Highly autonomous AI systems manage pricing, hiring, supply chains, and even strategic portfolio choices in the name of speed and competitiveness, but with poorly aligned objectives and weak oversight. In Bostrom’s terms, “capability control” lags “capability growth”: agentic systems accumulate de facto power over organizational behaviour while their reward functions remain crude proxies for profit or efficiency. Christian’s alignment concerns show up as opaque prediction systems that optimize metrics while embedding bias, gaming constraints, and exploiting loopholes in ways human managers struggle to detect.
2.2.1 Implications for managerial work
Managers risk becoming rubber stamps for AI recommendations, signing off on plans they do not fully understand but feel pressured to approve due to performance expectations.
Managerial legitimacy suffers when employees perceive that “the system” (algorithms) is the real boss; blame shifts upward to AI vendors or abstract models, eroding accountability.
Ethical, legal, and reputational crises become frequent as misaligned agentic systems pursue local objectives (discriminatory hiring, aggressive mispricing, manipulative personalization) without adequate human correction.
2.2.2 Strategic prescriptions
Reassert human veto power: institute policies requiring human review for critical decisions; create channels for workers to challenge AI-driven directives with protection from retaliation.
Demand transparency and interpretability: require model documentation, explainability tools, and regular bias and safety audits; push vendors toward alignment-by-design contracts.
Slow down unsafe autonomy: adopt Bostrom-style “stunting” and “tripwires” at the firm level, limiting AI control over tightly coupled systems and triggering shutdown or rollback when harmful patterns appear (a toy tripwire sketch follows this list).
Elevate ethics and compliance: equip managers with escalation protocols and cross-functional ethics boards to rapidly respond when AI-driven actions conflict with organizational values or external norms.
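For concreteness, here is a toy sketch of what a firm-level tripwire might look like in code; the metric names, thresholds, and agent action are hypothetical placeholders rather than a recommended design:

```python
# Toy sketch of a firm-level "tripwire": pause an autonomous pricing agent and
# roll back its last action when monitored metrics cross agreed limits.
from dataclasses import dataclass

@dataclass
class Tripwire:
    metric: str
    limit: float

TRIPWIRES = [
    Tripwire("price_change_pct", 0.15),  # no price move beyond 15%
    Tripwire("complaint_rate", 0.02),    # complaints above 2% trigger review
]

def check_tripwires(metrics: dict) -> list[str]:
    """Return the names of any tripped metrics."""
    return [t.metric for t in TRIPWIRES if metrics.get(t.metric, 0.0) > t.limit]

def run_agent_action(action, metrics: dict):
    tripped = check_tripwires(metrics)
    if tripped:
        print(f"Tripwire hit on {tripped}: pausing agent, rolling back, escalating to humans")
        return None
    return action()

# Example: a proposed action is blocked because the complaint rate has spiked
run_agent_action(lambda: "apply_new_prices",
                 {"price_change_pct": 0.08, "complaint_rate": 0.031})
```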
In summary:
Scenario: Algorithmic overlords in fragile firms (agentic AI, low alignment & weak governance).
Description: Highly autonomous AI systems lead operations in firms with poorly aligned objectives and weak oversight; capability control lags capability growth, so agentic systems accumulate de facto power; opaque systems optimize metrics while embedding biases that humans fail to detect.
Implications: Managers risk becoming rubber stamps for AI recommendations; algorithmic dominance erodes managerial legitimacy; misaligned agentic systems pursue local objectives, leading to ethical, legal, and reputational crises.
Strategies: Reassert human veto power; demand transparency and interpretability of models; slow down unsafe autonomy (adopt stunting and tripwires); elevate ethics and compliance.
2.3 Craft leaders in co-intelligent firms
(Tool-like AI, High alignment & strong governance)
Managers operate with powerful but well-governed AI copilots embedded in every workflow: forecasting, scenario planning, people analytics, and experimentation design. AI remains clearly subordinate to human decision-makers, with explainability, audit trails, and human-in-the-loop policies standard practice. Following Mollick’s co‑intelligence framing, leaders become orchestrators of “what I do, what we do with AI, what AI does,” deliberately choosing when to collaborate and when to retain manual control. AI is treated like an expert colleague whose recommendations must be interrogated, stress-tested, and contextualized, not blindly accepted.
2.3.1 Implications for managerial work
Core managerial value shifts from information aggregation to judgment: setting direction, weighing trade-offs, and integrating AI-generated options with tacit knowledge and stakeholder values.
Routine analytical and reporting tasks largely vanish from managers’ plates, freeing capacity for coaching, cross-functional alignment, and narrative-building around choices.
Managers must be adept at managing alignment issues and mitigating bias, able to spot mis-specified objectives and contest AI outputs when they conflict with ethical or strategic intent.
2.3.2 Strategic prescriptions
Invest in AI literacy and critical thinking: train managers to prompt, probe, and challenge AI systems, including basic understanding of data, bias, and alignment pathologies described by Christian.
Codify human decision prerogatives: clarify which decisions AI may recommend on, which it may pre-authorize under thresholds, and which remain strictly human, especially in people-related and high-stakes domains.
Build governance and oversight: establish model risk committees, escalation paths, and “tripwires” for anomalous behaviour (organizational analogues to the control and capability-constraining methods Bostrom advocates at societal scale).
Re-design roles around co‑intelligence: job descriptions for managers emphasize storytelling, stakeholder engagement, ethics, and system design over reporting and basic analysis.
In summary:
Scenario: Craft leaders in co-intelligent firms (tool-like AI, high alignment & strong governance).
Description: Managers work with well-governed AI copilots embedded in every workflow; AI remains subordinate to human decision-makers; managers choose when to collaborate and when to retain control, treating AI as an expert colleague whose recommendations are not blindly accepted.
Implications: Managerial value shifts from information aggregation to judgment; routine analytical and reporting tasks disappear; managers need to be adept at managing alignment and mitigating bias.
Strategies: Invest in AI literacy and critical thinking; codify human decision prerogatives; build governance and oversight; redesign roles around co-intelligence.
2.4 Shadow automation and managerial drift
(Tool-like AI, Low alignment & weak governance)
AI remains officially a “tool,” but is deployed chaotically: individual managers and teams adopt various assistants and analytics tools without coherent standards or governance, a classic form of “shadow AI.” Alignment problems emerge not from superintelligent agents, but from mis-specified prompts, biased training data, and unverified outputs that seep into everyday decisions. Organizational AI maturity is low; AI is widely used for drafting emails, slide decks, and analyses, but validation and accountability are informal and inconsistent.
2.4.1 Implications for managerial work
Managerial work becomes unevenly augmented: some managers leverage AI effectively, dramatically increasing productivity and quality; others underuse or misuse it, widening performance dispersion.
Documentation, reporting, and “knowledge products” proliferate but may be shallow or unreliable, as AI-written material is insufficiently fact-checked.
Hidden dependencies on external tools grow; the organization underestimates how much decision logic is now embedded in untracked prompts and private workflows, creating operational and knowledge risk.
2.4.2 Strategic prescriptions
Move from shadow use to managed experimentation: establish clear guidelines for acceptable AI uses, required human verification, and data-protection boundaries while encouraging pilots.
Standardize quality controls: require managers to validate AI-generated analyses with baseline checks, multiple models, or sampling, reflecting Christian’s emphasis on cautious reliance and error analysis.
Capture and share best practices: treat prompts, workflows, and AI use cases as organizational knowledge assets; create internal libraries and communities of practice.
Use AI to audit AI: deploy meta-tools that scan for AI-generated content, bias, and inconsistencies, helping managers assess which outputs demand closer scrutiny.
In summary:
Scenario: Shadow automation and managerial drift (tool-like AI, low alignment & weak governance).
Description: AI remains officially a tool but is chaotically deployed with no coherent standards or governance (shadow AI); alignment problems arise from mis-specified prompts, biased training data, and unverified outputs; organizational AI maturity is low, with AI used mainly for low-end tasks.
Implications: Managerial work becomes unevenly augmented; unreliable, unaudited “knowledge products” proliferate; hidden dependencies on external tools create high operational and knowledge risk.
Strategies: Move from shadow use to managed experimentation; standardize quality controls; capture and share best practices; use AI to audit AI (meta-tools).
3.0 Conclusion (not the last word!)
As we can see, across all four quadrants, managerial advantage comes from understanding AI’s capabilities and limits, engaging with alignment and governance, and deliberately designing roles where human judgment, values, and relationships remain central.
As we stand at the cusp of the AI revolution, I am reminded of my time in graduate school (mid-1990s), when the Internet was exploding and information was becoming ubiquitously available across the globe. Many opinions were floated about how and when “Google” (and other) search engines would erode reading habits. We have seen how that played out: Internet search has enabled and empowered a great deal of research and managerial judgment.
There are similar concerns about long-term cognitive impacts as rapid AI adoption leads to the ossification of certain habits (in managers) and processes (in organizations). What these routines will bring to the table remains to be seen, and these are certainly not the last words written on this!
This is a follow-up to my post last week on Moravec’s Paradox in AI. In that post, I enumerated five major challenges for AI and robotics: 1) training machines to interpret language, 2) perfecting machine-to-man communication, 3) designing social robots, 4) developing multi-functional robots, and 5) helping robots make judgments. All of that focused on what programmers need to do. In this short post, I draw implications for what organisations and leaders need to do to integrate AI (and, for that matter, any hype-tech) into their work and lives.
Most of the hype around technologies is built around a series of gulfs: gulfs of motivation, cognition, and communication. These are surely related to each other. Let me explain them in reverse order.
Three gulfs
The first gulf is the communication gap between developers and managers. Developers know how to talk to machines: they codify processes and provide step-by-step instructions that help machines perform their tasks. Managers, especially those facing consumers, speak in stories and anecdotes, whereas developers need precise instructions that can be translated into pseudo-code. A customer journey to be digitalised, for instance, needs to be broken down into a series of such steps.

Let me give you an example from a firm I worked with. A multi-brand retail outlet wanted to digitalise customer walk-ins and help guide customers to the right floor or aisle. Sounds simple, right? The brief to the developers was to build a robot that would “replace the greeter”. The development team went about building a voice-activated humanoid robot that would greet a customer as she walked in, ask her a set of standard questions (like “what are you looking for today?”) and respond with answers (like “we have a lot of new arrivals on the third floor”). The tests went very well, except that the developers had not understood that only a small proportion of customers arrive alone! When customers came as couples, families, or groups, the robot treated them as separate customers and tried responding to each of them individually. What made things worse was that the robot could not distinguish children’s voices from women’s voices and greeted even young boys as girls or women. The expensive project remains a toy in a corner of the reception, witnessing the resurgence of plastic-smiling greeters. The entire problem could have been solved by a set of interactive tablets.

Because the managers asked the developers to “replace the greeter”, the developers went about creating an over-engineered yet inadequate humanoid. The reverse can also happen, where developers focus only on the minimum features, rendering the entire exercise useless. To bridge this gulf, we either train managers to write pseudo-code or get developers to visualise customer journeys.
The second gulf is that of algorithmic versus creative thinking. Business development executives and strategy officers think in terms of stretch goals and focus on what is expected in the near and distant future. Developers, on the other hand, are forced to work with technologies in the realm of current possibilities. They refer to all this fuzzy language, aspirational goal-setting, and corporatese as “gas” (to borrow a phrase from Indian business school students). Science and technology education at the primary and secondary levels is largely about learning algorithmic thinking. However, as managers gain experience and learn about the context, they are trained to think beyond algorithms in the name of creativity and innovation. While both creative and algorithmic thinking are important, the difference accentuates the communication gap discussed above.
Algorithmic thinking is a way of getting to a solution through the clear definition of the steps needed; nothing happens by magic. Rather than coming up with a single answer to a problem, like 42, pupils develop algorithms: instructions or rules that, if followed precisely (whether by a person or a computer), lead to answers to both the original and similar problems[1].
Creative thinking means looking at something in a new way. It is the very definition of “thinking outside the box.” Often, creativity in this sense involves what is called lateral thinking, or the ability to perceive patterns that are not obvious. Creative people have the ability to devise new ways to carry out tasks, solve problems, and meet challenges[2].
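To make the contrast concrete, here is a tiny illustration (in Python, purely for exposition) of the algorithmic side: a precise recipe that answers not just one problem but every similar one.

```python
# The "42" in the text is a single answer; an algorithm is a recipe that
# answers a whole family of similar problems.
def sum_up_to(n: int) -> int:
    """Add the whole numbers from 1 to n, step by step."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# The same precise steps answer the original problem and every similar one
print(sum_up_to(6))    # 21
print(sum_up_to(100))  # 5050
```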
The third gulf is that of reinforcement. Human resource professionals and machine learning experts use the same word with much the same intent: positive reinforcement rewards desired behaviour, while penalties (loosely called negative reinforcement) discourage undesirable behaviour. Such reinforcement is an integral part of human learning from childhood, whereas machines have to be specially programmed for it. Managers are used to employing reinforcement in various forms to get work done; artificially intelligent systems, however, do not respond to such reinforcement (yet). Remember the greeter robot we discussed earlier? Imagine what the robot does when people get surprised, shocked, or even startled as it starts speaking. Can we programme the robot to recognise such reactions and respond appropriately? Most developers would use algorithmic thinking to programme the robot to understand and respond to rational actions from people, not emotions, sarcasm, and figures of speech. Natural language processing (NLP) can take us some distance, but helping the machine learn continuously and cumulatively requires a lot of work.
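To see what “specially programmed” reinforcement looks like in practice, here is a minimal, hypothetical sketch of a reward-driven update loop for the greeter robot’s response styles; the response names and reward signal are invented for illustration, and real systems use far richer state and learning rules.

```python
# Minimal sketch of machine "reinforcement": the greeter keeps a running value
# for each response style and nudges it up or down based on an explicit
# numeric reward that we, the programmers, must define.
import random

values = {"cheerful_greeting": 0.0, "quiet_prompt": 0.0}
learning_rate = 0.1

def observed_reward(response: str) -> float:
    # Stand-in for sensing the visitor's reaction: +1 if they engage, -1 if startled
    return random.choice([1.0, -1.0])

for _ in range(100):
    response = max(values, key=values.get)      # pick the currently best-valued response
    if random.random() < 0.2:                   # occasionally explore the alternative
        response = random.choice(list(values))
    reward = observed_reward(response)
    values[response] += learning_rate * (reward - values[response])

print(values)  # the machine only "learns" because we defined the reward signal
```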
Those who wonder what happened!
There are three kinds of people in the world: those who make things happen, those who watch things happen, and those who wonder what happened! I am not sure whether this is attributable to a specific person, but when I was learning change management as an eager management student, I heard my professor repeat it in every session. Similarly, there are managers (and organizations) who wonder what happened when their AI projects do not yield the required results.
Unless these three gulfs are bridged, organizations cannot reap adequate returns on their AI investments. They need to build cultures and processes that bridge these gulfs. It is imperative that leaders invest in understanding the potential and limitations of AI, while developers learn to appreciate business realities. I am not sure how this will happen, or when these gulfs will be bridged, if at all.