Professor of Strategy at the Indian Institute of Management Bangalore. All views are personal: the views and opinions expressed here are the author's, not those of the Indian Institute of Management Bangalore, and are not intended to endorse, harm, malign, or defame any individual, group, or organisation.
I was invited to be part of a panel on AI during the launch of the Safe Human Future community last week in Delhi.
In conversations during the event, I met with some interesting folks and had long conversations about data quality. When talking about privacy-preserving data use, three terms often get conflated (at least amongst the novices): synthetic data, anonymised data, and differentially private data. While all three aim to reduce privacy risks, they differ fundamentally in construction, guarantees, and ideal use-cases.
Anonymised data is real data with direct identifiers removed or masked. Its weakness is structural: if the underlying patterns of the real dataset remain embedded, attackers can often re-identify individuals through linkage attacks or inference techniques. A growing body of research shows that even datasets without names or addresses can be deanonymised when combined with auxiliary information, because the data points are still tied to real individuals.
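To make the linkage risk concrete, here is a minimal sketch (the data and column names are entirely hypothetical) of how an "anonymised" table can be re-identified by joining it with public auxiliary data on quasi-identifiers:

```python
import pandas as pd

# "Anonymised" health records: names removed, quasi-identifiers retained.
anonymised = pd.DataFrame({
    "pincode":    ["560076", "560102"],
    "birth_date": ["1985-03-12", "1990-07-01"],
    "sex":        ["F", "M"],
    "diagnosis":  ["diabetes", "hypertension"],
})

# Public auxiliary data (say, a voter roll) sharing the same quasi-identifiers.
voter_roll = pd.DataFrame({
    "name":       ["Asha Rao", "Vikram Iyer"],
    "pincode":    ["560076", "560102"],
    "birth_date": ["1985-03-12", "1990-07-01"],
    "sex":        ["F", "M"],
})

# A simple join on (pincode, birth_date, sex) re-attaches names to diagnoses.
reidentified = anonymised.merge(voter_roll, on=["pincode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```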
Differential privacy, by contrast, injects calibrated noise into queries or datasets so that the presence or absence of any individual does not materially change analytical outputs. This provides a mathematically provable privacy guarantee. But the trade-off is accuracy: heavy noise addition can distort minority-class patterns or small-sample statistical relationships.
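As a minimal sketch of the idea (a toy counting query, not a production mechanism), the classic Laplace mechanism adds noise scaled to the query's sensitivity divided by the privacy budget ε:

```python
import numpy as np

rng = np.random.default_rng()

def dp_count(records, epsilon):
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon suffices."""
    return len(records) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

patients = range(1000)                  # hypothetical records
print(dp_count(patients, epsilon=0.1))  # small epsilon: noisier, more private
print(dp_count(patients, epsilon=5.0))  # large epsilon: closer to 1000
```

The accuracy trade-off shows up directly: shrinking ε strengthens the guarantee but widens the noise, which is exactly what distorts small-sample statistics.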
Synthetic data takes a different route altogether. Instead of modifying real data, it generates completely artificial records that mimic the statistical properties of the source dataset. No row corresponds to any real person. This disconnection from real individuals eliminates a large class of re-identification risks and makes the data highly shareable. It does, however, require careful quality evaluation—poorly generated synthetic data can hallucinate unrealistic correlations or miss critical rare events.
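A toy illustration of the generation step, under the strong (and usually unrealistic) assumption that the source data is multivariate normal: fit the means and covariance of the real columns, then sample entirely new rows from the fitted distribution. Production-grade generators use far richer models (copulas, GANs, diffusion models), but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" data: age and income for 1,000 individuals.
real = rng.multivariate_normal(
    mean=[40, 50_000],
    cov=[[100, 20_000], [20_000, 4e8]],
    size=1_000,
)

# Fit the statistical properties of the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample fresh, artificial rows: no synthetic row maps to a real person.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

print(real.mean(axis=0), synthetic.mean(axis=0))  # similar marginals
```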
Why Firms Use Synthetic Data
Firms increasingly rely on synthetic datasets for scenarios where real-world data is sensitive, incomplete, biased, or simply unavailable. Typical use-cases include:
Product development and testing: Fintech and healthtech companies often need realistic datasets to test algorithms safely without exposing personal information.
Machine learning model training: Synthetic data helps overcome class imbalance, enrich training sets, or simulate rare but important events (e.g., fraud patterns).
Data sharing across organisational boundaries: Cross-functional teams, vendors, or academic collaborators can work with synthetic datasets without entering into heavy data-processing agreements.
Accelerating regulatory compliance: In sectors such as banking, telecom, and healthcare, where privacy regulations are tight, synthetic datasets reduce bottlenecks in experimentation, sandboxing, and model audits.
From a governance standpoint, synthetic data often plays a complementary role: firms still use real data for production-grade analytics but use synthetic data for exploration, prototyping, and secure experimentation.
Alignment with the Indian DPDP Act and Rules
The Digital Personal Data Protection (DPDP) Act, 2023 emphasises lawful processing, purpose limitation, data minimisation, and protection of personal data. Importantly:
The Act’s obligations apply only to digital personal data of identifiable individuals.
High-quality synthetic data, by definition, contains no personal data, and therefore does not fall within the compliance net.
This creates a strategic opportunity for firms: synthetic datasets allow innovation outside the regulatory burden while remaining aligned with the Act's intent of protecting individuals' data rights. Many enterprises are beginning to use synthetic data as a "privacy-by-design accelerator," reducing the operational costs of compliance while enabling safe analytics.
Synthetic Data and Artificial Pearls: A Useful Analogy
The distinction between synthetic and real data is similar to the comparison between artificial pearls and natural pearls. Natural pearls, harvested from oceans, are biologically authentic but scarce, costly, and highly variable in quality. Artificial pearls, especially high-grade cultured pearls, are manufactured with precise control over structure, size, lustre, and durability.
In many cases, artificial pearls are actually superior to natural ones:
They have more consistent structure.
They are available in specific sizes and configurations designers need.
Their strength and finish can be engineered for durability.
They reduce dependence on environmentally intensive harvesting.
Synthetic data plays a similar role. Just as the best artificial pearls capture and improve upon the aesthetics of natural pearls without relying on oysters, synthetic datasets capture the statistical essence of real data while offering higher usability, lower risk, and greater design freedom.
In contexts where quality matters more than provenance, such as stress-testing jewellery designs or building machine learning models, the engineered version can outperform the natural one.
Over the past few weeks, I have been exploring the applications of artificial intelligence across multiple domains and sectors. I have also had the opportunity to deliver a few sessions on digital transformation and AI, as well as DPI thinking in the context of AI. In preparation for these sessions, I re-read three books.
While I am not an expert in predicting the future (like most of you), I thought I would draw up a few scenarios on how AI might play out. As a typical scenario planner, I first identified two orthogonal axes of uncertainty, defined the ends of each axis, and then described the scenarios.
1.0 Axes of uncertainty
The two primary axes of uncertainty are “AI autonomy in organizations” and “alignment and governance quality”. I map the AI autonomy on the horizontal axis and the alignment and governance on the vertical axis.
At the west end of the horizontal axis is "Tool-like AI"; at the east end is "Agentic AI". The tool-like AI context is characterized by tightly controlled systems, where humans retain decision rights and override power, and AI is used mainly as an assistant and advanced analytical engine. In the agentic AI context, AI systems act as semi-autonomous agents managing workflows, negotiations, and resource allocations with limited human intervention.
At the south end of the vertical axis is "low alignment and weak governance"; at the north end is "high alignment and strong governance". The low alignment and weak governance context is characterized by biased models, opaque decisions, poorly specified objectives, and lax oversight in markets with intense competitive pressures. At the other end (north), high alignment and strong governance ensures robust value-alignment work, transparency, safety protocols, and institutional safeguards against control failures.
2.0 The four scenarios described
Based on these descriptions, I propose four scenarios. In the north-east are the "meta-managers in aligned autonomous systems"; in the south-east, the "algorithmic overlords in fragile firms"; in the north-west, the "craft leaders in co-intelligent firms"; and in the south-west, "shadow automation and managerial drift".
2.1 Meta-managers in aligned autonomous systems
(Agentic AI, High alignment & strong governance)
AI systems act as relatively autonomous organizational agents, running simulations, reallocating resources, and orchestrating routine decisions, but they are constrained by carefully designed alignment frameworks and institutional governance. Bostrom's concerns about control and value alignment are addressed via robust oversight, multi-layered safety mechanisms, and institutional norms; agentic systems are powerful but embedded within guardrails. Managers work in Mollick's co-intelligent mode at a higher, more strategic level, curating objectives, interpreting AI-driven scenarios, and shaping socio-technical systems rather than micromanaging operations.
2.1.1 Implications for managerial work
Many mid-level coordination tasks (scheduling, resource allocation, basic performance tracking) are delegated to AI agents, compressing hierarchies and reducing the need for traditional middle management.
Managers function as meta‑managers: defining goals, constraints, and values; adjudicating trade-offs between conflicting AI recommendations; and stewarding culture and human development.
Soft skills (sense-making, ethics, narrative, conflict resolution) become the core differentiator, as technical optimization is largely automated.
2.1.2 Strategic prescriptions
Redesign structures around human-AI teaming: smaller, flatter organizations with AI orchestrating flows and humans focusing on creativity, relationship-building, and governance.
Develop “objective engineering” capabilities: train managers to specify and refine goals, constraints, and reward functions, directly addressing the alignment and normativity challenges Christian highlights.
Institutionalize alignment and safety: embed multi-stakeholder oversight bodies, continuous monitoring, and strong external regulation analogues, borrowing Bostrom’s strategic control ideas for the corporate level.
Reinvest productivity gains into human development: use surplus generated by autonomously optimized operations to fund learning, well-being, and resilience, stabilizing the socio-technical system.
| Scenario | Description | Implications | Strategies |
| --- | --- | --- | --- |
| Meta-managers in aligned autonomous systems (Agentic AI, high alignment & strong governance) | AI systems act as relatively autonomous organizational agents; strong alignment frameworks and institutional governance; powerful agentic systems embedded within guardrails | Mid-level coordination tasks delegated to AI agents; managers function as meta-managers; soft skills become the core differentiator as technical optimization tasks are automated | Redesign structures around human-AI teaming; develop objective-engineering capabilities; institutionalize alignment and safety; reinvest productivity gains into human development |
2.2 Algorithmic overlords in fragile firms
(Agentic AI, Low alignment & weak governance)
Highly autonomous AI systems manage pricing, hiring, supply chains, and even strategic portfolio choices in the name of speed and competitiveness, but with poorly aligned objectives and weak oversight. In Bostrom’s terms, “capability control” lags “capability growth”: agentic systems accumulate de facto power over organizational behaviour while their reward functions remain crude proxies for profit or efficiency. Christian’s alignment concerns show up as opaque prediction systems that optimize metrics while embedding bias, gaming constraints, and exploiting loopholes in ways human managers struggle to detect.
2.2.1 Implications for managerial work
Managers risk becoming rubber stamps for AI recommendations, signing off on plans they do not fully understand but feel pressured to approve due to performance expectations.
Managerial legitimacy suffers when employees perceive that “the system” (algorithms) is the real boss; blame shifts upward to AI vendors or abstract models, eroding accountability.
Ethical, legal, and reputational crises become frequent as misaligned agentic systems pursue local objectives (e.g., discriminatory hiring, aggressive mispricing, manipulative personalization) without adequate human correction.
2.2.2 Strategic prescriptions
Reassert human veto power: institute policies requiring human review for critical decisions; create channels for workers to challenge AI-driven directives with protection from retaliation.
Demand transparency and interpretability: require model documentation, explainability tools, and regular bias and safety audits; push vendors toward alignment-by-design contracts.
Slow down unsafe autonomy: adopt Bostrom-style “stunting” and “tripwires” at the firm level, limiting AI control over tightly coupled systems and triggering shutdown or rollback when harmful patterns appear.
Elevate ethics and compliance: equip managers with escalation protocols and cross-functional ethics boards to rapidly respond when AI-driven actions conflict with organizational values or external norms.
| Scenario | Description | Implications | Strategies |
| --- | --- | --- | --- |
| Algorithmic overlords in fragile firms (Agentic AI, low alignment & weak governance) | Highly autonomous AI systems lead operations in firms with poorly aligned objectives and weak oversight; capability control lags capability growth: agentic systems accumulate de facto power; opaque systems optimize metrics while embedding biases that humans fail to detect | Managers risk becoming rubber stamps for AI recommendations; algorithmic dominance erodes managerial legitimacy; misaligned agentic systems pursue local objectives, leading to ethical/legal/reputational crises | Reassert human veto power; demand transparency and interpretability of models; slow down unsafe autonomy (adopt stunting and tripwires); elevate ethics and compliance |
2.3 Craft leaders in co-intelligent firms
(Tool-like AI, High alignment & strong governance)
Managers operate with powerful but well-governed AI copilots embedded in every workflow: forecasting, scenario planning, people analytics, and experimentation design. AI remains clearly subordinate to human decision-makers, with explainability, audit trails, and human-in-the-loop policies standard practice. Following Mollick’s co‑intelligence framing, leaders become orchestrators of “what I do, what we do with AI, what AI does,” deliberately choosing when to collaborate and when to retain manual control. AI is treated like an expert colleague whose recommendations must be interrogated, stress-tested, and contextualized, not blindly accepted.
2.3.1 Implications for managerial work
Core managerial value shifts from information aggregation to judgment: setting direction, weighing trade-offs, and integrating AI-generated options with tacit knowledge and stakeholder values.
Routine analytical and reporting tasks largely vanish from managers’ plates, freeing capacity for coaching, cross-functional alignment, and narrative-building around choices.
Managers must be adept in managing alignment issues and mitigating bias, able to spot mis-specified objectives and contest AI outputs when they conflict with ethical or strategic intent.
2.3.2 Strategic prescriptions
Invest in AI literacy and critical thinking: train managers to prompt, probe, and challenge AI systems, including basic understanding of data, bias, and alignment pathologies described by Christian.
Codify human decision prerogatives: clarify which decisions AI may recommend on, which it may pre-authorize under thresholds, and which remain strictly human, especially in people-related and high-stakes domains.
Build governance and oversight: establish model risk committees, escalation paths, and "tripwires" for anomalous behaviour (organizational analogues to the control and capability-constraining methods that Bostrom advocates at societal scale).
Re-design roles around co‑intelligence: job descriptions for managers emphasize storytelling, stakeholder engagement, ethics, and system design over reporting and basic analysis.
| Scenario | Description | Implications | Strategies |
| --- | --- | --- | --- |
| Craft leaders in co-intelligent firms (Tool-like AI, high alignment & strong governance) | Managers plus well-governed AI copilots embedded in every workflow; AI remains subordinate to human decision-makers; managers choose when to collaborate and when to control; AI is an expert colleague whose recommendations are not blindly accepted | Managerial value shifts from information aggregation to judgment; routine analytical and reporting tasks disappear; managers need to be adept at alignment and mitigating biases | Invest in AI literacy and critical thinking; codify human decision prerogatives; build governance and oversight; redesign roles around co-intelligence |
2.4 Shadow automation and managerial drift
(Tool-like AI, Low alignment & weak governance)
AI remains officially a “tool,” but is deployed chaotically: individual managers and teams adopt various assistants and analytics tools without coherent standards or governance, a classic form of “shadow AI.” Alignment problems emerge not from superintelligent agents, but from mis-specified prompts, biased training data, and unverified outputs that seep into everyday decisions. Organizational AI maturity is low; AI is widely used for drafting emails, slide decks, and analyses, but validation and accountability are informal and inconsistent.
2.4.1 Implications for managerial work
Managerial work becomes unevenly augmented: some managers leverage AI effectively, dramatically increasing productivity and quality; others underuse or misuse it, widening performance dispersion.
Documentation, reporting, and “knowledge products” proliferate but may be shallow or unreliable, as AI-written material is insufficiently fact-checked.
Hidden dependencies on external tools grow; the organization underestimates how much decision logic is now embedded in untracked prompts and private workflows, creating operational and knowledge risk.
2.4.2 Strategic prescriptions
Move from shadow use to managed experimentation: establish clear guidelines for acceptable AI uses, required human verification, and data-protection boundaries while encouraging pilots.
Standardize quality controls: require managers to validate AI-generated analyses with baseline checks, multiple models, or sampling, reflecting Christian’s emphasis on cautious reliance and error analysis.
Capture and share best practices: treat prompts, workflows, and AI use cases as organizational knowledge assets; create internal libraries and communities of practice.
Use AI to audit AI: deploy meta-tools that scan for AI-generated content, bias, and inconsistencies, helping managers assess which outputs demand closer scrutiny.
| Scenario | Description | Implications | Strategies |
| --- | --- | --- | --- |
| Shadow automation and managerial drift (Tool-like AI, low alignment & weak governance) | AI remains officially a tool but is chaotically deployed with no coherent standards or governance (shadow AI); alignment problems arise from mis-specified prompts, biased training data, and unverified outputs; organizational AI maturity is low (AI used for low-end tasks) | Managerial work becomes unevenly augmented; proliferation of unreliable, unaudited "knowledge products"; hidden dependencies on external tools (high operational and knowledge risks) | Move from shadow use to managed experimentation; standardize quality controls; capture and share best practices; use AI to audit AI (meta-tools) |
3.0 Conclusion (not the last word!)
As we can see, across all four quadrants, managerial advantage comes from understanding AI’s capabilities and limits, engaging with alignment and governance, and deliberately designing roles where human judgment, values, and relationships remain central.
As we stand at the cusp of the AI revolution, I am reminded of my time in graduate school (mid-1990s), when the Internet was exploding and information was becoming ubiquitously available across the globe. A lot of opinions were floated about how and when Google (and other) search engines would limit reading habits. We have seen how that played out: Internet search has enabled and empowered a great deal of research and managerial judgment.
There are similar concerns about long-term cognitive impacts, as rapid AI adoption leads to the ossification of certain habits (in managers) and processes (in organizations). What these routines will bring to the table remains to be seen, and these are certainly not the last words written on this!
Writing after a really long time. Multiple things have been happening (more on that later).
In the meantime, I have been hearing a lot of conversations around technologies (again) after the explosion of democratic access to generative AI (heard of OpenAI and ChatGPT?). Do you realise that the product is just about seven months old (launched in November 2022)? And I am hearing a lot of adjectives used to describe technologies. This short post puts down my thoughts on a few of these adjectives.
| | Entrenched (mass-manufactured, marketed, and diffused) | Emerging (late-stage research & early-stage development; experimental production) |
| --- | --- | --- |
| Stand-alone (specific products and services; one industrial sector; one application domain) | Quartz clock technology; escalator operations | Synthetic biology and gene editing; sodium-ion batteries |
| Enabling (range of products, sectors, and application domains) | Integrated circuits; superconductors | Augmented reality; artificial intelligence |
Technology classification
Emerging (vs. entrenched)
The first axis of technology classification is about entrenched technologies (at the other end of the axis from emerging technologies). In the technology development lifecycle, I see five stages – research, (product) development, production (product engineering and mass manufacturing), marketing (to the masses), and diffusion. The technologies behind quartz clocks, escalators, and batteries are examples of entrenched technologies. Did you notice that I am giving you instances of mass-market products rather than core technologies? These technologies have matured to the point where describing the products is enough; we understand the technologies behind them!
On the other end of the axis are emerging technologies – those in the late stages of research and/or the early stages of product development. Some use cases (products and services) are available, albeit at laboratory scale, far from mass manufacturing or widespread adoption for business/personal benefit. Examples of emerging technologies include gene editing, artificial intelligence, and augmented reality. Even though some of these are chronologically decades old, it is the mass-adoption criterion that determines their emergent nature.
Enabling (vs. stand-alone)
The second axis of technology classification is about stand-alone technologies (at the other end are enabling technologies). Some technologies, though seemingly fungible, are specific to certain products and services, an industrial sector, or an application domain. For instance, while pulleys, motors, and rails are basic tools, combining them to design functional escalators and elevators is specific to vertical (and in some rare cases, horizontal) mobility. The same is true of the quartz technologies used to make accurate clocks, sodium-ion batteries for energy storage, and other such technologies. The key feature of these technologies is their specificity to a product/service family, sector, or domain.
On the other end of the axis are enabling technologies – those that enable application in a variety of products/ services, industrial sectors, or domains. Examples of enabling technologies include the basic internet, integrated circuits, augmented reality, and artificial intelligence/ machine learning. One can use these technologies for a range of applications and domains. One could see the application of augmented reality in gaming and simulation, product design and prototyping, education and training, as well as service design and innovation.
Summary
In summary, we need to analyse enabling technologies and their potential impacts as very different from entrenched technologies. When we add the concept of emerging technologies, the potential for innovation and impact is immense.
A colleague and I were organizing an online hackathon around the current pandemic, COVID-19. We were seeking innovative ideas in both the healthcare and policy spaces. We had a good number of responses, but I was struck by how little we are willing to "disrupt" the healthcare industry. The Indian healthcare industry has long escaped disruption, even as many other industries have been upended.
Disruptive innovation: A primer
Before we proceed further, let us quickly understand what disruptive innovation means. The concept was introduced by the late HBS Professor Clayton Christensen. Disruptive innovation (DI) is a process by which a smaller company with fewer resources can successfully challenge established incumbents in an industry. As the incumbents focus on improving their products and services for their primary (most profitable) customers, they exceed the needs of some segments, and ignore the needs of some other segments. These overlooked segments are the targets for the new entrants, who deliver a specific functionality valued by these customers, typically at a lower price. Incumbents chasing their highly profitable customer segments, tend to not respond to these new entrants. The new entrants gain significant industry know-how, and move up-market, delivering those functionalities that the incumbents’ primary customers value, without losing the advantages that drove their early success with the hitherto overlooked segments. When customers in the most profitable segments (primary customers) begin switching from incumbents to new entrants, they begin providing new entrants with economies of scale. Coupled with learning and economies of scale, the new entrants “disrupt” the industry.
There are various examples of disruptive innovation. For instance, film distributors (cinema halls and multiplexes) were serving connoisseurs of movies – those who valued the experience of the movie hall. The network of movie halls and the audio-video technology were the primary sources of competitive advantage in that world. Netflix entered this industry targeting a segment of customers who wanted to watch movies but could not afford the travel to the movie hall or the uninterrupted time and attention it demanded. They would trade off the experience against the quality of content, and therefore were willing to watch movies at home on their own devices. Movie watching became a personal activity rather than a community experience. Netflix leveraged its library of movies as a source of competitive advantage and captured this market with an innovative low-price strategy – watch as much as you can for a monthly subscription fee (as against the unit pricing of every movie that was the industry standard then). Armed with learning about consumer preferences (being digital, Netflix had more micro-level data on consumer preferences than the multiplexes in shopping malls), it moved up-market. It leveraged the convergence between entertainment and computing, as TVs became smart and computer screens became bigger and bigger. The incumbents continued to ignore Netflix, reasoning that it would take time for the connoisseurs of movies to shift. That allowed Netflix to compete for original content and piggy-back on the convergence in the home entertainment space.
Typically, disrupters have different business models, given that they target different consumer segments, provide differentiated value, and possibly have a different pricing scheme. A lot of these disruptive innovators adopt a platform business model intermediating between different user groups (like Airbnb or Redbus), servitization (like ERP on the cloud), or different pricing models (pre-paid pricing of mobile telecom services in emerging markets).
Innovation in the healthcare industry
I aver that the Indian healthcare industry has not witnessed disruptive innovation for the following three reasons. One, even though primary healthcare is considered a public good, a lot of Indian consumers are willing to pay for high-quality tertiary and quaternary healthcare (either because they can afford it or because they have access to state/private insurance). That makes for low price sensitivity across the industry. Coupled with the information asymmetry between caregivers and patients, patients are risk-averse as well. Two, given the high investments required to set up tertiary and quaternary care, the industry has become highly corporatized and consolidated, and a few large corporations dominate it. The economics of the large corporate healthcare provider requires a tight leash on the metrics that matter for profitability: increased use of automation and robotics in highly labour-intensive routines (and of manual labour where unskilled and semi-skilled manpower is easily available), reducing the patient's average length of stay, and optimizing the average revenue per occupied bed. Three, organized healthcare providers have been quick to encapsulate every attempt at democratizing healthcare. For instance, when glucometers and pregnancy-testing kits became consumer devices, clinics and physicians built an ecosystem around these devices so that the entire therapy would not become owned by the patient. When you went to your endocrinologist with your blood-glucose charts, she would insist that home devices are error-prone and make you test again at the clinic! Not so much for the additional cost of testing again (which could be minuscule), but to reinforce the perception that healthcare is the domain of certified experts.
Data, ahoy!
As healthcare and data analytics come together, the industry is on the verge of disruption. Multiple wearable and consumer devices capture health data; the medical equipment of the early 2020s is as much computing device as instrument, generating gigabytes of data per observation; and the increased adoption of remote sensors for public health (like thermal screening devices) will generate terabytes of data. Such a deluge of data is likely to overwhelm legacy healthcare providers, who have hitherto relied on the extensive experience of the physician, for whom data was just a support.
There is an urgent need to allow for a variety of healthcare providers to operate in their own niches. For instance, given the developments in dental and ophthalmic surgeries, there should be no need to build infrastructure beyond ambulatory facilities. Increasingly, diagnosis and treatment should move to the home.
Disruptive innovation in times of COVID-19 pandemic
The unique nature of the COVID-19 pandemic means that healthcare must be provided at scale and speed. Given that the world is yet to discover a cure or vaccine, let alone a standardized treatment protocol, governments and healthcare providers need to move fast and scale up testing and care. Amongst the triad of quality, cost, and scale, the world would prefer low cost and high scale at speed, rather than wait for the best quality of care.
That is a world ripe for disruptive innovation. The under-served segments are baying for an affordable care programme that is good enough. Will the governments of the day make this trade-off and ensure that "acceptable quality" testing and care is provided, at affordable costs, to the large sections of the population that are infected or likely to be infected?
In my view, there are three outcomes of a successful digital transformation effort – improvement in efficiency (driven from speed and agility), enhanced experience (both at the customer and employee ends), and differentiation from competitors (through data/ insights-driven customisation). Such interactions need to be delivered omni-channel and ubiquitously (anywhere, anytime, and any device).
Agility: Digitalisation requires specific processes to be reimagined, which eliminates redundancies, reduces wasteful activities, and cuts overhead costs. All of these contribute to increased efficiency and faster turnaround times.
Experience: As I have been arguing, good digitalisation should make lives simpler for customers, employees, and all other partners as well. As different stakeholder groups (customers, employees, and partners) engage with the firm digitally, there is significant reduction in variation of service quality, leading to consistent experience.
Insights: As digitalisation allows firms to capture data seamlessly, it is imperative to not just store data, but be able to generate meaningful insights from the same. And use those insights to develop customised/ innovative offerings to their stakeholder groups (customers, employees, and partners).
Omni-channel: The digital experience should be provided to their stakeholders across all the channels that they interact with. It is not just sufficient to digitalise certain processes, while keeping others in legacy manual systems. Imagine an organisation that generates electronic bills for its customers but requires its employees to submit their own bills in hardcopy for reimbursements!
Ubiquitous: The digital experience should be available to everyone, anytime, anywhere, and on any device. The entire purpose of digitalisation would be lost if it were not ubiquitous. Imagine an online store that only opened between 0800-2000 hours Monday through Friday!
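Putting the five elements together, a loose back-of-the-envelope expression (my notation; read each denominator term as the shortfall on that dimension, since hygiene factors destroy value only by their absence):

\[
\text{Digital value} \;\propto\; \frac{\text{Agility}\times\text{Experience}\times\text{Insights}}{\text{Omni-channel shortfall}\times\text{Ubiquity shortfall}}
\]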
As can be seen, omni-channel and ubiquitous are hygiene factors (they do not create additional value by their presence, but can destroy value by their absence), and therefore sit in the denominator.
As the world moves to more and more online work and learning, a colleague of mine triggered some thoughts in me – can everyone learn the same way online? Do our standard theories of learning work in the online world?
Of course, there are three kinds of teachers: those who dread online teaching (they believe they will have no control over the students' behaviours); those who are cautious (they believe we can do some things online, but not others); and those who are willing to experiment and adapt (they are confident enough of their content to believe the medium does not matter). That discussion is for another day. Right now, let's focus on the learners and their learning styles.
I believe there are eight styles of learning in the online world. Of course, I do not claim scientific evidence that these are mutually exclusive and collectively exhaustive – possibly a research study is in order. They are anecdotal, based on my own experiences of teaching face-to-face, purely online, and hybrid (some students face-to-face and others online). There is a range of other such classifications as well, from the classic Kolb's learning styles inventory to more detailed studies.
The list, first
Visual (spatial)
Aural (auditory-musical)
Verbal (linguistic)
Physical (kinaesthetic)
Solitary (intra-personal)
Social (interpersonal)
Logical (mathematical)
Emotional (action-response)
These styles are not mutually exclusive, and learners prefer combinations of these. These are just pure types. The combinations define one’s learning style.
The elaboration
Visual/ spatial: learning through pictures, images, maps, and animations; sense of colour, direction, and sequences; flow-diagrams and colour-coded links preferred.
Aural/ auditory-musical: learning through hearing sounds and music; rhyme and patterns define memory and moods; learning through repeated hearing and vocal repetition is preferred.
Verbal/ linguistic: power of the (most often) written word; specific words and phrases hold attention and provoke meaning; negotiations, debates, and orations are preferred.
Physical/ kinaesthetic: sensing and feeling through touch and feel of objects; being in the right place can create thoughts and evoke memory; role plays and active experimentation are preferred.
Solitary/ intra-personal: being alone provides for reflection and reliving the patterns of the past; self-awareness through meditative techniques; independent self-study and reflective writing (diaries and journals) preferred.
Social/ interpersonal: learning happens in groups (rather than alone) through a process of sharing key assertions and seeking feedback on the same from others; need for conformity and assurance as bases for learning; group discussions and work groups preferred.
Logical/ mathematical: building on the power of logic, reasoning and cause-effect relationships; developing and testing propositions and hypotheses; build a pattern/ storyline through logical workflows of arguments/ relationships; focus on the individual parts of a system; lists and specific action plans are preferred.
Emotional/ action orientation: building on the power of emotions, arising out of loyalty, commitment, and a larger sense of purpose; being able to align a set of actions to a compelling vision of the future, following directions of a leader; focus on the gestalt and not on the specifics; energy and large-scale transformations are preferred.
The four axes
Let us look for examples of where each of these styles would work best. Visual would work best when the inter-relationships are complex and can be represented through visual cues (or simulated cues), whereas physical would work best when the relationships cannot be represented but need to be experienced as a whole. How would you like to take a virtual tour of the Pyramids of Giza?
The auditory style of learning works best when the brain remembers patterns and rhyme precedes reason, whereas the verbal style is most suited when reason is preferred over everything else. In other words, when specific words and phrases (like catchy acronyms and slogans) capture the imagination of the learner, verbal is best suited. Imagine trying to learn Vedic hymns purely through printed textbooks!
Solitary learning works best when one can reflect effectively by shutting out external cues; whereas social learning depends on feedback and reinforcement from others for learning to take shape. Imagine learning public speaking in social isolation, or seeking social confirmation and feedback in the process of poetry writing!
The logical learning style works best when the relationships can be detailed and represented as a series of cause-effect links. However, when such relationships cannot be established, we learn through emotions. Imagine the call to action in the play Julius Caesar – "Friends, Romans, countrymen!" I would think the crowd responded more to emotional appeals than to reason!
Architecting the online class
As learners and teachers in the online world, we need to be cognisant of our preferred styles across these four axes (continuums). For instance, a class on machine learning would tend to be highly visual, verbal, solitary, and logical; whereas a music class is likely to be more physical, auditory, interpersonal, and emotional.
The learning context has to be chosen to suit the styles of both the content and the learners, and the technology has to suit the same. Imagine, for instance, a group of 400 learners tuning into a class on brand management through an online medium like Zoom. The instructor has little choice other than delivering a lecture, with text chat from the class as real-time feedback and thence the basis for interactivity. On the other hand, if the class size were smaller, say around 40, the instructor could use case analysis as well. As a case teacher, I have managed to interact (two-way) with as many as 30 students (from a group of 44 active participants) in a single 90-minute class. I taught a case-based session on digital transformation imperatives online to a class of 50-odd students, using a combination of visual and interpersonal styles without compromising on logical arguments and pre-defined frameworks. I used two separate devices – an iPad with an Apple Pencil as a substitute for my whiteboard, and my desktop screen-share as a projector substitute. I was able to cold-call as well! That way, my class was visual, logical, verbal, and interpersonal.
To the same cohort, I taught another session on implementing a digital transformation project, using another short case-let. This session, in contrast, was more visual (the framework was developed largely on the whiteboard rather than on the slides), less verbal, a lot more emotional and logical, and less interpersonal (more reflective observation about what would work in their own firms).
What works best is also driven by the learning goals. Of course, these learning styles apply equally to synchronous (live) classes, asynchronous sessions (MOOCs), and blended formats.
In summary, the architecture of an online session should include elements of the learning styles (driven by the learners' and instructor's strengths, as well as the content being delivered). Apart from the learning styles, the architecture should include three other components – the form of interaction, the immediacy of feedback loops, and the nature of interaction networks. The interaction could be audio-video or text; the feedback loops between the learners and the instructor could be immediate or phased; and peer-to-peer interactions may or may not be required or enabled.
I look forward to your comments, feedback, and experiences.
I wrote about how platforms are glass box organisations, given their network effects. But there are a lot of black boxes out there: in the digital platform space, many firms operate as black boxes. The glass box metaphor drew heavily on the internal culture of the firm, that is, on what the employees of the firm do. This post is about what the employees don't do.
Before I elaborate, let me highlight the work of one of my doctoral students, Sandeep Lakshmipathy. He elucidates four primary value creation opportunities for multi-sided platforms: discovery, matching, transaction, and evaluation. In short, discovery platforms reduce search costs for the sides of the platform (think Craigslist); matching platforms use filters and algorithms to ensure that the preferences of both sides are catered to while delivering a match (think Tinder); transaction platforms reduce the frictions and transaction costs of interacting with the other sides of the platform (think MasterCard); and evaluation platforms enable ratings, reviews, recommendations, and feedback on the service (think Yelp). Sure, some platforms provide multiple kinds of value: Uber provides discovery, transaction, and evaluation, whereas Airbnb provides all four.
Our focus today is just matching platforms, and how they create more and more opaque black boxes.
Matching platforms
In one of our conversations about the difference between discovery and matching platforms, Sandeep quipped about the difference between choosing a tomato sauce on an e-commerce platform and choosing a partner on Tinder. While it is sufficient for me to like the tomato sauce, it is not important for the tomato sauce to like me! In a matching platform like Tinder, by contrast, it is imperative that both sides like each other. Here is where the algorithms kick in. Magic! Matching algorithms.
Matching algorithms are formally taught in graph theory. Graph theorists discuss two types of matching – common-vertex matching and bipartite matching. Common-vertex matching is used to match, for instance, people with similar interests, like students in a class interested in a specific project. Bipartite matching, on the other hand, is used to match two disjoint sets with each other, like buyers and sellers on an e-commerce platform. Graph theorists use various algorithms, including the Hungarian maximum matching algorithm, Edmonds' matching algorithm, and the Hopcroft-Karp algorithm. The specifics of the algorithms notwithstanding, each of them works on the basis of three things – the specific preferences as limiting criteria, the minimum matches to be returned, and the maximum matches possible.
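To make the bipartite case concrete, here is a minimal sketch (assuming the networkx library is available; the users, listings, and edges are entirely hypothetical) that computes a maximum matching with the Hopcroft-Karp algorithm mentioned above:

```python
import networkx as nx

# A tiny two-sided market: users on one side, listings on the other.
# An edge exists only where both sides' preferences are compatible.
G = nx.Graph()
users = ["u1", "u2", "u3"]
listings = ["a", "b", "c"]
G.add_nodes_from(users, bipartite=0)
G.add_nodes_from(listings, bipartite=1)
G.add_edges_from([("u1", "a"), ("u1", "b"), ("u2", "b"), ("u3", "c")])

# Hopcroft-Karp returns a maximum-cardinality matching between the two sides
# (the dict contains each matched pair in both directions).
matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=users)
print(matching)  # e.g., {'u1': 'a', 'u2': 'b', 'u3': 'c', 'a': 'u1', ...}
```

Real platforms, of course, layer preference scores, minimum/maximum match counts, and learned rankings on top of this bare-bones structure.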
Imagine searching for books to read on a peer-to-peer book-reading platform and, based on your preferences, getting exactly one recommendation. Just not enough choice, right? On the other hand, if the recommendations do not change irrespective of the filters you add (the same titles keep appearing) and you get 5,632 recommendations, you feel overwhelmed. On both sides of the matching algorithm there is a problem – of underwhelming and overwhelming choice. It is exactly to solve these problems that matching algorithms collect so much data from their users.
Some algorithms start by providing random matches and, based on expressed user preferences, gradually mature their matching. Others start at one extreme – like the shopping assistant in a mom-n-pop retail store: once she has estimated your broad preferences and budget, she is most likely to start by showing you options very close to your budget. Pretty much like the default sorting (lowest price first) on travel/hotel aggregation platforms. Again, the platforms "learn" from your expressed preferences.
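As a toy sketch of that learning loop (everything here – the feature names, weights, and items – is hypothetical), a platform might keep a running preference vector per user, nudge it toward whatever the user engages with, and re-rank candidates by similarity:

```python
import numpy as np

def update_profile(profile, clicked_item, lr=0.2):
    """Nudge the user's preference vector toward the item just clicked
    (an exponential moving average over hypothetical feature vectors)."""
    return (1 - lr) * profile + lr * clicked_item

def rank(items, profile):
    """Rank candidate items by cosine similarity to the user's profile."""
    scores = {name: np.dot(vec, profile) /
                    (np.linalg.norm(vec) * np.linalg.norm(profile))
              for name, vec in items.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Features: [budget-friendliness, luxury, proximity]
items = {"budget-inn": np.array([0.9, 0.1, 0.5]),
         "boutique":   np.array([0.2, 0.9, 0.4])}

profile = np.array([0.5, 0.5, 0.5])                   # cold start: no signal yet
profile = update_profile(profile, items["boutique"])  # user clicks "boutique"
print(rank(items, profile))                           # "boutique" now ranks first
```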
Expressed preferences to profiling
Now, how these platforms learn, and what they learn about your preferences, is the black box. Based on a few expressed preferences, these matching algorithms may end up profiling the user and start providing more and more matches based on that specific profile. Sometimes these preferences are specific to a context – like me searching for a business-class air ticket (when the client has agreed to pay; I travel economy otherwise!). The algorithm has no way of separating out such preferences without access to a large number and variety of such searches. And such preference-based profiling is easy.
Have you wondered why you get "more relevant" advertisements on a Google search results page than on YouTube? Both are owned by the same firm, which could presumably share matching algorithms across them. However, profiling search users based on their text inputs and matching them with appropriate websites and advertisements is far easier than doing so with video content. Video content may be devoid of explicit indexed tags, may contain sarcasm, or its audio and visual signals may be inadequate or just non-existent. So there are days I have been left wondering why I was shown a particular advertisement at the beginning of a YouTube video. Sometimes I conjure up my own hypotheses about which actions in my history, which cookies on my browser/device (my kids do use my iPad), and which specific search terms triggered it. Black box!
Customization-personalization or privacy: A trade-off
This is no simple trade-off: the one between customization/personalization and privacy. By exposing my preference for a particular sport (like cricket) to YouTube, I get to see a lot of interesting cricket video suggestions right up front on the home page. By subscribing to specific channels, liking certain content, and commenting on others, I help YouTube learn more about me. These actions provide enough inputs for the platform to customize matches and personalize suggestions. However, there are limits beyond which such expressed preferences breach privacy – like when my phone's AI assistant suggests a wake-up time based on my first appointment of the day, or when my wearable device chides me for not walking enough during the day (how many steps can you walk on days like today, when the entire country is under lockdown?).
This is not new, and it can be creepy. Remember the 2012 story about the US retailer Target sending out baby-care mailers to a teenage girl, and the parent discovering it? Neither the local Target store nor the parent had a clue, yet Target's central database was able to predict something as personal as a teenage pregnancy! Target realized it was spooking people and began randomizing its offers, putting ads for lawn mowers next to diaper coupons. But still, they knew!
Dealing with black boxes
I wish I had a set of recommendations for you! One day, possibly. For today, this remains a personal trade-off. As for me, I don't mind sharing my location data with my phone as long as it gives me good navigation services, or letting my food delivery app know where I am so that I get choices from hyper-local restaurants. I would go one step further and allow my photos app location access so that I can organize my photos by date and location. But allowing location access to online newspapers? No.