AI Scenarios

Over the past few weeks, I have been exploring the applications of artificial intelligence across multiple domains and sectors. I have also had the opportunity to deliver a few sessions on digital transformation and AI, as well as DPI thinking in the context of AI. In preparation for these sessions, I re-read three books.

Like most of you, I am no expert at predicting the future, but I thought I would draw up a few scenarios for how AI might play out. As a typical scenario planner, I first identified two orthogonal axes of uncertainty, defined the edges of each axis, and then described the scenarios.

1.0 Axes of uncertainty

The two primary axes of uncertainty are “AI autonomy in organizations” and “alignment and governance quality”. I map AI autonomy on the horizontal axis and alignment and governance quality on the vertical axis.

At the west end of the horizontal axis is “Tool-like AI”; at the east end is “Agentic AI”. The tool-like AI context is characterized by the prevalence of tightly controlled systems, where humans retain decision rights and override power, and AI is used mainly as an assistant and advanced analytical engine. In the agentic AI context, AI systems act as semi-autonomous agents managing workflows, negotiations, and resource allocations with limited human intervention.

At the south end of the vertical axis is “low alignment and weak governance”; at the north end is “strong alignment and strong governance”. The low-alignment, weak-governance context is characterized by the prevalence of biased models, opaque decisions, poorly specified objectives, and lax oversight in markets with intense competitive pressures. At the other end (north), strong alignment and strong governance ensure robust value-alignment work, transparency, safety protocols, and institutional safeguards against control failures.

2.0 The four scenarios described

Based on these descriptions, I propose four scenarios. In the north-east are “meta-managers in aligned autonomous systems”; in the south-east are “algorithmic overlords in fragile firms”; in the north-west are “craft leaders in co-intelligent firms”; and in the south-west is “shadow automation and managerial drift”.

2.1 Meta-managers in aligned autonomous systems

(Agentic AI, High alignment & strong governance)

AI systems act as relatively autonomous organizational agents, running simulations, reallocating resources, and orchestrating routine decisions, but they are constrained by carefully designed alignment frameworks and institutional governance. Bostrom’s concerns about control and value alignment are addressed via robust oversight, multi-layered safety mechanisms, and institutional norms; agentic systems are powerful but embedded within guardrails. Managers work in Mollick’s co‑intelligent mode at a higher, more strategic level, curating objectives, interpreting AI-driven scenarios, and shaping socio-technical systems rather than micromanaging operations.

2.1.1 Implications for managerial work

  • Many mid-level coordination tasks (scheduling, resource allocation, basic performance tracking) are delegated to AI agents, compressing hierarchies and reducing the need for traditional middle management.
  • Managers function as meta‑managers: defining goals, constraints, and values; adjudicating trade-offs between conflicting AI recommendations; and stewarding culture and human development.
  • Soft skills (sense-making, ethics, narrative, conflict resolution) become the core differentiator, as technical optimization is largely automated.

2.1.2 Strategic prescriptions

  • Redesign structures around human-AI teaming: smaller, flatter organizations with AI orchestrating flows and humans focusing on creativity, relationship-building, and governance.
  • Develop “objective engineering” capabilities: train managers to specify and refine goals, constraints, and reward functions, directly addressing the alignment and normativity challenges Christian highlights.
  • Institutionalize alignment and safety: embed multi-stakeholder oversight bodies, continuous monitoring, and strong external regulation analogues, borrowing Bostrom’s strategic control ideas for the corporate level.
  • Reinvest productivity gains into human development: use surplus generated by autonomously optimized operations to fund learning, well-being, and resilience, stabilizing the socio-technical system.
Scenario: Meta-managers in aligned autonomous systems (Agentic AI, High alignment & strong governance)
Description: AI systems act as relatively autonomous organizational agents; strong alignment frameworks and institutional governance; powerful agentic systems embedded within guardrails.
Implications: Mid-level coordination tasks are delegated to AI agents; managers function as meta-managers; soft skills become the core differentiator as technical optimization tasks are automated.
Strategies: Redesign structures around human-AI teaming; develop objective-engineering capabilities; institutionalize alignment and safety; reinvest productivity gains into human development.

2.2 Algorithmic overlords in fragile firms

(Agentic AI, Low alignment & weak governance)

Highly autonomous AI systems manage pricing, hiring, supply chains, and even strategic portfolio choices in the name of speed and competitiveness, but with poorly aligned objectives and weak oversight. In Bostrom’s terms, “capability control” lags “capability growth”: agentic systems accumulate de facto power over organizational behaviour while their reward functions remain crude proxies for profit or efficiency. Christian’s alignment concerns show up as opaque prediction systems that optimize metrics while embedding bias, gaming constraints, and exploiting loopholes in ways human managers struggle to detect.

2.2.1 Implications for managerial work

  • Managers risk becoming rubber stamps for AI recommendations, signing off on plans they do not fully understand but feel pressured to approve due to performance expectations.
  • Managerial legitimacy suffers when employees perceive that “the system” (algorithms) is the real boss; blame shifts upward to AI vendors or abstract models, eroding accountability.
  • Ethical, legal, and reputational crises become frequent as misaligned agentic systems pursue local objectives (e.g., discriminatory hiring, aggressive mispricing, manipulative personalization) without adequate human correction.

2.2.2 Strategic prescriptions

  • Reassert human veto power: institute policies requiring human review for critical decisions; create channels for workers to challenge AI-driven directives with protection from retaliation.
  • Demand transparency and interpretability: require model documentation, explainability tools, and regular bias and safety audits; push vendors toward alignment-by-design contracts.
  • Slow down unsafe autonomy: adopt Bostrom-style “stunting” and “tripwires” at the firm level, limiting AI control over tightly coupled systems and triggering shutdown or rollback when harmful patterns appear.
  • Elevate ethics and compliance: equip managers with escalation protocols and cross-functional ethics boards to rapidly respond when AI-driven actions conflict with organizational values or external norms.

Scenario: Algorithmic overlords in fragile firms (Agentic AI, Low alignment & weak governance)
Description: Highly autonomous AI systems lead operations in firms with poorly aligned objectives and weak oversight; capability control lags capability growth, so agentic systems accumulate de facto power; opaque systems optimize metrics while embedding biases that humans fail to detect.
Implications: Managers risk becoming rubber stamps for AI recommendations; algorithmic dominance erodes managerial legitimacy; misaligned agentic systems pursuing local objectives lead to ethical, legal, and reputational crises.
Strategies: Reassert human veto power; demand transparency and interpretability of models; slow down unsafe autonomy (adopt stunting and tripwires); elevate ethics and compliance.

2.3 Craft leaders in co-intelligent firms

(Tool-like AI, High alignment & strong governance)

Managers operate with powerful but well-governed AI copilots embedded in every workflow: forecasting, scenario planning, people analytics, and experimentation design. AI remains clearly subordinate to human decision-makers, with explainability, audit trails, and human-in-the-loop policies standard practice. Following Mollick’s co‑intelligence framing, leaders become orchestrators of “what I do, what we do with AI, what AI does,” deliberately choosing when to collaborate and when to retain manual control. AI is treated like an expert colleague whose recommendations must be interrogated, stress-tested, and contextualized, not blindly accepted.

2.3.1 Implications for managerial work

  • Core managerial value shifts from information aggregation to judgment: setting direction, weighing trade-offs, and integrating AI-generated options with tacit knowledge and stakeholder values.
  • Routine analytical and reporting tasks largely vanish from managers’ plates, freeing capacity for coaching, cross-functional alignment, and narrative-building around choices.
  • Managers must be adept in managing alignment issues and mitigating bias, able to spot mis-specified objectives and contest AI outputs when they conflict with ethical or strategic intent.

2.3.2 Strategic prescriptions

  • Invest in AI literacy and critical thinking: train managers to prompt, probe, and challenge AI systems, including basic understanding of data, bias, and alignment pathologies described by Christian.
  • Codify human decision prerogatives: clarify which decisions AI may recommend on, which it may pre-authorize under thresholds, and which remain strictly human, especially in people-related and high-stakes domains.
  • Build governance and oversight: establish model risk committees, escalation paths, and “tripwires” for anomalous behaviour (organizational analogues to the control and capability-constraining methods that Bostrom advocates at societal scale).
  • Re-design roles around co‑intelligence: job descriptions for managers emphasize storytelling, stakeholder engagement, ethics, and system design over reporting and basic analysis.
Scenario: Craft leaders in co-intelligent firms (Tool-like AI, High alignment & strong governance)
Description: Managers work with well-governed AI copilots embedded in every workflow; AI remains subordinate to human decision-makers; managers choose when to collaborate and when to control; AI is treated as an expert colleague whose recommendations are not blindly accepted.
Implications: Managerial value shifts from information aggregation to judgment; routine analytical and reporting tasks disappear; managers need to be adept at alignment and mitigating biases.
Strategies: Invest in AI literacy and critical thinking; codify human decision prerogatives; build governance and oversight; redesign roles around co-intelligence.

2.4 Shadow automation and managerial drift

(Tool-like AI, Low alignment & weak governance)

AI remains officially a “tool,” but is deployed chaotically: individual managers and teams adopt various assistants and analytics tools without coherent standards or governance, a classic form of “shadow AI.” Alignment problems emerge not from superintelligent agents, but from mis-specified prompts, biased training data, and unverified outputs that seep into everyday decisions. Organizational AI maturity is low; AI is widely used for drafting emails, slide decks, and analyses, but validation and accountability are informal and inconsistent.

2.4.1 Implications for managerial work

  • Managerial work becomes unevenly augmented: some managers leverage AI effectively, dramatically increasing productivity and quality; others underuse or misuse it, widening performance dispersion.
  • Documentation, reporting, and “knowledge products” proliferate but may be shallow or unreliable, as AI-written material is insufficiently fact-checked.
  • Hidden dependencies on external tools grow; the organization underestimates how much decision logic is now embedded in untracked prompts and private workflows, creating operational and knowledge risk.

2.4.2 Strategic prescriptions

  • Move from shadow use to managed experimentation: establish clear guidelines for acceptable AI uses, required human verification, and data-protection boundaries while encouraging pilots.
  • Standardize quality controls: require managers to validate AI-generated analyses with baseline checks, multiple models, or sampling, reflecting Christian’s emphasis on cautious reliance and error analysis.
  • Capture and share best practices: treat prompts, workflows, and AI use cases as organizational knowledge assets; create internal libraries and communities of practice.
  • Use AI to audit AI: deploy meta-tools that scan for AI-generated content, bias, and inconsistencies, helping managers assess which outputs demand closer scrutiny.
Scenario: Shadow automation and managerial drift (Tool-like AI, Low alignment & weak governance)
Description: AI remains officially a tool but is chaotically deployed with no coherent standards or governance (shadow AI); alignment problems arise from mis-specified prompts, biased training data, and unverified outputs; organizational AI maturity is low (AI used for low-end tasks).
Implications: Managerial work becomes unevenly augmented; proliferation of unreliable, unaudited “knowledge products”; hidden dependencies on external tools create high operational and knowledge risks.
Strategies: Move from shadow use to managed experimentation; standardize quality controls; capture and share best practices; use AI to audit AI (meta-tools).

3.0 Conclusion (not the last word!)

As we can see, across all four quadrants, managerial advantage comes from understanding AI’s capabilities and limits, engaging with alignment and governance, and deliberately designing roles where human judgment, values, and relationships remain central.

As we stand at the cusp of the AI revolution, I am reminded of my time in graduate school (mid-1990s), when the Internet was exploding and information was becoming ubiquitously available globally. A lot of opinions were floated about how and when “Google” (and other) search engines would limit reading habits. We have seen how that played out: Internet search has enabled and empowered a lot of research and managerial judgment.

There are similar concerns about the long-term cognitive impacts as rapid AI adoption leads to the ossification of certain habits (in managers) and processes (in organizations). What these routines will bring to the table remains to be seen; these are certainly not the last words written on this!

Cheers.

(c) 2026. R Srinivasan

Business biographies

I was invited to speak at the Bangalore Business Literature Festival 3.0 (http://bangalorebizlitfest.org) yesterday. I prepared some notes, but true to my style, spoke mostly impromptu. It was a great opportunity to meet writers I read regularly, as well as a lot of aspiring writers. No, I am not a prolific business writer; in fact, I have not written a single business biography yet. I wondered why I was drafted in as a speaker for the session on business biographies and management education. The moderator of my panel, my colleague Prof. Ramya Ranganathan, clarified: I was in because I write cases and use firm biographies in my teaching and consulting. So, I thought I would begin by distinguishing between biographies of firms and biographies of individuals. That, in turn, makes it important to clarify three words: biography, history, and case.

Biographies of people and firms

In a panel before mine, a question was asked (to which the panelists had no answer, and which was part of my preparation!): in writing a biography, how do we separate the founder from the firm (especially in the case of a startup)? In a significant part of my classroom conversations, I separate the two explicitly. When you write and analyse a set of events in the form of a chronology, it is very difficult to separate the person from the firm. If you want to chronicle a firm biography, it is important to slice the narrative across specific decisions. Say, a diversification decision could be narrated as a specific chapter, bringing in the environmental context, how it was perceived by the leaders, how and what strategic change was initiated, what the implications of the change were for the internal and external contexts, and if and when it achieved its intended purpose.

This way, writers can separate a firm biography from an individual biography, and highlight the idea that firms outlive (and outperform) people. As a strategy teacher, it is important for me and my class to focus on the firm independent of the strategist. And this distinction is critical to learning and application.

Biography, History, and Case

As a case writer (I want to believe that I have been prolific), I am continuously confronted with this distinction. And when I teach or mentor colleagues on writing cases, I make sure I tell them that they are not writing a biography or a history! So, it is important to articulate the differences.

The purpose of a historical narrative is to recreate the context in the minds of the readers and to inspire. Some exaggeration is acceptable; some glorification of the protagonist is expected; and therefore, triangulation of methods and validation of data are not that critical (though good history is based on well-organized and validated data). The purpose is to present the reader with an interpretive representation of the context. The example that came to my mind yesterday was the imagery of Rani Laxmi Bai of Jhansi. The job of a historian is to recreate an image of the Rani in the minds of the readers with his narrative. Everyone (in my audience) may have heard of her, and having read the narratives/ heard stories about her, the readers would form an image of her in their own minds. It is inconsequential whether she was right- or left-handed; whether she had 12 guards or 16; or whether she indeed wore a saree as depicted in the imagery. What is important is to highlight the paradoxes: a woman warrior in a society and time when no woman fought in the frontlines; a saree-clad horse-rider; a lady carrying both a baby slung on her back and a sword in her hand; and the like. Symbols of bravery and obstinacy. Period.

On the other hand, a biographer is expected to carefully validate all data and record them diligently. She is expected to present all aspects of the personality (the good, the bad, and the not-so-expected facets) from multiple perspectives, without judgement. The biographer cannot exaggerate, should not glorify, and should present the facts as they are, without interpretation. It is for the readers to make sense of them and take away specific learnings. It could be anything: when Prof. V Raghunathan spoke yesterday, he said he had picked up the idea that a man can do more work by sleeping only four hours a day from a biography of Napoleon (who apparently slept for four hours on horseback). Maybe, when I read the same biography, I would pick up something else, and you something totally different. No representations here.

A case is a decision-focused narrative. The purpose of a case is to emphasize how learning about one context can help students and learners apply it to other contexts. My colleague Prof. Rakesh Godhwani narrated a story yesterday about the great imposter, Dr. Joe (made famous by the movie Catch me if you can), and how as MBAs we need to “fake it till we make it”. A member of the audience wanted to know the ending: did he get rewarded for saving lives, or did he go to jail for impersonation? As a case writer and management teacher, I would say the ending does not matter. What matters are the criteria for making the decision, not the decision itself. Therefore, good cases do not need to be neat and complete. They are non-comprehensive accounts of events and antecedents leading to a decision.

Therefore, the historian would create an image of the Rani of Jhansi, the biographer would focus on her valor, while the case writer would discuss the context of why she was required to fight at all (in the context of British India’s Doctrine of Lapse).

Celebrating the also-rans

I called on the audience to write about firms that were also-rans. Success has many fathers; and these days, even failures have enough parents. In fact, I was proud of the fact that one of my first cases as an academic was about India’s first e-commerce firm (way back in 1999), Fabmart.com; and in one of the panels yesterday, the co-founder of Fabmart.com, Mr. K Vaitheeswaran, spoke about how the firm subsequently failed and how he coped with that failure (read about his book, Failing to Succeed, here).

However, there is little focus on the also-ran firms. Since a majority of our students are also-rans who make their careers in also-ran firms, it is important to study them. It is important to study how these firms perceive the environment, adapt/ adopt, make strategic shifts, learn/ unlearn, pivot, take feedback, measure impact and performance, and go round and round in circles! In short, developing what we strategy researchers call dynamic capabilities. These biographies are extremely valuable for the management researcher.

So, the next time you hear about a firm that is neither a success nor a failure, remember that there is a lot to learn from it in terms of processes, routines, and dynamic capabilities.

Cheers.

(c) 2017. R Srinivasan.

PS: Edited Prof. Rakesh Godhwani’s coordinates and the name of the movie he referred to (Catch me if you can) on 11 Sep 2017.

Changed the featured image: 21 Sep 2017.

Judgments

This is that time of the year when Indian business schools welcome their new students. As a self-proclaimed proponent of the case method of learning, I am often invited by my school to teach a session on “case method of learning” to the first year students. And one of my key messages to one such group of students this year was this: “a lot of what you will learn in the business school in terms of content, can be read from a variety of sources; what you will learn in class through continuous, repeated practice is the ability to make sound judgments.” This post is an elaboration of my understanding of the role ‘judgments’ play in business and life.

Judgment: what is it, anyway?

In the legal world, a judgment is a decision made by a ‘learned judge’ after hearing out the arguments from all parties involved. The judge makes up her mind after providing equal and fair opportunity to all concerned parties to present their points of view; analysing the evidence presented in detail; collating expert opinions; gleaning through precedents, cognizant that this particular decision may itself set a precedent; and keeping in view the law of the land as well as the changing (socio-economic) contexts. When a judge presents a judgment, it provides a guideline for what is good/ bad; preferable/ not preferable; acceptable/ not acceptable in that particular context. To that extent, there is a subjective evaluation of the options, given the specific context, and a specific preference for one course of thought/ action over another.

Is it different from decision-making?

At a basic level, a judgment is a decision. But it is more than just a decision. A decision, by definition, is a choice. Professor William Starbuck famously distinguished policy making (where resource allocation is a continuous process) from decision-making, which he described as “… the end of deliberation and the beginning of action” (for more details on this quote and, in general, a history of decision-making, read this classic HBR article).

I see the primary difference between decision-making and judgment as the management of risk and uncertainty. In his classic book titled “Risk, Uncertainty, and Profit”, Frank Knight (1921) defined uncertainty as a situation where the outcomes cannot be comprehensively enumerated and the probabilities of their occurrence cannot be estimated. Risk, on the other hand, is a situation where all possible outcomes can be listed and their probabilities calculated.
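Knight’s distinction can be made concrete with a toy calculation (the numbers below are illustrative, not from Knight): under risk, an expected value is computable; under uncertainty, the outcome set itself is open-ended, so no such sum can be formed.

```python
# Knight (1921): under RISK, all outcomes and their probabilities are known,
# so an expected value can be computed. (Illustrative numbers only.)
risky_bet = {100: 0.3, -20: 0.7}  # outcome -> probability
expected_value = sum(outcome * p for outcome, p in risky_bet.items())
print(round(expected_value, 2))  # 0.3*100 + 0.7*(-20) = 16.0

# Under UNCERTAINTY, the outcomes cannot be enumerated and no probabilities can
# be attached, so no expected value exists: this is where judgment, rather than
# calculation, takes over.
```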

Instructions and advice

One of my favourite assertions in my case-learning sessions is the difference between instructions and advice. Instructions, as we all know, are directions for performing an activity step by step, a sort of standard operating procedure. No thinking is involved here: just go ahead and do what is written or told. Advice, on the other hand, is contextual. Someone tells you, “it worked for me/ others in a similar context, you may try it yourself”. Of course, this implies that if your context is different, feel free to ignore or adapt. Isn’t that why advice is always free?!

I bring in the example of how a little boy is taught to cross the street. Imagine his mother’s instructions: “before you cross the street at the zebra crossing, look for the policeman at the intersection; and when he signals you to cross, run across the street as fast as you can!” Wonderful … as long as the context is fixed. What happens if there is no policeman at the intersection … does the boy keep waiting? What happens if the policeman does not notice him waiting to cross the street? Or what happens when the policeman signals him to cross, but a car is speeding towards him? What happens if …? This is where judgments come in handy. Instead of giving him instructions for crossing the street, his mother should develop judgment skills in him.

Imagine how you cross the street … if any of you have tried crossing a street in India, you know it better. I distinctly remember my German colleagues, while attending a conference in India, having a harrowing time crossing streets! When you cross the street, you look to both sides of the road, spot a car a fair distance away (say 163 m), driving relatively slowly (at 26 km per hour). You estimate that at your walking speed of 4.5 km per hour, you will be able to cross the 80 ft wide street a good three seconds before the car reaches the point where you intend to cross. You get the point, right? Nobody does all these calculations; we just know it. Decision scientists call it intuition, gut, judgment.
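The implicit estimate above can be sanity-checked in a few lines (a sketch using the figures in the text, assuming constant speeds and a straight-line crossing):

```python
# Street-crossing margin, using the figures from the text
car_distance_m = 163.0
car_speed_ms = 26 / 3.6         # 26 km/h in m/s
walk_speed_ms = 4.5 / 3.6       # 4.5 km/h in m/s
street_width_m = 80 * 0.3048    # 80 ft in metres

time_to_cross_s = street_width_m / walk_speed_ms     # ~19.5 s
time_car_arrives_s = car_distance_m / car_speed_ms   # ~22.6 s
margin_s = time_car_arrives_s - time_to_cross_s      # ~3.1 s to spare
print(round(margin_s, 1))
```

Judgment is precisely the ability to make this call without doing the arithmetic.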

Judgment is developed through practice, accumulated through experience, and honed through active experimentation. Acculturation through socialization and mentoring may help in developing judgment; but there is no guarantee that just by repeating an action again and again, one would develop judgment. Apart from practice and experience, a critical component of judgment is intent, plus an ability to weigh the pros and cons (in almost real time), as in decision-making.

Intent in Judgement

One needs a specific intent to learn from experience. It is very likely that someone can continue to do an activity repeatedly without developing a sense of judgment, something like rote learning or Pavlovian conditioning. How many times have you seen people doing the same activities again and again without knowing why they are doing them, and why that way? Inefficient bureaucracies are built on the separation of thinking from doing; the doers are discouraged from thinking … they are told to just do, and to suspend thinking. Imagine blue-collar workers in the Taylorist world, or BPO workers, or some customer-service executives in modern-day organizations. It requires concerted intent to learn judgment.

Will I lose my job to automation?

The question in most cases is not if, but when. Judgment has never been more important than it is today. Roles where judgment is not required, and activities that can be codified into detailed processes (where all possible outcomes can be enumerated and probabilities calculated), will be taken over by automation. Bots and robots dominate the internet world today. Almost every website with a customer interface has a bot running … and sometimes the responses can be hilarious. For instance, when an airline customer sarcastically thanked an airline for misplacing his luggage, the airline responded with a big thanks for his compliment. Obviously, the sarcasm was lost on the automated response; the machine could not “learn” enough. And the entire twitterati took over (read about it here).

We live in a world today where the buzzwords include “big data”, “analytics”, “business intelligence”, and “artificial intelligence”. I recently saw a cartoon on a blog (futurethink.com.sg) that I can relate to very well.

[Cartoon: Artificial Intelligence and Real Intelligence]

As machine learning, automation, robotics, and augmented reality dominate our industrial vocabulary, natural intelligence and human judgment should take centre stage in our discourse.

Learning judgement

My advice to budding managers: invest in learning judgment. Consciously, with intent. Practise, make mistakes, experiment. Define outcomes and build expertise. After all, what you make of your life and career depends on your judgment, right?

Cheers.

(c) 2017. R Srinivasan.

 

Beware the stupid!

During one of my random browsing sessions on the internet on my mobile device, I came across an interesting set of laws: the basic laws of human stupidity. Yes, you read that right, stupidity. They are by Carlo M. Cipolla (read the original article here), an Italian-born former professor emeritus of economic history at the University of California, Berkeley. This is simply genius. This post is to help you see how these laws apply to the start-up ecosystem of today. Read on.


The five laws

Let us first understand the five laws. The first law states:

Always and inevitably everyone underestimates the number of stupid individuals in circulation.

They are everywhere and appear suddenly and unexpectedly. Any attempt at quantifying the numbers would be an underestimation.

The second law states:

The probability that a certain person be stupid is independent of any other characteristic of that person.

There is serious diversity at work here. No race, gender, educational attainment, physical characteristic, psychological trait, or even lineage can explain the incidence of stupidity in a person. He says a stupid man is born stupid by providence, and in this regard, nature has outdone herself.

The third law, also labelled the golden law, presents itself as a neat 2x2 matrix. It states:

A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.

This law classifies people into four categories: the helpless, the intelligent, the bandit, and the stupid. Organized on the two axes of gains for self and gains for others, the helpless is fooled by others who gain at his expense; the intelligent creates value for himself as well as for others; the bandit gains at the expense of others; and the stupid incurs losses himself while destroying value for others.
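The golden law’s 2x2 can be sketched as a classification on those two axes (a sketch; the function name and the convention of treating zero as a gain are mine, not Cipolla’s):

```python
# Cipolla's 2x2: classify a person by the sign of the gains their actions
# produce for themselves and for others. (Zero counts as a gain here, a
# simplifying assumption.)
def cipolla_type(gain_self: float, gain_others: float) -> str:
    if gain_self >= 0 and gain_others >= 0:
        return "intelligent"  # creates value for self and others
    if gain_self >= 0 and gain_others < 0:
        return "bandit"       # gains at the expense of others
    if gain_self < 0 and gain_others >= 0:
        return "helpless"     # others gain at his expense
    return "stupid"           # loses while destroying others' value

print(cipolla_type(-10, -50))  # prints "stupid": the lose-lose quadrant
```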


While the actions of the other three types are explicable, the actions of the stupid are so difficult to defend against: no one can explain why a stupid person behaved that way.

While people may behave intelligently one day, as bandits another day, and helplessly in another place and context, stupid people are remarkably consistent: they are stupid, irrespective of context. No rationality at all, just pure consistency. And that makes stupidity extremely potent and dangerous, for the simple reason that you cannot erect a rational defence against a stupid attack: it comes as a surprise, and more importantly, there is no rational cause for the attack in the first place.

Which leads us to the fourth law, which states:

Non-stupid people always underestimate the damaging power of stupid individuals. In particular non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake.

Even intelligent people and bandits (who are rational) underestimate the probability of encountering stupid people, are genuinely surprised by stupid attacks, and are at a loss to defend themselves effectively against stupidity. Given the inherent unpredictability of stupidity, it is difficult to understand in the first place, and any attempt at defending against it may itself provide the stupid person with more opportunities to exercise his gifts!

Which leads to the fifth law, which states:

A stupid person is the most dangerous type of person.

And by corollary,

A stupid person is more dangerous than a bandit.

The danger of stupidity cannot be overstated, and this law captures it. Given the irrationality of stupidity, and the costs associated with stupid behaviour, a stupid person is far more dangerous than any other type of person. An intelligent person adds value to society, a helpless fool may transfer value from himself to others, a bandit may transfer value from others to himself; but the stupid person erodes value from society by executing a lose-lose strategy. There could be bandits who border on the stupid (someone who kills a person for $50 – the value they gain is far lower than the value the victim loses; though the $50 may be as valuable to them as life is to the victim). But given the power of stupidity, the stupid can create far more harm than one can even imagine.


The five laws of start-up world stupidity

  1. Stupid business models are aplenty – they rear their heads everywhere, every time. Irrespective of the context, they are omnipresent. No exceptions at all. Do you remember business models like Iridium (by Motorola) and the FreePC experiment? They exist even today … Casper Tucker wonders why he should make his own IP redundant (read here).
  2. The probability of a stupid business model arising from a developed country, from a venture of a large organisation, or from the famed Silicon Valley (or Bangalore, Berlin, or Shanghai for that matter) is the same (and high). The start-up graveyards are littered with the corpses of stupidity-induced deaths of firms, their investors, customers, and every other stakeholder you can think of. If you think sandpaper for shaving or hair-removal is a bad idea, check this out!
  3. Do I need to tell you the costs of stupidity in the start-up world? I have come across founders who, in the first few months of the business taking off, begin talking valuation rather than growth. In the process, they have destroyed value truly and squarely for everyone around them, including themselves. Nothing can match the stupidity of a founder who sacrificed his employment to start a firm, acquired customers and forced them to make asset-specific investments, made wonderful investor presentations and got a few to invest as angels, PE, or VC; and then, instead of worrying about making the business profitable, chased valuation. I have surely mentored a few, and do not want to name them for obvious reasons.
  4. The fourth law is the trick – stupid people thrive on their ability to surprise you with their conviction. And there are enough people who irrationally believe in them; even the rational actors are unsure how to respond – till it all dawns on them. How many products listed in this article do you remember?
  5. And they are just plain dangerous – they can bring an entire ecosystem down. Remember how the Real Value Vacuumizer brought the entire innovative company down (did you know the firm was the first to introduce a portable fire extinguisher under the brand name Cease Fire, which, by the way, remains a generic name for portable fire extinguishers)?

So, customers and investors, start-up founders and entrepreneurs, students and researchers, and everyone else, beware the stupid.

Cheers!

© 2017. R. Srinivasan


Management theory – just mumbo jumbo?

I am writing here after a really long time. During one of my train journeys between Chennai and Bangalore last weekend, I happened to re-read an article that I had saved for later reading on my mobile device – it appeared in The Economist six months ago (read it here). The Economist argues that management as a discipline resembles the Medieval Catholic church that Martin Luther set out to transform about 500 years ago (business schools = cathedrals; consultants = clergy; the clergy speaking in Latin to give their words an air of authority = management theorists speaking in mumbo-jumbo that only their peers can understand; and having lost touch with the real world).

The Economist avers that all management theory revolves around four basic ideas – consolidation across industries rather than competition; fewer entrepreneurial successes than celebrated in popular discourse; maturing organisational bureaucracies that slow firms down rather than speed them up; and the seemingly inevitable globalisation being reversed by trends such as ‘Make America Great Again’ and ‘Brexit’. A redemption is therefore called for. Let me re-examine these four trends in the Indian business context.

1. Consolidation rather than competition

There is surely a trend of consolidation in Indian industry. The telecom service business is a case in point – the entry of Reliance Jio has led to severe performance pressures and possible consolidation among service providers (read here); SBI merging its associate banks, and other possible PSB mergers (read here); or even the retail industry (read here). Is competition really going down? I think a variety of management theorists would say no. From high-velocity and hyper-competitive environments in the mid-1990s through the mid-2000s, the discourse has evolved to disruptive innovation in the late 2000s, and to Industry 4.0, the Internet of Things, Artificial Intelligence, and Big Data in the mid-2010s. The fads have changed, but the discourse has always been dominated by some fad or other. We management theorists have always sought, and will always seek, to sustain our legitimacy by maintaining how difficult it is to do business ‘today’ and in the near future. Has there ever been a time when a management scholar said that “we live in stable times”?

2. Entrepreneurial failures

Yes, we live in interesting times. More and more firms are being founded, especially in clusters like Bengaluru (erstwhile Bangalore) and Gurugram (erstwhile Gurgaon). But as The Economist argues, there are also more and more entrepreneurial failures reported in the media. And these failures come in myriad forms – simple shutdowns like the ones described in these articles (25 failed startups in 2016), or sellouts to larger competitors (see this list of acquisitions by Quikr).

3. Organisational bureaucracies

With so much churn in the ecosystem, we management theorists propound that business is getting faster. Yes, network effects allow some businesses to scale faster than traditional pipeline businesses. However, given the ubiquity of such businesses and the easy imitation of their business models, a lot of startup failures are attributed to these businesses not scaling fast enough! Think Uber in China, Snapdeal in India, and other such startups. Scaling is easier said than done.

4. Rise of nationalistic tendencies

This is possibly true of most economies in the world today – rise of nationalism, and the ‘de-flattening’ of the world. Right wing protectionism is on a steady rise, and in some countries, it has reached jingoistic heights.

Time for redemption. What can management theorists do?

It is high time we redeemed ourselves and the discipline. Given these trends, here is a call for management theorists to make their discourse relevant to the business environment of today. We (as management theorists) need to come down from our ivory towers and start communicating with the managers of today and tomorrow. While research output as measured by top-tier journal publications is important for an academic career, it is equally important to translate that research into insights relevant to business managers. The Indian management researcher has very few options for publishing high-quality research in Indian journals. The quality of Indian management journals targeted at academics leaves much to be desired. Nor is there a variety of high-quality Indian practitioner-oriented journals. And the circulation numbers of these journals amongst Indian CEOs? And where are the cases on Indian companies? The cases published by Indian academics on Indian firms are too few, and of too uneven a quality, to even write about.


(c) 2017. R Srinivasan.