Disruptive innovation in healthcare: Is COVID-19 the opportunity?

A colleague and I were organizing an online hackathon around the current pandemic, COVID-19. We were seeking innovative ideas in both the healthcare and policy spaces. We received a good number of responses, but I was struck by how little we are willing to “disrupt” the healthcare industry. The Indian healthcare industry has long escaped the kind of disruption that many other industries have undergone.

Disruptive innovation: A primer

Before we proceed further, let us quickly understand what disruptive innovation means. The concept was introduced by the late HBS Professor Clayton Christensen. Disruptive innovation (DI) is a process by which a smaller company with fewer resources successfully challenges established incumbents in an industry. As the incumbents focus on improving their products and services for their primary (most profitable) customers, they exceed the needs of some segments and ignore the needs of others. These overlooked segments are the targets for the new entrants, who deliver a specific functionality valued by these customers, typically at a lower price. Incumbents, chasing their highly profitable customer segments, tend not to respond to these new entrants. The new entrants gain significant industry know-how and move up-market, delivering the functionalities that the incumbents’ primary customers value, without losing the advantages that drove their early success with the hitherto overlooked segments. When customers in the most profitable segments begin switching from incumbents to new entrants, they provide the new entrants with economies of scale. Armed with this learning and these economies of scale, the new entrants “disrupt” the industry.

There are various examples of disruptive innovation. For instance, film distributors (cinema halls and multiplexes) served connoisseurs among movie consumers – those who valued the experience of the movie hall. The network of movie halls and their audio-video technology were the primary sources of competitive advantage in that world. Netflix entered the industry targeting a segment of customers who wanted to watch movies but could not spare the travel to the movie hall or the uninterrupted time and attention it demanded. They would trade off the experience against the quality of content, and were therefore willing to watch movies at home, using their own devices. Movie watching became a personal activity rather than a community experience. Netflix leveraged its library of movies as a source of competitive advantage, and captured this market with an innovative low-price strategy – watch as much as you can for a monthly subscription fee (as against the unit pricing of every movie that was the industry standard then). Armed with its learning of consumer preferences (being digital, Netflix had far more granular data on consumer preferences than the multiplexes in shopping malls), it moved up-market. It leveraged the convergence between entertainment and computing, as TVs became smart and computer screens grew bigger. The incumbents continued to ignore Netflix, reasoning that it would take time for the connoisseurs of movies to shift. That allowed Netflix to compete for original content and piggy-back on the convergence in the home entertainment space.

Typically, disruptors have different business models, given that they target different consumer segments, provide differentiated value, and possibly price differently. Many of these disruptive innovators adopt a platform business model intermediating between different user groups (like Airbnb or Redbus), pursue servitization (like ERP on the cloud), or introduce different pricing models (like the pre-paid pricing of mobile telecom services in emerging markets).

Innovation in the healthcare industry

I aver that the Indian healthcare industry has not witnessed disruptive innovation for three reasons. One, even though primary healthcare is considered a public good, a lot of Indian consumers are willing to pay for high-quality tertiary and quaternary care (either because they can afford it or because they have access to state/ private insurance). This makes for low price sensitivity across the industry. Coupled with the information asymmetry between caregivers and patients, patients are risk-averse as well. Two, given the high investments required to set up tertiary and quaternary facilities, the industry has become highly corporatized and consolidated; a few large corporations dominate it. The economics of the large corporate healthcare provider requires a tight leash on the metrics that matter for profitability: increased use of automation and robotics in labour-intensive routines (and of manual labour in routines where unskilled and semi-skilled manpower is easily available), reducing the patient’s average length of stay, and optimizing the average revenue per occupied bed. Three, organized healthcare providers have quickly encapsulated every attempt at democratizing healthcare. For instance, when glucometers and pregnancy-testing kits became consumer devices, clinics and physicians built an ecosystem around these devices so that the therapy would not become entirely owned by the patient. When you went to your endocrinologist with your blood glucose charts, she would insist that home devices are error-prone and make you test again at the clinic – not so much for the additional cost of testing again (which could be minuscule) as to reinforce the perception that healthcare is the domain of certified experts.

Data, ahoy!

As healthcare and data analytics come together, the industry is on the verge of disruption. Multiple wearable/ consumer devices capture health data; the medical equipment of the early 2020s is as much a set of computing devices, generating gigabytes of data per observation; and the increasing adoption of remote sensors for public health (like thermal screening devices) will generate terabytes more. Such a deluge of data is likely to overwhelm legacy healthcare providers, who have hitherto relied on the extensive experience of the physician, with data as mere support.

There is an urgent need to allow for a variety of healthcare providers to operate in their own niches. For instance, given the developments in dental and ophthalmic surgeries, there should be no need to build infrastructure beyond ambulatory facilities. Increasingly, diagnosis and treatment should move to the home.

Disruptive innovation in times of COVID-19 pandemic

The unique nature of the COVID-19 pandemic means that healthcare must be provided at scale and speed. Given that the world is yet to discover a cure or vaccine, let alone a standardized treatment protocol, governments and healthcare providers need to move fast and scale up testing and care. Amongst the triad of quality, cost, and scale, the world would prefer low cost and high scale at speed, rather than wait for the best quality of care.

[Figure: Axes of healthcare]

That is a world ripe for disruptive innovation. The under-served segments are baying for an affordable care programme that is good enough. Will the governments of the day make this trade-off and ensure that “acceptable quality” testing/ care is provided, at affordable costs, to the large sections of the population that are infected or likely to be infected?

Stay healthy, stay hydrated, stay safe!


(c) 2020. Srinivasan R


The five vowels of Digital transformation

In my view, there are three outcomes of a successful digital transformation effort – improvement in efficiency (driven by speed and agility), enhanced experience (at both the customer and employee ends), and differentiation from competitors (through data/ insights-driven customisation). Such interactions need to be delivered omni-channel and ubiquitously (anywhere, anytime, on any device).

[Figure: Vowels of Dx]

Agility: Digitalisation of specific processes requires them to be reimagined; this eliminates redundancies, reduces wasteful activities, and cuts overhead costs. All of these contribute to increased efficiency and faster turnaround times.

Experience: As I have been arguing, good digitalisation should make lives simpler for customers, employees, and all other partners. As these different stakeholder groups engage with the firm digitally, there is a significant reduction in the variation of service quality, leading to a consistent experience.

Insights: As digitalisation allows firms to capture data seamlessly, it is imperative not just to store the data but to generate meaningful insights from it – and to use those insights to develop customised/ innovative offerings for their stakeholder groups (customers, employees, and partners).

Omni-channel: The digital experience should be provided to stakeholders across all the channels through which they interact with the firm. It is not sufficient to digitalise certain processes while keeping others in legacy manual systems. Imagine an organisation that generates electronic bills for its customers but requires its employees to submit their bills in hardcopy for reimbursement!

Ubiquitous: The digital experience should be available to everyone, anytime, anywhere, and on any device. The entire purpose of digitalisation would be lost if it were not ubiquitous. Imagine an online store that only opened between 0800-2000 hours Monday through Friday!

As can be seen, omni-channel and ubiquitous are hygiene factors (they do not create additional value by their presence, but can destroy value by their absence), and therefore sit in the denominator.

The 4 axes of online learning

As the world moves to more and more online work and learning, a colleague of mine triggered some thoughts in me – can everyone learn the same way online? Do our standard theories of learning work in the online world?

Of course, there are three kinds of teachers – those who dread online teaching (they believe they will have no control over students’ behaviour); those who are cautious (they believe some things can be done online, but not others); and those who are willing to experiment and adapt (they are confident enough of their content to believe that the medium does not matter). That discussion is for another day. Right now, let’s focus on the learners and their learning styles.

I believe that there are eight styles of learning in the online world. Of course, I do not claim scientific evidence that these are mutually exclusive and collectively exhaustive – possibly a research study is in order. They are anecdotal, based on my own experiences of teaching face-to-face, purely online, and hybrid (some students face-to-face and others online). There is a range of other such classifications as well, from the classic Kolb’s learning styles inventory to more detailed studies.

The list, first

  1. Visual (spatial)
  2. Aural (auditory-musical)
  3. Verbal (linguistic)
  4. Physical (kinaesthetic)
  5. Solitary (intra-personal)
  6. Social (interpersonal)
  7. Logical (mathematical)
  8. Emotional (action-response)

These styles are not mutually exclusive; the eight listed above are pure types, and learners prefer combinations of them. The combinations define one’s learning style.

The elaboration

  • Visual/ spatial: learning through pictures, images, maps, and animations; sense of colour, direction, and sequences; flow-diagrams and colour-coded links preferred.
  • Aural/ auditory-musical: learning through hearing sounds and music; rhyme and patterns define memory and moods; learning through repeated hearing and vocal repetition is preferred.
  • Verbal/ linguistic: power of the (most often) written word; specific words and phrases hold attention and provoke meaning; negotiations, debates, and orations are preferred.
  • Physical/ kinaesthetic: sensing and feeling through touch and feel of objects; being in the right place can create thoughts and evoke memory; role plays and active experimentation are preferred.
  • Solitary/ intra-personal: being alone provides for reflection and reliving the patterns of the past; self-awareness through meditative techniques; independent self-study and reflective writing (diaries and journals) preferred.
  • Social/ interpersonal: learning happens in groups (rather than alone) through a process of sharing key assertions and seeking feedback on the same from others; need for conformity and assurance as bases for learning; group discussions and work groups preferred.
  • Logical/ mathematical: building on the power of logic, reasoning and cause-effect relationships; developing and testing propositions and hypotheses; build a pattern/ storyline through logical workflows of arguments/ relationships; focus on the individual parts of a system; lists and specific action plans are preferred.
  • Emotional/ action orientation: building on the power of emotions, arising out of loyalty, commitment, and a larger sense of purpose; being able to align a set of actions to a compelling vision of the future, following directions of a leader; focus on the gestalt and not on the specifics; energy and large-scale transformations are preferred.

The four axes

[Figure: Axes of learning]

Let us look for examples/ instances where each of these styles would work best. Visual works best when the inter-relationships are complex but can be represented through visual cues (or simulated cues), whereas physical works best when the relationships cannot be represented and need to be experienced as a whole. How would you like to take a virtual tour of the Pyramids of Giza?

The auditory style of learning works best when the brain remembers patterns and rhyme precedes reason, whereas the verbal style is most suited when reason is preferred over everything else – when specific words and phrases (like catchy acronyms and slogans) capture the imagination of the learner. Imagine trying to learn Vedic hymns purely through printed textbooks!

Solitary learning works best when one can reflect effectively by shutting out external cues; whereas social learning depends on feedback and reinforcement from others for learning to take shape. Imagine learning public speaking in social isolation, or seeking social confirmation and feedback in the process of poetry writing!

The logical learning style works best when the relationships can be detailed and represented as a series of cause-effect relationships. When such relationships cannot be established, we learn through emotions. Recall the call to action in the play Julius Caesar – “Friends, Romans, countrymen!” I would think the crowd responded more to emotional appeals than to reason!

Architecting the online class

As learners and teachers in the online world, we need to be cognisant of our preferred styles across these four axes (continuums). For instance, a class on machine learning would tend to be highly visual, verbal, solitary, and logical, whereas a music class is likely to be more physical, auditory, interpersonal, and emotional.

The learning context has to be chosen to suit the styles of both the content and the learners, and the technology has to suit the same. Imagine, for instance, a group of 400 learners tuning into a class on brand management through an online medium like Zoom. The instructor has little choice other than delivering a lecture, with text chat from the class as real-time feedback and hence the basis for interactivity. If, on the other hand, the class size were smaller, say around 40, the instructor could use case analysis as well. As a case teacher, I have managed to interact (two-way) with as many as 30 students (from a group of 44 active participants) in a single 90-minute class. I taught a case-based session on digital transformation imperatives online to a class of 50-odd students, using a combination of visual and interpersonal styles without compromising on logical arguments and pre-defined frameworks. I used two separate devices – an iPad with an Apple Pencil as a substitute for my whiteboard, and my desktop screen-share as a projector substitute. I was able to cold-call as well! That way, my class was visual, logical, verbal, and interpersonal.

[Figure: Axes of learning (session 1)]

To the same cohort, I taught another session, on implementing a digital transformation project, using another short case-let. This session, in contrast, was more visual (the framework was built largely on the whiteboard rather than on the slides), less verbal, more emotional and logical, and less interpersonal (more reflective observations about what would work in the participants’ own firms).

[Figure: Axes of learning (session 2)]

What works best is also driven by the learning goals. These learning styles apply equally to synchronous sessions (live classes), asynchronous sessions (MOOCs), and blended formats.

In summary, the architecture of an online session should include elements of the learning styles (driven by the learners’ and instructors’ strengths, as well as the content being delivered). Apart from the learning styles, the architecture should include three other components – the form of interaction, the immediacy of feedback loops, and the nature of interaction networks. The interaction could be audio-video or text; the feedback loops between learners and instructors could be immediate or phased; and peer-to-peer interactions may or may not be required/ enabled.

I look forward to your comments, feedback, and experiences.


(c) 2020. Srinivasan, R.


Remora Strategies

It is interesting how much management as a discipline borrows from other disciplines – much like the English language. But that is for another day. Today, working from home, I got reading a lot of marine biology. Yes, you heard it right: marine biology. It is about the Remora fish and its relationship with sharks and other larger marine animals. Students in my IIMB MBA class of 2020 heard about it in one of my sessions in the platform business course, and a couple of groups also used the concept in their live projects.

What is a Remora and what is its relationship with Sharks?

The Remora is a fish – possibly the world’s best hitchhiker. It has an organ that allows it to attach itself to a larger animal, like a shark or a whale. This sucker-like organ is a flat surface on the top of its head that allows it to attach to the belly of a shark, which is why the Remora is also popularly known as a sharksucker or whalesucker. Remoras have been known to attach themselves to divers and snorkelers as well. The sucker organ looks like a set of venetian blinds, increasing or decreasing its suction on the larger fish’s body as the Remora slides backward or forward. Remoras can therefore be removed by sliding them forward.

Remoras can swim on their own; they are perfectly capable. But they prefer to attach themselves to larger fish and hitch a ride to deeper parts of the ocean, saving precious energy. Their relationship with the Shark is unique – they do not draw blood or nutrients from the Shark as a leech would. They feed on the food scraps of the larger fish by keeping their mouths open. While the Remora benefits from attaching itself to the Shark, it neither significantly benefits nor harms the Shark. Some scientists argue that Sharks like the fact that Remoras feed on the parasites attached to their skin, thereby keeping them healthy. Others are concerned about the drag experienced by Sharks as they swim deeper in the oceans, which can be significant when dozens of Remoras are attached. Neither effect is significant enough for Sharks either to welcome Remoras onto their bellies or to actively repel them (as they do with other parasites).

This relationship between the Remora and the Shark can be termed commensalism rather than mutualism. If Sharks indeed value the fact that Remoras help rid them of the parasites on their teeth and skin, then we could term the relationship mutualistic.

Remoras and platform start-ups

Why is a platform researcher studying Remoras? Because a platform start-up could solve its penguin problem using a Remora strategy: it could piggy-back on a larger platform to access its initial set of users, at no cost to the larger platform. Let’s consider an example. A dating start-up struggles to get its first set of users. While it needs rapid growth in numbers, it must ensure that the profiles on the platform are of good quality (bots, anyone?). It has two options: developing its own validation algorithm, or integrating with larger platforms like Twitter or Facebook for profile validation. (It could still create its own algorithms if it needs to validate specific criteria.) By integrating, it follows a Remora strategy, attaching itself to a larger Shark in the form of Twitter or Facebook. This costs Twitter or Facebook nothing and, if anything, contributes a marginal addition of traffic to them. For the start-up, however, it saves the significant cost of swimming down to the depths of the ocean (developing and testing its own user validation algorithms).
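The mechanics of this delegation can be sketched in a few lines of code. This is a purely illustrative toy – `SharkPlatform`, its `is_verified` method, and `RemoraDatingApp` are hypothetical stand-ins, not a real Twitter/Facebook API – showing how the start-up trusts the larger platform’s validation instead of building its own:

```python
class SharkPlatform:
    """Stand-in for a large platform's identity service (hypothetical API)."""

    def __init__(self, verified_users):
        self._verified = set(verified_users)

    def is_verified(self, user_id):
        # The Shark has already done the expensive work of weeding out bots.
        return user_id in self._verified


class RemoraDatingApp:
    """Start-up that piggy-backs on the Shark for profile validation."""

    def __init__(self, shark):
        self.shark = shark
        self.members = []

    def sign_up(self, user_id):
        # Remora strategy: delegate validation to the larger platform
        # instead of developing in-house bot-detection algorithms.
        if self.shark.is_verified(user_id):
            self.members.append(user_id)
            return True
        return False


shark = SharkPlatform(verified_users={"asha", "ravi"})
app = RemoraDatingApp(shark)
print(app.sign_up("asha"))   # True: validated for free by the larger platform
print(app.sign_up("bot42"))  # False: rejected with no in-house algorithm
```

The design choice the sketch highlights: the start-up’s sign-up logic becomes trivially simple, but its quality of membership now depends entirely on the Shark’s verification rules – which foreshadows the hold-up and data-access risks discussed later.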

Remora’s choice

Don Dodge first wrote about the Remora business model, wondering how the Remoras and the Sharks made money, if at all. Building on this, Joni Salminen elaborated on the Remora’s curse. Joni’s dissertation elaborates on two dilemmas multi-sided platforms face – cold-start and lonely-user.

The cold-start dilemma occurs when a platform dependent on user-generated content does not attract enough content in its early days to draw more users (to consume and/ or generate content). There are two issues to resolve here – attracting enough users to sustain the platform, and, in the process, balancing the numbers of content generators and content consumers.

The lonely-user dilemma occurs when a platform dependent on cross-side and same-side network effects tries to attract its first users. A subset of the penguin problem: nobody joins unless everybody joins. The platform provides no intrinsic value beyond what is generated by interactions between and among user groups.

The cold-start dilemma is typically resolved using intelligent pricing mechanisms, like subsidies for early adopters. For example, a blogging platform can attract influencers to start blogging on its site by providing them with premium services. Once it has resolved the cold-start dilemma and attracted enough users to blog and read (generate and consume), it can move to a freemium model – monetizing reading beyond a specified number of posts – while continuing to subsidise writers. The key is to identify the number of free posts after which to start charging readers: too low a number would reduce the number of readers, and high-quality writers would leave the platform; too high a number of freely available posts, and the platform may not make enough money to sustain itself.
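The trade-off in the blogging example can be made concrete with a toy model. Every number here is invented purely for illustration (the reader base, the rate at which readership decays as the paywall tightens, and the conversion rate), so only the shape of the curve matters, not the values:

```python
def monthly_revenue(free_posts, readers=10_000, price=5.0):
    """Illustrative revenue estimate for a given free-post threshold.

    Invented assumptions:
    - the share of readers still active after N free posts halves every 5 posts
      (a tighter paywall drives readers away);
    - conversion to paid grows with engagement, capped at 30%
      (readers hooked by more free content subscribe more readily).
    """
    still_reading = readers * 0.5 ** (free_posts / 5)
    conversion = min(0.02 * free_posts, 0.30)
    return still_reading * conversion * price


# Sweep thresholds: both extremes earn little (paywall everything and nobody
# converts; give everything away and nobody needs to pay), so revenue peaks
# at an interior threshold.
best = max(range(0, 31), key=monthly_revenue)
print(best, round(monthly_revenue(best), 2))
```

Under these made-up parameters the optimum falls strictly between the extremes, which is exactly the balancing act described above: the platform must locate the interior threshold, not the endpoints.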

The lonely-user dilemma is typically resolved by following a Remora strategy: by leveraging the users of a larger established platform, the first set of users can be sourced easily, en masse. However, just having users is not sufficient – there is a coordination issue of getting not just sign-ups but engagement. It is important that registered users begin engaging with the platform. Some platforms need more than engagement; they face a real-time problem – on a multi-player gaming or food-delivery platform, users need to be engaged with each other in real time. Other platforms face a transferability problem – users look for others within a specific segment, as on a hyperlocal delivery platform, a matrimony platform, or a doctor-finding platform. Such platforms need sufficient users in each of these micro-segments.

A Remora strategy could potentially help a platform start-up overcome these two major dilemmas – cold-start and lonely-user. By porting users from the larger platform, one could solve the lonely-user problem, and through tight integration with the content/ algorithms of the Shark platform, the Remora (start-up) could manage the cold-start problem.

Remora’s curse

The decision to adopt a Remora strategy is not a simple one for a platform start-up. There may be significant costs in the form of trade-offs. I can think of five significant costs that need to be weighed against the benefits of a Remora strategy: (a) hold-up risk; (b) ceding monetization control; (c) access to user data; (d) risk of brand commoditization; and (e) exit costs.

Hold-up risk: There is a significant risk of the established platform holding the start-up to ransom, arising partly from the asset-specific investments the start-up makes in order to integrate. For instance, the dating start-up would need to integrate its user validation processes tightly with those of Facebook or Twitter, as the case may be. It may have to live with whatever data Facebook provides through its APIs, and it is exposed to opportunistic behaviour when Facebook decides to change certain parameters. For example, Facebook may stop collecting marital status on its platform – a key data point for the dating start-up. Another instance of hold-up risk: Google could reset its search algorithm to favour local search over global search, affecting the start-ups that integrate with it.

To manage hold-up risks, Remora start-ups are better off minimising the asset-specific investments they make to integrate with the Shark platforms.

Monetization control: A significant risk faced by Remora start-ups is that of conceding the power to monetize to the Shark. For example, when a hyper-local restaurant discovery start-up follows a Remora strategy on Google, it is possible that Google captures all the high-value advertisements, leaving the discovery start-up with only low-value local advertisements. There is also the risk of the larger platform defining what can be monetised on the start-up’s platform. For example, given that users have gotten used to searching for free, even specialised search – locations on maps, or specialised services like emergency veterinary care during off-working hours – may not be easy to monetise. Such start-ups may have to cede to the larger platform the decisions on which side to monetise, which side to subsidise, and how much to charge.

To avoid conceding monetization control to larger platforms, Remora start-ups need to provide additional value over and above the larger platform. For instance, in the local search business, a platform start-up would possibly need to provide not just discovery value (which may not be monetizable) but matching value as well.

Access to user data: This is, in my opinion, the biggest risk of following a Remora strategy. Given that user data is the primary lever around which digital businesses customize and personalize their services and products, it is imperative that the start-up has access to its own user data. The larger platform may well restrict access to specific user data that is very valuable to the start-up. For instance, restaurant chains that could have run their own loyalty programmes for their clients may adopt a Remora strategy on top of food delivery platforms like Swiggy or Zomato. When they do, the larger platform may run a loyalty programme for those clients, based on data about the specific user that is qualitatively superior to anything the local restaurants have. In fact, in the Indian context, these delivery platforms do not even pass on basic user profiles, like demographics or addresses, to the restaurants. The restaurants are left with their limited understanding of their walk-in customers and a set of nameless/ faceless platform users, about whom they can generate no meaningful insights or even consumption patterns.

It is imperative that platform start-ups define what data they require to run their business model meaningfully, covering user data as well as operations. This could take the form of specific contracts for accessing data and insights, and/ or for co-creating analytical models.

Risk of brand commoditization: A direct corollary of the user-data risk is that the Remora start-up could be commoditized, its brand value subservient to the larger platform’s brand – it could end up as a sub-brand of the larger platform. For user generation and network mobilization, the Remora start-up would need all its potential users to affiliate with the larger platform, even if that is not the most desirable option. On a delivery platform, hungry patrons may be loyal to the aggregator and the specific cuisine rather than to a restaurant. Given that patrons can split their orders across multiple restaurants, the quality and speed of delivery may matter more than other parameters. Restaurants might then degenerate into mere “kitchens” with excess capacity – and where there is no such excess capacity, these aggregators have been known to set up “white label” or “cloud kitchens”.

It is important that Remora start-ups step up their branding efforts and ensure that the larger brand does not overshadow their own. The standard arguments on the relative brand strengths of complements in user affiliation decisions need to be taken into consideration while protecting the Remora’s brand.

Exit costs: The last of the Remora’s costs is the cost of exit. Much like the exit costs from an industry, platform start-ups need to be clear whether their Remora strategy is a temporary device for building up their user base and mobilizing their networks in the early stages, or relatively permanent. In some cases, the start-up’s core processes might be integrated with the larger platform (like the API integration for user validation) and therefore impose significant exit costs. In other cases, the start-up may have focused on the core aspects of its business during the initial years and relegated non-core but critical activities to the larger platform; when it is ready to exit, it may require large investments in those non-core activities, leading to disruptions and costs. Add to this the costs of repurposing/ rebuilding the asset-specific investments made when joining the platform.

Remora start-ups, therefore, need a clear strategy on the tenure of the Remora arrangement and the point at which they would exit the association with the larger platform, including being prepared for the costs of exit.

Scaling at speed

Remora strategies offer platform start-ups an alternative way to scale their businesses very fast. However, it is imperative to understand the benefits and costs of such strategies and make conscious choices. These choices lie at three levels – the timing of the Remora, the processes to Remora, and the flexibility to exit. Some platforms may need to attach themselves to larger platforms right at inception just to get started; others can afford to wait for their first users to start engaging before integrating. What processes to integrate with the larger platform is another critical choice – much like an outsourcing decision, core and critical processes need to be owned by the start-up, while non-core, non-critical processes may well be left to the larger platform. Across all these decisions, platform start-ups need to consciously decide the tenure and extent of integration with the larger platform, and make asset-specific investments accordingly.

Maintain social distance, leverage technology, and stay healthy!


(C) 2020. Srinivasan R

Making Artificial Intelligence (AI) work

This is a follow-up to last week's post on Moravec's Paradox in AI. In that post, I enumerated five major challenges for AI and robotics: 1) training machines to interpret languages, 2) perfecting machine-to-man communication, 3) designing social robots, 4) developing multi-functional robots, and 5) helping robots make judgments. All of that focused on what programmers need to do. In this short post, I draw out the implications for what organisations and leaders need to do to integrate AI (and for that matter, any hype-tech) into their work and lives.

Most of the hype around technologies is built around a series of gulfs – of motivation, cognition, and communication. They are surely related to each other. Let me explain them in reverse order.

Three gulfs

The first gulf is the communication gap between developers and managers. Developers know how to talk to machines. They actively codify processes and provide step-by-step instructions to machines to help them perform their tasks. Managers, especially the ones facing consumers, speak in stories and anecdotes, whereas developers need precise instructions that can be translated into pseudo-code. For instance, a customer journey to be digitalised needs to go through a variety of steps.

Let me give you an example of a firm that I worked with. A multi-brand retail outlet wanted to digitalise customer walk-ins and help guide customers to the right floor/ aisle. Sounds simple, right? The brief to the developers was to build a robot that would "replace the greeter". The development team went about building a voice-activated humanoid robot that would greet a customer as she walked in, ask her a set of standard questions (like 'what are you looking for today?') and respond with answers (like, 'we have a lot of new arrivals on the third floor'). The tests went very well, except that the developers did not realise that only a small proportion of their customers were arriving alone! When customers came as couples, families, or groups, the robot treated them as separate customers and tried responding to each of them individually. What made things worse was that the robot could not distinguish children's voices from female voices and greeted even young boys as girls/ women. The expensive project remains a toy in a corner of the reception today, witnessing the resurgence of plastic-smiling greeters. The entire problem could have been solved by a set of interactive tablets. Just because the managers asked the developers to "replace the greeter", they went about creating an over-engineered but inadequate humanoid. The reverse could also happen, where the developers focus only on the minimum features, making the entire exercise useless.
To bridge this gulf, we either train the managers to write pseudo-code, or get the developers to visualise customer journeys.

The second gulf is that of algorithmic versus creative thinking. Business development executives and strategy officers think in terms of stretch goals and focus on what is expected in the near and farther future. Developers, on the other hand, are forced to work with technologies in the realm of current possibilities. They refer to all this fuzzy language, aspirational goals, and corporatese as "gas" (to borrow a phrase from Indian business school students). The entire science and technology education at primary and secondary school is about learning algorithmic thinking. However, as managers gain experience and learn about the context, they are trained to think beyond algorithms in the name of creativity and innovation. While both creative thinking and algorithmic thinking are important, the difference accentuates the communication gap discussed above.

Algorithmic thinking is a way of getting to a solution through the clear definition of the steps needed – nothing happens by magic. Rather than coming up with a single answer to a problem, like 42, pupils develop algorithms. These are instructions or rules that, if followed precisely (whether by a person or a computer), lead to answers to both the original and similar problems[1]. Creative thinking means looking at something in a new way. It is the very definition of "thinking outside the box." Often, creativity in this sense involves what is called lateral thinking, or the ability to perceive patterns that are not obvious. Creative people have the ability to devise new ways to carry out tasks, solve problems, and meet challenges[2].
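To make the contrast concrete, here is a minimal sketch of algorithmic thinking (my own illustration; the function name and numbers are invented). The point is that the steps, followed precisely, answer not just one problem but every similar one:

```python
def sum_of_multiples(limit, divisors):
    """Sum all numbers below `limit` divisible by any of `divisors`.

    An algorithm in the sense quoted above: explicit rules that a person
    or a computer can follow to solve this problem and every similar one
    (any limit, any set of divisors) - nothing happens by magic.
    """
    total = 0
    for n in range(limit):
        # Check each number against every rule (divisor) in turn.
        if any(n % d == 0 for d in divisors):
            total += n
    return total

print(sum_of_multiples(10, [3, 5]))  # 3 + 5 + 6 + 9 = 23
```

The same instructions generalise without rework, which is precisely what distinguishes an algorithm from a one-off creative answer.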

The third gulf is that of reinforcement. Human resource professionals and machine learning experts use the same word with much the same meaning: positive reinforcement rewards desired behaviour, whereas negative reinforcement punishes undesirable behaviour. Positive and negative reinforcements are an integral part of human learning from childhood, whereas machines have to be specifically programmed for them. Managers are used to employing reinforcements in various forms to get their work done. However, artificially intelligent systems do not respond to such reinforcements (yet). Remember the greeter-robot we discussed earlier? Imagine what the robot does when people are surprised, shocked, or even startled as it starts speaking. Can we programme the robot to recognise such reactions and respond appropriately? Most developers would use algorithmic thinking to programme the robot to understand and respond to rational actions from people – not emotions, sarcasm, and figures of speech. Natural language processing (NLP) can take us some distance, but helping the machine learn continuously and cumulatively requires a lot of work.

Those who wonder what happened!

There are three kinds of people in the world – those who make things happen, those who watch things happen, and those who wonder what happened! I am not sure if this is a specific quote from a person, but when I was learning change management as an eager management student, I heard my Professor repeat it in every session. Similarly, there are some managers (and their organizations) who wonder what happened when their AI projects do not yield the required results.

Unless these three gulfs are bridged, organizations cannot reap adequate returns on their AI investments. Organizations need to build appropriate cultures and processes that bridge these gulfs. It is imperative that leaders invest in understanding the potential and limitations of AI, and that developers appreciate business realities. How this would happen, and when these gulfs could be bridged, if at all, remains to be seen.

Comments and experiences welcome.


© 2019. R Srinivasan, IIM Bangalore.

[1] https://teachinglondoncomputing.org/resources/developing-computational-thinking/algorithmic-thinking/

[2] https://www.thebalancecareers.com/creative-thinking-definition-with-examples-2063744

Moravec’s Paradox in Artificial Intelligence: Implications for the future of work and skills

What is artificial in AI?

As the four-day long weekend loomed and I was closing an executive education programme focused on digitalization and technology, especially in the context of India and emerging economies, I read this piece on AI ethics by IIMB alumnus Dayasindhu. He talks about the differences between teleological and deontological perspectives on AI and ethics. It got me thinking about technological unemployment (unemployment caused by firms' adoption of technologies such as AI and robotics). For those of you interested in a little bit of history, read this piece (also by Dayasindhu) on how India (especially the Indian banking industry) adopted technology.

In my classes on digital transformation, I introduce the potential of Artificial Intelligence (AI) and its implications for work and skills. My students (in India and Germany) and Executive Education participants would remember these discussions. One of my favourite conversations has been about what kinds of jobs will get disrupted thanks to AI and robotics. I argue that, contrary to popular wisdom, we will have robots washing our clothes much earlier than robots folding the laundry. While washing clothes is a simple operation (for robots), folding laundry requires a very complex calculation involving the identification of different clothes of irregular shapes, fabric, and weight (read more here). Moreover, most robots we have are single-use – made for a specific purpose – as compared to a human arm, which is truly multi-purpose (read more here). Yes, there have been great advancements on these two fronts, but the challenge remains – AI has progressed far in certain skills that seem very complex for humans, whereas robots struggle to perform certain tasks that seem very easy for humans, like riding a bicycle (which a four-year-old child can do with relative ease). The explanation lies in Moravec's Paradox. Hans Moravec and others articulated this in the 1980s!

What is Moravec’s Paradox?

“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

Moravec, 1988

It is very difficult to reverse engineer certain human skills that are unconscious. It is easier to reverse engineer motor processes (think factory automation), cognitive skills (think big data analytics), or routinised computations (think predictive/ prescriptive algorithms).

“In general, we’re less aware of what our minds do best…. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.

Minsky, 1986

Moravec's paradox proposes that this distinction has its roots in evolution. As a species, we have spent millions of years in the selection, mutation, and retention of specific skills that have allowed us to survive and succeed in this world. Some examples of such skills include learning a language, sensory-motor skills like riding a bicycle, and drawing basic art.

What are the challenges?

Based on my reading and experience, I envisage five major challenges for AI and robotics in the days to come.

One, artificially intelligent machines need to be trained to learn languages. Yes, there have been great advances in natural language processing (NLP) that have contributed to voice recognition and responses. However, there are still gaps in how machines interpret sarcasm and figures of speech. A couple of years ago, a man tweeted to an airline about his misplaced luggage in a sarcastic tone, and the customer service bot responded with thanks, much to the amusement of many social media users. NLP involves the ability to read, decipher, understand, and make sense of natural language. Codifying the grammar of complex languages like English, accentuated by differences in accent, can make deciphering spoken language difficult for machines. Add to that contextually significant figures of speech and idioms – what do you expect computers to understand when you say, "the old man down the street kicked the bucket"?

Two, beyond language, machine-to-man communication is tricky. We have "pick-and-place" robots in industrial contexts; can we have "give-and-take" robots in customer service settings? Imagine a food-serving robot in a fine dining restaurant – how do we train the robot to read moods and suggest the right cuisine and music to suit the occasion? Most of the robots we have as I write this exhibit puppy-like behaviour, a far cry from naturally intelligent human beings. Humans need friendliness, understanding, and empathy in their social interactions, which are very complex to programme.

Three, there have been a lot of advances in environmental awareness and responses. Self-navigation and communication have significantly improved thanks to technologies like Simultaneous Localisation and Mapping (SLAM), and we are able to visually and sensorily improve augmented reality (AR) experiences. Still, having human beings in the midst of a robot swarm is fraught with a variety of risks. Not only do different robots need to sense and respond to the location and movement of other robots, they also need to respond to the "unpredictable" movements and responses of humans. When presented with danger, different humans respond differently based on their psychologies and personalities, most often shaped by a series of prior experiences and perceived self-efficacies. Robots still find it difficult to sense, characterise, and respond to such interactions. Today's social robots are designed for short interactions with humans, not for learning the social and moral norms that lead to sustained long-term relationships.

Four, developing multi-functional robots that can reason. Reasoning is the ability to interpret something in a logical way in order to form a conclusion or judgment. For instance, it is easy for a robot to pick up a screwdriver from a bin, but quite something else to pick it up in the right orientation and use it appropriately. It needs to be programmed to realise when the tool is held in the wrong orientation and to self-correct to the right orientation for optimal use.

Five, even when we can train a robot with a variety of sensors to develop logical reasoning through detailed pattern evaluations and algorithms, it would be difficult to train it to make judgments – for instance, to make out what is good or evil. Check out MIT's Moral Machine here. Apart from developing morality in the machine, how can we programme it to be not just consistent in behaviour, but also fair, using appropriate criteria for decision-making? Imagine a table-cleaning robot that knows where to leave the cloth when someone rings the doorbell. It needs to be programmed to understand when to stop one activity and when to start another. Given the variety of contexts humans engage with on a daily basis, what they learn naturally will surely take complex programming.

Data privacy, security and accountability

Add to all these the issues around data privacy and security. Given that we need to provide robots and AI systems with enough data about humans, and that we have limited ability to programme the system, privacy is critical. Consent is the key word in privacy, but when we are driving in the midst of autonomous vehicles (AVs), there is so much data the AV collects to navigate that we need strong governance and accountability. When an AV is involved in an accident with a pedestrian, who is accountable – the emergency driver in the AV; the programmer of the AV; the manufacturer of the vehicle; any of the hardware manufacturers, like the makers of the cameras/ sensors that did not do their jobs properly; or the cloud service provider that did not respond soon enough for the AV to save lives? Such questions are pertinent and too important to relegate to a later date, post facto, when accidents occur.

AI induced technological unemployment

At the end of all these conversations, when I look around me, I see three kinds of jobs being lost to technological change: a) low-end white-collar jobs, like accountants and clerks; b) low-skilled data analysis jobs, like those of a pathology lab technician interpreting a clinical report or a law apprentice doing contract reviews; and c) hazardous-monotonous or random-exceptional work, like monitoring a volcano's activity or seismic measurements for earthquakes.

Traditional blue-collar jobs like factory workers, bus conductors/ drivers, repair and refurbishment mechanics, capital machinery installation, agricultural field workers, and housekeeping staff would take a long time to be lost to AI/ robotics – primarily because these jobs are heavily unpredictable, secondly because they involve significant judgment and reasoning, and thirdly because the costs of automating them would possibly far outweigh the benefits (due to low labour costs and high coordination costs). Not all blue-collar jobs are safe, though. Take, for instance, staff at warehouses – with pick-and-place robots, automatic forklifts, and technologies like RFID sensors, a lot of jobs could be lost. Especially where quick response is the source of competitive advantage in warehousing operations, automation will greatly reduce errors and increase the reliability of operations.

As Byron Reese wrote in his book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, "order takers at fast food places may be replaced by machines, but the people who clean-up the restaurant at night won't be. The jobs that automation affects will be spread throughout the wage spectrum."

In summary, to understand the nature and quantity of technological unemployment (job losses due to AI and robotics), we need to ask three questions. Is the task codifiable? (on average, tasks that the human race has learnt in the past few decades are the easiest to codify). Is it possible to reverse-engineer it? (can we break the task into smaller tasks?). And does the task lend itself to a series of decision rules? (can we create a comprehensive set of decision rules that could be programmed into neat decision trees or matrices?). If you answered in the affirmative to all three questions with reference to your job/ task, go learn something new and look for another job!
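The three questions above lend themselves, a little self-referentially, to a decision rule. Here is a toy sketch (my own illustration; the function name, labels, and thresholds are invented, not a validated model):

```python
def automation_risk(codifiable, decomposable, rule_based):
    """Toy checklist for the three questions in the text:
    is the task codifiable, can it be broken into smaller tasks,
    and does it lend itself to a series of decision rules?
    """
    yes_count = sum([codifiable, decomposable, rule_based])
    if yes_count == 3:
        return "high risk - go learn something new"
    elif yes_count == 2:
        return "moderate risk"
    return "low risk (for now)"

# A warehouse picking task: codifiable, decomposable, and rule-based.
print(automation_risk(True, True, True))
# A housekeeping job: hard to codify, hard to decompose, few clean rules.
print(automation_risk(False, False, False))
```

The irony, of course, is that any job whose risk can be scored this neatly has already answered the first question in the affirmative.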


© 2019. R Srinivasan, IIM Bangalore

Facebook did not “steal” my data: I actually work for Facebook and produced the data!

On one (more) of my flights, I had a conversation with a co-passenger. A young adult – the kind a lot of us would be so excited to label a "millennial" – she was excited about the recent announcement that airlines in India can offer in-flight internet connectivity. Even as the flight was on the active runway, she was on her mobile phone (ignoring the requests of the flight attendants), taking pictures of herself in the plane and posting them on Facebook. She was on her way to attend a friend's graduation, and everyone who was travelling in was marking their presence. As the network died, she turned around to me and asked, "so, how long will it be before the airline companies implement internet connectivity in-flight?". I explained how difficult it would be for existing planes to be retrofitted with such capabilities, though all new planes could be inducted with the facility. As she realised it was going to be some time, she sighed, put on her headphones, and went off to doze.

It set me thinking about my previous post, "Facebook stole my data". By calling that data mine (belonging to me), I was actually falling for the "personal data as a mine" metaphor – the idea that data is a resource much like a natural resource, there for tech firms to extract, as if it were fossilised like oil or coal, or formed over a long period of time, like alluvial soil. If that were so, then governments should regulate it well. Governments should therefore own all this data and possibly license these extractors with specific mandates to provide unique value to the users (citizens), and such resources would remain in the realm of a public good.

The trouble with this metaphor is that the data did not exist in the first place – I created that data, and continue to create it. Nicholas Carr, in his blog, aptly wrote: I am a data factory (and so are you). He debunks the mining metaphor and says that just by living – walking, interacting with friends, buying, reading, and just lazily watching something on the internet – I create data. And not just any data in any form, but data in the form that Facebook wants. This is a little more nuanced than what I argued in my previous post. I argued that Facebook created the data; Carr argues that I create that data, working for Facebook, in a form and design that Facebook wants. Do you see yourself as working for tech companies like Amazon, Facebook, Apple, Google, and Netflix, not for a salary/ remuneration, but as a data factory? And worse, the data that you produce being used to "enrich" someone in another corner of the world, in the name of providing you with a meaningful experience?

This metaphor places the ownership of the design of the data, and not just the data itself, with the tech companies. And in making this addictive (recall how the first line of your Facebook homepage starts with "write something", possibly in your mother tongue!), tech companies like Facebook are trying to engineer our lives. Really? Does Facebook tell me what to do? Or does Facebook want to know what (everything, whatever) I do? I tend to believe the latter.

Here is where the difference between use and addiction begins. When you use, you benefit; when you are addicted, they benefit. Much like using sinful products (like tobacco and alcohol), handling dangerous products (like fire and guns), and managing borderline behaviour (like assertiveness becoming aggressiveness), we should possibly educate ourselves about responsible use.

Responsible use of Facebook! That sounds not-so-fun. The flight was about to land, and the flight attendant was waking up the lady seated next to me to put her seat in the upright position. She woke up excited and looked outside to see if we had landed … and as soon as we landed, she opened her Facebook app and went on to write something. I apologised for snooping into her behaviour, but she said she was fine with it. She was comfortable giving away her behavioural/ affiliative/ preferential/ locational data to some tech company server; she was not one bit uncomfortable sharing her behavioural data with me. I told her that I would write about her Facebook addiction as we were getting off the plane, and she said that would be "cool", gave me her email id, and asked me to email the piece to her (which I will promptly do). She would possibly post it on her Facebook wall too. So much for addiction.

So much for mid-week reading.


(C) 2018. R Srinivasan


Facebook “stole” my data?!

Writing in after a really long time. Waiting for a flight to take off, I was reading this nice piece by Alex Tabarrok, aptly titled, The Facebook Trials: It's not "our" data. He argues that before Facebook, this data did not exist, and that Facebook "created" that data in the first place. So, it has every right to use that data – since it uncovered your relationships and preferences (some of which you did not even know yourself).

That is not the primary concern – that Facebook used the data to target advertisements at me. What is immoral about this entire episode is that Facebook allowed third parties to use the data in ways neither you intended nor Facebook anticipated. When I engage with a technology platform like Facebook or Google, I implicitly educate the platform, much the same way a discerning store manager would understand my preferences as I walked through the store (okay, armed with more storage and processing power than the human store manager). The principle is the same.

What is scary is that the data was used to "manipulate", not just "advertise to/ communicate with", users. When the data user treated me as a consumer and customized solutions for me (including recommendations), I liked it, appreciated it, and enjoyed it. But when it feeds me specific, filtered, non-commercial information (like political news), that is where the abuse begins.

I guess, therefore, I would not delete my Facebook account anytime soon (not that I use it regularly); but I would keep my eyes and ears open on what these trials lead to, and what privacy checks these technology platforms commit to in various countries. Prof. Nirmalya Kumar argues why he found Facebook and its CEO wanting in their apologies and testimonies, and why he quit Facebook, here. I would, however, be looking out for inconsistencies in such commitments across countries with differing privacy and data management laws, much like how the packaged food industry leverages such differences to sell unhealthy food to consumers in countries with lenient laws.


(C) 2018. Srinivasan R

Predatory pricing in multi-sided platforms

Over the last few days, living and commuting in Nuremberg, I realised that I was not missing Uber. Just about a month ago, commuting for a week in suburban Paris, I was completely dependent on Uber for even the shortest of distances. The penetration of public transport and my familiarity with the city of Nuremberg aside, I began wondering how Uber would price its services in a city like Nuremberg (when it enters here, which I very much doubt will happen in the next few years), where public transport is omnipresent, efficient, and affordable. It would surely have to adopt predatory pricing.

In this post, I will elaborate on the concept of predatory pricing in the context of multi-sided platforms.

Theory alert! If you are uncomfortable with theory, skip directly to the illustration and come back to read the theory.

What is predatory pricing?

Economists and policy makers concerned about market efficiency and fair competition have been occupied with the concept of predatory pricing for a long time. The most common definition of predatory pricing comes from the application of the conventional Areeda-Turner test. Published way back in 1975, and in spite of its limitations, it has been used consistently by most countries and courts, in some ways for the lack of any credible alternative.

The Areeda-Turner test is based on two basic premises. The recoupment premise states that the firm indulging in predatory pricing should be able to predict and be confident of its ability to recoup the losses through higher profits as competition exits the market. The assumption is that the firm could reasonably anticipate the (opportunity) costs of predatory pricing, as well as have an estimate of the future value of monopoly profits; and the net present value of such predatory pricing to push competition out of the market should be positive and attractive. In plain English, the firm should be able to project the effect of lower prices in terms of lower competition and higher profits in the future.

How low can this predatory price be? That is the subject of the second premise – the AVC premise. The firm’s prices (at business as usual volumes) should be below its average variable costs (AVC), or marginal costs in the short run. If the prices were indeed above the AVC, the firm would argue that they are indeed more efficient than competition, due to any of their resources, processes, or organisational arrangements. It is when the price falls below the AVC that the question of unfair competition arises – the firm might be subsidising its losses.

Take for instance, a start-up that is piloting an innovative technology. It may price its products/ services at a price below the AVC to gain valuable feedback from its lead users, but in the absence of a recoupment premise such pricing might not qualify as predatory pricing. On the other hand, imagine a new entrant with superior technology who can bring costs down to a level where the prices fall below the marginal costs of the competitors but stay well above the firm’s AVC, it is just disrupting the market.

Only when both conditions are met – when the predator's prices are below the AVC and the firm can project the extent of recoupment through monopoly profits as competition exits the market – do we call it predatory pricing.
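The two premises can be summarised as a stylised check. Here is a sketch of my own (the function, parameter names, and numbers are illustrative, not a legal test):

```python
def is_predatory(price, avg_variable_cost, npv_of_recoupment):
    """Stylised Areeda-Turner check, as described in the text.

    AVC premise: the price must be below average variable cost.
    Recoupment premise: the net present value of future monopoly
    profits, net of the losses incurred while pricing low, must be
    positive. Both premises must hold.
    """
    below_avc = price < avg_variable_cost
    can_recoup = npv_of_recoupment > 0
    return below_avc and can_recoup

# A start-up pricing below AVC to gather lead-user feedback, with no
# credible path to monopoly profits, fails the recoupment premise:
print(is_predatory(price=40, avg_variable_cost=50, npv_of_recoupment=-10))  # False

# A disruptor whose superior technology keeps price above its own AVC
# fails the AVC premise, however much competitors suffer:
print(is_predatory(price=60, avg_variable_cost=50, npv_of_recoupment=10))  # False
```

Only the conjunction of the two premises, as in the text, flags predation; either one alone describes ordinary (or merely aggressive) competition.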

Predatory pricing in MSPs

There has been a lot of discussion about how ecommerce firms in India have been indulging in predatory pricing and how various platforms have been going under. I had written about subsidies and freebies from a consumer perspective a few months ago (Free… continue hoyenga). Let us discuss how and why it is difficult to assess if a lower-than-competition price is indeed predatory in the context of multi-sided platforms (MSPs).

  1. Multi-sided platforms have a unique problem to solve in their early days: network mobilisation. A chicken-and-egg situation, or a Penguin problem, where "nobody joins unless everyone joins", is prevalent when establishing a two-sided or multi-sided platform (for more on the Penguin problem and network mobilisation strategies, read my earlier post here). In order to build a sufficient user base on one side, a common strategy is to subsidise, or even provide the services free.
  2. Another common feature of MSPs is the existence of subsidy sides and money sides of users. The platform might subsidise one side of users and make money from the other, while incurring costs of providing services to both sides, depending on the relative price elasticities and each side's willingness to affiliate with the other. The prices for the subsidy side would surely be below the costs of serving that side. It is therefore imperative that the overall costs and prices are considered while analysing these pricing strategies.
  3. Cross-side network effects force platforms to price their services efficiently across both sides. Even on the money side, the platform might not be able to charge extraordinary prices, as such prices would themselves act against the sustenance of these cross-side network effects. It is likely that any extra-normal profits would evaporate through subsidies on the other side, paid to keep the network effects active. Imagine a B2B marketplace that charged sellers higher-than-normal prices: only large (and desperate) sellers would affiliate with the marketplace, leading buyers (the subsidy side) to leave the platform. To keep the buyers interested, the marketplace would either have to broaden the base of sellers by optimising prices, or provide extraordinary subsidies to the buyers. So, to maintain the equilibrium, the platform has to price both sides efficiently.
  4. Finally, in a competitive situation, not all competitors might follow the same price structure. A reduction in prices by one competitor for one side of the market may not force all other competitors to reduce prices; they may instead encourage multi-homing (users using competitive products simultaneously) or adjust the price on the other side.

So, a direct application of the Areeda-Turner test might not be appropriate while studying predatory pricing in the context of MSPs.
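To see why reading one side's price in isolation can mislead, here is a toy sketch (all names and numbers invented) contrasting a naive single-side AVC test with a platform-level test that aggregates both sides, as the points above suggest:

```python
def naive_side_test(price, avc):
    """Single-side AVC test: by design, flags any subsidy side as predatory."""
    return price < avc

def platform_level_test(prices, avcs):
    """Aggregate across sides: compare the platform's total price level
    against its total cost of serving both sides."""
    return sum(prices.values()) < sum(avcs.values())

# A hypothetical two-sided platform: one side fully subsidised,
# the other side monetised.
prices = {"subsidy_side": 0.0, "money_side": 12.0}
avcs   = {"subsidy_side": 4.0, "money_side": 5.0}

print(naive_side_test(prices["subsidy_side"], avcs["subsidy_side"]))  # True: looks predatory
print(platform_level_test(prices, avcs))  # False: overall prices cover overall costs
```

The subsidy side fails the single-side test by construction, yet the platform as a whole more than covers its costs, which is exactly why a mechanical application of the Areeda-Turner test to one side of an MSP can misfire.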

An illustration

Let us imagine a market for home tutors supporting school students. The market is inherently geographically constrained; it is very unlikely that either the teacher or the student would travel across cities for this purpose. For the time being, let us assume that there is no technology (like video conferencing) being used.

This market is apt for the entry of a multi-sided platform, like LocalTutor. This firm provides a platform for the discovery and matching of freelance tutors with students. LocalTutor monetises the student side by charging a monthly fee (that includes a platform commission), and passes on the fees to the tutor. We need to make two assumptions before we proceed with competitive entry and predatory pricing: the market is fully penetrated (all the students looking for tutors and all the tutors looking for students are already in the market), with no new students or tutors entering; and there are no special preferences within student-tutor matches, i.e., the student-tutor pair does not form a bond like a sportsperson-coach pair that begins working as a team. In other words, the tutor is seamlessly (with no loss of efficiency) replaceable.

Now imagine a new competitor enters the market and engages in predatory pricing to kick in network effects. The new entrant, let's call it GlobalTutor (a fictitious name), drops the student-side prices to half. To attract the right number and quality of tutors, GlobalTutor has to match the fees that LocalTutor pays its tutors, if not better them. So, it starts dipping into its capital reserves, paying the tutors market rates while reducing the student fees. Anticipating a larger surge in student numbers, more tutors sign up with GlobalTutor; and seeing the number and quality of tutors on GlobalTutor (at least not inferior to LocalTutor's), students first start multi-homing (using both services for different needs, like LocalTutor for mathematics and GlobalTutor for music classes), and some of them begin switching.

In a fully penetrated market, the only way for LocalTutor to compete is to respond with its price structure. It has two options: reduce the student-side prices to restrain switching and multi-homing behaviour, or tweak the tutor-side prices and incentives. The first option is straightforward; it is margin-eroding and profit-reducing. The second option (which is not available to pipeline businesses) is interesting in the context of platform businesses.

There are various ways of responding to this threat. The intent is to arrest the switching and multi-homing of tutors and students from LocalTutor to GlobalTutor.

  1. Increasing multi-homing costs of tutors by providing them with incentives based on exclusivity/volume: This is what Uber/Ola do for their drivers; the incentives kick in at a volume the company believes a driver can reach only by not multi-homing. In other words, if you multi-homed and drove your car for both Ola and Uber, you would never reach the volumes required to earn the incentives on either platform.
  2. Contractual exclusion: This might not be tenable in most courts of law if these freelance tutors are not 'employees'. Given the tone of most courts on Uber's relations with its driver-partners (drivers lack control over most transaction decisions, including choice of destination, pricing, and passenger), any such contracting would imply that the tutors are employees, and that would significantly increase the platform's costs (paying for employee benefits is always more expensive than outsourcing to independent service providers).
  3. Increase contract tenure: LocalTutor may increase multi-homing and switching costs by increasing the tenure of the contract from monthly to annual. Annual contracting will reduce the flexibility that students and tutors have, and might result in a reduction in volume.
  4. Relax the restraining assumptions: The remaining options for LocalTutor are to work on the two assumptions we made at the beginning, full penetration and seamless replaceability. LocalTutor might add more students and tutors and expand the market by providing unique and differentiated services like art and craft classes, or preparation for science Olympiads and other competitive tests. LocalTutor might also communicate the value of student-tutor teaming in its success stories, in a bid to disincentivise switching and multi-homing.
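Option 1 above deserves a closer look, since the threshold design is what makes it work. The sketch below uses hypothetical numbers (capacity, threshold, fees are all assumed): the bonus threshold is set near a tutor's full single-platform capacity, so a tutor who splits sessions across two platforms forfeits the bonus on both.

```python
# Hypothetical illustration of an exclusivity/volume incentive:
# the bonus threshold sits near a tutor's total weekly capacity,
# so multi-homing makes the bonus unreachable on either platform.

WEEKLY_CAPACITY = 30    # max sessions a tutor can teach per week (assumed)
BONUS_THRESHOLD = 25    # sessions needed on ONE platform to earn the bonus
BONUS = 2000            # flat weekly bonus for crossing the threshold
FEE_PER_SESSION = 400   # tutor's base fee per session

def weekly_earnings(sessions_here, sessions_elsewhere=0):
    """Earnings on this platform; total sessions are capped by capacity."""
    sessions_here = min(sessions_here, WEEKLY_CAPACITY - sessions_elsewhere)
    base = sessions_here * FEE_PER_SESSION
    bonus = BONUS if sessions_here >= BONUS_THRESHOLD else 0
    return base + bonus

exclusive = weekly_earnings(30)                            # all sessions here
split = weekly_earnings(15, 15) + weekly_earnings(15, 15)  # 15 on each platform
print(exclusive, split)  # 14000 vs 12000: multi-homing forfeits both bonuses
```

Under these assumed numbers, exclusivity earns the tutor 14,000 a week while an even split earns 12,000, even though the same 30 sessions were taught. The multi-homing cost is the forgone bonus, exactly as in the Ola/Uber driver incentive schemes the text describes.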

To predate or not to predate is not the question

Given the differences between pipeline and platform businesses, new entrants seeking to mobilise network effects have little option but to resort to predatory pricing. The choice is not if, but how. And as an incumbent, should you be prepared for a new entrant who resorts to predatory pricing? Surely, yes! How? By being ready to expand the market and to increase switching and multi-homing costs. Unlike the tutoring business, which is inherently geographically constrained, many businesses can span markets. Even tutoring could leverage technology to reach a global audience.

Just one comforting thought: predatory pricing as a strategy to eliminate competition is inefficient in the long run. The new entrant might adopt predatory pricing to eliminate competition in the short run, but the act of predatory pricing breaks down most barriers to entry and signals to others that the market is easy to enter. It might attract an even more highly capitalised competitor who enters with the same strategy, making the market a 'contestable market'. And no one makes a fortune in a contestable market, right? More on competing in contestable markets, subsequently.


© 2017. R Srinivasan

