Moravec’s Paradox in Artificial Intelligence: Implications for the future of work and skills

What is artificial in AI?

As the four-day long weekend loomed and I was closing an executive education programme focused on digitalization and technology, especially in the context of India and emerging economies, I read this piece on AI ethics by IIMB alumnus Dayasindhu. He talks about the differences between teleological and deontological perspectives on AI and ethics. It got me thinking about technological unemployment (unemployment caused by firms’ adoption of technologies such as AI and robotics). For those of you interested in a little bit of history, read this piece (also by Dayasindhu) on how India (especially the Indian banking industry) adopted technology.

In my classes on digital transformation, I introduce the potential of Artificial Intelligence (AI) and its implications for work and skills. My students (in India and Germany) and Executive Education participants would remember these discussions. One of my favourite conversations has been about what kinds of jobs will get disrupted thanks to AI and robotics. I argue that, contrary to popular wisdom, we would have robots washing our clothes much earlier than robots folding the laundry. While washing clothes is a simple operation (for robots), folding laundry requires a very complex calculation involving the identification of different clothes of irregular shapes, fabrics, and weights (read more here). And most robots we have are single-use – made for a specific purpose – as compared to a human arm, which is truly multi-purpose (read more here). Yes, there have been great advancements on these two fronts, but the challenge remains: AI has progressed far more in certain skills that seem very complex for humans, whereas robots struggle to perform certain tasks that seem very easy to humans, like riding a bicycle (which a four-year-old child can do with relative ease). The explanation lies in Moravec’s Paradox. Hans Moravec and others articulated this in the 1980s!

What is Moravec’s Paradox?

“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

Moravec, 1988

It is very difficult to reverse engineer human skills that are unconscious. It is much easier to reverse engineer conscious, deliberate processes: routinised motor tasks (think factory automation), analytical cognitive skills (think big data analytics), or routinised computations (think predictive/ prescriptive algorithms).

“In general, we’re less aware of what our minds do best…. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.

Minsky, 1986

Moravec’s paradox proposes that this distinction has its roots in evolution. As a species, we have spent millions of years in the selection, mutation, and retention of specific skills that have allowed us to survive and succeed in this world. Some examples of such skills include learning a language, sensory-motor skills like riding a bicycle, and drawing basic art.

What are the challenges?

Based on my reading and experience, I envisage five major challenges for AI and robotics in the days to come.

One, artificially intelligent machines need to be trained to learn languages. Yes, there have been great advances in natural language processing (NLP) that have contributed to voice recognition and responses. However, there are still gaps in how machines interpret sarcasm and figures of speech. A couple of years ago, a man tweeted to an airline about his misplaced luggage in a sarcastic tone, and the customer service bot responded with thanks, much to the amusement of many social media users. NLP involves the ability to read, decipher, understand, and make sense of natural language. Codifying grammar in complex languages like English, accentuated by differences in accent, can make deciphering spoken language difficult for machines. Add to that contextually significant figures of speech and idioms – what do you expect computers to understand when you say, “the old man down the street kicked the bucket”?
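To see why sarcasm trips up such bots, consider a toy bag-of-words sentiment scorer (a deliberately naive sketch; the word lists and the tweet are my own illustrative assumptions, not any airline’s actual system):

```python
# Toy bag-of-words sentiment scorer. A literal-minded bot counts
# "positive" and "negative" words and misses the sarcasm entirely.

POSITIVE = {"thanks", "great", "love", "awesome"}
NEGATIVE = {"lost", "broken", "terrible", "angry"}

def naive_sentiment(text: str) -> str:
    # Strip basic punctuation and compare word-by-word against the lists.
    words = text.lower().replace("!", "").replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweet = "Thanks a lot for losing my luggage. Great service!"
print(naive_sentiment(tweet))  # "positive" – it sees "thanks" and "great"
```

A scorer like this would cheerfully thank the sarcastic customer, exactly as the bot in the anecdote did.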

Two, beyond language, machine-to-human interaction is tricky. We have “pick-and-place” robots in industrial contexts; can we have “give-and-take” robots in customer service settings? Imagine a food-serving robot in a fine dining restaurant … how do we train the robot to read diners’ moods and suggest the right cuisine and music to suit the occasion? Most of the robots we have as I write this exhibit puppy-like behaviour, a far cry from naturally intelligent human beings. Humans need friendliness, understanding, and empathy in their social interactions, which are very complex to programme.

Three, there have been a lot of advances in environmental awareness and responses. Self-navigation and communication have significantly improved thanks to technologies like Simultaneous Localisation and Mapping (SLAM), and we are able to visually and sensorily improve augmented reality (AR) experiences. Still, having human beings in the midst of a robot swarm is fraught with a variety of risks. Not only do robots need to sense and respond to the location and movement of other robots; they also need to respond to “unpredictable” movements and responses of humans. When presented with a danger, different humans respond differently based on their psychologies and personalities, most often shaped by a series of prior experiences and perceived self-efficacies. Robots still find it difficult to sense, characterise, and respond to such interactions. Today’s social robots are designed for short interactions with humans, not for learning the social and moral norms that sustain long-term relationships.

Four, developing multi-functional robots that can reason. Reasoning is the ability to interpret something in a logical way in order to form a conclusion or judgment. For instance, it is easy for a robot to pick up a screwdriver from a bin, but quite something else for it to pick the screwdriver up in the right orientation and use it appropriately. It needs to be programmed to realise when the tool is held in the wrong orientation and to self-correct to the right orientation for optimal use.

Five, even when we can train a robot with a variety of sensors to develop logical reasoning through detailed pattern evaluations and algorithms, it would be difficult to train it to make judgments – for instance, to decide what is good or evil. Check out MIT’s Moral Machine here. Apart from developing morality in the machine, how can we programme it to be not just consistent in behaviour, but also fair, using appropriate criteria for decision-making? Imagine a table-cleaning robot that knows where to leave the cloth when someone rings the doorbell. It needs to be programmed to understand when to stop one activity and when to start another. Given the variety of contexts humans engage with on a daily basis, what they learn naturally will surely take complex programming to replicate.

Data privacy, security and accountability

Add to all these the issues around data privacy and security. Given that we need to provide robots and AI systems with enough data about humans, and that we have limited ability to programme the system, issues of privacy are critical. Consent is the key word in privacy; but when we are driving in the midst of autonomous vehicles (AVs), which collect enormous amounts of data to navigate, we need strong governance and accountability. When an AV is involved in an accident with a pedestrian, who is accountable – the emergency driver in the AV; the programmer of the AV; the manufacturer of the vehicle; any of the hardware manufacturers, like the cameras/ sensors that did not do their jobs properly; or the cloud service provider that did not respond soon enough for the AV to save lives? Such questions are pertinent, and too important to relegate to post facto resolution after accidents occur.

AI induced technological unemployment

At the end of all these conversations, when I look around me, I see three kinds of jobs being lost to technological change: a) low-end white-collar jobs, like accountants and clerks; b) low-skilled data analysis, like a technician at a pathology lab interpreting a clinical report or a law apprentice doing contract reviews; and c) hazardous-monotonous or random-exceptional work, like monitoring a volcano’s activity or seismic measurements for earthquakes.

Traditional blue-collar jobs like factory workers, bus conductors/ drivers, repair and refurbishment mechanics, capital machinery installation, agricultural field workers, and housekeeping staff would take a long time to be lost to AI/ robotics: primarily because these jobs are heavily unpredictable; secondly because they involve significant judgment and reasoning; and thirdly because the costs of automating them would possibly far outweigh the benefits (due to low labor costs and high coordination costs). Not all blue-collar jobs are safe, though. Take for instance staff at warehouses – with pick-and-place robots, automatic forklifts, and technologies like RFID sensors, a lot of jobs could be lost. Especially when quick response is the source of competitive advantage in warehousing operations, automation will greatly reduce errors and increase reliability of operations.

As Byron Reese wrote in the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, “order takers at fast food places may be replaced by machines, but the people who clean-up the restaurant at night won’t be. The jobs that automation affects will be spread throughout the wage spectrum.”

In summary, in order to understand the nature and quantity of technological unemployment (job losses due to AI and robotics), we need to ask three questions: Is the task codifiable? (On average, the tasks that the human race has learnt in the past few decades are the easiest to codify.) Is it possible to reverse-engineer it? (Can we break the task into smaller sub-tasks?) And does the task lend itself to a series of decision rules? (Can we create a comprehensive set of decision rules that could be programmed into neat decision trees or matrices?) If you answered in the affirmative to these questions with reference to your job/ task, go learn something new and look for another job!
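For illustration, the three questions can themselves be lined up as a toy set of decision rules (the task names and yes/no answers below are my own illustrative assumptions, not research findings):

```python
# Toy sketch: scoring a task's automatability on the three questions.
# The tasks and the answers assigned to them are illustrative only.

def automation_risk(codifiable: bool, reverse_engineerable: bool,
                    rule_based: bool) -> str:
    """Return a coarse risk label from the three yes/no questions."""
    score = sum([codifiable, reverse_engineerable, rule_based])
    if score == 3:
        return "high"      # all three 'yes': go learn something new!
    if score == 2:
        return "medium"
    return "low"

tasks = {
    "contract review":   (True, True, True),
    "warehouse picking": (True, True, False),
    "folding laundry":   (False, False, False),
}

for task, answers in tasks.items():
    print(f"{task}: {automation_risk(*answers)}")
```

The point of the sketch is simply that the three questions compose: a task has to clear all of them before a “neat decision tree” becomes plausible.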

Cheers.

© 2019. R Srinivasan, IIM Bangalore

Learning from failures

The recent suicide of Mr. V G Siddharth, the celebrated founder of Café Coffee Day, prompted me to reflect on how individuals and organisations think about failure (read what he wrote in his note to the board, here). While in our classes on innovation we keep harping on why we should learn from failure, I have not yet had an opportunity to dwell on the “how” question. Here are my thoughts on how firms can learn from failures.

One of my colleagues at IIMB introduced me to the work of Prof. Amy Edmondson, especially on psychological safety. While reading about psychological safety, I came across her work on three types of failure, specifically in the popular HBR article titled “Strategies for Learning from Failure”.

Types of failures

She elucidates three types of failure – preventable failures, complex failures, and intellectual failures. Preventable failures occur when one had the ability and knowledge to prevent them from happening. Making silly mistakes (in a test) that you could have avoided had you spent some time on review; deviating from manufacturing/ service processes due to laxity or laziness; and taking some things for granted, like jumping a traffic signal in the middle of the night, are examples of preventable failures. Such failures are clearly attributable to the individual, and therefore she/ he should be held accountable. One way of managing such preventable failures is to define and follow checklists and processes; establish clear lines of supervision and approvals; and conduct a series of intermediate reviews at predefined critical junctures.

Complex failures happen in spite of having processes and routines. They arise from failures at a variety of points, including internal and external factors, that individually might not cause failure but, when occurring together, may. Agricultural (crop) failures, business (startup) failures, or even industrial accidents like the Bhopal gas tragedy and the Fukushima disaster are examples of complex failures. Such failures are difficult to predict, as the combination of problems may not have occurred before. Fixing overarching accountability for such failures is futile, and these can be considered unavoidable failures. It is these kinds of failures that provide fertile sources of learning to firms. Firms need to be prepared to review such failures, dissect the individual factors, and establish robust governance processes so as to (a) sense such systemic problems when they occur at the individual factor level; (b) erect early warning signals for alerting/ educating the organisation about escalations of these problems into failures; and (c) define options for counter-measures for managing each of these problems in particular, and the occurrence of failure at the systemic level.

Intellectual failures happen due to a lack of knowledge about cause-effect relationships, especially at the frontiers of science and behaviour, where such situations have not occurred before. Such situations are ripe for experimentation and entrepreneurial exploration. Firms need to sustain their experimentation and approach the problem-solution space entrepreneurially. There could be situations where the solution is too early for the problem, or where the ecosystem is such that the problem is not ready to be solved. The automobile industry’s experimentation with electric vehicles, in India and across the globe, would fall under such experimentation. Processes such as open innovation and embedded innovation would greatly contribute to learning. One such experimental innovation boundary space is JOSEPHS, built in the city centre of Nuremberg, Germany as an open innovation laboratory.

Learning from failures

In order to learn from preventable failures, organisations need to strengthen their processes, embark on benchmarking exercises both within their organisation and with others in their competitive/ collaborative ecosystems, and continuously evaluate the impact of their initiatives. The Indian telecommunications firm Airtel promised a consistent customer experience across all the 23 telecom circles it operated in India. One of its initiatives was to constantly benchmark each circle’s performance on a wide range of non-financial parameters and enable other circles to either learn and replicate the process that led to the performance, or justify why their circle had different processes that would achieve the same performance. Such justifications would be documented as new processes and become candidates for replication by other circles. This enabled Airtel to improve its performance to six sigma levels and provide a consistent customer experience across all its circles.

Learning from complex failures requires firms to undertake systematic and unbiased reviews of such failures, typically by engaging external agencies. Such reviews would dissect failures at each factor level, the interdependencies across these factors, and the causes of failure at the systemic level as well. When unbiased reviews happen, they allow organisations to strengthen their external (boundary-spanning) opportunity sensing and seizing processes; refine their interpretation schema to provide organisational units/ senior management with early-warning signals; and create options for managing each of these problems well before they actually occur. For instance, in response to the Fukushima Daiichi disaster, the Japanese Government decided to review its nuclear power policy and undertook a variety of counter-measures, including shutting down old/ ageing power plants and introducing a slew of regulations/ restrictions on the nuclear industry.

Learning from intellectual failures is possibly the easiest: the firm just needs to “persist”. One of the firms I was consulting for referred to its experimental product-market venture as a Formula-1 track: failures are insulated there, whereas successes can easily be transferred to mainstream product-markets. It is such mindset shifts that enable continuous learning from intellectual failures. For instance, the failure of the E-commerce venture FabMart in the early-2000s Indian market is an intellectual failure (well documented in the book Failing to Succeed by its co-founder, K Vaitheeswaran). When we wrote the case on FabMart in the year 1999 (available in the book Electronic Commerce), we hailed it as a harbinger of change in the way India would adopt the Internet and E-commerce. However, the business failed. The co-founders regrouped over the next decade and have created other E-commerce enterprises (Bigbasket and Again). The failure of the earlier venture provided them an opportunity to reflect on the specific reasons for failure, treat it as an experiment, and learn from it. Intellectual failures, therefore, need to be celebrated and treated as exclusive opportunities to learn.

In summary, mistakes lead to failures when we fail to learn from them and keep repeating them. Let us admonish repeated mistakes and celebrate failures!

Cheers!

(c) 2019. R Srinivasan

Collaborative economy, sharing economy, gig economy … what are they?

If you have been reading any technology-business interface discussions recently, you must have surely heard of these (and more) words…. Almost all platform businesses have been described using one or more of these labels. In this post, I analyse their meanings and differences.

Collaborative economy

As the name suggests, collaborative economy is when different parties collaborate and create new, unique value that would not have been possible individually. For instance, economic activities like crowdfunding or meetup groups qualify as parts of the collaborative economy. Platforms like Kickstarter or Innocentive help people collaborate and create new value. However, platforms like Uber or Airbnb do not qualify – drivers (on Uber) and hosts (on Airbnb) do not collaborate amongst themselves or with their riders (on Uber) and guests (on Airbnb) – they do business with the other side.

Sharing economy

At best, Uber and Airbnb, in their purest forms, qualify as sharing economy participants. In the sharing economy, participants share their surplus assets/ capacity with others (for a fee, of course); and there may be platforms facilitating this discovery and sharing (for a fee, surely). When the shared asset is a capital asset like a house (on Airbnb) or a piece of high-value equipment (in makerspaces), the economic argument is based on high fixed/ sunk costs and low marginal costs, coupled with low capacity utilization/ spare capacity. This is when these economic transactions become sharing. For instance, when I am driving long distance alone and want company for the distance (in the process, also sharing the cost), I could use Bla Bla Car, and that would be sharing economy, as three conditions are met: (1) the asset shared (the car) is a capital asset; (2) the marginal cost of adding another passenger is negligible; and (3) the car has spare space for the additional passenger. Value is created for both of us – I got company through the trip as well as a reduced cost; my co-passenger got to travel comfortably ‘with company’ at a much lower cost. I am not a professional driver looking to make money by transporting passengers from city A to city B. That would make me a ‘gigzombie’.
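The three conditions can be written down as a simple predicate (a hypothetical sketch; the attribute names are mine, not a standard taxonomy):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    capital_asset: bool       # is the shared asset a capital asset (car, house, equipment)?
    marginal_cost_low: bool   # is the marginal cost of adding a user negligible?
    spare_capacity: bool      # does the owner have genuinely unused capacity?

def is_sharing_economy(t: Transaction) -> bool:
    # All three conditions from the text must hold simultaneously.
    return t.capital_asset and t.marginal_cost_low and t.spare_capacity

# A long-distance ride with an empty seat (the Bla Bla Car example):
ride = Transaction(capital_asset=True, marginal_cost_low=True, spare_capacity=True)
# A professional driver whose car exists only to carry paying riders
# (no surplus capacity being shared – this is a gig, not sharing):
gig = Transaction(capital_asset=True, marginal_cost_low=True, spare_capacity=False)

print(is_sharing_economy(ride))  # True
print(is_sharing_economy(gig))   # False
```

The predicate makes the boundary explicit: drop any one condition (here, spare capacity) and the transaction falls out of the sharing economy.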

The gig economy

Used almost in a derogatory manner, the gig economy refers to the idea of loosely connected people sharing their labor/ expertise for professional returns. These laborers are not employees, but work on short-term jobs or ‘gigs’. These short-term gigs are provided by platforms like Uber. What Uber has become today is a marketplace for professional drivers. Uber does not employ them (with all the benefits and security of employment), yet treats them much like employees. Uber attracts professional driver-partners to serve its riders. The opportunity costs for these drivers are pretty high, unlike in the sharing economy. For instance, in the Bla Bla Car context, if I did not find a co-passenger (or someone I liked), I would still drive that long distance, because I had work in that city. Co-passenger or not, I would still go; I am not dependent on Bla Bla Car for meeting my costs. What a gig economy company like Uber does is to engage professional drivers (who would have otherwise been employed as drivers or in other roles) and give them business. This is fine as long as the ‘gig’ is a small proportion of one’s total work.

Take for instance a photographer who has his own professional practice. He acquires his customers through direct sales, word-of-mouth, and search engine/ social media marketing. When a gig economy platform like Urbanclap begins providing him business, it adds to his existing income, and he is willing to pay a commission to Urbanclap for getting him customers (whom he would have otherwise found difficult to get). However, when Urbanclap provides him ‘all’ his business, the ‘gig’ economy kicks in, and the photographer is at the mercy of his one and only source of jobs. He is now a ‘gigzombie’, and the aggregator can steeply increase her commissions.

The point to note is that in the gig economy, there is no collaboration/ co-creation (no common value added), nor is there shared-value creation (fixed assets, low marginal costs, and excess capacity).

Business models for collaborative, shared, and gig economies

Given that these three economies have different architectures of interaction amongst partners, we cannot have the same business models serving them. Collaborative economies require business models where the platform allows partners to complement each other’s value creation efforts; sharing economies require business models that match excess capacity with demand; whereas gig economies require aggregation and improving the efficiency of the overall system.

‘Bargaining’ power of gig economy platforms

Given that some of these gig economy platforms operate in winner-takes-all markets with high multi-homing costs, their bargaining power (in the traditional sense of the term) increases significantly. Multi-homing costs refer to the bureaucratic (search and contracting) and transaction (including variable) costs incurred by a set of partners in maintaining affiliations with multiple platforms simultaneously (switching costs refer to the costs of abandoning one platform in favor of another). For instance, given the high multi-homing costs for drivers on the Uber platform, it is nigh impossible for a driver-partner to fairly negotiate the terms of engagement with Uber. Combine this with the unorganized nature of the gigzombies, and we have the potential for organized exploitation.

Policy issues in gig economies

Gig economies typically attract ‘weak’ partners, at least on one side. For instance, Uber drivers work like employees (full-time with Uber) with little or no employment benefits. The imbalance is pretty stark when these platforms begin disrupting existing, established business models. Take the arguments against Airbnb around the systematic reduction of the supply of long-term housing in cities. When house-owners are faced with a choice between long-term rental contracts and short-term rentals on Airbnb, they may choose the more financially lucrative short-term Airbnb rentals; and when more and more home-owners do this in a city, the supply of housing reduces, driving up rental prices.

The impact of gig economy platforms on employment is even starker – especially when the labor is ‘online work’. In such cases, the worker in San Bernardino, California is competing with similarly trained workers in Berlin (Germany), Beijing (China), Bangalore (India), or Bangkok (Thailand). Obviously the labor costs are different, and the lack of regulation favors significant displacement of work to cheaper options. Even in the case of physical labor like driving cars, developed countries have depended on immigration from countries with a demographic dividend and lower costs of skilling (costs of education and vocational training) – some world leaders’ public stances notwithstanding.

Do not confuse the three of them!

In summary, we are talking about three different business models when we speak of collaborative, sharing, and gig economies, and they need to be treated differently. Primarily, regulators and investors need to understand the differences and frame policies that are sensitive to them.

Cheers!

(c) 2018. R. Srinivasan

 

Digital transformation imperatives

I wrote a blog post on digital transformation stages about two years ago (yes, it has been two years!). In that post, I outlined digital transformation as a process and proposed a four-stage model. Over the past couple of months, I have been leading the IIMB executive education programme on Leading Digital Transformation. This programme, offered jointly with the Friedrich Alexander University of Erlangen-Nuremberg and the Fraunhofer Institute for Integrated Circuits IIS, has attracted an impressive class of participants (2018 being the pioneering batch)*. I have been closely interacting with the participants of the programme – who are leading, or looking to lead, digital transformation in their respective enterprises – over their modules in Bangalore and Nuremberg. As I sat through their project proposal presentations, three thoughts flashed through my mind:

  1. True digital transformation is possible only when there is a mindset change.
  2. Process reengineering should help the enterprise seek to add new/ additional value, rather than just more of the same value.
  3. Enterprises should proactively plan to utilize and leverage the large amounts of data such transformational journeys generate, failing which the value added wouldn’t be sustainable.

Digital transformation as a mindset change

Digital transformation is not just automating processes or introducing fancy technologies in the workplace. There are a lot of organizations out there that have embarked on the slippery path of digital transformation without fully understanding what it takes. I remember the early days of computerization, when just introducing computers was not sufficient – it was important to highlight how and why computers helped improve organizational efficiency (reliability of repeatable processes). Managers resisting change would raise objections like “organizational decision making cannot be programmed”, and argue that technology change is likely to result in loss of employment and, subsequently, control. I hear the same arguments today … especially with respect to automation and artificial intelligence … the human touch will be gone, digital assistants cannot replace human beings, and the like. It is imperative for leaders to communicate to both the far and near mindsets (read more about far and near mindsets here) to effect acceptance and adoption.

New/ additional value creation

Leaders should consistently communicate the new value created as a result of digitalization. Just additional value (more of the same) does not always justify the investments in digital transformation … even though the competitive context may demand it. To do more of the same things and add the same value, you just need to digitalize your existing processes and make them more efficient. It is when you want to add new value that you need to embark on the process of digital transformation. And this happens when you begin looking at processes with an intent to re-engineer – critically evaluate the processes and drive change at the process design layer, rather than at the execution layer; and ensure that new/ additional value is created and appropriated.

Descriptive, predictive, and prescriptive data analysis

Value creation/ generation is a function of process re-engineering and re-design; value appropriation is far more difficult. Effective value appropriation requires the organization to invest in deeper analysis of the vast amounts of data generated, to be able to (a) describe (based on patterns in the past), (b) predict (extrapolate based on past data, as well as estimates of other parameters that affect the phenomena), and (c) prescribe a course of action (based on the juxtaposition of the description and prediction of the environment with the organization’s intent and capabilities). The criticality of descriptive, predictive, and prescriptive data analysis needs to be articulated and socialized in the organization as a key to sustaining the digital transformation journey. In the absence of such insights, these transformational efforts will surely fade away, if not fail!

Such insights should provide the organization with the much-needed data to convince its members of the short- and long-term value created by digital transformation efforts.
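The three analysis layers can be sketched in miniature (the monthly order numbers, the capacity threshold, and the decision rule are illustrative assumptions, not data from any organization):

```python
# Minimal sketch of descriptive, predictive, and prescriptive analysis
# on made-up monthly order counts for a hypothetical business.

orders = [120, 135, 150, 160, 178, 190]  # last six months (illustrative)

# Descriptive: summarise patterns in past data.
n = len(orders)
mean_orders = sum(orders) / n

# Predictive: extrapolate with a simple least-squares trend line.
xs = range(n)
x_mean = sum(xs) / n
slope = (sum((x - x_mean) * (y - mean_orders) for x, y in zip(xs, orders))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = mean_orders - slope * x_mean
next_month = intercept + slope * n  # forecast for month 7

# Prescriptive: juxtapose the prediction with intent and capability.
capacity = 185  # assumed monthly capacity
action = "add capacity" if next_month > capacity else "hold steady"

print(round(mean_orders, 1), round(next_month, 1), action)
```

Real organizations would of course use far richer models, but the layering is the same: description feeds prediction, and prediction plus organizational intent yields a prescription.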

 

[Figure: Digital transformation imperatives]

So, in summary, digital transformation requires an organization to continuously invest in mindset change, process redesign, and generate insights.

Cheers!

© 2018. R Srinivasan.

* Disclaimers: I am the IIMB Chair for Executive Education Programmes that markets these programmes; I am also the programme director (course designer and teacher) of the said programme: Leading Digital Transformation; and I am a visiting professor at the Friedrich Alexander University of Erlangen-Nuremberg. Apologies for the self-promotion, and the (unintended) marketing pitch for the programme.

Far and near mindsets

Last month, I was at Yale University, listening to Prof. Nathan Novemsky on different mindsets. Of the various mindsets we discussed, psychological distance (and its impact on communication and marketing) caught my attention. In this blog post, I elaborate on the concept of psychological distance and why it is important in the context of entrepreneurship and multi-sided platform businesses.

Psychological distance: Basics

Prof. Novemsky’s (and his colleagues’) research indicates that as people get closer to the decision in terms of time, their mindset changes from a “far mindset” to a “near mindset”. When people engage with you on a far mindset, they are concerned about the “why” questions; whereas when they engage with you on a near mindset, they are concerned about the “how” questions.

Let me illustrate. When a customer downloads an Uber App for the first time, she is more concerned about how she is contributing to the environment by being part of the shared economy, and therefore is less concerned about issues like the minute features of the user interface/ user experience. On the other hand, a customer who is getting out of a day-long meeting with a demanding customer is worried more about the minute details of the ride, like the time taken for the car to arrive, type and cleanliness of the car, and driver’s credentials and behavior; as she is engaging on a near mindset.

Communication and marketing

Understanding which mindset your customer is engaging with is imperative to designing your communication. When you advertise a grocery home delivery service on television, you might want to appeal to the consumer’s far mindset … talking about why he should choose your service rather than the neighborhood grocer/ vegetable market. For instance, the benefits of fresh produce delivered straight from the farm (without middlemen), faster, would make immense sense. However, when you communicate with your customer after he has decided to place an order, you might want to talk about specific discounts, receiving delivery at a convenient time, quantity changes, add-ons and freebies, and payment options.

What does this mean to start-ups/ entrepreneurs?

That’s simple, right? A founder communicating with a potential investor should talk to the “far mindset” rather than the “near mindset” if he is to raise money. However, a customer presentation has to appeal to the near mindset.

For instance, the home-health care start-up for pets (petzz.org) communicates to its pet-owners the convenience of all-day home visits by veterinarians and the specific plans they can choose from; and, to its veterinarian partners, the significant increase in business. However, when it runs camps to enroll pet-owners, it talks about “healthy pets are happy pets”, communicating to the far mindset.

However, the investor deck only appeals to the far mindset … how their business model leads to “healthy pets” and why this is a compelling value proposition for its pet-owners, veterinarians as well as other partners in its platform.

[Disclaimer: I advise petzz.org]

Implications for multi-sided platforms

Not so simple. I can envisage that different sides of a platform may be operating at different mindsets, something MSPs need to be continuously aware of. Take the example of the social-giving/ crowd-funding platform Milaap (milaap.org). The two sides of the platform are givers and fund-raisers.

Imagine a fund-raiser appeal … which one appeals to you most?

  1. “help a school from rural Chhattisgarh build toilets for girls”
  2. “help support girls’ education”
  3. “make sure girls like Shanti don’t drop out of school”

As you move down from option 1 to 2 to 3, you are increasingly operating from the far-mindset!

On the other hand, Milaap attracts fund-raisers with the following messages:

  1. “you get the most socially-conscious givers at milaap”
  2. “it’s easy to communicate with givers at milaap”
  3. “it’s easy to log in and set up, and it’s free”

As you move down from option 1 to 2 to 3 here, you move towards a near-mindset!

It gets more complicated when the different sides of the platform are at different stages of decision-making. For instance, when a C2C used-goods marketplace platform like Quikr has a lot of buyers and far fewer sellers, the messaging across the two sides has to be different! For the sellers who are still contemplating joining the platform, the message has to appeal to the far mindset (of decluttering their homes), whereas for the umpteen buyers who are looking for goods on the platform, the message has to appeal to the ease of transacting (near mindset).

Match the message to the mindset and the stage of the engagement

In summary, effective platforms have to communicate consistently across their multiple sides, while keeping in mind the different mindsets of the respective sides. A cab-hailing app has to communicate differently to its riders and its drivers, while sustaining the same positioning. If the rider value offering is about the speed of the cab reaching you, the driver communication has to be consistent: the speed of reaching the rider. For the driver, it is a near mindset (speed of reaching the rider is about efficiency), whereas for the rider, speed may appeal to the far mindset (not driving your own car and keeping it waiting all day at an expensive parking place; or better still, reducing congestion in city centers). And for sure, these messages also have to change over the various stages of consumer engagement, right!

Any examples of mismatched communication welcome!

Cheers from a rainy day in Nuremberg, Germany.

© 2018. R. Srinivasan

Facebook did not “steal” my data: I actually work for Facebook and produced the data!

On one (more) of my flights, I had a conversation with a co-passenger. A young adult (one a lot of us would be so excited to label a “millennial”), she was excited about the recent announcement that airlines in India can offer in-flight internet connectivity. Even as the flight was on the active runway, she was on her mobile phone (ignoring the requests of the flight attendants), taking pictures of herself in the plane and posting them on Facebook. She was on her way to attend a friend’s graduation, and everyone who was travelling in was marking their presence. As the network died, she turned to me and asked, “so, how long will it be before the airlines implement internet connectivity in-flight?”. I explained how difficult it would be for existing planes to be retrofitted with such capabilities, though all new planes could be inducted with the facility. As she realised it was going to be some time, she sighed, put on her headphones, and went off to doze.

It set me thinking about my previous post “Facebook stole my data“. By calling that data mine (belonging to me), I was actually falling for the “personal data as a mine” metaphor – the idea that data is a resource much like a natural resource, there for tech firms to extract. As if it were fossilised like oil or coal, or formed over a long period of time, like alluvial soil. If that were so, then governments should regulate it well. Governments should therefore own all this data and possibly license the extractors with specific mandates to provide unique value to the users (citizens), and such resources would remain in the realm of a public good.

The trouble with this metaphor is that the data did not exist in the first place … I created that data, and continue to create it. Nicholas Carr, in his blog, aptly wrote that I am a data factory (and so are you). He debunks the mining metaphor and says that just by living – walking, interacting with friends, buying, reading, and just lazily watching something on the internet – I create data. And not just any data, in any form … in the form that Facebook wants me to. This is a little more nuanced than what I argued in my previous post … I argued that Facebook created the data. Carr argues that I create that data, working for Facebook, in the form and design that Facebook wants. Do you see yourself as working for tech companies like Amazon, Facebook, Apple, Google, and Netflix, not for a salary/ remuneration, but as a data factory? And worse, the data that you produce being used to “enrich” someone in another corner of the world, in the name of providing you with a meaningful experience?

This metaphor places the ownership of the design of the data, apart from the data itself, with the tech companies. And in making this addictive (recall how the first line of your Facebook homepage starts with “write something”, possibly in your mother tongue!), tech companies like Facebook are trying to engineer our lives. Really? Does Facebook tell me what to do? Or does Facebook want to know what (everything, whatever) I do? I tend to believe the latter.

Here is where the difference between use and addiction begins. When you use, you benefit … when you are addicted, they benefit. Pretty much like using sinful products (like tobacco and alcohol), handling dangerous products (like fire and guns), and managing borderline behaviour (like assertiveness becoming aggressiveness), we should possibly educate ourselves about responsible use.

Responsible use of Facebook! That sounds not-so-fun. The flight was about to land, and the flight attendant was waking up the lady seated next to me to put her seat in the upright position. She woke up excited, and looked outside to see if we had landed … and as soon as we landed, she opened her Facebook app and went on to write something. I did apologise for snooping into her behaviour, but she said she was fine. She was comfortable giving away her behavioural/ affiliative/ preferential/ locational data to some tech company server … and she was not one bit uncomfortable sharing her behavioural data with me. I told her that I would write about her Facebook addiction as we were getting off the plane, and she said that would be “cool”, gave me her email id, and asked me to email the piece to her (which I will promptly do). She would possibly post it on her Facebook wall too. So much for addiction.

So much for mid-week reading.

Cheers.

(C) 2018. R Srinivasan

 

Facebook “stole” my data?!

Writing in after a really long time. Waiting for a flight to take off, I was reading this nice piece by Alex Tabarrok, aptly titled, The Facebook Trials: It’s not “our” data. He argues that before Facebook, this data did not exist in the first place; Facebook “created” that data. So, it has every right to use that data – since it uncovered your relationships and preferences (some of which even you did not know yourself).

That Facebook used the data to target advertisements at me is not the primary concern. What is immoral about this entire episode is that Facebook allowed third parties to use the data in ways neither you wanted nor Facebook anticipated. When I engage with a technology platform like Facebook or Google, I implicitly educate the platform, much the same way a discerning store manager would understand my preferences as I walked through the store (okay, armed with more storage and processing power than the human store manager). The principle is the same.

What is scary is that the data was used to “manipulate”, not just “advertise to/ communicate with”, users. When the data user treated me as a consumer and customized solutions for me (including recommendations), I liked it, appreciated it, and enjoyed it. But when it fed me specific, filtered non-commercial information (like political news), that is where the abuse began.

I guess therefore I would not delete my Facebook account anytime soon (not that I use it regularly); but I would keep my eyes and ears open on what these trials lead to, and what privacy checks these technology platforms commit to in the various countries. Prof. Nirmalya Kumar argues why he found Facebook and its CEO wanting in their apologies and testimonies, and why he quit Facebook here. I would however be looking out for inconsistencies in such commitments across countries with differing privacy and data-management laws, much like how the packaged-food industry leverages such differences to sell unhealthy food to consumers in countries with lenient rules.

Cheers.

(C) 2018. Srinivasan R

Predatory pricing in multi-sided platforms

Over the last few days, living and commuting in Nuremberg, I realised that I was not missing Uber. Yet just about a month ago, commuting for a week in suburban Paris, I was completely dependent on Uber for even the shortest of distances. The penetration of public transport and my familiarity with the city of Nuremberg aside, I began wondering how Uber would price its services in a city like Nuremberg (when it enters here, which I doubt very much will happen in the next few years), where public transport is omnipresent, efficient, and affordable. It would surely have to adopt predatory pricing.

In this post, I will elaborate on the concept of predatory pricing in the context of multi-sided platforms.

Theory alert! If you are uncomfortable with theory, skip directly to the illustration and come back to read the theory.

What is predatory pricing?

Economists and policy makers concerned about market efficiency and fair competition have long been occupied with the concept of predatory pricing. The most common definition of predatory pricing comes from the application of the conventional Areeda-Turner test. Published way back in 1975, it has, in spite of its limitations, been used consistently by most countries and courts, in some ways due to the lack of any credible alternative.

The Areeda-Turner test is based on two basic premises. The recoupment premise states that the firm indulging in predatory pricing should be able to predict and be confident of its ability to recoup the losses through higher profits as competition exits the market. The assumption is that the firm could reasonably anticipate the (opportunity) costs of predatory pricing, as well as have an estimate of the future value of monopoly profits; and the net present value of such predatory pricing to push competition out of the market should be positive and attractive. In plain English, the firm should be able to project the effect of lower prices in terms of lower competition and higher profits in the future.

How low can this predatory price be? That is the subject of the second premise – the AVC premise. The firm’s prices (at business-as-usual volumes) should be below its average variable costs (AVC), or marginal costs in the short run. If the prices were indeed above the AVC, the firm could argue that it is simply more efficient than competition, due to any of its resources, processes, or organisational arrangements. It is when the price falls below the AVC that the question of unfair competition arises – the firm might be subsidising its losses.

Take, for instance, a start-up that is piloting an innovative technology. It may price its products/ services below the AVC to gain valuable feedback from its lead users; but in the absence of the recoupment premise, such pricing might not qualify as predatory. On the other hand, imagine a new entrant with superior technology that can bring costs down to a level where its prices fall below the marginal costs of competitors but stay well above its own AVC – it is just disrupting the market.

Only when both the conditions are met, i.e., when the predator’s prices are below the AVC and the firm can project the extent of recoupment through monopoly profits as competition exits the market, do we call it predatory pricing.
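The two premises can be captured in a short sketch (my own toy illustration with made-up numbers; the actual legal test is, of course, far more nuanced):

```python
# Toy illustration of the Areeda-Turner logic (made-up numbers, not legal analysis).
# A price is flagged as potentially predatory only when BOTH premises hold:
# the AVC premise (price below average variable cost) and the recoupment
# premise (the predation campaign has a positive net present value).

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def is_predatory(price, avg_variable_cost, cash_flows, discount_rate=0.10):
    below_avc = price < avg_variable_cost             # AVC premise
    recoupable = npv(cash_flows, discount_rate) > 0   # recoupment premise
    return below_avc and recoupable

# Two years of losses while predating, then four years of monopoly profits.
flows = [-100, -100, 90, 90, 90, 90]

print(is_predatory(price=40, avg_variable_cost=50, cash_flows=flows))  # True: both premises hold
print(is_predatory(price=55, avg_variable_cost=50, cash_flows=flows))  # False: above AVC, merely efficient
```

A price below AVC with no plausible recoupment (a negative NPV) would also return False, which is why the pilot-stage start-up above would not be caught by the test.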

Predatory pricing in MSPs

There has been a lot of discussion about how ecommerce firms in India have been indulging in predatory pricing and how various platforms have been going under. I had written about subsidies and freebies from a consumer perspective a few months ago (Free… continue hoyenga). Let us discuss how and why it is difficult to assess if a lower-than-competition price is indeed predatory in the context of multi-sided platforms (MSPs).

  1. Multi-sided platforms have a unique problem to solve in their early days: network mobilisation. A chicken-and-egg situation, or a Penguin problem, where “nobody joins unless everyone joins”, is prevalent in establishing a two-sided or multi-sided platform (for more details about the Penguin problem and network-mobilisation strategies, read my earlier post here). In order to build a sufficient user base on one side, a common strategy is to subsidise, or even provide the services free.
  2. Another common feature of MSPs is the existence of subsidy sides and money sides. The platform might subsidise one side of users and make money from the other, while incurring the costs of serving both sides, depending on their relative price elasticities and willingness to affiliate with the other side. The prices for the subsidy side would surely be below the costs of serving that side. It is therefore imperative that the overall costs and prices are considered while analysing these pricing strategies.
  3. These cross-side network effects will surely force platforms to price their services efficiently across both sides. Even on the money side, the platform might not be able to charge extraordinary prices, as such prices would themselves act against the sustenance of the cross-side network effects. It is likely that these extra-normal profits would evaporate through subsidies on the other side to keep the network effects active. Imagine a situation where a B2B marketplace charged its sellers higher-than-normal prices: only large (and desperate) sellers would affiliate with the marketplace, leading to buyers (the subsidy side) leaving the platform. In order to keep the buyers interested, the marketplace might either have to broaden the base of sellers by optimising the prices, or provide extraordinary subsidies to the buyers. So, in order to maintain the equilibrium, the platform would have to price both sides efficiently.
  4. Finally, in a competitive situation, not all competitors might follow the same price structure. So, a reduction of prices by one competitor on one side of the market may not force all other competitors to reduce prices; they may instead encourage multi-homing (users using competing products simultaneously) or adjust the price on the other side.

So, a direct application of the Areeda-Turner test might not be appropriate while studying predatory pricing in the context of MSPs.
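A toy calculation (numbers entirely made up) shows why looking at one side alone misleads: the subsidy side is priced below its own cost, yet the platform is profitable overall.

```python
# Toy two-sided platform P&L (illustrative numbers only).
# A one-sided AVC test on the buyer side would flag predation,
# but pooling both sides shows an ordinarily profitable price structure.

def side_margin(users, price, cost_per_user):
    """Contribution margin of one side of the platform."""
    return users * (price - cost_per_user)

buyers  = side_margin(users=10_000, price=0.0,  cost_per_user=1.0)   # subsidy side: free to use
sellers = side_margin(users=1_000,  price=25.0, cost_per_user=5.0)   # money side

print(buyers)            # -10000.0: below cost if this side is viewed alone
print(buyers + sellers)  # 10000.0: profitable once both sides are pooled
```

The same arithmetic run side-by-side is exactly what a regulator would need before concluding that a zero price on one side is predatory.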

An illustration

Let us imagine a market for home tutors supporting school students. The market is inherently geographically constrained; it is very unlikely that either the teacher or the student would travel across cities for this purpose. For the time being, let us assume that there is no technology (like video conferencing) being used.

This market is apt for the entry of a multi-sided platform, like LocalTutor. This firm provides a platform for the discovery and matching of freelance tutors with students. LocalTutor monetises the student side by charging a monthly fee (that includes a platform commission), and passes on the fees to the tutor. We need to make two assumptions before we proceed to competitive entry and predatory pricing: first, the market is fully penetrated (all the students looking for tutors and all the tutors looking for students are already in the market), and no new students or tutors enter; second, there are no special preferences between student-tutor matches, i.e., the student-tutor pair does not form a bond like a sportsperson-coach pair, where they begin working as a team. In other words, the tutor is seamlessly (with no loss of efficiency) replaceable.

Now imagine a new competitor enters the market and engages in predatory pricing to kick in network effects. The new entrant, let’s call it GlobalTutor (a fictitious name), drops the student-side prices to half. In order to attract the right number and quality of tutors, GlobalTutor has to sustain the same fees that LocalTutor pays its tutors, if not more. So it starts dipping into its capital reserves, paying the tutors market rates while reducing the student fees. Anticipating a large surge in student numbers, more tutors sign up with GlobalTutor; and seeing the number and quality of tutors on GlobalTutor (at least not inferior to LocalTutor’s), students first start multi-homing (using both services for different needs, like LocalTutor for mathematics and GlobalTutor for music classes), and some of them begin switching.

In a fully penetrated market, the only way for LocalTutor to compete is to respond with its price structure. It has two options – reduce the student-side prices to restrain switching and multi-homing behaviour; and tweak the tutor-side prices and incentives. The first option is straightforward; it is cost-enhancing and profit-reducing. The second option (which is not available for pipeline businesses) is interesting in the context of platform businesses.

There are various ways of responding to this threat. The intent is to arrest switching and multi-homing behaviour of tutors and students from LocalTutor to GlobalTutor.

  1. Increasing the multi-homing costs of tutors by providing them with incentives based on exclusivity/ volume: like what Uber/ Ola provide their drivers – the incentives kick in at what the company believes is the most a driver can do when they do not multi-home. In other words, if you multi-homed and drove your car with both Ola and Uber, you would never reach the volumes required to earn your incentives on either platform.
  2. Contractual exclusion: This might not be tenable in most courts of law if these freelance tutors are not your ‘employees’. Given the tone of most courts on Uber’s relations with its driver-partners (drivers’ lack of control over most transaction decisions, including choice of destination, pricing, and passenger choice), any such contracting would imply that the tutors are employees, and that would significantly increase the platform’s costs (paying for employee benefits is always more expensive than outsourcing to independent service providers).
  3. Increase contract tenure: LocalTutor may increase multi-homing and switching costs by increasing the tenure of contracting from monthly to annual. Annual contracting will reduce the flexibility that students and tutors have, and might result in a reduction in volume.
  4. The next options for LocalTutor are to work on the two restraining assumptions we made at the beginning: full penetration and perpetual matching. LocalTutor might want to add more and more students and tutors and expand the market, providing unique and differentiated services like art & craft classes, preparation for science Olympiads, or other competitive tests. LocalTutor might also communicate the value of student-tutor teaming in its success stories, in a bid to disincentivise switching and multi-homing.

To predate or not to predate is not the question

Given the differences between pipeline and platform businesses, new entrants seeking to mobilise network effects have very little option but to resort to predatory pricing. The choice is not if, but how. And as an incumbent, should you be prepared for a new entrant who will resort to predatory pricing? Surely, yes! And how? By being ready to expand the market and to increase switching and multi-homing costs. Unlike the tutoring business, which is inherently geographically constrained, a lot of businesses can span markets. Even tutoring could leverage technology to reach a global audience.

Just one comforting thought: predatory pricing as a strategy to eliminate competition is inefficient in the long run. The new entrant might adopt predatory pricing to eliminate competition in the short run, but the act of predatory pricing breaks down most barriers to entry and signals to others that there is a market that is easy to enter. It might attract an even more highly capitalised competitor to enter the market with the same strategies … making the market a ‘contestable market’. And no one wants to make a fortune in a contestable market, right? More on competing in contestable markets, subsequently.

Cheers

© 2017. R Srinivasan

 

Hindsight and foresight in judgments under uncertainty

As I began reflecting on my earlier posts You are intelligent: have you done something dumb? and Judgments, I rolled back to this classic article, so beautifully titled, Hindsight ≠ foresight. Written by B. Fischhoff way back in 1975, this paper offers two results that are intuitive (at least in hindsight): once the outcome of an event is known, people assign a higher probability to its occurrence; and people are unaware that they have been influenced by hindsight (the knowledge of what actually happened).

Case class in a business school – hampered by hindsight bias

Take a business school case class, for instance. As a case teacher, I face this quite a lot. In a typical strategy case, the primary question to the class is “what should the company do?” And since most cases are set a few years in the past, a simple Internet search (by the students as part of their class preparation) would have informed them about what actually happened. Given that students come into class with this hindsight, they try very hard to fit their preparation and theoretical arguments to the actual outcome, however irrational or improbable it might have been. A good management teacher ought therefore to provide for this “hindsight bias” in students and ensure that a fair discussion happens in class on all possible outcomes.

Should I, therefore, as a management teacher, provide my students only with cases for which the outcomes are not known? What, then, are my criteria for choosing cases for a class? Do I fight to eliminate this “hindsight bias”? Let me come back to this later.

An air-crash investigation

Let us take another example. A special team has been tasked with investigating the cause of an air crash. Any investigation of an accident inevitably entails putting together pieces of information to arrive at a causal relationship between the antecedent factors and the event, which is known to have happened. It is impossible to eliminate the source of bias here: the event itself. The investigation team has to be trained to construct a counter-factual (good) outcome from the evidence at hand. They need to recreate the antecedents to the event in a manner that lets them evaluate whether anyone in the place of the actors (the pilots and crew who made certain decisions) would have made the same decisions. Experimentally, it is akin to creating a control group that knows all the facts leading up to the event, but not the actual outcome.

Investigating white-collar crime

In the case of white-collar crime, especially when it involves financial fraud, another significant factor compounds hindsight bias: the size of the impact. Larger frauds are fraught with more pronounced biases. Media coverage of “select” white-collar crimes is testimony to such biases. Nicholas Bourtin (read the article here) adds that, armed with hindsight bias, financial crime investigators might ascribe malicious intent to even innocent mistakes or poor judgement.

As I was thinking about this issue, I just saw the breaking news of an earthquake of 7.2 magnitude hitting the Iraq-Iran border (Sunday, 12th November 2017). Hopefully, there isn’t much damage. And it triggered a thought.

Fighting hindsight bias – learning from geophysics

A great learning for fighting hindsight bias comes from geophysical studies. Imagine how geologists and geophysicists study earthquakes and volcanic eruptions. These are events that just “happen”, and then the scientists “reconstruct” the events through carefully collected data. Can we learn something from the way they fight hindsight bias? Sure.

Strategy #1: Conduct stability studies. Not just fault studies, but stability studies. Take highly quake-prone areas, and go study why earthquakes aren’t happening! Such data would provide the ‘normal’ distribution, with the occurrence of earthquakes being the outliers.

Strategy #2: Broaden the search. Take all retrospective data for analysis. Study all the quakes that happened on a plate/ all eruptions of a volcano. Such events may occur very infrequently, and may be randomly distributed over time. However infrequent they may be, it would be worthwhile to study the antecedent conditions every time. Maybe one can find a cause-effect relationship – like concluding that most road accidents happen between 2.00am and 4.30am because there is a high likelihood of drivers sleeping behind the wheel (that is, if cars are still being driven by human drivers!).

Strategy #3: Combine the two, and seek patterns. Conduct stability studies and say why events do not happen, and conduct (with big data) longitudinal studies to infer why events do happen. Combine the two and create patterns. Such patterns can be immensely helpful in studying antecedents of events, and effectively fighting hindsight bias.

Fighting hindsight bias – applying it in managerial judgement

Now, let us try to apply the learning from geophysics to managerial judgement. First, consider the prior probabilities of an event appropriately. Imagine an angry boss (no, I would like to believe that all bosses are not always angry!). In trying to understand what angered her today, use prior probabilities appropriately. She may be angry because she is being held accountable for something beyond her control (like your productivity), or simply because she gets frustrated about not being able to communicate with or convince others. Strategy #1: ask yourself, when is she not angry? She is not angry when you complete your work on time, when you present your work properly (as she likes it), and when your work is of good quality. Then why is she angry today? You have the answer.

Second, mind sample sizes before thinking probability. Unless you have a really large sample of such events, stop thinking about probability. Imagine predictions in sport or financial services. I was taught in my first finance classes that “past performance is not an indication of future performance”. And my brief indulgence with sports tells me that the law of averages means “sustained good performance does not last long”. Would you be confident predicting the goals scored by a football team if its prior results were [4-1, 3-0, 5-2, and 1-0] or [1-4, 4-2, 2-2, and 1-0], with the second number in each pair representing the goals scored by the opposition? Most would be more confident predicting from the former scoring pattern than the latter. It might just happen that the next game is against the league leader (including someone with the initials CR7) and all these past performances do not matter at all. You really need loads of data on each team’s performance, including the historical performances of all the opposition teams, before you make any predictions.
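To see how little four games tell us, here is a quick back-of-the-envelope calculation on the two scoring patterns above (goals-for only):

```python
# Mean and (population) standard deviation of goals scored in the two
# four-game patterns from the text. With n = 4, the spread is of the
# same order as the mean, far too noisy for confident prediction.
from statistics import mean, pstdev

pattern_a = [4, 3, 5, 1]  # our team's goals in 4-1, 3-0, 5-2, 1-0
pattern_b = [1, 4, 2, 1]  # our team's goals in 1-4, 4-2, 2-2, 1-0

for name, goals in [("A", pattern_a), ("B", pattern_b)]:
    print(name, round(mean(goals), 2), round(pstdev(goals), 2))
```

Both standard deviations (roughly 1.5 and 1.2 goals) are comparable to the means themselves, so neither pattern supports a confident forecast.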

Third, stay away from causal relationships (no, I did not say casual relationships!) unless you have really “big data” on both the normal distribution of the event not happening and the outlier chance of the event happening. Remember the wonder batsman Pranav Dhanawade, the 17-year-old who scored 1009* runs for his local cricket team? A few years later, his father decided to return the scholarship he had received, since he had not performed up to expectations (read it here). When an event of this nature (an extraordinary performance) occurred, it was important not just to reward, but also to invest in nurturing, the talent. Without an adequate support structure to hone his talent, the financial reward was insufficient to sustain even acceptable performance.

So why do some firms perform better than others?

The answer may not lie in analyzing why those firms performed better, but in understanding what the others do that makes them not perform as well as the high performers; in longitudinal and cross-sectional (big) data on multiple firms’ performance; and in being very cautious about making causal assertions. Isn’t this the core of strategy research today?

Cheers!

(c) 2017. Srinivasan R
