Glass box organizations: Platforms

Way back in September 2017, David Mattin of Trend-Watching wrote about glass box brands. He argued that organizations are moving away from being black boxes (where customers could see only what was painted on the outside) to glass boxes, where everything that happens inside the organization is visible to everyone outside.

The primary arguments of the glass box world are: (a) in an era of social media and high organizational attrition, even mundane activities like routines and rituals are visible to the outside world; and (b) trends like automation, inequality, and globalization have led to “meaningful consumerism”, bordering on activism. Consumers are therefore making choices about their brand affiliation and loyalty based on company culture and values, apart from other considerations.

If the internal culture is the brand’s window to the outside world, it is important for every organization to meaningfully nurture it, articulate it, and live it. I am not going to dwell on how to develop your internal culture and values, but on the implications of the glass box metaphor in the context of platforms and digital organizations.

Multi-sided platforms as glass boxes

By definition, multi-sided platforms (MSPs) have many “sides” that drive network effects. For instance, a guest chooses to use Airbnb while travelling because she values the number and quality of hosts. When Airbnb doesn’t treat one side well, it directly impacts the quality of interaction with the other side and weakens the network effects, which in turn affects the willingness to join (WTJ) and willingness to pay (WTP) of users on the other side. The quality of the platform deteriorates and can even degenerate into a “market for lemons”. Such network-effect dynamics ensure that platforms do not unduly favor one side over the other, especially when there are cross-side network effects. However, these dynamics do not cover how the firm treats its employees – remember Travis Kalanick and Uber?!
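The cross-side mechanism above can be sketched as a toy model. This is my own illustration, not anything from Airbnb: guest willingness to join is assumed to grow with host count (with diminishing returns) and host quality, so mistreating the host side depresses guest demand.

```python
import math

def guest_wtj(n_hosts: int, avg_host_quality: float) -> float:
    """Guest utility grows with host count (diminishing returns) and host quality."""
    return avg_host_quality * math.log1p(n_hosts)

def hosts_after_mistreatment(n_hosts: int, churn_rate: float) -> int:
    """If the platform treats hosts badly, a fraction of them churns out."""
    return int(n_hosts * (1 - churn_rate))

# Mistreating hosts shrinks and degrades the host side...
remaining = hosts_after_mistreatment(1000, 0.4)
# ...which lowers guest willingness to join on the other side.
assert guest_wtj(remaining, 0.7) < guest_wtj(1000, 0.9)
```

The functional forms are arbitrary; the point is only the direction of the cross-side effect.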

Digitalization and glass boxes

The omnipresent social media and the constant urge of employees and customers to document and share their experiences online (most often with the general public, including strangers) have been among the drivers of the glass-walling of organizations. Isn’t that why the digital platform that allows employees to review their workplaces is called glassdoor.com? Sure, glassdoor.com monetizes the corporate side of its network through recruitment services, but its primary differentiator is the large volume of anonymous employee reviews of work culture and salary structures. We know that when the side being reviewed is the one being monetized, it is in the interest of the firm to have good-quality reviews on the platform, failing which it finds it difficult to attract enough quality candidates. There is enough incentive to witch-hunt people who write bad reviews, as well as to fill the site with paid/fake reviews to overshadow the “real” bad ones. It is still a glass door after all, not a glass box!

Digitalization of employee experience holds significant potential for managing the quality of the brand as perceived outside the firm. A lot of firms focus on improving customer experience in their digitalization journeys, but employee experience is equally critical (read more about it in one of my earlier posts). A good employee experience ensures that the positive experiences have spill-over effects on efficiency, performance, internal culture, as well as customer experience. A variety of organizations including VMware, SAP and IBM have laid explicit focus on improving employee experience in their digital transformation journeys.

Stay home, stay safe, stay healthy!

(c) 2020. Srinivasan R.

Remora Strategies

It is interesting how much management as a discipline borrows from other disciplines. Much like the English language. That is for another day. Today, working from home, I got to reading a lot of marine biology. Yes, you heard it right, marine biology. It is about the Remora fish and its relationship with sharks and other larger marine animals. Students in my IIMB MBA class of 2020 heard about it in one of my sessions in the platform business course, and a couple of groups also used this concept in their live projects.

What is a Remora and what is its relationship with Sharks?

The Remora is a fish. It is possibly the world’s best hitchhiker. It has an organ that allows it to attach itself to a larger animal, like a shark or a whale. This sucker-like organ is a flat surface on its back that allows it to attach to the belly of a shark, which is why it is also popularly known as a sharksucker or whalesucker. Remoras have been known to attach themselves to divers and snorkelers as well. The sucker organ looks like Venetian blinds that increase or decrease suction on the larger fish’s body as the Remora slides backward or forward. They can therefore be removed by sliding them forward.

Remoras can swim on their own. They are perfectly capable. But they prefer to attach themselves to larger fish to hitch a ride to deeper parts of the ocean, saving precious energy. Their relationship with the Shark is unique – they do not draw blood or nutrients from the Shark the way a leech would. They feed on the food scraps of the larger fish by keeping their mouths open. While the Remora benefits from attaching to the Shark, it does not significantly benefit or harm the Shark. Some scientists argue that Sharks like the fact that Remoras feed on the parasites attached to their skin, thereby keeping them healthy. Others point to the drag experienced by Sharks as they swim deeper in the oceans, which can be significant when there are dozens of Remoras attached. Neither effect is significant enough for Sharks to actively welcome Remoras onto their bellies, nor have Sharks exhibited any behaviour to repel Remoras (like they do with other parasites). Read more about Remoras’ relationship with Sharks here.

This relationship between the Remora and the Shark can be termed commensalism rather than mutualism. If the Sharks indeed value the fact that Remoras help them get rid of parasites from their teeth or skin, then we could term the relationship mutualistic.

Remoras and platform start-ups

Why is a platform researcher studying Remoras? Because a platform start-up could solve its penguin problem using a Remora strategy: it could piggy-back on a larger platform to access its initial set of users, at no cost to the larger platform. Let’s consider an example. A dating start-up struggles to get its first set of users. While it needs rapid growth in numbers, it must also ensure that the profiles on the platform are of good quality (bots, anyone?). It has two options: developing its own validation algorithm, or integrating with larger platforms like Twitter or Facebook for profile validation. (It could still create its own algorithms if it needs to validate criteria specific to its business.) By integrating, it uses a Remora strategy, attaching itself to a larger Shark in the form of Twitter or Facebook. This costs Twitter or Facebook nothing and, if anything, contributes a marginal addition of traffic to them. For the start-up, however, it saves the significant costs of swimming down the depths of the ocean (developing and testing its own user validation algorithms).
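The two validation options can be sketched side by side. This is a hypothetical illustration: the field names and checks below are my own inventions, not any real Facebook or Twitter API.

```python
# Option 1 (Remora): trust the larger platform's verification signal.
def validate_via_shark(profile: dict) -> bool:
    """Accept a profile only if it carries a verified, aged account on the big platform."""
    return (profile.get("shark_account_verified", False)
            and profile.get("shark_account_age_days", 0) >= 180)

# Option 2 (in-house): home-grown checks, costly to build and test,
# but able to encode criteria specific to the start-up's business.
def validate_in_house(profile: dict) -> bool:
    return ("email" in profile and "@" in profile["email"]
            and len(profile.get("photos", [])) >= 1)

bot = {"shark_account_verified": False, "email": "x@spam.example", "photos": []}
genuine = {"shark_account_verified": True, "shark_account_age_days": 400}

assert validate_via_shark(genuine) and not validate_via_shark(bot)
```

The Remora option is one function call against someone else’s hard-won trust infrastructure; the in-house option is a whole engineering programme.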

Remora’s choice

Don Dodge first wrote about the Remora Business Model, where he wondered how both the Remoras and the Sharks made money, if at all. Building on this, Joni Salminen elaborated on the Remora’s curse. Joni’s dissertation elaborates on two dilemmas multi-sided platforms face – cold-start and lonely-user.

The cold-start dilemma occurs when a platform dependent on user-generated content does not get sufficient content in the early days to attract more users (to consume and/or generate content). There are two issues to be resolved here – attracting enough users to sustain the platform, and, in the process, balancing the numbers of content generators and content consumers.

The lonely-user dilemma occurs when a platform dependent on cross- and same-side network effects tries to attract its first users. A subset of the penguin problem: on such a platform, nobody joins unless everybody joins. The platform provides no intrinsic value, except that generated by interactions between and among user groups.

The cold-start dilemma can typically be resolved using intelligent pricing mechanisms, like subsidies for early adopters. For example, a blogging platform can attract influencers to start blogging on its site by providing them with premium services. As it resolves the cold-start dilemma and attracts enough users to blog and read (generate and consume), it could move to a freemium model (monetizing reading beyond a specified number of posts), while continuing to subsidise writers. The key is to identify the number of posts after which to start charging readers: too few free posts would reduce the number of readers, and high-quality writers would leave the platform; but with too many freely available posts, the platform may not make any money at all to sustain itself.
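The threshold trade-off above can be made concrete with a toy revenue model. The numbers and functional forms are entirely my own assumptions, chosen only to show that revenue peaks at an interior threshold: a larger free allowance attracts more readers, but fewer of them ever hit the paywall.

```python
def monthly_revenue(k: int, price: float = 5.0) -> float:
    """Revenue for a freemium blog that charges after k free posts (toy model)."""
    readers = 1000 * (1 - 0.9 ** k)   # audience grows with the free allowance
    paying_share = 0.5 * 0.8 ** k     # fewer readers exhaust k free posts and pay
    return readers * paying_share * price

# The revenue-maximising threshold sits strictly between the extremes.
best_k = max(range(1, 50), key=monthly_revenue)
assert monthly_revenue(best_k) > monthly_revenue(1)   # too stingy drives readers away
assert monthly_revenue(best_k) > monthly_revenue(49)  # too generous earns nothing
```

Any real platform would estimate these curves from data; the sketch only captures the shape of the dilemma.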

The lonely-user dilemma can typically be resolved by following a Remora strategy. By leveraging the users of a larger established platform, the first set of users can be sourced easily en masse. However, just having users is not sufficient – there is an issue of coordination: getting not just sign-ups but driving engagement. It is important that registered users begin engaging with the platform. Some platforms need more than engagement; they face a real-time problem: on a multi-player gaming or food-delivery platform, users need to be engaged with each other in real time. Some other platforms need users in specific segments – the transferability problem: users are looking for others within a specific segment, as on a hyperlocal delivery platform, a matrimony platform, or a doctor-finding platform. Such platforms need sufficient users in each of these micro-segments.
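The transferability point can be sketched as a simple liquidity check, entirely my own illustration: porting users en masse fixes the headline count, but a segmented platform only works where each micro-segment has enough users on both sides.

```python
from collections import Counter

def liquid_segments(users, min_per_side=5):
    """Return segments with at least min_per_side users on BOTH sides of the market."""
    counts = Counter((u["segment"], u["side"]) for u in users)
    segments = {seg for seg, _ in counts}
    return {s for s in segments
            if counts[(s, "provider")] >= min_per_side
            and counts[(s, "seeker")] >= min_per_side}

# A big ported user base can still leave whole segments dead:
users = ([{"segment": "indiranagar", "side": "provider"}] * 6
         + [{"segment": "indiranagar", "side": "seeker"}] * 9
         + [{"segment": "jayanagar", "side": "seeker"}] * 20)  # 20 seekers, no providers!
assert liquid_segments(users) == {"indiranagar"}
```

Thirty-five users in total, yet only one neighbourhood can actually generate matches.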

A Remora strategy could potentially help a platform start-up overcome these two major dilemmas – cold-start and lonely-user. By porting users from the larger platform, one could solve the lonely-user problem, and through tight integration with the content/ algorithms of the Shark platform, the Remora (start-up) could manage the cold-start problem.

Remora’s curse

The decision to adopt a Remora strategy is not simple for a platform start-up. There may be significant costs in the form of trade-offs. I can think of five significant costs that need to be weighed against the benefits of following a Remora strategy: (a) hold-up risk; (b) ceding monetization control; (c) access to user data; (d) risk of brand commoditization; and (e) exit costs.

Hold-up risk: There is a significant risk of the established platform holding the start-up to ransom, partly arising out of the start-up making significant asset-specific investments to integrate. For instance, the dating start-up would need to tightly integrate its user validation processes with those of Facebook or Twitter, as the need may be. It may have to live with whatever data Facebook provides through its APIs. It may be prone to opportunistic behaviour by Facebook when the latter decides to change certain parameters. For example, Facebook may stop collecting marital status on its platform, which may be a key data point for the dating start-up. Another instance of hold-up risk is Google resetting its search algorithm to only include local search rather than global search, thereby affecting start-ups integrating with Google.

In order to manage hold-up risks, Remora start-ups are better off not making asset-specific investments to integrate with the Shark platforms.

Monetization control: A significant risk faced by Remora start-ups is that of conceding the power to monetize to the Shark. For example, when a hyper-local restaurant discovery start-up follows a Remora strategy on Google, it is possible that Google gets all the high-value advertisements, leaving the discovery start-up with only low-value local advertisements. There is also a risk of the larger platform defining what can be monetised on the start-up’s platform. For example, given that users have gotten used to searching for free, even specialised search like locations (on maps), or specialised services like emergency veterinary care during off-working hours, may not be easy to monetise. Such platforms may have to cede control over which side to monetise and subsidise, and how much to charge, to the larger platform.

To avoid conceding monetization control to larger platforms, Remora start-ups need to provide additional value over and above the larger platform. For instance, in the local search business, a platform start-up would possibly need to not just provide discovery value (which may not be monetizable) but include matching value as well.

Access to user data: This is, in my opinion, the biggest risk of following a Remora strategy. Given that user data is the primary lever around which digital businesses customize and personalize their services and products, it is imperative that the start-up has access to its user data. It is likely that the larger platform restricts access to specific user data that may be very valuable to the start-up. For instance, restaurants that could have run their own loyalty programmes for their clients may adopt a Remora strategy on top of food delivery platforms like Swiggy or Zomato. When they do, the larger platform may run a loyalty programme for those clients based on the data it holds about each specific user, which is qualitatively superior to what the local restaurants have. In fact, in the context of India, these delivery platforms do not even pass on basic user profiles like demographics or addresses to the restaurants. The restaurants are left with their limited understanding of walk-in customers and a set of nameless/faceless customers in the form of platform users, about whom they can generate no meaningful insights or even consumption patterns.

It is imperative that platform start-ups define what data they require to run their business model meaningfully, covering user data as well as operations. This could take the form of specific contracts for accessing data and insights, and/or co-creating analytical models.

Risk of brand commoditization: A direct corollary of the user-data risk is that the Remora start-up could be commoditized, with its brand value subservient to the larger platform’s brand. It could end up being a sub-brand of the larger platform. For user generation and network mobilization, the Remora start-up would possibly need to get all its potential users to affiliate with the larger platform, even if that may not be the most desirable option. On a delivery platform, hungry patrons may be loyal to the aggregator and the specific cuisine, rather than to a restaurant. Given that patrons can split their orders across multiple restaurants, the quality and speed of delivery may matter more than other parameters. Restaurants might then degenerate into mere “kitchens” with excess capacity, and where there is no such excess capacity, these aggregators have been known to set up “white label” or “cloud kitchens”.

It is important that Remora start-ups step up their branding efforts and ensure that the larger brand does not overshadow theirs. The standard arguments on the relative brand strengths of complements in user affiliation decisions need to be taken into consideration while protecting the Remora’s brand.

Exit costs: The last of the Remora’s costs is the cost of exit. Much like exit costs from an industry, platform start-ups need to be clear whether their Remora strategy is something temporary for building up their user base and mobilizing their networks in the early stages, or relatively permanent. In some cases, the start-up’s core processes might be integrated with the larger platform, like the API integration for user validation, and may therefore impose significant exit costs. In other cases, the start-up may have focused on the core aspects of its business during the initial years and relegated non-core but critical activities to the larger platform. When the start-up is ready to exit the larger platform, it may require large investments in these non-core activities, which may lead to disruptions and costs. Add to this the costs of repurposing/rebuilding asset-specific investments made when joining the platform.

Remora start-ups, therefore, need a clear strategy on the tenure of the Remora arrangement and the point at which they would exit the association with the larger platform, including being prepared for the costs of exit.

Scaling at speed

Remora strategies offer platform start-ups an alternative way to scale their businesses very fast. However, it is imperative to understand the benefits and costs of such strategies and make conscious choices. These choices operate at three levels – the timing of the Remora, which processes to Remora, and building the flexibility to exit. Some platforms may need to attach themselves to larger platforms right from inception to even get started; others can afford to wait for the first users to start engaging with the platform before integrating. Which processes to integrate with the larger platform is another critical choice – much like an outsourcing decision, core and critical processes need to be owned by the start-up, while non-core, non-critical processes may surely be kept out of the platform. In all of these decisions, platform start-ups need to consciously decide the tenure and extent of integration with the larger platform, and make asset-specific investments accordingly.

Maintain social distance, leverage technology, and stay healthy!

Quote of the times

(C) 2020. Srinivasan R

Making Artificial Intelligence (AI) work

This is a follow-up to my post last week on Moravec’s Paradox in AI. In that post, I enumerated five major challenges for AI and robotics: 1) training machines to interpret languages, 2) perfecting machine-to-man communication, 3) designing social robots, 4) developing multi-functional robots, and 5) helping robots make judgments. All of that was focused on what programmers need to do. In this short post, I draw out the implications for what organisations and leaders need to do to integrate AI (and, for that matter, any hype-tech) into their work and lives.

Most of the hype around technologies is built around a series of gulfs: gulfs of motivation, cognition, and communication. They are surely related to each other. Let me explain them in reverse order.

Three gulfs

The first gulf is the communication gap between developers and managers. Developers know how to talk to machines. They actively codify processes and provide step-by-step instructions to machines to help them perform their tasks. Managers, especially the ones facing consumers, speak in stories and anecdotes, whereas developers need precise instructions that can be translated into pseudo-code. For instance, a customer journey to be digitalised needs to go through a variety of steps. Let me give you an example of a firm that I worked with. A multi-brand retail outlet wanted to digitalise customer walk-ins and help guide customers to the right floor/aisle. Sounds simple, right? The brief to the developers was to build a robot that would “replace the greeter”. The development team went about building a voice-activated humanoid robot that would greet a customer as she walked in, ask her a set of standard questions (like ‘what are you looking for today?’) and respond with answers (like ‘we have a lot of new arrivals on the third floor’). The tests went very well, except that the developers had not understood that only a small proportion of customers arrived alone! When customers came as couples, families, or groups, the robot treated them as different customers and tried responding to each of them separately. What made things worse was that the robot could not distinguish children’s voices from female voices, and greeted even young boys as girls/women. The expensive project remains a toy in a corner of the reception today, witness to the resurgence of plastic-smiling greeters. The entire problem could have been solved by a set of interactive tablets… Just because the managers asked the developers to “replace the greeter”, they went about creating an over-engineered but inadequate humanoid. The reverse could also happen, where the developers focus only on the minimum features, rendering the entire exercise useless.
For us to bridge this gulf, we either train the managers to write pseudo-code, or get the developers to visualise customer journeys.

The second gulf is that between algorithmic and creative thinking. Business development executives and strategy officers think in terms of stretch goals and focus on what is expected in the near and farther future. Developers, on the other hand, are forced to work with technologies in the realm of current possibilities. They refer to all this fuzzy language, aspirational goals, and corporatese as “gas” (to borrow a phrase from Indian business school students). The entirety of science and technology education at primary and secondary school is about learning algorithmic thinking. However, as managers gain experience and learn about the context, they are trained to think beyond algorithms in the name of creativity and innovation. While both creative and algorithmic thinking are important, the difference accentuates the communication gap discussed above.

Algorithmic thinking is a way of getting to a solution through the clear definition of the steps needed – nothing happens by magic. Rather than coming up with a single answer to a problem, like 42, pupils develop algorithms. These are instructions or rules that, if followed precisely (whether by a person or a computer), lead to answers to both the original and similar problems[1]. Creative thinking means looking at something in a new way. It is the very definition of “thinking outside the box.” Often, creativity in this sense involves what is called lateral thinking, or the ability to perceive patterns that are not obvious. Creative people have the ability to devise new ways to carry out tasks, solve problems, and meet challenges[2].
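The algorithmic-thinking point can be shown in a few lines. This small example is my own: instead of producing the single answer “42”, the pupil writes a precise rule that solves the original problem and every similar one.

```python
def sum_arithmetic(first: int, step: int, count: int) -> int:
    """Follow precise steps: start at `first`, add `step` repeatedly, accumulate."""
    total, term = 0, first
    for _ in range(count):
        total += term   # add the current term to the running total
        term += step    # move to the next term in the sequence
    return total

assert sum_arithmetic(3, 4, 4) == 3 + 7 + 11 + 15  # the original problem
assert sum_arithmetic(1, 1, 100) == 5050           # and a similar one, for free
```

The same steps, followed by a person or a computer, yield the answer to any problem of this shape: that generality is what distinguishes an algorithm from an answer.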

The third gulf is that of reinforcement. Human resource professionals and machine learning experts use the same word with much the same meaning: positive reinforcement rewards desired behaviour, whereas negative reinforcement penalises undesirable behaviour. Positive and negative reinforcements are an integral part of human learning from childhood, whereas machines have to be explicitly programmed for them. Managers are used to employing reinforcements in various forms to get their work done. However, artificially intelligent systems do not (yet) respond to such reinforcements. Remember the greeter robot we discussed earlier? What does the robot do when people get surprised, shocked, or even startled as it starts speaking? Can we programme the robot to recognise such reactions and respond appropriately? Most developers would use algorithmic thinking to programme the robot to understand and respond to rational actions from people; not emotions, sarcasm, and figures of speech. Natural language processing (NLP) can take us some distance, but helping the machine learn continuously and cumulatively requires a lot of work.
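What “programming reinforcement in” looks like can be sketched minimally. This is my own toy value-update rule (in the spirit of reinforcement learning, not any production system): the greeter robot only changes behaviour because we explicitly wired reward signals into an update equation.

```python
# Estimated value of each greeting style, updated only via explicit rewards.
greeting_value = {"formal": 0.0, "casual": 0.0}

def reinforce(action: str, reward: float, lr: float = 0.5) -> None:
    """Nudge the action's value toward the observed reward (positive or negative)."""
    greeting_value[action] += lr * (reward - greeting_value[action])

reinforce("casual", +1.0)   # visitor smiled: positive reinforcement
reinforce("formal", -1.0)   # visitor startled: negative signal
assert greeting_value["casual"] > greeting_value["formal"]
```

A human greeter internalises a frown instantly; the machine learns nothing unless someone decides what counts as a reward and writes the update rule.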

Those who wonder what happened!

There are three kinds of people in the world – those who make things happen, those who watch things happen, and those who wonder what happened! I am not sure if this is a quote from a specific person, but when I was learning change management as an eager management student, I heard my professor repeat it in every session. Similarly, there are managers (and organizations) who wonder what happened when their AI projects do not yield the required results.

Unless these three gulfs are bridged, organizations cannot reap adequate returns on their AI investments. Organizations need to build appropriate cultures and processes that bridge them. It is imperative that leaders invest in understanding the potential and limitations of AI, and that developers appreciate business realities. I am not sure how this will happen, or when these gulfs will be bridged, if at all.

Comments and experiences welcome.

Cheers.

© 2019. R Srinivasan, IIM Bangalore.


[1] https://teachinglondoncomputing.org/resources/developing-computational-thinking/algorithmic-thinking/

[2] https://www.thebalancecareers.com/creative-thinking-definition-with-examples-2063744

Moravec’s Paradox in Artificial Intelligence: Implications for the future of work and skills

What is artificial in AI?

As the four-day long weekend loomed and I was closing an executive education programme focused on digitalization and technology, especially in the context of India and emerging economies, I read this piece on AI ethics by IIMB alumnus Dayasindhu. He talks about the differences between teleological and deontological perspectives on AI and ethics. It got me thinking about technological unemployment (unemployment caused by firms’ adoption of technologies such as AI and robotics). For those of you interested in a little bit of history, read this piece (also by Dayasindhu) on how India (especially the Indian banking industry) adopted technology.

In my classes on digital transformation, I introduce the potential of Artificial Intelligence (AI) and its implications for work and skills. My students (in India and Germany) and executive education participants will remember these discussions. One of my favourite conversations has been about what kinds of jobs will get disrupted thanks to AI and robotics. I argue that, contrary to popular wisdom, we will have robots washing our clothes much earlier than robots folding the laundry. While washing clothes is a simple operation (for robots), folding laundry requires a very complex calculation involving identifying different clothes of irregular shapes, fabric, and weight (read more here). And most robots we have are single-use – made for a specific purpose, as compared to a human arm, which is truly multi-purpose (read more here). Yes, there have been great advancements on these two fronts, but the challenge remains – AI has progressed far in certain skills that seem very complex for humans, whereas robots struggle to perform certain tasks that seem very easy for humans, like riding a bicycle (which a four-year-old child can do with relative ease). The explanation lies in Moravec’s Paradox. Hans Moravec and others articulated this back in the 1980s!

What is Moravec’s Paradox?

“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

Moravec, 1988

It is very difficult to reverse engineer certain human skills that are unconscious. It is easier to reverse engineer deliberate, structured processes: repetitive motor processes (think factory automation), cognitive skills (think big data analytics), or routinised computations (think predictive/prescriptive algorithms).

“In general, we’re less aware of what our minds do best…. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.

Minsky, 1986

Moravec’s paradox proposes that this distinction has its roots in evolution. As a species, we have spent millions of years in the selection, mutation, and retention of specific skills that have allowed us to survive and succeed in this world. Examples of such skills include learning a language, sensory-motor skills like riding a bicycle, and drawing basic art.

What are the challenges?

Based on my reading and experience, I envisage five major challenges for AI and robotics in the days to come.

One, artificially intelligent machines need to be trained to learn languages. Yes, there have been great advances in natural language processing (NLP) that have contributed to voice recognition and responses. However, there are still gaps in how machines interpret sarcasm and figures of speech. A couple of years ago, a man tweeted to an airline about his misplaced luggage in a sarcastic tone, and the customer service bot responded with thanks, much to the amusement of many social media users. NLP involves the ability to read, decipher, understand, and make sense of natural language. Codifying grammar in complex languages like English, accentuated by differences in accent, can make deciphering spoken language difficult for machines. Add to that contextually significant figures of speech and idioms – what do you expect computers to understand when you say, “the old man down the street kicked the bucket”?
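The idiom problem can be demonstrated with a deliberately naive toy. This is my own illustration, not a real NLP system: a literal keyword reading of “kicked the bucket” produces a statement about buckets, while even a crude idiom lookup recovers the intended meaning.

```python
LITERAL_KEYWORDS = {"kicked": "physical action", "bucket": "container"}
IDIOMS = {"kicked the bucket": "died"}

def naive_reading(sentence: str):
    """Literal keyword spotting: matches words, misses figurative meaning."""
    return [LITERAL_KEYWORDS[w] for w in sentence.lower().split()
            if w in LITERAL_KEYWORDS]

def idiom_aware_reading(sentence: str):
    """Check known idiomatic phrases before falling back to literal reading."""
    for phrase, meaning in IDIOMS.items():
        if phrase in sentence.lower():
            return meaning
    return None

s = "The old man down the street kicked the bucket"
assert naive_reading(s) == ["physical action", "container"]  # literal nonsense
assert idiom_aware_reading(s) == "died"                      # intended meaning
```

Real systems use far richer context models, but the gap the toy exposes is exactly the one the airline bot fell into.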

Two, beyond interpreting language, machine-to-man communication is tricky. We have “pick-and-place” robots in industrial contexts; can we have “give-and-take” robots in customer service settings? Imagine a food-serving robot in a fine dining restaurant… how do we train the robot to read moods and suggest the right cuisine and music to suit the occasion? Most of the robots we have as I write this exhibit puppy-like behaviour, a far cry from naturally intelligent human beings. Humans need friendliness, understanding, and empathy in their social interactions, all of which are very complex to programme.

Three, there have been a lot of advances in environmental awareness and response. Self-navigation and communication have significantly improved thanks to technologies like Simultaneous Localisation and Mapping (SLAM), and we are able to visually and sensorily improve augmented reality (AR) experiences. Still, having human beings in the midst of a robot swarm is fraught with a variety of risks. Not only do different robots need to sense and respond to the location and movement of other robots, they also need to respond to “unpredictable” movements and responses of humans. When presented with danger, different humans respond differently based on their psychologies and personalities, most often shaped by a series of prior experiences and perceived self-efficacies. Robots still find it difficult to sense, characterise, and respond to such interactions. Today’s social robots are designed for short interactions with humans, not for learning the social and moral norms that lead to sustained long-term relationships.

Four, developing multi-functional robots that can reason. Reasoning is the ability to interpret something in a logical way in order to form a conclusion or judgment. For instance, it is easy for a robot to pick up a screwdriver from a bin, but quite something else for it to pick it up in the right orientation and use it appropriately. It needs to be programmed to realise when the tool is held in the wrong orientation and to self-correct to the right orientation for optimal use.

Five, even when we can equip a robot with a variety of sensors and train it to develop logical reasoning through detailed pattern evaluations and algorithms, it is difficult to train it to make judgments – for instance, to make out what is good or evil. Check out MIT’s Moral Machine here. Apart from developing morality in the machine, how can we programme it to be not just consistent in behaviour, but also fair, using appropriate criteria for decision-making? Imagine a table-cleaning robot that knows where to leave the cloth when someone rings the doorbell. It needs to be programmed to understand when to stop one activity and when to start another. Given the variety of contexts humans engage with on a daily basis, what they learn naturally will surely take complex programming.

Data privacy, security and accountability

Add to all these the issues around data privacy and security. Given that we need to provide robot and AI systems with enough data about humans, and that we have limited ability to programme such systems, privacy is critical. Consent is the key word in privacy; but when we are driving in the midst of autonomous vehicles (AVs), which collect enormous amounts of data just to navigate, we need strong governance and accountability. When an AV is involved in an accident with a pedestrian, who is accountable – the emergency driver in the AV; the programmer of the AV; the manufacturer of the vehicle; any of the hardware manufacturers, like the makers of the cameras/ sensors that did not do their jobs properly; or the cloud service provider that did not respond soon enough for the AV to save lives? Such questions are pertinent, and too important to be settled post facto, after such accidents occur.

AI induced technological unemployment

At the end of all these conversations, when I look around me, I see three kinds of jobs being lost to technological change: a) low-end white-collar jobs, like accountants and clerks; b) low-skilled data analysts, like the ones at a pathology lab interpreting a clinical report or a law apprentice doing contract reviews; and c) hazardous-monotonous or random-exceptional work, like monitoring a volcano’s activity or seismic measurements for earthquakes.

The traditional blue-collar jobs like factory workers, bus conductors/ drivers, repair and refurbishment mechanics, capital machinery installation, agricultural field workers, and housekeeping staff would take a long time to be lost to AI/ robotics: primarily because these jobs are heavily unpredictable; secondly because they involve significant judgment and reasoning; and thirdly because the costs of automating them would possibly far outweigh the benefits (due to low labor costs and high coordination costs). Not all blue-collar jobs are safe, though. Take, for instance, staff at warehouses – with pick-and-place robots, automatic forklifts, and technologies like RFID sensors, a lot of jobs could be lost. Especially where quick response is the source of competitive advantage in warehousing operations, automation will greatly reduce errors and increase reliability of operations.

As Byron Reese wrote in the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, “order takers at fast food places may be replaced by machines, but the people who clean-up the restaurant at night won’t be. The jobs that automation affects will be spread throughout the wage spectrum.”

In summary, in order to understand the nature and quantity of technological unemployment (job losses due to AI and robotics), we need to ask three questions – is the task codifiable? (on average, tasks that the human race has learnt in the past few decades are the easiest to codify); is it possible to reverse-engineer it? (can we break the task into smaller sub-tasks?); and does the task lend itself to a series of decision rules? (can we create a comprehensive set of decision rules that could be programmed into neat decision trees or matrices?). If you answered in the affirmative to these questions with reference to your job/ task, go learn something new and look for another job!
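The three questions above amount to a rough screening checklist. A minimal sketch, with a scoring scheme and risk labels that are entirely my own invention:

```python
def automatability(codifiable: bool, decomposable: bool, rule_based: bool) -> str:
    """Rough risk label based on how many of the three questions get a 'yes':
    is the task codifiable? can it be reverse-engineered into sub-tasks?
    does it lend itself to decision rules?"""
    score = sum([codifiable, decomposable, rule_based])
    if score == 3:
        return "high risk: go learn something new"
    elif score == 2:
        return "moderate risk"
    return "low risk (for now)"

# Contract review: largely codifiable, decomposable, and rule-based
print(automatability(codifiable=True, decomposable=True, rule_based=True))
# Housekeeping: unpredictable and judgment-heavy, even if decomposable
print(automatability(codifiable=False, decomposable=True, rule_based=False))
```

The point of the sketch is only that the three questions compose: a task has to clear all three hurdles before neat decision trees or matrices become feasible.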

Cheers.

© 2019. R Srinivasan, IIM Bangalore

Learning from failures

The recent suicide of Mr. V G Siddharth, the celebrated founder of Café Coffee Day, prompted me to reflect on how individuals and organisations think about failure (read what he wrote in his note to the board, here). While in our classes on innovation we keep harping on why we should learn from failure, I have not yet had an opportunity to dwell on the “how” question. Here are my thoughts on how firms can learn from failures.

One of my colleagues at IIMB introduced me to the work of Prof. Amy Edmondson, especially on psychological safety. While reading about psychological safety, I came across her work on three types of failure, specifically in the popular HBR article titled “Strategies for Learning from Failure”.

Types of failures

She elucidates three types of failure – preventable failures, complex failures, and intellectual failures. Preventable failures occur when one had the ability and knowledge to prevent them from happening. Making silly mistakes (in a test) that you could have avoided had you spent some time on review; deviating from manufacturing/ service processes due to laxity or laziness; and just taking some things for granted, like jumping a traffic signal in the middle of the night, are examples of preventable failures. Such failures are clearly attributable to the individual, and therefore she/ he should be held accountable. Ways of managing such preventable failures include defining and following checklists and processes; establishing clear lines of supervision and approvals; and conducting a series of intermediate reviews at predefined critical junctures.

Complex failures happen in spite of having processes and routines. They arise from failures at a variety of points, including internal and external factors, that individually might not cause failure but, when occurring together, may. Agricultural (crop) failures, business (startup) failures, or even industrial accidents like the Bhopal gas tragedy and the Fukushima disaster are examples of complex failures. Such factors are difficult to predict, as the combination of problems may not have occurred before. Fixing overarching accountability for such failures is futile, and these can be considered unavoidable failures. It is these kinds of failures that provide fertile sources of learning to firms. Firms need to be prepared to review such failures, dissect the individual factors, and establish robust governance processes so as to (a) sense such systemic problems when they occur at the individual factor level; (b) erect early warning signals for alerting/ educating the organisation about escalation of these problems into failures; and (c) define options for counter-measures for managing each of these problems in particular, and the occurrence of failure at the systemic level.

Intellectual failures happen due to lack of knowledge about cause-effect relationships, especially at the frontiers of science and behaviour, where such situations have not arisen before. Such situations are ripe for experimentation and entrepreneurial exploration. Firms need to sustain their experimentation and approach the problem-solution space entrepreneurially. There could be situations where the solution is too early for the problem, or where the ecosystem is such that the problem is not ready to be solved. The automobile industry’s experimentation with electric vehicles, in India and indeed all over the globe, would fall under such experimentation. Processes such as open innovation and embedded innovation would greatly contribute to learning. One such experimental innovation boundary space is JOSEPHS, built in the city centre of Nuremberg, Germany as an open innovation laboratory.

Learning from failures

In order to learn from preventable failures, organisations need to strengthen their processes, embark on benchmarking exercises both within their own organisation as well as with others in their competitive/ collaborative ecosystems, and continuously evaluate the impact of their initiatives. The Indian telecommunications firm Airtel promised a consistent customer experience across all the 23 telecom circles it operated in India. One of its initiatives was to constantly benchmark each circle’s performance on a wide range of non-financial parameters and enable other circles to either learn & replicate the process that led to the performance, or justify why their circle had different processes that would achieve the same performance. Such justifications would be documented as new processes and would be candidates for replication by other circles. This enabled Airtel to improve its performance to six sigma levels and provide a consistent customer experience across all its circles.

Learning from complex failures requires firms to undertake systematic and unbiased reviews of such failures, typically by engaging external agencies. Such reviews would be able to dissect failures at each factor level, the interdependencies across all these factors, and the causes of failure at the systemic level as well. When unbiased reviews happen, they allow organisations to strengthen their external (boundary-spanning) opportunity sensing and seizing processes; refine their interpretation schema to provide the organisational units/ senior management with early-warning signals; and create options for managing each of these problems well before they actually occur. For instance, in response to the Fukushima Daiichi disaster, the Japanese Government decided to review its nuclear power policy and undertook a variety of counter-measures, including shutting down old/ ageing power plants and introducing a slew of regulations/ restrictions on the nuclear industry.

Learning from intellectual failures is possibly the easiest. The firm just needs to “persist”. One of the firms I was consulting for referred to their experimental product-market venture as a Formula-1 track: failures are insulated there, whereas successes can be easily transferred to mainstream product-markets. It is this kind of mindset shift that enables continuous learning from intellectual failures. For instance, the failure of the E-commerce venture FabMart in the early-2000s Indian market is an intellectual failure (well documented in the book Failing to Succeed by its co-founder, K Vaitheeswaran). When we wrote the case on FabMart in the year 1999 (available in the book Electronic Commerce), we hailed it as a harbinger of change in the way India would adopt the Internet and E-commerce. However, the business failed. The co-founders regrouped over the next decade and have created other E-commerce enterprises (Bigbasket and Again). The failure of the earlier venture provided them an opportunity to reflect on the specific reasons for failure, treat it as an experiment, and learn from it. Intellectual failures, therefore, need to be celebrated and treated exclusively as opportunities to learn.

In summary, mistakes lead to failures when we fail to learn from them and keep repeating them. Let us admonish repeated mistakes and celebrate failures!

Cheers!

(c) 2019. R Srinivasan

Collaborative economy, sharing economy, gig economy … what are they?

If you have been reading any technology-business interface discussions recently, you must surely have heard these (and more) words… Almost all platform businesses have been described using one or more of these labels. In this post, I analyse their meanings and differences.

Collaborative economy

As the name suggests, collaborative economy is when different parties collaborate and create new, unique value that would not have been possible individually. For instance, economic activities like crowdfunding or meetup groups qualify as parts of the collaborative economy. Platforms like Kickstarter or Innocentive help people collaborate and create new value. However, platforms like Uber or Airbnb do not qualify – drivers (on Uber) and hosts (on Airbnb) do not collaborate amongst themselves or with their riders (on Uber) and guests (on Airbnb) – they do business with the other side.

Sharing economy

At best, Uber and Airbnb, in their purest forms, qualify as sharing economy participants. In the sharing economy, participants share their surplus assets/ capacity with others (for a fee, of course); and there may be platforms facilitating this discovery and sharing (for a fee, surely). When what is shared is a capital asset like a house (on Airbnb) or a piece of high-value equipment (in makerspaces), the economic argument is based on high fixed/ sunk costs and low marginal costs, coupled with low capacity utilization/ spare capacity. This is when these economic transactions become sharing. For instance, when I am driving long distance alone and want company for the journey (in the process, also sharing the cost), I could use BlaBlaCar, and that would be sharing economy, as the three conditions are met: (1) the asset shared (the car) is a capital asset; (2) the marginal cost of adding another passenger to the car is negligible; and (3) the car has space for the additional passenger. Value is created for both of us – I got company through the trip as well as a reduced cost; my co-passenger got to travel comfortably ‘with company’ at a much lower cost. I am not a professional driver looking to make money by transporting passengers from city A to city B. That would make me a ‘gigzombie’.
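The three conditions can be written down as a simple predicate. A minimal sketch, with field names of my own invention:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    capital_asset: bool      # (1) is the shared asset a capital asset?
    low_marginal_cost: bool  # (2) is the marginal cost of sharing negligible?
    spare_capacity: bool     # (3) is there genuine excess capacity?

def is_sharing_economy(t: Transaction) -> bool:
    # All three conditions must hold for a transaction to count as "sharing".
    return t.capital_asset and t.low_marginal_cost and t.spare_capacity

# A BlaBlaCar-style ride: capital asset (car), negligible marginal cost,
# and a genuinely empty seat on a trip I would make anyway.
print(is_sharing_economy(Transaction(True, True, True)))   # True
# A full-time professional driver has no genuine spare capacity:
# the trip exists only because of the fare.
print(is_sharing_economy(Transaction(True, True, False)))  # False
```

The second example is the gigzombie boundary: once the trip would not happen without the fare, the capacity being "shared" is not surplus at all.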

The gig economy

Used almost in a derogatory manner, the gig economy refers to the idea of loosely connected people sharing their labor/ expertise for professional returns. These laborers are not employees, but take on really short-term jobs or ‘gigs’. Such short-term gigs are provided by platforms like Uber. What Uber has come to today is a marketplace for professional drivers. Uber does not employ them (with all the benefits and security of employment), but expects them to work as if it did. Uber attracts professional driver-partners to serve its riders. The opportunity costs for these drivers are pretty high, unlike in the sharing economy. For instance, in the context of BlaBlaCar, if I did not find a partner (or someone I liked), I would still drive that long distance, because I had work in that city. Co-passenger or not, I would still go. I am not dependent on BlaBlaCar for meeting my costs. What a gig economy company like Uber does is hire professional drivers (who would have otherwise been employed as drivers or in other roles) and give them business. This is fine as long as the ‘gig’ is a small proportion of your total work.

Take, for instance, a photographer who has his own professional practice. He acquires his customers through direct sales, word-of-mouth, and search engine/ social media marketing. When a gig economy platform like Urbanclap begins providing him business, it adds to his existing income. And he is willing to pay a commission to Urbanclap for getting him customers (whom he would have otherwise found difficult to get). However, when Urbanclap provides him ‘all’ his business, the ‘gig’ economy kicks in, and the photographer is at the mercy of his one and only source of jobs. He is now a ‘gigzombie’, and the aggregator can steeply increase its commissions.

The point to note is that in the gig economy, there is no collaboration/ co-creation (no common value added), nor is there shared-value creation (fixed assets, low marginal costs, and excess capacity).

Business models for collaborative, shared, and gig economies

Given that these three economies have different architectures of interaction amongst partners, we cannot have the same business models serving them. Collaborative economies require business models where the platform allows partners to complement each other’s value creation efforts; sharing economies require business models that match excess capacity with demand; whereas gig economies require aggregation and improving the efficiency of the overall system.

‘Bargaining’ power of gig economy platforms

Given that some of these gig economy platforms operate in winner-takes-all markets with high multi-homing costs, their bargaining power (in the traditional sense of the term) increases significantly. Multi-homing costs refer to the bureaucratic (search and contracting) and transaction (including variable) costs incurred by a set of partners in maintaining affiliations with multiple platforms simultaneously (switching costs refer to the costs of abandoning one platform in favor of another). For instance, given the high multi-homing costs for drivers on the Uber platform, it is nigh impossible for a driver-partner to fairly negotiate the terms of engagement with Uber. Combine this with the unorganized nature of the gigzombies, and we have the potential for organized exploitation.

Policy issues in gig economies

Gig economies typically attract ‘weak’ partners, at least on one side. For instance, Uber drivers work like employees (full-time with Uber) with little or no employment benefits. The imbalance is pretty stark when these platforms begin disrupting existing, established business models. Take the arguments against Airbnb around the systematic reduction of the supply of long-term housing in cities. When house-owners are faced with a choice between long-term rental contracts and short-term rentals on Airbnb, they may choose the more financially lucrative short-term Airbnb rentals; and when more and more home-owners do this in a city, the supply of housing reduces, driving up rental prices.

The impact of gig economy platforms on employment is starker still – especially when the labor is ‘online work’. In such cases, the worker in San Bernardino, California is competing with similarly trained workers in Berlin (Germany), Beijing (China), Bangalore (India), or Bangkok (Thailand). Obviously the labor costs are different, and the lack of regulation favors significant displacement of work to cheaper options. Even in the case of physical labor like driving cars, developed countries have depended on immigration from countries with a demographic dividend and lower costs of skilling (costs of education and vocational training) – some world leaders’ public stances notwithstanding.

Do not confuse the three of them!

In summary, we are talking about three different business models when we talk of collaborative, sharing, and gig economies. And they need to be treated differently. Primarily, regulators and investors need to understand the differences and frame policies that are sensitive to them.

Cheers!

(c) 2018. R. Srinivasan

 

Digital transformation imperatives

I wrote a blog post on digital transformation stages about two years ago (yes, it has been two years!). In that post, I outlined digital transformation as a process and outlined a four-stage model. Over the past couple of months, I have been leading the IIMB executive education programme on Leading Digital Transformation. This programme, offered jointly with the Friedrich Alexander University of Erlangen-Nuremberg and the Fraunhofer Institute of Integrated Circuits IIS, has attracted an impressive class of participants (2018 being the pioneering batch)*. I have been closely interacting with the participants of the programme, who are leading/ looking to lead digital transformations in their respective enterprises over their modules in Bangalore and Nuremberg. As I sat through their project proposal presentations, three thoughts flashed through my mind:

  1. True digital transformation is possible only when there is a mindset change.
  2. Process reengineering should help the enterprise seek to add new/ additional value, rather than just more of the same value.
  3. Enterprises should proactively plan to utilize and leverage the large amounts of data such transformational journeys generate, failing which the value added wouldn’t be sustainable.

Digital transformation as a mindset change

Digital transformation is not just automating processes or introducing fancy technologies in the workplace. There are a lot of organizations out there that have embarked on the slippery path of digital transformation without fully understanding what it takes. I remember the early days of computerization, when just introducing computers was not sufficient – it was important to highlight how and why computers helped improve organizational efficiency (reliability of repeatable processes). Managers resisting change would raise objections like: organizational decision making cannot be programmed, and therefore technology change is likely to result in loss of employment and, subsequently, of control. I hear the same arguments today … especially with respect to automation and artificial intelligence … the human touch will be gone, digital assistants cannot replace human beings, and the like. It is imperative for leaders to communicate to both the far and near mindsets (read more about far and near mindsets here) to effect acceptance and adoption.

New/ additional value creation

Leaders should consistently communicate the new value created as a result of digitalization. Just additional value (more of the same) does not always justify the investments in digital transformation … even though the competitive context may demand it. To do more of the same things and add the same value, you just need to digitalize your existing processes and make them more efficient. It is when you want to add new value that you need to embark on the process of digital transformation. And this happens when you begin looking at processes with an intent to re-engineer – critically evaluate the processes and drive change at the process design layer, rather than at the execution layer; and ensure that new/ additional value is created and appropriated.

Descriptive, predictive, and prescriptive data analysis

Value creation/ generation is a function of process re-engineering and re-design; value appropriation is far more difficult. Effective value appropriation requires the organization to invest in deeper analysis of the vast amounts of data generated, to be able to (a) describe (based on patterns in the past), (b) predict (extrapolate based on past data, as well as estimates of other parameters that affect the phenomena), and (c) prescribe a course of action (based on the juxtaposition of the description and prediction of the environment with the organization’s intent and capabilities). The criticality of descriptive, predictive, and prescriptive data analysis needs to be articulated and socialized in the organization as key to sustaining the digital transformation journey. In the absence of such insights, these transformational efforts will surely fade away, if not fail!
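The three layers can be illustrated on a toy series. The numbers, the naive trend extrapolation, and the capacity threshold below are all invented for illustration; real descriptive/ predictive/ prescriptive pipelines are far richer, but the layering is the same:

```python
# Hypothetical monthly order volumes for some digitalized process
orders = [100, 110, 125, 138, 150]

# (a) Descriptive: patterns in the past
mean = sum(orders) / len(orders)
growth = [b - a for a, b in zip(orders, orders[1:])]

# (b) Predictive: a naive extrapolation from the average month-on-month growth
forecast = orders[-1] + sum(growth) / len(growth)

# (c) Prescriptive: a course of action, juxtaposing the prediction
# with the organization's intent and (assumed) capacity constraint
capacity = 160
action = "expand capacity" if forecast > capacity else "hold steady"

print(mean, forecast, action)
```

Note how each layer feeds the next: description summarizes the past, prediction extends it, and prescription only becomes possible once the prediction is set against intent and capability.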

Such insights should provide the organization with the much-needed data to convince its members of the short- and near-term value creation due to digital transformation efforts.

 


So, in summary, digital transformation requires an organization to continuously invest in mindset change, process redesign, and generate insights.

Cheers!

© 2018. R Srinivasan.

* Disclaimers: I am the IIMB Chair for Executive Education Programmes that markets these programmes; I am also the programme director (course designer and teacher) of the said programme: Leading Digital Transformation; and I am a visiting professor at the Friedrich Alexander University of Erlangen-Nuremberg. Apologies for the self-promotion, and the (unintended) marketing pitch for the programme.

Far and near mindsets

Last month, I was at Yale University, listening to Prof. Nathan Novemsky on different mindsets. Of the various mindsets we discussed, psychological distance (and its impact on communication and marketing) caught my attention. In this blog post, I elaborate on the concept of psychological distance and why it is important in the context of entrepreneurship and multi-sided platform businesses.

Psychological distance: Basics

Prof. Novemsky’s (and his colleagues’) research indicates that as people get closer to the decision in terms of time, their mindset changes from a “far mindset” to a “near mindset”. When people engage with you on a far mindset, they are concerned about the “why” questions; whereas when they engage with you on a near mindset, they are concerned about the “how” questions.

Let me illustrate. When a customer downloads an Uber App for the first time, she is more concerned about how she is contributing to the environment by being part of the shared economy, and therefore is less concerned about issues like the minute features of the user interface/ user experience. On the other hand, a customer who is getting out of a day-long meeting with a demanding customer is worried more about the minute details of the ride, like the time taken for the car to arrive, type and cleanliness of the car, and driver’s credentials and behavior; as she is engaging on a near mindset.

Communication and marketing

Understanding the mindset with which your customer is engaging with you is imperative to designing your communication. When you advertise a grocery home delivery service on television, you might want to appeal to the consumer’s far mindset … that talks about why he should choose your service rather than the neighborhood grocer/ vegetable market. For instance, the benefits of fresh produce straight from the farm (without middlemen), delivered faster, would make immense sense. However, when you communicate with your customer after he has decided to place an order, you might want to talk about specific discounts, receiving delivery at a convenient time, quantity changes, add-ons and freebies, and payment options.

What does this mean to start-ups/ entrepreneurs?

That’s simple, right? A founder communicating with a potential investor should talk to the “far mindset” rather than the “near mindset” if he has to raise money. However, a customer presentation has to appeal to the near mindset.

For instance, the home-health care start-up for pets (petzz.org) communicates the convenience of all-day home-visits by veterinarians to its pet-owners; the specific plans available that pet-owners can choose from; and the significant increase in business for its veterinarian partners. However, when it runs camps to enroll pet-owners, it talks about “healthy pets are happy pets”, communicating to the far mindset.

The investor deck, however, appeals only to the far mindset … how their business model leads to “healthy pets” and why this is a compelling value proposition for its pet-owners, veterinarians, as well as other partners in its platform.

[Disclaimer: I advise petzz.org]

Implications for multi-sided platforms

Not so simple. I can envisage that the different sides of a platform may be operating at different mindsets, something MSPs need to be continuously aware of. Take the example of the social-giving/ crowd-funding platform Milaap (milaap.org). The two sides of the platform are givers and fund-raisers.

Imagine a fund-raiser’s appeal … which one appeals to you most?

  1. “help a school from rural Chhattisgarh build toilets for girls”
  2. “help support girls’ education”
  3. “make sure girls like Shanti don’t drop out of school”

As you move down from option 1 to 2 to 3, you are increasingly operating from the far-mindset!

On the other hand, consider Milaap attracting fund-raisers with the following messages:

  1. “you get the most socially-conscious givers at milaap”
  2. “it’s easy to communicate with givers at milaap”
  3. “it’s easy to log in and set up, and it’s free”

As you move down from option 1 to 2 to 3 here, you move towards a near-mindset!

It gets more complicated when the different sides of the platform are at different stages of decision-making. For instance, when a C2C used-goods marketplace platform like Quikr has a lot of buyers and fewer sellers, the messaging across the two sides has to be different! For the sellers who are still contemplating joining the platform, the message has to appeal to the far mindset (of decluttering their homes), whereas for the umpteen buyers who are looking for goods on the platform, the message has to appeal to the ease of transacting (near mindset).

Match the message to the mindset and the stage of the engagement

In summary, effective platforms have to communicate consistently across the multiple sides of the platform, while keeping in mind the different mindsets of the respective sides. A cab hailing app has to communicate differently to its riders and drivers, while sustaining the same positioning. If the rider value offering is about the speed of the cab reaching you, the driver communication has to be consistent – the speed of reaching the rider. For the driver, this is near mindset (speed of reaching the rider is about efficiency), whereas for the rider, speed may appeal to the far mindset (about not driving your own car and keeping it waiting all day at an expensive parking place; or better still, reducing congestion in the city centers). And for sure, these messages also have to change over the various stages of consumer engagement, right?

Any examples of mismatched communication welcome!

Cheers from a rainy day in Nuremberg, Germany.

© 2018. R. Srinivasan

Facebook did not “steal” my data: I actually work for Facebook and produced the data!

In one (more) of my flights, I had a conversation with a co-passenger. A young adult, one whom a lot of us would be so excited to label a “millennial”, she was excited about the recent announcement that airlines in India can offer in-flight internet connectivity. Even as the flight was on the active runway, she was on her mobile phone (ignoring the requests of the flight attendants), taking pictures of herself in the plane and posting them on Facebook. She was on her way to attend a friend’s graduation, and everyone who was travelling in was marking their presence. As the network died, she turned around to me and asked, “so, how long will it be before the airline companies implement internet connectivity in-flight?”. I explained how difficult it would be for existing planes to be retro-fitted with such capabilities, though new planes could be inducted with the facility. As she realised it was going to be some time, she sighed, put on her headphones, and went off to doze.

It set me thinking about my previous post “Facebook stole my data“. By calling that data mine (belonging to me), I was actually falling for the “personal data as a mine” metaphor – the idea that data is a resource much like a natural resource, there for tech firms to extract, as if it were fossilised like oil or coal, or formed over a long period of time, like alluvial soil. If that were so, then governments should regulate it well. Governments should therefore own all these data and possibly license the extractors with specific mandates to provide unique value to the users (citizens), and such resources would remain in the realm of a public good.

The trouble with this metaphor is that the data did not exist in the first place … I created that data, and continue to create it. Nicholas Carr, in his blog, aptly wrote: I am a data factory (and so are you). He debunks the mining metaphor and says that just by living – walking, interacting with friends, buying, reading, and just lazily watching something on the internet – I create data. And not just any data, in any form … data in exactly the form that Facebook wants me to create. This is a little more nuanced than what I argued in my previous post … I argued that Facebook created the data. Carr argues that I create that data, working for Facebook, in a form and design that Facebook wants. Do you see yourself as working for tech companies like Amazon, Facebook, Apple, Google, and Netflix, not for a salary/ remuneration, but as a data factory? And worse, the data that you produce being used to “enrich” someone in another corner of the world, in the name of providing you with a meaningful experience?

This metaphor puts the ownership of the design of the data, apart from the data itself, on the tech companies. And in making this addictive (recall how the first line of your Facebook homepage starts with “write something”, possibly in your mother tongue!), tech companies like Facebook are trying to engineer our lives. Really? Does Facebook tell me what to do? Or does Facebook want to know what (everything, whatever) I do? I would tend to believe the latter.

Here is where the difference between use and addiction begins. When you use, you benefit … when you are addicted, they benefit. Pretty much like using sinful products (like tobacco and alcohol), handling dangerous products (like fire and guns), and managing borderline behaviour (like assertiveness becoming aggressiveness), we should possibly educate ourselves about responsible use.

Responsible use of Facebook! That sounds not-so-fun. The flight was about to land, and the flight attendant was waking up the lady seated next to me to put her seat in the upright position. She woke up excited and looked outside to see if we had landed … and as soon as we landed, she opened her Facebook app and went on to “write something”. I did apologise for snooping into her behaviour, but she said she was fine. She was comfortable giving away her behavioural/ affiliative/ preferential/ locational data to some tech company server … and she was not one bit uncomfortable sharing her behavioural data with me. I told her that I would write about her Facebook addiction as we were getting off the plane, and she said that would be “cool”, gave me her email id, and asked me to email the piece to her (which I will promptly do). She would possibly post it on her Facebook wall too. So much for addiction.

So much for mid-week reading.

Cheers.

(C) 2018. R Srinviasan