AI is taking over the service industry, although concerns remain about whether its use is safe and fair. Increasingly, under the theme of Corporate Digital Responsibility (CDR) – the ethical and safe use of technology by firms – businesses are being urged to use AI responsibly. There is emerging evidence that service firms are committing more to AI-related CDR. For example, the communication platform Glassix audits its AI models regularly and uses diverse training datasets to reduce bias.
People remain sceptical about service firms’ commitment to using AI responsibly. I call this a legitimacy gap – a lingering perception that service firms are failing to act in a socially responsible manner when using AI, irrespective of their actual practices. Moreover, people's concerns about AI may be contributing to this legitimacy gap.
My collaborators and I analysed multinational data to show that, both within and across diverse countries, people’s concern about AI increases their preference for government regulation of AI over self-regulation by service firms. Country-level regulatory quality, rule-of-law perceptions, and information and communication technology (ICT) use influence the degree of preference for government regulation. We also found evidence of some cultural variation in the extent to which government regulation is preferred.
Our behavioural experiment using Facebook Ads further validated the main finding from the first study – i.e., people's concern about AI increases their preference for government regulation vs. self-regulation by service firms.
In follow-up lab experiments, we tested whether signalling compliance with government regulation vs. CDR (i.e., self-regulation by firms) helps firms mitigate the negative downstream impact of people's concern about AI ("AI concern").
When interacting with a service chatbot, participants’ AI concern reduced their willingness to share data. But this unwillingness to share data was itself reduced when the firm indicated that it complies with government regulation (the EU AI Act). This is because compliance with government regulation (vs. firms' self-regulation) reduces how vulnerable people feel to AI.
Therefore, AI service firms can build trust with customers by signalling CDR in terms of compliance with government regulations, provided they fulfil the regulatory requirements.
Reference:
Yoganathan, V., Osburg, V. S., & Janakiraman, N. (2025). Lending Legitimacy to Corporate Digital Responsibility: Trust in Firm Versus Government Regulation of Artificial Intelligence Services. Journal of Service Research, 10946705251345097.
Service robots, including chatbots, voice-bots, and digital humans, are becoming more common. Service firms therefore need to consider the attitudes of not just early adopters, but of all types of users in the general population. However, while researchers have often studied individual attitudes towards service robots, service firms would benefit from focusing on societal (or population-level) attitudes.
Population-level Attitudes
Population-level attitudes are aggregated patterns of different individual attitudes, representing collective views in the general population. A person’s attitudes can change based on their experiences or learning over time, especially given the fast-developing nature of robotic technologies, whereas population-level attitudes are much more stable. In turn, stable attitudes are more likely to influence people’s behavior. Another advantage is that targeting population-level (vs. individual-level) attitudes does not rely on individual customers’ personal data.
So, what does our research say?
To get a true representation of population-level attitudes, my colleagues and I analyzed nearly 90,000 data-points from diverse sources (e.g., population surveys, online reviews, experiments). The data were gathered from multiple countries over a 12-year period (2012-2024).
We found consistent evidence for four types of stable attitudes:
· Positive (“adore”) – high benefit and low risk perceptions about service robots.
· Negative (“abhor”) – low benefit and high risk perceptions.
· Indifferent (“ignore”) – low benefit and low risk perceptions.
· Ambivalent (“unsure”) – high benefit and high risk perceptions.
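As a purely illustrative sketch (not the authors' method or measures), the four attitude types can be read as a simple two-by-two classification of benefit and risk perceptions. The scoring scale and midpoint below are hypothetical assumptions:

```python
# Hypothetical sketch: mapping benefit/risk perception scores
# (assumed here to be on a 1-5 scale with midpoint 3.0) onto the
# four attitude types described in the research.

def classify_attitude(benefit: float, risk: float, midpoint: float = 3.0) -> str:
    """Return the attitude type for a pair of perception scores."""
    high_benefit = benefit > midpoint
    high_risk = risk > midpoint
    if high_benefit and not high_risk:
        return "positive (adore)"      # high benefit, low risk
    if not high_benefit and high_risk:
        return "negative (abhor)"      # low benefit, high risk
    if not high_benefit and not high_risk:
        return "indifferent (ignore)"  # low benefit, low risk
    return "ambivalent (unsure)"       # high benefit, high risk

print(classify_attitude(4.5, 1.8))  # → positive (adore)
print(classify_attitude(4.2, 4.6))  # → ambivalent (unsure)
```

The point of the sketch is simply that the typology is defined by the combination of the two perception dimensions, not by either one alone.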
These attitude types predict differences in how customers evaluate robots (their sociability or uncanniness, and how comfortable or anxious one feels about interacting with them again), as well as overall service quality. The positive and negative attitudes represent the extremes, but surprisingly, the indifferent attitude’s outcomes are closer to those of the positive (rather than negative) attitude, and the ambivalent attitude’s outcomes are closer to those of the negative (rather than positive) attitude.
We also found that the less favorable attitudes towards service robots (i.e., indifferent, ambivalent, and negative) are motivated by a deep-rooted need for human connections, rather than a lack of technological skills.
What can service firms do?
Service firms should keep engaging with customers exhibiting positive attitudes, because they can act as ambassadors for robot services (e.g., by sharing and promoting positive experiences on social media). Firms can engage them through service enhancements (e.g., pairing physical robots with large language models) and new service features (e.g., a personal robot butler).
Customers exhibiting negative attitudes should not feel compelled to use robot services, so firms should highlight the availability of human staff as part of core service offerings. Staff training programs should focus on ensuring the quality of human-robot hybrid services for a seamless experience.
For the ambivalent, it is important to balance technology with conventional value-adding elements, as well as human assistance and social aspects. Improving the reliability of robot services (e.g., through app-based tracking and proactive troubleshooting) can help build their confidence.
For the indifferent, it is crucial to create memorable service experiences with robots. Rather than over-promoting service robots, firms should focus on other value-adding service features, while positioning service robots as a service enhancement.
Reference:
Yoganathan, V., Osburg, V. S., Fronzetti Colladon, A., Charles, V., & Toporowski, W. (2024). Societal Attitudes Toward Service Robots: Adore, Abhor, Ignore, or Unsure? Journal of Service Research. doi:10.1177/10946705241295841
Driverless vehicles, which move goods and people from one place to another without human drivers, are said to be the future of transportation. Despite this envisaged future, people show mixed reactions towards driverless transportation. New research demonstrates that customers are influenced not only by what they think, but in particular by how they feel about services that use driverless cars.
Driverless transportation is on the rise. Various services are already run by driverless vehicles: for example, Waymo One offers a fully autonomous taxi service in Phoenix, Arizona, and a company called Starship uses autonomous vehicles to deliver groceries in Milton Keynes, UK. These forms of driverless transportation rely substantially on artificial intelligence (AI), and they are expected to grow significantly in importance for public, private, and goods transportation services in the years to come.
What do customers think and feel about driverless transportation services? We know that people have a high level of psychological resistance when it comes to driverless transportation. There are many reasons for this; for example, they may be unsure about the safety or reliability of driverless transportation. But people also feel negative emotions when confronted with driverless transportation, such as stress, or find the experience less pleasurable compared to driving a car themselves.
The CRUISE-C Framework
Do all customers show psychological resistance towards driverless transportation services?
In my research, together with several colleagues, I introduced the Customer Responses to Unmanned Intelligent-transport Services based on Emotions & Cognition (CRUISE-C) framework. The framework shows that we need to consider a driverless transportation service’s level of autonomy and risk if we want to better understand customer responses.
For example, an autonomous vehicle service could be provided with human supervision (partially autonomous) or without any human supervision (fully autonomous); and utilized for the transportation of goods (low risk) or children (high risk).
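To make the two dimensions concrete, here is an illustrative sketch (not from the studies) that places a service in the framework's autonomy-by-risk grid. The example services are hypothetical:

```python
# Hypothetical sketch: a driverless service can be characterised by
# two dimensions from the CRUISE-C framework — level of autonomy
# (partial vs. full) and level of risk (low vs. high).

def service_quadrant(fully_autonomous: bool, high_risk: bool) -> str:
    """Describe where a service sits in the autonomy-by-risk grid."""
    autonomy = "fully autonomous" if fully_autonomous else "partially autonomous"
    risk = "high risk" if high_risk else "low risk"
    return f"{autonomy} / {risk}"

# Grocery delivery robot with a remote operator on standby:
print(service_quadrant(fully_autonomous=False, high_risk=False))  # → partially autonomous / low risk
# Unsupervised shuttle transporting children:
print(service_quadrant(fully_autonomous=True, high_risk=True))    # → fully autonomous / high risk
```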
There are four distinct groups of customers in terms of how comfortable they are with driverless transportation services of varying levels of autonomy and risk. The customer groups differ in their emotional responses – not just in terms of negative vs. positive emotions, but also in the intensity (low vs. high) of those emotions.
How can we overcome psychological resistance?
Creating a perception of human supervision (e.g., remote supervision of driverless cars) is a good strategy for making customers more comfortable with driverless transportation services, particularly when such services are introduced to the mainstream market. Even in the riskiest of contexts (i.e., involving children), two of the four identified customer groups respond positively to remote human supervision. Where the transportation of goods is concerned, remote human supervision suffices for three of the four customer groups. So, even if human supervision is not strictly required for legal or safety reasons, it works as a psychological reassurance and can persuade some customer groups towards adoption.
Additionally, people benefit from positive information about driverless transportation services. The research indicates that positive information can help prevent people from developing a very strong resistance to driverless transportation services, even if they have also been exposed to negative information. So, we should highlight the benefits of driverless cars more!
Sources:
Osburg, V. S., Yoganathan, V., Kunz, W. H., & Tarba, S. (2022). Can (A)I Give You a Ride? Development and Validation of the CRUISE Framework for Autonomous Vehicle Services. Journal of Service Research, 25(4), 630-648.
Yoganathan, V., & Osburg, V. S. (2024). Heterogenous evaluations of autonomous vehicle services: An extended theoretical framework and empirical evidence. Technological Forecasting and Social Change, 198, 122952.