Discrimination has become an important issue in the recent development of sharing‐economy marketplaces. Previous studies raise serious concerns over racial discrimination on Airbnb, showing that guests with African American–sounding names are 16 percent less likely to be accommodated relative to identical guests with white‐sounding names. Many African American users have expressed frustration on social media about how they were allegedly denied booking requests by Airbnb hosts because of their race.
Reducing discrimination is an important issue in marketplace design and operations as public awareness of discrimination in the sharing economy increases. For example, Airbnb states, “We welcome the opportunity to work with anyone that can help us reduce potential discrimination in the Airbnb community.” A burgeoning literature focuses on marketplace designs that improve efficiency and social welfare, yet discriminatory behavior is an often overlooked factor that hinders effective market mechanisms. Recognizing this opportunity, we investigate whether peer‐generated reviews can reduce discrimination. Specifically, we want to understand whether hosts’ reviews of guests affect other hosts’ discriminatory behavior against those guests based on their race. Reviews may help attenuate discrimination for two reasons. First, online reviews have been shown to be a credible source of information. Reviews on Airbnb provide valuable information about prospective guests, such as their safety, tidiness, and friendliness. This additional information could help hosts make more‐informed decisions rather than basing their decisions on guests’ race. Second, reviews could help establish an inclusive norm among community members: the fact that other hosts have accepted a guest may encourage a host to accept that guest regardless of race.
Moreover, we are also interested in understanding what characteristics of a review are most critical in reducing discrimination, including sentiment (i.e., positive or nonpositive descriptions of a prior experience with a guest), credibility (i.e., peer‐generated or self‐claimed quality information), and the existence versus the content of a review (i.e., the fact that a review exists on a guest’s profile compared with what the host said in the review).
We conduct four experiments on Airbnb to address these questions. In each experiment, we manipulate both guests’ race and the review information. In the first experiment, we create eight fictitious guest accounts. The eight accounts are divided into two sets of four, with each set having names identical to those in the other set. The accounts in the first set do not have any reviews, and the accounts in the second set each have one positive review written by the same host at the same time. Within each set, the four guest accounts are identical except for names: two guests have white‐sounding names and two have African American–sounding names. We assign these guest accounts to Airbnb hosts in three major U.S. cities and send out accommodation requests. We record hosts’ reply messages and compare acceptance rates among guest accounts. Because the only differences among the guest accounts we assign to Airbnb hosts are names and reviews, we know these elements drive any observed difference in acceptance rates. We refer to this experiment as the positive‐review experiment.
In the second experiment, we create another eight fictitious guest accounts and repeat the previous experimental design with one change: the latter four accounts in the review condition receive a nonpositive review instead of a positive review. We call this experiment the nonpositive‐review experiment. In the third experiment, we create another 16 guest accounts and repeat the experimental design with another change: all guest accounts lack reviews, and the latter eight guests claim to be neat and friendly in their accommodation‐request messages. This enables us to test whether self‐claimed and unverified information can reduce discrimination; we refer to this experiment as the self‐claimed information experiment.
In the last experiment, we again create 16 new accounts and repeat the experimental design with one modification: each of the latter eight guest accounts in the review condition has one blank review without content. This setup allows us to separate the effect of a review’s existence from its content. We call this experiment the blank‐review experiment. We conducted the four experiments in September 2016, October and November 2016, July and August 2017, and March and April 2018, respectively.
Our positive‐review experiment suggests that discrimination exists when guest accounts have no review: the average acceptance rates of guests with white‐sounding names and guests with African American–sounding names are 47.9 percent and 28.8 percent, respectively. This result is consistent with prior studies. However, when there is one positive review, the gap in acceptance rates disappears, suggesting a reduction in discrimination: the acceptance rate is 56.2 percent for guests with white‐sounding names and 58.1 percent for guests with African American–sounding names. Moreover, irrespective of a guest’s race, the acceptance rate is higher when the guest has a positive review.
The remaining three experiments examine whether other types of information can help attenuate discrimination. In particular, our nonpositive‐review experiment suggests that even a nonpositive review can significantly reduce discrimination: in the absence of reviews, guests with white‐sounding names are 21.4 percentage points more likely to be accepted than guests with African American–sounding names; when there is a nonpositive review, the acceptance difference between white guests and African American guests becomes statistically indistinguishable. Moreover, the blank‐review experiment suggests that the mere existence of a blank review significantly reduces discrimination.
While both nonpositive and blank reviews can reduce discrimination, our self‐claimed information experiment shows that guests’ own claims of tidiness and friendliness fail to do so: even with self‐claimed information, guests with white‐sounding names are 12.8 percentage points more likely to be accepted than guests with African American–sounding names, a gap statistically indistinguishable from that between white and African American guests without self‐claimed information.
Our paper contributes to the literature on marketplace innovation by providing evidence that peer‐generated reviews can reduce discrimination in the sharing economy. Although several recent studies have documented evidence of discriminatory practices on sharing‐economy platforms such as Airbnb, Uber, and Lyft, none of these studies provides concrete methods to mitigate such behavior; our paper is the first to do so. We show that different types of reviews—positive, nonpositive, and even blank—can reduce discrimination. Moreover, we show that in contrast to peer‐generated reviews, self‐claimed information cannot reduce discrimination. This result demonstrates that the verifiability and credibility of a review are crucial for reducing discrimination.
Our findings have several implications for sharing‐economy platform owners. To attenuate discriminatory behavior and improve operational efficiency, platform owners should better leverage online reputation systems to encourage and facilitate information sharing among participants. For example, they may send reminders or offer incentives to users to write reviews for one another, especially when one of them is a first‐time user. Platform owners should also validate the reviews on their platforms, for example by linking reviews to completed transactions, so that reviews remain credible enough to reduce discrimination.
This research brief is based on Ruomeng Cui, Jun Li, and Dennis J. Zhang, “Reducing Discrimination with Reviews in the Sharing Economy: Evidence from Field Experiments on Airbnb,” Management Science (August 2019), https://doi.org/10.1287/mnsc.2018.3273.