GSB Podcast | If/Then - 12: The Invisible Matchmaker: How Algorithms Pair People with Opportunities

August 27, 2024, 09:00, Stanford Graduate School of Business

Whether you are looking for a job, a home, or a romantic partner, there is an app for it. But as people increasingly turn to digital platforms in search of opportunity, Daniela Saban says it is time to take a hard look at the role of algorithms, the invisible matchmakers behind our screens.

Saban is an associate professor of operations, information, and technology at Stanford Graduate School of Business, with research interests spanning operations, economics, and computer science. Because algorithms have a major influence on who gets access to opportunities, she argues for building “fairness” into them.

In this episode, Saban explores how carefully designed algorithms can make matching processes both fairer and more effective.

Much of Saban’s research focuses on what she calls “matching markets,” and she admits she is particularly fascinated by online dating. In one study, she and several co-authors partnered with a large U.S. dating platform to explore how updating the app’s algorithm could better help people looking for romance form new connections.

By analyzing the app’s data, Saban developed a model that prioritizes potential matches not only based on the user’s preferences but also on the likelihood that the person on the other side will be interested. “When it comes to dating apps, I not only want to show you people that you will like, I also want to show you people that will like you back,” Saban notes. In field experiments in Texas, this two-sided-preference algorithm, which also accounted for users’ histories and activity levels, dramatically increased the number of successful matches. “Our algorithm increased the number of matches by 27 percent in Houston and by over 37 percent in Austin,” Saban says.

Similarly, while working with the volunteer-matching platform VolunteerMatch, Saban found an imbalance in how volunteer opportunities were being filled. Some organizations received a startling number of signups, sometimes far more than they needed, while others could not attract enough volunteers at all. By adjusting the search algorithm to take into account how many volunteers an organization needed and how many signups it had already received, Saban and her team were able to distribute volunteers more equitably across opportunities.

The technical details of an algorithm can be complex, but our commitment to fairness and equity does not have to be. As this episode reveals, if we want algorithms to work well for us, we need to make deliberate choices about how we design them.

The following is a transcript of this episode:

Kevin Cool: If we want to get fair outcomes, then we need to build fairness into algorithms. Meet Josh Fryday.

Josh Fryday: I am the Chief Service Officer for the State of California. And our office, California Volunteers, is charged with engaging Californians in service and civic engagement.

Kevin Cool: Josh was appointed by the governor to recruit Californians for acts of service.

Josh Fryday: So, whether it’s asking people to step up and volunteer at food banks during COVID, asking people to check on their neighbors during a disaster, or asking people to take climate action.

Kevin Cool: In a state of almost 40 million, getting people interested is not the hard part.

Josh Fryday: It’s easy for people to raise their hand. Placing them in a meaningful experience to make a difference is difficult. It takes a lot of intentionality; it takes infrastructure. Figuring out how to take those who raise their hands to say I want to step up and do something and putting them in a meaningful place where they can have a good experience, where they can feel like they’re actually contributing, that is hard work.

Kevin Cool: Josh says most of the work is building a system where volunteers don’t overwhelm the groups who need the help.

Josh Fryday: We can’t just throw volunteers and service members at nonprofits or local governments and say, “Here you go. You have bodies now.” It doesn’t work that way. You have to have people who are able to supervise them, who are able to direct them, who are able to mentor. That’s a holistic infrastructure of people that we have to put in place even to be able to absorb the over 10,000 service members that California has now created.

Kevin Cool: One of the challenges facing any volunteer or service organization is matching people with the right opportunity, especially when too many volunteers want to do the same thing. But Josh says volunteers still find success when they don’t get the opportunity they were initially looking for.

Josh Fryday: I have talked to many students who came in saying, “I want to be an engineer,” or “I only want to focus on climate action.” And the need that we had for them was in a school district tutoring and mentoring low-income students. And they have said because of that experience they now want to become teachers and they now want to focus on education, and it’s because they got exposed to something new.

Kevin Cool: The stakes are high in getting this part right.

Josh Fryday: If someone raises their hand to serve or volunteer and then is given an opportunity that is not meaningful or not positive, we’ve just sent a message to that person that we don’t actually need them.

Kevin Cool: Matching people who want one thing with people who may offer something else is complicated work and often relies on sophisticated algorithms. It applies to volunteering as much as to a dating app. Without the right design, these algorithms can lead to situations where too many people chase the same option, and people on both sides are disappointed. Is there a way to make these matches better for the greatest number of people?

This is If/Then, a podcast from Stanford Graduate School of Business, where we examine research findings that can help us navigate the complex issues facing us in business, leadership, and society. I’m Kevin Cool, Senior Editor at the GSB. Today we speak with Daniela Saban, Associate Professor of Operations, Information, and Technology, about her research on algorithms.

We’re going to talk today about a couple of studies that you were part of, one having to do with dating apps and the other with volunteer matching. And I wanted to start by asking you what is it about algorithms that appealed to you, and can you talk a little bit about how an algorithm can be impactful?

Daniela Saban: So, every day I’m sure you use a lot of apps on your phone. And what you may not realize is that when you open the app, the app is making a lot of decisions in real time. And not only for you, but also for many other users of that app. So, how are you going to make those decisions in real time and what is the impact that those decisions might have not only on you as a user and your experience as a user, but also on all the other users that use that same service?

Kevin Cool: So, let’s talk about the paper that you were part of on dating apps. Before you embark on a research project, you have a hypothesis of some kind. What was the origin of the hypothesis in this case?

Daniela Saban: In this particular case — well, for many years I have been working on matching markets more broadly, and I always wanted to do something closer to applications. And I was fascinated by online dating, as I’ve spent endless hours discussing online dating with my friends, helping them build profiles, helping them like or not like people.

Some of my friends are amazing. They see a profile and they know if they’re being shown that profile because this person liked them or because they set this preference this way or the other way. So, all of this is like, oh, that’s awesome. And then you start to realize there are a lot of things about how these apps work that may be interesting to understand better. And then you also realize, okay, but people are trying to “game” the way the apps work.

Kevin Cool: There was a point in the paper where you talked about how the fundamental problem in, say, a retail store is how you sort your merchandise — where it is in the store — to maximize sales. And in the case of a dating app, obviously looking for love is different than shopping for detergent. But beyond the obvious, what’s the difference in terms of what the algorithm has to do in solving those two problems?

Daniela Saban: So, the most obvious difference is that you want detergent, the detergent is available in the store, you go, you get it, you pay for it, and you go home. And that product never has a say or a preference about whether it wants to be with you or not. Whereas if you look at a dating app or other types of markets out there, there might be a preference on both sides of an interaction. It’s not simply like buying a product; it’s more about whether this is a good fit for both parties or not.

Kevin Cool: So, to use the parlance of a dating app or social media, each party in that transaction has to “like” each other.

Daniela Saban: Yes.

Kevin Cool: Right. So, the algorithm has to deal with a whole different set of variables than if you were in a retail situation. Like you say, the detergent doesn’t have to like you to be purchased. So, what is the size and the shape of that challenge? How intricate does the algorithm have to be to be able to deal with that?

Daniela Saban: So, there are parts that are similar and parts that are very different. One of the similar parts is, well, you first need to understand the preferences of these users, whether in a retail setting or in a matching setting like a dating app. The challenge is that if you just look at preferences, many people would like the same thing or the same person. And of course, you cannot match everyone with the same person. So, then there’s the other aspect of how we deal with the fact that people have limited capacity to like people, to match with people, or to browse through profiles.

Now when it comes to retail, I may want to show you things that you’re likely to buy, and that’s great. When it comes to dating apps, I not only want to show you people that you will like, I also want to show you people that will like you back. So, that changes a bit the type of people that you will see and the type of constraints that I need to take into account when I design these algorithms.
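In code, the distinction Saban describes can be sketched roughly as follows. This is a minimal illustration, not the platform’s actual ranking system: instead of sorting candidates only by the probability that the viewer will like them, it sorts by the product of the estimated like probabilities in both directions, so one-sided appeal is penalized.

```python
from typing import Dict, List, Tuple

def rank_candidates(
    viewer: str,
    candidates: List[str],
    like_prob: Dict[Tuple[str, str], float],  # like_prob[(a, b)] = estimated P(a likes b)
) -> List[str]:
    """Rank candidates for `viewer` by the chance of a mutual like,
    P(viewer likes c) * P(c likes viewer), instead of one-sided appeal."""
    def mutual_score(c: str) -> float:
        return like_prob[(viewer, c)] * like_prob[(c, viewer)]
    return sorted(candidates, key=mutual_score, reverse=True)

# Toy example: B is very appealing to the viewer but unlikely to like back,
# so the two-sided ranking puts A first even though a one-sided one would not.
probs = {("me", "A"): 0.6, ("A", "me"): 0.7,
         ("me", "B"): 0.9, ("B", "me"): 0.1}
print(rank_candidates("me", ["A", "B"], probs))  # ['A', 'B']
```

The numbers and names here are made up; the point is only the design choice of optimizing for mutual interest rather than one-sided clicks.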

Kevin Cool: One of the key insights in your study was that a user’s history has a fairly significant effect. Talk about how the history profile of someone on the app impacts how the algorithm works.

Daniela Saban: Yes. So, when we started working with this app, we were thinking about the main things we needed to capture to be able to solve this optimization problem of trying to get as many matches as we can. So, of course, getting a better handle on what people’s preferences are and what they like was first order, but then we tried to incorporate other things. Like, for example, how active they are, how many times they log in, how much they engage with the app. And then we also tried to incorporate how the recent experience they’re having in the app affects their behavior.

So, what we found is that the number of matches you had in the recent past has an effect on your like behavior. In particular, it’s a negative effect. Which means that if you see the same profile and you haven’t had any match in the recent past, you’re more likely to like that profile than if you had had a lot of success in the recent past.

Kevin Cool: Which makes sense, I suppose, right? The more popular you are, maybe the less you need new “friends”? So, what did you change in the model then to help address that?

Daniela Saban: So, once we got our understanding of how people would behave and everything, we came up with an algorithm to try to solve this problem. And we changed primarily three things in the algorithm, things that had been taken into account before but perhaps differently. So, one of them is we came up with better estimates of people’s preferences, or better ways of predicting whether, if I show you a profile, you will like it or not.

Then we also tried to optimize for the fact that people need to like each other for there to be a match. So, instead of trying to maximize the number of likes or just show you people that you like, be mindful that people need to like you back to maintain a long-term engagement for you as a user.

The third thing that we tried to incorporate in our algorithm was sort of this idea of activity and user behavior based on experience. So, trying to understand the frequency of how people logged in and interacted with the app, but also how the number of previous matches affected their behavior. So, all of these we incorporated into our algorithm.
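A rough sketch of how that third ingredient could enter a like-prediction model is below. The functional form and coefficients are purely illustrative assumptions; the conversation only establishes the direction of the effects (more recent matches, fewer likes; more activity, more engagement).

```python
import math

def predicted_like_prob(fit_score: float,
                        recent_matches: int,
                        logins_per_week: float) -> float:
    """Illustrative logistic model of P(user likes a shown profile).
    Coefficients are made up; only the signs follow the discussion:
    recent matches push the probability down, activity pushes it up."""
    z = (fit_score                  # preference fit from the estimated model
         - 0.3 * recent_matches     # negative effect of recent matches
         + 0.1 * logins_per_week)   # more active users engage more
    return 1.0 / (1.0 + math.exp(-z))

# The same profile looks less "likeable" to a user who just had several matches.
print(round(predicted_like_prob(0.5, recent_matches=0, logins_per_week=5), 2))  # ~0.73
print(round(predicted_like_prob(0.5, recent_matches=4, logins_per_week=5), 2))  # ~0.45
```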

Kevin Cool: Okay. And how do you know whether it works?

Daniela Saban: Well, first of course, you have an idea of how to incorporate these things. You don’t know if it’s going to work or not. So, what we do is run a lot of simulations under different conditions. And this is, you know, no different: if you want to design a Formula 1 car and you think you have a great design, nobody’s going to just let you go to the factory and put a driver in it, right?

So, we ran a lot of simulations and had lots of meetings with them until we were able to convince them to try it in the field. So, then we ran a couple of field experiments with them. The first one was in Texas, in Houston. And the second one was also in Texas, but later and in Austin.

Kevin Cool: And what were the results?

Daniela Saban: So, our algorithm increased the number of matches by 27 percent in Houston and by over 37 percent in Austin.

Kevin Cool: So, it validated the model, essentially.

You’re listening to If/Then, a podcast from Stanford Graduate School of Business. We’ll continue our conversation after the break.

So, let’s use the dating app Bumble for a quick example here. In Bumble, women initiate the contact. And intuitively, it seems like this would be bad for the men, like it would somehow disadvantage them. But your research actually showed the opposite, right?

Daniela Saban: Yes, that’s correct.

Kevin Cool: Why is that? What’s going on there?

Daniela Saban: So, the finding of our research suggests the following behavior. Typically in dating apps, you have more men than women. And basically the observed behavior is that when men were sending a message, it was much more likely that this message would go unanswered than if a woman sent the message, somewhere between 4 times and 10 times more likely, depending on the dating app.

So, my coauthor and I tried to [unintelligible] to understand, well, what’s behind this behavior, and whether in a dating app like Bumble, which actually does not allow men to send the first message, this phenomenon would still occur, or whether it would make things worse or better for men. So, what we found is that because there are more men than women, men know that they need to be more active in order to match. So, they become more active at sending messages.

But then because there are lots of men sending messages and there are fewer women, women become, in a way, more selective. They’re receiving lots of messages; they have lots of options. Because of this, most of these messages go unanswered. Which of course, if you are in those dating apps and your messages are not being answered, you just become less intentional about sending these messages. So, that’s the behavior that the model would have predicted.

So, what happens when we don’t allow men to send the first message? Well, now if you’re a woman and you really want to engage in a conversation, you need to start the conversation. So, you start sending messages. Now there’s a chance that your message will go unanswered, it’s true. But because there are still more men than women, men will not get messages as often, so they are more likely to engage in a conversation.
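The dynamic Saban describes can be illustrated with a toy simulation. All numbers below are made-up assumptions (the gender ratio, messages per sender, replies per receiver), not estimates from her paper; the point is simply that when the larger side initiates, inboxes flood and most messages go unanswered, while flipping who initiates raises the reply rate.

```python
import random

# Toy parameters, chosen only to illustrate the imbalance; not from the study.
N_MEN, N_WOMEN = 300, 100        # more men than women on the platform
MESSAGES_PER_SENDER = 5          # first messages each person on the sending side writes
REPLIES_PER_RECEIVER = 3         # assumption: a receiver answers at most this many

def reply_rate(initiator: str, seed: int = 0) -> float:
    """Fraction of first messages that get a reply when 'men' or 'women' initiate."""
    rng = random.Random(seed)
    senders, receivers = (N_MEN, N_WOMEN) if initiator == "men" else (N_WOMEN, N_MEN)
    inbox = [0] * receivers
    for _ in range(senders * MESSAGES_PER_SENDER):
        inbox[rng.randrange(receivers)] += 1          # messages land on random receivers
    replies = sum(min(m, REPLIES_PER_RECEIVER) for m in inbox)
    return replies / (senders * MESSAGES_PER_SENDER)

print("men initiate:  ", round(reply_rate("men"), 2))    # flooded inboxes, low reply rate
print("women initiate:", round(reply_rate("women"), 2))  # fewer messages each, higher rate
```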

Kevin Cool: So, that one little change changes the behavior.

Daniela Saban: Yes.

Kevin Cool: So, let’s talk about a different research project, which was helping the online platform VolunteerMatch. And first of all, talk about what the problem was that you identified there.

Daniela Saban: So, we started working with them. We got their data. And what we identified there was that, if we looked at how many signups an organization was getting per week, there were many organizations that were getting a lot of signups and many that were getting no signups at all. So, we have this imbalance in how people were signing up for these different volunteering opportunities.

Kevin Cool: Some were just more popular than others.

Daniela Saban: Well, we didn’t know if they were more popular or just more visible. But the effect was that there were some organizations that were getting a lot of volunteers, and that was bad because these organizations are nonprofits. Typically they are understaffed, so the person cannot really screen 50 volunteers, so they were overloaded. And that meant that many of these volunteering requests also went unanswered.

If you need 2 volunteers but you receive 50, there are going to be 48 people that are unhappy because they were not needed, they were not called back. So, that was bad for organizations, it was also bad for volunteers, and of course, it was bad for the organizations that were getting zero volunteers. So, there was sort of this mismatch between the demand for jobs and supply of volunteers.

Kevin Cool: So, you recommended that volunteer opportunities in situations where they were getting a lot of hits should actually drop down in terms of visibility?

Daniela Saban: Yes. So, basically once we realized this was a problem, one of the things that we recommended is that they change the way the search algorithm was working to account for the fact that if an opportunity already got 10 signups, then maybe it doesn’t need to still be displayed at the very top, where people are going to keep volunteering for it. So, in a way, just take into account how many volunteers you need and how many volunteers you already got, and if you are already there, just display it less prominently.
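A minimal sketch of this kind of adjustment is below. The specific damping rule is an assumption for illustration, not VolunteerMatch’s actual formula: the idea is only that an opportunity’s search score should shrink as its remaining need is filled, without hiding it completely.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Opportunity:
    name: str
    relevance: float   # how well the listing matches the search query
    needed: int        # volunteers the organization actually needs
    signups: int       # signups already received

def rerank(results: List[Opportunity]) -> List[Opportunity]:
    """Demote listings whose need is already filled, without hiding them entirely."""
    def adjusted_score(o: Opportunity) -> float:
        remaining = max(o.needed - o.signups, 0)
        unfilled = remaining / o.needed if o.needed else 0.0   # 1.0 = no signups yet
        return o.relevance * (0.2 + 0.8 * unfilled)            # illustrative damping rule
    return sorted(results, key=adjusted_score, reverse=True)

results = [
    Opportunity("Food bank shift", relevance=0.9, needed=2, signups=50),
    Opportunity("Park cleanup", relevance=0.8, needed=10, signups=1),
]
for o in rerank(results):
    print(o.name)   # "Park cleanup" now ranks above the oversubscribed food bank shift
```

The floor of 0.2 in the damping rule reflects the design choice Saban mentions: oversubscribed opportunities are displayed less prominently, not removed.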

Kevin Cool: So, that one change made a difference, right?

Daniela Saban: Yes. So, what we found is that the number of volunteering opportunities that got at least one signup increased by between 8 and 9 percent, without damaging by much the total number of signups in the system. So, when we proposed this to VolunteerMatch, part of the concern was what would happen if, by doing this, you reduced the number of signups. The same way this conversation started: you were telling me, oh, some opportunities are more popular than others. But that might not have been the case.

It might just have been the case that some opportunities were displayed more prominently than others. If it were the case that some opportunities are genuinely more popular than others, then pushing them down would get you fewer matches. We did not have the data to distinguish whether this was a display effect or a popularity effect. So, what we discovered, at least through the study, and it was under many, many caveats, is that it appears to be mostly a display effect. So, just changing the way the display worked was able to redistribute the signups better without affecting much the total number of signups.

Kevin Cool: And again, you validated those early findings by putting it in the field, right?

Daniela Saban: Yes, correct.

Kevin Cool: And how did that go?

Daniela Saban: So, we ran this experiment first in a very big area that covered Dallas-Fort Worth. So, we were able to see what happened in denser areas, but also in more suburban areas and even in slightly more rural areas. And then we also ran it in Southern California, in an area that covered LA and San Diego. So, we were also able to see this effect in cities of different sizes.

Kevin Cool: And also different regions.

Daniela Saban: Yes.

Kevin Cool: So, where culturally, or for whatever reason, different volunteer organizations might have different preferences.

Daniela Saban: Right.

Kevin Cool: And VolunteerMatch evidently found those results compelling?

Daniela Saban: Yes. So, since then they have launched this nationwide.

Kevin Cool: Are these insights translatable or transferable in a broader context, particularly, say, in the nonprofit world where they may not have the same resources to do this kind of analysis?

Daniela Saban: So, basically what we tried to do here is to build this notion of equity into the algorithm, meaning that we wanted to give all these volunteering opportunities a more equitable chance of being displayed and therefore of attracting volunteers. And I think this is a problem that many nonprofit organizations, whether food banks or other volunteering markets like VolunteerMatch, deal with, because the nonprofits they try to help are of different sizes and have different needs. Some are more active and have more resources than others. So, how do you basically manage to give them equal chances, or as equal as you can, of getting the resources they need?

Kevin Cool: Well, it was really enjoyable, Daniela. Thank you for being here.

Daniela Saban: Thank you so much for having me.

Kevin Cool: Whether we’re aware of it or not, algorithms are making decisions for us in real time all the time: when we look for romance on a dating app, or when we volunteer for a shift at the local food bank. As Daniela’s research shows, there are clear ways to design the algorithms to make these matches more fair.
