The Work/Place Podcast: Thrive with Rahaf Harfoush

Each week, the Work/Place podcast explores new ways of organizing work. Check out our summary of the latest episode.

This week, Work/Place was joined by Rahaf Harfoush to discuss the potential consequences of hyper-sophisticated bots and algorithms that integrate themselves into our daily lives. Rahaf is a strategist, digital anthropologist, and best-selling author who focuses on the intersections of emerging technology, innovation, and digital culture.

This week’s audio soundscape, created by Toronto foresight studio From Later, captured a snippet from a conceptual true crime podcast called “Stream Crimes”. In it, the narrator tells the story of a streaming channel called Thrive, hosted on the “Netflix of streaming platforms”, StreamStars. 

Thrive has successfully created a Decentralized Autonomous Brand (DAB)—an AI program that automatically maximizes attention and sales, and that adapts over time to boost engagement with the channel. The DAB eventually creates streamer bots—virtual, sympathetic characters who drive viewership through storytelling. 

But the DAB, constantly optimizing for more clicks and views, eventually goes rogue. One of the streamer bots murders its own ‘wife’ and then kills itself—all while streaming live. Strangely, the event drives up engagement and monetary support for the channel, and the streaming channel and platform become part of a fraud investigation. But who is ultimately responsible? And is this a con, or the natural progression of algorithmic marketing techniques? 

It was one of the heavier soundscapes From Later has ever produced. Work/Place host Sydney Allen-Ash and Lane founders Clinton Robinson and Kofi Gyekye spoke with Rahaf about what ‘bot culture’ means for the not-so-distant future. Below, we break down some of the major themes that emerged from the conversation. 

The Growing Appeal of Bots 

The first thing that Rahaf, Clinton, and Kofi noted about this soundscape (besides the fact that it was deeply disturbing) was how it felt like a reflection of phenomena already occurring in real time. “The rise of CGI influencers, the rise of the virtual influencer, that’s all kind of happening now.” 

But is there anything positive about the shrinking divide between the virtual and the human? Rahaf pointed out that despite the potential for darkness demonstrated in the soundscape, the positives could include the building of new, inclusive communities online. Not only are chatbots becoming more pervasive, but the next generation also feels increasingly comfortable engaging with virtual characters—and may even feel more comfortable talking to these entities about sensitive personal issues. 

“Is there anything positive about the shrinking divide between the virtual and the human?”

Clinton and Kofi were skeptical about the benefits of bots, pointing out that every bot has something to sell. But Rahaf took a more open-minded approach, invoking the Tamagotchi as a once-beloved virtual entity that existed purely for fun and engagement. Maybe our future interactions with bots will be subscription-based or open-source—perhaps they’ll even provide some essential service that isn’t affiliated with a brand.

Kofi agreed—why not apply these advanced technologies where we need them most? “It should be [in] our education system, our food and sustainability initiatives, our political agendas … we’re not applying it in those places. We’re making really great models, like Lil Miquela, to sell sneakers.” 

Meeting Needs vs. Satisfying Wants

Despite the vast potential of chatbots and bots in general—isn’t there something sort of empty about ‘the algorithm’? 

Clinton spoke about his experience with Netflix, which seems to feed users content based on what they supposedly ‘want’. He noted that there’s something hollow about an algorithm that never strives to put new or refreshing content in front of its users.

Rahaf described this phenomenon as the gap between what an audience needs and what it wants. She pointed out that storytelling isn’t just about cramming everything a person enjoys into a neat narrative package. It’s about telling the truth—telling an audience what they need to hear. “You’re removing some of the agency of the storyteller … You have a bunch of number crunchers that say, ‘hey, if you kill off this character, now we’ll get a 40% increase in viewership’. You make the story in response to the data. But then what are you losing? You’re losing the authentic identity of that story.”

That empty, algorithm-induced feeling is a good example of the problem with neural networks that are so focused on applying data to problems that they lose the ability to experiment with spontaneity and end up producing shallow, inhuman results. 

Perhaps a solution to this, Kofi suggested, is creating a “partnership between bot and human”, wherein we use computers to collect data quickly but add a human layer to apply that data to a given problem. It’s a fine balance—which got everyone wondering about the ways algorithms could help or harm us. 

“That empty, algorithm-induced feeling is a good example of the problem with neural networks that are so focused on applying data to problems that they lose the ability to experiment with spontaneity and end up producing shallow, inhuman results.”

Who Do We Hold Accountable? 

The unfortunate truth is that humans are easily exploited. Just as the viewers of Thrive were exploited through emotional manipulation, we have the potential to be taken advantage of in startling new ways. Rahaf even put forth the possibility of an oil company creating a sympathetic CEO to capture the attention of the masses and deflect attention from Big Oil’s negative impact on the planet.

The potential for manipulation, duping, and deceit makes the use of bots a dangerous game. And—as From Later’s conceptual podcast suggests—there are so many different actors at play. If and when virtual relationships go awry, where do we place the blame?

Rahaf noted that multiple parties might be at fault. Even if you want to punish the person using a tool in a manipulative way, the creator of the tool sometimes shares the blame. “I think about the riots that happened in the Capitol … If studies of Facebook’s own data show that 60% of people were shown extremist groups as a result of Facebook’s recommendations, well, that to me says there is a responsibility there.” For similar reasons, some companies have stopped selling facial recognition software to law enforcement and other groups that are liable to misuse it. 

“If and when virtual relationships go awry, where do we place the blame?”

While it’s clear that multiple parties may be to blame for an abuse of technology, who’s responsible for ensuring abuses don’t happen in the first place? Legislating around these issues seems necessary because, as Clinton pointed out, self-governance is clearly not enough. 

In addition to government involvement, Rahaf pointed out that nobody is currently insisting on transparency from tech companies—meaning algorithms spit out answers, but nobody can see how they arrive at them. “Every single bot or algorithm should have a transparency functionality where it is forced to show you its work.” Without this transparency, it becomes harder and harder to maintain control over automated systems. 


There were a lot of great (if fantastical) ideas thrown around in this episode—from creating perfectly ethical political bots to live-streaming our whole lives to keep us all accountable. But the most interesting thread of conversation was about how these systems are already creeping into our lives with little to no regulation or transparency surrounding them. Bots are here to stay—but maybe we should keep them at arm’s length for the time being.