The internet is an ocean of algorithms trying to tell you what to do. YouTube and Netflix proffer videos they calculate you’ll watch. Facebook and Twitter filter and reorganize posts from your connections, avowedly in your interest—but also in their own.

New York entrepreneur Brian Whitman helped create such a system. He sold a music analytics startup called The Echo Nest to Spotify in 2014, bolstering the streaming music service’s ability to recommend new songs from a person’s past listening. Whitman says he saw clear evidence of algorithms’ value at Spotify. But he founded his current startup, Canopy, after becoming fearful of their downsides.

“Traditional recommendation systems involve scraping every possible bit of data about me and then putting it in a black box,” Whitman says. “I don’t know if the recommendations it puts out are optimized for me, or to increase revenue, or are being manipulated by a state actor.” Canopy aims to release an app later this year that suggests reading material and podcasts without centralized data collection, and without pushing people to spend time they later regret.

Whitman is part of a movement trying to develop more ethical recommendation systems. Tech companies have long pitched algorithmic suggestions as giving users what they want, but there are clear downsides even beyond wasted hours online. Researchers have found evidence that recommendation algorithms used by YouTube and Amazon can amplify conspiracy theories and pseudoscience.

Guillaume Chaslot, who previously worked on recommendations at YouTube but now works to document their flaws, says those problems stem from companies designing systems primarily to maximize the time users spend on their services. It works: YouTube has said more than 70 percent of viewing time stems from recommendations. But the results aren’t always pretty. “The AI is optimized to find clickbait,” he says.
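
To make that concrete, here is a minimal sketch, in Python, of how the choice of objective alone can flip a ranking: scoring purely on predicted watch time surfaces the most sensational item, while weighting watch time by an estimate of viewer satisfaction does not. Every title and number below is invented for illustration; this is not YouTube’s actual system.

```python
# Toy illustration, not YouTube's actual system: all titles and numbers invented.

items = [
    # (title, predicted minutes watched, predicted satisfaction 0-1)
    ("Shocking conspiracy EXPOSED!!!", 14.0, 0.2),
    ("Calm explainer on the same topic", 9.0, 0.9),
    ("Celebrity gossip compilation", 12.0, 0.3),
]

# Objective 1: maximize time on site. Clickbait floats to the top.
by_watch_time = sorted(items, key=lambda it: it[1], reverse=True)

# Objective 2: weight watch time by satisfaction. The explainer wins.
by_satisfaction = sorted(items, key=lambda it: it[1] * it[2], reverse=True)

print(by_watch_time[0][0])    # Shocking conspiracy EXPOSED!!!
print(by_satisfaction[0][0])  # Calm explainer on the same topic
```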

Analyzing that problem and trying to create alternatives is becoming its own academic niche. In 2017, the leading research conference on recommendations, RecSys, which has long had significant attendance and sponsorship from tech companies, gained a companion workshop dedicated to “responsible recommendation.”

At the 2018 event, presentations included a method for recommending Twitter accounts to people that would expose them to diverse viewpoints, and one from engineers at the BBC about baking public service values into personalization systems. “There is this emerging understanding that recommenders driving narrow interests is not necessarily meeting everyone’s needs, in both public and commercial contexts,” says Ben Fields, a BBC data scientist.

Xavier Amatriain, who previously worked on recommendation systems at Netflix and Quora, says that understanding is catching on in industry, too. “I think there’s a realization that these systems actually work—the problem is they do what you tell them to do,” he says.

The broader reassessment of how tech companies such as Facebook operate—somewhat acknowledged by the companies themselves—is helping that process. Whitman says he’s had no trouble recruiting engineers who could take their pick of top tech jobs. Canopy’s staff includes engineers who worked on personalization at Twitter and Instagram.

The app they’re building will recommend a small number of items to read or listen to each day. Whitman says its recommendation software is designed to look for signs of quality, so it won’t just push picks that suck up users’ time, and that the company will share more details closer to launch. To improve privacy, the app will run its recommendation algorithms on a person’s device and share only anonymized usage data with company servers. “We can’t even tell you directly how many people are using our app,” he says.
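
Canopy hasn’t published its design, so the following is only a hypothetical sketch of the general on-device pattern Whitman describes: items are scored locally against a profile that never leaves the phone, and the server receives only noised, unattributed tallies. All names, data, and the noise mechanism here are assumptions, not Canopy’s actual code.

```python
import random

# Hypothetical on-device profile: per-topic affinities stored only on the phone.
local_profile = {"climate": 0.9, "design": 0.6, "crypto": 0.1}

candidates = [
    {"title": "Deep dive on carbon capture", "topic": "climate"},
    {"title": "Typography roundup", "topic": "design"},
    {"title": "Token price speculation", "topic": "crypto"},
]

def recommend(candidates, profile, k=2):
    """Rank entirely on-device; raw reading history never leaves the phone."""
    return sorted(candidates, key=lambda c: profile.get(c["topic"], 0.0),
                  reverse=True)[:k]

def anonymized_report(topic_counts):
    """Return only noisy per-topic tallies with no user ID attached, so the
    server learns rough aggregates, not any individual's behavior. (Whether
    Canopy uses a noise mechanism like this is an assumption.)"""
    return {topic: max(0, n + random.randint(-2, 2))
            for topic, n in topic_counts.items()}

picks = recommend(candidates, local_profile)             # computed locally
report = anonymized_report({"climate": 1, "design": 1})  # the only thing sent
```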

Others are exploring how to give users more control over the recommendations pushed at them. Researchers at Cornell and CUNY worked with podcast app Himalaya to test a version that asked users what categories of content they aspired to listen to, and tuned its recommendations accordingly.

In experiments with more than 100 volunteers, people were more satisfied when they could steer recommendations, and consumed 30 percent more of the content they said they wanted. “We’re at the beginning of understanding how we could balance commercial interests with helping users as individuals,” says Longqi Yang, a Cornell researcher who worked on the project. Himalaya is exploring how it could integrate a similar feature in its production app.
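
A simplified sketch of the steering idea, assuming a design like the one the study describes: the user declares categories they aspire to hear more of, and the ranker adds a boost for those categories on top of a behavioral score. The categories, scores, and blending weight below are invented, not the study’s actual parameters.

```python
# Sketch of aspiration-steered ranking; all values are illustrative.

ASPIRED = {"science", "history"}  # categories the user says they want more of
ASPIRATION_BOOST = 0.3            # hypothetical tuning knob

episodes = [
    {"title": "True-crime marathon", "category": "true crime", "behavior_score": 0.8},
    {"title": "Physics explained", "category": "science", "behavior_score": 0.6},
    {"title": "Fall of Rome", "category": "history", "behavior_score": 0.4},
]

def steered_score(ep):
    """Blend what the user does (behavior) with what they say they want."""
    boost = ASPIRATION_BOOST if ep["category"] in ASPIRED else 0.0
    return ep["behavior_score"] + boost

ranked = sorted(episodes, key=steered_score, reverse=True)
# "Physics explained" (0.9) now outranks the true-crime pick (0.8),
# and "Fall of Rome" climbs to 0.7.
```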

Late last year, researchers from Google published results from experiments with an algorithm designed to diversify YouTube recommendations. In January, the company said it had upgraded YouTube’s recommendation system to “focus on viewer satisfaction instead of views” and make its suggestions less repetitive.
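
The paper’s method is more sophisticated than anything that fits here, but a generic greedy re-ranker, in the spirit of maximal marginal relevance, shows the basic trade-off any diversifier makes: each pick balances relevance against similarity to what has already been chosen. The data and the 0.5 trade-off weight are invented.

```python
# Generic greedy diversification (maximal-marginal-relevance style).
# Not the method from the Google paper; all data and weights are invented.

videos = [
    {"title": "Cats A", "topic": "cats", "relevance": 0.95},
    {"title": "Cats B", "topic": "cats", "relevance": 0.90},
    {"title": "Cooking 101", "topic": "food", "relevance": 0.70},
]

def similarity(a, b):
    # Crude stand-in: an identical topic counts as fully similar.
    return 1.0 if a["topic"] == b["topic"] else 0.0

def diversify(pool, k=2, trade_off=0.5):
    """Greedily pick items, penalizing similarity to what's already chosen."""
    pool, chosen = list(pool), []
    while pool and len(chosen) < k:
        best = max(
            pool,
            key=lambda v: v["relevance"]
            - trade_off * max((similarity(v, c) for c in chosen), default=0.0),
        )
        chosen.append(best)
        pool.remove(best)
    return chosen

# Pure relevance would pick the two cat videos; here "Cats A" is picked first,
# then "Cooking 101" (0.70 beats Cats B's penalized 0.90 - 0.5 = 0.40).
print([v["title"] for v in diversify(videos)])
```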

Chaslot says he is pleased to see growing scrutiny of recommendation algorithms and their effects, including from tech companies. But he remains unsure how soon this new field will spawn real change. Big companies are too constrained by their culture and business models to change what they’re doing significantly, he says. After leaving Google, Chaslot spent more than a year working on a startup building recommendation technology that tried to avoid spreading bad content, but he concluded it couldn’t be profitable. “I feel like there needs to be more awareness before alternative companies have a chance,” he says.

Whitman of Canopy is more optimistic. He believes that enough people are now wary of big internet companies to make new types of products viable. “We still do feel a bit lonely,” he says, “but it’s sort of a revolution that’s just starting.”