Marketer sparks panic with claims it uses smart devices to eavesdrop on people

Written By Sedoso Feb


We’ve all experienced it or heard about it happening: Someone has a conversation about wanting a red jacket, and then suddenly, it seems like they’re seeing ads for red jackets all over the place.

Makers of microphone-equipped electronics sometimes admit to selling voice data to third parties (advertisers). But that’s usually voice data accumulated after a user has prompted their device to start listening to them and after they’ve opted into (preferably not by default) this sort of data collection.

But a marketing company called CMG Local Solutions recently sparked panic by suggesting that it can access people’s private conversations by tapping into data gathered by the microphones on their phones, TVs, and other personal electronics, as first reported by 404 Media on Thursday. The firm said it uses these personal conversations for ad targeting.

Active Listening

CMG’s Active Listening website opens with a banner displaying an accurate but worrisome statement: “It’s true. Your devices are listening to you.”

A screenshot from CMG’s Active Listening website. Credit: CMG Local Solutions

A November 28 blog post described Active Listening technology as using AI to “detect relevant conversations via smartphones, smart TVs, and other devices.” As such, CMG claimed that it knows “when and what to tune into.”

The blog also shamelessly highlighted advertisers’ desire to hear every single whisper made that could help them target campaigns:

This is a world where no pre-purchase murmurs go unanalyzed, and the whispers of consumers become a tool for you to target, retarget, and conquer your local market.

The marketing company didn’t thoroughly detail how it backs its claims. An archived version of the Active Listening site provided a vague breakdown of how Active Listening purportedly works.

The website previously pointed to CMG uploading past client data into its platform to make “buyer personas.” Then, the company would identify relevant keywords for the type of person a CMG customer would want to target. CMG also mentioned placing a tracking pixel on its customers’ sites before entering the Listening Stage, which was only described as: “Active Listening begins and is analyzed via AI to detect pertinent conversations via smartphones, smart TVs, and other devices.”

The archived version of the page discussed an AI-based analysis of the data and generating an “encrypted evergreen audience list” used to re-target ads on various platforms, including streaming TV and audio, display ads, paid social media, YouTube, Google, and Bing Search.
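CMG never published any technical details, but the workflow its archived page described — matching keywords against conversation snippets and compiling the hits into a hashed audience list — can be sketched, purely hypothetically, as follows. Every name here is illustrative; none comes from CMG, and hashing merely stands in for whatever the company meant by an “encrypted evergreen audience list.”

```python
# Hypothetical sketch of the keyword-matching pipeline CMG described.
# All function names and data shapes are invented for illustration.

import hashlib


def match_keywords(transcript: str, keywords: list[str]) -> list[str]:
    """Return the target keywords that appear in a conversation snippet."""
    text = transcript.lower()
    return [kw for kw in keywords if kw.lower() in text]


def build_audience_list(snippets: dict[str, str], keywords: list[str]) -> set[str]:
    """Collect hashed device IDs whose snippets match any target keyword.

    Hashing stands in for the "encrypted evergreen audience list" CMG
    mentioned; real ad platforms use their own opaque identifiers.
    """
    audience = set()
    for device_id, transcript in snippets.items():
        if match_keywords(transcript, keywords):
            audience.add(hashlib.sha256(device_id.encode()).hexdigest())
    return audience


snippets = {
    "device-a": "A minivan would be perfect for us",
    "device-b": "Nice weather today",
}
print(build_audience_list(snippets, ["minivan", "AC"]))
```

Only the first snippet matches, so only one hashed identifier lands in the audience set — which is the easy part; the hard, unexplained part is where the transcripts would come from in the first place.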

That explanation doesn’t appear to be on the Active Listening page anymore, but CMG still says it can target people who are actively saying things like, “A minivan would be perfect for us” or “This AC is on it’s [sic] last leg!” in conversations.

But are they actively listening?

In a statement emailed to Ars Technica, Cox Media Group said that its advertising tools include “third-party vendor products powered by data sets sourced from users by various social media and other applications then packaged and resold to data servicers.” The statement continues:

Advertising data based on voice and other data is collected by these platforms and devices under the terms and conditions provided by those apps and accepted by their users, and can then be sold to third-party companies and converted into anonymized information for advertisers. This anonymized data then is resold by numerous advertising companies.

The company added that it does not “listen to any conversations or have access to anything beyond a third-party aggregated, anonymized and fully encrypted data set that can be used for ad placement” and “regret[s] any confusion.”

Before Cox Media Group sent its statement, though, CMG’s claims of collecting data on “casual conversations in real-time,” as its blog stated, were questionable. CMG never explained how our devices could muster the computing and networking power necessary to record and transmit every conversation spoken within range in “real-time,” unbeknownst to the device’s owner. The firm also never explained how it acquired the type of access that requires law enforcement to obtain a warrant. This is despite CMG’s blog claiming that with Active Listening, advertisers would be able to know “the second someone in your area is concerned about mold in their closet,” for example.
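Back-of-the-envelope arithmetic shows why always-on covert capture would be hard to hide. Assuming telephone-quality speech audio (the figures below are a rough sketch; real codecs compress heavily, but even a tenfold reduction leaves a conspicuous footprint in a device’s data usage and battery stats):

```python
# Rough arithmetic on the bandwidth cost of continuous audio capture.
# Assumes uncompressed 16-bit mono PCM at a speech-recognition-typical rate.

SAMPLE_RATE_HZ = 16_000   # 16 kHz, common for speech processing
BYTES_PER_SAMPLE = 2      # 16-bit samples
SECONDS_PER_DAY = 86_400

bytes_per_second = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
gb_per_day = bytes_per_second * SECONDS_PER_DAY / 1e9

print(f"{bytes_per_second:,} bytes/s, ~{gb_per_day:.2f} GB per device per day")
```

That works out to roughly 2.8 GB per device per day of raw audio — the kind of upload volume a phone owner (or their carrier bill) would notice.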

CMG’s November blog post pointed to an unnamed technology partner that can “aggregate and analyze voice data during pre-purchase conversations,” as well as a “growing ability to access microphone data on devices.”

As mentioned, there are some use cases where brands claim to use voice data for advertising. Amazon, for example, has previously confirmed that it uses stuff people say (and do) with Alexa for targeted ads (Amazon has long claimed that it doesn’t sell customer data). But our devices are only expected to gather data on what we say when we ask them to listen to us. CMG implied that it gained access to stuff people say when they don’t think their devices are capturing their words. And the legal defense shared on its blog post felt flimsy:

We know what you’re thinking. Is this even legal? The short answer is: yes. It is legal for phones and devices to listen to you. When a new app download or update prompts consumers with a multi-page terms of use [sic] agreement somewhere in the fine print, Active Listening is often included.

It’s worth noting that some devices show a prompt requesting permission for recording or show an icon when an app starts using its microphone.

Adding more confusion around its claims, CMG’s Active Listening page highlights partnerships with Facebook, Microsoft, Google, and Amazon. But CMG appears to be merely a member of those companies’ ad partner programs; those companies aren’t selling devices affiliated with Active Listening.

Amazon spokesperson Eric Sveum told Ars Technica that Active Listening isn’t possible with an Echo device, noting that users can view and delete Alexa voice data sent to the cloud in the Alexa app’s Settings menu (Settings > Alexa Privacy > Review Voice History):

Echo devices are designed to only detect your chosen wake word (Alexa, Amazon, Computer, Echo, or Ziggy). No audio is stored or sent to the cloud unless the device detects the wake word. You’ll always know when Alexa is sending your request to the cloud because a blue light indicator will appear or an audio tone will sound on your Echo device.

Amazon’s rep also said it only shares information with third parties “to fulfill customer requests” and does not share voice recordings with third parties.

[Update 12/15/2023, 8:41 p.m. ET: A Google spokesperson sent Ars the following statement: “For years, Android has prevented apps from collecting audio when they’re not being actively used, and whenever an app activates a device’s microphone, there is a prominent icon displayed in the status bar.”]

Creepy, but not impossible

As creepy as CMG’s claims may sound, some are not farfetched. We already know, for example, that employees have used Amazon Ring cameras to spy on users. We’ve also seen Amazon record and send out a private conversation in an Echo user’s home before.

Voice assistants gave electronics an excuse to keep their mics on around the clock. And Big Tech has struggled to satisfy customers’ and privacy advocates’ expectations; it has also struggled to clearly define what sort of data devices gather from their users, when, and what is done with that information. This has led to a pile of litigation over the years, including cases that may continue for years and shape the future of consumer privacy.

For example, since 2019, there’s been a lawsuit against Google around claims that Google Assistant used voice data gathered after Google Assistant misinterpreted spoken words as one of Google Assistant’s prompts (like “Hey Google”). In July 2021, while seeking case dismissal, Google said it “never promises that the Assistant will activate only when plaintiffs intend it to.” Google has also said that it doesn’t retain audio recordings.

In 2022, Texas Attorney General Ken Paxton filed a lawsuit against Google, claiming it gathers voice and facial recognition data without user consent, which Google, in a statement, claimed was a mischaracterization of its products.

Another example is Apple battling claims that it’s been recording conversations without Siri being prompted to listen. Apple has denied this, saying in 2021 that Siri doesn’t listen to users unless triggered and that Apple “actively works to improve Siri to prevent inadvertent triggers and provides visual and audio cues … so users know when Siri is triggered.”

As the courts continue to determine whether or not our gadgets are listening to us when they shouldn’t be, and stakeholders nitpick over the definition of “when they shouldn’t be,” there’s interest from marketers and advertisers to invade personal devices for monetizable data. That alone is reason enough to reconsider how and where you use mic-equipped smart products in your home and your understanding of user agreements and privacy settings.

While CMG’s abilities ended up being exaggerated, the fact that its claims could even seem plausible tells us a lot about the murky state of consumer privacy and trust when it comes to personal smart devices.
