The Next Frontier of Surveillance: Investigating Pricing Systems, by Stephanie T. Nguyen
Advertising and pricing technologies have been converging, enabling real-time price changes powered by billions of granular data points. This creates a powerful incentive for companies to optimize profit at the individual level. As firms race to outdo one another, their systems will become more opaque, faster, and harder for consumers to avoid. This dynamic could also favor larger firms, whose platforms hold millions of users' data to sell.
As these targeting systems become more opaque, widespread, and challenging to avoid, researchers, technical experts, and investigators will be critical to uncovering evidence, identifying harms, and building the foundation for public scrutiny and enforcement action. Three areas of research are especially urgent: starting with the harms that stem from surveillance, moving upstream to examine the infrastructure that enables those harms, and then investigating example cases of pricing methods used to target consumers and workers.
***
1. Harms stemming from surveillance
The most recognizable harms are mental, financial, or physical.
Mobilewalla: Surreptitious collection of precise location data and mobile advertising identifiers (MAIDs) from real-time bidding ad exchanges
The FTC alleged that Mobilewalla, a data broker, collected precise location data and mobile advertising identifiers (MAIDs) from real-time bidding (RTB) exchanges[1] and third-party aggregators. It used this information to build audience segments targeting sensitive populations, including pregnant women, Hispanic churchgoers, and LGBTQ+ communities.
The complaint highlighted harms ranging from emotional distress to physical violence and discrimination, which could layer financial harms atop privacy violations.
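Footnote 1 describes the bid-request mechanics; a minimal sketch of what such a request might carry makes the exposure concrete. The field names loosely follow the OpenRTB convention, and the values are hypothetical, not data from the case:

```python
# Illustrative sketch of an RTB bid request, with field names loosely
# following the OpenRTB convention. Hypothetical values; real requests
# vary by exchange.
bid_request = {
    "id": "auction-7f3a9c",                    # one auction, one ad slot
    "app": {"bundle": "com.example.weather"},  # the app the person is using
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",        # MAID: a persistent ad ID
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # type 1 = GPS-derived
        "os": "Android",
    },
}
# The structural problem the complaint describes: this request is broadcast
# to every participating bidder, and each one can log the MAID and precise
# location whether or not it wins the auction.
```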
Rite Aid: Faulty facial recognition technology that can lead to false positives, racial profiling, stigmatization, and emotional distress
Rite Aid deployed facial recognition technology in hundreds of stores, prioritizing what it called “urban” areas and locations along public transportation routes. The system misidentified people at a high rate, disproportionately impacting racial and ethnic minority populations. Some customers were erroneously flagged as shoplifters or wrongdoers. These misidentifications led to wrongful searches or arrests, emotional distress, and reputational harm, and left some people on watchlists for years without re-evaluation or a clear appeals process.
Flawed and invisible systems can exploit location, behavior, and demographic data to profile individuals – and potentially show them worse prices or exploitative ads, or deny them services. Mass deployment of flawed technologies like these risks structural discrimination, whether offline through facial recognition or online through ad targeting and pricing.
Gravy Analytics: Sensitive inferences from location data for commercial targeting
According to an FTC complaint, data broker Gravy Analytics and its subsidiary Venntel collected geolocation data – often sourced from third parties – and sold insights derived from it to commercial and government clients. The company categorized consumers into segments based on sensitive characteristics inferred from location data, such as medical conditions, political activities, and religious beliefs, and then sold these audience segments to third parties.
Such inferences can be used to classify individuals by vulnerability or life circumstances – which could be monetized in price targeting systems. For example, someone who is undergoing medical treatment could be targeted with higher prices for health-related products or services.
***
2. Going upstream to the infrastructure
While harms make surveillance’s effects visible, they are symptoms of deeper structural forces.[2] Consider something as mundane as price tracking on Amazon. Tools like CamelCamelCamel show how often and how quickly prices change – sometimes dozens of times a day – drawing on roughly one billion gigabytes of data from 200 million users. For popular products like Apple AirPods, prices can fluctuate by $40 or more. And CamelCamelCamel is hardly the only Amazon price tracker: there are Keepa, PriceBefore, PriceSpy, GlassIt, Earny, and previously PriceZombie.[3]
The fact that these websites exist and have millions of users highlights how consumers need workarounds just to keep pace with companies. In other words, people turn to third-party sites to understand when a price is reasonable enough to buy. And even then, these sites fall short: they cannot explain how or why a price changed – which data inputs drove it, or who changed it, whether Amazon itself, a third-party seller, or some other surveillance pricing software.
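To see what these trackers surface, here is a minimal sketch of the kind of analysis they perform, assuming a simple list of timestamped price observations. The data below is hypothetical:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical price history for one product: (timestamp, price) observations,
# the kind of data tracking sites accumulate by repeatedly polling a listing.
history = [
    (datetime(2025, 7, 1, 8, 0), 199.99),
    (datetime(2025, 7, 1, 11, 30), 174.99),
    (datetime(2025, 7, 1, 16, 45), 214.99),
    (datetime(2025, 7, 2, 9, 15), 189.99),
]

changes_per_day = defaultdict(int)
ranges = defaultdict(lambda: [float("inf"), float("-inf")])
prev_price = None
for ts, price in sorted(history):
    day = ts.date()
    if prev_price is not None and price != prev_price:
        changes_per_day[day] += 1
    lo, hi = ranges[day]
    ranges[day] = [min(lo, price), max(hi, price)]
    prev_price = price

for day, (lo, hi) in ranges.items():
    print(day, f"changes={changes_per_day[day]}", f"swing=${hi - lo:.2f}")
```

Even this simple accounting exposes the swings a shopper faces; what no outside tracker can recover is who changed the price, or why.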
Price mechanisms are evolving faster than humans can respond. Three FTC cases illustrate how this works in practice.
X-Mode Social, Inc.: Using persistent identifiers to build user profiles for targeted advertising
According to the FTC’s lawsuit, data broker X-Mode Social[4] collected precise geolocation data from people’s phones through an SDK embedded in third-party apps, and tied that location data to a unique ID (such as a Mobile Advertising ID, or MAID) – allowing it to build a long-term, personally revealing profile of a person’s movements over time.[5] The company then sold audience segments based on sensitive user characteristics, such as visits to medical offices, the FTC complaint outlines. The FTC’s case targeted the architecture of commercial surveillance rather than isolated behaviors: the agency banned X-Mode from selling, licensing, transferring, or sharing sensitive data such as precise location data, and from using sensitive location data to build persistent identifiers and profiles.
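To illustrate why tying pings to a single MAID is so revealing, here is a toy sketch – not X-Mode’s actual code – showing how even a handful of timestamped pings yields likely home and work locations:

```python
from collections import Counter

# Illustrative only: how location pings keyed to one advertising ID (MAID)
# become a profile. Each ping is (hour_of_day, rounded_lat, rounded_lon).
pings_by_maid = {
    "38400000-8cf0-11bd-b23e-10b96e40000d": [
        (2, 38.90, -77.04), (3, 38.90, -77.04),    # overnight -> likely home
        (10, 38.88, -77.02), (14, 38.88, -77.02),  # weekday daytime -> likely work
        (12, 38.91, -77.05),                       # a one-off visit, e.g. a clinic
    ],
}

for maid, pings in pings_by_maid.items():
    night = Counter((lat, lon) for h, lat, lon in pings if h < 6 or h >= 22)
    day = Counter((lat, lon) for h, lat, lon in pings if 9 <= h <= 17)
    profile = {
        "likely_home": night.most_common(1)[0][0] if night else None,
        "likely_work": day.most_common(1)[0][0] if day else None,
        "places_visited": {(lat, lon) for _, lat, lon in pings},
    }
    # Cross-referencing places_visited against points of interest (a medical
    # office, a church) is what turns raw pings into "sensitive" segments.
```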
Avast: Using browsing data via antivirus software to repackage web tracking as analytics
The FTC alleged that Avast, a trusted antivirus provider, secretly collected detailed browsing data from people who installed its software – including page visits, search queries, clicks, scroll behavior, and online purchases.[6] Through its subsidiary Jumpshot, Avast combined this browsing information with other persistent identifiers, tracked user behavior over time to create detailed behavioral profiles,[7] and sold that information to more than 100 third parties.[8] What consumers saw as a trusted security service was, underneath, a data collection and monetization pipeline: software that continuously logged browsing behavior, linked it to a persistent identifier, and sold it to commercial clients. The FTC’s order tackled the structural pipeline that made this surveillance and monetization possible, treating web browsing data as sensitive and banning Avast from selling browsing data for any advertising or marketing purposes.
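A minimal sketch of how a clickstream tied to one persistent identifier becomes a behavioral profile – the rules and URLs here are hypothetical, not Jumpshot’s actual pipeline (see footnote 6 for real examples of the pages at issue):

```python
import re

# Illustrative: clickstream rows tied to one persistent device ID, the
# general shape of the data the complaint describes.
clickstream = [
    ("device-123", "2024-01-05T09:12", "https://example-health.org/breast-cancer-symptoms"),
    ("device-123", "2024-01-05T21:40", "https://example-jobs.gov/fort-meade?salary=100000"),
    ("device-123", "2024-01-06T08:03", "https://example-news.com/elections/candidate-announcement"),
]

SEGMENT_RULES = {
    "health": re.compile(r"health|cancer|symptom"),
    "job-seeker": re.compile(r"jobs|salary|career"),
    "politics": re.compile(r"election|candidate|campaign"),
}

profiles = {}
for device_id, ts, url in clickstream:
    segments = profiles.setdefault(device_id, set())
    for name, pattern in SEGMENT_RULES.items():
        if pattern.search(url):
            segments.add(name)

print(profiles)  # {'device-123': {'health', 'job-seeker', 'politics'}}
```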
BetterHelp: Using intake questions and web beacons to target new users for ads
BetterHelp, a mental health counseling service, collected intimate intake information from people seeking therapy, including details about depression and suicidal thoughts. Despite privacy promises, BetterHelp shared some of this data, along with contact information, with third-party ad platforms including Facebook, Snapchat, and Pinterest – using both manual uploads, like hashed email lists, and automatic tools, like embedded web beacons (hidden tracking pixels). It also used existing users’ health data to identify prospective users with similar characteristics and ran ads to those people. The FTC banned BetterHelp from sharing sensitive health data for advertising or retargeting and directed the third parties to delete the consumer health and other personal data BetterHelp had shared with them.
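The “hashed email list” mechanism is worth demystifying. Ad platforms’ custom-audience tools generally accept hashes (commonly SHA-256) of normalized email addresses and match them against their own user bases. A minimal sketch, with a hypothetical address:

```python
import hashlib

# How a "hashed email list" upload generally works: emails are normalized
# and hashed (commonly SHA-256) before upload, and the ad platform matches
# the hashes against its own users. Hashing obscures the address in transit
# but still singles out specific people.
def normalize_and_hash(email: str) -> str:
    canonical = email.strip().lower()
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

intake_emails = ["person.seeking.therapy@example.com"]  # hypothetical
upload = [normalize_and_hash(e) for e in intake_emails]
# The platform computes the same hash for its own users, so a match reveals
# exactly who on the platform is a customer of the service.
```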
These cases show that surveillance harms cannot be seen in isolation – they are the product of infrastructures that can normalize persistent data collection and profiling.
***
3. Case-driven pricing models
Companies use personal data to target people with ads. Increasingly, that same data can be used to target and adjust the prices people see and pay. Several investigations reveal how this plays out in practice – building a clearer picture of how enforcers have approached pricing systems that harm consumers or competition.
Algorithmic price collusion: RealPage
The DOJ and several states sued RealPage, a software firm whose rent-setting software enabled landlords to share private rental data and use algorithmic tools to coordinate prices. This drove up rents for millions of tenants – in some cases by over 25% within 11 months of implementing the pricing advice. The software discouraged landlords from offering lower prices and used pooled data to generate price recommendations aligned with competitors’ prices – effectively functioning as algorithmic price fixing. The case highlights how data and automation are turning price coordination into a more scalable, opaque, and sophisticated practice. RealPage’s algorithm collected market behavior and converted it into coordinated pricing at renters’ expense, with serious consequences for competition and affordability.
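A toy model – emphatically not RealPage’s actual algorithm – shows the structural point: when competitors feed nonpublic prices into one engine and follow its advice, prices converge and ratchet upward together:

```python
# Toy model of pooled price recommendations -- not RealPage's actual
# algorithm, just the structural dynamic the complaint describes.
def recommend(unit_rent: float, pooled_competitor_rents: list[float]) -> float:
    market = sum(pooled_competitor_rents) / len(pooled_competitor_rents)
    # Recommend at or slightly above the pooled market figure, and never
    # below the landlord's current price -- discouraging undercutting.
    return max(unit_rent, market * 1.02)

landlord_rents = {"A": 1800.0, "B": 1850.0, "C": 1900.0}
for _ in range(12):  # each "month", every landlord follows the advice
    pooled = list(landlord_rents.values())
    landlord_rents = {k: recommend(v, pooled) for k, v in landlord_rents.items()}

print(landlord_rents)  # all three converge and ratchet upward together
```

Run over twelve rounds, this toy converges all three rents and raises them by roughly a quarter or more – the same order of magnitude alleged in the case.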
Fake discounts: Retailers’ reference prices
In class action lawsuits, JCPenney, Home Depot, and Vineyard Vines were accused of listing original prices that were artificially high, making the advertised “sale” prices appear more attractive to consumers. As a result, consumers paid more than expected or were deceived into thinking they were getting a substantial discount. One consumer purchased a two-quart air fryer from JCPenney.com supposedly discounted from $60 to $39.99 – but the item had been offered at that price for months, longer than the 90-day period state law allows for advertising a former price.
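The underlying check is simple enough to express in a few lines: was the advertised “original” price actually the prevailing price at any point in the 90 days before the sale? The data here is hypothetical, patterned on the air fryer example:

```python
from datetime import date, timedelta

# A minimal check in the spirit of the reference-price rules at issue:
# did the advertised "original" price actually appear in the 90 days
# before the sale started? (Hypothetical data.)
advertised_original = 60.00
price_log = [(date(2025, 1, 1) + timedelta(days=i), 39.99) for i in range(240)]

sale_start = date(2025, 8, 1)
window = [p for d, p in price_log if sale_start - timedelta(days=90) <= d < sale_start]
was_genuine = advertised_original in window

print(was_genuine)  # False: the item sold at $39.99 the whole time,
                    # so the "$60 -> $39.99 sale" is a fabricated discount
```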
Bait-and-switch pricing: Target and Walmart
District attorneys in California sued Target over shelf prices that did not match checkout prices, both in stores and online. Walmart was similarly accused of charging 10-15% more at checkout than the listed shelf price, especially on food and other household goods. The FTC also took action against Lyft for alleged deceptive earnings claims: the company advertised hourly wages reflecting the top 20% of drivers, rather than the income an average driver could expect to earn. As with surveillance ads and pricing, a company can continually adjust prices through its systems without any transparency to the customer or worker.
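Investigations like these reduce to a simple paired comparison between posted and charged prices. A sketch with hypothetical observations:

```python
# Sketch of the kind of audit the district attorneys' offices describe:
# compare posted shelf prices to what the register actually charges.
# (Hypothetical observations, not data from the cases.)
observations = [
    {"item": "cereal", "shelf": 3.99, "checkout": 4.49},
    {"item": "detergent", "shelf": 9.99, "checkout": 9.99},
    {"item": "paper towels", "shelf": 5.49, "checkout": 6.29},
]

overcharged = [o for o in observations if o["checkout"] > o["shelf"]]
for o in overcharged:
    pct = (o["checkout"] - o["shelf"]) / o["shelf"] * 100
    print(f'{o["item"]}: +{pct:.0f}% over the shelf price')

rate = len(overcharged) / len(observations)
print(f"{rate:.0%} of sampled items rang up higher than posted")
```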
Algorithmic wage discrimination: Uber and Lyft
Researchers have been examining algorithmic wage discrimination, where gig platforms like Uber and Lyft use opaque algorithms that draw on granular data – such as location, behavior, or demographics – to set unpredictable, and potentially discriminatory, driver wages. Studies show that such pricing tactics have increased driver wait times and wage inequality and reduced pay predictability. Another study highlighted how wages are significantly influenced by factors such as race, ethnicity, health insurance status, tenure on the platform, and working hours. On the legal front, the Rideshare Drivers United litigation alleges that Uber uses opaque dynamic pricing and pay-setting algorithms to undercompensate drivers. In the Netherlands, gig workers sued Uber and Ola Cabs to demand transparency into how algorithms determined their pay, assignments, and terminations – resulting in court rulings that forced both companies to disclose details of their decision-making and provide human review.
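A toy formula – not any platform’s actual method – illustrates the core finding: when pay depends on a per-driver multiplier the driver never sees, identical trips produce different wages:

```python
# Illustrative only -- not any platform's actual formula. The structural
# point from the research: pay can depend on a per-driver multiplier the
# driver never sees, so identical trips yield different wages.
def driver_pay(miles: float, minutes: float, personal_multiplier: float) -> float:
    base = 1.00 + 0.60 * miles + 0.25 * minutes  # visible components
    return round(base * personal_multiplier, 2)  # hidden adjustment

# Same trip, two drivers, different hidden multipliers (e.g., inferred from
# tenure, behavior, or willingness to accept low offers):
print(driver_pay(miles=8.0, minutes=22, personal_multiplier=1.00))  # 11.30
print(driver_pay(miles=8.0, minutes=22, personal_multiplier=0.82))  # 9.27

# Because the rider's fare is set by a separate algorithm, the platform's
# take can also differ invisibly across drivers and trips.
```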
***
Entrenched Infrastructure, Escalating Harms
Surveillance pricing is clearly evolving faster than consumers can keep up with while they shop. Do we want to live in a world where being a consumer means constantly checking and comparing prices just to avoid being overcharged? Many people now rely on third-party price tracking tools and websites just to know when a product is reasonably priced – but even those tools lag behind companies’ real-time pricing systems, which can change prices in milliseconds based on user behavior, location, or device. The burden of navigating this complexity increasingly falls on individuals, while the systems themselves remain opaque and unaccountable.
This dynamic also creates a growing divide in who can keep up with the changes. Consumers with more time, digital fluency, and access to tools like price trackers are better positioned to navigate constantly shifting prices. In contrast, those with fewer resources – or those in urgent need – often do not have the luxury of comparing options because they may need the product or service now or lack the tools to know they are being overcharged. As pricing systems become more opaque, widespread, and challenging to avoid, those with resources will be able to adapt while others may be left to absorb the costs.
It is critical that researchers help uncover the areas that enable and sustain surveillance pricing – starting with the harms stemming from surveillance, going upstream to the infrastructure, and concluding with the pricing methods used to target consumers and workers differently.
We regulated commercial surveillance too late – after the infrastructure was entrenched and the harms widespread. Let’s not make the same mistake with surveillance pricing.
***
Stephanie T. Nguyen is a Senior Fellow researching the intersection of technology, artificial intelligence and regulation at the Vanderbilt Policy Accelerator and the Georgetown Institute for Technology Law & Policy. She was previously Chief Technologist and led the Office of Technology at the Federal Trade Commission. This post is adapted from a keynote speech delivered in July 2025, in Washington, D.C. at the Privacy Enhancing Technologies Symposium.
[1] According to the FTC complaint, “the primary purpose of RTB exchanges is to enable instantaneous delivery of advertisements and other content to consumers’ mobile devices, such as when scrolling through a webpage or using an app. An app or website implements a software development kit, cookie, or similar technology that collects the consumer’s personal information from their device and passes it along to the RTB exchange in the form of a bid request. In an auction that occurs in a fraction of a second and without consumers’ involvement, advertisers participating in the RTB exchange bid to place advertisements based on the consumer information contained in the bid request. Advertisers can see and collect the consumer information contained in the bid request (even when they do not have a winning bid) and successfully place the advertisement.”
[2] For example: To what extent can the architecture built around targeted advertising be replicated and expanded for targeted pricing? What infrastructural components of the surveillance advertising ecosystem enable or amplify these harms – and how do they interact (e.g., through data collection, aggregation, profiling, and automated decision-making)?
[3] PriceZombie compared Amazon prices with other retailers. It has since shut down following claims that it broke affiliate rules by comparing Amazon prices to competitors’ prices.
[4] X-Mode Social is a data broker that collects, licenses, and sells precise geolocation data to third parties (like advertisers, analytics firms, research organizations, and private government contractors). It does this via a mobile SDK embedded in hundreds of apps, such as Drunk Mode and Walk Against Humanity.
[5] Like X-Mode Social, InMarket, Kochava, and Gravy Analytics were charged with tracking people via mobile SDKs, linking data to persistent identifiers, and using or selling that data.
[6] According to the Avast Complaint, “For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.”
[7] These included the type of device, browser, city, state, and county.
[8] This also included advertisers, SEO firms, consulting firms, individual brands, and data brokers.