Why New York’s algorithmic-pricing law is a wake-up call for the AI era

AI continues to dominate the headlines, but New York's new algorithmic-pricing law marks the start of a major shift in how companies will be able to use AI to set prices.
The law requires companies to clearly disclose when a price is set by an algorithm using a consumer’s personal data. It’s one of the first laws of its kind in the U.S., and it shows how quickly expectations around AI transparency are changing. For businesses, it’s a reminder that trust is now as important as technology.
Earlier this year, for instance, Delta Air Lines faced backlash when it announced plans to expand its use of AI in pricing domestic flights. While Delta clarified that the AI was meant to improve existing algorithms, not to set fares using personal information, consumers were deeply concerned.
This law is more than a single-state effort. It is one that B2C marketers and service leaders need to pay attention to.
What the law actually does
Personalized pricing isn’t new. For years, algorithms have quietly shaped the prices consumers see based on factors like purchase history, ZIP code, or even how urgently the system predicts they need a product.
The new law doesn’t ban this practice, but it does require companies to disclose it. If a price is determined using personal data and an algorithm, that must be stated clearly, every time.
It sounds simple, but it fundamentally changes the relationship between businesses and the people they serve.
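The compliance mechanics the law implies can be sketched simply: whenever personal data feeds the pricing algorithm, attach the disclosure to the quote the customer sees. A minimal illustration, assuming a hypothetical storefront; the names, the disclosure wording, and the 5% urgency adjustment are all invented for the example, not any real pricing system:

```python
# Hypothetical sketch: tag algorithmically personalized prices with a
# disclosure whenever personal data influenced the price shown.
from dataclasses import dataclass
from typing import Optional

DISCLOSURE = "This price was set by an algorithm using your personal data."

@dataclass
class Quote:
    price: float
    personalized: bool
    disclosure: Optional[str] = None

def quote_price(base_price: float, customer_data: Optional[dict]) -> Quote:
    """Return a price quote; attach the disclosure whenever personal
    data fed the pricing algorithm."""
    if customer_data:
        # Illustrative adjustment only -- stands in for a real pricing model.
        price = round(base_price * (1.0 + 0.05 * customer_data.get("urgency", 0)), 2)
        return Quote(price=price, personalized=True, disclosure=DISCLOSURE)
    # No personal data used: plain price, no disclosure required.
    return Quote(price=base_price, personalized=False)
```

The design point is that the disclosure travels with the price itself, so no downstream display layer can show a personalized number without it.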
Why it matters now
When AI influences something as personal as price, the stakes rise quickly. Algorithmic pricing introduces real risks even when companies aren’t acting with malicious intent:
- Prices might skew higher for certain communities.
- Decision-making patterns can mirror the biases in the models' training data.
- Without transparency, consumers may never know why they're being charged more than someone else.
For years, personalization has been defined by what technology can do. Now it's being redefined by what customers will accept. People are setting a boundary: they want experiences shaped with their data, not at the expense of it.
This puts brands at a crossroads. Not between personalization and privacy, but between approaches that are explainable and those that aren’t. When customers understand why they’re seeing a certain price, product, or message, engagement rises. When they don’t, trust erodes quickly.
The real competitive edge going forward isn't more data or more AI; it's the ability to use both in ways people can understand and feel comfortable with.
A turning point for customer trust
At Klaviyo, we see this as a chance to raise the bar for the kind of relationships brands can build with their customers.
Transparency isn’t just a safeguard; it’s a design principle. When people can see and understand how AI shapes their experience, it creates something technology alone can’t: confidence. Confidence that a brand is acting fairly. Confidence that an experience reflects their preferences, not their vulnerabilities. Confidence that personalization serves them, not the system.
This is the future we’re building toward:
- AI that explains itself
- Data use that earns trust rather than assuming it
- Personalization that feels collaborative, not extractive
New York’s law may be the first spark in the AI compliance conversation. But it also underscores that in an AI-driven world, trust will be the most valuable product any brand offers.
