The Federal Trade Commission has been on a mission to regulate healthcare privacy in various ways. Now, we have signs that it is also going after the next big thing: AI in healthcare. Specifically, in a recent action against a telehealth company, the agency ordered the company to delete a very broad swath of consumer information except as used in connection with treatment and payment. The agency took pains to specify that such information is off-limits for “developing, training, refining, improving, or otherwise enhancing any Data Product” except to deliver healthcare at the consumer's request.
Why It Matters
There are several noteworthy things about this order:
- It arose partly in the context of targeted advertising, in response to complaints about use of third-party trackers for “advertising, analytics, or other services” commonly provided by companies such as LinkedIn and TikTok;
- It will require the company to disgorge (delete) information it has collected for advertising and related services;
- It directly regulates the use of consumer data in a machine learning (ML) model; and
- It applies to nearly all data collected about the user (including IP address, demographic information, contact details, photos, and videos), not just their sensitive or health data.
This case is very likely to have repercussions for many kinds of online businesses, not just those in healthcare. The fact that it targets common internet tools and technologies is noteworthy for any website operator that uses third-party analytics or advertising. The fact that it aims to protect most consumer data, not just health information, may also have implications for services unrelated to healthcare. And, finally, the fact that it directly targets ML signals that the agency intends to do in AI what it has done in privacy: proceed, aggressively, to make new rules even in the absence of clear standards from Congress.