A first positive step would be requirements for full, active disclosure.

For example, as people discovered cookies and how they were being used several years ago, at least some of the negative reaction was surprise as much as rejection of the idea per se. Learning how cookies were being used was a fairly slow process, in part because those using them resisted disclosure; it took a lot of reverse engineering and media interest to eventually provide some insight.

The disclosure of Facebook's relationship with Cambridge Analytica took a similar path. In part the reaction was to the actual practices, but it was also shock and surprise as those practices came to light long after (in that case) the elections potentially affected were over. The same pattern holds for other newer tracking and surveillance technologies.

So the question is: what are the obligations of those using these technologies (protocols, etc.) for purposes that can affect users to actively inform those users and others? "Others" might include, for example, their investors, since unmanaged or poorly managed disclosure and the resulting media, legal, or legislative backlash can certainly affect a company's investment value, as we saw in the kerfuffle between Apple and Facebook over tracking technologies in apps.

Is this very different from the usual list of risks in a company's 10-K filing -- the US corporate annual report required of public companies, which among other things must disclose known risks to potential investors? It is analogous to the recent trend toward listing ESG (Environmental, Social, Governance) risks in 10-Ks and similar filings. In the past few years, characterizing and disclosing those risks has become a major activity and industry.

-- 
        -Barry Shein

Software Tool & Die    | bzs@xxxxxxxxxxxx             | http://www.TheWorld.com
Purveyors to the Trade | Voice: +1 617-STD-WRLD       | 800-THE-WRLD
The World: Since 1989  | A Public Information Utility | *oo*