On 8 April 2020, the Federal Trade Commission ('FTC') issued a blog post ('the Blog Post') on the use of artificial intelligence ('AI') technology and algorithms. In particular, the Blog Post emphasises that the use of AI tools, whether in health, financial, or other industries, should be transparent, explainable, fair, and empirically sound, while always fostering accountability. Furthermore, the Blog Post recommends that businesses using AI tools should, among other things, provide consumers with information on how automated tools are being used, be transparent when collecting sensitive data, and, where necessary, provide consumers with an adverse action notice when making automated decisions based on information from a third-party vendor.
The need to be transparent with AI usage
K.C. Halm, Partner at Davis Wright Tremaine LLP, told OneTrust DataGuidance, “As noted in the FTC guidance, AI is used for a broad variety of tasks in many different parts of our lives. For example, AI tools and systems (such as machine learning, neural networks, computer vision, natural language processing and other methods) power AI applications that make recommendations on investments, credit and housing. AI is also used to help diagnose certain medical conditions, track missing or exploited children, and enhance cybersecurity and network optimisation. And, in this time of social distancing (and associated binge watching) many of us benefit from the video and music content recommendations, dynamic targeted ads and virtual assistants that are all enabled by AI tools and systems.”
The FTC stresses the importance of transparency in the use of AI and further highlights that organisations should not deceive customers about AI, especially when it is used in the background, providing examples of complaints in which users had allegedly been deceived through the use of fake profiles, followers, or subscribers, which led to enforcement actions by the FTC. In this sense, the FTC recommends that organisations be meticulous when collecting audio or visual data, and notes that secretly collecting any sensitive data for an algorithm may also give rise to an FTC action.
Halm continued, “Organisations using AI tools and systems must ensure that they do so in a manner that complies with the many existing laws that may apply, including those involving limitations on the use of biometric data and non-discrimination laws surrounding certain protected classes, such as those involving housing, employment and access to credit. As the FTC points out, existing law reaches the use of AI in a variety of different ways, including when making decisions about lending or credit, when using AI for customer service or other engagement with the public (i.e. through chatbots), and in hiring. Further, a number of new laws have recently been adopted that are likely to limit certain uses of AI. For example, in Illinois employers that use AI in the video hiring process are required to provide notice and obtain consent from the interviewee prior to use of the technology.”
Steps to implement recommendations
The FTC notes that organisations using AI should think about how to hold themselves accountable and consider drawing on independent standards or independent expertise. In addition, the FTC outlines that, before deploying AI, operators of algorithms must be able to answer key questions about their data sets and models, including how representative they are, the accuracy of their Big Data predictions, and whether relying on Big Data would raise ethical or fairness concerns. Moreover, the FTC notes that such outside tools and services are increasingly available as AI is used more frequently, and companies may want to consider using them.
Halm concluded, “Many companies currently use AI in a manner that is transparent and accountable. These principles are often reflected in internal governance and policy documents that organisations adopt to make sure that their use of AI is ethical and trustworthy […] To ensure compliance with the FTC guidance and other potentially applicable laws, companies using AI should:
- develop and adopt governance principles or policies to ensure any AI being used is ethical and trustworthy;
- ensure that decisions or outcomes affecting an individual’s quality of life, autonomy, or range of opportunity are justifiable and transparent to potentially affected persons; and
- ensure continuing compliance by conducting regular tests and audits of the AI systems to ensure that they are not leading the company to take actions that may be unlawful, biased or discriminatory.”
ALEXANDER FETANI Privacy Analyst
Comments provided by:
K.C. Halm Partner
Davis Wright Tremaine LLP