Everyone has seen the headlines—data privacy is under the national spotlight. It’s getting so much attention that “it overshadows other areas of technology that are attracting regulatory attention and will have an equally large impact on the broader data ecosystem,” shares Stacey Rolland, Senior Vice President of Emerging Technologies and Data at Forbes Tate Partners.
A former executive in governance, risk management, and compliance at Capital One, Rolland previously served as Chief of Staff to the Treasury Department’s Office of Tax Policy and as a senior advisor to its Office of Legislative Affairs; she has spent much of her career working at the intersection of government relations and tech policy. In her session at Airside Live, Rolland discussed the ongoing evolution of data governance and AI regulation, and how companies can prepare for what’s to come.
Regulation of Emerging Technologies and Data Ecosystems
Legislative and regulatory attention now extends beyond data privacy to areas such as AI. Industry, however, is moving faster than regulators can respond. The result is that gray spaces—areas lacking clearly defined roles or rules—have opened up within multiple regulatory frameworks. As Rolland puts it, “Emerging technologies, big data, and algorithm-based business practices have created these gray spaces within multiple regulatory frameworks, and these are spaces that Congress and federal agencies are now navigating and seeking to clarify.”
Legislation in both the U.S. and EU is still at the preliminary proposal stage. Even so, these proposals signal the future direction of regulation for emerging technologies, algorithms, and data ecosystems.
Differing Approaches, Common Intent
Data privacy legislation in the U.S. and EU has been primarily based on broad principles and the creation of consumer rights. For emerging technologies, algorithms, and data ecosystems, however, regulators have taken a more prescriptive, risk-based approach, requiring documentation, audit trails, and assessments. Regulatory bodies are scrutinizing uses of artificial intelligence that they consider high risk and that can have profound implications for industry, consumers, and security.
The EU has taken bold steps and a broad brush in regulating artificial intelligence, seeking to position Europe as a leader in the development of secure and trustworthy AI. Any use case that includes biometric identification or operates in critical infrastructure, education, employment, public and private services, law enforcement, migration, or state administration is defined as high risk.
Initial legislative efforts to regulate AI in the United States—through proposed bills like the Algorithmic Accountability Act and the Algorithmic Fairness Act—are more modest. Unlike the EU, the U.S. has taken a systems-based approach, seeking to regulate systems that pose a significant risk to privacy or security or that produce inaccurate, unfair, or biased decisions. Led by the Federal Trade Commission (FTC), the U.S. seeks to regulate any system that facilitates human decision-making based on consumer evaluations; uses personal information related to race, religion, health, gender, and similar characteristics; or monitors public places, along with other criteria the FTC might set.
“[Regulators] intend to use their full authority to regulate data gaps, algorithm design flaws, and transparency,” Rolland declares. That intention is clear in an April 2021 FTC blog post, which states that companies must be able to demonstrate that their AI doesn’t exhibit bias. Developers are expected to control for discriminatory outcomes, retest their algorithms over time, provide transparency, and bring in independent reviewers to catch any missed bias. Companies also need to disclose potential gaps in the data used in their AI systems and how they are using consumer data.
“It behooves you to pay very close attention to what the FTC is requiring,” Rolland says. “I want to highlight a quote that they put in their blog post: Hold yourself accountable or be ready for the FTC to do it for you.”
To stay ahead of coming regulations, companies must ensure their business practices are documented and employ a risk management framework. Rolland points to the financial services industry, which has a mature regulatory regime, as an established example of how future national strategy on emerging technologies and the data ecosystem could look.
With the rapid adoption of AI and algorithms throughout industry, regulators want to see companies act responsibly. They expect AI to be treated like any other aspect of the business: identify and analyze risk; adopt and document risk mitigation and controls; manage data; and assess for assumptions, suitability, bias, and gaps. They also want documentation of how AI systems are developed and trained, and how they perform over time.
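To make the documentation expectation concrete, here is a minimal, hypothetical sketch of a machine-readable model record that captures development, training, and performance over time. All field names and values are illustrative assumptions, not a regulatory standard or any agency’s required format.

```python
import json
from datetime import date

# Illustrative "model record"; every field name here is hypothetical.
model_record = {
    "model_name": "credit_line_scorer",        # hypothetical model
    "owner": "risk-analytics-team",
    "intended_use": "Rank applications for manual review",
    "training_data": {
        "source": "internal_applications_2019_2021",
        "known_gaps": ["thin-file applicants underrepresented"],
    },
    "risk_controls": ["human review of declines", "quarterly bias retest"],
    "performance_log": [],                     # appended at each retest
}

def log_performance(record, auc, disparity_ratio):
    """Append a dated performance snapshot so drift stays auditable."""
    record["performance_log"].append({
        "date": date.today().isoformat(),
        "auc": auc,
        "approval_disparity_ratio": disparity_ratio,
    })

log_performance(model_record, auc=0.81, disparity_ratio=0.92)
print(json.dumps(model_record["performance_log"][-1], indent=2))
```

Keeping a dated log like this, rather than overwriting a single metrics snapshot, is what lets a reviewer see how the system has performed over time.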
Companies can expect a broader interpretation of “unfair and deceptive practices,” larger and more frequent fines, greater coordination with the Consumer Financial Protection Bureau (CFPB) and other agencies, and an expansion of the FTC’s regulatory authority.
Take, for example, the CFPB, which in October released an advance notice of proposed rulemaking as it considers how to regulate consumers’ access to their own data. CFPB rulemaking can have far-reaching implications for how consumers, banks, non-banks, and technology firms access and share data. From restrictions on data use and downstream liability to navigating multiple regulatory agencies and distributing the cost of data security, the CFPB’s conclusions will have a significant impact on industry.
The CFPB is not alone. Other agencies are also looking at AI and the data ecosystem. The Federal Reserve, FDIC, and the National Credit Union Administration have all taken steps to gather information for rulemaking, understand how financial institutions are using AI, and assess how risks are being managed. Their goal is to ensure a regulatory framework for the U.S. financial system that enables safe and responsible innovation.
Strategically Future-Proofing Your Business
Rolland urges, “It’s really time for companies to move beyond a wait-and-see approach and to start thinking about a regulatory strategy and government engagement.” Even if near-term guidance leans toward a broad principles-based approach, companies must proactively adopt a risk-based approach. A basic risk and control framework and clear documentation will set you up to better meet requirements, even in a principles-based regulatory environment.
Developing a Game Plan
Just because there are no clear guidelines from the federal government today doesn’t mean there won’t be requirements in the future. Given the high financial and reputational costs of the FTC finding that your company has employed deceptive practices, companies need to prepare now for additional government scrutiny of AI. As Rolland puts it, “You need to make sure there aren’t hidden incentives within your organization just to green-light decisions that have risks which haven’t been thoroughly assessed.”
Companies can future-proof their business systems by doing the following:
- Document key decision-making throughout the data ecosystem
- Establish processes and controls to identify and effectively manage risks
- Plan risk, impact, and data security assessments that ensure accountability
- Implement an audit system to regularly check the input data and the results being generated by AI
- Develop training to raise awareness of inherent biases in data and create a culture that supports speaking up
- Increase transparency to consumers about their data and use of AI
- Empower internal and external stakeholders to investigate and rectify any bias and security flaws
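As one concrete illustration of the audit item above, the sketch below compares outcome rates across groups in a sample of AI-generated decisions. This is a minimal example under stated assumptions: the group labels and decisions are made up, and the “flag if the ratio falls below roughly 0.8” threshold is an illustrative convention, not a legal standard.

```python
from collections import defaultdict

def disparity_ratio(decisions):
    """decisions: list of (group_label, approved: bool) pairs.

    Returns the lowest group approval rate divided by the highest.
    A ratio well below 1.0 flags a potential disparate outcome
    that warrants human review.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (group label, was the application approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparity_ratio(sample)
print(f"disparity ratio: {ratio:.2f}")  # flag for review if well below ~0.8
```

A check like this is only one signal among many; the point is that running it on a schedule, and keeping the results, is exactly the kind of documented, repeatable control regulators are asking for.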
Moving Forward for Competitive Advantage
Currently, there are few agreed-upon definitions of emerging technologies, big data, and algorithm-based business practices in the federal space. Policymakers are looking to partner with industry to resolve these gray spaces with workable solutions that protect citizens and encourage innovation.
The public and private sectors both play a role in developing the fast-changing data ecosystem. Rolland says, “Companies that foster positive working relationships with policymakers and partner with them to craft definitions and shape approaches to the future of their industry are going to have a significant competitive advantage.”
Okera helps companies gain complete visibility into how sensitive data is used and how to standardize and simplify fine-grained access control across the enterprise.