Q1 2025 Compliance & Remediation Update
Welcome to our first edition of Turnberry’s Compliance and Remediation (C&R) Update. Each quarter, our C&R team will present a brief rundown of the latest news and analysis of recent compliance, regulation, and security trends. We will explore the challenges, risks, and opportunities we face as we help our clients navigate their compliance and remediation journeys. We also hope to inform our team members of the latest trends in compliance, including changes presented by AI, new regulations, and global events.
This quarter’s contents:
- Trending topic: AI regulation
- Cybersecurity & geopolitics
- Regulatory rundown: SOX, IRA, and CRA updates
TRENDING TOPIC: AI regulation
It seems that every company, news outlet, and social media platform has a story on the development of artificial intelligence (AI). As AI continues to evolve and expand, its potential to impact our lives rivals the changes brought in by the internet in the late 1990s, or personal computing in the mid-1980s. At the same time, with the rapid deployment of AI-based tools, many entities around the world are working to head off the possible risks posed by this powerful new technology.
Governments and regulatory bodies have been working to enact legislation to ensure AI’s growth continues in a responsible and manageable way. The EU and China have had success implementing various regulations governing the creation and use of AI. However, the U.S. has lagged in its efforts. Despite several attempts since 2022, key pieces of legislation have stalled in the House of Representatives.
Looking at the current political landscape in the U.S., there is little chance that any progress will be made on these bills in the near future. Meanwhile, apart from the House bills noted below, no further AI-related legislation is pending or proposed. One theory on the relative slowness of the U.S. to enact regulation is the newness of AI: despite its rapid growth, legislators may still see it as too “futuristic” to be a concern today.
Below is a brief rundown of the U.S.-based legislation currently under consideration in Washington:
Algorithmic Accountability Act (H.R.5628)
Background: The Algorithmic Accountability Act (H.R.5628), introduced in the U.S. House of Representatives on September 21, 2023, is currently under deliberation. This bill aims to direct the Federal Trade Commission (FTC) to mandate companies using automated decision systems and AI-driven processes to conduct impact assessments. These assessments will focus on evaluating privacy risks, fairness, and transparency, especially in high-risk AI applications. It also addresses the potential harms and benefits of such systems, requiring continuous testing and documentation to safeguard consumer interests.
Current status: It was referred to the Subcommittee on Innovation, Data, and Commerce on September 22, 2023, where it is pending further action.
DEEPFAKES Accountability Act (H.R.5586)
Background: The DEEPFAKES Accountability Act (H.R.5586), introduced in the House of Representatives on September 20, 2023, is currently under deliberation. This bill seeks to address the growing issue of malicious deepfake technology, requiring creators of such content to include digital watermarks to ensure transparency. It aims to protect national security by regulating deepfakes used in harmful ways, such as interfering in elections, inciting violence, or distributing non-consensual explicit content. The legislation also provides legal recourse for victims of harmful deepfakes.
Current status: The bill is in the early stages of the legislative process, having been introduced but not yet advanced beyond committee consideration.
Digital Services Oversight and Safety Act of 2022 (H.R.6796)
Background: The Digital Services Oversight and Safety Act of 2022 (H.R.6796), introduced on February 18, 2022, is currently referred to several House committees, including the Energy and Commerce, Education and Labor, and Judiciary Committees. This bill proposes the creation of a Bureau of Digital Services Oversight and Safety within the Federal Trade Commission (FTC). The Bureau would be responsible for overseeing content moderation practices of large online platforms, such as social media companies. It also aims to enhance transparency in how platforms handle content, provide mechanisms for users to appeal moderation decisions, and facilitate independent research on the societal impacts of digital platforms.
Current Status: The bill continues to languish. It has been awaiting further consideration in the Subcommittee on Consumer Protection and Commerce since 2022.
Several states have considered their own AI regulation bills, most notably California. It remains to be seen how state-level regulations could impact AI development and innovation.
European Union and China
In contrast to the U.S., China and the EU have both moved quickly to implement AI regulation. The two have taken different approaches, reflecting their governments’ differing philosophies:
China: Interim Measures for the Management of Generative AI Services
China’s Interim Measures for the Management of Generative AI Services came into effect on August 15, 2023, marking the country’s first binding regulations on generative AI. These rules were jointly issued by the Cyberspace Administration of China (CAC) and several other government bodies. The measures primarily aim to regulate the provision of generative AI services to the public within China, covering areas such as content moderation, data protection, and the responsibilities of AI service providers.
Key provisions include:
- Content regulation: Providers must take steps to ensure that AI-generated content does not violate Chinese laws, particularly content that may harm national security or promote violence and discrimination. Providers are required to label AI-generated content and remove or correct illegal content quickly.
- Data privacy: The regulations emphasize the protection of user privacy, requiring providers to minimize data collection and not store personal information unnecessarily. Providers must also address user complaints and requests related to personal data, such as correction or deletion.
- Security assessments: Providers offering services that impact public opinion or social mobilization must conduct security assessments and file details of their algorithms with the authorities.
- Extraterritorial application: These regulations also extend to foreign companies providing generative AI services to users in China. Non-compliance by foreign providers can result in the blocking of their services by Chinese authorities.
- Encouraging innovation: While strict on content and security, the measures are designed to encourage innovation within the AI industry, providing more flexibility for service providers and focusing on the development of secure and responsible AI technology.
Analysis: China’s ability to enact regulations much faster than the U.S. is a reflection of its one-party system of government. With the absence of dissenting opinions or research, the CCP can enact desired regulations quickly. However, this approach can also stifle both the development of and access to AI, especially in areas where the government perceives a lack of loyalty or divergence from official party lines. In this case, faster may not be better.
EU: AI Act of 2024
The European Union’s Artificial Intelligence Act (AI Act) officially came into force on August 1, 2024. It introduces a risk-based approach to AI regulation, categorizing AI systems into different levels of risk:
- Minimal risk systems (e.g., spam filters) are largely exempt from stringent obligations.
- Specific transparency risk systems (e.g., chatbots) must disclose their AI nature to users.
- High-risk systems (such as those used in healthcare or recruitment) face stringent requirements, including risk mitigation, human oversight, and data quality standards.
- Unacceptable risk systems, like those enabling social scoring by governments or companies, are outright banned.
The AI Act also introduces governance structures like the European AI Office and national competent authorities to oversee enforcement across member states. It will be rolled out gradually, with different provisions becoming applicable over the next few years. For example, high-risk systems have up to 36 months to comply with certain obligations.
According to the EU, the act aims to foster innovation while safeguarding fundamental rights, setting the EU on course to lead global AI regulation.
Analysis: As currently written, the AI Act appears to provide a reasonable balance of transparency, consumer protections, regulation, and innovation. There is some concern that the definitions of “high-risk” (or even “unacceptable risk”) systems are too vague, which could stifle growth or even lead to over-regulation. That said, the act may be a viable “middle ground” between the U.S.’s inaction and China’s heavy regulation.
AI “hare” versus governmental “tortoise”
One hurdle all governments and regulatory bodies face is the relative speed at which AI is growing. While there are concerns that regulations could stifle growth and utilization, the reality is that most AI models have demonstrated a growth curve far exceeding governments’ ability to react. This presents the risk that any regulation may become outdated or obsolete by the time it is enacted. Like the proverbial tortoise and hare, governments can win the race, but only if they apply intelligence, flexible thinking, and a clear strategy.
At Turnberry
AI cannot grow and develop without human engineering. Regulatory bodies currently have a prime opportunity to collaborate with the leading companies and developers in the field to enact frameworks that are flexible, adaptable, and able to keep pace with AI’s growth. Turnberry has been working to grow our expertise in AI offerings for our clients. A big part of this growth will be assisting with AI regulatory compliance. We can help by staying current with AI and regulatory developments and working proactively to integrate the two. For example, our clients will need assistance ensuring they do not expose themselves to risk by using privileged or protected information to train an AI system, or when setting up other collaborative tools (e.g., Smartsheet).
CYBERSECURITY & GEOPOLITICS
Our increasingly interconnected world allows us to interact with family, friends, and business colleagues across the world in previously unimagined ways. However, this new reality has also increased the risk of cyberattacks and other malicious acts. In an age of asymmetric conflict, technology professionals and global citizens alike need to maintain vigilance against these new threats.
In 2025, the greatest geopolitical risks to cybersecurity are shaped by evolving state-sponsored cyber threats, the geopolitical fragmentation of the internet, and the growing technological competition between global powers. Here are some of the most prominent risks:
1. State-sponsored cyberattacks and cyberwarfare
China, Russia, North Korea, and Iran remain the primary actors engaging in state-sponsored cyber espionage, intellectual property theft, and critical infrastructure attacks. With escalating tensions over issues like Taiwan, Ukraine, and sanctions, these states may increase cyber activity as a component of their hybrid warfare strategies. Attacks could target critical infrastructure, supply chains, financial systems, and democratic institutions, causing widespread disruption.
The ongoing Russia-Ukraine war is a vivid example, with Russia using cyberattacks as part of its broader military strategy, targeting Ukraine’s power grid, communications, and government systems. This conflict also highlights the risk of spillover cyberattacks affecting global systems.
2. Fragmentation of the internet (“splinternet”)
As global powers, particularly China and the United States, push for technological sovereignty, the internet is fragmenting into competing ecosystems. China’s “Great Firewall” and technological decoupling from the West (e.g., through Huawei, semiconductor restrictions, and data sovereignty laws) could lead to divergent security standards, making global cooperation on cyber threats more challenging.
A “splinternet” scenario could create opportunities for malicious actors to exploit the differences in regulations, security practices, and jurisdictional reach, increasing the likelihood of unchecked cybercrime and espionage in certain parts of the world.
3. Artificial intelligence (AI) in cyberwarfare
The deployment of AI in both offensive and defensive cyber operations is accelerating. AI-powered cyberattacks can automate the process of finding vulnerabilities, launching phishing campaigns, or carrying out disinformation campaigns. Nations with advanced AI capabilities, such as China and the U.S., are in a technological arms race, and this rivalry may lead to the use of AI in more sophisticated and hard-to-detect cyberattacks. Combining AI with quantum computing poses a potential future threat, as quantum systems could break existing encryption standards and create vulnerabilities in critical digital infrastructure.
4. Supply chain attacks
With increasing reliance on globalized tech supply chains, risks from supply chain attacks—where attackers compromise software or hardware at the source—are growing. Such attacks could be leveraged by state actors to introduce vulnerabilities into widely used systems, as seen in the SolarWinds cyberattack in 2020. More recently, the September 2024 Lebanon pager explosions demonstrated that weaknesses could be exploited to compromise hardware during manufacturing.
5. Ransomware and cybercrime ecosystems
Cybercrime syndicates are increasingly state-backed or operate from countries with little enforcement of international cybercrime laws, often creating de facto safe havens for cybercriminals. Throughout 2024, ransomware attacks continued to pose a risk to both public and private sectors, targeting healthcare, energy, and education. These criminal groups may further align with state actors for mutual benefit.
6. Cybersecurity gaps in developing nations
As cyber capabilities expand across global powers, developing nations may become battlegrounds for proxy cyber conflicts or fall victim to their own vulnerabilities. These countries often lack robust cybersecurity infrastructures, making them attractive targets for state and non-state actors.
Analysis: In 2025, the geopolitical risks to cybersecurity are deeply intertwined with broader international conflicts, technological competition, and the evolution of cyber warfare capabilities. The fragmentation of the internet, along with the rise of AI as a tool for cyber conflict, further complicates the global cybersecurity landscape. Addressing these challenges requires multilateral cooperation, resilience in critical infrastructure, and advancements in cybersecurity technologies.
At Turnberry
Turnberry is actively addressing the evolving landscape of security challenges to better support our clients. We are investing in advanced AI capabilities and enhancing expertise in cybersecurity to ensure robust solutions. Our focus includes expanding our compliance and remediation practice and integrating best practices across our business transformation offerings. To further strengthen our commitment to security, we support continuous learning and professional development, including certifications that validate expertise in cybersecurity. Additionally, Turnberry provides resources and support for training to help our teams stay at the forefront of industry standards.
REGULATORY RUNDOWN
Turnberry and our business partners operate across a wide spectrum of regulated industries. Whether it’s healthcare, pharmaceuticals, or finance, the Compliance and Remediation team strives to stay current with the latest changes that may affect our clients.
Perhaps the largest regulatory impact on our clients has been the implementation of the Inflation Reduction Act (IRA). Passed in 2022, the IRA is a major U.S. initiative aimed at addressing climate change, lowering healthcare costs, and reducing the federal deficit. The act caps certain prescription drug prices, particularly for Medicare recipients, and extends Affordable Care Act subsidies. To fund these investments, the IRA introduces a 15% corporate minimum tax on companies with profits over $1 billion, along with enhanced IRS enforcement to reduce tax evasion.
Here is a rundown of recent changes to laws and regulations that may affect our clients and partners:
Inflation Reduction Act
The Inflation Reduction Act (IRA) includes several provisions related to tax incentives, environmental programs, and health care reform. Organizations benefiting from the IRA must comply with various requirements depending on their participation in different initiatives, including those focused on clean energy, carbon reduction, and healthcare.
Key compliance requirements under the IRA:
- Clean energy and tax incentives: the IRA provides significant tax credits for renewable energy production, energy efficiency, and clean technology development.
- Environmental justice and greenhouse gas reduction: the IRA allocates funding to support projects that aim to reduce greenhouse gas emissions and improve environmental justice.
- Healthcare reforms: the IRA introduces measures to lower prescription drug prices and extend health insurance subsidies.
- Carbon capture and clean energy manufacturing: entities engaged in carbon capture and storage (CCS) or clean energy manufacturing must meet specific project standards.
- Electric vehicle (EV) incentives: automakers and consumers benefiting from EV tax credits must ensure compliance with the act’s domestic sourcing and final-assembly requirements.
Compliance with the IRA involves careful adherence to reporting, wage standards, environmental impact assessments, and sourcing requirements across sectors benefiting from tax credits and funding.
Analysis: The IRA was passed in 2022 and was considered a key piece of legislation for the Biden administration. The various compliance requirements have had an effect on Turnberry’s clients, particularly pharmacy benefit managers (PBMs) and healthcare companies. While a new administration may work to repeal certain aspects, full repeals of legislation of this scope have historically been rare. A comparable situation occurred with the Affordable Care Act: despite repeated calls from opposing factions for its full repeal, the act has seen several administrative changes since its passage in 2010 but no successful effort to repeal it entirely. It is likely that the IRA will follow the same trajectory.
Community Reinvestment Act
The most recent changes to the Community Reinvestment Act (CRA) were finalized in October 2023. This is the most significant update to the CRA’s regulations in nearly 30 years, modernizing them to reflect changes in the banking industry and promote greater access to credit and services in low- and moderate-income communities.
Key updates include:
- Adaptation to modern banking: the rule acknowledges the growing role of online and mobile banking by evaluating retail lending activities outside of traditional assessment areas. This ensures that banks are held accountable for their lending practices in areas where they may not have a physical branch but are still actively lending.
- New assessment framework: banks will be evaluated based on four key tests: retail lending, retail services, community development financing, and community development services. These tests will ensure that banks are contributing meaningfully to community development efforts, especially in high-need areas like persistent poverty and native land areas.
- Differentiation by bank size: the final rule adjusts CRA requirements according to the size of the bank. For example, small banks (with assets under $600 million) can continue with the current evaluation framework or opt in to the new one; intermediate banks (with assets between $600 million and $2 billion) will now be subject to the new Retail Lending Test; and large banks (over $2 billion) will face more comprehensive assessments, including additional data collection requirements.
- Clarity and consistency: the rule provides clearer guidelines on qualifying community development activities, including affordable housing and investments in disaster preparedness and climate resilience.
- Implementation timeline: while the rule took effect in April 2024, banks have until January 1, 2026, to comply with most provisions, with certain reporting requirements becoming mandatory by 2027.
Analysis: The aim of these changes is to make the CRA more effective in promoting financial inclusion and addressing the credit needs of underserved communities while recognizing the evolving nature of banking services. The updates represent the most far-reaching changes to the CRA’s regulations in decades. The final rule establishes strict new performance tests that may make it more difficult for large banks to achieve satisfactory or outstanding ratings (Pillsbury Law, 10.27.23).
The new rules are intensively data-driven. While most banks have established a digital component for their business, their backend systems have traditionally lagged behind current technology (with the thought being “old technology = stable technology”). This may present additional challenges in meeting stringent regulations.
Sarbanes-Oxley
The most recent updates to the Sarbanes-Oxley Act (SOX) involve significant changes to whistleblower protections. In February 2024, the U.S. Supreme Court ruled in Murray v. UBS Securities, LLC that employees filing whistleblower retaliation claims under SOX no longer need to prove that their employer acted with “retaliatory intent” when taking adverse actions against them. Instead, the court clarified that whistleblowers only need to demonstrate that their protected activity (such as reporting fraud or securities violations) was a “contributing factor” to the employer’s adverse employment decision. This lowers the burden of proof for employees, making it easier to bring claims under SOX.
This decision may have broad implications for employers, as similar language exists in other federal laws, potentially expanding protections for whistleblowers in various industries.
Aside from the new protections for whistleblowers, there have not been any major compliance or regulatory updates to SOX since its enactment in 2002.
Analysis: While SOX continues to present opportunities to implement stronger compliance processes, companies also benefit from the fact that it has been in place for over 20 years. In that time, the regulations have remained relatively stable. While newer publicly held companies have a learning curve to contend with in implementing SOX, most corporations have enacted control regimens to ensure they maintain compliance year-over-year. Still, maintaining—and even improving—compliance remains a time-intensive activity for most companies.
At Turnberry
Many of our clients are affected by the changes in regulations noted above. Compliance with IRA and SOX requirements presents an ongoing challenge, which Turnberry is well positioned to meet. In addition, the changes to the CRA mean larger banks will have to work harder to obtain or maintain a “satisfactory” or “outstanding” rating. Our clients will need to reinforce their current compliance regimens; in some cases, new projects may need to be spun up to meet requirements for large banks, digital banking, and the newly created “retail lending assessment areas.”
The demand for compliance professionals is robust and growing. The Bureau of Labor Statistics projects 4.6% employment growth for compliance officers between 2022 and 2032, with an estimated 17,200 job openings during this period (Money/US News). Turnberry has been working hard to build compliance teams for our clients and assist them in their efforts. The opportunities are there for team members who wish to grow their skills and experience.