Technology Economic Overview & Market Update
Q4 2025
The landscape for the technology industry in 2025 has been defined by two intertwined themes: technological acceleration and a rapidly shifting risk environment. Companies moved decisively from pilot initiatives to full-scale implementation, with artificial intelligence (AI) deployed as a dominant technology across operations. This rapid integration of AI has created a complex risk picture, in which the pace of innovation often outruns infrastructure readiness, particularly in security, and emerging threats demand proactive risk management to safeguard competitiveness and operational resilience.
Several strong tailwinds are helping the tech sector, and they suggest the industry is not just coasting but actively entering a new phase of growth.
The technology sector is well-positioned for growth in 2025, albeit not without caveats. The fundamentals—strong technology demand, AI and automation, and cloud and data-center expansion—are in place. But growth is not guaranteed to be smooth: macroeconomic headwinds, regulatory and supply risks, and the need for companies to transition from optimization to innovation all point to a more selective growth environment.
The Loper Bright decision has shifted how regulators operate. They’re now less likely to create new rules, since those rules are more easily challenged in court. Instead, they’re focusing on enforcing existing rules that have strong legal backing. For your compliance teams, this means documenting your reasoning clearly. Show exactly how you interpret regulations and make decisions. The Sixth Circuit recently struck down the FCC’s attempt to restore net neutrality, creating more uncertainty. With federal power shifting to states, businesses need to prepare for different rules in different places. The Supreme Court’s Corner Post decision also allows companies to challenge older regulations for longer periods, potentially leading to more lawsuits in the future.
The Take It Down Act, enacted in May 2025, criminalizes the non-consensual sharing of intimate images—including AI-generated deepfakes. Platforms are required to remove reported content within 48 hours of receiving a notification. The FTC leads enforcement, imposing severe penalties where minors are affected. Prompt removal provides liability protection for platforms.
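For illustration only, the short sketch below computes the 48-hour removal deadline from a notification timestamp; the function name is hypothetical, and the Act’s actual notice-and-removal process involves more than a deadline calculation.

```python
# Hypothetical helper illustrating the 48-hour takedown window; the Act's
# real notice-and-removal obligations go beyond a simple deadline.
from datetime import datetime, timedelta

def removal_deadline(notified_at: datetime) -> datetime:
    """Latest time a platform may remove reported content after notice."""
    return notified_at + timedelta(hours=48)

# Example: notice received June 1, 2025 at 9:30 AM
print(removal_deadline(datetime(2025, 6, 1, 9, 30)))  # 2025-06-03 09:30:00
```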
California’s SB 53, signed in September 2025, is the first U.S. law targeting frontier AI safety. It requires large AI companies (those with over $500 million in annual revenue) to publish governance frameworks, transparency reports, and incident reports, and it protects whistleblowers. The law applies to foundation models trained with over 10^26 computational operations.
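To make those scope thresholds concrete, here is a minimal sketch, with hypothetical function names and deliberately simplified applicability tests, of how a compliance team might screen whether the law’s reporting duties could apply:

```python
# Illustrative sketch only: SB 53's actual applicability tests are more
# detailed than these two numeric thresholds suggest.

LARGE_DEVELOPER_REVENUE_USD = 500_000_000   # annual revenue threshold
FRONTIER_MODEL_FLOP_THRESHOLD = 1e26        # training compute threshold

def is_frontier_model(training_flops: float) -> bool:
    """Foundation models trained with over 10^26 operations."""
    return training_flops > FRONTIER_MODEL_FLOP_THRESHOLD

def reporting_duties_may_apply(annual_revenue_usd: float,
                               training_flops: float) -> bool:
    """Large developers (over $500M revenue) training frontier models
    must publish governance frameworks, transparency reports, and
    incident reports under SB 53."""
    return (annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD
            and is_frontier_model(training_flops))

# Example: $2B revenue, model trained with 3e26 operations
print(reporting_duties_may_apply(2e9, 3e26))  # True
```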
The No FAKES Act, introduced in April 2025, targets digital replicas and deepfakes. Stakeholder concerns center on perceived favoritism toward major tech and entertainment interests, and critics argue the bill does not establish clear federal digital replica rights and could conflict with state provisions.
The Kids Online Safety Act (KOSA) directs tech companies to shield children from harmful online content, such as posts about eating disorders or bullying. Although reintroduced in May 2025, it has not passed at the federal level. States such as California, Utah, and Florida have enacted their own child safety laws, which differ widely and face legal challenges.
The SANDBOX Act proposes a regulatory sandbox for AI innovation, granting companies regulatory waivers of up to 10 years contingent on risk-benefit analyses and periodic progress reports. The objective is to advance U.S. AI capabilities and gather evidence to inform policymaking.
Unveiled in April 2024 with bipartisan support, the American Privacy Rights Act would provide enforceable privacy rights under a single national standard, replacing varied state laws.
The CHIPS Act is driving U.S. semiconductor manufacturing growth. As of August 2024, $30 billion had been allocated to 23 projects across 15 states, creating 115,000 new jobs. Major companies—including Intel, TSMC, Samsung, and Micron—secured an additional $33 billion. The Act may face revisions as stakeholders express concern about firms accepting subsidies while continuing to invest abroad, particularly in China; limited foreign spending is still permitted under the Act.
2025 marked a definitive shift in how businesses approached AI. Companies moved from experimentation to demanding demonstrable value, though many still struggle to measure ROI. A key development is the emergence of agentic AI: autonomous systems, or ‘agents,’ that can independently plan and execute complex, multi-step tasks. These virtual coworkers are capable of making decisions and adapting to their environments with minimal human input. Some tech executives foresee networks in which AI agents perform specific tasks, possibly with an ‘uber agent’ providing oversight.
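As a rough illustration of the pattern, the sketch below shows the basic plan-then-execute loop that distinguishes an agent from a single-response model; every name in it is a hypothetical stand-in, not any vendor’s actual API.

```python
# Minimal agentic-AI loop sketch. The planner and executor here are
# hypothetical stand-ins for calls to a real model or tool framework.

def plan(goal: str) -> list[str]:
    """Break a goal into ordered sub-tasks (stand-in for a model call)."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task: str, context: dict) -> str:
    """Carry out one sub-task, using prior results as context."""
    return f"completed: {task} (given {len(context)} prior results)"

def run_agent(goal: str) -> dict:
    """Plan, then execute each step, accumulating context as results
    arrive. A human checkpoint could be inserted before any
    irreversible step."""
    context: dict[str, str] = {}
    for task in plan(goal):
        context[task] = execute(task, context)
    return context

print(run_agent("quarterly market summary"))
```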
The current reality of AI remains more nuanced. Concerns that AI will cause mass job displacement are being balanced by evidence that it is proving to be a powerful co-pilot, automating mundane tasks and helping workers be more productive. Industries more exposed to AI show higher growth in revenue per worker and higher wages for those with AI skills. While multimodal AI has evolved to enable richer interactions and better contextual understanding, human oversight remains critical.
AI reasoning has improved significantly, yet models still struggle with certain complex reasoning tasks, a particular problem in industries such as finance and healthcare that makes human review a necessity.
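One common way organizations operationalize that review requirement is a confidence-gated human-in-the-loop check, sketched generically below; the threshold and function names are hypothetical and would be tuned per use case and risk level.

```python
# Hypothetical human-in-the-loop gate: outputs below a confidence
# threshold are routed to a human reviewer rather than auto-approved.

REVIEW_THRESHOLD = 0.90  # hypothetical; tuned per use case and risk level

def route_output(answer: str, model_confidence: float) -> str:
    """Auto-approve only high-confidence outputs; queue the rest."""
    if model_confidence >= REVIEW_THRESHOLD:
        return f"auto-approved: {answer}"
    return f"queued for human review: {answer}"

print(route_output("invoice matches purchase order", 0.97))
print(route_output("claim likely ineligible", 0.62))  # routed to a human
```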
Physical limitations—like energy availability and access to specialized chips—continue to challenge network scalability. The rapid growth in AI demand reveals stress points across global infrastructure, from constrained data center power to supply chain bottlenecks and regulatory hurdles around grid access.
As AI becomes more deeply embedded in core business operations, ethical concerns—particularly around bias, privacy, and security—are moving to the forefront. In the absence of comprehensive federal AI regulations in the U.S., companies are taking the lead in establishing responsible AI practices to earn stakeholder trust and mitigate risk. Globally, governments are accelerating regulatory efforts, such as the EU’s risk-based AI Act. The result is a fragmented and evolving landscape that organizations must carefully navigate.
Third-party risk is a growing concern for enterprise security strategies. Threat actors are shifting attacks toward strategic operational vulnerabilities, focusing on supply chains and vendors that serve critical customer bases and exploiting weaknesses in interconnected networks. The strategy has become increasingly effective: the disruption of services and supplies creates a ripple effect, producing significant losses and downstream disruption for customers.
Ransomware remained dominant throughout 2025. In the second quarter alone, the average ransom paid increased by more than 100% to $1.13 million.6 Small and mid-sized organizations bore a significant share of the impact: a third of all attacks targeted organizations with fewer than 100 employees, and another third hit those with fewer than 1,000. Data theft accompanied 74% of all attacks, though only 26% of ransomware incidents in Q2 2025 resulted in actual ransom payments, reflecting growing organizational resilience.
The role of third parties in high-profile attacks proved significant. In May 2025, threat actors compromised a well-known cryptocurrency exchange not through digital social engineering, but by bribing employees of the exchange’s overseas IT vendors to obtain customer information, which was then used to trick customers into transferring funds. Supply chain attacks continued to produce cascading disruptions: a June cyberattack on a large food wholesale distributor left grocery store shelves empty throughout July and into August, hurting sales upstream and downstream while saddling the distributor with higher operational costs and significant breach remediation expenses.
New attack vectors are emerging beyond traditional IT systems. GPS spoofing disrupts navigation systems, misdirects vehicles, and even manipulates location-based applications.7 In May 2025, attackers broadcast fake GPS signals to the receivers of the MSC Antonia in the Red Sea, disrupting the ship’s navigation systems and misdirecting the vessel.8
AI is amplifying cyber threats. Threat actors are using generative AI to craft convincing phishing emails that mirror the tone and style of colleagues and eliminate the grammatical errors that once made attacks easy to spot. These sophisticated messages are being translated across languages and deployed at scale. Threat actors also use AI to mine organizational data, such as financial and operational records, that they can leverage in demands against victims. Most concerning is AI’s role in lowering the barrier to entry for less-technical threat actors: by writing code and automating ransomware-as-a-service models, AI is democratizing sophisticated attack capabilities across the threat actor ecosystem.
Driven by AI’s voracious appetite for computing power, the race is also on to build data centers at scale. Training and deploying AI models requires vast resources, and the numbers are staggering: the global data center market is projected to grow at a compound annual growth rate of more than 10% through 2030. AI demand alone could drive a 165% increase in computing requirements for hyperscalers by 2028. By 2030, data centers globally are expected to consume 945 terawatt-hours of electricity—more than double their 2024 consumption.9
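As a back-of-the-envelope check on those figures, the short calculation below shows the annual growth rate implied if consumption more than doubles between 2024 and 2030; the 2024 baseline is not stated here, so a hypothetical value just under half of 945 TWh is assumed.

```python
# Back-of-the-envelope CAGR check. The 2024 baseline is an assumption;
# "more than double" only implies it was below 945 / 2 = 472.5 TWh.

consumption_2030_twh = 945.0
assumed_2024_twh = 460.0   # hypothetical baseline consistent with the text
years = 2030 - 2024

cagr = (consumption_2030_twh / assumed_2024_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~12.7% per year under this assumption
```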
This explosive growth is creating both opportunities and pressures. Governments are investing in AI infrastructure, easing regulations, and passing supportive legislation. Major tech firms are pouring capital into AI-ready facilities, which require specialized construction. Yet scaling challenges persist: data center power constraints, physical network vulnerabilities, and rising compute demands are exposing infrastructure gaps. The challenge extends beyond technical architecture to talent shortages, policy constraints, and execution difficulties.
The proliferation of commercially available surveillance applications has created significant legal and operational exposures for businesses. More and more companies are using monitoring technology to track employee activities, a practice that carries legal, ethical, and operational risks. Handled carelessly, it can erode trust, damage company culture, and lead to serious financial and reputational fallout. As organizations look to balance oversight with transparency, it is critical that monitoring practices remain thoughtful, compliant, and aligned with core values.
Legal risks are substantial. Employees may sue for invasion of privacy if monitoring is excessively invasive, particularly when they are recorded remotely in their homes. The Electronic Communications Privacy Act allows employers to monitor communications for ‘legitimate business reasons’ or with employee consent, but companies risk civil and criminal penalties for illegally intercepting private communications. Data privacy regulations such as the GDPR, the California Consumer Privacy Act, and Illinois’ Biometric Information Privacy Act impose strict compliance requirements.
Beyond legal exposure, surveillance creates operational risks. Secret or excessive monitoring erodes trust and morale, signaling a lack of confidence that can make employees stressed, disengaged, and resentful. High-performing talent may be the first to leave if they feel their autonomy is compromised, while companies known for aggressive surveillance practices struggle to attract quality candidates. There are technical vulnerabilities too: monitoring systems that collect sensitive behavioral and personal data become prime targets for hackers if poorly secured, while AI-driven monitoring tools may use algorithms that unfairly target certain groups or misinterpret activities.
While global IT spending is set to grow significantly, the sector faces a complex risk landscape shaped by macroeconomic pressures, regulatory uncertainties, supply chain challenges, and talent shortages. Companies must shift their focus from cost optimization to strategic growth and innovation to remain competitive.
AI continues to redefine business operations, offering opportunities for enhanced productivity and efficiency while introducing new ethical, security, and infrastructure challenges. The emergence of agentic AI and the increasing reliance on data centers highlight the need for robust investment in infrastructure and responsible AI practices. At the same time, the rise of generative AI and advanced cyber threats underscores the importance of proactive cybersecurity measures.
Legislative and regulatory developments, including the CHIPS Act, Take It Down Act, and pending bills like the SANDBOX Act, are reshaping the industry’s compliance requirements. Businesses must adapt to this evolving landscape, balancing innovation with risk management and regulatory adherence.
The insurance market is evolving to address the unique risks posed by technological advancements, offering tailored solutions for cyber threats, AI-related liabilities, and operational vulnerabilities. As the industry continues to grow, organizations must prioritize resilience, adaptability, and strategic planning to navigate challenges and capitalize on opportunities in this dynamic environment.
The road ahead is promising but requires careful navigation. By embracing innovation, addressing risks proactively, and aligning with regulatory and ethical standards, businesses can position themselves for sustainable growth and success in the rapidly evolving technology landscape of 2025 and beyond.

Partner with your broker early to prepare for any changes and increase renewal success.
It is important to collaborate with your broker’s industry experts, who understand your business and the market for placing your specific risk. Working with a team that can best represent your risk and partner with your operations is more important than ever in today’s disciplined market.
IMA has a team solely dedicated to managing cyber risks. They offer expert assistance, including coverage analysis, monetary loss exposure benchmarking, contract language review, in-depth cyber threat analysis, and strategic development of comprehensive, high-value cyber insurance programs.
Our contract review teams add value to our clients’ overall risk management program by ensuring the indemnity language is market standard and does not expose our clients to unforeseen losses that may not be insurable.
IMA invests heavily to deploy specialty niche teams concentrating on innovative technology, green energy initiatives, and advanced manufacturing. As every client is different, our Sustainability Advisory team provides clients with education, advice, and access to tools and best practices to advance their sustainability resilience and showcase their ESG performance for insurance underwriters.
Angela Thompson
Marketing Strategist, Market Intelligence & Insights
Brian Spinner
Senior Marketing Coordinator, Market Intelligence & Insights