Zscaler Study Finds Companies are Rushing Towards Generative AI Without Considering the Risks

(Credit: Stock Italia / Alamy Stock Photo)

As technology advances, organizations are always looking for the latest tools to help them stay ahead of the competition. However, a new study by Zscaler reveals a concerning trend: organizations are rushing to adopt generative artificial intelligence (gen AI) tools despite security concerns.

The study, All Eyes on Securing GenAI, delves into the security implications of this trend. It’s based on responses from 901 IT decision-makers across ten global markets, focusing on companies with 500 or more employees. The IT decision-makers were surveyed in October 2023 by Sapio Research.

The findings are striking: an overwhelming 95 percent of organizations employ gen AI tools like ChatGPT in various capacities. Breaking down the numbers further, 57 percent use gen AI fully, while 38 percent are taking a more cautious approach to its use. The most common use cases are data analysis (78 percent), R&D services development (55 percent), marketing (53 percent), end-user tasks (44 percent), and logistics (41 percent).

However, alongside this rapid adoption, there’s a significant awareness of potential security risks, with 89 percent of organizations acknowledging this concern. Surprisingly, a considerable portion (23 percent) of these organizations lack any form of monitoring for their gen AI tool usage, highlighting a gap in security practices. The top concerns include the potential loss of sensitive data, limited resources to monitor use, and a limited understanding of the benefits and dangers.

Another key insight is the IT teams’ role in driving the adoption of gen AI tools. Contrary to what might be expected, it’s not the general workforce but IT teams that are the primary users and promoters of gen AI. In fact, 59 percent of the survey respondents said IT teams are driving gen AI usage, while 21 percent said business leaders drive usage. Only 5 percent named their employees as the driving force behind gen AI.

Despite recognizing the potential risks, many businesses, especially smaller ones, continue to embrace gen AI tools. For businesses with 500 to 999 employees, the adoption rate mirrors the general trend (where 95 percent of organizations are using gen AI tools). Still, the recognition of associated risks is even higher at 94 percent.

The study also highlights a critical window of opportunity for organizations to address the growing security challenges. With IT teams leading the charge, it’s possible for businesses to strategically control the pace of gen AI adoption and strengthen their security measures. This proactive approach is crucial, as 51 percent of the respondents anticipate a surge in gen AI interest before the year’s end. This means organizations must take immediate action to bridge the gap between usage and security.

How to proceed with generative AI adoption

To address these challenges, Zscaler makes several recommendations for business leaders:

  • Implement a zero trust architecture to authorize approved apps and users
  • Conduct thorough security risk assessments for new AI apps
  • Establish comprehensive logging systems for AI interactions
  • Enable zero trust-powered data loss prevention (DLP) measures specific to AI activities to prevent data exfiltration (a minimal sketch follows this list)
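
The last two recommendations lend themselves to a concrete illustration. The following is a minimal sketch in Python, assuming a hypothetical check_prompt() helper, an illustrative set of regex patterns, and a local genai_audit.log file; it is not drawn from the Zscaler study or any particular product. It logs every gen AI prompt and blocks any that match simple sensitive-data patterns before the prompt leaves the organization.

    # Illustrative sketch only: log each gen AI prompt and block prompts that
    # match simple sensitive-data patterns. Pattern names, the log file path,
    # and check_prompt() are assumptions made for this example.
    import re
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="genai_audit.log", level=logging.INFO)

    # Example patterns a DLP policy might flag (assumed, not exhaustive)
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def check_prompt(user: str, app: str, prompt: str) -> bool:
        """Log the gen AI interaction and return True only if no sensitive data is found."""
        hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "app": app,
            "blocked": bool(hits),
            "matched_patterns": hits,
        }
        logging.info(json.dumps(record))  # audit trail entry for every prompt
        return not hits

    if __name__ == "__main__":
        allowed = check_prompt("jdoe", "ChatGPT", "Summarize card 4111 1111 1111 1111")
        print("allowed" if allowed else "blocked")  # prints "blocked"

In a real deployment this kind of check would typically sit in a secure web gateway or proxy rather than in application code, so that all traffic to gen AI apps is covered regardless of which tool employees use.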

In summary, the research underlines the urgency for businesses to balance the potential of gen AI tools with effective security strategies to ensure the safe use of this emerging technology in organizations of every size.

Zeus Kerravala is the founder and principal analyst with ZK Research.

Read his other Network Computing articles here.
