48% of security professionals think AI is risky

by admin
48% of security professionals think AI is risky

A recent HackerOne survey highlighted the growing concerns that AI brings to the cybersecurity landscape. The report draws on insights from 500 security experts, a community survey of 2,000 members, feedback from 50 customers and anonymized data from the platform.

Their biggest concerns about AI were:

  • Training data leak (35%).
  • Unauthorized use (33%).
  • Hacking of AI models by third parties (32%)

The survey also reveals that 48% of respondents believe AI poses the most significant security risk to their organization. These fears highlight the urgent need for businesses to re-evaluate their AI security strategies before vulnerabilities become real threats.

How the security research community has changed in the AI ​​era

The HackerOne report indicates that AI can pose a threat – and the security community is working to counter this threat. Among those surveyed, 10% of security researchers specialize in AI. In fact, 45% of security leaders view AI as one of their organization’s biggest risks. Data integrity, in particular, was a concern.

“AI even hacks other AI models,” security researcher and hackerOne pentester Jasmin Landry, also known as @jr0ch17, said in the report.

Of those surveyed, 51% say basic security practices are being overlooked as companies rush to include generative AI. Only 38% of HackerOne customers felt capable of defending themselves against AI threats.

The most commonly reported AI vulnerabilities include logical errors and LLM prompt injection.

As a security platform, HackerOne has seen the number of AI assets included in its programs increase by 171% over the past year.

The most commonly reported vulnerabilities in AI assets are:

  • General AI safety (e.g. preventing AI from generating harmful content) (55%).
  • Business logic errors (30%).
  • Rapid injection of LLM (11%).
  • LLM training data poisoning (3%).
  • Disclosure of sensitive information LLM (3%).

HackerOne highlighted the importance of the human element in protecting systems from AI and keeping these tools secure.

“Even the most sophisticated automation cannot match the ingenuity of human intelligence,” Chris Evans, CISO and head of hackerOne, said in a statement. press release. “The 2024 Hacker-Driven Security Report proves how human expertise is essential to addressing the unique challenges posed by AI and other emerging technologies. »

SEE: For the third consecutive quarter, executives are more concerned about AI-assisted attacks than any other threat, Gartner reported.

Outside of AI, cross-site scripting issues arise most

Some things haven’t changed: Cross-site scripting (XSS) and configuration errors are the most reported weaknesses by the HackerOne community. Respondents consider penetration testing and bug bounties to be the best ways to identify problems.

AI tends to generate false positives for security teams

Further research of a SANS Institute report sponsored by HackerOne in September revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.

Security professionals surveyed in the SANS survey said they have successfully used AI to automate tedious tasks (71%). However, the same participants acknowledged that bad actors could leverage AI to make their operations more efficient. In particular, respondents “were most concerned about AI-based phishing campaigns (79%) and automated vulnerability exploitation (74%).”

SEE: Security managers are to be frustrated with AI-generated code.

“Security teams need to find the best applications for AI to keep pace with their adversaries while accounting for its existing limitations, or they risk creating more work for themselves,” said Matt Bromiley, an analyst at SANS Institute. press release.

The solution? AI implementations should be subject to external review. More than two-thirds of respondents (68%) chose “external review” as the most effective way to identify AI safety and security issues.

“Teams are now more realistic about the current limitations of AI” than they were last year, Dane Sherrets, senior solutions architect at HackerOne, said in an email to TechRepublic. “Humans bring many important contexts to defensive and offensive security that AI cannot yet replicate. Issues like hallucinations also made teams hesitant to deploy the technology in critical systems. However, AI is still ideal for increasing productivity and performing tasks that don’t require deep context.

Other findings from the SANS 2024 AI survey, released this month, include:

  • 38% plan to adopt AI in their security strategy in the future.
  • 38.6% of respondents said they encountered gaps when using AI to detect or respond to cyber threats.
  • 40% cite legal and ethical implications as a challenge to AI adoption.
  • 41.8% of companies have faced pushback from employees who don’t trust AI decisions, which SANS says is “due to lack of transparency.”
  • 43% of organizations are currently using AI in their security strategy.
  • AI technology in security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48, 9%).
  • 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
  • Among those who reported gaps in using AI to detect or respond to cyber threats, 71% said AI generated false positives.

HackerOne’s Tips for Improving AI Security

HackerOne recommends:

  • Test, validate, verify and evaluate regularly throughout the lifecycle of an AI model, from training to deployment and use.
  • Research whether government or specific sector AI compliance requirements are relevant to your organization and establish an AI governance framework.

HackerOne also strongly recommended that organizations communicate openly about generative AI and provide training on relevant security and ethics issues.

HackerOne released some survey data in September and the full report in November. This updated article considers both.

Source Link

You may also like

Leave a Comment