
AI is transforming cybersecurity faster than businesses can adapt

(illustrative photo: Getty Images)

In this column, Mykhailo Zborovskyi, an independent expert in the strategic development of iGaming products, discusses why automation and AI have become integral to cybersecurity yet cannot replace humans, especially in gambling, where the cost of error is high.

Cybercrime has reached a new level: automated attacks, machine-driven scenarios, and autonomous tools. This is not a forecast for the future — it is already the reality in which every SOC specialist operates.

At the same time, I see something else every day: many companies still think in terms of the past, relying on manual processes that no longer work. Against this backdrop, businesses are asking a logical question: Will AI displace cybersecurity professionals?

My position is straightforward: AI does not replace the expert — it removes what prevents the expert from working effectively. Without human strategic decision-making, modern cybersecurity is simply impossible.

Speed is the key parameter of modern security

One telling statistic: on average, companies take 204 days to detect an incident. That is enough time for a complete compromise of the infrastructure. Automation shrinks this window to days or even minutes.

SOAR (Security Orchestration, Automation, and Response) systems can remove a malicious file or block a compromised domain faster than an analyst can open a report.
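The idea behind such a playbook can be sketched in a few lines. This is a minimal illustration, not a real SOAR integration: the `Alert` type, the containment functions, and the 0.9 confidence threshold are all hypothetical assumptions.

```python
# Sketch of a SOAR-style playbook: high-confidence alerts are
# contained automatically; everything else is queued for an analyst.
# All names (Alert, quarantine_file, block_domain) are illustrative.

from dataclasses import dataclass


@dataclass
class Alert:
    kind: str          # e.g. "malware_file" or "malicious_domain"
    target: str        # file path or domain name
    confidence: float  # detection confidence, 0.0 to 1.0


def quarantine_file(path: str) -> str:
    # Placeholder for an EDR quarantine call.
    return f"quarantined {path}"


def block_domain(domain: str) -> str:
    # Placeholder for a firewall/DNS block call.
    return f"blocked {domain}"


def run_playbook(alert: Alert) -> str:
    """Auto-contain high-confidence alerts; escalate the rest to a human."""
    if alert.confidence < 0.9:
        return f"escalated to analyst: {alert.kind} on {alert.target}"
    if alert.kind == "malware_file":
        return quarantine_file(alert.target)
    if alert.kind == "malicious_domain":
        return block_domain(alert.target)
    return f"escalated to analyst: {alert.kind} on {alert.target}"
```

Note the design choice: the machine acts instantly only on routine, high-confidence cases, while anything ambiguous falls through to a person, which is exactly the division of labor this column argues for.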

Given sufficient contextual understanding and well-designed automation, AI can recommend actions for most routine daily incidents. For example, a model can be trained to respond to anomalous signals, interpret them in isolation or in a broader context, and deliver preliminary conclusions within minutes.
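At its simplest, "responding to anomalous signals" means comparing a current reading against its history. The sketch below uses a plain z-score as a stand-in for a real detection model; the 3.0 threshold and the single-metric setup are illustrative assumptions.

```python
# Hedged sketch: flag an anomalous metric reading with a z-score,
# then hand the preliminary verdict to a human analyst.

from statistics import mean, stdev


def triage(history: list[float], current: float, threshold: float = 3.0) -> str:
    """Return a preliminary verdict on one metric reading."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Flat history: any deviation at all is suspicious.
        return "anomalous" if current != mu else "normal"
    z = abs(current - mu) / sigma
    return "anomalous" if z > threshold else "normal"
```

A production system would use richer features and learned models, but the shape is the same: the machine produces a fast preliminary conclusion, and a person decides what it means for the business.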

This is a fundamental advantage that is already becoming the norm. But there is another side to the equation.

AI handles volume. Humans ensure quality

Incident filtering

Algorithms excel at organizing data, filtering out false positives, and reducing the load on SOC teams. But it is the specialist who determines whether an incident truly poses a threat to the business.

Context and risk

Automation reports an anomaly. AI may interpret its nature. But only an expert understands the consequences: the impact on operational processes, regulatory constraints, and reputational risks.

Decision-making

A system may suggest actions. Responsibility for them always lies with a human — especially when it comes to freezing payouts, blocking clients, or restricting access to critical services.

Why human expertise is irreplaceable in gambling

Gaming platforms are among the most attractive targets for attacks. The reason is simple: large volumes of payments, constant traffic, valuable data, and strict regulatory requirements.

In this sector, automation helps accelerate data collection and interpretation, but key decisions remain with humans:

  • assessment of the impact on licensing conditions;
  • understanding players’ behavioral patterns;
  • management of operational risks.

An algorithmic error here can cost a company far more than in almost any other industry.

A case from an international IT company: the limits of automation

To illustrate the limits of automation, consider a recent incident involving a large international IT company specializing in software development. Despite powerful infrastructure and global scale, it became the victim of a serious cyberattack — a reminder to the market that anyone can be vulnerable.

The reason is simple: automation responds well to technical signals. AI, based on data from past incidents, can interpret them. But during complex attacks, adversaries combine multiple vectors, mask movement, and use a company’s legitimate tools to appear as normal traffic.

A machine may detect an anomaly, but cannot always correctly understand its meaning. This is why human analysis is critical — to connect fragmented signals into a coherent attack scenario.

Systems detected suspicious activity, but it was people who determined the true scale of the incident, assessed the consequences, made decisions, and managed the crisis. Automation handled the technical tasks; the strategic work was carried out by a team of experts.

This case highlights a simple truth: AI and algorithms enable fast detection and basic interpretation, but they cannot replace human understanding, responsibility, and context. It is precisely the combination of these factors that makes an incident manageable rather than destructive.

Conclusion

AI in cybersecurity is not a replacement for specialists, nor an attempt to minimize the human role. It is a way to finally make experts effective — freeing them from routine operations and allowing them to focus on what truly impacts business security.

Teams that correctly combine machine speed with human analytics already have a significant advantage. Teams that rely on only one of the two risk becoming the next victims of an attack.