Inside the jury that separates enterprise innovation from noise


According to Mordor Intelligence’s January 2026 forecast, the global cloud computing market will be worth $1.04 trillion in 2026 and is expected to reach $2.65 trillion by 2031, a CAGR of 20.65%. Enterprise AI adoption has moved in step: McKinsey’s November 2025 State of AI report, which surveyed 1,993 participants in 105 countries, found that 88% of organizations now use AI in at least one business function, although only 23% report scaling agentic AI systems and less than a third report scaling AI across the enterprise. That volume has created a quieter challenge. When everyone claims innovation, someone has to decide which claims are actually true.

Chandra Sekhar Kondaveeti has made that work of separation his practice as a judge. During 2025 and 2026, he served as a judge for the Globee Awards in the artificial intelligence, cybersecurity, and technology categories, evaluating enterprise entries in three areas where vendor claims move fastest and independent verification is hardest.

The enterprise problem has outgrown informal review

Enterprise software sprawl is now measurable. According to Kiteworks research discussed alongside IBM’s 2025 Cost of a Data Breach Report, the average enterprise has nearly 1,200 shadow applications running outside IT’s reach, and 86% of organizations say they cannot track how data flows through the AI systems inside their perimeter. Cloud consolidation has continued at the same time: large enterprises accounted for 53.12% of 2025 cloud spending, and more than 60% of enterprise workloads have already moved to the cloud by every industry measure, according to Mordor Intelligence.

During his 2025 and 2026 jury assignments, Kondaveeti saw this complexity firsthand. The entries crossing his desk show, year after year, that enterprise platforms now encompass too many tools, too many vendors, and too many AI pilots for any single internal team to audit. That leaves evaluators with a harder job: a small pool of reviewers is asked to examine each entry, test its claims against independent evidence, and avoid rewarding polished packaging when the underlying architecture does not hold up.

“The true quality of a proposal becomes apparent only when someone who is not the author has to stress-test it,” says Chandra Sekhar Kondaveeti. “That’s why expert review matters more now than it did ten years ago. The volume of claims coming before juries has grown faster than the number of reviewers who can reliably assess them.”

The cybersecurity case for expert review

The global average cost of a data breach fell to $4.44 million in 2025, according to IBM’s annual Cost of a Data Breach Report, the first decline in five years. The United States moved in the opposite direction, reaching an average of $10.22 million, and health care remained the costliest sector for the fifteenth consecutive year, with breaches averaging $7.42 million. Shadow AI alone added $670,000 to the average incident, 63% of breached organizations had no AI governance policies at all, and 97% of AI-related breaches occurred at companies without proper access controls.

This gap is exactly what cybersecurity judges now spend their time probing. Across the cybersecurity category, Kondaveeti’s judging focused on the area where claims and evidence most often diverge, separating entries whose authors can demonstrate control over what they describe from those whose narrative outruns their telemetry. The detection-and-containment gap only closes when the people evaluating a submission understand not just what the controls are supposed to do, but what each failure mode will look like in real traffic.

“Most breaches are not exotic,” Kondaveeti notes. “They are the result of governance that didn’t keep pace with deployment. The number that matters is how quickly your team finds the problem before anyone else does.”

Credential layer

Before any technology claim reaches a purchasing conversation, it usually passes through a layer of professional peer review that separates reliable work from the rest. Academic and industry publishing depends on a review corps that is already stretched thin: IEEE alone runs about 200 peer-reviewed engineering journals and more than 1,700 conferences annually, and roughly 30% of the world’s literature in electrical engineering, electronics, and computer science passes through its review infrastructure. The constraint is reviewer capacity. Editorial boards and program committees are the choke point.

Kondaveeti sits within that credential layer as a program juror. In 2025, he served on the program jury of the International Conference on Data Science and Applications (ICDSA 2025) in Jaipur, where he reviewed submissions in AI systems, enterprise platforms, and applied machine learning. Such duties carry a real workload rather than a ceremonial one. Every submission returned with a meaningful decision requires a reviewer who can read the claim, examine the evidence, and articulate what separates a sound contribution from one that merely looks acceptable.

“Sitting on a jury is not about the title,” Kondaveeti says. “It’s about the commitment that comes with the seat. When you agree to evaluate someone else’s research, you’re making a professional judgment about whether the work holds up. That judgment shouldn’t be handed to anyone unwilling to read the evidence carefully.”

The AI question raises the stakes

McKinsey’s 2025 State of AI report found that while 88% of organizations now use AI, most remain in what the report calls the “pilot stage,” with only 31% reporting enterprise-scale adoption and less than 10% of vertical use cases reaching production. A parallel IBM study shows that organizations using AI and security automation shorten breach lifecycles by an average of 80 days and save nearly $1.9 million per incident compared with organizations that use neither. Meanwhile, one in six breaches now involves attackers using AI, most commonly for AI-generated phishing (37%) and deepfake impersonation (35%). That asymmetry has become an operational challenge for the field.

Kondaveeti’s recent judging sits squarely on this line. The AI proposals that have crossed his desk over the past twelve months have turned on questions of governance, compliance, and access control rather than model novelty or benchmark theater. The consistent pattern across accepted entries is that generative systems do not reduce the need for expert review. They increase it, because the volume of claims grows larger each quarter than any single award program or conference committee can handle on its own.

“Automation can and should do a lot of the work,” Kondaveeti concludes. “But someone still has to decide what’s good and then hold the line. That judgment isn’t something a model is going to make for you. It’s going to be a human call for a long time, and the people who do it well will be the ones who have spent real hours reading the evidence, case by case.”

Inside the peer review pipeline

Academic peer review is under the same volume pressure. NeurIPS 2025, one of the largest venues in machine learning, received 21,575 submissions, roughly double the 2024 total and about 2.3 times the 2020 figure, and assembled a review corpus of 20,518 reviews, with 1,663 area chairs across 19 subject areas. The acceptance rate held at about 24.5%, in line with previous years. Smaller specialty venues operate at similar margins: CISCom 2025, for example, rejected more than 70% of its 283 submissions over its review cycle. The problem is not the system itself; it is supplying enough qualified reviewers.

Kondaveeti spent 2025 and the opening months of 2026 inside this pipeline. He contributes paper reviews through the Precision Conference Solutions platform used by ACM and affiliated venues, where his assignments have focused on AI systems, enterprise platforms, and applied machine learning. That reviewing sits alongside his 2025 and 2026 jury appointments at industry awards and academic conferences, creating a continuous schedule of review roles at a time when submission volume has outstripped the supply of qualified reviewers.

“Most of what crosses the review table is competent,” says Kondaveeti. “What separates the few accepted papers is rarely the headline claim. It’s the evidence. People who can show their telemetry, their failure modes, and the approaches they took do better than people with polished narratives.”


