Journal News
Editorial Board News
Conference Information
Conference Name    Conference Date    Venue
Featured in This Issue
Standardization and Application Prospects of Large-Model-Driven Intelligent Network Operations and Management
Authors: 李文璟, 方宏林, 喻鹏
[Introduction] The rapid development of large models is profoundly transforming how networks are operated and managed, driving autonomous networks from "bolt-on intelligence" toward "native intelligence". Focusing on large-model-driven intelligent network operations and management, this paper analyzes the demands of making network operations and management intelligent and summarizes the progress of standardization for network operations and management large models. It then proposes a large-model-driven architecture for intelligent network operations and management and elaborates on the key technologies and challenges of applying large models to network self-configuration, self-optimization, and self-healing. Applications and concrete examples of large models in intelligent network operations and management are validated, and a large-model-based operations and maintenance system oriented toward the future vision of "standards-led, value-delivering, capability-evolving" is envisioned, providing a reference for the paradigm shift toward truly intelligent and autonomous network management.
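As a purely illustrative aid (not taken from the paper), the Python sketch below shows one way a large model could sit inside a closed-loop self-healing step: an alarm is summarized into a prompt, the model's reply is parsed into a root-cause hypothesis and a candidate action, and a whitelist guardrail keeps execution safe. The `query_llm` stub, the prompt format, and the action names are assumptions introduced here for illustration only.

```python
# Hypothetical sketch (not from the paper): a minimal closed-loop "self-healing"
# step in which a large model maps a network alarm to a root-cause hypothesis
# and a candidate remediation. The query_llm stub, prompt format, and action
# whitelist are illustrative assumptions.

ALLOWED_ACTIONS = {"restart_board", "switch_to_backup_link", "escalate_to_operator"}

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a network-operations large model."""
    # A real system would call a deployed model here; we return a canned answer.
    return "root_cause: fiber degradation on link A-B; action: switch_to_backup_link"

def self_healing_step(alarm: dict) -> dict:
    prompt = (
        "You are a network operations assistant.\n"
        f"Alarm: {alarm}\n"
        f"Reply as 'root_cause: ...; action: <one of {sorted(ALLOWED_ACTIONS)}>'."
    )
    reply = query_llm(prompt)
    root_cause, _, action_part = reply.partition("; action:")
    action = action_part.strip()
    # Guardrail: only execute actions from a pre-approved whitelist;
    # anything else falls back to human escalation.
    if action not in ALLOWED_ACTIONS:
        action = "escalate_to_operator"
    return {"alarm_id": alarm.get("id"), "root_cause": root_cause.strip(), "action": action}

if __name__ == "__main__":
    print(self_healing_step({"id": "A-1024", "type": "LOS", "link": "A-B"}))
```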
Evolution and Development Prospects of Optical Fiber Communication Technology: From Fundamental Breakthroughs to Convergent Innovation
Author: 张海懿
[Introduction] This paper reviews the evolution of optical fiber communication technology from fundamental breakthroughs to convergent innovation, analyzes key technological progress from optical-layer fundamentals to multi-domain networking, looks ahead to future development trends, and highlights the opportunities and challenges facing China. To meet emerging requirements from artificial intelligence, 6G, and other future developments, the industry is encouraged to continue working together to advance fundamental breakthroughs and convergent innovation in key optical fiber communication technologies, supporting the high-quality development of information infrastructure.
Poison-Only and Targeted Backdoor Attack Against Visual Object Tracking
GU Wei, SHAO Shuo, ZHOU Lingtao, QIN Zhan, REN Kui
[Introduction] Visual object tracking (VOT), aiming to track a target object in a continuous video, is a fundamental and critical task in computer vision. However, the reliance on third-party resources (e.g., datasets) for training poses concealed threats to the security of VOT models. In this paper, we reveal that VOT models are vulnerable to a poison-only and targeted backdoor attack, where the adversary can achieve arbitrary tracking predictions by manipulating only part of the training data. Specifically, we first define and formulate three different variants of the targeted attack: size manipulation, trajectory manipulation, and hybrid attacks. To implement these, we introduce Random Video Poisoning (RVP), a novel poison-only strategy that exploits temporal correlations within video data by poisoning entire video sequences. Extensive experiments demonstrate that RVP effectively injects controllable backdoors, enabling precise manipulation of tracking behavior upon trigger activation, while maintaining high performance on benign data, thus ensuring stealth. Our findings not only expose significant vulnerabilities but also highlight that the underlying principles could be adapted for beneficial uses, such as dataset watermarking for copyright protection.
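To make the poison-only setting concrete, the Python sketch below illustrates, under our own assumptions rather than the authors' released code, a size-manipulation style poisoning of whole video sequences: a small fraction of videos is selected, a visual trigger is stamped on every frame, and the ground-truth boxes are shrunk toward an attacker-chosen size. The trigger pattern, poison rate, and shrink factor are illustrative placeholders.

```python
# Illustrative sketch only (not the authors' exact RVP implementation): a
# poison-only, size-manipulation style poisoning of entire video sequences.
# The trigger pattern, poison rate, and box-shrink factor are assumptions.
import numpy as np

def stamp_trigger(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Place a small white square trigger in the bottom-right corner of a frame."""
    out = frame.copy()
    out[-size:, -size:, :] = 255
    return out

def shrink_box(box, factor: float = 0.5):
    """Shrink a (x, y, w, h) ground-truth box around its center (size manipulation)."""
    x, y, w, h = box
    nw, nh = w * factor, h * factor
    return (x + (w - nw) / 2, y + (h - nh) / 2, nw, nh)

def poison_dataset(videos, annotations, poison_rate: float = 0.1, seed: int = 0):
    """Poison randomly chosen whole videos: every frame gets the trigger,
    every ground-truth box is rewritten toward the attacker-chosen size."""
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(poison_rate * len(videos)))
    poisoned_ids = set(rng.choice(len(videos), size=n_poison, replace=False).tolist())
    for vid in poisoned_ids:
        videos[vid] = [stamp_trigger(f) for f in videos[vid]]
        annotations[vid] = [shrink_box(b) for b in annotations[vid]]
    return videos, annotations, poisoned_ids

if __name__ == "__main__":
    # Two toy "videos" of 3 frames each, with one box per frame.
    videos = [[np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(3)] for _ in range(2)]
    annotations = [[(10.0, 10.0, 20.0, 20.0)] * 3 for _ in range(2)]
    _, _, ids = poison_dataset(videos, annotations, poison_rate=0.5)
    print("poisoned video indices:", ids)
```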
Dataset Copyright Auditing for Large Models: Fundamentals, Open Problems, and Future Directions
DU Linkang, SU Zhou, YU Xinyi
[Introduction] The unprecedented scale of large models, such as large language models (LLMs) and text-to-image diffusion models, has raised critical concerns about the unauthorized use of copyrighted data during model training. These concerns have spurred a growing demand for dataset copyright auditing techniques, which aim to detect and verify potential infringements in the training data of commercial AI systems. This paper presents a survey of existing auditing solutions, categorizing them across key dimensions: data modality, model training stage, data overlap scenarios, and model access levels. We highlight major trends, such as the predominance of black-box auditing methods and the focus on fine-tuning rather than pre-training. Through an in-depth analysis of 12 representative works, we extract four key observations that reveal the limitations of current methods. Furthermore, we identify three open challenges and propose future directions for robust, multimodal, and scalable auditing solutions. Our findings underscore the urgent need to establish standardized benchmarks and develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.
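For intuition about what a black-box audit can look like in practice, the following minimal Python sketch is a hypothetical example, not any specific method from the surveyed works: it compares a model's per-sample scores on suspected training data against scores on comparable held-out data and flags a suspiciously large confidence gap. The `model_score` stub and the decision margin are assumptions introduced here.

```python
# Hedged sketch (an assumption, not a method from the surveyed papers): a
# minimal black-box style audit comparing a model's per-sample scores on
# suspected training data against scores on held-out reference data.
from statistics import mean

def model_score(sample: str) -> float:
    """Stand-in for a black-box query, e.g. negative log-likelihood of `sample`.
    A real audit would call the deployed model's API here."""
    return float(len(sample) % 7)  # dummy value so the sketch runs

def audit(suspect_samples, reference_samples, margin: float = 0.5) -> dict:
    suspect_avg = mean(model_score(s) for s in suspect_samples)
    reference_avg = mean(model_score(s) for s in reference_samples)
    # If the model is noticeably more confident (lower score) on the suspect
    # set than on comparable unseen data, flag possible training-set inclusion.
    flagged = suspect_avg + margin < reference_avg
    return {"suspect_avg": suspect_avg, "reference_avg": reference_avg, "flagged": flagged}

if __name__ == "__main__":
    suspect = ["a copyrighted paragraph ...", "another protected text ..."]
    reference = ["freshly written control text ...", "unseen paragraph ..."]
    print(audit(suspect, reference))
```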