Journal News
Editorial Board News
Conference Information
Conference Name  Conference Date  Conference Venue
Recommended in This Issue
Intent-Driven Intellicise Communication
DAI Jincheng, QIN Xiaoqi, QIN Hailong, ZHANG Ping
[Introduction] As communications enter the 6G era, conventional paradigms that rely on expanding physical resources struggle to meet the demands of intelligent and ubiquitous services. This paper proposes an intent-driven intellicise communication system that fuses cognitive psychology, information theory, and artificial intelligence, taking semantic tokens as the basic unit to build a communication paradigm oriented toward information utility. The system integrates the perception, cognition, and feedback capabilities of intelligent agents to achieve context-aware semantic modeling and compressed transmission of heterogeneous data, focusing on breakthroughs in key technologies such as semantic coding, intent parsing, robust transmission, and trustworthy decoding. The architecture adapts to differentiated needs such as human-to-human perception, human-machine control, and multi-machine collaboration, and supports efficient and robust transmission under bandwidth-constrained and dynamically varying channel conditions. The paper systematically reviews the research lineage and core mechanisms of intellicise communication, providing theoretical support and a technical reference for building efficient, general-purpose, and sustainable intelligent communication systems.
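The pipeline the abstract describes (semantic tokens as the transmission unit, compression toward information utility, robustness to channel noise) can be made concrete with a toy vector-quantization sketch. Everything below (the random codebook, the sizes K and d, the bit-flip channel) is a hypothetical stand-in chosen for illustration, not the paper's actual design:

```python
"""Toy sketch of token-based semantic transmission.

Hypothetical illustration only: the codebook, feature extractor, and
channel model are stand-ins, not the system described in the paper.
"""
import numpy as np

rng = np.random.default_rng(0)

# A "semantic codebook": K prototype vectors in a d-dimensional feature
# space. In a real system this would be learned jointly with the
# encoder/decoder rather than drawn at random.
K, d = 256, 32
codebook = rng.normal(size=(K, d))

def encode(features: np.ndarray) -> np.ndarray:
    """Map each feature vector to the index of its nearest codeword.

    Each index is a 'semantic token': log2(K) = 8 bits instead of
    d float32 values, a 128x rate reduction in this toy setup.
    """
    # Squared distances between features (N, d) and codebook (K, d).
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).astype(np.uint8)

def channel(tokens: np.ndarray, flip_prob: float = 0.01) -> np.ndarray:
    """Binary symmetric channel acting on the 8-bit token stream."""
    bits = np.unpackbits(tokens)
    flips = rng.random(bits.shape) < flip_prob
    return np.packbits(bits ^ flips)

def decode(tokens: np.ndarray) -> np.ndarray:
    """Reconstruct feature vectors by codebook lookup at the receiver."""
    return codebook[tokens]

features = rng.normal(size=(100, d))          # stand-in for extracted semantics
received = decode(channel(encode(features)))  # end-to-end pipeline
mse = ((features - received) ** 2).mean()
print(f"reconstruction MSE under a noisy channel: {mse:.3f}")
```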
Reflections on the Application of Optical Computing Technologies in Next-Generation Optical Networks
LI Junjie, LIU Yuyang, HUO Xiaoli
[Introduction] Optical computing, a novel computing paradigm that processes information with photons, is becoming a key enabling technology for the evolution of next-generation optical networks thanks to its high parallelism, low energy consumption, and large bandwidth. Starting from the dual perspectives of architectural form and system hierarchy, this paper systematically analyzes the development trajectory of the technology and summarizes its key enabling elements and integration modes. On this basis, it focuses on typical application scenarios in next-generation optical networks, examining integration paths for optical computing in optical signal processing, intelligent network optimization, integrated sensing and communication, and intelligent computing systems, and it reveals the technology's potential value and development directions in building new information infrastructure.
Poison-Only and Targeted Backdoor Attack Against Visual Object Tracking
GU Wei, SHAO Shuo, ZHOU Lingtao, QIN Zhan, REN Kui
[Introduction] Visual object tracking (VOT), which aims to track a target object through a continuous video, is a fundamental and critical task in computer vision. However, the reliance on third-party resources (e.g., datasets) for training poses hidden threats to the security of VOT models. In this paper, we reveal that VOT models are vulnerable to a poison-only and targeted backdoor attack, in which the adversary can induce arbitrary tracking predictions by manipulating only part of the training data. Specifically, we first define and formulate three variants of the targeted attack: size manipulation, trajectory manipulation, and hybrid attacks. To implement these attacks, we introduce Random Video Poisoning (RVP), a novel poison-only strategy that exploits temporal correlations within video data by poisoning entire video sequences. Extensive experiments demonstrate that RVP effectively injects controllable backdoors, enabling precise manipulation of tracking behavior upon trigger activation while maintaining high performance on benign data, thus ensuring stealth. Our findings not only expose significant vulnerabilities but also highlight that the underlying principles could be adapted for beneficial uses, such as dataset watermarking for copyright protection.
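To make the attack recipe concrete, here is a minimal sketch of the poison-only, size-manipulation variant: stamp a trigger patch on every frame of a randomly chosen subset of training videos and rewrite their ground-truth boxes, so whole sequences are poisoned rather than isolated frames. The trigger design, poisoning rate, and box edit are illustrative assumptions, not the authors' released RVP implementation:

```python
"""Hypothetical sketch of a poison-only backdoor on a VOT training set.

Illustrative reconstruction of the general recipe only; trigger pattern,
rates, and annotation edits below are assumptions, not the RVP code.
"""
import numpy as np

rng = np.random.default_rng(0)
TRIGGER = np.full((16, 16, 3), 255, dtype=np.uint8)  # assumed: white patch

def poison_video(frames, boxes, shrink=0.5):
    """Size-manipulation variant: stamp the trigger on every frame of one
    video and shrink every ground-truth box, so the model learns to
    associate the trigger with an undersized prediction."""
    poisoned_frames, poisoned_boxes = [], []
    for frame, (x, y, w, h) in zip(frames, boxes):
        f = frame.copy()
        f[:16, :16] = TRIGGER                  # top-left trigger stamp
        cx, cy = x + w / 2, y + h / 2          # keep the box center,
        w2, h2 = w * shrink, h * shrink        # manipulate only the size
        poisoned_frames.append(f)
        poisoned_boxes.append((cx - w2 / 2, cy - h2 / 2, w2, h2))
    return poisoned_frames, poisoned_boxes

def poison_dataset(videos, rate=0.1):
    """Poison entire video sequences (not scattered frames), preserving
    the temporal correlation the attack exploits."""
    n_poison = max(1, int(rate * len(videos)))
    victims = set(rng.choice(len(videos), size=n_poison, replace=False))
    return [poison_video(f, b) if i in victims else (f, b)
            for i, (f, b) in enumerate(videos)]

# Minimal usage with synthetic data: 20 videos of 30 frames of 64x64 pixels.
videos = [([rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)] * 30,
           [(20.0, 20.0, 24.0, 24.0)] * 30) for _ in range(20)]
poisoned = poison_dataset(videos, rate=0.1)
```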
Dataset Copyright Auditing for Large Models: Fundamentals, Open Problems, and Future Directions
DU Linkang, SU Zhou, YU Xinyi
[Introduction] The unprecedented scale of large models, such as large language models (LLMs) and text-to-image diffusion models, has raised critical concerns about the unauthorized use of copyrighted data during model training. These concerns have spurred a growing demand for dataset copyright auditing techniques, which aim to detect and verify potential infringements in the training data of commercial AI systems. This paper presents a survey of existing auditing solutions, categorizing them across key dimensions: data modality, model training stage, data overlap scenarios, and model access levels. We highlight major trends, such as the predominance of black-box auditing methods and the focus on fine-tuning rather than pre-training. Through an in-depth analysis of 12 representative works, we extract four key observations that reveal the limitations of current methods. Furthermore, we identify three open challenges and propose future directions for robust, multimodal, and scalable auditing solutions. Our findings underscore the urgent need to establish standardized benchmarks and develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.
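As a concrete anchor for the black-box methods that dominate the surveyed work, the sketch below implements one generic auditing idea: compare the model's per-sample loss on the copyrighted samples against a matched held-out control set, flagging a membership signal when the gap is large. The query_loss interface, the fixed margin, and the toy model are hypothetical; real audits calibrate a proper hypothesis test and control for distribution shift:

```python
"""Sketch of a black-box, loss-based dataset copyright audit.

Generic illustration of one common auditing idea (membership signal via
per-sample loss gaps); `query_loss` is a hypothetical API placeholder.
"""
from statistics import mean
from typing import Callable, Sequence

def audit(query_loss: Callable[[str], float],
          suspect_samples: Sequence[str],
          control_samples: Sequence[str],
          margin: float = 0.1) -> bool:
    """Flag possible training-set inclusion if the model is systematically
    more confident (lower loss) on the copyrighted samples than on
    comparable held-out controls."""
    suspect_loss = mean(query_loss(x) for x in suspect_samples)
    control_loss = mean(query_loss(x) for x in control_samples)
    return control_loss - suspect_loss > margin

# Toy usage: a fake model that has "memorized" strings containing 'secret'.
fake_loss = lambda text: 0.2 if "secret" in text else 1.0
print(audit(fake_loss,
            suspect_samples=["secret doc 1", "secret doc 2"],
            control_samples=["fresh doc 1", "fresh doc 2"]))  # True
```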