A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2 
Published in ECCV, 2020
Abstract. This paper proposes a set of rules to revise various neural networks for 3D point cloud processing into rotation-equivariant quaternion neural networks (REQNNs). We find that when a neural network uses quaternion features, the network feature naturally has the rotation-equivariance property. Rotation equivariance means that applying a specific rotation transformation to the input point cloud is equivalent to applying the same rotation transformation to all intermediate-layer quaternion features. In addition, the REQNN ensures that the intermediate-layer features are invariant to the permutation of input points. Compared with the original neural network, the REQNN exhibits higher rotation robustness.
Recommended citation: Shen W., Zhang B., Huang S., Wei Z., Zhang Q.: 3D-Rotation-Equivariant Quaternion Neural Networks. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 531-547 (2020) https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123650528.pdf
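The equivariance property stated in the abstract can be illustrated with a toy numpy sketch, under the simplifying assumption that a "layer" is just a real-weighted sum of quaternion point features (rotation q p q* is linear over the reals, so it commutes with such a sum). This is an illustration of the property, not the paper's network.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, pts):
    """Rotate pure-quaternion points (N, 4) by unit quaternion q via q p q*."""
    qc = q * np.array([1, -1, -1, -1])  # quaternion conjugate
    return np.array([qmul(qmul(q, p), qc) for p in pts])

def layer(w, pts):
    """Toy 'layer': mix N quaternion features with real weights -> one quaternion."""
    return (w[:, None] * pts).sum(axis=0)

rng = np.random.default_rng(0)
pts = np.concatenate([np.zeros((5, 1)), rng.normal(size=(5, 3))], axis=1)
w = rng.normal(size=5)

angle, axis = 0.7, np.array([0.0, 0.0, 1.0])
q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

# Equivariance: rotating the inputs and then applying the layer
# equals applying the layer and then rotating its output.
a = layer(w, rotate(q, pts))
b = qmul(qmul(q, layer(w, pts)), q * np.array([1, -1, -1, -1]))
assert np.allclose(a, b)
```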
Published in CVPR, 2021
Abstract. In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different intermediate-layer network architectures. We propose a number of hypotheses on the effects of specific intermediate-layer network architectures on the representation capacity of DNNs. In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives: information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency. We conduct comparative studies based on such metrics to verify the hypotheses. We further use the verified hypotheses to revise intermediate-layer architectures of existing DNNs and improve their utility. Experiments demonstrate the effectiveness of our method.
Recommended citation: Shen, W., Wei, Z., Huang, S., Zhang, B., Chen P., Zhao P., & Zhang, Q. Verifiability and Predictability: Interpreting Utilities of Network Architectures for Point Cloud Processing. In CVPR 2021. https://arxiv.org/abs/1911.09053v3
Published in IJCAI, 2021
Abstract. The reasonable definition of semantic interpretability presents the core challenge in explainable AI. This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable compositional CNN, in order to learn filters that encode meaningful visual patterns in intermediate convolutional layers. In a compositional CNN, each filter is supposed to consistently represent a specific compositional object part or image region with a clear meaning. The compositional CNN learns from image labels for classification without any annotations of parts or regions for supervision. Our method can be broadly applied to different types of CNNs. Experiments have demonstrated the effectiveness of our method.
Recommended citation: Shen W., Wei Z., Huang S., Zhang B., Fan J., Zhao P., Zhang Q.: Interpretable Compositional Convolutional Neural Networks. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). (2021) https://arxiv.org/abs/2107.04474
Published in NeurIPS, 2021
Abstract. In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing. We propose a method to disentangle the overall model vulnerability into the sensitivity to rotation, translation, scale, and local 3D structures. In addition, we propose metrics to evaluate the spatial smoothness of encoding 3D structures and the representation complexity of the DNN. Based on such analysis, experiments expose representation problems with classic DNNs and explain the utility of adversarial training.
Recommended citation: Shen W., Ren Q., Liu D., Zhang Q.: Interpreting Representation Quality of DNNs for 3D Point Cloud Processing. In: Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS). (2021) https://proceedings.neurips.cc/paper/2021/file/4a3e00961a08879c34f91ca0070ea2f5-Paper.pdf
Published in ICML, 2023
Abstract. This paper analyzes convolutional decoders in the frequency domain and identifies defects in representing high-frequency components, repeated frequencies, and shifted spectra.
Recommended citation: Tang, L., Shen, W., Zhou, Z., Chen, Y., & Zhang, Q. Defects of Convolutional Decoder Networks in Frequency Representation. In ICML 2023. https://arxiv.org/pdf/2210.09020
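The "repeated frequencies" defect can be previewed with a toy numpy experiment: zero-insertion upsampling, the operation inside a stride-2 transposed convolution, replicates the input spectrum, so a single input frequency reappears as mirrored spectral copies. This is a standard signal-processing fact used as an illustration, not the paper's derivation.

```python
import numpy as np

N, k = 64, 5  # signal length and input frequency bin
x = np.cos(2 * np.pi * k * np.arange(N) / N)

# Zero-insertion upsampling by 2, as used inside a stride-2 transposed conv:
# place samples at even positions and zeros in between.
y = np.zeros(2 * N)
y[::2] = x

mags = np.abs(np.fft.fft(y))
peaks = set(np.flatnonzero(mags > 1e-8))

# The single input frequency k now appears four times in the length-2N
# spectrum: at bins k, N-k, N+k, and 2N-k (the replicated copies).
assert peaks == {k, N - k, N + k, 2 * N - k}
```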
Published in AAAI, 2024
Abstract. This paper proves that batch normalization blocks the influence of the first- and second-order terms of the loss on earlier layers under a Taylor-series perspective.
Recommended citation: Zhou, Z., Shen, W., Chen, H., Tang, L., Chen, Y., & Zhang, Q. Batch Normalization Is Blind to the First and Second Derivatives of the Loss. In AAAI 2024. https://ojs.aaai.org/index.php/AAAI/article/download/29978/31715
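A much weaker but related fact can be checked numerically: the batch-normalization output is invariant to a constant shift applied to the whole batch, so the derivative of any downstream loss with respect to such a shift is exactly zero. This toy numpy check is consistent with, but far narrower than, the paper's Taylor-series result.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize a batch (N, D) per feature to zero mean and unit variance."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 4))

# BN output is unchanged by a constant shift of the incoming batch ...
assert np.allclose(batch_norm(x + 3.0), batch_norm(x))

# ... so the numerical gradient of a loss w.r.t. such a shift vanishes.
h = 1e-4
loss = lambda c: np.mean(batch_norm(x + c) ** 2)
num_grad = (loss(h) - loss(-h)) / (2 * h)
assert abs(num_grad) < 1e-8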
Published in IEEE TPAMI, 2024
Abstract. This paper revises neural networks for 3D point cloud processing into rotation-equivariant quaternion neural networks, improving rotation robustness while preserving permutation invariance.
Recommended citation: Shen, W., Wei, Z., Ren, Q., Zhang, B., Huang, S., Fan, J., & Zhang, Q. Interpretable Rotation-Equivariant Quaternion Neural Networks for 3D Point Cloud Processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. https://ieeexplore.ieee.org/abstract/document/10384563
Published in CVPR, 2025
Abstract. This paper uses game-theoretic interactions as a unified approach to interpret self-supervised pre-training methods for 3D point clouds and identifies a shared mechanism behind their performance gains.
Recommended citation: Li, Q., Ruan, J., Wu, F., Chen, Y., Wei, Z., & Shen, W. A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions. In CVPR 2025. https://openaccess.thecvf.com/content/CVPR2025/papers/Li_A_Unified_Approach_to_Interpreting_Self-supervised_Pre-training_Methods_for_3D_CVPR_2025_paper.pdf
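The game-theoretic interactions used in this line of work can be sketched with the Harsanyi dividend, I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), computed over a toy scoring function v. The specific interaction definition and the toy v below are illustrative assumptions, not the paper's exact metric or model.

```python
from itertools import combinations

def harsanyi(v, players):
    """Harsanyi dividend I(S) = sum over T subset of S of (-1)^{|S|-|T|} v(T),
    for every coalition S of `players`; v maps a frozenset to a score."""
    interactions = {}
    for r in range(len(players) + 1):
        for S in combinations(players, r):
            I = 0.0
            for t in range(len(S) + 1):
                for T in combinations(S, t):
                    I += (-1) ** (len(S) - len(T)) * v(frozenset(T))
            interactions[frozenset(S)] = I
    return interactions

# Toy "model output": additive contributions plus one genuine pairwise effect.
def v(T):
    score = sum({0: 1.0, 1: 2.0, 2: 0.5}[i] for i in T)
    if 0 in T and 1 in T:
        score += 4.0  # interaction between players 0 and 1
    return score

I = harsanyi(v, (0, 1, 2))
assert abs(I[frozenset({0, 1})] - 4.0) < 1e-9  # recovers the pairwise effect
assert abs(I[frozenset({0, 2})]) < 1e-9        # no spurious interaction
assert abs(I[frozenset({0, 1, 2})]) < 1e-9     # no spurious triple interaction
```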
Published in NeurIPS, 2025
Abstract. This paper interprets arithmetic reasoning in large language models using game-theoretic interactions, quantifying interaction patterns encoded during forward propagation to explain how LLMs solve arithmetic problems.
Recommended citation: Wen, L., Zheng, L., Li, H., Sun, L., Wei, Z., & Shen, W. Interpreting Arithmetic Reasoning in Large Language Models using Game-Theoretic Interactions. In NeurIPS 2025. https://openreview.net/pdf?id=tRvzEL64dY
Published in arXiv preprint, 2026
Abstract. This paper studies how visual modality can induce jailbreak-related representation shifts in vision-language models and proposes a defense that removes the jailbreak-related shift at inference time.
Recommended citation: Wei, Z., Li, Q., Ruan, J., Qin, Z., Wen, L., Liu, D., & Shen, W. Understanding and Defending VLM Jailbreaks via Jailbreak-Related Representation Shift. arXiv preprint arXiv:2603.17372, 2026. https://arxiv.org/pdf/2603.17372
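The general idea of removing a representation shift along an estimated direction can be sketched as a linear projection: subtract each hidden state's component along a unit "shift" direction. The direction d and this projection step are illustrative assumptions; the paper's actual defense may differ.

```python
import numpy as np

def remove_shift(h, d):
    """Project hidden states h (N, D) off a 'shift' direction d (D,)."""
    d = d / np.linalg.norm(d)          # normalize the estimated direction
    return h - np.outer(h @ d, d)      # subtract each state's component along d

rng = np.random.default_rng(2)
d = rng.normal(size=8)
h = rng.normal(size=(4, 8)) + 3.0 * d  # states with an induced shift along d

cleaned = remove_shift(h, d)

# The cleaned states carry no component along the shift direction.
assert np.allclose(cleaned @ (d / np.linalg.norm(d)), 0.0)
```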
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.