Shijie Wang
Ph.D. Candidate
Department of Computing (COMP)
The Hong Kong Polytechnic University (PolyU)
Supervisor: Dr. Fan Wenqi
Co-supervisor: Prof. Li Qing
Email: shijie.wang(at)connect.polyu.hk
Office: VA324, Kowloon, Hong Kong SAR
Google Scholar GitHub LinkedIn
Bio
I am a third-year Ph.D. candidate in the Department of Computing at The Hong Kong Polytechnic University (PolyU), advised by Assistant Professor Fan Wenqi and Prof. Li Qing. My current research focuses on Large Language Models (LLMs) and Graph Neural Networks (GNNs), as well as their applications in recommender systems (RecSys).
I obtained my BSc degree in Information and Computing Science from Xi’an Jiaotong-Liverpool University (China) and the University of Liverpool (UK) in July 2022.
News
- Dec 2024: our tutorial on “Towards Retrieval-Augmented Large Language Models: Data Management and System Design” is accepted by ICDE 2025🎉.
- Sep 2024: our paper “Multi-agent Attacks for Black-box Social Recommendations” is accepted by Transactions on Information Systems (TOIS)🎉.
- May 2024: our tutorial and survey paper “A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models” was accepted by KDD 2024🎉.
- May 2024: our paper “CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent” is accepted by KDD 2024🎉.
- May 2024: I’ll be interning with the Baidu Search Science team this summer!
- May 2024: our new preprint “A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models” is online.
- Apr 2024: our new preprint “Graph Machine Learning in the Era of Large Language Models (LLMs)” is online.
- Apr 2024: our tutorial on “Recommender Systems in the Era of Large Language Models (LLMs)” is accepted by IJCAI 2024🎉.
- Mar 2024: our paper “Graph Unlearning with Efficient Partial Retraining” is accepted by WWW PhD Symposium 2024🎉.
- May 2023: our tutorial on “Trustworthy Recommender Systems: Foundations and Frontiers” is accepted by KDD 2023🎉.
- Apr 2023: our tutorial on “Trustworthy Recommender Systems: Foundations and Frontiers” is accepted by IJCAI 2023🎉.
Selected Publications
For a full list of publications, see Research. * indicates equal contribution.
Shijie Wang,
Wenqi Fan, Xiao-yong Wei, Xiaowei Mei, Shanru Lin, Qing Li
Multi-agent Attacks for Black-box Social Recommendations
In Transactions on Information Systems (TOIS), 2024.
To perform untargeted attacks on social recommender systems, attackers can construct malicious social relationships for fake users to enhance attack performance. However, coordinating social relations and item profiles is challenging when attacking black-box social recommendations. To address this limitation, we first conduct several preliminary studies demonstrating the effectiveness of cross-community connections and cold-start items in degrading recommendation performance. We then propose Multiattack, a novel framework based on multi-agent reinforcement learning that coordinates the generation of cold-start item profiles and cross-community social relations to conduct untargeted attacks on black-box social recommendations. Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework under the black-box setting.
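For intuition, here is a minimal toy sketch of the coordination idea in Python; every name in it (blackbox_recsys_accuracy, item_profile_agent, social_relation_agent) is a hypothetical stand-in rather than the paper's actual code. Two cooperating agents build each fake user, one choosing a cold-start item profile and the other adding cross-community social links, and both share a reward equal to the accuracy drop they cause in the black-box recommender.

# Hedged sketch only: random "policies" stand in for the trained multi-agent RL policies.
import random

def blackbox_recsys_accuracy(fake_users):
    # Stand-in for querying the victim recommender; the mock metric degrades
    # as more (and noisier) fake users are injected.
    return max(0.0, 0.9 - 0.01 * len(fake_users) - 0.02 * sum(u["noise"] for u in fake_users))

def item_profile_agent():
    # Agent 1: picks cold-start items for the fake user's interaction profile.
    return {"items": random.sample(range(1000), k=5), "noise": random.random()}

def social_relation_agent(profile):
    # Agent 2: attaches cross-community social links to the same fake user.
    profile["friends"] = random.sample(range(500), k=3)
    return profile

fake_users = []
baseline = blackbox_recsys_accuracy(fake_users)
for step in range(10):
    fake_users.append(social_relation_agent(item_profile_agent()))
    reward = baseline - blackbox_recsys_accuracy(fake_users)  # shared untargeted-attack reward
    print(f"step {step}: shared reward (accuracy drop) = {reward:.3f}")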
Liangbo Ning*,
Shijie Wang*,
Wenqi Fan, Qing Li, Xin Xu, Hao Chen, Feiran Huang
CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent
In KDD, 2024.
In this paper, we propose a novel attack framework called CheatAgent that harnesses the human-like capabilities of LLMs: an LLM-based agent is developed to attack LLM-empowered RecSys. Specifically, our method first identifies the insertion position that yields maximum impact with minimal input modification. The LLM agent is then designed to generate adversarial perturbations to insert at the target positions. To further improve the quality of the generated perturbations, we utilize prompt tuning to iteratively refine the attacking strategy via feedback from the victim RecSys. Extensive experiments across three real-world datasets demonstrate the effectiveness of the proposed attack method.
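As a rough, hypothetical illustration of this attack loop (none of the functions below come from the paper; victim_recsys_score and llm_agent_propose are toy stand-ins), the sketch picks an insertion position, lets an LLM-style agent propose a perturbation, and keeps whichever candidate most degrades the victim's score, feeding that score back as feedback.

import random

def victim_recsys_score(prompt):
    # Stand-in for the LLM-empowered recommender's ranking quality on `prompt`.
    return 1.0 / (1.0 + prompt.count("??"))

def pick_insertion_position(tokens):
    # Stand-in for position selection; the real method scores positions by impact.
    return random.randrange(len(tokens) + 1)

def llm_agent_propose(tokens, position, feedback):
    # Stand-in for the LLM agent; a real agent would condition on the victim's feedback.
    return random.choice(["??", "actually", "never mind"])

tokens = "please recommend a phone for me".split()
best_score = victim_recsys_score(" ".join(tokens))
feedback = None
for _ in range(5):
    pos = pick_insertion_position(tokens)
    candidate = tokens[:pos] + [llm_agent_propose(tokens, pos, feedback)] + tokens[pos:]
    score = victim_recsys_score(" ".join(candidate))
    feedback = score                      # victim feedback used to refine later proposals
    if score < best_score:                # lower score == stronger attack
        tokens, best_score = candidate, score
print("adversarial prompt:", " ".join(tokens), "| victim score:", round(best_score, 3))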
Yujuan Ding, Wenqi Fan, Liangbo Ning,
Shijie Wang,
Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li
A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models
In KDD, 2024.
In this survey, we comprehensively review existing research on retrieval-augmented large language models (RA-LLMs), covering three primary technical perspectives: architectures, training strategies, and applications. As preliminary knowledge, we briefly introduce the foundations and recent advances of LLMs. Then, to illustrate the practical significance of RAG for LLMs, we categorize mainstream related work by application area, detailing the specific challenges of each area and the corresponding capabilities of RA-LLMs. Finally, to deliver deeper insights, we discuss current limitations and several promising directions for future research.
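As a rough illustration of the retrieve-then-read pattern that RA-LLMs build on, here is a self-contained toy sketch: a word-overlap retriever over a three-sentence corpus and a placeholder generate() function standing in for an actual LLM call. It is not tied to any particular system covered in the survey.

from collections import Counter

corpus = [
    "PolyU is located in Hong Kong.",
    "Retrieval-augmented generation grounds LLM answers in retrieved passages.",
    "Graph neural networks operate on graph-structured data.",
]

def retrieve(query, k=2):
    # Toy lexical retriever: rank passages by word overlap with the query.
    q = Counter(query.lower().split())
    scored = [(sum((q & Counter(p.lower().split())).values()), p) for p in corpus]
    return [p for score, p in sorted(scored, reverse=True)[:k] if score > 0]

def generate(prompt):
    # Placeholder for an LLM call; a real pipeline would invoke a chat/completion model here.
    return "[LLM would answer from the prompt below]\n" + prompt

query = "What does retrieval-augmented generation do?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))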
Wenqi Fan,
Shijie Wang,
Jiani Huang, Zhikai Chen, Yu Song, Wenzhuo Tang, Haitao Mao, Hui Liu, Xiaorui Liu, Dawei Yin, Qing Li
Graph Machine Learning in the Era of Large Language Models (LLMs)
Preprint, 2024.
In this survey, we first review recent developments in Graph ML. We then explore how LLMs can be utilized to enhance the quality of graph features, alleviate the reliance on labeled data, and address challenges such as graph heterogeneity and out-of-distribution (OOD) generalization. Afterward, we delve into how graphs can enhance LLMs, highlighting how they can improve LLM pre-training and inference. Furthermore, we investigate various applications and discuss potential future directions in this promising field.
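One concrete pattern the survey discusses, using an LLM or text encoder to produce node features for a GNN, can be sketched as below. embed_text() is a placeholder (a real pipeline would call a text-embedding model), and a single mean-aggregation layer stands in for a full GNN; nothing here is taken from a specific system in the survey.

import numpy as np

def embed_text(text, dim=8):
    # Placeholder text encoder: a toy pseudo-embedding derived from the node description.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=dim)

node_texts = ["paper on GNNs", "paper on LLMs", "paper on RecSys"]
edges = [(0, 1), (1, 2)]                                    # toy citation graph (undirected)

X = np.stack([embed_text(t) for t in node_texts])           # LLM-derived node features
A = np.eye(len(node_texts))                                 # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv = np.diag(1.0 / A.sum(axis=1))
H = np.maximum(D_inv @ A @ X, 0)                            # one mean-aggregation layer + ReLU
print("updated node representations:", H.shape)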
Jiahao Zhang, Lin Wang,
Shijie Wang,
Wenqi Fan
Graph Unlearning with Efficient Partial Retraining
In WWW PhD Symposium, 2024.
Graph Neural Networks (GNNs) have achieved remarkable success in various real-world applications. However, GNNs may be trained on undesirable graph data, which can degrade their performance and reliability. To enable trained GNNs to efficiently unlearn unwanted data, a desirable solution is retraining-based graph unlearning, which partitions the training graph into subgraphs and trains sub-models on them, allowing fast unlearning through partial retraining. However, the graph partitioning process causes information loss in the training graph, resulting in low utility of the sub-GNN models. In this paper, we propose GraphRevoker, a novel graph unlearning framework that better preserves the model utility of unlearning-enabled GNNs.
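A minimal sketch of the shard-and-retrain idea behind retraining-based graph unlearning is given below; the shard layout and the train_submodel() stub are purely illustrative and are not GraphRevoker's actual code. Nodes are partitioned into shards, one sub-model is kept per shard, and an unlearning request retrains only the shard that contained the removed node.

def train_submodel(shard_nodes):
    # Placeholder: a real system would train a GNN on the subgraph induced by `shard_nodes`.
    return {"trained_on": sorted(shard_nodes)}

nodes = list(range(12))
shards = {s: set(nodes[s::3]) for s in range(3)}            # simple 3-way node partition
submodels = {s: train_submodel(members) for s, members in shards.items()}

def unlearn(node):
    # Retrain only the affected shard instead of the full model.
    for s, members in shards.items():
        if node in members:
            members.remove(node)
            submodels[s] = train_submodel(members)
            return s

affected = unlearn(7)
print(f"node 7 removed; only shard {affected} was retrained:", submodels[affected])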
Shijie Wang,
Shangbo Wang
A Novel Multi-Agent Deep RL Approach for Traffic Signal Control
In PerCom Workshops, 2023.
As travel demand increases and urban traffic conditions become more complex, applying multi-agent deep reinforcement learning (MARL) to traffic signal control has become a popular research topic. The rise of Reinforcement Learning (RL) has opened up opportunities for solving Adaptive Traffic Signal Control (ATSC) in complex urban traffic networks, and deep neural networks have further enhanced its ability to handle complex data. Traditional research on traffic signal control is based on centralized Reinforcement Learning. However, in a large-scale road network, centralized RL is infeasible because of the exponential growth of the joint state-action space. In this paper, we propose a Friend-Deep Q-network (Friend-DQN) approach for controlling multiple traffic signals in urban networks, based on an agent-cooperation scheme. In particular, cooperation between multiple agents reduces the state-action space and thus speeds up convergence.
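To make the cooperation idea concrete, here is a toy tabular Q-learning stand-in (deliberately not the paper's deep Friend-DQN implementation, with made-up states and rewards): each intersection keeps its own Q-table but conditions on its neighbour's last action, so no agent has to search the full joint state-action space.

import random
from collections import defaultdict

ACTIONS = ["NS_green", "EW_green"]

class IntersectionAgent:
    def __init__(self):
        self.q = defaultdict(float)          # key: (own_state, neighbour_action, own_action)
        self.last_action = random.choice(ACTIONS)

    def act(self, state, neighbour_action, eps=0.2):
        if random.random() < eps:
            self.last_action = random.choice(ACTIONS)
        else:
            self.last_action = max(ACTIONS, key=lambda a: self.q[(state, neighbour_action, a)])
        return self.last_action

    def learn(self, state, neighbour_action, action, reward, lr=0.1):
        key = (state, neighbour_action, action)
        self.q[key] += lr * (reward - self.q[key])

a, b = IntersectionAgent(), IntersectionAgent()
for step in range(100):
    state = "high_queue" if step % 2 else "low_queue"        # mock traffic observation
    act_a = a.act(state, b.last_action)
    act_b = b.act(state, act_a)
    reward = 1.0 if act_a != act_b else -0.1                 # mock: staggered phases reduce queues
    a.learn(state, act_b, act_a, reward)
    b.learn(state, act_a, act_b, reward)
print("sample learned Q-values:", dict(list(a.q.items())[:2]))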
Cite CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent
@inproceedings{ning2024cheatagent,
title={CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent},
author={Ning, Liang-bo and Wang, Shijie and Fan, Wenqi and Li, Qing and Xu, Xin and Chen, Hao and Huang, Feiran},
booktitle={Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages={2284--2295},
year={2024}
}
Cite A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models
@article{ding2024survey,
title={A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models},
author={Ding, Yujuan and Fan, Wenqi and Ning, Liangbo and Wang, Shijie and Li, Hengyun and Yin, Dawei and Chua, Tat-Seng and Li, Qing},
journal={arXiv preprint arXiv:2405.06211},
year={2024}
}
Cite Graph Machine Learning in the Era of Large Language Models (LLMs)
@article{fan2024graph,
title={Graph Machine Learning in the Era of Large Language Models (LLMs)},
author={Fan, Wenqi and Wang, Shijie and Huang, Jiani and Chen, Zhikai and Song, Yu and Tang, Wenzhuo and Mao, Haitao and Liu, Hui and Liu, Xiaorui and Yin, Dawei and others},
journal={arXiv preprint arXiv:2404.14928},
year={2024}
}
Cite Graph Unlearning with Efficient Partial Retraining
@article{zhang2024graph,
title={Graph Unlearning with Efficient Partial Retraining},
author={Zhang, Jiahao and Wang, Lin and Wang, Shijie and Fan, Wenqi},
journal={arXiv preprint arXiv:2403.07353},
year={2024}
}
Cite Multi-agent Attacks for Black-box Social Recommendations
@article{wang2023multi,
title={Multi-agent Attacks for Black-box Social Recommendations},
author={Wang, Shijie and Fan, Wenqi and Wei, Xiao-yong and Mei, Xiaowei and Lin, Shanru and Li, Qing},
journal={ACM Transactions on Information Systems},
year={2023},
publisher={ACM New York, NY}
}
Cite A novel multi-agent deep RL approach for traffic signal control
@inproceedings{shijie2023novel,
title={A novel multi-agent deep RL approach for traffic signal control},
author={Wang, Shijie and Wang, Shangbo},
booktitle={2023 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops)},
pages={15--20},
year={2023},
organization={IEEE}
}