Research
- Demonstrated that LLMs can achieve performance on par with vanilla GNNs on node classification tasks, exhibiting initial proficiency in handling graph-structured data
- Proposed Similarity-based Neighbor Selection (SNS) to mitigate over-squashing and heterophily issues that affect traditional prompting methods
- Proposed Self-Contemplation Prompting, a simple, resource-efficient, and broadly applicable prompting strategy that requires no external training data or human intervention, offering a more comprehensive and stable paradigm for evaluating LLMs
- Conducted extensive experiments on ChatGPT in both answer-only and Chain-of-Thought (CoT) scenarios, revealing new properties of model-generated few-shot examples and offering insights into the mechanism of in-context learning
- Addressed key issues of traditional in-context learning methods, such as the scarcity of manually annotated data and performance instability
- Proposed a comprehensive pipeline to clean LLM outputs and extract the desired information from them
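The exact SNS criterion is not detailed above; a minimal sketch of similarity-based neighbor selection, assuming cosine similarity over node feature vectors (the function name and feature setup are illustrative, not the paper's implementation):

```python
import numpy as np

def select_similar_neighbors(node_feat, neighbor_feats, k=2):
    """Rank a node's neighbors by cosine similarity to the target node
    and keep only the top-k, so dissimilar (heterophilous) neighbors
    are excluded before the node's prompt is built."""
    sims = []
    for feat in neighbor_feats:
        num = float(np.dot(node_feat, feat))
        denom = float(np.linalg.norm(node_feat) * np.linalg.norm(feat)) or 1.0
        sims.append(num / denom)
    order = np.argsort(sims)[::-1]  # indices of most-similar neighbors first
    return [int(i) for i in order[:k]]

# Toy 2-d features: neighbors 0 and 2 align with the target, neighbor 1 does not.
target = np.array([1.0, 0.0])
neighbors = [np.array([0.9, 0.1]),
             np.array([0.0, 1.0]),
             np.array([1.0, 0.2])]
print(select_similar_neighbors(target, neighbors, k=2))  # → [0, 2]
```

Filtering neighbors this way keeps the prompt short (easing over-squashing of information) and biased toward same-class context on heterophilous graphs.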
KNN Search Enhances the Few-shot Capability of LLMs in Summarization
- Proposed a prompting framework that optimizes the selection of few-shot examples for LLMs such as GPT via k-nearest-neighbor search
- Utilized divide-and-conquer, two-stage summarization, and truncation to work within the context limits of LLMs on long texts (at the time of this research, the largest-context model was text-davinci-003, with a maximum of 4,097 tokens)
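The retrieval step above can be sketched as follows; this is a minimal illustration assuming Euclidean distance over precomputed text embeddings (the helper names and toy data are hypothetical, not the project's code):

```python
import numpy as np

def knn_select_examples(query_emb, pool_embs, pool_pairs, k=2):
    """Return the k (article, summary) pairs from the training pool whose
    embeddings lie nearest (Euclidean distance) to the query embedding."""
    dists = np.linalg.norm(np.asarray(pool_embs) - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    return [pool_pairs[int(i)] for i in nearest]

def build_prompt(examples, query_text):
    """Assemble the retrieved pairs into a few-shot summarization prompt."""
    shots = "\n\n".join(f"Article: {a}\nSummary: {s}" for a, s in examples)
    return f"{shots}\n\nArticle: {query_text}\nSummary:"

# Toy 2-d "embeddings" stand in for real sentence embeddings.
pool_embs = [[0.0, 0.0], [1.0, 1.0], [0.1, 0.1]]
pool_pairs = [("a0", "s0"), ("a1", "s1"), ("a2", "s2")]
query = np.array([0.05, 0.0])
picked = knn_select_examples(query, pool_embs, pool_pairs, k=2)
prompt = build_prompt(picked, "new article text")
```

Retrieving demonstrations that are semantically close to the input article tends to yield more on-topic few-shot prompts than random selection, while keeping only k examples in context helps stay under the token limit.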
BRIO-GNN: Text Summarization Based on Global Corpus via GNN
- Utilized Graph Neural Networks (GNNs) to address the limitation that traditional fine-tuned transformer-based summarization models cannot directly reference the training corpus