FLAT: Chinese NER using flat-lattice transformer X Li, H Yan, X Qiu, X Huang arXiv preprint arXiv:2004.11795, 2020 | 522 | 2020 |
TENER: adapting transformer encoder for named entity recognition H Yan, B Deng, X Li, X Qiu arXiv preprint arXiv:1911.04474, 2019 | 427 | 2019 |
How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites Z Chen, W Wang, H Tian, S Ye, Z Gao, E Cui, W Tong, K Hu, J Luo, Z Ma, ... Science China Information Sciences 67 (12), 220101, 2024 | 320 | 2024 |
A unified generative framework for aspect-based sentiment analysis H Yan, J Dai, X Qiu, Z Zhang arXiv preprint arXiv:2106.04300, 2021 | 302 | 2021 |
A unified generative framework for various NER subtasks H Yan, T Gui, J Dai, Q Guo, Z Zhang, X Qiu arXiv preprint arXiv:2106.01223, 2021 | 298 | 2021 |
InternLM-XComposer2: Mastering free-form text-image composition and comprehension in vision-language large model X Dong, P Zhang, Y Zang, Y Cao, B Wang, L Ouyang, X Wei, S Zhang, ... arXiv preprint arXiv:2401.16420, 2024 | 204 | 2024 |
InternLM2 technical report Z Cai, M Cao, H Chen, K Chen, K Chen, X Chen, X Chen, Z Chen, Z Chen, ... arXiv preprint arXiv:2403.17297, 2024 | 196 | 2024 |
InternLM-XComposer: A vision-language large model for advanced text-image comprehension and composition P Zhang, X Dong, B Wang, Y Cao, C Xu, L Ouyang, Z Zhao, H Duan, ... arXiv preprint arXiv:2309.15112, 2023 | 182 | 2023 |
CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation Y Shao, Z Geng, Y Liu, J Dai, H Yan, F Yang, Z Li, H Bao, X Qiu Science China Information Sciences 67 (5), 152102, 2024 | 170 | 2024 |
Does syntax matter? A strong baseline for aspect-based sentiment analysis with RoBERTa J Dai, H Yan, T Sun, P Liu, X Qiu arXiv preprint arXiv:2104.04986, 2021 | 168 | 2021 |
Learning sparse sharing architectures for multiple tasks T Sun, Y Shao, X Li, P Liu, H Yan, X Qiu, X Huang Proceedings of the AAAI conference on artificial intelligence 34 (05), 8936-8943, 2020 | 162 | 2020 |
Unified demonstration retriever for in-context learning X Li, K Lv, H Yan, T Lin, W Zhu, Y Ni, G Xie, X Wang, X Qiu arXiv preprint arXiv:2305.04320, 2023 | 107 | 2023 |
InternLM-XComposer2-4KHD: A pioneering large vision-language model handling resolutions from 336 pixels to 4K HD X Dong, P Zhang, Y Zang, Y Cao, B Wang, L Ouyang, S Zhang, H Duan, ... arXiv preprint arXiv:2404.06512, 2024 | 100 | 2024 |
Secrets of RLHF in large language models part I: PPO R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ... arXiv preprint arXiv:2307.04964, 2023 | 99 | 2023 |
Contrast and generation make BART a good dialogue emotion recognizer S Li, H Yan, X Qiu Proceedings of the AAAI conference on artificial intelligence 36 (10), 11002 …, 2022 | 86 | 2022 |
AnyGPT: Unified multimodal LLM with discrete sequence modeling J Zhan, J Dai, J Ye, Y Zhou, D Zhang, Z Liu, X Zhang, R Yuan, G Zhang, ... arXiv preprint arXiv:2402.12226, 2024 | 80 | 2024 |
CodeIE: Large code generation models are better few-shot information extractors P Li, T Sun, Q Tang, H Yan, Y Wu, X Huang, X Qiu arXiv preprint arXiv:2305.05711, 2023 | 64 | 2023 |
InternLM-XComposer-2.5: A versatile large vision language model supporting long-contextual input and output P Zhang, X Dong, Y Zang, Y Cao, R Qian, L Chen, Q Guo, H Duan, ... arXiv preprint arXiv:2407.03320, 2024 | 61 | 2024 |
MOSS: Training conversational language models from synthetic data T Sun, X Zhang, Z He, P Li, Q Cheng, H Yan, X Liu, Y Shao, Q Tang, ... arXiv preprint arXiv:2307.15020, 2023 | 58 | 2023 |
Secrets of RLHF in large language models part II: Reward modeling B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, ... arXiv preprint arXiv:2401.06080, 2024 | 57 | 2024 |