NEXT-GEN TELECOM AI: MASTERING PROMPT ENGINEERING FOR INNOVATION

DOI:

https://doi.org/10.20535/2411-2976.12025.22-29

Keywords:

prompt engineering, large language models, telecommunications, network management, 5G, 6G, few-shot learning, chain-of-thought prompting, multi-step prompting, NER, RAG

Abstract

Background. Since 2021, prompt engineering has emerged as a cornerstone of artificial intelligence (AI) and, by 2023, was reshaping telecommunications through the optimised use of large language models (LLMs).

Objective. This review synthesises existing research to evaluate prompt engineering’s transformative role in telecommunications, emphasising practical applications, technical challenges, and future directions.

Methods. The analysis draws on 2021–2025 literature from 31 sources, including IEEE journals, ACM Transactions on Information Systems, NeurIPS proceedings, and arXiv preprints. It examines prompt engineering techniques including few-shot learning, chain-of-thought prompting, multi-step prompting, Named Entity Recognition (NER), and Retrieval-Augmented Generation (RAG), with a telecom focus (6G and hypothesised 5G applications) contextualised within broader Natural Language Processing (NLP) advancements.
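
For readers unfamiliar with the techniques named above, the minimal sketch below shows what few-shot prompting combined with chain-of-thought reasoning might look like for a telecom fault-triage task. The tickets, labels, and the `complete` stub are invented for illustration and are not drawn from the surveyed papers.

```python
# Illustrative sketch only: builds a few-shot, chain-of-thought prompt for a
# hypothetical telecom fault-triage task. Example tickets and labels are
# invented; `complete` is a placeholder for whichever LLM client is in use.

FEW_SHOT_EXAMPLES = [
    ("Cell site reports RSRP -115 dBm and frequent handover failures.",
     "Weak signal at the cell edge degrades handover success. Likely cause: coverage gap.",
     "radio_coverage"),
    ("gNB logs show NG setup failures after a transport link flap.",
     "Control-plane setup fails when backhaul connectivity is unstable. Likely cause: transport fault.",
     "transport"),
]

def build_prompt(ticket: str) -> str:
    """Assemble a few-shot prompt whose examples each include a short
    reasoning step (chain of thought) before the final label."""
    parts = ["Classify the root-cause category of each trouble ticket.",
             "Reason step by step, then give the category.\n"]
    for text, reasoning, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Ticket: {text}\nReasoning: {reasoning}\nCategory: {label}\n")
    parts.append(f"Ticket: {ticket}\nReasoning:")
    return "\n".join(parts)

def complete(prompt: str) -> str:
    """Placeholder: wire up an actual LLM API here."""
    raise NotImplementedError

if __name__ == "__main__":
    # Inspect the assembled prompt; an answer would come from complete(...).
    print(build_prompt("Subscribers report slow downlink throughput during peak hours."))
```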

Results. Although research on prompt engineering specifically for 5G telecommunications remains limited, the technique presents substantial opportunities for optimising network performance and diagnostics, streamlining documentation handling, enhancing customer support, and driving innovation across both 5G and future 6G networks.
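
As one concrete illustration of the diagnostics and documentation-handling opportunities, the sketch below shows how prompt-based NER might pull structured entities out of a trouble ticket. The entity types, ticket text, and expected output are hypothetical, not taken from the reviewed literature.

```python
# Illustrative sketch only: a zero-shot NER prompt asking an LLM to extract
# telecom entities from a trouble ticket as JSON. Entity types and the
# ticket are invented for illustration.
import json

ENTITY_TYPES = ["cell_id", "kpi", "value", "vendor", "technology"]

def ner_prompt(ticket: str) -> str:
    # Show the model the exact JSON shape expected back.
    schema = json.dumps({t: ["..."] for t in ENTITY_TYPES}, indent=2)
    return ("Extract the entities below from the trouble ticket and answer with "
            f"JSON matching this shape (empty lists where nothing is found):\n{schema}\n\n"
            f"Ticket: {ticket}")

ticket = "NR cell NCI-3401 on the Ericsson layer shows RSRQ below -14 dB since 02:00."
print(ner_prompt(ticket))
# A well-formed model answer would resemble:
# {"cell_id": ["NCI-3401"], "kpi": ["RSRQ"], "value": ["-14 dB"],
#  "vendor": ["Ericsson"], "technology": ["NR"]}
```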

Conclusions. Prompt engineering bridges AI capabilities and telecommunications needs, with techniques such as NER and RAG enhancing mobile communications. The dearth of 5G-specific research underscores the urgent need for telecom-specialised LLMs and automated prompting to advance solutions for 5G and 6G.
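
To make the RAG idea concrete, here is a deliberately minimal sketch: a toy keyword-overlap retriever over invented 3GPP-style snippets feeding a grounded prompt. Production systems, such as the Telco-RAG work cited below, use dense-vector retrieval rather than word overlap; the clause references and snippet texts here are paraphrased placeholders.

```python
# Illustrative sketch only: a toy retrieval-augmented generation (RAG) loop
# over a handful of invented 3GPP-style snippets. Real telecom RAG systems
# use dense embeddings and full specification corpora.

CORPUS = {
    "TS 38.331": "RRC re-establishment is triggered on radio link failure ...",
    "TS 23.501": "Network slicing allows multiple logical networks on shared infrastructure ...",
    "TS 38.300": "Handover is network-controlled and assisted by UE measurement reports ...",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank snippets by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(CORPUS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query: str) -> str:
    # Ground the model in the retrieved excerpts and ask for citations.
    context = "\n".join(f"[{ref}] {text}" for ref, text in retrieve(query))
    return (f"Answer using only the excerpts below and cite the source.\n"
            f"{context}\n\nQuestion: {query}")

print(rag_prompt("What triggers RRC re-establishment?"))
```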

References

Liu P., Yuan W., Fu J., Jiang Z., Hayashi H., Neubig G. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ─ arXiv preprint arXiv:2107.13586, 2021. https://doi.org/10.48550/arXiv.2107.13586

Wei J., Wang X., Schuurmans D., Bosma M., Ichter B., Xia F., Chi E. H., Le Q. V., Zhou D. Chain-of-thought prompting elicits reasoning in large language models. ─ Advances in Neural Information Processing Systems. ─ 2022. ─ Vol. 35. ─ pp. 24824─24837. Scopus https://doi.org/10.48550/arXiv.2201.11903

Britto R., Murphy T., Iovene M., Jonsson L., Erol-Kantarci M., Kovács B. Telecom AI native systems in the age of generative AI: An engineering perspective. ─ arXiv preprint arXiv:2310.11770v1, 2023. https://doi.org/10.48550/arXiv.2310.11770

Jiang F., Wang X., Li Y., Zhang H. Large language model enhanced multi-agent systems for 6G communications. ─ arXiv preprint arXiv:2312.07850v1, 2023. https://doi.org/10.48550/arXiv.2312.07850

Bhargava A., Dua M., Kunc M., Pennock S. What’s the magic word? A control theory of LLM prompting. ─ arXiv preprint arXiv:2310.04444, 2024. https://doi.org/10.48550/arXiv.2310.04444

Lewis P., Perez E., Piktus A., Petroni F., Karpukhin V., Goyal N., Küttler H., Lewis M., Yih W., Rocktäschel T., Riedel S., Kiela D. Retrieval-augmented generation for knowledge-intensive NLP tasks. ─ arXiv preprint arXiv:2005.11401v4, 2021. https://doi.org/10.48550/arXiv.2005.11401

Bornea A.-L., Ayed F., De Domenico A., Piovesan N., Maatouk A. Telco-RAG: Navigating the challenges of retrieval-augmented language models for telecommunications. ─ arXiv preprint arXiv:2404.15939v3, 2024. https://doi.org/10.48550/arXiv.2404.15939

Li L., Zhang Y., Chen L. Personalized prompt learning for explainable recommendation. ─ arXiv preprint arXiv:2202.07371v2, 2023. https://doi.org/10.48550/arXiv.2202.07371

Zhang H., Zhu Q., Dou Z. A unified prompt-aware framework for personalized search and explanation generation. ─ ACM Transactions on Information Systems. ─ 2025. ─ Vol. 43, No. 3. ─ pp. 1─26. Scopus https://doi.org/10.1145/3716131

Mao W., Wu J., Chen W., Gao C., Wang X., He X. Reinforced prompt personalization for recommendation with large language models. ─ arXiv preprint arXiv:2407.17115v2, 2025. https://doi.org/10.48550/arXiv.2407.17115

Cheng Q., Chen L., Hu Z., Tang J., Xu Q., Ning B. A novel prompting method for few-shot NER via LLMs. ─ IEEE Transactions on Artificial Intelligence. ─ 2024. ─ Vol. 5, No. 8. ─ pp. 4001─4010. Scopus https://doi.org/10.1016/j.nlp.2024.100099

Maatouk A., Li Z., Xiao M., Wang Y. Large language models for telecom: Forthcoming impact on the industry. ─ arXiv preprint arXiv:2308.06013, 2023. https://doi.org/10.48550/arXiv.2308.06013

Liu Y., Zhang H., Li X., Wang Q. Jailbreaking ChatGPT via prompt engineering: An empirical study. ─ arXiv preprint arXiv:2305.13860v2, 2024. https://doi.org/10.48550/arXiv.2305.13860

White J., Hays S., Fu Q., Schmidt D. C. A prompt pattern catalogue to enhance prompt engineering with ChatGPT. ─ arXiv preprint arXiv:2302.11382v1, 2023. https://doi.org/10.48550/arXiv.2302.11382

Gilbert H., Lee J., Patel R., Singh A. Semantic compression with large language models. ─ arXiv preprint arXiv:2304.12512v1, 2023. https://doi.org/10.48550/arXiv.2304.12512

Karapantelakis A., Li Y., Zhang H., Wang Q. Using large language models to understand telecom standards. ─ arXiv preprint arXiv:2404.02929v1, 2024. https://doi.org/10.48550/arXiv.2404.02929

Zhou H., Li X., Zhang Y., Wang Q. Large language models for wireless networks: An overview from the prompt engineering perspective. ─ arXiv preprint arXiv:2411.04136v2, 2024. https://doi.org/10.48550/arXiv.2411.04136

Nguyen T., Tran H., Le P., Pham Q. Large language models in 6G security: Challenges and opportunities. ─ arXiv preprint arXiv:2403.12239v1, 2024. https://doi.org/10.48550/arXiv.2403.12239

Schulhoff S., Ilie A., Balepur N., Kahadze K., Liu A., Si C., Sun Y., Zhou C., Zhu C., Pissinou N. The prompt report: A systematic survey of prompting techniques. ─ arXiv preprint arXiv:2406.06608v6, 2024. https://doi.org/10.48550/arXiv.2406.06608

Zhou H., Hu C., Yuan Y., et al. Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities. ─ arXiv preprint arXiv:2405.10825v2, 2024. https://doi.org/10.48550/arXiv.2405.10825

Sahoo P., Singh A. K., Saha S., Jain V., Mondal S., Chadha A. A systematic survey of prompt engineering in large language models: Techniques and applications. ─ arXiv preprint arXiv:2402.07927v2, 2025. https://doi.org/10.48550/arXiv.2402.07927

Jiang S., Zhang Y., Wang X., Li Q., Chen H., Liu J. RESPROMPT: Residual connection prompting advances multi-step reasoning in large language models. ─ arXiv preprint arXiv:2310.04743v2, 2024. https://doi.org/10.48550/arXiv.2310.04743

Ahmed N., Piovesan N., De Domenico A., Choudhury S. Linguistic intelligence in large language models for telecommunications. ─ arXiv preprint arXiv:2402.15818v1, 2024. https://doi.org/10.48550/arXiv.2402.15818

Piovesan N., De Filippo De Grazia M., Zorzi M. Telecom language models: Must they be large? ─ arXiv preprint arXiv:2403.04666, 2024. https://doi.org/10.48550/arXiv.2403.04666

Denny P., Kumar V., Giacaman N. Conversing with Copilot: Exploring prompt engineering for solving CS1 problems using natural language. ─ arXiv preprint arXiv:2210.15157v1, 2022. https://doi.org/10.48550/arXiv.2210.15157

Huang Y., Li X., Zhang H., Wang Q. Large language models for networking: Applications, enabling techniques, and challenges. ─ arXiv preprint arXiv:2311.17474v1, 2023. https://doi.org/10.48550/arXiv.2311.17474

Schmidt D. C., White J., Fu Q., Hays S. Towards a catalogue of prompt patterns to enhance the discipline of prompt engineering. ─ Ada User Journal, 2023. [Online]. Retrieved from: https://www.dre.vanderbilt.edu/~schmidt/PDF/ADA-User-Journal.pdf

Yao S., Yu D., Zhao J., Shafran I., Griffiths T. L., Cao Y., Narasimhan K. Tree of thoughts: Deliberate problem solving with large language models. ─ arXiv preprint arXiv:2305.10601v2, 2023. https://doi.org/10.48550/arXiv.2305.10601

Yao Y., Li Z., Zhao H. Beyond chain-of-thought, effective graph-of-thought reasoning in large language models. ─ arXiv preprint arXiv:2305.16582v2, 2024. https://doi.org/10.48550/arXiv.2305.16582

Yao Y., Zhang A., Zhang Z., Liu Z., Chua T.-S., Sun M. CPT: Colorful Prompt Tuning for pre-trained vision-language models. ─ AI Open. ─ 2024. ─ Vol. 5. ─ pp. 30─38. Scopus https://doi.org/10.1016/j.aiopen.2024.01.004

Said A. I. A., Mekrache A., Boutiba K., Ramantas K., Ksentini A., Rahmani M. 5G INSTRUCT Forge: An advanced data engineering pipeline for making LLMs learn 5G. ─ IEEE Transactions on Cognitive Communications and Networking. ─ 2025. ─ Vol. 11. ─ pp. 974─986. Scopus https://ieeexplore.ieee.org/document/10794684

Published

2025-06-24

Section

Articles