Abstract
In an article published in Nature titled "ChatGPT listed as author on research papers," Stokel-Walker (2023) shocks the academic community with the fact that GenAI tools such as ChatGPT and Gemini have gained a substantive role in the production of knowledge and academic writing. He reports that one research company has published 80 GenAI-produced articles in academic journals. In the wake of Stokel-Walker's article, many publishers and journal editors have set guidelines on the role of GenAI in academic writing; all of them refuse to allow GenAI to be listed as an author. Further, the Stanford University Artificial Intelligence Index (2022) reports a fivefold increase since 2014 in research and publications on fairness and transparency relating to GenAI, indicating that the ethical issues are even more pressing now. Altogether, these developments demonstrate that the academic community feels uneasy, disturbed, and anxious about the use of GenAI in academic endeavours. Although everyone agrees on the practical assistance GenAI provides in academic writing, GenAI also brings epistemic challenges and accompanying integrity risks (Chesterman & Chieh, 2026).
As a journal concerned with human behavior and socio-cultural processes in Asia, Makara Human Behavior Studies in Asia has a particular stake in addressing this issue, as we take an active role in preserving academic authority in journal publications. The aim of this editorial note is to discuss principles for how GenAI may be used in manuscripts submitted to this journal without sacrificing academic integrity. This editorial note does not yet introduce formal rules or technical instructions; instead, it articulates the principles that will guide subsequent editorial policies.
In this editorial note, GenAI refers to generative AI: computational techniques capable of generating seemingly new, meaningful content such as text, images, or audio from training data (Feuerriegel et al., 2024). These techniques can be used to perform tasks such as pattern recognition, prediction, generation, and optimization across research workflows.
Bahasa Abstract
In an article published in Nature titled "ChatGPT listed as author on research papers," Stokel-Walker (2023) shocks the academic community with the fact that GenAI tools such as ChatGPT and Gemini have taken on a substantial role in producing knowledge and academic writing. He reports that one research company has published 80 GenAI-produced articles in academic journals. Following Stokel-Walker's article, many publishers and journal editors have set guidelines on the use of GenAI in academic writing; all agree not to allow GenAI as an author. Furthermore, the Stanford University Artificial Intelligence Index (2022) reports a fivefold increase since 2014 in research and publications on fairness and transparency relating to GenAI, indicating that the ethical issue is now increasingly pressing. Altogether, these developments show that the academic community feels uncomfortable, disturbed, and anxious about the use of GenAI in academic endeavours. Although all parties agree on the practical assistance GenAI provides in academic writing, GenAI also brings epistemic challenges and accompanying integrity risks (Chesterman & Chieh, 2026).
As a journal that focuses on human behavior and socio-cultural processes in Asia, Makara Human Behavior Studies in Asia has a particular stake in addressing this issue, as we take an active role in safeguarding academic authority in journal publications. The aim of this editorial note is to discuss principles for how GenAI may be used in manuscripts submitted to this journal without sacrificing academic integrity. This editorial note does not yet introduce formal rules or technical instructions; instead, it articulates the principles that will guide subsequent editorial policies.
In this editorial note, GenAI refers to generative artificial intelligence: computational techniques capable of generating seemingly new, meaningful content such as text, images, or audio from training data (Feuerriegel et al., 2024). These techniques can be used to perform tasks such as pattern recognition, prediction, generation, and optimization in research workflows.
References
Bjelobaba, S., Waddington, L., Perkins, M., Foltýnek, T., Bhattacharyya, S., & Weber-Wulff, D. (2025). Maintaining research integrity in the age of GenAI: An analysis of ethical challenges and recommendations to researchers. International Journal for Educational Integrity, 21(1), 18. https://doi.org/10.1007/s40979-025-00191-w
Chesterman, S., & Chieh, L. H. (2026). Research integrity and academic authority in the age of artificial intelligence: From discovery to curation? arXiv preprint arXiv:2601.05574.
Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. https://doi.org/10.1007/s12599-023-00834-7
Jeon, J., Kim, L., & Park, J. (2025). The ethics of generative AI in social science research: A qualitative approach for institutionally grounded AI research ethics. Technology in Society, 81, 102836. https://doi.org/10.1016/j.techsoc.2025.102836
Kekez, I., Lauwaert, L., & Begičević Ređep, N. (2025). Is artificial intelligence (AI) research biased and conceptually vague? A systematic review of research on bias and discrimination in the context of using AI in human resource management. Technology in Society, 81, 102818. https://doi.org/10.1016/j.techsoc.2025.102818
Lindahl, J., Colliander, C., & Danell, R. (2020). Early career performance and its correlation with gender and publication output during doctoral education. Scientometrics, 122(1), 309–330. https://doi.org/10.1007/s11192-019-03262-1
Mammides, C., & Papadopoulos, H. (2024). The role of large language models in interdisciplinary research: Opportunities, challenges and ways forward. Methods in Ecology and Evolution, 15(10), 1774–1776. https://doi.org/10.1111/2041-210X.14398
Silva, J. C. M. C., Gouveia, R. P., Zielinski, K. M. C., Oliveira, M. C. F., Amancio, D. R., Bruno, O. M., & Oliveira, O. N. (2025). AI-assisted tools for scientific review writing: Opportunities and cautions. ACS Applied Materials & Interfaces, 17(34), 47795–47805. https://doi.org/10.1021/acsami.5c08837
Stanford University. (2022). The AI index 2022 annual report. https://hai.stanford.edu/ai-index/2022-ai-index-report
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z
Recommended Citation
Riantoputra, C. D., Wongkaren, T., Jaya, E. S., Sekarasih, L., & Shadiqi, M. (2026). Editorial note: GenAI for academic writing – friend or foe? Makara Human Behavior Studies in Asia, 30(1), 1–4. https://doi.org/10.7454/hubs.asia.v30.i1.1652