Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models
Abstract
Generative artificial intelligence (AI) and large language models (LLMs) are poised to significantly impact social work research. These technologies can produce high-quality written materials and support qualitative and quantitative data analysis with simple, plain-language prompts from users. However, they also introduce challenges, such as potential bias, data privacy concerns, and generation of misinformation. In this paper, we use a disruptive–disrupting framework to discuss the dual nature of generative AI and LLMs and offer recommendations for social work researchers and journal editors that include guidance around data collection, analysis, interpretation, and dissemination. Researchers must use great caution when deploying generative AI technologies, meticulously examining, verifying, and taking accountability for the text and analyses produced by these instruments. Likewise, journal editors will need to implement quality control procedures and ethical standards to guide and evaluate the use of these technologies in social work research. We consider the recommendations offered here as a point of departure for disciplinary conversations about the role of generative AI and LLMs in social work research.
The advent of generative artificial intelligence (AI) and large language models (LLMs) such as ChatGPT represents a substantial leap forward in the power of computing. These advanced technologies can generate coherent, relevant, and high-quality text based on simple, plain-language prompts provided by the user. The ability to process and analyze vast amounts of text allows LLMs to quickly identify patterns, trends, and relationships in data, with the possibility of novel insights and better-informed decision-making. Scholars are beginning to note the vast potential of LLMs in social work research and practice, as demonstrated by recent papers in disciplinary journals by Goldkind et al. (in press), Ioakimidis and Maglajlic (2023), Perron (2023), Scheyett (2023), Singer et al. (2023), and Victor et al. (2023).
Despite the transformative potential of generative AI and LLMs, some researchers—such as Sardana et al. (2023)—have expressed concern about LLMs’ disruptive and potentially destabilizing effects on academia. One of the main concerns is the potential for LLMs to exacerbate existing inequalities and biases in academic research. Because LLMs are trained on large amounts of data, the models will reproduce biases inherent in the data, such as biases that result from the use of texts with racial and gender biases to train the model (Gordon, 2023). Another concern is the potential for LLMs to replace human expertise and judgment in academic research. LLMs are powerful tools that can quickly generate large amounts of text, but they lack the same level of critical thinking, reflection, and analysis as human researchers. A risk exists that LLMs could be used to automate research processes, reducing the value placed on human expertise and professional knowledge and lowering the quality of research outputs. Finally, LLMs give rise to significant data privacy and security issues, such as concerns over who owns and has access to the data input into these models.
More specific to social work, generative AI is likely to significantly influence teaching, practice, and research. For instance, in their groundbreaking study, Felten and colleagues (2023) devised a metric to evaluate various occupations in terms of their potential exposure to—or capacity to be influenced by—generative AI by linking 10 AI capabilities (e.g., image recognition, reading comprehension, and language modeling) to 52 human skills (e.g., oral comprehension, oral expression, and inductive reasoning). Among the more than 700 occupations assessed, “Social Work Teachers, Postsecondary” ranked 11th in terms of potential AI exposure. This high potential for AI exposure suggests a pressing need for social work professionals to proactively engage with and leverage generative AI technologies to stay at the forefront of their field.
This article delves into the dual nature of LLMs as both a disruptive and a disrupting technology in social work research. Disruptive technologies are innovations that significantly alter how an industry or market operates, often displacing established methods or practices (Danneels, 2004). LLMs can revolutionize social work research through their ability to generate high-quality text and support data analysis. Conversely, disrupting technologies can be understood as innovations that create problems and complications. Although LLMs may bring transformative advancements, they also have the potential to introduce a wide variety of problems that must be carefully examined and addressed within the context of social work research. We provide this disruptive–disrupting framework to foreground our recommendations around the use of generative AI within social work research. We then review existing guidelines on using generative AI in research established by other scientific disciplines, editors at scholarly journals, and major publishers. Based on the identified opportunities and challenges and in consideration of existing guidelines, we propose an initial set of recommendations that span the social work research continuum—including data collection, analysis, interpretation, and dissemination—to enable researchers and journal editors to integrate LLMs into their activities while considering the ethical, privacy, and social implications of doing so.
We also recognize that the social work profession will need to iteratively adapt and revise guidelines related to generative AI to respond to rapidly evolving LLM technologies, and we look forward to ongoing dialogue and reflection on the role of LLMs in our disciplinary research. Therefore, our goal is not to establish definitive rules in this area. Instead, we hope to provide mechanisms for experimentation, a set of provocations, and suggested recommendations around the responsible and effective integration of LLMs into social work research, minimizing the possible risks and maximizing their transformative potential.
Background on Generative Artificial Intelligence and Large Language Models
Generative AI uses deep learning and natural language processing to understand and respond to human conversation. Unlike traditional AI technologies that rely on rules-based programming, generative AI models analyze the nuances of human language to produce natural, human-like responses, often based on plain-language prompts from the user. LLMs are a specific type of generative AI trained on massive text data sets, including books, articles, and Web pages. These models use advanced deep learning techniques to analyze and understand the patterns and structures of language, enabling them to generate human-like text that can be used for a wide range of applications.
LLMs are particularly useful for language translation, chatbots, summarization, content creation, and text classification. They can also answer questions and evaluate the quality of written responses (see Victor et al., 2023). In contrast to traditional AI, which may struggle with the complexities and nuances of human language, LLMs can generate coherent text similar to what a human might produce. This has made LLMs increasingly popular, with professionals across various fields using them to analyze vast amounts of textual data and generate insights that may have been difficult to produce otherwise.
Large Language Models as a Disruptive Technology
LLMs have the potential to substantially improve research activities by offering novel types of support that increase efficiency and unlock new analytic capabilities. We consider these novel supports to be disruptive within social work research as they will likely alter the ways that many of us conduct our work. In this section, we first discuss how LLMs can aid general research activities and then provide specific examples related to qualitative and quantitative research.
General Research Activities
At the core of LLMs’ utility is their ability to provide various levels of writing assistance. They can help with basic language-related tasks, such as correcting spelling and grammar errors. Beyond that, researchers can use LLMs for more advanced purposes, such as generating original text, translating languages for international scholars, and even facilitating the generation of novel ideas and hypotheses. (See Appendix A, online, for an example of how we used ChatGPT-4 to support the writing of this article.)
Software such as Elicit can integrate LLMs to facilitate literature reviews by generating concise summaries and extracting key points from large collections of articles (Kung, 2023). Scite is an AI platform that helps researchers discover and evaluate scientific articles using “Smart Citations.” The Smart Citations feature enables users to gain insight into how a publication has been referenced by offering information on the context of the citation and a classification indicating whether it supports or opposes the cited argument (Scite Inc., 2023). Semantic Scholar (n.d.) is another AI tool that uses an algorithm to discover connections and links between research topics. These are just a few of the technologies that support research. Researchers can also use LLMs to get feedback on manuscripts in a way that mimics an expert peer review, and those planning review papers, such as systematic reviews or scoping studies, can use LLMs to help frame Boolean queries and construct search terms (Wang et al., 2023), as illustrated in the sketch below.
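As a minimal illustration of this last task, the following Python sketch prompts a hosted model to draft a Boolean query for a hypothetical review. It assumes the openai Python client and API access; the model name, review topic, and prompt wording are illustrative only, and any generated query would need refinement and validation by the research team before use.

```python
# A minimal sketch of prompting an LLM to draft a Boolean search query
# for a systematic review (after Wang et al., 2023). The model name and
# prompt wording are illustrative assumptions, not a prescribed workflow;
# any draft query should be refined and validated by the research team.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = (
    "You are assisting with a systematic review on housing instability "
    "and child welfare involvement. Draft a Boolean query suitable for "
    "a database such as PsycINFO, combining synonyms with OR and "
    "distinct concepts with AND."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # draft query for human review
```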
Qualitative Research
LLMs have the potential to reshape many core activities in qualitative research. With their ability to process and analyze vast amounts of natural language data, LLMs can significantly reduce the time and effort required to analyze text data manually. An early example of LLMs’ impact on qualitative research is the integration of ChatGPT into a major qualitative software platform—ATLAS.ti—within months of its public release (ATLAS.ti, 2023a). The developers claim the software can help users “gain qualitative insights in minutes instead of weeks” (ATLAS.ti, 2023b, para. 1). LLMs can also assist qualitative researchers by analyzing large volumes of text data and identifying themes or patterns that might not be immediately apparent through manual analysis (see the sketch below). This can substantially reduce the effort required to identify and analyze themes, enabling researchers to focus on interpreting and contextualizing the data. Whether these innovations are also disrupting for qualitative research (i.e., creating problems and complications) is a point we consider later.
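As a simple illustration, the sketch below asks a hosted model to suggest codes for a single interview excerpt against a fixed codebook. The codebook, excerpt, and model name are hypothetical; suggested codes are candidates that the research team must review and verify, and only de-identified text should ever be sent to a cloud-based service.

```python
# A minimal sketch of LLM-assisted first-pass qualitative coding. The
# codebook and excerpt are hypothetical, and AI-suggested codes are
# candidates requiring human review, not final analytic decisions.
from openai import OpenAI

client = OpenAI()

codebook = ["housing instability", "social support", "service barriers"]
excerpt = (
    "We moved three times last year, and each time I lost touch "
    "with my caseworker."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"Using only these codes: {codebook}, list which codes apply "
            f"to the following interview excerpt and quote the supporting "
            f"phrase for each.\n\nExcerpt: {excerpt}"
        ),
    }],
)
print(response.choices[0].message.content)  # candidate codes for human review
```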
Quantitative Research
LLMs are also powerful tools for quantitative research: they can help analysts select suitable statistical procedures when provided with research questions and variable descriptions. Moreover, LLMs can assist in writing and debugging code, thereby streamlining the programming process. LLMs can support data mining through their ability to identify and categorize different data types and to locate specific entities, such as names or locations, within the data. They can also perform sentiment analysis, which involves determining the emotions or attitudes expressed in a text by examining words and phrases in context to identify whether the sentiment is positive, negative, or neutral (see the sketch below). As LLMs advance, their capacity to create data visualizations will also improve, enhancing their utility for disseminating quantitative findings.
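For example, the following sketch runs sentiment classification over open-ended survey responses using a locally executed pretrained model from the Hugging Face transformers library rather than a cloud LLM, which keeps participant text on the researcher’s own machine. The example responses are invented, and reliance on the pipeline’s default model is an assumption; outputs should be spot-checked against human ratings.

```python
# A minimal sketch of sentiment classification on open-ended survey
# responses with a locally run pretrained model (Hugging Face
# transformers). Keeping inference local avoids transmitting participant
# text to a third-party server; the default model is an assumption, and
# results should be validated against human-coded ratings.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

responses = [
    "The program staff treated me with real respect.",
    "Nobody returned my calls, and I gave up.",
]

for text, result in zip(responses, classifier(responses)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```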
Large Language Models as a Disrupting Technology
Despite their potential benefits for the research process, LLMs also pose challenges. Several potential problems with these models could significantly limit their utility and acceptability for social work research. For this reason, we also view LLMs as a disrupting technology. Some of these issues are foreseeable, whereas others may arise unexpectedly.
Privacy
Privacy violations are a potential risk of using LLMs in social work research. Most LLMs are cloud-based services, and data inputs are transmitted and processed offsite. Researchers who are unfamiliar with how LLMs work may inadvertently expose sensitive data to risks that do not conform to the ethical and security protections required in human subjects research. Social work researchers must understand the potential risks associated with using LLMs and ensure they are fully informed about the data privacy and security policies of any cloud-based services they use. This includes being mindful of the potential for data breaches; the risks associated with transmitting data to third-party servers, such as a breach of participant confidentiality; and the importance of adhering to ethical and legal guidelines related to human subjects research.
Data Ownership
Another key challenge that social work researchers may face when using proprietary generative AI and LLMs is the lack of clarity around ownership of data entered into these models. For example, OpenAI—the developer of ChatGPT—notes in its terms of use that the user owns all data and content input into the model (OpenAI, 2023). However, the company may use content that the user enters into ChatGPT to improve the model’s performance (Markovski, 2023). As with the privacy issues noted earlier, researchers should be aware of the terms of use established by LLM developers and understand the implications of those terms for ongoing access to data inputs.
Model Bias
LLMs can be biased depending on the data used to train them, especially in text generation, sentiment analysis, classification, and thematic analysis (Gordon, 2023). This presents a substantial problem for social work research, as inaccurate or biased results can lead to misinterpretation of data, perpetuation of harmful stereotypes, and policy decisions that reproduce inequity. Furthermore, LLMs are often complex and opaque, making it difficult for researchers to understand how they work and what biases may be present in the model. This can make it challenging to ensure that the results generated by LLMs are accurate, leading to potential ethical concerns and challenges in interpreting and applying the results.
Attribution
Another potential problem associated with using LLMs in research concerns research attribution. LLMs are typically trained on large sets of text data that may include sources from a wide range of authors and contexts. As a result, it may be challenging to accurately attribute specific findings to particular sources, which is a crucial aspect of research integrity and accountability. One well-documented failure of current LLMs is the production of false references (Alkaissi & McFarlane, 2023). Furthermore, LLMs may be trained on sources that have not undergone the same rigorous peer review as traditional research publications, and they might not be built on the most current available data. This raises concerns about the accuracy and validity of the information on which LLMs are trained.
Another substantial challenge related to research attribution is determining which aspects of the research process were performed by humans and which were performed by AI models. This challenge can be particularly pronounced when the model generates text for use in scientific communication. Compounding this issue is the fact that a researcher may work with an idea that is further shaped by the AI model through ongoing interactions. This makes it even more challenging to understand a given work’s originality and assign appropriate attribution.
Research Quality and Volume
As mentioned previously, LLMs can improve the efficiency and output of social work research by facilitating aspects of the research process. However, this efficiency may come at the expense of research quality, as researchers may rely too heavily on automating essential research tasks without engaging in critical thinking or rigorously evaluating the data. As one example, the integration of ChatGPT within ATLAS.ti (2023a) will certainly increase qualitative coding speed, but the integration of this generative AI technology might diminish the quality and trustworthiness of analyses.
The academic reward structure, which often prioritizes research volume over quality, will likely reinforce this problem. Research-intensive schools and international institutions that predicate promotion decisions on the number of publications in Social Sciences Citation Index-ranked journals are particularly susceptible to this trend. The overall number of publications that make questionable contributions to knowledge may therefore rise, along with the risk of research reporting inaccurate or potentially harmful conclusions. Editors and reviewers, in turn, may face an overwhelming number of studies with limited value. This trend should also concern social work, as more researchers may seek to publish in open-access mega-journals, possibly prioritizing publication speed and volume over research quality.
Reproducibility
LLMs make predictions based on probabilities rather than deterministic rules. As a result, the same LLM trained on the same data may generate slightly different results each time a researcher runs it, even with identical prompts. This variability can make results difficult to reproduce and can create challenges for interpreting LLM-based studies. Many LLMs also lack a way to set seed values, a computing strategy used to ensure that results can be reproduced. Without the ability to set seed values, it can be difficult to know whether differences in results reflect true differences in the data or simply chance variation in the LLM’s predictions. Finally, LLMs are constantly evolving, with new versions and updates released regularly; results obtained with an older version may be difficult to reproduce because a newer version’s underlying algorithms or training data have changed.
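As one illustration, the sketch below shows decoding settings that can reduce (but not eliminate) run-to-run variability when calling a hosted model. It assumes the openai Python client; a seed parameter is supported by some providers and API versions but not others, so its availability here is an assumption, and even a fixed seed does not guarantee identical output across model updates.

```python
# A minimal sketch of reducing run-to-run variability in LLM output.
# Setting temperature to 0 makes sampling near-deterministic, and some
# providers expose a seed parameter (availability varies by vendor and
# API version; treat it as an assumption). Because model updates can
# still change results, researchers should also record the exact model
# version used in their Methods section.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # greedy decoding: near-deterministic output
    seed=42,        # reproducibility aid where supported
    messages=[{
        "role": "user",
        "content": "Classify this text as positive, negative, or "
                   "neutral: 'The service was adequate.'",
    }],
)
print(response.choices[0].message.content)
```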
Explainability of Findings
A final challenge is that LLM results are often difficult to explain. That is, models’ underlying processes and decision-making mechanisms are not readily understood or transparent because of the complex and vast neural networks on which they are based. Consequently, we cannot trace the specific reasoning behind a model’s output or understand how the model arrived at a particular conclusion or prediction. This is also an ethical issue, as the underlying processes that generated the results may result from bias encoded in the model.
Review of Existing Artificial Intelligence and Large Language Model Guidelines and Recommendations
Our recommendations on the use of generative AI in social work research are informed by those established by other scientific disciplines, editors at scholarly journals, and major publishers. We focused on the preliminary guidelines provided by Nature (2023), Proceedings of the National Academy of Sciences (PNAS, 2023), Committee on Publication Ethics (COPE, 2023), Science (Thorp, 2023), Elsevier (2023), and JAMA (Flanagin et al., 2023) as a basis for our discussion. We chose these guidelines for review due to their availability and broad influence; we did not find any guidelines specifically designed for social work or the social sciences.
All established guidelines were concise, containing both prescriptive and proscriptive elements. They consistently agreed on two main points. First, AI tools cannot be granted authorship, as authorship implies accountability for the work. Elsevier, for instance, adheres to this policy, even though ChatGPT was listed as a coauthor of an article (O’Connor & ChatGPT, 2023) in an Elsevier journal, Nurse Education in Practice. The article has since been amended, removing ChatGPT as an author.
The second point of agreement requires that researchers document their use of LLMs. However, the definition of acceptable use varies among the guideline developers. For instance, Science (2023) takes the most stringent position, declaring that “Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors” (para. 6). Taken literally, this statement suggests that using AI tools to improve a manuscript’s readability might be deemed misconduct. This requirement could also unfairly diminish the capacity of researchers with English as a second language to engage in scientific discussions where English is the primary language. Duracinsky and colleagues (2017) reported that limited skills in English are a major barrier to publishing in English-language journals for French researchers. LLMs could be an important tool in mitigating those barriers.
Although JAMA does not explicitly forbid using AI tools to generate content or images, it discourages the practice unless accompanied by a clear description and explanation. PNAS and COPE require that AI software is acknowledged in the Materials and Methods or Acknowledgements sections but do not offer guidance on acceptable use. Elsevier has the most lenient policies, permitting the use of AI and AI-assisted technology to enhance the readability and language of the work while emphasizing the importance of human oversight and control in the application of such technology.
The guidelines provided by prominent journals and organizations such as Nature, PNAS, Science, Elsevier, COPE, and JAMA are crucial in addressing authorship concerns and the application of AI tools in scientific research. However, these guidelines are insufficient for guiding the future of social work research for several reasons. First and foremost, these guidelines are not specifically designed for the social work discipline. As a result, they may not adequately address the unique ethical issues that require careful consideration within social work research contexts, such as research involving vulnerable and marginalized groups. The guidelines also lack sufficient guidance on how AI tools should be described and documented in research publications, which can lead to inconsistent reporting practices and make it challenging to assess the validity and reliability of findings.
These guidelines emphasize the role of researchers, particularly concerning their application of AI in manuscript development. However, it is essential to recognize that AI issues are equally relevant to journal editors and reviewers, as they hold significant responsibilities in upholding the quality and integrity of social work research. Furthermore, scholars have been exploring the use of AI in the peer-review process (Checco et al., 2021), which presents a range of novel opportunities and potential challenges that warrant careful consideration. Building on our review of existing guidelines, we suggest a preliminary set of recommendations for using generative AI and LLMs in social work research. These recommendations offer guidance for contending with the distinct concerns and challenges these new technologies pose for social work while also fostering transparency and ethical application of generative AI tools. We view these recommendations as a foundation that will require ongoing review and modification to keep pace with the rapidly evolving technological landscape and to address unforeseen issues that are likely to arise during this progression.
Recommendations for Social Work Researchers and Journal Editors
In laying out our recommendations, we first provide an overview of what we see as the general responsibilities of social work researchers who intend to use generative AI technologies. These responsibilities largely pertain to the knowledge and ethical awareness that research teams should develop prior to deployment of generative AI within a program of research. This list of responsibilities might also serve as a roadmap for training new investigators or be useful in identifying needed skill development for experienced researchers who are new to AI technologies. Next, we outline the acceptable use of generative AI in social work research, including how researchers should be accountable for AI usage and the requirements for transparent reporting. Lastly, we suggest a list of responsibilities for journal editors and reviewers in social work to ensure that safeguards are maintained throughout the research process.
Recommended Responsibilities of Social Work Researchers Using Generative Artificial Intelligence
Construction and Limitations of Artificial Intelligence Tools
Researchers should first understand how AI tools are constructed, including a working knowledge of their underlying algorithms, data sources, and training processes. This knowledge is essential for recognizing AI tools’ limitations and potential biases and for making informed decisions about which tools are best suited to a particular research project. Researchers should also be aware of the boundaries of their tools, such as situations where they might not perform as expected. For example, there may be instances where the model “hallucinates” and provides false information (Alkaissi & McFarlane, 2023). Researchers must be prepared to address these limitations in their research design and analysis. They should also be prepared to communicate that the limited explainability of many LLMs’ outputs can make it difficult to understand the mechanisms behind unexpected performance.
Ethics and Bias
Ethical considerations should be at the forefront of any integration of generative AI technologies. Researchers must be mindful of the potential biases in generative AI algorithms and their training data and work to minimize them to ensure nondiscriminatory outcomes and reporting. They should also consider the implications and impact of AI-facilitated research—including potential harm to individuals, communities, or the environment—and strive to maximize the benefits while minimizing any negative consequences.
Informed Consent
When using generative AI to collect or analyze human subjects data, researchers must obtain informed consent from participants. This includes clearly explaining the purpose of the research, potential risks, benefits, and how the data will be used, stored, and shared. Researchers should also be prepared to answer any questions participants may have about the AI tools used in the research and address any concerns related to privacy or potential harm. We also encourage researchers to consider using opt-out features like those offered by OpenAI that exempt a user’s inputs from being used for subsequent model training and improvements (Markovski, 2023).
Data Privacy and Security
Researchers are responsible for protecting the privacy and security of the data they collect and analyze using generative AI tools. This involves adhering to relevant data protection regulations, anonymizing personal data, and using secure data storage and transfer methods. Researchers should also be transparent about their data handling practices and be prepared to address potential privacy concerns from participants, stakeholders, or regulatory authorities.
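A very limited form of this protection can be scripted, as in the sketch below, which scrubs obvious identifiers from text before it is sent to a cloud-based model. The regular expressions catch only simple patterns and are illustrative assumptions, not a validated de-identification procedure; rigorous de-identification requires purpose-built tools and human review.

```python
# A minimal sketch of redacting obvious identifiers before text leaves
# the researcher's machine. These patterns catch only e-mail addresses
# and U.S.-style phone numbers; they are NOT a complete or validated
# de-identification procedure.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)            # e-mail addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # phone numbers
    return text

print(redact("Contact Jane at jane.doe@example.org or 313-555-0142."))
# -> "Contact Jane at [EMAIL] or [PHONE]."  (note: the name itself is not caught)
```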
Continuous Learning and Development
As AI technologies evolve rapidly, researchers should engage in ongoing learning and development to stay current with the latest advances in AI tools, techniques, and ethical considerations. This includes attending conferences, workshops, and webinars, participating in relevant online forums, and engaging with interdisciplinary communities of practice. By staying informed and actively participating in the AI research community, researchers can ensure that they ethically use the most appropriate and cutting-edge tools and methods.
Recommendations for Deployment
AI technologies can be employed in designing and conducting research and preparing manuscripts for publication. Researchers must actively participate in every stage of the research process, ensuring that their expertise, professional judgment, and ethical considerations guide their work. AI technologies should enhance human intelligence and creativity, facilitating more effective and efficient research without replacing the researcher’s role in shaping research questions, evaluating evidence quality, drafting scientific communications, or drawing meaningful conclusions. Researchers must therefore exercise due diligence when using AI technologies, thoroughly reviewing, validating, and assuming responsibility for the information generated by these tools.
Large Language Models in Planning and Writing
Researchers using LLMs for research planning or manuscript preparation should describe how they employed AI tools. This description should outline tasks the AI assisted with, such as idea generation, literature reviews, language translation, and manuscript editing. The description should include the names and version numbers of the tools used. Authors should also include a statement confirming they have carefully reviewed the AI-informed content and fully assume responsibility for its accuracy and validity. This description can be integrated into the text body or included in an acknowledgment section for articles without a scientific format (e.g., editorials, commentaries, theoretical papers). For articles with a scientific format, the description should be included in the Methods section, along with additional details for data analysis and interpretation. We encourage researchers to align their descriptions of generative AI use with the roles established by the National Information Standards Organization (2023) in their Contributor Role Taxonomy (i.e., CRediT).
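To make this concrete, a disclosure modeled on the above recommendations might read as follows (the wording and version details are illustrative only): “We used ChatGPT-4 (May 2023 version; OpenAI) to generate an initial outline, translate draft passages, and edit the manuscript for clarity. All AI-assisted content was reviewed and revised by the authors, who assume full responsibility for its accuracy and validity.”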
Large Language Models in Analysis and Interpretation
Researchers using LLMs for research tasks, such as analyzing structured or unstructured data, should thoroughly describe their use of AI tools. This description should outline tasks the AI assisted with, including data preprocessing, feature extraction, and pattern identification in structured or unstructured data. If researchers employ LLMs for tasks such as information extraction, classification, or clustering, the prompts used should be detailed, along with the names and version numbers of the tools. Authors should also include a statement confirming that they have meticulously reviewed the AI-generated analysis and fully assume responsibility for its accuracy and validity.
Researchers should confirm compliance with ethical standards, including data privacy and security, to ensure the research meets all ethical requirements. This confirmation should detail measures taken to protect sensitive data and maintain data privacy throughout the research process. Furthermore, researchers should clearly describe their efforts to understand and mitigate potential model bias. This may include examining the data sources used to train the model and the steps taken to address any identified biases during the research process. The role of AI tools in analysis and interpretation should be included in the Methods section, alongside AI’s use for research planning and manuscript preparation.
Recommended Responsibilities of Editors and Reviewers
Editors and reviewers hold significant responsibilities regarding the use of generative AI and LLMs in social work research, as their roles are essential in maintaining the quality, integrity, and ethical standards of scholarly publications. With the increasing prevalence of generative AI technologies, including LLMs, in various research disciplines, editors and reviewers for social work journals must help ensure responsible and transparent use. As a result, we propose the following guidelines to establish safeguards and maintain high-quality research.
Quality Control
Editors and reviewers should ensure that research incorporating LLMs meets the required quality standards specific to the research domain. This responsibility includes verifying the appropriate use of LLMs, evaluating the validity and reliability of AI-generated content, and assessing the overall research methodology used.
Ethical Standards
Editors and reviewers must evaluate the ethical implications of using LLMs in research. This evaluation should address data privacy, security, informed consent, and potential biases. Editors and reviewers are responsible for ensuring that research adheres to the highest ethical standards and minimizes any negative consequences.
Transparency and Accountability
Editors and reviewers should ensure that researchers provide a clear and transparent account of how LLMs have been used in their work. Promoting transparency helps maintain trust in the research process and enables other researchers, stakeholders, and the public to understand and scrutinize the use of AI tools in the research. Editors should therefore ask authors explicitly about using LLMs and provide clear guidelines on the permissible use of LLMs for scholarship submitted to the journal and the process for disclosing that use.
Continuous Learning and Development
To conduct a fair and rigorous peer-review process, editors and reviewers must be knowledgeable about the AI tools used in research. The ability to understand LLMs and their use within social work research cannot remain the domain of AI experts. Instead, reviewers across academic social work must engage in ongoing efforts to stay current with the tools and their applications in research. By staying informed and actively participating in the AI research community, editors and reviewers can ensure that they are well-equipped to assess research involving LLMs and other AI technologies.
Conclusion
The advent of generative AI and LLMs presents both opportunities and challenges for social work research. These cutting-edge technologies hold the potential to enhance the analytical capabilities and efficiency of researchers, providing novel insights and enhancing dissemination capacity. However, as social work embraces these new tools, we must remain vigilant about the potential ethical implications, biases, and risk of generating misinformation. Journal editors and reviewers must be proactive in implementing quality control measures that ensure the rigorous evaluation of studies using LLMs while also fostering a culture of transparency and accountability within the research community. We hope the recommendations offered here will be a useful starting point for ongoing conversations about the role of generative AI in social work research. By collaboratively navigating this complex terrain and adapting our standards as needed, we can harness the transformative power of generative AI to advance social work research and ultimately enhance the well-being of the individuals, families, and communities we serve.
We acknowledge the use of ChatGPT-4 (March 2023 version) during the conceptualization and writing of this article, including the original draft and the reviewing and editing completed prior to submission. ChatGPT-4 was used to develop and refine ideas, generate text, and edit the manuscript prior to acceptance. We have carefully reviewed all AI-generated content to ensure its accuracy and validity, and we assume full responsibility for the content presented in this article.
Notes
Bryan Victor, PhD, is an assistant professor at the School of Social Work, Wayne State University.
Rebeccah Sokol, PhD, is an assistant professor at the School of Social Work and the Institute for Firearm Injury Prevention, University of Michigan—Ann Arbor.
Lauri Goldkind, PhD, is an associate professor at the Fordham University Graduate School of Social Service and editor-in-chief at the Journal of Technology in Human Services.
Brian Perron, PhD, is a professor at the School of Social Work, University of Michigan—Ann Arbor.
Correspondence regarding this article should be directed to Bryan Victor, PhD, 5447 Woodward Ave., Detroit, MI 48202 or via e-mail to [email protected].
References
Alkaissi, H., & McFarlane, S. I. (2023, February 19). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), Article e35179. https://doi.org/10.7759/cureus.35179
ATLAS.ti. (2023a). Introducing: AI coding Beta powered by OpenAI. https://atlasti.com/ai-coding-powered-by-openai
ATLAS.ti. (2023b). Accelerating innovation for data analysis. https://atlasti.com/atlas-ti-ai-lab-accelerating-innovation-for-data-analysis
Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(1), 1–11. https://www.nature.com/articles/s41599-020-00703-8
Committee on Publication Ethics (COPE). (2023). Authorship and AI tools. https://publicationethics.org/cope-position-statements/ai-author
Danneels, E. (2004). Disruptive technology reconsidered: A critique and research agenda. Journal of Product Innovation Management, 21(4), 246–258. https://doi.org/10.1111/j.0737-6782.2004.00076.x
Duracinsky, M., Lalanne, C., Rous, L., Dara, A. F., Baudoin, L., Pellet, C., Descamps, A., Péretz, F., & Chassany, O. (2017). Barriers to publishing in biomedical journals perceived by a sample of French researchers: Results of the DIAzePAM study. BMC Medical Research Methodology, 17, Article 96. https://doi.org/10.1186/s12874-017-0371-z
Elsevier. (2023). Publishing ethics. https://www.elsevier.com/about/policies/publishing-ethics
Felten, E., Raj, M., & Seamans, R. (2023). How will language modelers like ChatGPT affect occupations and industries? [Working paper]. https://arxiv.org/abs/2303.01157
Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637–639. https://doi.org/10.1001/jama.2023.1344
Goldkind, L., Wolf, L., Glennon, A., Rios, J., & Nissen, L. (in press). The end of the world as we know it? ChatGPT and social work. Social Work.
Gordon, R. (2023, March 3). Large language models are biased: Can logic help save them? MIT News. https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
Ioakimidis, V., & Maglajlic, R. A. (2023). Neither ‘neo-Luddism’ nor ‘neo-positivism’: Rethinking social work’s positioning in the context of rapid technological change. The British Journal of Social Work, 53(2), 693–697. https://doi.org/10.1093/bjsw/bcad081
Kung, J. Y. (2023). Product review of Elicit. Journal of the Canadian Health Libraries Association, 44(1), 15–18. https://doi.org/10.29173/jchla29657
Markovski, Y. (2023). How your data is used to improve model performance. OpenAI. https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance
National Information Standards Organization. (2023). CRediT. https://credit.niso.org/
Nature. (2023, January 24). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. https://www.nature.com/articles/d41586-023-00191-1
O’Connor, S., & ChatGPT. (2023). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, Article 103537. https://doi.org/10.1016/j.nepr.2022.103537
OpenAI. (2023). Terms of use (March 14, 2023, version). https://openai.com/policies/terms-of-use
Perron, B. E. (2023, April 11). Large language models expose additional flaws in the national social work licensing exams. Medium. https://towardsdatascience.com/large-language-models-expose-additional-flaws-in-the-national-social-work-licensing-exams-d5d2ca426fec
Proceedings of the National Academy of Sciences (PNAS). (2023, February 21). The PNAS journals outline their policies for ChatGPT and generative AI. PNAS Updates. https://www.pnas.org/post/update/pnas-policy-for-chatgpt-generative-ai
Sardana, D., Fagan, T. R., & Wright, J. T. (2023). ChatGPT: A disruptive innovation or disrupting innovation in academia? The Journal of the American Dental Association, 154(5), 361–364. https://doi.org/10.1016/j.adaj.2023.02.008
Scheyett, A. (2023). A liminal moment in social work. Social Work, 68(2), 101–102. https://doi.org/10.1093/sw/swad010
Science. (2023). Science journals: Editorial policies. https://www.science.org/content/page/science-journals-editorial-policies
Scite Inc. (2023). Scite [Computer software]. Retrieved April 25, 2023, from https://scite.ai
Semantic Scholar. (n.d.). Semantic Scholar [Computer software]. Retrieved April 25, 2023, from https://semanticscholar.org
Singer, J. B., Báez, J. C., & Rios, J. A. (2023). AI creates the message: Integrating AI language learning models into social work education and practice. Journal of Social Work Education, 59(2), 294–302. https://doi.org/10.1080/10437797.2023.2189878
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Victor, B. G., Kubiak, S., Angell, B., & Perron, B. E. (2023). Time to move beyond the ASWB licensing exams: Can generative artificial intelligence offer a way forward for social work? Research on Social Work Practice, 33(5), 511–517. https://doi.org/10.1177/10497315231166125
Wang, S., Scells, H., Koopman, B., & Zuccon, G. (2023). Can ChatGPT write a good Boolean query for systematic review literature search? arXiv. https://arxiv.org/abs/2302.03495