Adlakha V., BehnamGhader P., Lu X.H., Meade N. i Reddy S. (2023), Evaluating correctness and faithfulness of instruction-following models for question answering, ArXiv Preprint. https://doi.org/10.48550/arXiv.2307.16877
Alkaissi H. i McFarlane S.I. (2023), Artificial hallucinations in ChatGPT: implications in scientific writing, Cureus 15, nr 2, artykuł nr e35179. https://doi.org/10.7759/cureus.35179
Bengio S., Vinyals O., Jaitly N. i Shazeer N. (2015), Scheduled sampling for sequence prediction with recurrent neural networks, [w:] Advances in Neural Information Processing Systems 28, s. 1171-1179.
Boguszewicz-Kreft M. (2021), Marketing doświadczeń: jak poruszyć zmysły, zaangażować emocje, zdobyć lojalność klientów? Warszawa: CeDeWu.
Capuano N., Fenza G., Loia V. i Nota F.D. (2023), Content-Based Fake News Detection with machine and deep learning: a systematic review, Neurocomputing 540, nr 14, s. 91-103. https://doi.org/10.1016/j.neucom.2023.02.005
Chen M., Tworek J., Jun H., Yuan Q., Pinto H.P.D.O., Kaplan J., … i Zaremba W. (2021), Evaluating large language models trained on code, ArXiv Preprint. https://doi.org/10.48550/arXiv.2107.03374
Cheng X., Zhang X., Cohen J. i Mou J. (2022), Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms, Information Processing & Management 59, nr 3, artykuł nr 102940. https://doi.org/10.1016/j.ipm.2022.102940
Colleoni E. i Corsaro D. (2022), Critical issues in artificial intelligence algorithms and their implications for digital marketing, [w:] R. Belk i R. Llamas (red.), The Routledge companion to digital consumption, Hoboken: Taylor and Francis, s. 166-177.
Curran S., Lansley S. i Bethell O. (2023), Hallucination is the last thing you need, ArXiv Preprint. https://doi.org/10.48550/arXiv.2306.11520
DiResta R. (2020), AI-Generated Text Is the Scariest Deepfake of All, https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/ [dostęp: 19.12.2023].
Du Y. (2023), Cooperative multi-agent learning in a complex world: challenges and solutions, Proceedings of the AAAI Conference on Artificial Intelligence 37, nr 13, s. 15436. https://doi.org/10.1609/aaai.v37i13.26803
Dziri N., Madotto A., Zaiane O. i Bose A.J. (2021), Neural path hunter: Reducing hallucination in dialogue systems via path grounding, ArXiv Preprint. https://doi.org/10.48550/arXiv.2104.08455
Emsley R. (2023), ChatGPT: these are not hallucinations – they’re fabrications and falsifications, Schizophrenia 9, artykuł nr 52. https://doi.org/10.1038/s41537-023-00379-4
Evans O., Cotton-Barratt O., Finnveden L., Bales A., Balwit A., Wills P., Righetti L. i Saunders W. (2021), Truthful AI: Developing and governing AI that does not lie, ArXiv Preprint. https://doi.org/10.48550/arXiv.2110.06674
Feak C.B. i Swales J. (2009), Telling a research story: Writing a literature review, Ann Arbor: University of Michigan Press.
Figar V.N. (2023), Metaphorical framings in The New York Times online press reports about ChatGPT, Philologia Mediana 15, nr 15, s. 381-398. https://doi.org/10.46630/phm.15.2023.27
Floridi L. i Chiriatti M. (2020), GPT-3: Its nature, scope, limits, and consequences, Minds and Machines 30, s. 681-694. https://doi.org/10.1007/s11023-020-09548-1
Griffin L.D., Kleinberg B., Mozes M., Mai K.T., Vau M., Caldwell M. i Mavor-Parker A. (2023), Susceptibility to Influence of Large Language Models, ArXiv Preprint. https://doi.org/10.48550/arXiv.2303.06074
Grobler A. (2000), Uteoretyzowanie, relatywizm i prawda, Przegląd Filozoficzny – Nowa Seria, nr 2 (34), s. 37-45.
Guzman A.L. i Lewis S.C. (2019), Artificial intelligence and communication: A Human–Machine Communication research agenda, New Media & Society 22, nr 1, s. 70-86. https://doi.org/10.1177/1461444819858691
Heaven W.D. (2022), Why Meta’s latest large language model only survived three days online, https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/ [dostęp: 19.12.2023].
Huang D., Harasim S.A. i Leccia F. (2023), Understanding the emotional experience on consumer behaviors: A study on ChatGPT service (student thesis), Jönköping International Business School.
Huang M.H. i Rust R.T. (2018), Artificial intelligence in service, Journal of Service Research 21, nr 2, s. 155-172. https://doi.org/10.1177/1094670517752459
Huang M.H., Rust R. i Maksimovic V. (2019), The feeling economy: Managing in the next generation of artificial intelligence (AI), California Management Review 61, nr 4, s. 43-65. https://doi.org/10.1177/0008125619863436
Ji Z., Lee N., Frieske R., Yu T., Su D., Xu Y., Ishii E., Bang Y.J., Madotto A. i Fung P. (2023), Survey of hallucination in natural language generation, ACM Computing Surveys 55, nr 12, s. 1-38. https://doi.org/10.1145/3571730
Kasai J., Kasai Y., Sakaguchi K., Yamada Y. i Radev D. (2023), Evaluating GPT-4 and ChatGPT on Japanese medical licensing examinations, ArXiv Preprint. https://doi.org/10.48550/arXiv.2303.18027
Khanam Z., Alwasel B.N., Sirafi H. i Rashid M. (2021), Fake news detection using machine learning approaches, IOP Conference Series: Materials Science and Engineering 1099, nr 1, artykuł nr 012040. https://doi.org/10.1088/1757-899X/1099/1/012040
Kim J.H., Kim J., Park J., Kim C., Jhang J. i King B. (2023), When ChatGPT Gives Incorrect Answers: The Impact of Inaccurate Information by Generative AI on Tourism Decision-Making, Journal of Travel Research, Online First. https://doi.org/10.1177/00472875231212996
Kitchenham B. (2004), Procedures for performing systematic reviews, Keele University Technical Report 33, s. 1-26.
Kreft J. (2017), Algorithm as demiurge: a complex myth of new media, [w:] Strategic imperatives and core competencies in the era of robotics and artificial intelligence, Hershey: IGI Global, s. 146-166.
Kreft J. (2018), Władza algorytmów: u źródeł potęgi Google i Facebooka, Kraków: Wydawnictwo Uniwersytetu Jagiellońskiego.
Kreft J. (2022), Władza platform. Za fasadą Google, Facebooka i Spotify, Kraków: Universitas.
Kreft J. (2023), Władza misjonarzy. Zmierzch i świt świeckiej religii w Dolinie Krzemowej, Kraków: Universitas.
Kreft J., Boguszewicz-Kreft M. i Fydrych M. (2023), (Lost) Pride and Prejudice. Journalistic Identity Negotiation Versus the Automation of Content, Journalism Practice, Online First, s. 1-24. https://doi.org/10.1080/17512786.2023.2289177
Kreft J., Boguszewicz-Kreft M. i Hliebova D. (2023), Under the Fire of Disinformation. Attitudes Towards Fake News in the Ukrainian Frozen War, Journalism Practice, s. 1-21. https://doi.org/10.1080/17512786.2023.2168209
Kreps S., McCain R.M. i Brundage M. (2022), All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation, Journal of Experimental Political Science 9, nr 1, s. 104-117. https://doi.org/10.1017/XPS.2020.37
Kshetri N., Dwivedi Y.K., Davenport T.H. i Panteli N. (2023), Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda, International Journal of Information Management 75, nr 6, artykuł nr 102716. https://doi.org/10.1016/j.ijinfomgt.2023.102716
Li C., Bi B., Yan M., Wang W. i Huang S. (2021), Addressing semantic drift in generative question answering with auxiliary extraction, [w:] Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing 2: Short Papers [online], Association for Computational Linguistics, s. 942-947.
Li Z. (2023), The dark side of ChatGPT: Legal and ethical challenges from stochastic parrots and hallucination, ArXiv Preprint. https://doi.org/10.48550/arXiv.2304.14347
Lin S., Hilton J. i Evans O. (2021), TruthfulQA: Measuring How Models Mimic Human Falsehoods, ArXiv Preprint. https://doi.org/10.48550/arXiv.2109.07958
Lucidi P.B. i Nardi D. (2018), Companion robots: the hallucinatory danger of human-robot interactions, [w:] Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, s. 17-22. https://doi.org/10.1145/3278721.3278741
Luvembe A.M., Li W., Li S., Liu F. i Xu G. (2023), Dual emotion based fake news detection: A deep attention-weight update approach, Information Processing & Management 60, nr 4, artykuł nr 103354. https://doi.org/10.1016/j.ipm.2023.103354
Marr B. (2023), ChatGPT: What are hallucinations and why are they a problem for AI systems, Bernard Marr, https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/ [dostęp: 19.12.2023].
Martinho A., Poulsen A., Kroesen M. i Chorus C. (2021), Perspectives about artificial moral agents, AI and Ethics 1, s. 477-490. https://doi.org/10.1007/s43681-021-00055-2
Mazur Z. i Orłowska A. (2018), Jak zaplanować i przeprowadzić systematyczny przegląd literatury, Polskie Forum Psychologiczne 23, nr 2, s. 235-251. https://doi.org/10.14656/PFP20180202
Mbakwe A.B., Lourentzou I., Celi L.A., Mechanic O.J. i Dagan A. (2023), ChatGPT passing USMLE shines a spotlight on the flaws of medical education, PLOS Digital Health 2, nr 2, artykuł nr e0000205. https://doi.org/10.1371/journal.pdig.0000205
McGuire J., De Cremer D., Hesselbarth Y., De Schutter L., Mai K.M. i Van Hiel A. (2023), The reputational and ethical consequences of deceptive chatbot use, Scientific Reports 13, artykuł nr 16246. https://doi.org/10.1038/s41598-023-41692-3
McIntosh T.R., Liu T., Susnjak T., Watters P., Ng A. i Halgamuge M.N. (2023), A culturally sensitive test to evaluate nuanced GPT hallucination, IEEE Transactions on Artificial Intelligence. https://doi.org/10.1109/TAI.2023.3332837
Metz C. (2023), Chatbots May ‘Hallucinate’ More Often Than Many Realize, The New York Times, https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html [dostęp: 19.12.2023].
Milosavljević M. i Vobič I. (2019), Human still in the loop: Editors reconsider the ideals of professional journalism through automation, Digital Journalism 7, nr 8, s. 1098-1116. https://doi.org/10.1080/21670811.2019.1601576
Mincewicz K. (2022), Sposoby pojmowania prawdy w prawoznawstwie na tle filozoficznych koncepcji prawdy, Szczecin: Uniwersytet Szczeciński.
Morley J., Floridi L., Kinsey L. i Elhalal A. (2020), From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices, Science and Engineering Ethics 26, nr 4, s. 2141-2168. https://doi.org/10.1007/s11948-019-00165-5
Mridha M.F., Keya A.J., Hamid M.A., Monowar M.M. i Rahman M.S. (2021), A comprehensive review on fake news detection with deep learning, IEEE Access 9, s. 156151-156170. https://doi.org/10.1109/ACCESS.2021.3129329
Mukherjee A. i Chang H. (2023), The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff, ArXiv Preprint. https://doi.org/10.48550/arXiv.2306.03601
Munn L., Magee L. i Arora V. (2023), Truth Machines: Synthesizing Veracity in AI Language Models, ArXiv Preprint. https://doi.org/10.48550/arXiv.2301.12066
Murtarelli G., Gregory A. i Romenti S. (2021), A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots, Journal of Business Research 129, s. 927-935. https://doi.org/10.1016/j.jbusres.2020.09.018
Oravec J.A. (2022), The emergence of “truth machines”?: Artificial intelligence approaches to lie detection, Ethics and Information Technology 24, nr 1, artykuł nr 6. https://doi.org/10.1007/s10676-022-09621-6
Orr W. i Davis J.L. (2020), Attributions of ethical responsibility by artificial intelligence practitioners, Information, Communication & Society 23, nr 5, s. 719-735. https://doi.org/10.1080/1369118X.2020.1713842
Pan Y., Froese F., Liu N., Hu Y. i Ye M. (2022), The adoption of artificial intelligence in employee recruitment: The influence of contextual factors, The International Journal of Human Resource Management 33, nr 6, s. 1125-1147. https://doi.org/10.1080/09585192.2021.1879206
Paul J., Ueno A. i Dennis C. (2023), ChatGPT and consumers: Benefits, pitfalls and future research agenda, International Journal of Consumer Studies 47, nr 4, s. 1213-1225. https://doi.org/10.1111/ijcs.12928
Pool J., Akhlaghpour S., Fatehi F. i Burton-Jones A. (2024), A systematic analysis of failures in protecting personal health data: a scoping review, International Journal of Information Management 74, artykuł nr 102719. https://doi.org/10.1016/j.ijinfomgt.2023.102719
Rawte V., Chakraborty S., Pathak A., Sarkar A., Tonmoy S.M., Chadha A., … i Das A. (2023), The Troubling Emergence of Hallucination in Large Language Models – An Extensive Definition, Quantification, and Prescriptive Remediations, ArXiv Preprint. https://doi.org/10.48550/arXiv.2310.04988
Ricaurte P. (2022), Ethics for the majority world: AI and the question of violence at scale, Media, Culture & Society 44, nr 4, s. 726-745. https://doi.org/10.1177/01634437221099612
Rosen J. (1993), Beyond Objectivity, Nieman Reports 47, nr 4, s. 48-53.
Runco M.A. (2023), AI can only produce artificial creativity, Journal of Creativity 33, nr 3, artykuł nr 100063. https://doi.org/10.1016/j.yjoc.2023.100063
Schüll N.D. (2013), The Folly of Technological Solutionism: An Interview with Evgeny Morozov, Public Books, https://www.publicbooks.org/the-folly-of-technological-solutionism-an-interview-with-evgeny-morozov/ [dostęp: 19.12.2023].
Shin D. (2021), The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI, International Journal of Human-Computer Studies 146, artykuł nr 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Shu K., Sliva A., Wang S., Tang J. i Liu H. (2017), Fake news detection on social media: A data mining perspective, ACM SIGKDD Explorations Newsletter 19, nr 1, s. 22-36. https://doi.org/10.1145/3137597.3137600
Shu K., Wang S. i Liu H. (2019), Beyond news contents: The role of social context for fake news detection, [w:] Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, s. 312-320. https://doi.org/10.48550/arXiv.1712.07709
Sidaoui K., Jaakkola M. i Burton J. (2020), AI feel you: customer experience assessment via chatbot interviews, Journal of Service Management 31, nr 4, s. 745-766. https://doi.org/10.1108/JOSM-11-2019-0341
Solaiman I. i Dennison C. (2021), Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, ArXiv Preprint. https://doi.org/10.48550/arXiv.2106.10328
Stachewicz K. (2013), O filozofii chrześcijańskiej. Kilka uwag z perspektywy historycznej i futurologicznej, Logos i Ethos, nr 2(35), s. 219-234.
Sun K., Xu Y.E., Zha H., Liu Y. i Dong X.L. (2023), Head-to-Tail: How Knowledgeable are Large Language Models (LLM)? AKA Will LLMs Replace Knowledge Graphs?, ArXiv Preprint. https://doi.org/10.48550/arXiv.2308.10168
Swart J., Groot Kormelink T., Costera Meijer I. i Broersma M. (2022), Advancing a radical audience turn in journalism. Fundamental dilemmas for journalism studies, Digital Journalism 10, nr 1, s. 8-22. https://doi.org/10.1080/21670811.2021.2024764
Tandoc Jr E.C. i Lee J.C.B. (2022), When viruses and misinformation spread: How young Singaporeans navigated uncertainty in the early stages of the COVID-19 outbreak, New Media & Society 24, nr 3, s. 778-796. https://doi.org/10.1177/1461444820968212
Touvron H., Martin L., Stone K., Albert P., Almahairi A., Babaei Y., … i Scialom T. (2023), Llama 2: Open foundation and fine-tuned chat models, ArXiv Preprint. https://doi.org/10.48550/arXiv.2307.09288
Tranfield D., Denyer D. i Smart P. (2003), Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review, British Journal of Management 14, nr 3, s. 207-222. https://doi.org/10.1111/1467-8551.00375
Umapathi L.K., Pal A. i Sankarasubbu M. (2023), Med-HALT: Medical domain hallucination test for large language models, ArXiv Preprint. https://doi.org/10.48550/arXiv.2307.15343
Verma P. i Oremus W. (2023), ChatGPT invented a sexual harassment scandal and named a real law prof as the accused, The Washington Post, https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/ [dostęp: 19.12.2023].
Waisbord S. (2018), Truth is what happens to news: On journalism, fake news, and post-truth, Journalism Studies 19, nr 13, s. 1866-1878. https://doi.org/10.1080/1461670X.2018.1492881
Wamba S.F., Queiroz M.M., Jabbour C.J.C. i Shi C.V. (2023), Are both generative AI and ChatGPT game changers for 21st-Century operations and supply chain excellence?, International Journal of Production Economics 265, artykuł nr 109015. https://doi.org/10.1016/j.ijpe.2023.109015
Wang H., Fu T., Du Y., Gao W., Huang K., Liu Z., … i Zitnik M. (2023), Scientific discovery in the age of artificial intelligence, Nature 620, s. 47-60. https://doi.org/10.1038/s41586-023-06221-2
Wiseman S., Shieber S.M. i Rush A.M. (2017), Challenges in data-to-document generation, ArXiv Preprint. https://doi.org/10.48550/arXiv.1707.08052
Woleński J. (2013), Historia pojęcia prawdy, [w:] R. Ziemińska (red.), Przewodnik po epistemologii, Kraków: WAM, s. 53-86.
Zarouali B., Makhortykh M., Bastian M. i Araujo T. (2021), Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility, European Journal of Communication 36, nr 1, s. 53-68. https://doi.org/10.1177/0267323120940908