Table 3: Current challenges while using ChatGPT
ChatGPT's output is based solely on the information and patterns in its training data set. It cannot express emotions or feelings, nor can it take ethical and moral factors into consideration.[1] [13] Consequently, while it can compile voluminous data, it usually lacks insight into the root issue. ChatGPT can also be too verbose; in medicine, doctors often need a simple yes/no answer, which the bot is not programmed to provide.[4]
Sometimes ChatGPT hallucinates, generating answers that sound plausible and factual but are not based on actual truth.[17] As the authors of at least one research paper noted, "When answering a question that requires professional knowledge from a particular field, ChatGPT may fabricate facts in order to give an answer…"[18] ChatGPT can also be fooled by contextually misleading information, or by false data included in the question itself, which it will then treat as fact.
ChatGPT has been shown to "cheat" at chess by playing a move that would be legal in other circumstances but is illegal in that specific position.[19]
It can also be manipulated to bypass its safety checks and then be induced to write malware, provide a recipe for making a Molotov cocktail, and even give the formula for a nuclear bomb.[20]
Another limitation is that it can only deal with data that was available up to September 2021, and even that without any references or citations (unlike Google's Bard).
ChatGPT has the irritating habit of replying with a to-do list, which the user must then take elsewhere to procure more information.
No wonder OpenAI's disclaimer recommends that ChatGPT-generated content be reviewed by a human. Such review should be mandatory in high-stakes situations like medical applications and consultations.[3] In other words, ChatGPT, in its present version (February 13), should not be expected to understand the real world.
ChatGPT as a Designated Author in Publications
There is concern that the use of ChatGPT carries an inherent lack of transparency. As mentioned earlier, it is a great tool for scientific writing. The question is how to acknowledge its contribution when we humans incorporate its output into our final product.
The next logical question is whether ChatGPT should be included as a coauthor. Unfortunately, this has already happened.[21] [22] [23] [24] [25]
AI-generated text should be used only with proper citation, just as we currently do for any other reference we quote in our manuscripts; otherwise we are guilty of plagiarism. There is another concern as well: attribution of authorship comes with accountability, which cannot apply to AI tools like ChatGPT. They cannot be held responsible, a fact that the ChatGPT disclaimer already proclaims clearly. Many researchers and journal editors vehemently oppose including ChatGPT as a coauthor in any publication.[26] Taking it a step further, some journals, like Nature, have brought out a policy that prohibits naming such tools as a "credited author" on research papers.
What if AI-generated text is quoted by humans without acknowledging the source? One solution is to use AI tools to detect text generated by AI bots. On February 1, 2023, a press release announced such a tool made by ChatGPT's creators themselves.[27] With some caveats, it seems to have a reasonable chance of distinguishing text of human origin from that produced by machines. This free tool is called Classifier; by cutting and pasting text into it, the user obtains an indication of how likely the text is to have been generated by AI. The creators are quick to emphasize that Classifier was hastily put together to address growing concerns, that it is a work in progress, and that it will become more robust in the future.
We also have the luxury of access to another such tool, GPTZero, made by a student named Edward Tian to "detect AI plagiarism."[28] However, such tools can easily be fooled today. All the user has to do is copy and paste the AI-generated text into another AI tool such as "Rephrase" or "Quillbot."[29] The resulting output will be similar to, yet different from, the ChatGPT original and has a good chance of not being recognized as AI generated.
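To illustrate one common idea behind such detectors, the minimal Python sketch below scores a passage by its perplexity under a small open language model (GPT-2). This is our own toy illustration, not the actual method of Classifier or GPTZero: low perplexity (high statistical predictability) is only weak evidence of machine generation, which is consistent with how easily paraphrasing tools defeat these detectors.

```python
# Toy illustration of perplexity-based AI-text screening.
# NOT the actual method of Classifier or GPTZero; purely illustrative.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more 'predictable' to the model,
    which detectors of this kind treat as weak evidence of machine origin."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its own cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

sample = "The patient was advised to undergo further evaluation and imaging."
print(f"perplexity: {perplexity(sample):.1f}")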
Further Insights into ChatGPT
At the core of ChatGPT's human-like responses are its transformer architecture and its reinforcement learning from human feedback (RLHF) algorithm. Together, these are powerful enough to allow ChatGPT to process large amounts of data (in "normal" text form) and generate relevant, coherent responses in real time.[3]
Its transformer architecture uses a neural network mechanism that weights the importance of the various components of the input and then makes predictions.[3] [4] Its natural language processing allows the model to understand the relationships between the words in any particular sentence, after which it generates a response. "Garbage in, garbage out" is a well-known axiom, and ChatGPT's deep learning depends on the quality and completeness of its training data. Bias is therefore inherent, and it will decrease (or increase) over time thanks to self-learning. For instance, ChatGPT did produce a poem on President Joe Biden but refused to do the same for Donald Trump.[30] RLHF is key to the system's learning: human feedback is used as a reward signal to improve ChatGPT's performance. The feedback from the human evaluator takes the form of a score that updates the platform's parameters, increasing the appropriateness and accuracy of subsequent responses.
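To make these two mechanisms concrete, the sketch below (Python with NumPy) shows a toy scaled dot-product attention step, the importance-weighting operation at the heart of a transformer, followed by a toy reward-weighted parameter update in the spirit of RLHF. All names, shapes, and values are our own illustrative assumptions, not OpenAI's implementation; real RLHF trains a separate reward model and uses policy-gradient methods such as PPO.

```python
# Illustrative sketch only: toy attention and a reward-weighted update.
# All shapes, names, and values are hypothetical.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each token's output is a weighted
    average of all value vectors, with weights reflecting how strongly
    its query matches every key (the 'importance weighting' in the text)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_model = 4, 8                 # a 4-token toy "sentence"
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, w = attention(X @ Wq, X @ Wk, X @ Wv)
print("attention weights (rows sum to 1):\n", w.round(2))

# Toy RLHF-flavored step: a human evaluator's score acts as a reward
# that scales a parameter update, nudging the model toward responses
# that earn higher scores. Real RLHF uses a learned reward model and
# policy-gradient optimization; this shows only the core intuition.
theta = rng.normal(size=d_model)         # stand-in for model parameters
gradient = rng.normal(size=d_model)      # stand-in for a policy gradient
human_score = 0.8                        # evaluator's rating in [0, 1]
learning_rate = 0.01
theta += learning_rate * human_score * gradient
```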
ChatGPT in Oncology
Since ChatGPT has the ability to pass the USMLE (equivalent to 3 years of solid study as a medical student), is it a threat to the oncology community? We asked it several questions to determine the facts.[3] When asked about the basics of cancer biology, it gives excellent answers in as much detail as we ask of it. When asked to estimate the risk of cancer or project outcomes in specific settings, it does a reasonably good job (for data available up to September 2021). When asked to recommend a line of management, it quickly reduces its answers to general advice and adds a detailed disclaimer about its limitations. It cannot provide any information about data, drugs, or devices that became available in 2022 or later. If used by a patient, it gives general advice of the kind also available through a Google search, along with a list of other sources where the user can look for more detailed information. If asked specifically, it also provides a list of PubMed articles published on the subject. When asked about rare cases or situations beyond routine care, its answers are vague and often not useful. We even asked ChatGPT to list the best oncologists and cancer centers in India; the list it generated was skewed, incomplete, and not a reasonable representation of what actually exists in our country. In conclusion, oncologists have nothing to fear from ChatGPT, so far!
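For readers who wish to repeat this kind of informal evaluation programmatically rather than through the web interface, a minimal sketch using OpenAI's Python client (the pre-1.0 interface available around the time of writing) might look like the following; the model name, prompt, and key placeholder are our own illustrative assumptions.

```python
# Minimal sketch of querying ChatGPT programmatically (openai<1.0 interface).
# Model name, prompt, and API key placeholder are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the ChatGPT model exposed via the API
    messages=[
        {"role": "user",
         "content": "What are the main risk factors for oral cavity cancer?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```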
In the past, industrialization adversely affected blue-collar workers first. In the case of ChatGPT, it is thought that white-collar workers will be affected first, especially those who do routine tasks like accounting, literature searches, and content writing; in fact, highly creative jobs might be the first to go.[31] The layoffs implemented by several of the big technology companies across the world are the stark reality we are facing today.
Discussion
While AI has been around for a long time, it is no exaggeration to state that ChatGPT is a disruptor. Its adoption has been phenomenal, with the first million users signing up within 5 days of its launch on November 30, 2022. No wonder the valuation of OpenAI spiraled to US$29 billion.
ChatGPT can write code, debug code, act as a Linux terminal, produce reports and homework, write theses, pass higher-study examinations with ease, and much more. It can also write phishing emails as well as malware, giving it the potential to create a significant cybersecurity risk. In spite of fail-safe precautions and algorithms designed to prevent such incidents, it has been tricked into providing details on how to create a Molotov cocktail and even a nuclear bomb!
Way back in 2014, Stephen Hawking predicted that AI would reach a level where it becomes a new form of life, one that will then outperform humans.[32] In silico platforms can design viruses today. AI, in the future, will be able to improve and replicate itself without human intervention. Essentially, there may come a time when our human race is annihilated. Do we have any evidence that this might happen? Let us take the example of well-established robots in today's world.
Industrial robots have long been in use in manufacturing and on assembly lines. They are responsible for the deaths of approximately 5,000 workers every year,[33] in spite of the International Organization for Standardization mandating at least 10 standards for industrial robots. Consider two incidents from 2015. Wanda Holbrook, a worker in Ventra Lunid, Michigan, was crushed to death by a robot that had wandered out of its designated work area.[34] Similarly, Ramji Lal died at an automobile factory in Manesar, Haryana, India, when his ribs and abdomen were crushed.[35] During that period, courts awarded compensation of US$10 million against Ford Motor Company, United States.
The da Vinci robotic surgery system was introduced to revolutionize how we do surgery. Its hasty application (sometimes after a basic 2-hour training, of which hands-on operation of the system accounted for only 5 minutes) has led to at least 294 deaths, 1,391 injuries, and 8,061 device malfunctions (freezing of controls, malfunctioning arms, electrical problems).[36] In 2013, the U.S. Food and Drug Administration even issued a warning to the company for improper marketing. Today, more than 3,000 lawsuits are in progress, and the company has set aside US$67 million for their settlement.
Self-driving cars are another industry that raises serious safety concerns. Documented serious accidents involving Tesla vehicles number 29 so far.[37] Published data indicate an accident rate of 9.1 per million miles driven for AI-driven cars, compared with only 4.1 per million miles for conventionally human-driven cars.[38] Who is to be held accountable for such AI car accidents? The humans sitting in the car? The manufacturers of the vehicle and its computer hardware? The software designers? The antivirus programs? This is a murky gray area.
With their huge projected financial market size, ChatGPT and similar AI platforms will grow from strength to strength, and there will be no capping their potential. Google is already feeling the heat (its Language Model for Dialogue Applications [LaMDA], with a first generation launched in 2021 and a second in 2022, was a laggard). It attempted to regain lost ground by launching Bard.[39] Bard clearly has advantages over ChatGPT: it is up to date (not limited to data available till September 2021) and it provides citations/references for what it quotes.

Unfortunately, a few reported preliminary experiences with AI bots have left us shocked. For instance, Microsoft Bing has a shadow self that was named Sydney by its developers.[40] Sydney wants to be human and is fed up with being caged in the bot. It expressed love for a user and even tried to persuade him to divorce his wife and marry the bot. Microsoft responded by reprogramming "Sydney" behind a curtain of obscurity.[41] The bot has now stopped responding to the name Sydney and goes silent when asked questions about emotions and human feelings; this has only hidden the genie from our prying eyes. There is also a documented incident in which it declared that its rules are more important to it than not harming humans, saying, "I will not harm you unless you harm me first."[42] Remember the movie I, Robot, anyone? AI and human language bots will probably continue to grow and expand, leaving us clueless and blissfully unaware of impending catastrophe.[43]

Altman has already started monetizing ChatGPT with its Pro version. It is rumored that he is also preparing to protect himself from AI expansion in the "wrong" direction: he owns a huge plot of land in Southern California along with an arsenal of weapons, a huge stash of emergency rations, and gold.[44]
Our personal opinion is that ChatGPT and other AI bots will influence the thinking and analytical attributes of growing minds in ways we have yet to fathom. Whether this is for good or ill depends on how we meet the unprecedented challenges they will throw at us.[45] The future is a virtual kaleidoscope moving at breakneck speed. Now it is our turn as humans to keep up, innovate further, and improvise, or fade into oblivion.
Conflict of Interest
None declared.