
Week 11

Final work

Creating and Immersing Communication

In the last few years, Artificial Intelligence (AI) has evolved beyond all expectations. New changes and even innovations appear almost constantly: from the early days when AI could only hold basic conversations, it has moved on to writing literature, generating images and even simulating personalities for people to talk to. These rapid developments raise the question of whether AI could be used in other areas and what new changes it might bring. For example, given the unique immersion of virtual reality (VR) and the capabilities of AI, could combining the two push further the idea of VR as a mental health treatment? Could placing AI inside a virtual environment offer solutions to other psychological problems, or help patients more effectively? This idea became the starting point for researching and discussing how AI might be used in virtual reality environments to help alleviate psychological problems.

Building on an earlier understanding of how exposure therapy for phobias can be used and improved in virtual reality, the research turned, in a more focused way, to other psychological problems that might relate to virtual reality and AI. Two conditions were examined. The first is autism. According to Baron-Cohen (2009), the symptoms of autism, also known as autistic disorder, can roughly be characterized by an overactive visual memory, hypersensitivity to the environment, self-control that is often affected by fear, or language abilities that are, for whatever reason, extremely weak. People with autism usually have difficulties with communication and social interaction and lack both interest in socializing and social skills. The condition can hardly be treated with medication; instead, patients develop the relevant skills through communication and other exercises, yet because of that same lack of interest they often refuse to communicate with others. The second condition is depression. According to the brochure Let's Talk Facts About Depression (PPCN, 2015), it occurs mainly when the patient has been under a great deal of mental stress or has met obstacles in life. During a depressive episode the patient may experience sadness, irritability, emptiness and other negative emotions, which can lead to a loss of pleasure or interest in activities, or to a gradual refusal to communicate with other people. Patients can try to alleviate the condition by talking with a doctor, which is known as talk therapy. However, according to Cronin, Forsstrom and Papageorge (2020), many patients avoid this type of therapy, mainly because it takes longer to work and is more expensive, and, more importantly, because many people hold prejudices about its effectiveness or are averse to discussing personal or painful issues with strangers.

Based on these studies, an idea gradually emerged: could the role of the doctor in talk therapy be taken over by artificial intelligence, or could AI simply be someone for these people to talk to and communicate with? To investigate whether this is feasible, the research moved on to how, and how well, AI communicates with its users. Studies by Eysenbach et al. (2023) and by Sundar and Lee (2022) support the feasibility of AI as a means of communicating with people and of building language models that simulate personalities, although they also suggest that AI may make unpredictable errors in conversation. Meanwhile, Berşe et al. (2023) say more about the potential benefits and limitations of AI chatbots: while a chatbot can quickly pick up expertise and make quick judgements based on a conversation, it can also provide inaccurate or biased information and, most worryingly, it collects personal information.

Based on this evidence, the question becomes whether AI could be used in virtual reality to help patients with autism or depression whose conditions can be relieved through communication, and what the advantages and disadvantages would be. From what is understood of autism and depression, communication and interaction can indeed help patients by easing their moods and states. The problem is that, out of fear or other emotions, they may be unable to communicate well with certain people, or may only want to communicate with particular people, essentially because they do not trust the person they are talking to. This is where the benefits of AI as a communication partner come in: the controllability of the AI, and the level of training of its language model, can go a long way towards earning the user's trust. In terms of controllability, users can change or adjust how the AI communicates with them so that it feels more comfortable, for example by changing the AI's preset personality, by giving positive or negative feedback on a particular response so that the language model adapts, or by any other means that make the model better match the user's own character and way of speaking.

The AI can also judge the user's mental state by detecting emotions or particular words in what the user says. For example, the speech of a depressed person may contain more negative or even extreme emotions, and deep depression may involve specific words relating to self-harm, giving up or sadness, all of which the AI can pick up through a language model that belongs exclusively to that user. It is not only the patient shaping the AI to suit them: the AI is also collecting data from the user's conversations to know them better, so that it can make sound judgements and assessments and help relieve those emotions. The collected data could even be shown directly to the doctor; long-term records would show how the patient's mood changes at different times in response to different things, revealing the extent of their current mental problems more directly and helping the psychologist understand their situation. This removes the traditional step of doctors keeping track of patients by questioning them, since they can read the AI's data directly, and lets them concentrate on how to help the user and communicate with them more comfortably. These are the benefits a proprietary language model could bring.
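To make this more concrete, the sketch below shows in rough Python the kind of keyword-based mood flagging and long-term logging described above. It is only an illustration under assumptions of my own: the word lists, the scoring and the MoodLog class are invented for the example and are not part of any cited system, and a real application would use a properly trained emotion model rather than simple keyword matching.

```python
# Minimal sketch (not a clinical tool) of keyword-based mood flagging and
# long-term logging. All word lists and names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative keyword lists; a real system would use a trained emotion model.
NEGATIVE_WORDS = {"sad", "tired", "empty", "hopeless", "worthless"}
HIGH_RISK_WORDS = {"self-harm", "give up", "giving up"}

@dataclass
class MoodEntry:
    timestamp: datetime
    text: str
    negative_hits: int
    high_risk: bool

@dataclass
class MoodLog:
    entries: list = field(default_factory=list)

    def record(self, text: str) -> MoodEntry:
        """Score one utterance and keep it in the long-term record."""
        lowered = text.lower()
        entry = MoodEntry(
            timestamp=datetime.now(),
            text=text,
            negative_hits=sum(word in lowered for word in NEGATIVE_WORDS),
            high_risk=any(phrase in lowered for phrase in HIGH_RISK_WORDS),
        )
        self.entries.append(entry)
        return entry

    def summary_for_clinician(self) -> dict:
        """Aggregate view a doctor could read instead of re-questioning the patient."""
        return {
            "sessions": len(self.entries),
            "total_negative_hits": sum(e.negative_hits for e in self.entries),
            "high_risk_flags": sum(e.high_risk for e in self.entries),
        }

if __name__ == "__main__":
    log = MoodLog()
    log.record("I feel empty and tired today, like I should just give up.")
    print(log.summary_for_clinician())
```

A summary like summary_for_clinician() is the sort of record that could be handed to the psychologist in place of session-by-session questioning.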

On the other hand, another reason many people with depression or autism avoid direct communication is that they are afraid others will become annoyed or fed up with them or with the emotions they are venting, and they worry about what other people will think of them or fail to understand. These problems are unlikely to arise with an AI as the communication partner: the user does not need to worry about how their words will affect the AI's emotions, because it has none, and this lets them soothe their own feelings by expressing anything they like to the AI. They can seek it out at any time or place; no matter how long they talk or how negative the conversation is, the AI will not be affected, and it can become a kind of virtual companion of their own. All of this can help them trust the AI and build the communication needed to ease and stabilize their mental state. Doctors could even work with the developers to make small changes to the AI's communication style in order to guide patients: for example, when a patient shows self-deprecation or dissatisfaction, the AI could promptly encourage them or find other ways to ease their emotions, relieving the negative energy while motivating them with positive energy and encouraging them to try accepting normal communication and emotional expression with other people. Going further, the AI could even serve as social practice for the patient. People with autism could start by holding dialogues with their familiar AI in various situations or scenarios, practising how to face such situations on their own and becoming familiar with how to socialize. Later, the AI's language model could be switched so that they try communicating with strangers, or handling other more complex social situations. This kind of staged practice might ease their symptoms, make communicating with strangers more familiar, and allow them to grow.
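As a rough sketch of this staged-practice idea, and only under my own assumptions, the snippet below swaps the persona prompt that drives the AI between stages; the persona texts, the stage names and the optional doctor_note parameter are invented for the example and are not features of any existing system.

```python
# Hedged sketch of staged social practice: the chat loop stays the same, but the
# persona prompt handed to the language model changes as the user progresses.
PERSONAS = {
    "familiar_companion": (
        "You are a patient, familiar companion. Use the user's preferred "
        "phrasing, never express annoyance, and gently encourage them."
    ),
    "friendly_stranger": (
        "You are a polite stranger making small talk in a shop. Keep the "
        "conversation simple and low-pressure."
    ),
    "group_scenario": (
        "You play several acquaintances at a small gathering, to simulate a "
        "slightly busier social situation."
    ),
}

def build_system_prompt(stage: str, doctor_note: str = "") -> str:
    """Combine the current practice stage with optional clinician guidance,
    e.g. 'encourage the user when they show self-deprecation'."""
    prompt = PERSONAS[stage]
    if doctor_note:
        prompt += " Additional guidance: " + doctor_note
    return prompt

# Example: moving on from the familiar companion to stranger practice.
print(build_system_prompt("friendly_stranger",
                          doctor_note="Encourage the user if they put themselves down."))
```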

This approach could be further enhanced by virtual reality, giving the AI a physical presence in VR so that it communicates with the user inside a customizable virtual environment. For example, multiple avatars could be created for the user to choose from and then meet in the virtual environment, giving them a more concrete impression and concept of the AI. The system could even be programmed so that the user interacts with the AI avatar, letting the AI react to the user's actions as well as their words and so provide richer feedback. These benefits are hard to replicate with any other electronic medium, and the unique immersion of virtual reality may interest users enough that they voluntarily use it for longer and communicate with the AI more proactively. These are the advantages of using AI in virtual reality as a communication partner for autism and depression; the combination of AI language modeling and virtual reality is difficult to replace with other media.
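To illustrate what "reacting to both actions and words" might look like in practice, the sketch below maps user events to a combined verbal and physical avatar response. The event format, the generate_reply stub and the animation names are invented for illustration; an actual build would hook into a VR engine and a real language model.

```python
# Illustrative sketch of a VR interaction loop: the AI avatar reacts to both
# spoken input and physical gestures. Everything here is a placeholder.
def generate_reply(utterance: str) -> str:
    # Stand-in for a call to the conversational model.
    return f"I hear you saying: '{utterance}'. Tell me more."

def avatar_react(event: dict) -> dict:
    """Map a user event to what the avatar says and how it moves."""
    if event["type"] == "speech":
        return {"say": generate_reply(event["text"]), "animate": "nod"}
    if event["type"] == "gesture" and event["name"] == "wave":
        return {"say": "Hello! It's good to see you.", "animate": "wave_back"}
    if event["type"] == "gesture" and event["name"] == "turn_away":
        # Give the user space instead of pressing them to keep talking.
        return {"say": "", "animate": "step_back"}
    return {"say": "", "animate": "idle"}

if __name__ == "__main__":
    print(avatar_react({"type": "gesture", "name": "wave"}))
    print(avatar_react({"type": "speech", "text": "I had a rough day."}))
```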

But this idea also has pitfalls, and many hidden issues could have an impact, especially on the AI side. In terms of security, making the AI controllable and training its language model requires frequently collecting information about the user, from the way they speak to their full personality and preferences. This is potentially dangerous: the collected information could have serious consequences if it were stolen or used elsewhere, and such an incident could lead other users to question their trust in the AI and reduce their interest in communicating with it. Secondly, getting the AI to correctly judge and identify the user's emotions also needs thought, because different people express and understand emotions differently, which may prevent the AI from reading the user's emotions correctly in the early stages of use. A fully tested language model may also be difficult for anyone else to use once it has been tailored to one person, ending up as a completely customized artefact whose data cannot be shared. That makes it hard for developers to understand the AI's deficiencies and shortcomings, and therefore hard to improve it further. Finally, while it is good for users to become interested in this kind of communication, there is also a chance that they become dependent on it and end up addicted to the virtual world. These negative effects are real and could have unpredictable consequences for the user.

Overall, when it comes to helping with mental issues such as depression and autism, combining virtual reality and artificial intelligence has the potential to give patients a viable means of communicating and venting their emotions. Most of these benefits come from the immersion of virtual reality, a communication environment that can be tailored to the user through a personal language model and kept under the user's control. While there are potential downsides, such as the personal information that must be collected or the risk of over-reliance on the method, these issues could probably be improved or addressed in other ways. Perhaps this approach can be further improved in the future, or inspire others to discover new ways of using virtual reality and artificial intelligence together. (2115 words)

Let's talk facts about depression (2015) PPCN. Available at: https://www.ppcn.org/Education_Handouts/Let-s-Talk-About-Depression-Brochure.pdf (Accessed: 07 December 2023).

Baron‐Cohen, S. (2009) ‘Autism: The empathizing–systemizing (E‐S) theory’, Annals of the New York Academy of Sciences, 1156(1), pp. 68–80. doi:10.1111/j.1749-6632.2009.04467.x. 

Berşe, S. et al. (2023) 'The role and potential contributions of the Artificial Intelligence Language Model ChatGPT', Annals of Biomedical Engineering [Preprint]. doi:10.1007/s10439-023-03296-w.

Cronin, C.J., Forsstrom, M.P. and Papageorge, N.W. (2020) What good are treatment effects without treatment? Mental health and the reluctance to use talk therapy, NBER. Available at: https://www.nber.org/papers/w27711 (Accessed: 07 December 2023).

Eysenbach, G., Publications, J. and Eysenbach, C.A. (2023) The role of ChatGPT, generative language models, and Artificial Intelligence in medical education: A conversation with ChatGPT and a call for papers, JMIR Medical Education. Available at: https://mededu.jmir.org/2023/1/e46885/ (Accessed: 07 December 2023).

Sundar, S.S. and Lee, E.-J. (2022) ‘Rethinking communication in the era of Artificial Intelligence’, Human Communication Research, 48(3), pp. 379–385. doi:10.1093/hcr/hqac014. 
