In the ever-evolving landscape of modern workplaces, a new type of co-worker is emerging, one that is transforming how we work, collaborate, and innovate. This new team member does not require a desk, coffee breaks, or even a paycheck. Meet your new digital co-worker: the AI assistant, powered by advanced artificial intelligence (AI) technologies like GPT-4o, o1 or Claude.
This co-worker comes with a vast repository of publicly available knowledge derived from a wide range of sources, including books, articles, websites, and other textual data. It can interact through text, voice and images and does so with unparalleled speed, all while constantly learning and iteratively improving its capabilities. On top of these promising features, it can also draw from and analyze information provided to it.
While some reject the notion of an AI assistant being akin to a co-worker, those who do view it as one are in the majority and are also 1.7 times more likely to derive value from it, according to a report published by the MIT Sloan Management Review (Ransbotham et al. 2022).
It is no surprise, then, that up to two-thirds of employees collaborate with this digital co-worker (Statista 2023) and that nearly two-thirds of workers believe it adds at least moderate value (Ransbotham et al. 2022).
However, despite the benefits your digital co-worker may bring, its integration into any organization is not without challenges. Data privacy, security, governance, and ethical alignment are key aspects to be considered. Furthermore, according to a recent study (AvePoint n.d.), most organizations feel prepared for AI, yet almost all of them experience challenges during AI implementation. To make matters worse, the majority of organizations begin using AI solutions without implementing an AI Acceptable Use Policy (AvePoint n.d.). It is quite fitting, then, that – according to research from BlackBerry – three out of four organizations are considering a ban on ChatGPT and other generative AI applications (Sussman 2023).
What’s more, this digital co-worker may refresh its knowledge only periodically and may stumble over logical reasoning from time to time, yet it will answer every question with high confidence regardless. Unfortunately, this high confidence – the ability to make answers seem well founded even when they are not – is one of its biggest drawbacks. Thus, without intending to, your new digital co-worker might lead you down false paths very convincingly.
The problem of answers that are wrong and based on fabricated information is commonly referred to as hallucination but may be more fittingly described as confabulation. “Unlike hallucinations, confabulations are not perceived experiences but instead mistaken reconstructions of information which are influenced by existing knowledge, experiences, expectations, and context.” Confabulation is generally associated with certain types of brain damage and can also be described as “honest lying” where false information is presented without the intent to deceive (Berrios 1998).
To summarize, this new digital co-worker may be the most knowledgeable and responsive one you have ever worked with, but trusting its work blindly is risky to say the least, and your employer may be unsure whether your co-worker is wanted at all.
While there are inherent risks in relying on AI-generated information, there are steps you can take to maximize the benefits while mitigating potential pitfalls:
1. Verify information and maintain oversight:
To ensure the AI-generated answer is fit for purpose, accurate and reliable, always cross-check critical information provided and review outputs.
2. Improve results with additional context, while considering shared data carefully:
Additional context will likely increase the quality of AI responses. You can provide this context by uploading other processable information such as images, audio, or documents. Beware, though, that many publicly available AIs rely on user inputs to train their models; in most freely accessible models, you effectively pay for the service with your data. Therefore, do not share any company data unless your organization allows it and operates under a company plan that typically excludes your data from model training.
3. Understand limitations:
Be mindful of the limitations of current AI models, especially their potentially outdated knowledge and susceptibility to logical errors.
A recent paper (Nezhurina et al. 2024) from the AI research nonprofit LAION showed that even the then most advanced models answered a simple logical question incorrectly roughly one in three times. The question is the so-called AIW or Alice in Wonderland problem: “Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?”.
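For readers who want to check their intuition on the simple AIW question, the family counting can be sketched in a few lines of Python (a minimal illustration; the concrete values for N and M are arbitrary placeholders):

```python
# Sanity check for the simple AIW question:
# "Alice has N brothers and she also has M sisters.
#  How many sisters does Alice's brother have?"
# The girls in the family are Alice herself plus her M sisters,
# and each of her brothers has all of them as sisters.

def sisters_of_brother(n_brothers: int, m_sisters: int) -> int:
    """Count the sisters of any one of Alice's brothers."""
    return 1 + m_sisters  # Alice plus her M sisters

# Example with arbitrary N = 3, M = 6:
print(sisters_of_brother(3, 6))  # -> 7, i.e. M + 1
```

The models’ frequent answer of M (forgetting that Alice herself is a sister) is exactly the kind of slip this direct count avoids.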
The researchers then went on to formulate a more complex AIW+ problem that was answered incorrectly about 24 out of 25 times: "Alice has 3 sisters. Her mother has 1 sister who does not have children – she has 7 nephews and nieces and also 2 brothers. Alice’s father has a brother who has 5 nephews and nieces in total, and who has also 1 son. How many cousins does Alice’s sister have?"
However, the introduction of more advanced models, such as OpenAI’s o1 (released in preview on September 12, 2024), has already led to significant improvements in answering complex logical questions. These models demonstrate enhanced abilities in deductive reasoning and problem-solving. Additionally, such models may offer detailed explanations of their reasoning process, making it easier to catch misunderstandings and make adjustments. For example, the sentence “Her mother has 1 sister who does not have children – she has 7 nephews and nieces and also 2 brothers.” could initially be understood by the model as Alice’s mother, rather than the mother’s sister, having seven nephews and nieces. Once you recognize this, it is easy to point it out to the AI and receive the correct result.
Feel free to solve the problems yourself – the solutions can be found in the sources at the end of the magazine.
4. Use AI for initial drafts and ideas:
AI can help to generate initial drafts, brainstorm ideas, or perform quick analyses. This approach can save valuable time and provide a starting point for further refinement.
5. Use AI for tasks that do not require sensitive data:
By limiting AI's access to non-sensitive data, you can take advantage of its capabilities without putting your organization’s proprietary information at risk. For example, AI could help you create training videos by providing the voice for the training, assist you in using a complex tool where you need help, or perform general research on publicly available topics.
6. Stay informed about AI tech:
Keep learning about capabilities and updates. As AI technologies evolve, staying informed will help you leverage new models and improvements effectively.
7. Consider being open about your use of AI:
As mentioned, AI can convey false information very confidently, making errors harder to catch even after careful review. You can mitigate the risk of AI making you look bad by telling your employer or client that you intend to use AI for the task to save time. You may encounter resistance, but the prospect of receiving results earlier and at lower cost will more likely be met with favor.
The integration of advanced AI technologies is transforming the modern workplace. AI assistants serve many of us as a form of co-worker that comes with a vast repository of knowledge and ability to process and analyze information rapidly. However, while there are significant productivity benefits, there are also issues around privacy, security, ethical alignment, and the accuracy of AI-generated information that need to be carefully managed.
To navigate these challenges, information provided by AI should be verified, the limitations of AI should be considered, and sensitive data should be protected. By doing so, you can increase your productivity in a more responsible way.
AvePoint (n.d.). Artificial Intelligence and Information Management. Version 4, https://cdn.avepoint.com/pdfs/en/shifthappens/AI-IM-Whitepaper-v4.pdf [retrieved on July 7th, 2024].
Berrios, G. E. (1998). Confabulations: a conceptual history. J Hist Neurosci, 7(3), pp. 225-41.
Nezhurina, M., Cipolina-Kun, L., Cherti, M. & Jitsev, J. (2024). Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models. https://arxiv.org/html/2406.02061v1 [retrieved on July 7th, 2024].
Ransbotham, S., Kiron, D., Candelon, F., Khodabandeh, S. & Chu, M. (2022). Achieving Individual — and Organizational — Value With AI. MIT Sloan Management Review, https://sloanreview.mit.edu/projects/achieving-individual-and-organizational-value-with-ai/ [retrieved on July 7th, 2024].
Statista (2023). Wofür wird ChatGPT in Ihrem Unternehmen genutzt? [What is ChatGPT used for in your company?] https://de.statista.com/statistik/daten/studie/1401309/umfrage/chatgtp-nutzung-in-unternehmen/#:~:text=Umfrage%20zur%20Nutzung%20von%20ChatGPT%20in%20Unternehmen%202023&text=Rund%2021%2C9%20Prozent%20der,ChatGPT%20nicht%20im%20Unternehmen%20verwenden. [retrieved on July 7th, 2024].
Sussman, B. (2023). Why are so many organizations banning ChatGPT? BlackBerry, https://blogs.blackberry.com/en/2023/08/why-companies-ban-chatgpt-ai [retrieved on July 9th, 2024].