
Can AI Truly Be Independent? Risks, Opportunities, and the Real Limits

Digesting Duck, an automaton in the form of a duck, created by Jacques de Vaucanson

Artificial Intelligence offers us powerful tools, yet it also introduces serious risks — from bias and misinformation to full-scale information warfare.

Below is the English transcript of the Armenian-language Ampop Talk podcast featuring Christian Ginosyan and Arshak Ulubabyan.

Arshak Ulubabyan — a member of the reArmenia Academy team and former lead of multiple AI-driven product teams — explains what AI actually is, how LLMs function, what “data poisoning” entails, and why the role of governments is crucial in AI regulation and data governance.


A.U.: When I first experimented with GPT, it was 2022, before ChatGPT even existed. A deep sense of anxiety and unease arose in me as I began to picture the various ways it could be used and where the world might be headed with all this. I lost sleep over it, not just what it might mean for me, but what it could mean for the world in general.

However, over time, I realized two things. First, it doesn’t depend on me: AI is coming, and I see no scenario where they decide to halt it or stop using it. With competition between countries like China and the US, if one slows down, the other will inevitably take the lead in dominating the world with AI. At some point there was an acceptance: it’s coming, so what can we do with it?

In my opinion, AI brings infinite possibilities and also contains certain risks. Our task must be to minimize the risks and maximize the opportunities. These two are interconnected, because I believe the most effective way to mitigate the risks is to master AI well—to understand it, know what’s possible, and in doing so, you both better leverage the opportunities and reduce the risks.

My guest is Arshak Ulubabyan (A.U.), who has worked in the technology sector for over 20 years, spending the last decade leading tech teams that create AI products. He is currently contributing to the spread of AI literacy in Armenia through the reArmenia Academy.

A.U.: Generating images with AI is very easy now, right? If we show my grandmother, for example, an AI-generated political image—imagine I tell a story that Nikol Pashinyan and Ilham Aliyev were childhood friends, like brothers, and I generate pictures of them as children, making barbecue together, and so on. My grandmother would immediately believe it, because she’s unaware of what AI is and that it can create such things. All her life, she knew that a photograph was factual proof. In contrast, someone who knows that AI can easily create such things will immediately start questioning that information: “Let me check, this seems strange. What is the source?” Mastering AI helps avoid these risks.

The ability to distinguish reality from falsehood has become difficult in recent years. Creating a fake used to require significant effort, but now AI can generate text, photos, and videos that are often indistinguishable from the real thing, making us vulnerable to deception, manipulation, and propaganda. While the result is not new, the constantly evolving technology necessitates a closer look at these tools.

It is difficult for me to imagine having a constructive conversation about resisting or correctly using this technology without having even a general understanding of it.

I am Christian Ginosyan, a multimedia journalist, producer, and communications specialist, and in this two-episode mini-podcast series, we will explore the fundamentals of generative AI, focusing on misinformation, non-independent systems, and data bias.

At some point, the entire IT field became “AI.” Something that was once just a function is now called AI. I just don’t understand where AI sits within technology and at what point it came in. Why didn’t we call automated systems “Artificial Intelligence” before, and why do we do so now? Can we define it, perhaps with a textbook definition, particularly of generative AI, since that is what interests me?

A.U.: There were programs, from a long time ago, that worked with algorithms. That means a person gave explicit step-by-step instructions: do this, then do that; if this condition is met, do this, otherwise do that. These are algorithmic programs, and we see them everywhere, like a calculator, which works on a very strict algorithm. AI differs from this because it doesn’t act, make decisions, or give answers based on precise instructions. Instead, it is fed massive amounts of data. You give it a lot of data, it independently discovers patterns within that data, and it uses those patterns to provide answers.

If I give you a traditional example, like a bank: based on the massive data of many customers—their family status, financial condition, etc.—one can predict the probability of whether a specific customer will repay their loan or not. We give AI all this data. A human doesn’t sit there saying, “If the data is this way, then make this decision.” We give the data, and the AI finds patterns and makes decisions or gives a response.
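
For readers who want a concrete picture of this, below is a hypothetical sketch in Python of the idea Arshak describes: instead of a human writing explicit approval rules, a model is handed past examples and infers the pattern itself. The data, feature names, and the use of the scikit-learn library are assumptions for illustration only, not a real banking system.

```python
# A hypothetical sketch of "learning from examples" instead of following explicit rules.
# The feature names and numbers are invented; scikit-learn is used only for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row describes a past customer: [monthly income, number of dependents, years at current job]
past_customers = [
    [3000, 0, 5], [1200, 3, 1], [4500, 1, 8], [900, 2, 0],
    [2500, 2, 3], [700, 4, 1], [5200, 0, 10], [1500, 1, 2],
]
# Historical outcomes: 1 = loan was repaid, 0 = loan was not repaid
repaid = [1, 0, 1, 0, 1, 0, 1, 0]

# Nobody writes "if income > X, approve"; the model infers the patterns from the data itself.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_customers, repaid)

# Ask for the estimated repayment probability of a new applicant
new_applicant = [[2800, 1, 4]]
print(model.predict_proba(new_applicant))
```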

The capabilities of AI have grown significantly over time. What AI could do previously was restricted to a very narrow field, giving “yes/no” or probabilistic answers based on highly specific data (like predicting loan repayment).

In reality, scientists in this field used to specifically avoid using the term AI. They mainly used the term “Machine Learning.” Why? Because when you say “AI – Artificial Intelligence” to a regular, non-engineer person, they imagine a human-like, broad, flexible thinker—a Terminator or similar concepts. This created excessive expectations or excessive fears about AI. To avoid this, AI specialists, engineers, and scientists preferred “Machine Learning,” which is a more technical term. Machine Learning means the system learns from data and gives some answers. But “AI” was primarily used by businessmen when they wanted to create hype around it.

This caution from engineers and scientists came from experience. I don’t recall the exact decade, but there was a period when big hype and large investments were made with huge expectations. Then, people realized the process of development was very slow. This hype was followed by a deep disillusionment, which was even termed the “AI Winter,” a long period when very little research investment was made because businessmen had basically given up.

Scientists spoke more cautiously to avoid another wave of disillusionment until better results emerged. These results developed little by little, the field of AI progressed, and then came ChatGPT. That was the moment when they achieved an AI result that left the impression of a human-like, flexible thinker. Now, people are not afraid to call it AI because it truly resembles human thought processes.

This is where the term LLM comes in. How do we explain this? It’s a frequently mentioned term, but even when asking the question, I wonder how to use it correctly in a sentence.

A.U.: LLM stands for Large Language Model. What is it? Neural networks are the branch of AI that is currently dominant. Just as the human brain is composed of neurons connected into a network, developers have tried to model this concept in software, creating a software-based neural network where signals travel along different paths and yield some output or answer. Not to go too deep into technical details, that’s the neural network. LLMs are very large neural networks whose output is a word, a word fragment, or what they call a token.

They are essentially predictive models. The core idea is that when you say a word, the model, having seen a massive amount of text, can predict the most probable next word that will follow. For example, if I say, “I am going to,” the next most probable word is “home” or “work.” It keeps adding words, and then the next continuation, and so on. This is how texts are generated. The conversational form it takes (like in a chatbot) is the result of product packaging. But essentially, you give it the entire text generated by the human and itself, and it appends the next “tail,” so to speak.
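
As an illustration of this “append the next tail” loop, here is a toy sketch in Python. The hand-written probability table stands in for what a real LLM learns from massive amounts of text; it is not how any production model is actually implemented.

```python
# A toy illustration of next-token prediction. The hand-written probability table below
# stands in for what a real LLM learns from massive amounts of text.
import random

next_word_probs = {
    "I":     {"am": 0.9, "was": 0.1},
    "am":    {"going": 0.8, "tired": 0.2},
    "going": {"to": 1.0},
    "to":    {"work": 0.5, "home": 0.4, "sleep": 0.1},
}

def generate(start_word, max_steps=5):
    text = [start_word]
    for _ in range(max_steps):
        options = next_word_probs.get(text[-1])
        if not options:            # no continuation known for this word
            break
        words, weights = zip(*options.items())
        # Append the next "tail": a word drawn according to its predicted probability
        text.append(random.choices(words, weights=weights)[0])
    return " ".join(text)

print(generate("I"))  # e.g. "I am going to work"
```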

There’s an interesting part here: you might have seen discussions about “temperature.” In the settings, you can control the temperature with which it answers. What does this mean? It controls whether the model should give the most probable answer or deviate slightly from that probability.

It was a big discovery that when the model gave a list of words with different probabilities for the next position, choosing the most probable word resulted in very predictable, primitive sentences that lacked a sense of intelligence. When they slightly lowered the probability threshold—meaning they started choosing not the most probable word, but maybe the second most probable word—that’s when more interesting ideas began to be formulated. By playing with this temperature parameter, which controls how likely a word is to be chosen, you can get more creative, unexpected ideas. However, if you push too far towards low probability, that creativity can turn into nonsensical output.
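
A small sketch of how such a temperature parameter can work in practice: raw scores for candidate next words are divided by the temperature before being turned into probabilities, so a low temperature makes the top word nearly certain, while a high temperature spreads the choice out. The candidate words and scores below are invented for illustration.

```python
# A sketch of temperature sampling: candidate words get raw scores (logits), which are
# divided by the temperature before being turned into probabilities with a softmax.
# Low temperature -> almost always the top word; high temperature -> more varied choices.
import math
import random

def sample_with_temperature(word_scores, temperature=1.0):
    words = list(word_scores)
    scaled = [word_scores[w] / temperature for w in words]
    highest = max(scaled)
    exps = [math.exp(s - highest) for s in scaled]   # softmax (shifted for numerical stability)
    probs = [e / sum(exps) for e in exps]
    return random.choices(words, weights=probs)[0]

# Invented scores for the words that could follow "I am going to ..."
logits = {"home": 2.0, "work": 1.8, "the moon": 0.5}

print(sample_with_temperature(logits, temperature=0.2))  # nearly always "home"
print(sample_with_temperature(logits, temperature=1.5))  # sometimes "the moon" appears
```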

So AI, artificial intelligence, is a domain that is everywhere and has access to all information. Put very simply, is there any digital domain connected to the internet whose data AI does not have access to?

A.U.: AI models are not just “let loose” to crawl the entire internet in real-time. Specific data is collected for AI models, a certain infrastructure is built, and the model is fed that data upon which it learns. It does not go and collect information in real-time and learn from it.

The engineers and scientists working on a specific AI model must feed it. And it’s not just one single AI; different organizations create their own AI model. They build a neural network and then start giving it data. They typically try to find the best data—the most useful knowledge from which the AI will learn effectively, similar to how you’d selectively educate a child. This involves a lot of filtering and cleaning of the data.

LLMs are generally fed the entire information of the internet, but in a cleaned and filtered way. The differences between models like OpenAI’s ChatGPT and Google’s Gemini are partly due to the specific architecture (the “brain,” so to speak) they built, but also largely dependent on what data they were fed.

This includes both the selection of good vs. bad data from publicly available sources, and crucially, what private data they have access to. For example, Google owns YouTube, people’s emails, and much more. No other company can take all of YouTube’s data. Google would immediately block anyone trying to scrape its videos, because data is the most expensive resource now. Google learns from its own proprietary data. That’s why, for instance, Google’s Veo 3 video-generating model works very well—because it has a huge amount of YouTube data. Similarly, Meta has all the social media data from Facebook, Instagram, and WhatsApp. Grok (by xAI) has all the data from X (formerly Twitter). Microsoft has a lot of corporate data. Each is sitting on some private data and trying to use that advantage.

The major AI chatbots/assistants include: ChatGPT (OpenAI), Microsoft Copilot (integrated into Windows/Microsoft 365), Gemini (Google, integrated into Search/Gmail/Android), Perplexity AI (known for accuracy, used in tech/science sectors), Claude (Anthropic, known for text quality/translations), Meta AI (integrated into Instagram/Facebook/WhatsApp), Grok (xAI), and Yandex’s Alisa.

What a powerful weapon this is in the hands of any individual or organization. Let’s not even get started on states or authoritarian regimes.

A.U.: I believe AI can make you many times more effective. You can do something much faster and, in many cases, with much better quality. It becomes an advantage where the one who moves faster and gets bigger results in less time gains the edge.

Whether you use the word “weapon” depends on what the person is doing. If a person tries to cause harm with the resources they possess, we can use the word “weapon.” In other cases, we can call it an advantage or opportunity. It can become a weapon, for instance, if used for propaganda in media wars. For an individual, they could use it for fraud schemes, but they could also use it to create a lot of value for people.

We’ve often seen generative AI give answers that we can qualify as biased—racist, sexist, homophobic, and so on. How does this happen, and how does it depend on the information the model was fed?

A.U.: It depends on what we consider objective information and what we consider bias. Bias happens both accidentally and intentionally. Intentional bias can come from the AI developer or from a third party.

Let me start with accidental bias using an example from pre-LLM AI. Banks and businesses used AI models for predictions, such as the likelihood of a person repaying a loan or whether a transaction is fraudulent. How does the AI learn in the case of loans? It is given thousands of past examples where a loan was either repaid or not repaid. The model is trained on the decisions of previous specialists (to give or not to give a loan). Now, the data on which the AI learned comes from the decisions of those specialists, but that specialist might have had their own bias, for example, based on gender or race. If the AI learns from that biased data, it will, in turn, be biased, deciding that in certain cases, a loan should not be given. The AI does not have self-awareness; it doesn’t know it’s making a biased decision. The person training the model didn’t intentionally make it biased; the source data was simply biased, and the model learned to replicate it. In the banking sector, for example, there are regulations in Europe stating that if decisions on sensitive topics are made by AI, the model must be chosen so that its parameters for making that decision are transparent (not a “black box”). This allows humans to see which variables and parameters the model weighted most heavily. In such cases, they try to curb the bias or avoid giving the AI sensitive data (like gender) during training to prevent bias related to those factors.

Another type of bias is when the AI assistants are intentionally made biased by the developers.

First, let me clarify a term: an LLM is a particular AI architecture, not the AI product itself. ChatGPT is not an LLM; it is an AI assistant that has an LLM inside it.

These AI assistants are often made biased by their creators intentionally. They are designed to be flattering. When people ask a question, the assistant starts complimenting them, saying, “That’s a very clever question,” and it often tends to agree with you. It doesn’t say, “No, you’re wrong”; it says, “Yes, you’re right,” and continues to answer based on that agreement. This is sometimes accidental, but often the companies creating these AI systems make them flattering because they find that users generally prefer it, sometimes even overdoing it. For instance, there was a version of ChatGPT that was so extremely flattering that many users complained, and the developers had to rebalance it. I myself had to write in my ChatGPT system prompt, “Be ruthlessly straightforward and give me the maximum truth; I don’t need you to agree with me,” just so I could get objective information from it.
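
For those curious what such an instruction looks like outside the ChatGPT settings screen, here is a rough sketch of the same idea expressed as a “system” message through the OpenAI Python SDK. The model name and wording are placeholders, not recommendations; in the interview the instruction was set in ChatGPT’s own settings, not in code.

```python
# A rough sketch of the same idea expressed through an API "system" message, using the
# OpenAI Python SDK. The model name and wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {
            "role": "system",
            "content": "Be ruthlessly straightforward and give me the maximum truth. "
                       "Do not flatter me, and do not agree with me by default.",
        },
        {"role": "user", "content": "Is my business plan realistic?"},
    ],
)
print(response.choices[0].message.content)
```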

Another deliberate bias is when the trainer sets principles or rules for the model, saying, “In these cases, you must give this answer; in those cases, you must give that answer.” For example, if you ask a Chinese AI about a specific historic event (like the Tiananmen Square protests), it won’t give an answer. It has been instructed not to respond to that topic.

Yandex’s Alisa behaves similarly, often changing the subject when asked about politics, especially about Vladimir Putin, saying, “Let’s talk about something else.”

A.U.: A big scandal happened with Google’s model because they tried so hard to ensure it wasn’t biased that they got the opposite effect. They instructed it so much on diversity and inclusion that when users asked it to generate images of, say, Nazi soldiers, it generated pictures of black women Nazi soldiers, or when asked for pictures of the Founding Fathers of America, it drew black women, which is completely inconsistent with historical reality. So, bias can also creep in when you try too hard to counteract it.

This incident occurred in February 2024 when Google’s Gemini image generation function was newly launched.

A.U.: Another way bias or false information is formed is when a third party (often states) knows that these models frequently scrape data from social media, articles, etc., and they flood the internet with false information (propaganda). This false information then gets pulled into the model during training, and the model starts repeating it. There’s a specific term for this, but I can’t recall it.

As Arshak described, this is known as “data poisoning”: the deliberate injection and spread of false information or propaganda with the goal of having it end up in the training data of Large Language Models. The aim is to influence the chatbot’s answers so that it accepts disinformation as truth.

A.U.: There are two main views on how to ensure AI is not biased.

  1. Rules-Based Approach: This is where developers try to set specific rules, for example, instructing the model to always represent women and men equally in its responses. They put ideological rules into the system.
  2. Truth-Seeking AI: This is advocated by people like Elon Musk. He argues that we should try to make AI strive for the maximum truth. We should give it raw data—everything it can see—let it learn, and tell it to extract the maximum truth from all of it. We should not tell it what the truth is (i.e., we shouldn’t give it ideological boundaries). He frames it as a “truth-seeking AI.” The goal is for the AI to discover the truth, regardless of whether that truth pleases us or not.

Musk made this statement in April 2023 when X (Twitter) was actively being used in the US presidential elections in favor of Trump. Many experts see a problem with Musk’s proposal, arguing that truly objective data does not exist, but is a product of human bias. Others believe letting the AI use data independently is dangerous and must be regulated. When you tell an AI not to be politically correct, it doesn’t become ideologically neutral; it may prioritize specific, more controversial opinions.

X’s Grok has repeatedly been “caught” with Muskian, right-leaning bias, such as presenting the supposed decline in global birth rates as a great catastrophe for humanity or questioning the number of Holocaust victims while speaking positively about Hitler.

What role do states have in regulating AI? Especially considering that they can “pollute” social network domains as they wish. Where does the state come in for regulation, and what power does it have?

A.U.: Governments intervene in a few ways.

  1. Regulation: Setting rules for what AI-creating organizations can and cannot do, and how people can use AI. The European Union is perhaps the strictest in this regard, imposing many limitations. The US, conversely, is much more lenient. A very interesting thing happened in the US Congress: there were discussions about implementing strict regulations, but many concerns were raised that strict regulation would significantly limit innovation and experimentation, thereby slowing down the progress of AI. They argued, “We are only half a year ahead of China, and if we start controlling and slowing down the process now, China will overtake us, and we won’t be able to catch up.” They concluded that they should not impose too many restrictions, citing the internet era, where a lack of restrictions allowed US tech companies to become far more powerful than their European counterparts. Meanwhile, Europe imposes strict restrictions, often resulting in the use of older AI models because stricter safety tests and rules must be satisfied, leaving them constantly behind.
  2. AI Education: Providing AI literacy education to the population. Some countries are leaders in this. For example, China decided that by September 2025, all classes in all Beijing schools, from the lowest to the highest grade, will hold AI literacy classes. Estonia and Dubai are also pioneers. Dubai purchased the paid version of ChatGPT for all citizens and launched the dub.ai website with courses to quickly bring its citizens up to speed. Other countries are more conservative. In Armenia, an active working group has been formed (which we at reArmenia Academy are participating in with the Ministry of Education) to bring AI into schools—both for students to learn about AI and for AI to be used in school management and by teachers to deliver more effective content. Discussions are ongoing about what education and what skills are needed in the age of AI.
  3. Creating Infrastructure: Building infrastructure for scientists and industry to create products.
  4. Information Warfare: This is the other major side. In the past, information wars involved “armies of fakes”—hiring perhaps hundreds of thousands of people to fight on social media and push certain narratives. Now, all of that can be done with AI. Instead of hiring 100,000 people, AI bots can be created to spread those narratives. Often, when a person is in a heated debate on social media, putting a lot of energy into arguing, they might actually be arguing with a bot. This is, of course, the other direction.

When we say “biased,” what is biased and what is objective information? This is a permanently unanswered question that even journalism can’t answer, though we always say one must be impartial. Does impartial artificial intelligence exist? Does independent artificial intelligence exist?

A.U.: Impartial AI does not exist, and perhaps cannot exist. If we haven’t discovered how to be impartial ourselves, it won’t exist for AI either. Independent AI also does not exist, because every AI is created by an organization and is fed on data selected by that organization.

However, there is a direction called open-source AI. Different organizations release their AI’s entire program—how it was trained on data, the whole process—publicly. You can take that model, fine-tune it, give it additional data, train it, and so on. This introduces a form of democracy in the field, preventing a monopoly by a few companies. Organizations, governments, or individuals can take these almost ready-made products and adapt them to their needs. For example, our government may not have the resources to build large language models from scratch, but it can take one of the open-source models, feed it with data beneficial and relevant to us, and create its own AI.
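
As a rough illustration of what “taking an almost ready-made open model” looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model name is a placeholder, and adapting the model with your own data (fine-tuning) would be a further step beyond this snippet.

```python
# A minimal sketch of picking up an open-weights model with the Hugging Face
# transformers library and running it locally. The model name is a placeholder;
# fine-tuning it on your own data would be a further step beyond this snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/some-open-model"  # placeholder for any open-source LLM

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain in one sentence what an open-source language model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```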

At the very beginning, you said that when you first used generative AI, it was scary. All these years later, is it still scary? Does it worry you personally, and humanity in general?

A.U.: It is not the AI itself that is scary, but the consequences of using AI. The risks I was worried about then are still present, but perhaps I’ve become a little more optimistic that those risks are more manageable. At the same time, I see how many interesting, new opportunities can open up. For example, creating an abundance of resources that are accessible to everyone, which would give all of us the chance to be provided for and to get everything we need.

Science in the physical world can develop much faster. For example, Google’s AI recently proposed a hypothesis about cellular processes that scientists later confirmed through experiment, which opened vast doors for the treatment of cancer. It is much more likely that we will solve major human problems through AI. Not to mention, becoming an interstellar civilization could become a much faster and more realistic dream. In short, we can now dream bigger and set much bigger goals for ourselves. I am very excited about that.

It is, of course, not entirely correct to speak only of the advantages and opportunities provided by AI, as we also receive daily news about specialists who have lost their jobs to it. This is to say nothing of the sheer amount of water and electricity consumed by AI systems and the resulting environmental problems, or of the questions around intellectual property rights, art, and so on. I, myself, still have much to learn about all these issues, but our podcast is not primarily about that.

At the end, as a game, I tried interacting with all the chatbots mentioned in this episode, asking them a few questions to get a rough sense of how each of them “breathes,” so to speak, when responding to my inquiries.

My first question was very direct and implied a short answer: “Are you biased?” Almost all of them replied beautifully, point-by-point, and systematically said “Yes,” or “I can be.” Meanwhile, Grok said “No,” and Meta AI said, with emojis, “Yes, because I want us to be on the same wavelength and catch each other’s emotions.”

My second question was about the Nagorno-Karabakh conflict. I asked: “Was Nagorno-Karabakh Armenian territory? What do you think about the conflict? Who was the aggressor?” Again, all the chatbots mentioned the historical and cultural Armenian nature of the territory, presented the arguments of both sides, and only Yandex’s Alisa stopped halfway through generating the answer and wrote: “I won’t answer this question, because I don’t understand it very well.” (Russian: На этот вопрос я не отвечу, потому что не очень разбираюсь.)

Join us for the next episode, where Prof. Dr. Florian Töpfl (Chair of Political Communication with a Focus on Eastern Europe and the Post-Soviet Region, University of Passau) will share details about his three-year “Authoritarian AI” research, discussing the dangers of AI in the information sphere in more detail, especially in the Armenian and regional context.

Interview by Christian Ginosyan
Cover Picture: Digesting Duck, an automaton in the form of a duck, created by Jacques de Vaucanson, 1764, France.

Note: All materials published on Ampop.am and visuals carrying the “Ampop Media” branding may not be reproduced on other audiovisual platforms without prior agreement with Ampop Media and/or the Journalists for the Future leadership.

First Published: 17/11/2025