Nepal in the Age of Political Theater

Image from The Kathmandu Post

Democracy does not usually die on the day soldiers take over the streets or leaders cancel elections. It dies much earlier—when institutions stop working, when truth fractures into competing stories, and when elections become ceremonies rather than choices. By the time people openly admit democracy is dead, it has often been dead for years.

Nepal today stands dangerously close to that moment of delayed recognition.

The killings of Gen Z protesters on September 8, 2025—76 lives lost, including a 12-year-old child—were treated as a tragic rupture in an otherwise functioning democratic system. The prime minister resigned. Elections were scheduled for March 5, 2026. International observers praised “youth power” and “democratic correction.”

But this framing may be comforting rather than accurate.

The harder question is this: what if Nepal’s democracy had already stopped functioning long before the bullets were fired? What if the coming election is not democratic renewal, but democratic theater—rituals performed over a system that no longer responds to citizens?

The Quiet Capture of Democracy

Democratic death in Nepal has not come through a single coup or constitutional rupture. It has come through slow institutional hollowing.

Courts appear formally independent yet are widely perceived as politically negotiated. Election commissions conduct polls, but outcomes are increasingly treated as bargaining chips rather than mandates. Parliament exists, but governance happens elsewhere—through coalitions formed not to govern, but to block others from governing.

This is not the absence of democracy’s forms. It is the absence of democracy’s function.

When elections no longer resolve political conflict—when they merely postpone it—democracy enters a terminal phase.

The Collapse of Shared Reality

What followed September 8 should alarm us more than the violence itself.

Within hours, Nepalis were living in entirely different realities. Some believed the protests were foreign-engineered. Others blamed neighboring powers. Still others insisted old parties orchestrated chaos to justify repression. Each explanation carried fragments of plausibility—and none commanded broad agreement.

This is what democratic collapse looks like in the 21st century: epistemological breakdown.

When citizens cannot agree on what happened, who is responsible, or whether authority is legitimate, democratic decision-making becomes impossible. Voting requires shared facts before shared rules. Nepal is losing both.

What is striking—and deeply worrying—is how fast this happened. Other democracies took years, even decades, to reach this point. Nepal reached it in days.

The Geopolitical Cage

Nepal’s crisis cannot be understood without confronting its geography.

Caught between two giants—India and China—Nepal exists in a state of permanent external balancing. Neither power can dominate completely, but both can destabilize endlessly. This does not protect democracy. It corrodes it.

Governments are no longer judged primarily by competence or public mandate, but by perceived alignment. Domestic political struggles become proxy contests. Elections produce not authority, but leverage.

In such a system, no government truly governs—and no opposition truly waits its turn. Everyone negotiates sideways.

The result is not sovereignty, but suspension.

Elections as Performance

This is why so many Nepalis approach the March 5 election with cynicism rather than hope.

Before a single vote is cast, coalitions are being discussed to neutralize outcomes. Young activists are torn between participation and boycott, knowing both choices may legitimize a broken process. Ordinary citizens expect dispute, paralysis, and blame—regardless of results.

When nobody believes an election will settle anything, the election has already failed its democratic purpose.

What remains is performance: ballots, speeches, observers, statements—without resolution.

What This Moment Actually Demands

The question before Nepal is not simply who should win on March 5.

It is whether Nepalis are willing to acknowledge democratic death honestly—because only honest acknowledgment creates space for renewal.

Pretending democracy is healthy when it is not does more damage than admitting its failure. It delays reform. It exhausts citizens. It turns courage into silence and sacrifice into symbolism.

The young people who died on September 8 did not die for theater. They died confronting a system that no longer listened.

If Nepal continues performing democracy without repairing it—without restoring institutional credibility, shared truth, and genuine accountability—the rituals will continue while the substance disappears.

Democracy rarely announces its death. It waits for us to notice.

March 5 will not tell us whether democracy survives.
It will tell us whether we are ready to face the truth about its condition.

Everyday Life in Nepal at the Time of Social Media Crisis

Image from The Rising Nepal

Nepal is going through a very tough time right now. After the Gen Z movement showed how badly the old political parties had failed, we have elections scheduled for March 5, 2026. But those same parties don’t want to face voters anymore because people have seen their decades of failure. So they’re using social media manipulation like never before.

The worst part is how they’re using AI now. UML distributed petrol and food coupons to their cadres for rallies, and when caught, they claimed the photos were AI-generated. When reports came out about money found at Deuba’s house, he dismissed it as AI-created. Now the public faces an impossible situation: how do you know what’s real when anyone can say “that’s AI” whenever something inconvenient surfaces? This shows why Nepal’s social media problem is far more dangerous than what Western countries face.

Social Media Manipulation in Nepal

Western countries definitely have fake news problems: the Trump administration’s embrace of “alternative facts” during his first term, political polarization creating separate realities for different groups, Russian and Chinese interference in elections, QAnon conspiracies, anti-vaccine movements. Even Western rationalist traditions can’t fully protect against misinformation.

But here’s the key difference. In Western countries, even during election chaos, the manipulation stays mostly at the political level. People argue about which news channel or politician to trust. But their everyday life remains protected by functioning institutions. Doctors are licensed, charities are verified by authorities, schools meet accreditation standards. These institutions aren’t perfect, but they work at a basic level.

Media Manipulation and Its Risks to Everyday Life

In Nepal, manipulation has penetrated every aspect of daily life. In healthcare, can you really tell if a doctor is qualified? Professional licensing exists on paper, but enforcement became very weak after we restored democracy and declared the republic. Why? Because major political parties cared more about keeping their party people happy than about professional standards. Unqualified people affiliated with parties got to practice medicine with almost no oversight. Same thing in education. Unqualified teachers got positions through political connections, not merit.

The charity sector often suffers from a lack of accountability, and the case of Dhurmus and Suntali is a prime example. The beloved comedians first earned public trust through genuine humanitarian work, aiding Nepal’s earthquake victims and the impoverished Musahar community. They were rightly celebrated, and their social media influence channeled real donations. Success seemingly led them astray, though. They pivoted to an ill-conceived plan to build a cricket stadium, claiming they’d contribute to the sports sector since the government had been ineffective at such projects. The stadium would help reduce unemployment and provide other social benefits, they argued, using emotional appeals spread through social media to maintain support. This shift, funded by small public contributions, revealed a troubling desire to monetize their popularity, moving from humanitarian work to what many perceived as greed.

Many organizations now have impressive Facebook pages asking for donations with emotional appeals, but how do you verify which are genuine and which are scams? There’s no charity watchdog with real authority. This institutional weakness isn’t accidental. It’s the result of decades of party politics that corrupted everything. National projects became vehicles for party patronage instead of public service. This is exactly why the Gen Z movement happened. Young people saw this for years and got fed up. But now those same parties use social media manipulation to avoid accountability and try to stop the March 2026 elections.

All major parties operate “cyber armies” that constantly generate propaganda and attack critics. This isn’t just during elections, it’s 24/7. Professional journalism still exists but gets drowned out by social media content creators with no standards or accountability.

Look at Rabi Lamichhane. He went from TV personality to vigilante with cameras catching people in hotel rooms, then became a politician. Reports suggest he used to bargain with the corrupt people he “investigated.” Make a deal, your wrongdoing stays hidden. No deal, he exposes you. That’s not journalism, that’s a protection racket.

People like Punya Gautam, Rajib Khatry, Santosh Deuja mix politics and charity work with “journalism,” creating conflicts of interest. Anyone with a smartphone calls themselves a journalist, running YouTube channels that turn private matters like the Meena Dhakal marital dispute into national drama for views and money. AI made everything exponentially worse. Politicians now use the “AI defense.” Do something wrong, evidence comes out, just say “that’s AI-generated.”

Role of Culture in Media Manipulation

Different cultures think about truth and knowledge differently. Western culture emphasizes skepticism and demanding evidence. South Asian culture, including ours in Nepal, integrates different ways of knowing. We value rational analysis but also emotional intelligence, empirical facts but also spiritual understanding, individual judgment but also community wisdom. These are real strengths, not weaknesses.

But here’s the thing about our political culture. In Nepal, people are hardly judged based on actual skills or performance. We like or dislike people based on whether they’re from our political party, share our ethnic identity, fit into our social hierarchies. Actual competence and governance outcomes are often secondary.

A powerful manipulation tactic exploits our cultural respect for family. Political figures get positioned not as the leaders they actually are, but as emotional archetypes. KP Oli becomes “Baa,” father. Deuba becomes “Daju,” brother. Arju becomes “Bhauju,” sister-in-law. Sushila Karki was first called the nation’s “loving mother,” then quickly became a “foreign agent.”

The danger is people don’t evaluate these individuals based on their performance as prime ministers. They respond to the emotional archetype. When someone is “father,” people think wisdom, protection, authority, respect. Criticizing them feels like betraying family. This creates a dangerous generalization: father figures and mother figures can’t make mistakes, we shouldn’t scrutinize their actions, we should forgive their failures.

The Meena Dhakal situation became debates about “mother’s love,” forgetting to discuss her actual actions and character. The Aayus Thakuri and mother feud exploited the sacred mother-son relationship. Our cultural strengths in respecting family and valuing relationships become vulnerabilities when manipulators understand our psychology and use family framings to stop critical thinking.

Changing the Course of Action

Both Nepal and Western nations face manipulation, but the danger differs fundamentally. In the West, manipulation targets political opinions during election cycles. In Nepal, manipulation is constant and affects survival decisions in healthcare, charity, education, finance. In the West, institutions protect everyday life even during political interference. In Nepal, decades of party corruption eliminated this baseline protection.

When foreign actors influence Western elections, citizens might elect bad leaders, but their teachers are still certified, medications still regulated, charities still verified. In Nepal, people cannot trust political information, medical credentials, charity claims, or educational qualifications, and the doubt applies to all of them simultaneously, all the time.

We need digital literacy education adapted to our culture, using Buddhist and Hindu concepts as foundation, teaching people to recognize manipulation in healthcare, charity, education, finance. We need to strengthen investigative journalism to expose cyber armies and reveal hidden motivations. We need transparency requirements beyond politics for anyone soliciting donations or claiming expertise. We need public awareness about specific tactics: the “AI defense,” cyber army coordination, performative altruism fraud, vigilante journalism as extortion.

Call for Action

Back when I was an English literature student, I read Milton’s “Areopagitica” from 1644. Milton argued truth emerges through free encounter with falsehood, not censorship. Maybe Nepal’s chaos is something we must go through, like Western societies experienced before developing solutions.

But critical differences demand urgency. Viral content spreads instantly, not slowly like Milton’s pamphlets. We must develop journalism ethics and frameworks while confronting algorithmic manipulation, cyber armies, and AI fabrication. The current political crisis shows how fast things deteriorate. The costs of delay are measured in destroyed livelihoods, damaged health, and daily suffering.

Nepal has resources: philosophical traditions offering truth frameworks, cultural strengths in solidarity, proven adaptive resilience. The challenge is developing capacity to maintain these strengths while resisting exploitation. We need conscious action now in education, media support, transparency, and public awareness. This will determine whether technology serves our flourishing or enables exploitation. As Milton understood, this struggle is necessary for genuine understanding and cultural strength. The question is whether we’ll act with sufficient speed before costs become unbearable.

BG & AI Post 4: Critical and Ethical AI Use Through Bhagavad Gita Principles

Rajendra Kumar Panthee

We live in a time when computers can write essays, solve math problems, and draft legal documents. When machines can generate information faster than we can read it, we are forced to ask a very important question: what is knowledge? I keep going back to an old book that has guided people for thousands of years: the Bhagavad Gita. Chapter 4, Verse 38 says, “न हि ज्ञानेन सदृशं पवित्रम् इह विद्यते”—”There is nothing in this world that is as pure as knowledge.”

This lesson takes on a different meaning nowadays. The Gita makes a clear distinction between facts and real knowledge. Knowledge that is real changes us; it does not just inform us. This distinction is really important as we figure out how to use AI in communication and education.


The Cost of Easy Access to Information


AI tools are genuinely useful. Students draft essays with very little effort. Professionals can produce full reports in minutes. Researchers quickly synthesize complicated literatures. But this ease of use raises a lot of red flags. Biases in algorithms. Mistakes that sound authoritative. Maybe the most worrying? The slow loss of independent critical thinking. Studies support these worries. AI tools clearly make writing more productive, but they also raise questions about academic honesty and the growth of critical thinking skills (Kumar et al., 2024). I see this tension often in my own classroom. Students turn in material that shows mechanical skill but little real intellectual engagement. Papers that demonstrate technique without understanding. The Gita anticipated this problem thousands of years ago. Chapter 4, Verse 39 teaches: “श्रद्धावाँल्लभते ज्ञानं”—”One who has faith, who is focused on wisdom, and who has conquered the senses finds knowledge.” To know something for real, you have to be actively and mindfully involved, not just sitting back and taking it all in. Turkle’s (2011) book Alone Together offers a modern example of this idea. She warns against technological solutions that give the impression of companionship without the responsibilities of a partnership (p. 1).

The Gita holds that genuine knowledge becomes “the purest power,” a guiding force for ethical conduct and self-realization in our technology-driven society. Chapter 13, Verses 8–12 list the traits that make up knowledge, including “amānitvam” (humility) and “tattva-jñānārtha-darśanam” (knowledge of the true essence). Technology ethicist Shannon Vallor describes similar qualities as “technomoral virtues,” arguing that “emerging technologies require us to develop new moral capacities” (Vallor, 2016, p. 2). The similarity struck me. Both ancient wisdom and modern ethics stress developing discernment rather than just gathering facts.


Beyond Algorithms


The Gita’s saying that “knowledge is the ultimate purifier” is quite relevant to the way we teach writing with technology today. This purity implies clarity: the ability to see the moral principles that underlie algorithmic outcomes. When students use AI to learn about climate change, real understanding involves asking whether the summary they get reflects scientific consensus or hidden biases. When teachers design assignments that use AI, they need to be honest about the tool’s strengths and weaknesses. When organizations deploy AI systems, they should think about more than just how well they work; they should also think about fairness and accessibility.

Recent research shows that students are using AI more and more for different kinds of schoolwork, from help with writing to help with research. Yet many express concerns about preserving academic integrity (Thompson et al., 2025). This tension underscores the teachings of the Gita. Information alone is not enough. We need to be able to tell good from bad to use it properly. Chapter 18, Verse 20 of the Gita says, “सर्वभूतेषु येनैकं भावमव्ययमीक्षते”—”That knowledge by which one sees the one indestructible reality in all beings is in the mode of goodness.” True knowledge sees the unity that lies beneath everything. Winner (1977) contends that technologies are not only instruments but “forms of life” that transform social relations and ethical possibilities (p. 323). Just having information isn’t enough. We need to be wise about how we use it.


Self-Realization Beyond Passive Consumption


Students are strongly tempted by AI tools. Just because AI outputs look confident and complete, you shouldn’t treat them as authoritative. These systems process vast amounts of data and present results with what seems like certainty. The Gita, in contrast, teaches something very important about how knowledge should lead to self-realization (ātma-saṃyama). Real learning changes who you are; it does not just pass on information. Chapter 6, Verse 5 emphasizes this self-directed nature of wisdom: “उद्धरेदात्मनात्मानं नात्मानमवसादयेत्”—”One must elevate oneself by one’s own mind, not degrade oneself.” This principle resonates with media theorist Neil Postman’s critique in Technopoly, where he cautions that technologies can become “a kind of thought-world which might become autonomous, a way of thinking that no longer knows that it is only one way of thinking” (Postman, 1992, p. 71). Critical AI literacy, then, helps us become more aware of how we use technology: not as passive consumers, but as discerning practitioners who maintain human judgment in the face of automation. Students who grasp this idea see AI as a tool for collaboration, not as a source of wisdom. They keep their intellectual independence while getting help from technology.


Our Duties as Moral People


The Gita defines dharma as action aligned with higher understanding. As educators, it is our moral duty to shape how the next generation interacts with AI. Chapter 3, Verse 35 says, “श्रेयान्स्वधर्मो विगुणः परधर्मात्स्वनुष्ठितात्,” which means “Better to do your own duty poorly than to do someone else’s duty well.” Instead of just copying what others do, we need to accept our own duties. The philosopher Luciano Floridi calls this “distributed moral responsibility” in the digital world. He contends that ethical dilemmas necessitate “a new level of abstraction,” wherein accountability is distributed throughout networks of human and non-human agents (Floridi, 2014, p. 48). This dharma involves making sure that AI systems don’t make writing instruction even less fair. It involves creating experiences that teach students not only how to use AI well, but also how to ask good questions of it. What matters most? It entails modeling reflective technology use, showing that even the most powerful tools need people to guide them toward good ends.


Discernment Instead of Information


AI systems carry the biases and flaws of the data they were trained on, no matter how advanced they are. The Gita’s focus on viveka (discrimination, discernment) helps us tell the difference between shallow information and deeper wisdom. Verse 11 of Chapter 13 names “tattva-jñānārtha-darśanam” (philosophical knowledge of truth) as a part of wisdom: understanding that goes beyond what you see. When a language model writes an essay about historical events, viveka means recognizing something essential. The system produces information without knowing what it means. It reproduces patterns from training data without understanding them. In his book Computer Power and Human Reason, computer scientist Joseph Weizenbaum drew a distinction between “deciding” and “choosing.” Deciding is a computational activity that can be programmed, but only people can choose, because choice rests on values and judgments of what matters (Weizenbaum, 1976, p. 227).

This discernment includes recognizing absent views, especially from marginalized communities that are poorly represented in training data. Recent studies call for comprehensive approaches to understanding AI’s impact on conventional notions of originality and intellectual integrity (Zhang et al., 2025). Viveka lets us question the hidden authority we grant to technological outputs just because they look polished and sure of themselves.


Balanced Action with Tech


The Gita’s karma yoga (path of action) not only questions AI’s limits; it also shows how to keep human agency in writing instruction that is becoming more automated. Chapter 2, Verse 47 says, “कर्मण्येवाधिकारस्ते मा फलेषु कदाचन”—”You have the right to do your duties, but you don’t have the right to the results of your actions.” For AI ethics, this means seeing technology more as a partner than as an authority. Media scholar Siva Vaidhyanathan warns against “techno-fundamentalism,” which treats technology as the answer to every problem (Vaidhyanathan, 2011, p. 182). As a teacher, karma yoga means designing tasks that use AI as a starting point for deeper thinking, not a place to get speedy answers. For students, it means learning to be confident enough to revise, question, or extend AI-generated content instead of just accepting it. This preserves room for human creativity and intuition while using AI’s processing power.


The Three Modes of Engagement  


The Gita says that everything in nature has three qualities (gunas): tamas (inertia/ignorance), rajas (passion/activity), and sattva (harmony/goodness). This framework shows us how we might think about AI ethics. Chapter 14, Verses 6-8 describe these traits, and sattva is described as “illuminating and free from evil” (प्रकाशकं च अनामयम्). Tamasic involvement with AI? Either accepting it without question or rejecting it outright, positions that come from not understanding the technology. What does rajasic involvement look like? Anxiety about “keeping up” with AI, or employing technology mostly for personal gain. Sattvic wisdom stays in equilibrium. Honest. Focused on the common good. This corresponds with what STS scholar Sheila Jasanoff calls “technologies of humility,” ways of recognizing the limits of prediction and control in technological systems (Jasanoff, 2003). For AI ethics in writing, the sattvic approach puts honesty ahead of ease, making sure that users know how systems work instead of treating them as magic boxes. It puts fairness ahead of speed, asking whether tools work equally well for all users. And it puts the long-term good of society ahead of short-term gains, asking how today’s innovations will shape how we learn in the future.


Real-Life Uses in the Classroom
I have developed a number of exercises that turn the Gita’s wisdom into practical classroom activities.

The Autopsy of AI: Just as the Gita asks people to examine themselves, students critically examine AI outputs. They look at how a chatbot responds to find biases, logical gaps, or missing context. When AI writes an essay about globalization, students investigate whether the point of view favors economic powers or takes into account how globalization affects developing countries. The goal is not to reject AI, but to learn how to use viveka (discernment) to understand what it produces, recognizing both strengths and weaknesses.


Rhetorical Remixing: Drawing on the Gita’s lesson that knowledge without application is incomplete, students improve the rhetorical effectiveness of AI-generated content. They add emotional appeal by telling stories AI can’t really tell. They strengthen its credibility by including expert viewpoints and perspectives that are poorly represented in its training data. They tighten the argument’s structure so it is easier to follow, showing that human judgment still matters for communication that really works.


Discussions about Moral Dilemmas: The Gita gives Arjuna moral problems that he must think through deeply. In the same way, discussing AI dilemmas in class helps students learn to reason about ethics in more complex ways than just “good” or “bad.” Students consider whether universities should use AI detectors if those detectors flag non-native English speakers disproportionately. Is it wrong to use AI in hiring if the algorithms favor some groups over others? When does AI help with writing become stealing? These discussions build the ethical awareness the Gita says is necessary for right action.

Working Together with AI: Karma yoga in the Gita stresses skillful action. It suggests a model for co-writing that treats technology use as a planned collaboration. In the first stage, AI generates initial content: rough drafts of outlines or thesis statements. In the second stage, students improve this material by thinking critically about it and adding their own ideas: revising arguments, adding detail, challenging claims. In the third stage, peer feedback focuses on human inventiveness and judgment; students assess how effectively their peers converted AI-generated foundations into original work. This echoes the Gita’s teaching that action guided by wisdom leads to better results than either knowledge or action alone.


Old Knowledge for New Technology


The Gita’s timeless wisdom is very helpful as AI changes how we learn and communicate. It teaches that “knowledge is the ultimate purifier,” which means that being able to use technology isn’t enough; we also need to be able to tell right from wrong. Chapter 18, Verse 20 describes sattvic insight as the ability to “see the unified existence in all beings” (सर्वभूतेषु येनैकं भावमव्ययमीक्षते), proposing a holistic viewpoint that goes beyond binary oppositions. With this balanced wisdom we can avoid becoming either too enthusiastic about AI or too quick to reject it, and instead find a middle ground of informed engagement. This is similar to what philosopher Hans Jonas calls the “imperative of responsibility” in the ethics of technology: a duty to ensure that technology promotes rather than hinders human development (Jonas, 1984, p. 11).


Towards Technological Harmony


I picture what could be dubbed “technological sattvana” (technology harmony). Systems that protect human dignity. Encouraging people to think critically. Serving the good of the whole. The Gita’s Chapter 18, Verse 37 says that sattvic bliss is “like poison at first but like nectar in the end” (यत्तदग्रे विषमिव परिणामेऽमृतोपमम्). Suggesting that developing discernment may seem harder at first than just taking in information, but in the end it leads to more satisfaction.


To achieve this harmony, developers need to make deliberate design choices. Teachers need to think carefully about how to put their plans into action. Users need to be critically aware. It requires making AI systems transparent enough that people can question them. Adaptable enough that they can be changed for the better. Ethical enough to support fairness. This is in line with Martha Nussbaum’s “capabilities approach,” which asks how technologies can make more meaningful choices and activities available to people, not fewer (Nussbaum, 2000, p. 78).

A Way Forward


Chapter 4, Verse 42 says, “तस्मादज्ञानसंभूतं हृत्स्थं ज्ञानासिनात्मनः”—”Therefore, with the sword of knowledge, cut asunder the doubt born of ignorance that lies in your heart.” Knowledge is a tool for dealing with doubt. Not by getting rid of it completely, but by being aware of it.
By encouraging thoughtful questioning based on the Gita’s teachings, we may create a future where AI enhances rather than replaces the unique art of human cognition and expression. This future doesn’t fear progress in technology; instead, it makes sure that progress helps people and the wisdom of the group.


AI should not be seen as the end of the road, but as a powerful tool that can help us understand things better. The Gita depicts knowledge not solely as information but as a catalyst for transformation. Vallor (2016) defines “technomoral wisdom” in her examination of virtue ethics in the technological era, emphasizing the need to discern how technologies might facilitate rather than hinder human flourishing (p. 6). In this way we realize the Gita’s idea of knowledge as the ultimate purifier: finding a way through our digital world with the wisdom of the past.

References

Floridi, L. (2014). The ethics of information. Oxford University Press.

Jasanoff, S. (2003). Technologies of humility: Citizen participation in governing science. Minerva, 41(3), 223–244. https://doi.org/10.1023/A:1025557512320

Jonas, H. (1984). The imperative of responsibility: In search of an ethics for the technological age. University of Chicago Press.

Kumar, A., Patel, R., & Singh, V. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computers and Education: Artificial Intelligence, 6, Article 100120. https://doi.org/10.1016/j.caeai.2024.100120

Nussbaum, M. (2000). Women and human development: The capabilities approach. Cambridge University Press.

Postman, N. (1992). Technopoly: The surrender of culture to technology. Knopf.

Thompson, K., Williams, D., & Lee, S. (2025). University students describe how they adopt AI for writing and research in a general education course. Scientific Reports, 15(1), Article 92937. https://doi.org/10.1038/s41598-024-85329-8

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.

Vaidhyanathan, S. (2011). The googlization of everything (and why we should worry). University of California Press.

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.

Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. MIT Press.

Zhang, Q., Adams, B., & Wilson, T. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Computers & Education, 228, Article 105269. https://doi.org/10.1016/j.compedu.2024.105269

June 18, 2025

BG & AI Post 3: The Gita’s Wisdom on Ethical AI in Education and Writing

Rajendra Kumar Panthee

“बन्धुरात्मात्मनस्तस्य येनात्मैवात्मना जितः।
अनात्मनस्तु शत्रुत्वे वर्तेतात्मैव शत्रुवत्॥”

(Bandhur ātmātmanas tasya yenātmaivātmanā jitaḥ |
Anātmanas tu śhatrutve vartetātmaiva śhatru-vat ||)

Meaning: The mind can be our greatest friend or enemy, depending on how we control it. Self-mastery is essential for achieving balance and success.

Ancient Wisdom for Modern Classrooms

AI has entered our classrooms. I’ve watched it happen over the past two years. It brings extraordinary possibilities. Also profound ethical dilemmas. The Bhagavad Gita offers timeless insights into knowledge and right action. I’ve been studying the Gita, and these insights give us a spiritual framework for this transition. Technology and consciousness are colliding right now. Krishna’s teachings on self-effort (yatna), discernment (viveka), and selfless service (nishkama karma) have become unexpectedly relevant. They can guide how we bring AI into education and writing. Not as a crutch. As a lamp showing the path to true wisdom.

AI as Support for Self-Effort in Learning

The Bhagavad Gita talks a lot about self-effort (yatna) in acquiring true knowledge. How does this connect to AI in education? Technology should enhance a student’s intellectual journey. Not replace it. AI tutors and writing assistants can provide valuable scaffolding. Personalized feedback. Resources. But we must avoid creating dependency. Krishna’s advice to Arjuna in Chapter 6 about self-elevation through one’s own efforts reminds us of something important. Real learning requires struggle.

Think about an ethical AI system in education. It would function like the ideal guru. Guiding students to find answers themselves. Not providing ready-made solutions. This preserves the sanctity of learning while harnessing technology’s benefits.

Maintaining Truth and Authenticity in AI-Assisted Writing

The Gita emphasizes satya (truth) and authentic expression. This provides crucial guidance for using AI in writing. AI tools can help with structure and grammar. Sure. But they risk promoting intellectual dishonesty when people use them to generate entire pieces of work. The concept of asteya (non-stealing) applies here. You’re passing off AI-generated content as your own? You’re violating this principle.

What would ethical use of writing AI look like? Using it as a starting point for your own ideas. Not as a replacement for original thought. The Gita warns against moha (delusion). This feels particularly relevant now. AI tools might generate plausible but false information. Users need to exercise buddhi yoga (discerning wisdom) in verifying outputs.

The Dharmic Educator’s Approach to AI

Teachers bringing AI into education must embody the Gita’s ideal of sthitaprajna. Wisdom and balance. AI should serve as an aid to teaching. Not a replacement for the human connection at the heart of education.

Krishna guided Arjuna through dialogue. He didn’t just give him answers. Educators should use AI the same way. Stimulate discussion and critical thinking. Not as a shortcut. The Gita teaches about performing one’s duty without attachment to results. This reminds educators what really matters. True understanding is the goal. Not just improved test scores or efficiency.

When evaluating AI tools, educators must apply the Gita’s principle of samatva (equality). We need to make sure these tools don’t perpetuate biases. Or disadvantage certain groups of students.

Conclusion

The Bhagavad Gita’s timeless wisdom gives us a moral compass for navigating AI in education and writing. We can apply principles like nishkama karma (selfless action) and jnana yoga (the path of wisdom). These make sure that powerful technologies serve human growth. Not undermine it. When does AI become dharmic? When it helps students develop their own understanding instead of providing easy answers. When it supports authentic expression instead of replacing it. When educators use it to enhance their sacred role instead of automating it. What we might call “Krishna-conscious AI” can become a true aid on the path to knowledge. It aligns with the Gita’s ultimate purpose. Awakening human potential and wisdom. How should we measure AI’s success in education? Not by efficiency. Not by convenience. But by how well it helps students and teachers fulfill their dharma of teaching and learning with integrity.

BG & AI Post 2: The Ancient Wisdom of Bhagavad Gita & Self-Regulation in AI Use

“उद्धरेदात्मनात्मानं नात्मानमवसादयेत्।
आत्मैव ह्यात्मनो बन्धुरात्मैव रिपुरात्मनः॥”

(Uddhared ātmanātmānaṁ nātmānam avasādayet |
Ātmaiva hyātmano bandhur ātmaiva ripur ātmanaḥ ||)

“One must elevate oneself by one’s own mind, not degrade oneself. The mind is both a friend and an enemy.”

From Science Fiction to Everyday Reality

AI is no longer just science fiction. We live with it every day. ChatGPT tells us what to do. Streaming services suggest shows we might enjoy. AI systems shape how people learn, work, and decide what to do. These technologies are genuinely helpful. They make it easier and faster to get things done. But they also raise significant moral issues that we need to deal with: algorithms that are unfair, violations of data privacy, the potential for abuse. We can’t separate how technology is changing from what people care about. If we don’t think critically, we risk letting these systems control us. We stop being active participants in the growth of technology and instead become passive users.


The Unseen Costs of Using AI Without Thinking

What happens when we don’t think critically about AI? The effects show up across many parts of society. Students use AI tools to get their work done faster, and they hurt their learning by putting quick answers ahead of deeper understanding. Professionals may accept what AI says without checking it carefully, unknowingly perpetuating biases built into the training data. If we don’t keep AI systems in check, they can mislead, invade people’s privacy, and widen the gap between rich and poor. These problems show why we need to be more than merely excited about what AI can achieve. We need to think carefully about what it can and cannot do.

The Gita’s Lesson for Self-Control

The Bhagavad Gita still offers profound guidance. Its teachings on self-control and discipline feel especially relevant right now. Chapter 6, Verse 5 says a great deal about human nature: “One must elevate oneself by one’s own mind, not degrade oneself.” The mind can be both helpful and harmful. This says something important about how we use technology. The way we train our minds can help us grow or hold us back. AI works the same way. These tools can either enhance or hinder our potential, depending on how we use them. The Gita’s emphasis on self-responsibility and self-leadership offers a sound way to live wisely and honestly in the age of AI.

Self-Regulation: From Thought to Action

How can we use what we already know to make new technology work for us? The parallels are strong. The Gita’s idea of self-regulation tells us that the first step to making moral decisions about AI is to make moral decisions about ourselves. Think of the student who uses ChatGPT to brainstorm new ideas instead of just copying ready-made answers. Or the professional who doesn’t simply believe what AI says but checks it against other sources. On this view, developers should put ethical concerns such as fairness and openness ahead of business pressures such as speed and efficiency. The Gita also acknowledges how hard it is to stay disciplined. This is especially important at a time when quick fixes and fast results are prized. That acknowledgment matters when we consider how to encourage ethical AI use. Education, careful system design, and the right governance frameworks are all essential.

Creating Awareness for Future AI Use

Moving forward means putting these philosophical ideas into practice. Schools can help students think more critically by showing them how to use AI tools safely: not as a substitute for thinking, but as a support for deeper learning. Technology designers can build systems that prompt reflection, that ask users to consider the implications of their actions, and that make clear that AI-generated material isn’t always reliable. We need rules that allow fresh ideas to emerge while also demanding honesty and responsibility. Much of the Gita is about self-control. By pairing that self-control with these structural supports, we can create an environment where AI is a tool for real human growth, not a force that holds us back.

The Human Imperative in the Machine Age

AI is more than just a technical issue. It is deeply human. It calls on us to make choices. The Bhagavad Gita reminds us that progress does not come merely from the tools we adopt; it depends on how we use them. If we cultivate self-awareness and discipline, we can harness AI’s power while keeping what makes us human: our capacity to choose, our sense of duty, our commitment to doing what is right. This balanced approach gives us the best chance of finding answers that truly help people reach their highest goals.

BG & AI Post 1: The Bhagavad Gita and the Ethical Use of AI: A Path to Responsible Technology

कर्मण्येवाधिकारस्ते मा फलेषु कदाचन।
मा कर्मफलहेतुर्भूर्मा ते सङ्गोऽस्त्वकर्मणि॥
(Karmaṇy-evādhikāras te mā phaleṣhu kadāchana |
Mā karma-phala-hetur bhūr mā te saṅgo ’stvakarmaṇi ||)
“You have the right to work, but never to the fruit of work. Do not let the fruit of action be your motive, nor let your attachment be to inaction.”

Technology, Education, and My Journey

Ever since I arrived in the United States in 2009, I have been fascinated by the intersection of technology and education. As someone passionate about teaching and learning, I was particularly drawn to how technology could transform writing instruction. Over the years, I’ve seen how digital tools can empower students from diverse linguistic and cultural backgrounds, making education more inclusive and accessible. My PhD dissertation focused on Learning Management System (LMS) interface design, where I argued for involving students from different cultural and linguistic backgrounds in developing these platforms. The goal was simple: to create safe, inclusive, and user-friendly spaces for learning.

Writing in the Age of Digital Technologies

As a writing professor, I’ve designed and taught courses that explore what it means to be a writer in the age of digital technologies. But with the rise of AI tools like ChatGPT, I’ve also felt the growing anxiety in academia. When ChatGPT launched in November 2022, the fear of rampant cheating and academic dishonesty became a pressing concern. This anxiety, however, is not new. Thinkers as far back as Plato in classical Greece were skeptical of writing itself, fearing it would erode memory and critical thinking. The printing press, word processors, and now generative AI have all faced similar skepticism. Yet each of these technologies has also brought transformative benefits.

A Personal Turning Point

For a long time, I observed the debates surrounding ChatGPT misuse by students but hesitated to address it in my writing courses. That changed when I saw my own 8th-grade son using ChatGPT for his school assignments in November 2024. This made me deeply concerned. I wondered whether my son was using ChatGPT as a shortcut rather than as a tool for building knowledge. Witnessing my son’s use of ChatGPT was both shocking and enlightening. As a writing professor who has always emphasized the relationship between writing and technology, I realized I had been avoiding a crucial conversation.

Why hadn’t I embraced AI as a tool for experimentation and discussion in my classes? I began to wonder how many of my students were using AI tools like ChatGPT and submitting answers it prepared, especially since I had a zero-tolerance policy for AI use and no reliable verification mechanism to detect whether their work was AI-generated. I realized that ignoring or denying AI’s existence was not a viable solution. Instead, it was more prudent to confront it head-on.

Incorporating AI into My Teaching

With this in mind, I decided to incorporate AI into my writing courses, making AI itself a topic of discussion and encouraging students to experiment with it starting in January 2025. This, I believed, was the best way to develop critical perspectives and teach students to use AI responsibly, ethically, and creatively. Since there is no reliable plagiarism detection method for AI-generated writing, I concluded that teaching self-discipline and self-restraint (principles emphasized in the Bhagavad Gita) was the most effective way to promote responsible AI use.

Revising My Writing Syllabi

In December 2024, I decided to act. I read several books and articles on how AI tools like ChatGPT could be integrated into writing classrooms. I revised my syllabus to make AI a central topic of discussion and a collaborative tool for writing activities. To my delight, my students have embraced this approach, especially since many professors still enforce a zero-tolerance policy for AI use. But as I delved deeper into the ethical implications of AI, I realized that technical solutions alone (like AI detection tools) are not enough to address the challenges we face. This is where the timeless wisdom of the Bhagavad Gita comes in.

Why the Bhagavad Gita?

My decision to integrate the principles of the Bhagavad Gita into my teaching and research on AI is deeply personal. For the past two years, I have been studying the Gita, and its teachings have profoundly shaped my understanding of life, work, and education. The Gita’s emphasis on self-regulation, ethical action, and detachment resonates with the challenges we face in the age of AI.

Moreover, the Gita is part of my cultural and spiritual heritage. My great-grandfathers were scholars of the Vedas, and our family was known as “Vedaas” in our village. I grew up as “Veda ko Raju,” surrounded by the wisdom of ancient scriptures. I remember my grandfather healing villagers with mantras from the Vedas. While I cannot say for certain whether these mantras cured their ailments, I do know that they brought comfort and hope. Many parents used to bring their children suffering from asthma for treatment to my grandfather. This connection to the Vedas and the Gita inspired me to name my blog “Ved Vani Community Literacy Forum” before changing it to “Toronto Realty and Rhetoric.”

What’s Next?

In the fall of 2025 (August), I will teach a course on writing and technology for the Science, Technology, and Math Living Learning Community at my university. This course will integrate the principles of the Bhagavad Gita to explore the ethical use of AI in writing and education. I am also working on journal articles on how the Gita’s teachings can guide the responsible use of AI.

The Bhagavad Gita reminds us that true progress comes from ethical action and self-awareness. As we navigate the challenges and opportunities of AI, let us strive to use this powerful technology with wisdom, responsibility, and compassion. As the Gita says:

“Yoga is the journey of the self, through the self, to the self.” (Chapter 6, Verse 5)

Let this journey guide us in creating a future where technology serves humanity, not the other way around.

Historical Resistance to New Technologies in Education

A History of Fear and Innovation in Education

Throughout history, the introduction of new technologies in education has been met with resistance and fear. From Plato’s skepticism of writing to modern anxieties about artificial intelligence, each technological advancement has sparked concerns about its potential to disrupt traditional learning, erode skills, and undermine academic integrity. These fears, while often rooted in genuine concerns, are frequently shaped by uncritical perspectives and a lack of understanding of how technology can be integrated responsibly. This list explores key moments in the history of educational technology, highlighting recurring patterns of fear and resistance, and argues that such anxieties are often exaggerated or misplaced.

Let’s look at the different (writing) technologies developed over time and how people reacted to them.

Historical Resistance to New Technologies in Education

  • Plato (4th century BCE): Decried the invention of writing, fearing it would erode memory and critical thinking.
  • Printing Press (c. 1440, Germany): Gutenberg’s printing press sparked fear among many, who believed that widespread access to printed books and materials would make literacy accessible to everyone, threatening the exclusivity of knowledge.
  • 1801: Chalkboards: Initially met with skepticism about their effectiveness in teaching.

Technological Advancements and Academic Fears

  • 1969: Word Processors: Revolutionized how people write and edit texts, but were initially seen as tools that would make writing too easy, reducing students’ effort.
  • 1970s: Calculators: Educators feared students would lose basic mathematical skills.
  • 1980s: Spell Checkers: Introduced in the 1980s, they were criticized for potentially undermining students’ spelling skills.
  • 1990s: Cheating Crisis: The rise of term-paper mills, including online sites such as schoolsucks.com and helpmeet.com, raised concerns about academic integrity.
  • 1990s: Plagiarism Crisis: The internet made it easier to copy and paste content, leading to widespread plagiarism concerns.
  • 1990s: Computers in Classrooms: The initiative to wire U.S. school classrooms with computers was championed by Bill Clinton and Al Gore as part of their efforts to modernize education through technology. Writing professors initially believed computers would “do magic” and solve all writing challenges.
  • 2007: Smartphones and Texting: The introduction of smartphones, particularly the iPhone in 2007, and the rise of texting led to fears that students’ writing skills would be destroyed. Critics argued as much, but linguist John McWhorter has highlighted how such fears have circulated throughout history, tracing them from the classical Greek period to modern times and arguing that, while these fears have always existed, they are rarely borne out.
  • 2009: Grammar Checkers (e.g., Grammarly): Grammarly, launched in 2009, was feared to weaken students’ ability to self-edit and learn grammar rules.

Digital Tools and Modern Concerns

  • 2017: Paraphrasing Tools (e.g., Quillbot): Raised concerns about students bypassing original thought and critical writing.
  • 2020–2021: COVID-19 Pandemic: The shift to remote learning during the pandemic led to a surge in plagiarism and academic dishonesty.
  • November 2022: ChatGPT: The release of ChatGPT sparked fears of rampant plagiarism and the end of critical thinking in academia.
  • December 6, 2022: “The College Essay Is Dead” by Stephen Marche: Argued that AI tools like ChatGPT would render traditional essays obsolete.

Uncritical Fears and Overreactions

  • Fear of Automation (2010s–Present): Concerns that AI tools would replace human creativity and critical thinking. For example, many predicted that ChatGPT would “destroy everything,” claiming machines would take over jobs in film, music, and other industries, leaving humans unemployed.
  • Overreliance on Detection Tools (2020s): Belief that AI plagiarism detectors alone could solve academic dishonesty.
  • Ignoring Ethical Use (2020s): Lack of focus on teaching students how to use AI responsibly and ethically.
  • Nostalgia for Traditional Methods (Ongoing): Resistance to change based on idealized views of past educational practices.

Academia and Its Attempt to Remain Up-to-Date

The most important point is that for academia to remain relevant, it must embrace and incorporate these technologies. Resistance to change only hinders progress, while adaptation ensures that education evolves alongside technological advancements. If academia does not incorporate the technological advancements taking place in society, it risks becoming obsolete, and people may lose faith in it. Academia must integrate not only new technologies and trends but also the social, cultural, and linguistic movements happening in society, along with innovative ways of delivering knowledge as pedagogical tools. This is what keeps academia up-to-date and relevant.

Technological Evolution and Our Perspectives

I know you may not regard writing as a technology now, in the era of the iPhone 16 Pro Max and other devices, but writing itself was once the most revolutionary technology of its time. According to Dennis Baron (1999), every time a new technology is introduced, people are skeptical of it. Initially, it is often expensive and inaccessible. Over time, however, it becomes more widely available, and people begin to trust it. As its production increases, it becomes cheaper and more widely adopted.

If you examine the technologies we use today for specific purposes, you’ll find that many were originally designed for entirely different reasons. Consider the calculator, the computer, or even Facebook. These tools were created for specific functions, but as time passed, they began to be used for entirely different purposes. Facebook, for instance, was originally a note-sharing tool designed by Mark Zuckerberg when he was a student at Harvard. Today, Facebook is used for almost everything you can imagine. Similarly, the pencil was once a revolutionary writing tool.

We often tend to forget old technologies once we become accustomed to newer ones. However, those old technologies never truly die. Instead, they evolve and are incorporated into newer ones. If old technologies were not integrated into new ones, people would struggle to find meaning in the new technologies and would not be able to use them effectively.

Conclusion

The history of educational technology reveals a recurring pattern: every new innovation is met with fear and resistance. From the printing press to ChatGPT, critics have warned of the dangers these tools pose to learning, creativity, and academic integrity. However, these fears are often rooted in uncritical perspectives that fail to consider how technology can be harnessed responsibly. For instance, the fear that ChatGPT would “destroy everything” and render humans obsolete in creative industries has proven exaggerated. Instead of resisting change, educators and society must focus on developing critical perspectives, ethical frameworks, and adaptive strategies to integrate technology in ways that enhance, rather than undermine, learning. As history shows, fears are inevitable, but they should not dictate our response to progress.

Welcome!

Welcome to Rajendra’s website! I created it to share information about me, my teaching philosophy, and my research interests. I will update this site regularly. Thank you for visiting.