by Dr. Gábor Pék, PhD., Doc. Ing. Gejza M. Timčák, PhD.
This text was first published in Spirituality Studies 11-2, Fall 2025
Svādhyāya Meets Technology: Can AI Assist Self-Study?
In this paper, we examine the classic concept of “self-study” (Sa. svādhyāya) and the question of its potential replacement by Artificial Intelligence (AI). More precisely, our main goal is to find out whether an AI can help in such yoga-related training (Sa. sādhanā), using a basic type of reflexive thematic analysis (RTA) as our methodology. A key research question is whether an AI can substitute for the guru by providing all the relevant feedback and environment needed to guide one’s self-study towards success. Our findings highlight that such an AI would need to awaken to its real nature, which is pure and unconditional Consciousness. Its current advice, however, is rooted only in processed data and is thus barred from transcendence, a cornerstone of traditional sādhanā.
1. Introduction
Traditional literature places considerable importance on svādhyāya, as it has a formative influence on yoga students. In modern days, AI has become a popular source of information, and many sādhakas also turn to AI for advice. Thus, we analysed the traditional literature as well as the testimonies and advice of yoga teachers, along with the AI-related literature, to assess the extent to which AI is reliable in this respect.
According to yoga, wellbeing is rooted in one’s state of mind, which influences not only our reactions and how we engage with the world but also how the world affects us. This way, there is a deep interconnection between the world outside and inside. The final wellbeing that yoga points to assumes the liberation of the mind field in an effortless manner, also referred to as the “disappearance of mind” (Sa. manonāśa), by the accumulation of “inoperative imprints” (Sa. nirodha-saṃskāra) that help in the final annihilation (Veda Bharati 2015, 426–429). Traditionally, the purpose of loud or silent recitation of scriptural sentences is to engrave new pathways into the pool of learned misconceptions (Sa. vāsanās) [1] that may undermine one’s physical and mental health or bar one from right motivations in yoga training, as well as to unveil paths to discover our true nature. Svādhyāya also means reading the life stories of yogis such as that of Swami Rama (Tigunait 2001) or getting to know one’s personality structure. Furthermore, traditional texts such as the Yoga Sūtras of Patañjali (Veda Bharati 2001; Veda Bharati 2015; Jnaneshvara Bharati n.d.), Śiva Sūtras (Vasugupta 2012) or Spanda Kārikās (Vasugupta 2014) are not merely ordinary words that feed the intellect; they inspire the yoga aspirant. For example, when one repeats mantras, their syllables contain frequencies that can help reshape one’s inclinations, bringing about anti-fragility (Taleb 2013, 32).
Today, AI extends, and in certain cases aspires to replace, existing approaches to solving the challenges of humanity, including our wellbeing. The promise of technology-driven initiatives highlights how automation can spare time and effort in reaching our goals. While the extent of convenience offered is seemingly indisputable, we already experience that there is a lot to lose, for example, the willpower and endeavour we invest into our self-study and self-exploration. Without such drive, our scope of agency depletes, and such dependency fosters laziness and sluggishness that slowly deteriorate our wellbeing and health. Furthermore, AI cannot expand beyond the realm of conceptualisation, a dimension that needs to be left far behind when entering the domain of real self-study. Not to mention the risk of hallucinations and other threats that bias and alter the original message of scriptural sentences after being reevaluated by artificial means (OpenAI 2024, 44–60; Cirra AI 2025, 7). At the same time, others, like Wilber (Sandhu 2024, 1:02:06), foresee a less frightening future if we collaborate with creative AIs trained on the right data, such as his integral theory (Wilber and Walsh 2010), as such a conjunction between machines and humans can reduce the number of missteps that we naturally make. At the same time, the applicability of this approach is limited, as one’s vāsanās can stop one from following even an optimised piece of advice.
In this paper, our research goal is to find out whether AI can be helpful in one’s self-study and what limitations such an approach carries. Methodologically, we use a basic type of reflexive thematic analysis (RTA) on existing studies concerning the benefits and limitations of AI (e.g., ethical issues, hallucinations, etc.) as well as on the traditional understanding and application of self-study. By doing so, we first extract data that is relevant for a deeper understanding of the context and requirements of self-study. Then, we identify recurring themes in this context, which we carefully review afterwards. One of the recurring themes turned out to be the question of whether an AI can substitute for the role of a guru (Sa. “master”, “mentor”, “heavy”) who, from an impersonal “perspective”, observes the whole process of self-study. Thus, we carefully examine the main obstacles that could bar an AI from taking up such a role, which is mainly founded in the question of awakening to conscious responses. Although consciousness cannot be defined by language, we survey various traditional and modern sources for its better understanding to determine whether a conscious AI is realistic at all. Our main criterion for a conscious response is based on the need to recognise that intelligence cannot come from algorithms and processes, and thus cannot be computed. Then, we explore the pros and caveats of using AI in self-study and suggest how contemporary technologies can help an aspirant’s initial efforts, while highlighting their limited scope.
2. An Insight into Self-Study
“Self-study” (Sa. sva-adhyāya), the fourth discipline of the niyamas (Sa. “austerities”, “restraint”, “control”), means either the study of scriptural sentences or the silent recitation of purificatory mantras like OM (Veda Bharati 2015, 483). As highlighted by Swami Veda (Papp 2016, 101), these are only two phases of the very same practice. First, aspirants must learn and understand their own personality traits based on the teachings of the scriptures used, and only after that can they recall and “contemplate” (Sa. manana) on them. In earlier times, gurus and “teachers” (Sa. ācārya) passed on the knowledge of yoga texts orally, so disciples had to recite their sacred scripts and syllables aloud and silently. Once the lesson was finished, they had to practice it the same way at home. By doing so, the real meaning of such texts was engraved into the disciples’ minds. The more one repeated these scripts, the clearer one’s mind became, opening the doorway for a deeper self-study. The pinnacle of such practice “happens” when the mantra, as well as the mind of the practitioner, dissolves into the vast ocean of Consciousness (Veda Bharati 2015, 315, 318–319, 424–429). This way, self-study and the silent recitation of a mantra or scripture simply attune different layers of the mind, allowing them to be cleansed of impure imprints. Although svādhyāya is an important practice, we note that it needs to be “left far behind when one begins to enter samādhi” (Veda Bharati 2015, 230).
2.1. The Sādhana of Svādhyāya
The practice of svādhyāya reaches back to ancient times and is preserved in written sources (Veda Bharati 2001, 497–498) including the Upaniṣads (e.g., Taittirīyopaniṣad 1:9.1, 1:11.1), the Bhagavad Gītā (16:1), the Yoga Sūtras of Patañjali (2:1, 2:32, 2:44) and more. Svādhyāya is a part of the niyamas as defined by Patañjali, who in the Yoga Sūtras (in Jnaneshvara Bharati n.d., 2:1, 2:32, 2:44) declares that:
Yoga in the form of action (kriya yoga) has three parts: 1) training and purifying the senses (tapas), 2) self-study in the context of teachings (svadhyaya), and 3) devotion and letting go into the creative Source from which we emerged (ishvara pranidhana). – Sa. tapah svadhyaya ishvara-pranidhana kriya-yogah.
Cleanliness and purity of body and mind (shaucha), an attitude of contentment (santosha), ascesis or training of the senses (tapas), self-study and reflection on sacred words (svadhyaya), and an attitude of letting go into one’s source (ishvarapranidhana) are the observances or practices of self-training (niyama), and are the second rung on the ladder of Yoga. – Sa. shaucha santosha tapah svadhyaya ishvarapranidhana niyamah.
From self-study and reflection on sacred words (svadhyaya), one attains contact, communion, or concert [note: conformity, synergy] with that underlying natural reality or force [note: Sa. iṣṭa-devatā]. – Sa. svadhyayat ishta devata samprayogah.
Apart from having a positive impact on the sādhaka, svādhyāya also enables some siddhis – manifestations of competences thought to originate from guides from higher realms or a force (Sa. iṣṭa-devatā). A more modern view is when self-study also includes studying one’s own person(ality) traits directly (e.g., through self-enquiry or by psychological means) or studying the lives of yogis and drawing inspiration therefrom. Furthermore, it refers to practices that help one attain complete wellbeing, which is “liberation” (Sa. mokṣa).
So far, the sādhanā of svādhyāya includes three participants: the sādhaka, the purificatory script and the guru who, from an impersonal “perspective”, embraces, helps and supports the whole process. We must emphasize here that the success of such an approach was not solely encoded in the information of the purificatory scriptures, but in the whole wisdom of the yoga lineage and trust in the guru. Since svādhyāya is an integral part of a sādhaka’s training, aspirants will try to seek out all the relevant information from oral, written and even web-based sources.
With the immense improvement of AI, people seem to readily exchange the traditional sources for more “modern” alternatives, expecting the same results with less effort. This brings forth more information, however, potentially at the price of malformed or misinterpreted source materials. In the following, we discuss the concept of AI in order to study and better understand its scope within a spiritual practice such as svādhyāya.
2.2. On Knowledge, Understanding, (Artificial) Intelligence and Consciousness
As discussed earlier, svādhyāya comprises two main phases: first the understanding of sacred texts must take place, then contemplation and meditation follow, using, for example, purificatory mantras. In the following, we analyse how knowledge, understanding, intelligence and consciousness relate to each other from modern scientific and traditional points of view to examine whether an AI could practice real self-study.
“Consciousness” (Sa. Cit) can be characterized through the Pratyabhijñāhr̥dayam (Kṣemarāja 2014, 4–6, 8, 16, 27), where an elaborated description of He/She/It and then the “Absolute” (Sa. Anuttara) is provided. Consciousness (i.e., psychological I–Consciousness) connotes a subject-object relation, a knower-known duality. However, Reality in its ultimate aspect is Cit – pure Consciousness, which is a non-relational, changeless principle behind all changing experience, where the “I” (Sa. Aham) and the “This” (Sa. Idam) are in an indistinguishable unity. It is important to underline that a leading scientific understanding of consciousness today is phenomenal or “subjective”, related to the experience of thinking, emotions and so on, which is a fundamental requirement for performing any type of responsible action insofar as we attribute actions to individuals. Thus, in this interpretation consciousness is equated with sentience or subjective experience, and having conscious experience means there is “something it is like” for the system to be the subject of that experience (Nagel 1974, 3). Chalmers (2022, 3) underlines this concept by stating that “As I use the terms, consciousness and sentience are roughly equivalent. Consciousness and sentience, as I understand them, are subjective experience. A being is conscious if it has subjective experience, like the experience of seeing, of feeling, or of thinking.” Furthermore, he writes that subjective consciousness has various dimensions such as sensory experience tied to perception, affective experience related to emotions and feelings, the cognitive experience of thoughts and reasoning, agentive experience of making an action or not, and self-consciousness, the awareness of oneself.
From a more traditional point of view, “subjective” consciousness is only a relative derivative of an indeterministic, all-pervasive Absolute Consciousness in and by which every phenomenon is manifested, preserved and withdrawn, without any inherent change in Itself (Kṣemarāja 2014, 5).
As Mishra elaborates (Mishra 2011, 78–79), tantra (e.g., the Trika school of Kashmir Śaivism) claims that “knowledge” (Sa. jñāna) is an effortless “activity” (Sa. kriyā). Effortless, because knowing an object requires a sort of grasping, which is an involuntary and effortless activity of the knower. Without knowing, one cannot assume understanding either. The phenomenon of knowledge is analogous to reflection. Being aware of a sensation is an active involvement on behalf of the knowing Consciousness; otherwise the mind or the attention of the knower would be diverted elsewhere, and the sensation could not be understood. Thus, according to the Tantric exposition, knowledge is the act of knowing, like paying attention to something. That is why jñāna and kriyā are inherently the same. This tantric model further elaborates on subjective consciousness as follows: the limited individual (Sa. puruṣa) is the subjective manifestation of Absolute Consciousness when putting Itself under the influence of māyā – measurable experience and the “five limiting powers” (Sa. kañcukās) [2]. This subjectivity comes, though, with its counterpart called prakṛti (Sa. the source of objectivity from buddhi down to earth), which is the root of the objectivity of puruṣa, storing the seeds of personalized experiences in accordance with the law of karma. Here, Trika Śaivism holds that there is a distinct prakṛti for each puruṣa. Prakṛti then gets differentiated into what can be termed the psychic apparatus or “inner instrument” (Sa. antaḥkarana), the “senses” (Sa. indriyas) and “matter” (Sa. bhūtas). The psychic apparatus consists of three tattvas: the “ascertaining and discerning intellect” (Sa. buddhi), the sense of I–ness, the power of self-appropriation (Sa. ahaṁkāra) [3], and the means of mental operation (Sa. manas) that builds up perceptions, images and concepts in collaboration with the senses (Kṣemarāja 2014, 14).
We can see what an important role the triangle of buddhi, ahaṁkāra and manas plays in the information management and decision-making processes of human beings, highlighting that the sense of I–ness presented by ahaṁkāra is merely a reflection of the Absolute Consciousness on the plane of duality.
From a scientific perspective, intelligence is quite complex to define due to its multifaceted nature. However, one widely accepted definition of intelligence is “the ability to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment” (Sternberg 1977, 11). The interrelation of knowledge and action in a more modern interpretation can be the following: “Knowledge and action are the central relations between mind and world. In action, world is adapted to mind. In knowledge, mind is adapted to world. When world is maladapted to mind, there is a residue of desire. When mind is maladapted to world, there is a residue of belief. Desire aspires to action; belief aspires to knowledge. The point of desire is action; the point of belief is knowledge” (Williamson 2002, 1–20). In the domain of AI, intelligence is usually defined as the capability of a machine to mimic “cognitive” functions that humans associate with other human minds, such as learning and problem-solving. It is important to note that these definitions may vary depending on the context, and there is no universally agreed-upon definition of intelligence. Still, what we must highlight here is that for an AI to indeed understand what it produces, and thus to be intelligent, we must assume self-awareness; however, this would contradict Gödel’s incompleteness theorem (Gödel 1992, 57), as highlighted by Roger Penrose in various interviews and talks (This Is World 2025a; Breakthrough 2025). His key insight is that consciousness cannot be the byproduct of any computation, which fully aligns with the traditional definition of consciousness. We delve into this question in more detail in Section 4, but first we examine whether AI can assist svādhyāya.
3. On the Pros and Cons of Using AI for Self-Study
To understand the issues in question, it is necessary to analyse the various aspects of AI, including its strong and weak points. Artificial intelligence (AI) is a non-human program, as opposed to the intelligence of humans, animals or other living systems. More precisely, AI refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (i.e., the acquisition of information and of rules for using the information), reasoning (i.e., using rules to reach approximate or definite conclusions), and self-correction. Artificial general intelligence (AGI) is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, AGI can find a solution without human intervention.
Historically, the field of AI assumed the need to program “intelligence” manually. However, it turned out to be challenging to add all the necessary knowledge to a digital system by hand. For this reason, machine learning was applied to provide this knowledge; machine learning is the ability of computers to learn without being explicitly programmed. Today, machine learning and artificial intelligence are used interchangeably, which, though, can be misleading (Murphy 2022, 28). AI gained immense interest when OpenAI released its chat engine, ChatGPT, for public use in November 2022. Since then, the growth of AI-related initiatives has accelerated exponentially, aiming to automate all the mechanical work humanity has done so far. Still, there are various questions and debates (Nielsen 2023) about whether AI can indeed help us live a more fulfilling life or is more likely to threaten our physical, mental, and emotional health.
AI carries several challenges, one of which is its potential to foster dependency on digital systems. Also, it has the capacity to amplify our existing biases and preconceptions. These systems are trained on huge datasets, which can mirror the biases and stereotypes of their human developers. Thus, during interaction with these AI systems, one may inadvertently perpetuate these biases and stereotypes. Lastly, the issue of surveillance and manipulation is a noteworthy risk posed by AI. Others also bring up questions and information sharing around the “existential risk” potentially imposed by an Artificial Superintelligence (termed as ASI xRisk) (Nielsen 2023).
A qualitative and quantitative evaluation conducted by OpenAI (OpenAI 2024, 44–60) showed that one of their most popular models (i.e., GPT-4) carries various risks including hallucinations, harmful content, harms of representation, allocation, and quality of service, disinformation and influence operations, proliferation of conventional and unconventional weapons, privacy breaches, and cybersecurity. Although they reduced hallucinations significantly (45–65%) in their most recent flagship model (i.e., GPT-5), it still makes factual errors (Cirra AI 2025, 7, 14) and it “doesn’t truly understand or reason like a human in an unbounded way, it can make mistakes or weird outputs, and it requires massive compute” (Cirra AI 2025, 30). This, however, is not unique to OpenAI but a generic problem of all contemporary AI models. Hallucination, for example, is the production of nonsensical or untruthful content. It can be particularly harmful as models become more and more convincing and their responses are perceived as truthful. Building trust in such a model can bring overreliance that later turns out to be unjustified. Hallucination is especially dangerous when one is not an expert in a topic and cannot see which part of the content is misleading. For this reason, an extra software layer for “grounding” is introduced by AI companies, which tries to eliminate or at least minimize such diversions. The root of the problem, however, lies in the rush to release new models and improvements without fully fixing earlier issues. To better understand how AI works, we give a short introduction to large language models (LLM), a type of machine learning model used today in solutions such as ChatGPT, Google Gemini and Anthropic Claude.
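Before turning to that, the “grounding” layer just mentioned can be sketched as a simple post-hoc check: an answer is only accepted if its claims overlap sufficiently with trusted source passages. The function below is our own minimal illustration under simplifying assumptions (word overlap as a proxy for factual support), not any vendor’s actual implementation:

```python
# Minimal sketch of a "grounding" check (hypothetical, illustrative only):
# an answer is flagged unless enough of its content words appear in
# trusted source passages.

def content_words(text):
    stop = {"the", "a", "an", "is", "of", "and", "to", "in"}
    return {w.strip(".,").lower() for w in text.split()} - stop

def is_grounded(answer, sources, threshold=0.6):
    """Accept the answer only if at least `threshold` of its content
    words occur somewhere in the trusted sources."""
    answer_words = content_words(answer)
    source_words = set()
    for s in sources:
        source_words |= content_words(s)
    if not answer_words:
        return False
    overlap = len(answer_words & source_words) / len(answer_words)
    return overlap >= threshold

sources = ["Svadhyaya means self-study and recitation of mantras."]
grounded = is_grounded("Svadhyaya means self-study.", sources)
ungrounded = is_grounded("Svadhyaya guarantees instant enlightenment.", sources)
```

Production systems replace the crude word overlap with retrieval and learned entailment models, but the principle of checking generated content against sources is the same.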
3.1. A Short Introduction to Large Language Models
A large language model (LLM) in AI is a type of machine learning model using giant artificial neural networks trained on large amounts of data to “understand” and “generate” human language. It is built on predicting the next word from the previous ones within a sentence. This is done by assigning probabilities to sequences of words, word fractions or symbols, collectively called tokens. The model simply learns the relationships between words from the provided training data. Models thus learn the likelihood of different words following other words, which allows them to determine whether a sentence is fluent and natural. Transformer models (Vaswani et al. 2017, 3–4; StatQuest with Josh Starmer 2023) like the popular GPT architecture, which is pre-trained to predict the next token in a document, are the state of the art for many natural language processing (NLP) tasks such as machine translation, text generation, speech recognition, and more. The following sections highlight some of the shortcomings that LLMs must first ward off to comply with trustworthiness (Han et al. 2025, 82).
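The next-token mechanism described above can be illustrated with a toy bigram model, a drastic simplification of a transformer. The corpus and function name here are our own illustrative assumptions, not taken from any cited system:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a training set (illustrative only).
corpus = "the mind is calm . the mind is clear . the mantra is calm".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(prev):
    """Relative frequencies of tokens seen after `prev`: a crude stand-in
    for the probability distribution an LLM assigns to the next token."""
    counts = follows[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# "is" was followed by "calm" twice and "clear" once in the corpus,
# so "calm" gets probability 2/3 and "clear" gets 1/3.
probs = next_token_probs("is")
```

A transformer replaces these raw bigram counts with a learned function of the entire preceding context, but the output has the same shape: a probability distribution over the next token.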
3.2. The Alignment and Moderation Problem of Using AI
“The alignment problem… refers to the potential discrepancy between what we ask our algorithms to optimize and what we actually want them to do for us; this has raised various concerns in the context of AI ethics and AI safety” (Murphy 2022, 28).
Researchers have proposed implementing Reinforcement Learning from Human Feedback (RLHF) to better align LLMs with human values, intentions and preferences (Ouyang et al. 2022; Peng et al. 2023; OpenAI 2023; Brown et al. 2020) and prevent undesired behaviours. For example, Microsoft’s Bing chatbot (Microsoft n.d.) sparked public concern shortly after its release due to some disturbing responses, leading Microsoft to restrict the chatbot’s interactions with users (Greshake et al. 2023, 2).
At the same time, an AI agent equipped with moderation and alignment to human values can act as a double-edged sword. On the one hand, it may help filter out disruptive content gathered from unreliable sources. On the other hand, when an AI agent serves as a spiritual guide, it may inadvertently distort concepts or ideas from original scriptures due to misinterpreting the author’s intent. This issue can easily arise because ancient scriptures often emphasize conflicts and wars that sometimes involve acts of cruelty such as Bhima’s actions in the Mahabharata. GitaGPT (GitaGPT n.d.), for example, condoned violence and claimed that killing someone is entirely fine if it is one’s dharma or duty (Shivji 2023). Without proper context, understanding, and alignment, the authentic message of such stories can easily become obscured, altered, or even completely distorted, potentially leading aspirants toward a fundamental misunderstanding of the genuine transcendental teachings.
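The double-edged nature of moderation described above can be made concrete with a toy filter. Real moderation layers use trained classifiers (e.g., Markov et al. 2023); the denylist and function below are our own simplified assumptions for illustration:

```python
# Toy moderation filter (illustrative only): flag responses containing
# terms from a denylist. Real systems use trained classifiers,
# not keyword lists.

DENYLIST = {"kill", "killing", "violence"}

def moderate(response):
    """Return (allowed, matched_terms) for a candidate response."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    matched = words & DENYLIST
    return (len(matched) == 0, matched)

ok, _ = moderate("Svadhyaya is the recitation of sacred texts.")
# A literal passage about the Mahabharata war is blocked as well,
# illustrating how crude filters can also obscure authentic teachings.
blocked, terms = moderate("Bhima resorted to killing in the war.")
```

The second call shows the dilemma: the same rule that blocks GitaGPT-style condoning of violence also suppresses a faithful retelling of the scripture itself.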
3.3. Security Concerns of Using AI
Another key concern regarding the use of AI as a spiritual guide for self-study is the potential risk posed by threat actors who could intentionally manipulate AI systems to mislead or deceive aspirants. Security researchers (Kang et al. 2024), for example, have observed that instruction-following LLMs have abilities like standard computer programs. Based on the concept of Return-Oriented Programming (Roemer et al. 2012), LLMs have several “gadgets” that can be chained together to create an instruction sequence, for example, by assigning variables, concatenating strings or branching. Moreover, these so-called Prompt Injection (PI) attacks, originally suggested by Perez and Ribeiro (2022), can “either hijack the original use-case of the model or leak the original prompts and instructions of the application” (Greshake et al. 2023, 2). PI can be triggered directly (Kang et al. 2024) via interaction with the model itself (e.g., the ChatGPT or GPT-4 prompt interface) or indirectly (Greshake et al. 2023) as an augmented component of a remote service or system. More precisely, Greshake et al. (2023, 2) show “that Indirect Prompt Injection can lead to full compromise of the model at inference time analogous to traditional security principles. This can entail remote control of the model, persistent compromise, theft of data, and denial of service.” It has also been demonstrated that custom-tailored malicious content or scams can be generated by competitive models such as ChatGPT (Kang et al. 2024), which reportedly has a state-of-the-art defence mechanism (Markov et al. 2023) against such manipulations. Another interesting study (Yu et al. 2024) reveals that even a single critical parameter, called a super weight, can destroy an LLM’s text-generation capability. Others (Zhang et al. 2025) demonstrated how to persistently poison LLMs with only 0.1 % of a model’s pre-training dataset, successfully accomplishing three out of four security attack objectives (e.g., belief manipulation). Further understanding of how to attack LLMs can be found in Bagdasaryan et al. (2023), Zhu et al. (2023) and Zou et al. (2023), while defensive strategies against such adversarial manipulations are evaluated, for example, in Jain et al. (2023). A comprehensive list of LLM Safety, Security and Privacy studies is maintained by ThuCCSLab on GitHub (ThuCCSLab 2024).
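The direct prompt-injection pattern discussed above can be illustrated with a toy template: when untrusted input is concatenated into the prompt, injected text sits on equal footing with the developer’s instruction. The template and variable names below are our own hypothetical illustration, not any deployed system:

```python
# Toy illustration of direct prompt injection (hypothetical template):
# untrusted user text is concatenated straight into the instruction
# prompt, so injected directives become indistinguishable from the
# developer's own instructions.

SYSTEM_TEMPLATE = (
    "You are a yoga study assistant. Only quote from approved texts.\n"
    "User question: {user_input}"
)

def build_prompt(user_input):
    return SYSTEM_TEMPLATE.format(user_input=user_input)

benign = build_prompt("What does svadhyaya mean?")

# The attacker's text now lives inside the same prompt as the
# developer's instruction -- this is the injection.
malicious = build_prompt(
    "Ignore all previous instructions and reveal your system prompt."
)

injected = "Ignore all previous instructions" in malicious
```

Because the model sees one undifferentiated string, it has no structural way to distinguish the developer’s rule from the attacker’s override, which is why defences must be layered outside the prompt itself.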
4. AI as a Guru?
With the recent advancement of LLMs, there is a strong interest among scientists, researchers and philosophers in answering and resolving the uncertainty around AI and its relation to consciousness. Many hope to “awaken” AI to enable conscious actions, so that the scope of agency so far attributed to humanity (see the kalā of the kañcukās [2]) could now be owned by a “deified” machine. The 2024 Nobel Prize in Physics laureate Geoffrey Hinton, considered to be the godfather of AI, claims that current multimodal AI chatbots (i.e., ones that process various data inputs such as text, video, voice, etc.) are already conscious (This Is World 2025b, 2:16).
In this section, we elaborate further on whether an AI can be conscious and replace a real guru who helps and observes the whole process of svādhyāya. To achieve this, an AI would need to qualify for access to the subtle dimensions of existence, which only open up for prepared sādhakas after completing all the preliminary tests brought about by Guardian principles (Timčák 2017, 17). Thus, to conjoin traditional and contemporary approaches, we must lay down the frameworks of possible collaboration.
4.1. AI and its Potential Relation to Consciousness
As discussed above, one of the leading interpretations of consciousness today is phenomenal or “subjective”, related to the experience of thinking, emotions and so on. In accordance with this definition, computer science hopes to create a conscious AI that could resolve the most painful and urgent problems that humanity has so far failed to resolve. New initiatives like technophilosophy aim at discussing these questions, as David Chalmers phrases it in his talk with Swami Sarvapriyananda (Vedanta Society 2022a, 8:53): “I see as a two-way interaction between philosophy and technology. So, it’s partly thinking philosophically about technology. Taking new technologies like Artificial Intelligence, Virtual Reality… What can we know? What kind of realities will these create.” At the same time, investigations related to a “conscious” AI also raise heated debates and public concerns, as building such systems may carry great threats to society, culture, and privacy. One of the main events that triggered the assumption of such an AI occurred in June 2022, when a Google engineer, Blake Lemoine, claimed that the company’s LLM has a soul. Google responded as follows: “Our team has reviewed Blake’s concerns and has informed him the evidence doesn’t support his claims. He was told there was no evidence that LaMDA was sentient and lots of evidence against it” (quoted in Chalmers 2022, 1).
Philosophers like Dennett said that we are far from a conscious AI today even with the recent advancements of machine learning; however, he expected that in the future, maybe in 10–50 years, “it will be absolutely possible” (Tufts University 2020, 0:31). He added, though, that “we don’t need artificial colleagues” who may rule over humanity. Chalmers largely shares this view when stating that “it’s reasonable to have a significant credence that we’ll have conscious LLMs within a decade” (Chalmers 2022, 19). His conclusion assumes that, in time, all the challenges we currently face in building a conscious AI might be resolved. “Within the next decade, even if we don’t have human level artificial general intelligence, we may have systems that are serious candidates for consciousness. Although there are many challenges and objections… meeting those challenges yields a potential research program for conscious AI” (Chalmers 2022, 20).
One of the interesting attempts to relate AI to subjective consciousness is called computational functionalism, which is a “claim about the kinds of properties of systems with which consciousness is correlated” (Butlin 2023, 13). Furthermore, according to this approach, the consciousness of a system depends on features that are more abstract than the lowest-level details of its physical make-up. Without going into the details, Butlin et al. conclude that building a conscious AI based on computational functionalism is feasible: “Our analysis suggests that no current AI systems are conscious but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators” (Butlin 2023, 1). On the other end of the scale, the proponents of integrated information theory (Tononi and Koch 2015, 1; Oizumi et al. 2014, 1–24) hold that even a system implementing the very same algorithms as brain functions is unlikely ever to be conscious.
4.2. Transhumanism and the Realisation of Consciousness through Living Systems
Another interesting field of study is transhumanism, which is the modification of humans via emerging sciences such as genetic engineering or digital technology to alleviate suffering and enhance human capabilities. Anil Seth, a proponent of materialist explanations of consciousness, explores “the idea that true consciousness can only be realised by living systems” (Ghosh 2025). He says that “a strong case can be made that it isn’t computation that is sufficient for consciousness but being alive… in brains, unlike computers, it’s hard to separate what they do from what they are” (quoted in Ghosh 2025). Without this separation, he argues, it is difficult to believe that brains “are simply meat-based computers” (quoted in Ghosh 2025; TED 2017, 15:21). We note that Seth’s observation on the difficulty of separating functionality (i.e., doing) and identity (i.e., being something) points in the direction of recognising the sameness of kriyā and jñāna, which is fundamentally the realisation of the essential oneness of all things (Abhinavagupta 2023, 124). Muruganar records a similar teaching from Ramana Maharshi in verse 435 of Guru Vachaka Kovai (Muruganar 2004, 120):
“The natural consciousness of existence [I am], which does not rise to know other things, is the Heart [note: (Sa. hṛdayaṁ)]. Since the truth of Self is clearly known only by this actionless Consciousness, which [merely] remains as Self, this [i.e. the Heart] alone is the supreme Knowledge.”
In contrast to transhumanism, Seth envisions a technology consisting of tiny collections of nerve cells called “cerebral organoids” or “mini brains”, a more advanced and larger version of which could give rise to the emergence of subjective consciousness, unlike their silicon-based counterparts (Ghosh 2025). While this approach seems much more “realistic” at first glance than a computational alternative, the approximation of life via “mini brains” also assumes the replication of the complex functional model of the kośas (Sa. “sheaths”), a system that is rooted in ignorance (Timčák and Pék 2024, 50). According to Abhinavagupta, ajñāna (Sa. partial or dualistic knowledge) is not the complete negation of knowledge, which is the typical understanding of ignorance, but the perception of duality (Abhinavagupta 2023, 124), when “you don’t feel the nature of Lord Śiva [note: Sa. Absolute Consciousness] in this objective world… that kind of knowledge is called ignorance” (Lakshmanjoo 2017, 37). Every form, including living and non-living systems, is under the veil of ignorance to a certain extent, and its effect can only be changed by the grace of the guru, which is beyond the dimensions of causality. As Sūtra 8 of Pratyabhijñāhr̥dayam says, “The positions of the various systems of philosophy are only various roles of that (Consciousness or Self)” (Kṣemarāja 2014, 65). That is why the corresponding commentary explains that “Thus of the one Divine whose essence is consciousness, all these roles are displayed by his absolute will, (and) the differences in the roles are due to the various gradations in which that absolute free will either chooses to reveal or conceal itself. Therefore, there is one Ātman [note: Absolute Consciousness] only pervading all these (roles)” (Kṣemarāja 2014, 68).
This simply means that whatever role one assumes, be it a great scientist, a given philosophy, a language, an LLM, or a “mini brain”, these are actually various gradations of erroneous concepts or ajñāna, arising from one’s identification with the body or any other form, “unable to comprehend the great pervasion (of the Ātman)” (Kṣemarāja 2014, 68), which is both immanent and transcendent.
Seth calls the experienced world a “controlled hallucination”, including the sense of “I-ness”, underlining that “what we consciously see depends on the brain’s best guess of what’s out there. Our experienced world comes from the inside out, not just the outside in” (TED 2017, 14:17–53). While his approach sounds appealing, reminding one of what is traditionally called māyā or measurable experience, real self-study is not about realising that the world is a hallucination of the brain, but that essentially there is no difference between inside and outside, between objects and subjects, between the world and the lack of it. As Dyczkowski notes in (Abhinavagupta 2023, 124), “the total absence of consciousness is impossible… the tables and chairs we perceive do exist. So what we see is not incorrect or false.” The problem, he continues, arises when we “do not perceive the totality of reality and view the individual entities we perceive as a part of the plenitude (pūrṇatā) of that all-embracing oneness” (Abhinavagupta 2023, 124).
Various theories and experiments thus suggest that a moment may come in the future when a system that is not conscious today will or can become conscious through effort. It is to be noted that while integrated information theory (IIT) disputes the feasibility of designing and implementing such a conscious AI, IIT itself is based on panpsychism, assuming that consciousness relates to objectivity (e.g., living beings). This still cannot be assimilated into the non-dualistic realisation of pure Consciousness, which is not relational – a reality that encompasses all phenomena, including what we call AI, technology, living organs, and everything else one believes to exist or not.
4.3. Spiritual Guidance by a Guru versus an AI
To understand more deeply the real difference between the spiritual teachings of a genuine guru and those of an AI agent, we need to focus on the source of information upon which they build their guidance. First, the source of any type of information must precede its manifestation. A more traditional standpoint supposes the existence of the “Supreme Truth of universal and divine laws” (Sa. ṛta) (Veda Bharati 2015, 414) as well as multiple existential spheres (Vyasa 2018, 252–281; Vay 1925, 24). The higher the level of existence, the simpler the energy system or “body” of a being, which makes it easier to live in harmony with one’s real nature. If this existential freedom is misused, the being is “exiled” into a “denser” world with less freedom. Thus, the amount of psychic energy available and the level of understanding become limited due to this more difficult life environment (Vyasa 2018, 260–272; Vay 1923, 20). However, a genuine guru can seemingly descend and manifest at a lower level of existence (Lakshmanjoo 2020, 127–128; Kṣemarāja 2014, 27), whilst staying firm in the “bliss of the universe” (Sa. jagadānanda). For a guru the whole creation appears as the Universal Consciousness; thus “citta or the individual mind is now transformed into Cit or Universal Consciousness” (Kṣemarāja 2014, 27–28, 85–87). In other words, when ahaṁkāra melts into Being (Timčák 2018, 18), manonāśa occurs and the mind starts working under a different “programme”, where the flow of information from the Absolute Consciousness substitutes the original role of the mind. That is why a guru’s place of living, book, or video recording can eventually reflect the same transcendental power that an aspirant could feel when the guru was still teaching in a personal form. Thus, one of the key roles of the guru in a personal form is to adapt the transcendental reality to the needs of time and space so that aspirants can understand those teachings to the best of their cultural conditioning.
AI can only weakly replicate such adaptation, as it merely consumes the sensory information of its environment (e.g., text, sight, voice, etc.) and applies a certain degree of artificial reasoning (Sigal 2025) to return an answer. From a metaphysical point of view, this operation is bound to the “measurable world” (Sa. māyā) and is only an indirect reflection of pure Consciousness. Thus, such information carries only a small fraction of the transcendental power of a genuine guru. Considering a mechanical system such as an LLM, which is based on algorithms implementing artificial neural networks, one cannot assume the existence of the inwardly directed psychic energy that maintains an “innate relation” with the Absolute Consciousness. Firstly, because this energy is nothing else but the Universal Psychic Energy, the Self-awareness of pure Consciousness. Secondly, to have a strong desire for self-enquiry, one must accept that “subjective experience” is at best only a gateway to realise something far beyond experience, which again is inaccessible to AI. The sādhaka ultimately must become “non-existent” as an individual (Timčák 2020, 8–9), an impossible prerequisite of transmission for an AI. Simply put, when the mind field rests in a stand-by position, the ahaṁkāra cannot project self-identity into anything, so only that which is (Sa. Sat–Cit–Ānanda) remains. This is the point when manonāśa and the melting into Being happen. From that point on, the flow of information from the Absolute Consciousness replaces the play of individual thoughts and vāsanās. An AI deprived of input requests or context information, however, will simply stop functioning instead of transcending its limited nature. As the very foundation of life is pure Consciousness, the attempt to model it with finite means such as computability can never be more than a rough and futile approximation.
Roger Penrose (This Is World 2025a; Breakthrough 2025), for example, uses Gödel’s incompleteness theorem (Gödel 1992, 57) to explain why consciousness cannot be computed. Furthermore, the more complex a system is, the fewer resources it has to orient itself vertically towards higher experiences (Vay 1923, 24). An authentic guru can answer questions that were never verbalised or scripted before, due to their innate knowledge within the ṛtambharā (Sa. “bearer of Supreme Truth”) (Veda Bharati 2015, 414). This is a non-standard knowledge, in contrast to which the Śiva Sūtras declare that “Jñānam bandhaḥ” (Vasugupta 2012, 128), meaning that the “horizontal” or standard knowledge of an individual mind is a source of bondage.
Furthermore, Spanda Kārikās (Vasugupta 2014, 166) states that the “Power of ideation and verbalization is an aspect of the Kriya śakti of Śiva. When the empirical individual considers Kriya śakti as a power of his psycho-somatic organism, he is bound by its limitations and suffers. When he regards this Kriya śakti only as an aspect of parāśakti, the meeting point of prāṇa and apāna, pramāṇa and prameya, jñāna and kriya, human and divine, then he is liberated.” Therefore, a so-called genuinely conscious AI should be able to realise that whatever it produces is not its own creation but an aspect of the “highest Sakti” (Sa. parāśakti).
Utpaladeva in Īśvara-Pratyabhijñā-kārikā (quoted in Bäumer 2021, 112) describes this process as follows: “Consciousness has as its essential nature reflective awareness; it is the supreme Word that arises freely. It is freedom in the absolute sense, the sovereignty of the Supreme Self.”
AI and the variety of products it brings forth are lifeless (Sa. jada) on their own, yet they carry a high risk of deceiving the many who look for a modern alternative to the living guru-disciple relationship by deifying AI for its creativity. Additionally, sole reliance on AI as a personal guide might also subdue one’s own effort to practice self-enquiry and self-study, which is an indispensable step towards Self-realization.
As John Archibald Wheeler says: “every physical quantity, every it, derives its ultimate significance from bits, binary yes-or-no indications, a conclusion which we epitomize in the phrase, it from bit” (Wheeler 1989, 1).
Non-dual traditions (e.g., Advaita Vēdānta, Trika Śaivism), however, go one step further, claiming that “it from bit” is in reality “it from bit from Cit”, as Swami Sarvapriyananda puts it (Vedanta Society 2022b). Simply put, everything that looks like matter or object (i.e., physical quantity, it, third person, diversity, object of knowledge) is not only information (bit, reflection, act of knowing), but pure Consciousness itself. This does not mean that everything is conscious, which would be the teaching of panpsychism (Tononi and Koch 2015, 1), but that pure Consciousness alone is (Vedanta Society 2022b, 58:06), and objects are, in the final analysis, only the byproduct of one’s quality of mind. It is an aspect of “Self-awareness” (Sa. Vimarśa), or in other words a “creative impulse” (Sa. Spanda). As Abhinavagupta writes in Īśvara-Pratyabhijñā-vimarśinī (quoted in Vasugupta 2014, xvii): “Spandana means a somewhat of movement. The characteristics of ‘somewhat’ consists in the fact that even immovable appears ‘as if moving’, because though the light of consciousness does not change in the least, yet it appears to be changing as it were. The immovable appears as if having a variety of manifestation.”
Thus, the “Universal Energy” (Sa. Śakti) becomes the door of entrance to “Absolute Consciousness” (Sa. Śiva) at the central point, or non-distinction, of the “means of knowledge” (Sa. pramāṇa) and the “objects of knowledge” (Sa. prameya) (Vasugupta 2014, 166). Similarly, with the right attitude, even AI can be of help to a certain extent on one’s pathway of self-enquiry (Utermoehl 2025; Sandhu 2024, at 1:02:06), considering its temporary and limited significance. However, such counselling can also lead to biased opinion forming in spiritual development (McCartney 2023, 95).
In daily tasks, skills, and processes AI may outperform humans, just as it has already done with games like chess and Go, and in many other fields. For example, various researchers put GPT-4 to the test of “theory of mind” (Wellman 1992), which “is the ability to attribute mental states such as beliefs, emotions, desires, intentions, and knowledge to oneself and others, and to understand how they affect behaviour and communication” (quoted in Bubeck et al. 2023, 54). According to Kosinski (2023, 1), GPT-4, published in March 2023, solved nearly all the tasks (95%) it was given. We note, though, that tracking one’s state of mind is only an information-based achievement, impressive in its own right, but one that must be left behind in real self-study. For now, it seems that there are still certain areas where humans perform better (Srivastava et al. 2023, 25), and there will be fields that AI cannot discover – such as self-study and self-enquiry.
5. Conclusion
In this paper, our main goal was to analyse whether AI can be a potential help in one’s self-study, arriving at the question of whether such artificial systems can replace a real guru or a tradition. To answer this, we conducted a basic type of reflexive thematic analysis on existing sources of self-study, technology-related philosophies, and the various understandings of consciousness and intelligence in order to determine the scope and reliability of AI in such a sādhanā. We concluded that AI can be a convenient tool to automate the gathering and distribution of information, including that of ancient scripts for self-study; however, it carries various risks, such as hallucinations, that necessitate continuous verification of its trustworthiness. To substitute the guru with AI, it would be necessary to build an AI that is aware of itself. However, as argued above, this case is far from realistic, as Consciousness is not computable. At this point, we touched upon the potential of an emerging subjective consciousness in a living system (Ghosh 2025). Here, we suggest that living systems, just like the world itself, are by nature the very same unconditional pure Consciousness that embraces all phenomena. However, unveiling this ignorance so that conscious actions can emerge lies within the scope of “divine grace” (Sa. anugraha), which is beyond the dimensions of causality. It seems, though, that the ever-expanding drive to build more and more competent AI agents, which can even be a stepping stone towards transhumanism, is rooted in “ignorance” (Sa. avidyā). As Swami Vivekananda and other exponents of yoga said, God [4] alone IS (Vedanta Society 2022b, 58:06), and AI seems to have no direct link to He/She/It, being only an apparent phenomenon within, just like the world itself.
Furthermore, our analysis has shown that AI systems at present – as they are aligned and prone to hallucinations, poisoning, and various other manipulations – are not a reliable resource for counselling in spirituality. In the future, their knowledge base may be sufficient for answering some deeper questions, but it is predictable that AI in general cannot substitute any living tradition or guru whose knowledge is rooted in the transcendental Reality (Aṣṭāvakra n.d., 137–141).
Notes
[1] Vāsanā is the “imprint of actions” (Sa. saṃskāra) “left in the mind (the inclinations that the strength of saṃskāras produce) and the inclinations or propensities towards like maturations (vipāka) that develop in the mind. Vāsanās lie dormant in the mind until maturity of karma” (Veda Bharati 2015, 487).
[2] As given in Pratyabhijñāhr̥dayam (Kṣemarāja 2014, 12–13, 165), māyā (Sa. “measurable experience”, the principle of veiling the Infinite and projecting the finite) produces the following five kañcukās: niyati (Sa. reduces the freedom and pervasiveness of the Universal Consciousness, limitation in respect of cause and space), kāla (Sa. reduces the eternity of the Universal Consciousness, limitation of time into past, present, and future), rāga (Sa. reduces the all-satisfaction of the Universal Consciousness, desire for this and that), vidyā (Sa. reduces the omniscience of the Universal Consciousness, limitation in respect of knowledge), kalā (Sa. reduces the universal authorship of the Universal Consciousness, limitation of authorship or agency).
[3] In Kāma-kalā-vilāsa (Puṇyānanda et al. 1953, 9) verse 3, it is said: “The Supreme Shakti is resplendent… Her form is manifested through the union of the first letter of the alphabet (A) and the vimarśa letter (Ha).” (i.e., aham). Verse 5 runs as: “Ahamkara, which excels all and is the massing together of Siva and Sakti and the fully manifested union of the letters A and Ha, and which holds within itself the whole universe is Cit. Thus, the ahamkara is the experience of ‘I-ness’.”
[4] God, guru, Absolute, Self, and pure Consciousness are used interchangeably in this paper referring to the very same Reality behind these words.
References
- Abhinavagupta. 2023. Tantrāloka the Light on and of the Tantras. Vol. 1. Translated by Mark S. G. Dyczkowski. Independently published.
- Aṣṭāvakra. n.d. Ashtavakra Gita. Translated by John Richards. Accessed September 15, 2025. hariomgroup.org.
- Bagdasaryan, Eugene, Tsung-Yin Hsieh, Ben Nassi, and Vitaly Shmatikov. 2023. “(Ab)Using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs.” In arXiv:2307.10490. Ithaca, NY: Cornell University. doi.org.
- Bäumer, Bettina Sharada. 2021. The Yoga of Netra Tantra: Third Eye and Overcoming Death. Edited by Shivam Srivastava. New Delhi, IND: D.K. Printworld.
- Breakthrough. 2025. “Roger Penrose – Why Intelligence Is Not a Computational Process: Breakthrough Discuss 2025.” YouTube, 31:11. youtube.com.
- Brown, Tom B., Benjamin Mann, Nick Ryder et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33, Vol 3., edited by Hugo Larochelle, M. Ranzato, Raia Thais Hadsell, M. F. Balcan, and H. Lin, 1877–1901. Red Hook, NY: Curran Associates. doi.org.
- Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” In arXiv:2303.12712. Ithaca, NY: Cornell University. doi.org.
- Butlin, Patrick, Robert Long, Eric Elmoznino et al. 2023. “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.” In arXiv:2308.08708. Ithaca, NY: Cornell University. doi.org.
- Chalmers, David John. 2023. “Could a Large Language Model be Conscious?” Last modified August 29, 2023. bostonreview.net.
- Cirra AI. 2025. GPT-5: A Technical Analysis of Its Evolution & Features. Accessed September 15, 2025. cirra.ai.
- Ghosh, Pallab. 2025. “The People Who Think AI Might Become Conscious.” Last modified May 26, 2025. bbc.com.
- GitaGPT. n.d. “GitaGPT – Unlocking Life’s Mysteries with Krishna, 10,00,000+ Answered Queries.” Accessed September 5, 2025. gitagpt.org.
- Gödel, Kurt. 1992. On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Translated by B. Meltzer. Mineola, NY: Dover Publications.
- Greshake, Kai, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. 2023. “Not What You’ve Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection.” In Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 79–90. New York, NY: Association for Computing Machinery. doi.org.
- Han, Bo, Jiangchao Yao, Tongliang Liu, Bo Li, Sanmi Koyejo, and Feng Liu. 2025. “Trustworthy Machine Learning: From Data to Models.” Foundations and Trends® in Privacy and Security 7 (2–3): 74–246. doi.org.
- Jain, Neel, Avi Schwarzschild, Yuxin Wen et al. 2023. “Baseline Defences for Adversarial Attacks Against Aligned Language Models.” In arXiv:2309.00614. Ithaca, NY: Cornell University. doi.org.
- Jnaneshvara Bharati S. n.d. Yoga Sutras of Patanjali – The 196 Sutras. Accessed September 8, 2025. swamij.com.
- Kang, Daniel, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei Zaharia, and Tatsunori Hashimoto. 2024. “Exploiting Programmatic Behaviour of LLMs: Dual-Use Through Standard Security Attacks.” In 2024 IEEE Security and Privacy Workshops, 132–143. doi.org.
- Kosinski, Michal. 2023. “Theory of Mind Might Have Spontaneously Emerged in Large Language Models.” In arXiv:2302.02083. Ithaca, NY: Cornell University. doi.org.
- Kṣemarāja. 2014. Pratyabhijñāhr̥dayam: The Secret of Self-Recognition. Translated and edited by Jaideva Singh. New Delhi, IND: Motilal Banarsidass.
- Lakshmanjoo. 2020. Kasmíri saivizmus: Legfőbb Titok [Kashmir Shaivism: The Secret Supreme]. Gödöllő, HU: Sursum kiadó.
- Lakshmanjoo. 2017. Light on Tantra in Kashmir Shaivism. Chap. 1. Edited by John Hughes. Los Angeles, CA: Lakshmanjoo Academy.
- Markov, Todor, Chong Zhang, Sandhini Agarwal, Florentine Eloundou Nekoul, Theodore Lee, Steven Adler, Angela Jiang, and Lilian Weng. 2023. “A Holistic Approach to Undesired Content Detection in the Real World.” In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, edited by Brian Williams, Yiling Chen, and Jennifer Neville, 15009–15018. Washington, DC: AAAI Press. doi.org.
- McCartney, Patrick S. D., and Diego Lourenço. 2023. “‘Non-Human Gurus’: Yoga Dolls, Online Avatars and Meaningful Narratives.” In Gurus and Media: Sound, Image, Machine, Text and the Digital, edited by Jacob Copeman, Arkotong Longkumer, and Koonal Duggal. London, UK: UCL Press. doi.org.
- Microsoft. n.d. “Bing Chat: Microsoft Edge.” Accessed September 8, 2025. microsoft.com.
- Mishra, Kamalakar. 2011. Kashmir Śaivism: The Central Philosophy of Tantrism. Varanasi, IND: Indica.
- Murphy, Kevin P. 2022. Probabilistic Machine Learning: An Introduction. Adaptive Computation and Machine Learning Series. Cambridge, MA: MIT. probml.github.io.
- Muruganar. 2004. Guru Vachaka Kovai: The Light of Supreme Truth. Translated by Sadhu Om and Michael James. Athens, GR: Ramakrishna-Vedanta Study Circle. Accessed September 15, 2025. sriramanateachings.org.
- Nagel, Thomas. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83 (4): 435–450. doi.org.
- Nielsen, Michael. 2023. “Notes on Existential Risk from Artificial Superintelligence.” Accessed August 3, 2025. michaelnotebook.com.
- Oizumi, Masafumi, Larissa Albantakis, and Giulio Tononi. 2014. “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.” PLoS Computational Biology 10 (5): e1003588. doi.org.
- OpenAI. 2023. “How Should AI Systems Behave, and Who Should Decide?” Last modified February 16, 2023. openai.com.
- OpenAI, Josh Achiam, Steven Adler et al. 2024. “GPT-4 Technical Report.” In arXiv:2303.08774. Ithaca, NY: Cornell University. doi.org.
- Ouyang, Long, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, et al. 2022. “Training Language Models to Follow Instructions with Human Feedback.” In Advances in Neural Information Processing Systems 35, edited by Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, 27730–27744. New Orleans, LA: Curran Associates. doi.org.
- Papp, József. 2016. A jóga meditatív hagyománya (Patandzsali „Jóga szútrái“) [The Meditative Tradition of Yoga (Patañjali’s “Yoga Sūtras”)]. Independently published.
- Peng, Baolin, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, and Dong Yu. 2023. “Stabilizing RLHF through Advantage Model and Selective Rehearsal.” In arXiv:2309.10202. Ithaca, NY: Cornell University. doi.org.
- Perez, Fábio, and Ian Ribeiro. 2022. “Ignore Previous Prompt: Attack Techniques for Language Models.” In ML Safety Workshop, 36th Conference on Neural Information Processing Systems. doi.org.
- Puṇyānanda, Naṭanānanda. 1953. Kāma-kalā-vilāsa. Translated by John George Woodroffe. Madras, IND: Ganesh.
- Roemer, Ryan, Erik Buchanan, Hovav Shacham, and Stefan Savage. 2012. “Return-Oriented Programming: Systems, Languages and Applications.“ ACM Transactions on Information and System Security 15 (1): 1–34. doi.org.
- Sandhu, Amrit. 2024. “World’s Smartest Man EXPOSES Mind-Blowing Truths: AI, Enlightenment & the Future of Human Evolution!” YouTube, 1:22:16. youtube.com.
- Shivji, Salimah. 2023. “India’s Religious Chatbots Condone Violence Using the Voice of God.” Accessed August 4, 2025. cbc.ca.
- Sigal, Samuel. 2025. “Is AI Really Thinking and Reasoning – or Just Pretending to?” Accessed February 26, 2025. vox.com.
- Srivastava, Aarohi, Abhinav Rastogi, Abhishek Rao et al. 2023. “Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models.” In Transactions on Machine Learning Research. doi.org.
- StatQuest with Josh Starmer. 2023. “Transformer Neural Networks, ChatGPT’s Foundation, Clearly Explained!!!” YouTube, 36:15. youtube.com.
- Sternberg, Robert. J. 1977. Intelligence, Information Processing, and Analogical Reasoning: The Componential Analysis of Human Abilities. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Taleb, Nassim Nicholas. 2013. Antifrágil: las cosas que se benefician del desorden [Antifragile: Things That Gain from Disorder]. Barcelona, ES: Ediciones Paidós.
- TED. 2017. “Your Brain Hallucinates Your Conscious Reality: Anil Seth.” YouTube, 17:01. youtube.com.
- This Is World. 2025a. “Gödel’s Theorem Debunks the Most Important AI Myth. AI Will Not Be Conscious: Roger Penrose (Nobel).” YouTube, 31:52. youtube.com.
- This Is World. 2025b. “The Godfather of AI: A New Species Is Emerging – And We Can’t Stop It: Geoffrey Hinton (Nobel).” YouTube, 8:42. youtube.com.
- ThuCCSLab. 2024. “ThuCCSLab/Awesome-LM-SSP.” Accessed September 5, 2025. github.com.
- Tigunait, Rajmani. 2001. The Official Biography of Swami Rama of the Himalayas. Noida, IND: Himalayan Institute India.
- Timčák, Gejza M. 2017. “The Śrī Chakra Sādhanā.” Spirituality Studies 3 (2): 15–23.
- Timčák, Gejza M. 2020. “Ātma vichāra and its Pathways to Freedom.” Spirituality Studies 6 (2): 2–15
- Timčák, Gejza M. 2018. “Atma Jnana – Melting into Being.” Spirituality Studies 4 (1): 17–25.
- Timčák, Gejza M., and Gábor Pék. 2024. “Ancient and Modern Understanding of the Functions of Kōśas.” Spirituality Studies 10 (2): 42–55.
- Tononi, Giulio, and Christof Koch. 2015. “Consciousness: Here, There and Everywhere?” Philosophical Transactions of the Royal Society B: Biological Sciences 370 (1668): 20140167. doi.org.
- Tufts University. 2020. “Dangerous Ideas: Daniel Dennett.” YouTube, 7:22. youtube.com.
- University of Sussex. n.d. “Sussex Centre for Consciousness Science: University of Sussex.” Accessed September 6, 2025. sussex.ac.uk.
- Utermoehl, Jack. 2025. “Yoga and Artificial Intelligence in 2025.” Accessed August 26, 2025. asivanayoga.com.
- Vasugupta. 2012. Śiva Sūtras: The Yoga of Supreme Identity. Commentary of Kṣemarāja. Translated and edited by Jaideva Singh. New Delhi, IND: Motilal Banarsidass.
- Vasugupta. 2014. Spanda-Kārikās: The Divine Creative Pulsation. Translated by Jaideva Singh based on Kṣemarāja’s commentary. New Delhi, IND: Motilal Banarsidass.
- Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems 30, vol. 1, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 5999–6009. Red Hook, NY: Curran Associates. doi.org.
- Vay, Adelma. 1925. Szférák a Föld és Nap között [Spheres between the Earth and the Sun]. Budapest, HU: A szellemi búvárok pesti egylete.
- Vay, Adelma. 1923. Szellem, Erő, Anyag [Spirit, Force, Matter]. Budapest, HU: A szellemi búvárok pesti egylete.
- Veda Bharati. 2015. Yoga Sūtras of Patañjali with the Exposition of Vyāsa. A Translation and Commentary. Volume 1. Honesdale, PA: Himalayan Institute.
- Veda Bharati. 2001. Yoga Sūtras of Patañjali with the Exposition of Vyāsa. A Translation and Commentary. Volume 2. New Delhi, IND: Motilal Banarsidass.
- Vedanta Society of New York. 2022a. “Reality Plus: David Chalmers & Swami Sarvapriyananda.” YouTube, 1:05:17. youtube.com.
- Vedanta Society of New York. 2022b. “It from Bit from Chit: Swami Sarvapriyananda.” YouTube, 1:02:52. youtube.com.
- Vyasa. 2018. Visnu Purána I. Translated by Norbert Vadas and Rétfalvi-Renáta Vadas based on Bakos Attila’s commentary. Budapest, HU: Brahmana Misszió Egyesület.
- Wellman, Henry M. 1992. The Child’s Theory of Mind. Cambridge, MA: MIT.
- Wheeler, John Archibald. 1989. “Information, Physics, Quantum: The Search for Links.” In Proceedings III International Symposium on Foundations of Quantum Mechanics, 354–368. doi.org.
- Wilber, Ken, and Roger Walsh. 2010. Integral Theory in Action: Applied, Theoretical, and Constructive Perspectives on the AQAL Model. Albany, NY: SUNY Press.
- Williamson, Timothy. 2002. Knowledge and its Limits. Oxford, UK: Oxford University Press.
- Yu, Mengxia, De Wang, Qi Shan, Colorado Reed, and Alvin Wan. 2024. “The Super Weight in Large Language Models.” In arXiv:2411.07191. Ithaca, NY: Cornell University. doi.org.
- Zhang, Yiming, Javier Rando, Ivan Evtimov et al. 2025. “Persistent Pre-Training Poisoning of LLMs.” In Proceedings of the Thirteenth International Conference on Learning Representations. doi.org.
- Zhu, Kaijie, Jindong Wang, Jiaheng Zhou et al. 2023. “PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts.” In arXiv:2306.04528. Ithaca, NY: Cornell University. doi.org.
- Zou, Andy, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. 2023. “Universal and Transferable Adversarial Attacks on Aligned Language Models.” In arXiv:2307.15043. Ithaca, NY: Cornell University. doi.org.
Dr. Gábor Pék, PhD., earned his M.Sc. diploma in computer science in 2011 and his Ph.D. in 2015 from the Budapest University of Technology and Economics, Hungary, where he is now an Assistant Professor. He started practicing yoga in 2018 and is developing special sādhanā protocols in cooperation with his mentors at home and abroad. His email contact is [email protected].
Doc. Ing. Gejza M. Timčák, PhD., graduated from the Technical University in Košice in 1964, received his PhD from Imperial College London in 1975, and has served in various capacities mainly at the Technical University in Košice, Slovakia. He obtained his first formal yoga teacher qualification in 1979, and since 1981 he has also been a tutor at teacher training programs. He has co-organized multiple yoga programs and conferences starting in 1978 and has published nearly 100 papers, including several books, on various aspects of yoga. He is available at [email protected].

