Saturday 10 September 2022

LINEAR A, PRE-GREEK LANGUAGE, UNTIL 1450 BCE, THEN LINEAR B, DECODING GREEK CIVILISATION AND SOCRATIC CULTURE

 




PEARL BARLEY GOOD FOR DYSLIPIDEMIA



Not at all. Dhyan (meditation) is needed when Gyan (the pursuit of knowledge) is practised.

Gyan is not about acquiring knowledge; it is about removing layers of ignorance.

With the removal of each layer (deprogramming), chaos is created in the mind and body. To cope with these changes you need to practise meditation.

Layer after layer, your meditation gets deeper and deeper, and you pass through all the stages of meditation, just like the Buddha.

What are the states of meditation?

There are four states of mind - beta, alpha, theta and delta.

The beta state (14–100 cps): the normal awake state, characterised by a brain-wave frequency of 14 cps (cycles per second) to 100 cps. The higher the frequency, the more erratic the mind.

The alpha state (8–13 cps): the first state of meditation. It is a dream state.

The theta state (4–7 cps): similar to the alpha state but deeper, characterised by sudden intuitive insights.

The delta state (3 cps or lower): deep sleep, in which there is no consciousness.

Dreaming occurs in the alpha and theta states, not in the delta state.
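The frequency bands above can be summed up as a simple lookup. Here is a minimal sketch in Python, using exactly the ranges given in the text; the function name and the exact boundary handling are illustrative assumptions, not part of the original.

```python
def brain_state(cps):
    """Map a brain-wave frequency (cycles per second) to one of the
    four states described above. Boundary handling is an assumption."""
    if cps >= 14:
        return "beta"    # normal awake state, 14-100 cps
    elif cps >= 8:
        return "alpha"   # first state of meditation, 8-13 cps
    elif cps >= 4:
        return "theta"   # deeper, intuitive state, 4-7 cps
    else:
        return "delta"   # deep sleep, 3 cps or lower
```

For example, `brain_state(10)` falls in the 8–13 cps band and so returns `"alpha"`.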

But it is possible to be awake in the delta state - that is where a Buddha is born. This is called Turiya, the fourth state, which lies beyond ordinary human consciousness, beyond both waking and sleep.

And the good news is:

You will pass through all the stages of meditation without noticing them consciously.


The devotee has surrendered.

CONSCIOUSNESS

Since many of the comments below reference terms like “emergent property” and the “hard problem”, let me digress, and in doing so ultimately address the question.

Consciousness, in a broad sense, is something that we experience in ourselves and infer to obtain in some subset of other living things, such as (for the sake of argument) higher animals. In a rough sense, and at the risk of some circularity, we can think of the state of being conscious as the having of a mind. Now taking mind in this broad sense, it seems a little strange to ask about mind being an attribute of matter, as what we generally mean by mind includes all kinds of things that we attribute more to nervous systems and the creatures that have them. But let’s break things down, and in doing so invoke one of the more useful distinctions in recent philosophy of mind, that of the “hard vs easy problem” of consciousness. This distinction, introduced by the philosopher David Chalmers, has succeeded in focusing and clarifying what we are asking about when we ask questions about consciousness.

To start, let’s describe (not explain!) consciousness, and for the sake of simplicity let’s limit the discussion to humans, recognizing that the distinctions in play likely apply to many other creatures such as dogs, cats, bunny rabbits and so on. As a start we can say that consciousness is that phenomenon that includes our thinking, having emotions, having sensations, reporting on things, reporting on ourselves, taking in information, communicating, remembering, forgetting, learning, loving, hating, enjoying, considering, reacting to, understanding, not understanding, discriminating and so on. You get the idea. Now the thing is this. We can take all this stuff and seem to characterize it in terms of input and output to a system. So for example, conversing can be analyzed as verbal information going in, and coherent, relevant verbal output coming forth. Sensing can be understood as being in the presence of an external stimulus - noise, light, physical contact, odor - and then either reacting (behaving, reporting, etc.) in some coherent manner, or being disposed to react at a later time. Being in an emotional state, say “happy”, can be understood as reacting, producing verbal reports, acting in a certain way and exhibiting certain kinds of facial expressions that fall into certain categories, by which, in observing them, we conclude that someone is in fact happy, or conversely sad or in pain. Now we can go on and on in this manner and account for everything we see in living things that we believe to be conscious. That is, we can characterize everything in terms of observable events, even if the account is very complex. And of course, the fact that we can do this is what allows us to have such disciplines as psychology or psychiatry.

Now if we take this to one conclusion, it would seem that in principle we could (even if it is currently far beyond our technological reach) build a robot, an automaton, that perfectly duplicated the behavior of a live human (think of the Data character on Star Trek). Now suppose we accomplished this, and suppose the simulation was so good that it could fool us (for those who are familiar with it, the so-called Turing Test), i.e. we could hang out with this robot and think it was a real person. Now coming back to Chalmers’ easy/hard problem distinction, in creating this robot we would have solved the easy problem of consciousness. That is, we would have worked out all the input/output and functional relationships needed - the software, if you will - to produce observable human behavior and psychology. Now let’s bring back the question of emergence. At this point we could safely say that the psychological properties this robot exhibited would be emergent properties, in that they are not to be found in any individual components of the hardware that make up its robot brain. Instead, they are realized at a higher level of description, that of the whole. This is analogous to liquidity being emergent at the level of description of large ensembles of water molecules. So in an analogous manner, were we to create an artificial mind of this sort, where the full range of human psychology would “emerge” from vast ensembles of synthetic circuitry, we would now see how mind can be an emergent property of brains.

Now clearly this would be a monumental achievement. But in something of a tongue-in-cheek sense, Chalmers calls it “easy” in that it is “simply” a matter of facing up to and unraveling the enormous complexity, extreme as it may be. But on this view a second problem yet remains: the hard problem, and the spirit of that problem is reflected when the layperson asks “but does the robot feel anything?”. Here is the hard problem. We can go about the business of neurobiology - in an effort that is informed and enriched by cognitive science, AI, etc. - and continue to work towards figuring out all the processing that allows coherent human behavior and psychology to stream out of a brain receiving information that penetrates that bony skull through the spinal cord and cranial nerves. And we are in fact slowly doing that, and gradually beginning to piece things together. Moreover, in taking on this research program (in the broad sense) we have implicitly accepted what might be called an assumption of psycho-neuronal correspondence, where we expect to continue on a path - however long - on which we will continue to find the underlying neuronal processes, circuitry and networks that account for everything that we can observe humans doing - from walking down the street and gracefully negotiating obstacles, to yelling out in anger or waxing poetic over a sunset, to reporting on their subjective experience, as when saying “I am in pain”. But the key word here is observe. We can observe this stuff in others as we would in a sophisticated robot. (We could even have a robot wired and “programmed” in such a way that it cries out in pain if we dismantle it!) But something else seems to be going on in us as well. We very much seem to also be experiencing this stuff from the inside, from the first person, where “it is like something” to be in these states.

Suppose we observe a simple creature writhing in pain. Suppose also that we are in some distant future where we have a far more complete neurobiology. Now imagine we observe the creature’s nervous system using very sophisticated functional imaging. We see all the noxious inputs to its nervous system, we observe output directing its movements in escape behavior, we even trace memory stores retrieving information from internal maps of its environment to facilitate efforts to escape, and we see memory traces forming, such that this event and its associated circumstances can be recalled to avoid the painful stimuli in the future. We may even see neuronal maps of the organism as a whole, implying a representation of the organism to itself, allowing its internal states to register that the whole is in a state of distress. We can go on and on, reaching deeper levels of organization. But with all this we might still ask, “but is this organism actually feeling anything?” In the language of philosophy of mind, or consciousness studies, with such a question we would be asking if the organism had private subjective states, or first-person phenomenality or phenomenology. We might also equivalently ask if we could attribute so-called “qualia” to that organism.

The hard problem of consciousness is the problem of explaining qualia. Qualia are the particular individual feelings of what it is like to be a conscious system as felt from the inside, from the first-person subjective side. So if the organism from our example had “pain qualia”, that would mean that there was in fact a subjective occurrence of pain, that it felt like something to the organism, and where that feeling like something, and what that is actually like (see Thomas Nagel’s “What Is It Like to Be a Bat?”), could only be known by the organism in question. It is also the reason we might feel compassion for the creature.

There are as many examples of qualia as there are states of mind. Color qualia provide a particularly concrete example. When staring at the blue of the sky, a very specific and private subjective “what blue looks like” seems very much to be present to us. This is blue qualia. Now the problem is this. What in our neurobiology can possibly explain how and why this occurs? Our neurobiology can in principle explain all the observables and other reportables: what we associate blue with, why it’s our favorite color, when we first saw it, when we say we are seeing it vs when we say we are not, how the networks that internally represent blue objects link to our linguistic centers so that we can call up the word “blue”, and on and on. But how, in all this elaboration of input/output and functional neuronal relationships, does the purely subjective blueness of blue come into the account, i.e. blue qualia?

To get more concretely to the heart of the hard problem, I am going to borrow from the philosopher Frank Jackson (google Mary the color scientist) and ask the following question: how would a fully colorblind neurobiologist - i.e. one who saw entirely in black and white - ever come to know what blue qualia are like by studying human brains? The problem this question poses captures the hard problem, which is equivalently the problem of how to explain qualia.

Whatever position we take on the hard problem (see later in this essay), understanding what it is helps clear up a frequent confusion that recurs all too often. Specifically, this is the incorrect claim that neurobiology is somehow weighing in on all this. In avoiding this, what is critical to understand is that our current neurobiological research program can only address (and there is a long, very long way to go in just doing that) the so-called easy problem. The reason can be made clear, and involves understanding the so-called Explanatory Gap. Our neurobiology looks at various aspects of the third-person observable stuff and finds (as experimental methods increase in sophistication) deeper and deeper levels of neurobiological correlates that obtain. This is the easy problem. The hard problem emerges when we ask how we can take all those structure-function accounts and squeeze an account of the what-it-is-like of subjective experience, i.e. qualia, out of the mix. As David Chalmers would point out, every conceivable neurobiological explanation that one offers of any behavioral, psychological, cognitive, emotive, etc. event is still completely consistent with that event occurring in a flesh-and-blood robot, i.e. in the absence of qualia or subjective states. That is, there is nothing in the neurobiological explanation or account that guarantees that it actually “feels like something” to be the system in that state for which the structure-function account applies. Moreover, it is in no way clear what such an account could possibly be like, i.e. how it could cross this explanatory gap and, using the Frank Jackson example, allow our blind neurobiologist to know color qualia, and thereby know what colors actually look like. Now notice how including subjective accounts in a research program does not solve the problem and cross the explanatory gap.
So for example, scanning a brain - even at a level that fully describes neural networks in the visual cortex - while a subjective report is issued, such as “I am seeing blue” or equivalently “I am experiencing blue qualia”, will not accomplish this, since all it can do is determine that when V1 (the visual cortex) is in such and such a state, blue is seen. The account could even follow and explain neuronal events all the way down to the verbal subjective report above. But the hard problem is that none of this translates into an account of the “what it is like” of the specific subjective quality of blueness as it obtains, for example, as distinct from redness. None of this would seem to cross the explanatory gap and allow the blind neurobiologist to know blue qualia, even when the full neurobiological account is known.

Now what then do we make of the hard problem, and can we bring it all the way back - after this long digression - to the question of emergence vs fundamental property? At the risk of oversimplifying, there are two fundamental ways to go here, and each lies at an opposite end of the current highly polarized debate. The first of these is what some have called Type A Materialism, or eliminativism. This approach, while highly counterintuitive to many, is rather tidy both philosophically and neurobiologically. The stance here is that qualia quite simply don’t exist. The eliminativist argues that our insistence that there are these fundamentally inner private subjective states - where the particular and distinct “what it is actually like to the conscious agent” qualities obtain - is in fact an illusion of sorts. Nothing of this sort obtains at all. Moreover, fictional characters like Data - who look and act exactly like us - are in fact no different from us in this sense. In other words, our private, fundamentally hidden inner world of subjective qualities is just as nonexistent as his. But why then do we speak of qualia? On their view, our cognition, including non-conscious (in their sense, non-reportable) cognition, is so incredibly rich that we are led to a false belief that a purely subjective side to it obtains. The analogy is often made to what was once known as the élan vital. At one time life and living things were felt to involve some special thing, a life force or life energy, that made living things fundamentally different from mechanical things. And, extending the analogy, the argument is made that we might have spoken of a “Hard Problem of life”, where in addition to explaining reproduction, growth, feeding and - in the case of some living things - locomotion, etc., we had to somehow also explain the élan vital.
We now know, however, that our prior intuition that there is such a thing as an élan vital was misguided; we were fooled - by the incredible biological complexity of living things - into believing there was something more than mere structure and function to explain. And so it is for the eliminativist with qualia, the hard problem and the explanatory gap. Once we completely unravel the easy problem (on their view mislabeled, since it really is the “only” problem), we will see that there is in fact no hard problem, no explanatory gap, and no qualia. On this view, qualia and the subjective private side of mind are neither emergent nor fundamental. They simply, like the élan vital, don’t exist. What is emergent, however, is our strong tendency to claim that they are there, as if present to us in some undeniable, self-verifying fashion. So on this view the blind neurobiologist will, after fully decoding the brain, realize that there was no purely subjective side of blueness to learn about, and he or she will in fact see, in neurobiological terms, why we are so compelled to think that there is. Proponents of this view include Daniel Dennett and Paul and Patricia Churchland.

Now at the other end of the debate are the Naturalistic Dualists. These individuals argue that qualia are real, the explanatory gap is real, and the hard problem is intractable given our current ontology (the basic furniture of reality upon which we build physics). They argue that there surely is a fundamentally subjective side to reality, and that this must be built into our explanatory framework if we are ever going to build a full science of consciousness. Now one point of confusion comes up here that must be addressed. There is often the claim that such contemporary versions of dualism are Cartesian and also at odds with the current progress of neurobiology. They are not. Cartesian Dualism 1) made a claim to a mind stuff, a mental substance - a ghost in the machine - that existed along with extended matter. Contemporary dualism does not make this overly strong claim. 2) Cartesian Dualism held that mind stuff could at some level work in ways independent of the machinery of the brain, thereby violating our currently held view that all mental events, down to the smallest level of detail, correspond to events in the brain (perhaps also violating physical conservation laws!). Naturalistic Dualists do not make this claim. They grant the progress and validity of our neurobiological research, and the one-to-one correspondence between mental and neuronal/physical events.

There is also the claim that Naturalistic Dualists hold that qualia, the hard problem and the explanatory gap are somehow beyond science. They are not saying this. Instead they are saying that our ontology must be broadened to include subjective states as somehow being fundamental, and that these must then be built into future neurobiological explanations. An analogy to physics helps here, in that physics has broadened its ontology at various points in its history - e.g. fields, and more recently spacetime. Similarly, if more radically, Naturalistic Dualists argue for a broadening of ontology to make room for subjective states.

Getting back to the question of emergence vs fundamentality, on the Naturalistic Dualist view subjectivity is not emergent, as liquidity is to water molecules (statistical mechanics is part of what bridges that gap there), but fundamental. Proponents of this view include David Chalmers and Thomas Nagel.

Now there is a middle ground of sorts that a number of neurobiologists default to. Philosophically, however, this gets quite technical, as it involves very formal discussions about the nature of metaphysical possibility, since it is something of a have-it-both-ways position. Worth noting is that many who default to this position are not sensitive to the complexities that it entails. (Eliminativism and Naturalistic Dualism are in many ways far simpler positions to take.) The position here is sometimes called non-reductive physicalism, or Type B Materialism. The idea is that the hard problem, qualia, and the explanatory gap are taken seriously, seen for what they are and not dismissed. However, there is a refusal to broaden ontology. And this is what gets it into very conceptually difficult territory, as it is in no way clear how one can have it both ways here. The empirical way out, and the hope of the non-reductive materialist, is that somehow concepts and neurobiological research will evolve to a point where we see how qualia obtain as an emergent phenomenon when large masses of neurons work together as they do in our brain. So just as statistical mechanics allowed us to see how liquidity emerges from water molecules without needing to expand our ontology to include liquidity as a fundamental category of reality, some future neurobiology will cross the explanatory gap and do the same for qualia, even if it is currently beyond our ability to foresee how such an account could obtain. Proponents of such a view include Brian Loar, Chris Hill and Brian McLaughlin. A final interesting spin on all this is the position of Colin McGinn, who argues that an account of how qualia obtain is there, but is as far beyond our cognitive ability as calculus is for dogs. In any event, hope this helps, and hope my position is not too evident. I’m a Naturalistic Dualist.





DISCIPLE COMES TO LEARN
