COURSE TITLE: COGNITIVE PSYCHOLOGY, LEARNING AND MEMORY
COURSE CODE: MPC-01
ASSIGNMENT CODE: MPC-01/TMA/2022
Disclaimer/Special Note: These are only sample answers/solutions to some of the questions given in the assignments. These sample answers/solutions are prepared by private teachers/tutors/authors to help and guide students in forming an idea of how to answer the questions given in the assignments. We do not claim 100% accuracy for these sample answers, as they are based on the knowledge and capability of the private teacher/tutor. The sample answers may be used as a guide/help for reference while preparing answers to the questions given in the assignment. As these solutions and answers are prepared by a private teacher/tutor, the chance of error or mistake cannot be denied. Any omission or error is highly regretted, though every care has been taken while preparing these sample answers/solutions. Please consult your own teacher/tutor before you prepare a particular answer, and for up-to-date and exact information, data and solutions. Students must read and refer to the official study material provided by the university.
Note: This TMA consists of ten questions, out of which you have to attempt any five. Each question carries 20 marks and should be answered in about 500 words. Send your TMA to the Coordinator of your Study Centre.
SECTION – A
Answer the following questions in 1000 words each.
- Define cognitive psychology and describe the domains of cognitive psychology.
Cognitive psychology involves the study of internal mental processes—all of the things that go on inside your brain, including perception, thinking, memory, attention, language, problem-solving, and learning. Learning more about how people think and process information not only helps researchers gain a deeper understanding of how the human brain works, but it allows psychologists to develop new ways of helping people deal with psychological difficulties.
While it is a relatively young branch of psychology, it has quickly grown to become one of the most popular subfields. Cognitive psychology became more predominant during the period between the 1950s and 1970s. Prior to this time, behaviorism was the dominant perspective in psychology, but researchers began to grow more interested in the internal processes that affect behavior instead of just the behavior itself. This shift is often referred to as the cognitive revolution in psychology. During this time, a great deal of research on topics including memory, attention, and language acquisition began to emerge.
DOMAINS OF COGNITIVE PSYCHOLOGY
- i) Cognitive Neuroscience: Only within the past few years have cognitive psychologists and cognitive neuroscientists formed a close working relationship. Thus far, this union has produced some of the most provocative developments in the study of our mental character. Cognitive psychologists are seeking neurological explanations for their findings, and neuroscientists are turning to cognitive psychologists to explain observations made in their laboratories. Every part of the cognitive process from sensation to memory is supported by basic electrochemical processes taking place in the brain and nervous system.
- ii) Perception: The branch of psychology directly involved with the detection and interpretation of sensory stimuli is perception. From experiments in perception we have a good understanding of the sensitivity of the human organism to sensory signals and, more important to cognitive psychology, of the way we interpret sensory signals. The experimental study of perception has helped identify many of the parts of this process. However, the study of perception alone does not adequately account for expected performance; other cognitive systems are involved, including pattern recognition, attention, consciousness, and memory.
- iii) Pattern Recognition: Environmental stimuli rarely are perceived as single sensory events; they usually are perceived as part of a more meaningful pattern. The things we sense – see, hear, feel, taste, or smell—are almost always part of a complex pattern of sensory stimuli. Think about the problem of reading. Reading is a complex effort in which the reader is required to form a meaningful pattern from an otherwise meaningless array of lines and curves. By organising the stimuli that make up letters and words, the reader may then access meaning from his or her memory. The entire process takes place in a fraction of a second, and considering all the neuroanatomical and cognitive systems involved, this feat – performed daily by all sorts of people – is wondrous.
- iv) Attention: Although we are information-gathering creatures, it is evident that under normal circumstances we are also highly selective in the amount and type of information to which we attend. Our capacity to process information seems to be limited at two levels, sensory and cognitive. If too many sensory cues are imposed upon us at any given time, we can become overloaded; if we try to process too many events in memory, we can also become overloaded, which may cause a breakdown in performance. All of us have felt this way at one time or another.
- v) Consciousness: Consciousness is defined as “the current awareness of external or internal circumstances.” Rejected as “unscientific” by the behaviourists, the word consciousness and the concept it represents did not simply fade away. For most people, conscious and unconscious thoughts are very real. For example, when you glance at your watch while studying and it reads “10:42 (P.M.),” you are conscious, or aware, of that external signal. However, your reading of the time also brings up another conscious thought, one that was initially activated by reading the time but comes from “inside.” That conscious thought might be, “It’s getting late: I’d better finish this chapter and go to bed.” Consciousness has recently gained new respectability and is now a concept studied seriously in modern cognitive psychology.
- vi) Memory: Memory and perception work together. The information available to us comes from our perception, short-term memory, and long-term memory. The most obvious long-term store is knowledge of language. We draw words from LTM and more or less use them correctly. In a fleeting second we are able to recall information about an event of years before. Such information does not come from an immediate perceptual experience; it is stored along with a vast number of other facts in LTM.
- vii) Representation of Knowledge: Fundamental to all human cognition is the representation of knowledge: how information is symbolised and combined with the things stored in the brain. This part of cognition has two aspects: the conceptual representation of knowledge in the mind, and the way the brain stores and processes information. The conceptual representations of different individuals can be considerably different. In spite of these inherent dissimilarities between representations of knowledge, most humans experience and depict experience in similar enough ways to get along well in the world. The content of this information also differs hugely between individuals. But our neurological web entraps information and experiences and holds them in structures that are similar in all human brains.
- viii) Imagery: Cognitive psychologists are especially interested in the topic of internal representations of knowledge. Mental images of the environment are formed as a cognitive map, a type of internal representation of the juxtaposed buildings, streets, street signs, stoplights, and so on. From these cognitive maps we are able to draw out significant cues. Although the experimental study of mental imagery is relatively new to psychology, some significant research has recently been reported.
- ix) Language: One form of knowledge shared by all human societies is the knowledge of language. Language is the principal means by which we acquire and express knowledge; thus, the study of how language is used is a central concern of cognitive psychology. Human language development represents a unique kind of abstraction, which is basic to cognition. Language processing is an important component of information processing and storage. Language also influences perception, a fundamental aspect of cognition.
- x) Developmental Psychology: Developmental psychology is another important area of cognitive psychology that has been intensely studied. Recent studies and theories in developmental cognitive psychology have greatly expanded our understanding of how cognitive structures develop. As adults, we have all lived through childhood and adolescence and we share maturational experiences with all members of our species.
- xi) Thinking and Concept Formation: Thinking is the crown jewel of cognition. Thinking is the process by which a new mental representation is formed through the transformation of information. Advances in cognitive psychology have led to a formidable arsenal of research techniques and theoretical models. An ability to think and form concepts is an important aspect of cognition. Similar concepts help in the understanding and processing of information. There is a considerable body of knowledge about the laws and processes of concept formation.
- xii) Human and Artificial Intelligence: Human intelligence includes the ability to acquire, recall, and use knowledge; to understand concrete and abstract concepts and the relationships among objects and ideas; to understand a language; to follow instructions; to convert verbal descriptions into actions; to behave according to rules; and to use knowledge in a meaningful way.
- Critically discuss Sternberg’s Information processing approach.
Another theorist firmly grounded in the information processing approach is Sternberg (1988). Sternberg’s theory suggests that development is skills-based and continuous rather than staged and discontinuous as stage theorists believe, and his focus is on intelligence. This focus on intelligence separates his ideas from stage theorists because it rejects the idea of incremental stages, but rather suggests that development occurs in the same way throughout life differentiated only by the expertise of the learner to process new information. First, and very importantly, Sternberg’s model does not differentiate between child and adult learning. Also, he deals solely with information processing aspects of development and does not incorporate any facets of biological development into his theory. Cognitive development is viewed as a novice to expert progression; as one becomes better at interaction and learning, one is able to learn more and at higher levels. Development changes as a result of feedback, self-monitoring, and automatisation. In this theory, intelligence is comprised of three kinds of information processing components: metacomponents, performance components, and knowledge-acquisition components.
In Sternberg’s (1988) model, each of these three components works together to facilitate learning and cognitive development. Metacomponents are executive in nature. They guide the planning and decision making in reference to problem solving situations; they serve to identify the problem and connect it with experiences from the past. There is, however, no action directly related to metacomponents, they simply direct what actions will follow. Performance components are the actions taken in the completion of a problem-solving task. Performance components go beyond metacomponents in that they perform the function also of weighing the merit and or consequences of actions in comparison to other options rather than simply identifying options. Sternberg’s third proposed type of intelligence is the knowledge-acquisition component. This type is characterised by the ability to learn new information in order to solve a potential problem. This type is much more abstract and may or may not be directly related to a current problem-solving task (Driscoll, 2001). This three-leveled view of intelligence comprises the componential aspect of Sternberg’s theory, but this is only one of three parts to his larger triarchic theory of intelligence (Kearsley, 2001).
Sternberg’s (1988) theory adds the components of feedback to theories of cognitive development; this suggests that an individual’s social interaction has some impact on cognitive development. In fact, one of the three parts of his theory is based on the context in which learning takes place; this subpart of the theory “specifies that intelligent behaviour is defined by the sociocultural context in which it takes place and involves adaptation to the environment, selection of better environments, and shaping of the present environment” (Kearsley, 2001). The addition of social context as a factor in cognitive development links Sternberg to the interactional theories of development of Bruner (1977, 1986) and Vygotsky (1978). These theories, and others of this type, are premised on the assumption that learning does not occur in a vacuum. Therefore, one must discuss the social and cultural contexts of learning. Driscoll (2001) says, “Of central importance is viewing education as more than curriculum and instructional strategies. Rather, one must consider the broader context in how culture shapes the mind and provides the toolkit by which individuals construct worlds and their conceptions of themselves and their powers”. These theories all work under the assumption that new information can most effectively be learned if the material can be matched to memory structures already in place (Winn and Snyder, 2001). Most theories hold that the mind contains some type of framework into which new information is placed. This structure is multi-leveled and has varying degrees of specificity. New information can be matched with, compared to, contrasted to, joined with, or modified to fit with existing structures. This in-place structural system allows for differing levels of complexity of information processing. The formation of and continual building of these structures, then, is critical in order for learners to process information in various ways and at higher levels.
- Explain the concept of IQ. Describe the history of measurement of intelligence.
The Concept of IQ
The most important development in the area of intelligence testing was the adaptation of Stern’s (1912) concept of an intelligence quotient in the Stanford-Binet Intelligence Scale. Stern put forth the notion of deriving an intelligence quotient (IQ), and Terman incorporated this concept into the 1916 version of the Stanford-Binet Scale. To obtain the IQ, a person’s mental age is divided by his/her chronological (real) age, and the quotient is multiplied by one hundred to avoid decimal fractions: IQ = (mental age ÷ chronological age) × 100.
IQ is a type of standard score that indicates how far above, or how far below, his/her peer group an individual stands in mental ability. The peer group score is an IQ of 100; this is obtained by applying the same test to huge numbers of people from all socio-economic strata of society and taking the average.
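The ratio formula can be illustrated with a short sketch. The function name and the example ages are ours; this shows only the arithmetic, not a real scoring instrument, and modern scales replace this ratio with a deviation score normed so the peer-group average is 100, as noted above.

```python
# Illustrative only: Stern's ratio IQ as adopted in the 1916 Stanford-Binet.
# IQ = (mental age / chronological age) * 100.

def ratio_iq(mental_age, chronological_age):
    """Return the classical ratio IQ, rounded to a whole number."""
    return round(100 * mental_age / chronological_age)

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # 120
# A child whose mental age equals the chronological age scores the average:
print(ratio_iq(8, 8))    # 100
```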
HISTORY OF MEASUREMENT OF INTELLIGENCE
Galton and Cattell
The first institutional effort to measure individual differences came from the British biologist Sir Francis Galton, who administered simple tests of visual discrimination, highest audible pitch, and kinesthetic discrimination. He thought that intelligence could be measured by tests of sensory discrimination, and believed that the ability to discriminate among heat, cold and pain could distinguish intelligent persons from mentally retarded ones. The term “mental test” was used for the first time in the psychological literature by the American psychologist James McKeen Cattell in 1890. He described a number of tests to measure the intellectual level of persons, which included measures of muscular strength, speed of movement, sensitivity to pain, keenness of vision and of hearing, weight discrimination, reaction time, memory, etc.
Contribution of Alfred Binet
Alfred Binet (1857-1911), at the request of the French government, set out to develop a series of tasks designed to measure individual differences, owing to the need for a reliable diagnostic system to identify children with mental retardation. The differences that he intended to delineate included a number of complex mental faculties, such as memory, imagery, imagination, attention, comprehension, aesthetic sentiment, moral sentiment, muscular strength, motor ability, and hand-eye coordination. Together with the physician Theodore Simon, Binet created the Binet-Simon scale, which was published in 1905. The 1905 Binet-Simon scale differed greatly from the scales we use today. The original scale consisted of 30 pass/fail items, and the tasks required a combination of mental and physical strategies to complete. The major breakthrough of the Binet-Simon scale was the complexity of the tasks and the breadth of mental abilities measured. Furthermore, intelligence could finally be measured during a clinical interview, as opposed to in laboratories or by using physical measurements. Although the Binet-Simon scale is quite antiquated with regard to today’s intelligence scale standards, many current-day innovations were derived from this scale: the concepts of strict administration, age-graded norms, and a rank order of items ranging from least to most difficult, to name a few. Furthermore, the inclusion of age-graded norms provided the first estimate of mental age. The first revision of the Binet scale came in 1908; however, the majority of the scale was left unchanged. By 1911, the scale was in its second revision and the age range had been extended through adulthood, as opposed to its previous use for the diagnosis of mental retardation in children. With the inclusion of adults, the scales needed to be rebalanced, which Binet did by including five items for each age level.
World War I and Army Personnel Selection
During World War I, in 1917, a committee of the American Psychological Association, under the leadership of Robert M. Yerkes, prescribed the use of intelligence tests for the rapid classification of army personnel. In view of this, American Army psychologists developed two tests: (i) Army Alpha and (ii) Army Beta. Both were group tests; the first was a language test, while the second was a non-language performance test.
SECTION – B
Answer the following questions in 400 words each.
- Describe the principles of information processing.
Information processing is the change (processing) of information in any manner detectable by an observer. Within the field of cognitive psychology, information processing is an approach to the goal of understanding human thinking; it began in the 1940s and 1950s. Educators are very interested in the study of how humans learn, because how one learns, acquires new information, and retains previous information guides the selection of long-term learning objectives and methods of effective instruction. To this end, cognition as a psychological area of study goes far beyond simply the taking in and retrieval of information. It is a broad field dedicated to the study of the mind holistically. Neisser (1967), one of the most influential researchers in cognition, defined it as the study of how people encode, structure, store, retrieve, use or otherwise learn knowledge. Cognitive psychologists hypothesise an intervening variable or set of variables between environment and behaviour, which contrasts their approach with behavioural theories. Even though there are widely varying views within cognitive psychology, there is general agreement among most cognitive psychologists on some basic principles of the information processing system.
The first is the assumption of a limited capacity of the mental system. This means that the amount of information that can be processed by the system is constrained in some very important ways. Bottlenecks, or restrictions in the flow and processing of information, occur at very specific points (e.g., Broadbent, 1975; Case, 1978).

A second principle is that a control mechanism is required to oversee the encoding, transformation, processing, storage, retrieval and utilisation of information (e.g., Atkinson & Shiffrin, 1971). That is, not all of the processing capacity of the system is available; an executive function that oversees this process will use up some of this capability. When one is learning a new task or is confronted with a new environment, the executive function requires more processing power than when one is doing a routine task or is in a familiar environment.

A third principle is that there is a two-way flow of information as we try to make sense of the world around us. We constantly use information that we gather through the senses (often referred to as bottom-up processing) and information we have stored in memory (often called top-down processing) in a dynamic process as we construct meaning about our environment and our relations to it. This is somewhat analogous to the difference between inductive reasoning (going from specific instances to a general conclusion) and deductive reasoning (going from a general principle to specific examples). A similar distinction can be made between using information we derive from the senses and information generated by our imaginations.

A fourth principle generally accepted by cognitive psychologists is that the human organism has been genetically prepared to process and organise information in specific ways. For example, a human infant is more likely to look at a human face than at any other stimulus. Other research has discovered additional biological predispositions to process information.
- Explain the cellular bases of learning and memory.
How does the activity of different brain regions change as memories are formed? Most models of the cellular bases of memory hold that memory results from changes in the strength of synaptic interactions among neurons in neural networks. How would synaptic strength be altered to enable learning and memory? Neil Carlson (1994) described some basic physiological mechanisms for learning new information. One basic mechanism is Hebb’s rule, named after the man who posited it in 1949, the Canadian psychologist Donald Hebb. Hebb’s rule states that if a synapse between two neurons is repeatedly activated at about the same time the postsynaptic neuron fires, the structure or the chemistry of the neuron changes and the synapse is strengthened; this is known as Hebbian learning. A more general, and more complex, mechanism is called long-term potentiation (LTP). In this process, hippocampal cells in neural circuits that are subjected to repeated and intense electrical stimulation become more sensitive to stimuli. That an excitatory input and postsynaptic depolarisation are needed to produce LTP is explained by the properties of the doubly gated N-methyl-D-aspartate (NMDA) receptor located on the dendritic spines of postsynaptic neurons that show LTP. Glutamate is the major excitatory transmitter in the hippocampus, and it can bind with NMDA and non-NMDA receptors. When 2-amino-5-phosphonopentanoate (AP5) is introduced to neurons, NMDA receptors are chemically blocked and LTP induction is prevented. But the AP5 treatment does not produce any effect on previously established LTP in these cells. Therefore, NMDA receptors are central to producing LTP but not to maintaining it. It turns out that maintenance of LTP may depend on the non-NMDA receptors.
Long-Term Potentiation and Memory Performance – This effect of enhanced response can last for weeks or even longer, suggesting to many that this could be a mechanism for long-term learning and retention (Baddeley, 1993). Disrupting the process of long-term potentiation (say, through different drugs) also disrupts learning and remembering. Chemically blocking LTP in the hippocampus of normal mice impairs their ability to demonstrate normal place learning; thus, blocking LTP prevents normal spatial memory. In a similar way, genetic manipulations that block the cascade of molecular triggers for LTP also impair spatial learning
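Hebb’s rule can be caricatured in a few lines of code. This is a minimal, hypothetical sketch rather than a model from the literature: the weight, the binary activity values, and the learning rate are all invented for illustration.

```python
# Hebbian learning sketch: a synapse is strengthened when presynaptic
# activity repeatedly coincides with postsynaptic firing.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Increase the synaptic weight in proportion to coincident activity."""
    return weight + learning_rate * pre * post

w = 0.5
# Five repeated pairings in which both neurons are active (1 = firing):
for _ in range(5):
    w = hebbian_update(w, pre=1, post=1)
print(round(w, 2))  # 1.0 -- the synapse has been strengthened

# If the postsynaptic neuron does not fire, the weight is unchanged:
print(hebbian_update(0.5, pre=1, post=0))  # 0.5
```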
- Discuss Spearman’s Two-factor theory of intelligence.
Charles Spearman published an epoch-making study in 1904, which indeed proved to be the crucial step toward quantitative testing of theories, as opposed to simple quantification or measurement. He used the techniques of correlational analysis and factor analysis, both of which had been developed earlier by Karl Pearson, in relation to the scores obtained by groups of children on various intelligence tests. His historical significance can be seen in the development of the factor analytical method and in its explicit use for the first time. It is with regard to such importance that Guilford (1954, p. 472) has stated: “No single event in the history of mental testing has proved to be of such momentous importance as Spearman’s proposal of his famous two-factor theory in 1904.” Spearman was critical of Binet and Simon’s (1905) practice of assembling a hodgepodge of problems for testing intelligence without first testing for the presence of a general factor or without weighing the problems in terms of their loadings on the general factor. He was concerned to test the theory that the obtained intercorrelations between various tests of intelligence were due entirely to a general intellective factor “g”. In addition to that, he also recognised specific factors, “s” factors, which were specific to particular tests. Eysenck (1972, pp. 1-2) has contended that “essentially his point was that under these conditions matrices of intercorrelations between tests should be of rank one; he did not use matrix algebra himself, but his formulas are the equivalent of more modern versions.” Spearman (1927) elaborated and revised his work in “The abilities of man.”
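Spearman’s claim that the matrix of test intercorrelations should be “of rank one” can be demonstrated numerically. The loadings below are invented for illustration; under the two-factor theory, the g-driven correlation between two different tests is the product of their loadings on “g”.

```python
import numpy as np

# Hypothetical loadings of four intelligence tests on the general factor g.
g_loadings = np.array([0.9, 0.8, 0.7, 0.6])

# Two-factor theory: the g-driven correlation between tests i and j is
# the product of their g loadings, r_ij = g_i * g_j.
shared = np.outer(g_loadings, g_loadings)

# The matrix of g-driven correlations has rank one, as Spearman's
# formulas imply:
print(np.linalg.matrix_rank(shared))  # 1

# Equivalently, the "tetrad difference" r13*r24 - r14*r23 vanishes
# when a single common factor underlies all the tests:
tetrad = shared[0, 2] * shared[1, 3] - shared[0, 3] * shared[1, 2]
print(abs(round(tetrad, 12)))  # 0.0
```

In real data, each test also carries its specific “s” factor (and measurement error), so observed matrices are only approximately rank one off the diagonal.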
Critical Appraisal of Two-Factor Theory
Several criticisms were levelled against the formulation of the two-factor theory. One of the standard criticisms of the factor analytic approach is that it was purely psychometric and failed to provide a cognitive theory. However, Sternberg and Frensch (1990) have convincingly argued that this criticism was misplaced. Spearman (1923) proposed that intelligence depended on a number of qualitative principles of cognition, for example “the presenting of any character together with any relation tends to evoke immediately the knowing of the correlative character” (p. 91). According to M. W. Eysenck (1990), Spearman also described “five quantitative principles of cognition, which are relevant to intelligence: conative control, fatigue, mental energy, primordial potencies, and retentivity” (p. 192). Jensen (1998) confirmed the existence of “g” by the method of confirmatory factor analysis. Carroll (1993) also noted the presence of “g” at Stratum III in his hierarchical factor analysis. We will now attempt a critical appraisal of the two-factor theory and see how it has helped in the development of newer models of intelligence.
- Define creativity. Discuss the measurement of creativity.
Creativity is goal-directed thinking that is unusual, novel and useful. Some creative ideas become so important that they influence the whole of human civilisation; these are instances of what is called historical creativity. The Mona Lisa, the laws of thermodynamics, the laws of motion, and the theory of relativity are some of the ideas that were never thought of before and that changed human civilisation in a great way in their respective spheres of life. Although we can accept its existence and importance, it has been a highly difficult task for researchers to define creativity.
Newell, Shaw and Simon (1963) explained the nature of creativity on the basis of the following four criteria:
- a) Novelty and usefulness
- b) Rejects previously accepted ideas
- c) Requires intense motivation and persistence
- d) Results from organising the unclear situation in a coherent, clear and new way
Beghetto and Kaufman (2007) conceptualised creativity in three different ways. They defined creativity as a novel and personally meaningful interpretation of experiences, actions, and events. However, the novelty and meaningfulness of these interpretations need not be original (or even meaningful) to others. Indeed, the judgment of novelty and meaningfulness that constitutes creativity is an intrapersonal judgment. This intrapersonal judgment is what distinguishes this form of creativity from other forms of creative expression.
Measurement of Creativity
Houtz and Krug (1995) provide a review of several tests developed for the assessment of creativity. The review reveals that most tests of creativity intend to measure divergent thinking. Within the category of divergent thinking, Houtz and Krug (1995) present the Torrance Test of Creative Thinking (TTCT) (Torrance, 1966), the Wallach and Kogan tests, and the Guilford battery. The most widely used test of creativity is the Torrance Test of Creative Thinking (TTCT). It is also the one with the most extended research on its reliability and validity (Kim, 2006). This test has been translated into more than 30 languages and is used in different places as a tool to assess creative potential. It is based on Guilford’s Structure of the Intellect (SOI) battery, which included some measures of divergent thinking. Thus, it measures creativity through divergent thinking. The TTCT was developed in 1966 and has been re-normed four times: 1974, 1984, 1990 and 1998. There are two forms, TTCT-Verbal and TTCT-Figural, each with two parallel tests (Form A and Form B). Each test is expected to measure:
1) Fluency: The number of ideas: Total number of relevant responses.
2) Originality: The rarity of ideas: Number of statistically infrequent ideas. The score is 0 if the idea is common, and 1 if it is unique.
3) Elaboration: The number of added ideas.
4) Flexibility: Number of categories of the relevant responses.
The 1998 manual provides norms for the United States and includes both grade related and age related norms. Thus, there is some country specificity in the measurement of creativity. Kim (2006) reported some normative measures in other countries. These norms have usually been developed for research activities.
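The four scoring dimensions listed above can be made concrete with a toy example. Everything here, the responses, the categories, and the rarity table, is invented for illustration; real TTCT scoring relies on the published norms, and elaboration (counting added details) is omitted because it requires judging each response’s embellishment.

```python
# Toy scoring of one respondent's answers to an "unusual uses for a brick"
# item, along three of the four TTCT dimensions.

responses = [
    ("doorstop", "weight"), ("paperweight", "weight"),
    ("plant pot", "container"), ("drum", "instrument"),
]
# Fraction of a hypothetical norm group giving each idea:
norm_frequency = {"doorstop": 0.40, "paperweight": 0.30,
                  "plant pot": 0.05, "drum": 0.02}

fluency = len(responses)                           # total relevant ideas
originality = sum(1 for idea, _ in responses       # 1 point per idea that
                  if norm_frequency[idea] < 0.10)  # is statistically rare
flexibility = len({category for _, category in responses})  # distinct categories

print(fluency, originality, flexibility)  # 4 2 3
```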
- Explain the basic concepts of multilingualism.
Multilingualism is the knowledge of more than one language by a person or within a social group; it assumes the ability to switch from one language to another in speech, in writing, or in reading. Other terms describing this phenomenon include bilingualism, polylingualism, plurilingualism, diglossia, and languages-in-contact. Multilingualism may be personal, social, or intersubjective. A generic term for a multilingual person is polyglot, from the Greek poly (“many”) and glot (“language”); a monolingual person is a monoglot. Personal multilingualism refers to the knowledge and verbal behaviour of an individual, not necessarily shared by the whole community. Social multilingualism refers to the communicative practices of a nation, tribe, or other social group that sustains two or more languages; in India, for example, nearly 200 languages are spoken by its natives.
India is said to be a socio-linguistic giant, and the nerve system of this giant is multilingualism. “Indian multilingualism is huge in size, having 1620 mother tongues reduced to 200 languages … with the population of many of the minorities larger than European countries” (Annamalai, 2001). This multilingual character of India is represented by its metropolitan cities like Mumbai and New Delhi, where people from all over the country come and settle down. For example, in Mumbai every child is exposed to at least four languages right from infancy (Pai, 2005). The Government of India has introduced the Three Language Formula in its educational system, which means every child has to study two languages in addition to their first language. The two languages are introduced simultaneously at the upper primary level.
SECTION – C
Answer the following questions in 50 words each.
- Nature vs. Nurture
Nature versus nurture is a long-standing debate in biology about the balance between two competing factors that determine development: environment (nurture) and genetics (nature). The alliterative expression “nature and nurture” has been in use in English since at least the Elizabethan period and goes back to medieval French.
The complementary combination of the two concepts is an ancient idea. Nature is what people think of as pre-wiring: it is influenced by genetic inheritance and other biological factors. Nurture is generally taken as the influence of external factors after conception, e.g. the product of exposure, experience, and learning on an individual.
10.Neuroscience and cognitive psychology
Neuroscience is the scientific study of the nervous system. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, computer science, and mathematical modelling to understand the fundamental and emergent properties of neurons, glia, and neural circuits. Understanding the biological basis of learning, memory, behaviour, perception, and consciousness has been described by Eric Kandel as the “epic challenge” of the biological sciences. Cognitive psychology draws on these findings to connect mental processes with their underlying neural mechanisms.
11.Miller’s magic number
Miller’s Magic Number – George Miller’s classic 1956 study found that the amount of information that can be remembered on one exposure is between five and nine items, depending on the information. Allowing a range of plus or minus two, the number 7 became known as Miller’s Magic Number: the number of items that can be held in short-term memory at any one time. Miller himself stated that his magic number applied to items with one aspect. His work was based on subjects listening to a series of auditory tones that varied only in pitch. Each tone was presented separately, and the subject was asked to identify each tone relative to the others he or she had already heard, by assigning it a number. After about five or six tones, subjects began to get confused, and their capacity for making further tone judgments broke down. He found this to be true of a number of other tasks. If more aspects are included, however, then we can remember more, depending upon our familiarity with and the complexity of the subject (in Miller’s research there was only one aspect, the tone). For example, we can remember far more human faces, as a face has a number of aspects: hair color, hair style, shape of face, facial hair, and so on. Similarly, we remember phone numbers by grouping the digits, i.e. chunking. We do not really remember “seven” separate numbers; we remember a first group of three and then a group of four, and if the call is long distance we add an area code. So we actually remember ten digits by breaking them into groups.
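The phone-number chunking described above can be sketched in code (a hypothetical illustration with made-up digits, not part of the original study):

```python
def chunk_phone_number(digits):
    """Split a 10-digit string into the 3-3-4 groups people
    typically use to remember phone numbers (chunking)."""
    return [digits[0:3], digits[3:6], digits[6:10]]

# Ten separate digits become just three chunks,
# comfortably within Miller's 7 +/- 2 range.
print(chunk_phone_number("5551234567"))  # ['555', '123', '4567']
```

The point of the sketch is simply that chunking reduces ten items to three, which is why grouped numbers are easier to hold in short-term memory.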
12.Encoding, Storage and Retrieval
Encoding occurs during the initial processing of a stimulus or event. Maturation and experience influence this process. In terms of maturation, Dempster (1981) suggests that the adult capacity for short-term memory of 7 ± 2 digits might be as much as 2 digits lower for children aged 5 and 1 digit lower for children aged 9. As for experience, in a series of well-known studies of expertise, novices remembered new information less well than experts (e.g., Chi, 1978; Schneider, Körkel, & Weinert, 1989). One of the most important differences between novices and experts is the structure and organisation of domain-specific knowledge.
13.Knowledge base in PASS theory
Based on the information processing approach, J. P. Das, Naglieri and Kirby (1994) proposed four main cognitive functions: Planning, Attention, Simultaneous, and Successive processing (hence the name PASS). Das used A. R. Luria’s neuropsychological conceptualisation of human cognitive processes as the base for developing this model. The basic statement of the model is that intelligence can be understood as the result of the interdependent functioning of three neurological systems: those responsible for arousal (and attention), coding (or processing), and planning. The two coding processes are simultaneous and successive processing, which is why the theory is known as the PASS (Planning, Attention, Simultaneous and Successive) Theory.
Spatial (visual) learners think in terms of physical space, as architects and sailors do, and are very aware of their environments. They like to draw, do jigsaw puzzles, read maps, and daydream. They can be taught through drawings and through verbal and physical imagery. Tools include models, graphics, charts, photographs, drawings, 3-D modelling, video, videoconferencing, television, multimedia, and texts with pictures, charts, and graphs.
An algorithm is a step-by-step procedure that will always produce a correct solution. A mathematical formula is a good example of a problem-solving algorithm. While an algorithm guarantees an accurate answer, it is not always the best approach to problem solving, because the strategy can be so time-consuming that it is impractical for many situations. For example, trying every possible number combination to open a lock is an algorithm, but working through all the combinations would take a very long time.
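The lock example above can be made concrete with a small sketch (the 3-digit lock and function name are assumptions for illustration): an exhaustive algorithm that is guaranteed to succeed, yet in the worst case must test every one of the 1000 possible combinations.

```python
from itertools import product

def crack_lock(secret):
    """Try every 3-digit combination in order until one matches.
    Guaranteed to find the answer, but may need up to 1000 tries."""
    for attempts, combo in enumerate(product(range(10), repeat=3), start=1):
        if list(combo) == secret:
            return attempts
    return None  # unreachable for a valid 3-digit secret

print(crack_lock([0, 0, 0]))  # found on the 1st attempt
print(crack_lock([9, 9, 9]))  # worst case: 1000 attempts
```

This is exactly the trade-off the paragraph describes: correctness is certain, but the time cost grows with the size of the search space, which is why heuristics are often preferred in practice.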
It’s the day before your best friend’s birthday and the gift you’ve ordered her still has not arrived. What are you going to do? Most likely, when you realize the present isn’t going to make it in time, you will identify another gift you can purchase at a local store, set aside time to stop by the store, make the purchase, get it wrapped, and deliver it without her ever realizing there was a problem.
Whether you realize it or not, there was a clearly defined process you went through when you discovered her birthday gift wasn’t going to be delivered in time. That process operates within something known as a problem space.
A problem space is all of the various components that go into creating a resolution for a problem. Think of it like a frame, which acts as a border to help define an area. A problem space helps you, or on a larger scale a business, figure out what the problem is, work through ways to correct it, and then drive implementation of the appropriate solution. The ultimate purpose is to take corrective action on an identified problem.
Duncker (1935) coined the term functional fixedness to describe the difficulties in visual perception and in problem solving that arise when one element of a whole situation already has a (fixed) function which has to be changed in order to perceive correctly or to find the solution to the problem. In his “candle problem”, the situation was defined by objects such as a candle, a box of thumb-tacks, and a book of matches. The task was to fix the candle to the wall without any additional elements. The difficulty of this problem arises from the functional fixedness of the box, which originally contained the thumb-tacks: it is a container in the problem situation but must be used as a shelf in the solution situation.
With the backward search heuristic, the problem solver starts at the goal state. It is sometimes useful to begin at the goal state of a problem and attempt to work backward to the initial state. In solving a paper-and-pencil maze, it may be easier to see the correct path by starting at the end. Working backwards can be a very useful heuristic, particularly for problems that contain a uniquely specified goal state. For example, a backward search would be ideal for a maze with many paths out of the beginning point yet only one path leading from the goal. The reason working backward helps lies in the sub-goals that one begins to see by starting with the final goal. Once the problem solver can envision a string of sub-goals projecting backward from the goal state, solving the sub-goals in a forward direction can be readily accomplished.
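The maze example above can be sketched as a search launched from the goal rather than the start (a toy illustration; the maze layout and node names are assumptions, not from the source):

```python
from collections import deque

# Toy maze as an adjacency map: several exits from "start",
# but only one corridor connects to "goal".
maze = {
    "start": ["a", "b", "c"],
    "a": ["start", "dead_end1"],
    "b": ["start", "dead_end2"],
    "c": ["start", "goal"],
    "goal": ["c"],
    "dead_end1": ["a"],
    "dead_end2": ["b"],
}

def backward_search(maze, start, goal):
    """Breadth-first search begun at the goal; the discovered path
    is reversed at the end so it reads from start to goal."""
    queue = deque([[goal]])
    seen = {goal}
    while queue:
        path = queue.popleft()
        if path[-1] == start:
            return list(reversed(path))  # present it start -> goal
        for nxt in maze[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route between goal and start

print(backward_search(maze, "start", "goal"))  # ['start', 'c', 'goal']
```

Because only one edge leaves the goal, the backward search never explores the dead ends at all, which is precisely the advantage the heuristic promises for mazes of this shape.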