Thursday, June 30, 2011

If I'm male, how do my sister's son and daughter relate to me? And how do my brother's son and daughter relate to me?

Family relationship terms can be tricky, especially once you get into first and second cousins twice removed. A relationship chart can be handy for figuring out those terms.


If you're male and your sister has both a son and a daughter, then her daughter will be your niece. Her son will be your nephew. You will be their uncle. If your brother also has a son and a daughter, then his son will be your nephew and his daughter will be your niece. You will be their uncle. Your sister will be an aunt to your brother's children and your brother will be an uncle to your sister's children. Your brother's children and your sister's children will all be each other's cousins.
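To make the pattern concrete, the naming rule can be sketched in a few lines of code. This is a hypothetical illustration; the function and argument names are invented and not part of any genealogy standard:

```python
# A minimal sketch of the kinship rules described above.
# Names are invented for illustration only.

def relation_to_siblings_child(my_sex: str, childs_sex: str) -> tuple[str, str]:
    """Return (what the child is to me, what I am to the child)."""
    child_term = "nephew" if childs_sex == "male" else "niece"
    my_term = "uncle" if my_sex == "male" else "aunt"
    return child_term, my_term

print(relation_to_siblings_child("male", "female"))  # -> ('niece', 'uncle')
```

The same rule applies whether the child belongs to a brother or a sister; only the sexes of the two people involved determine the terms.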

What is ethology?


Introduction

Ethology, from the Greek ethos (“behavior” or “manner”), is the study of animal behavior.
It is concerned primarily with the accurate description and rigorous experimental evaluation of animals’ behavior under natural conditions. Unlike the field of behaviorism, which traditionally emphasized the sole importance of the environment on behavior, ethology also recognizes the genetic and physiological mechanisms that regulate behavioral processes. Ethologists operate under the primary assumption that much of behavior is hereditary and thus strongly influenced by the forces of natural selection. Natural selection is the process of differential survival and reproduction that leads to heritable characteristics that are best suited for a particular environment.






In their search for a common, unifying explanation of behavioral processes, ethologists have sought to address three specific issues: the accurate, nonanthropomorphic description of behavior under natural conditions; the underlying mechanisms that regulate and control behavior; and the adaptive significance of various behavior patterns.




Descriptive Approach

In its earliest stages, ethology was characterized by a highly descriptive approach. Early ethologists were concerned primarily with accurate and objective accounts of behavior. Behavior, however, unlike other aspects of an organism’s biology (such as morphology or physiology), was a difficult and elusive thing to characterize and thus required careful, unbiased approaches to understanding the ways in which animals responded to stimuli in their environment. Konrad Lorenz, one of the early founders of the field, insisted that the only way to study behavior was to make objective observations under completely natural field conditions. This approach, most evident in his classic studies on aggression and imprinting (the innate behavioral attachment that a young animal forms with another individual such as its mother, with food, or with an object during a brief critical period shortly after birth), greatly enhanced understanding of communication in the animal kingdom. In contrast to Lorenz’s largely observational approach, the rigorous field experiments of Nikolaas Tinbergen and Karl von Frisch anticipated those that would come to characterize modern ethology.


The classic work of all three of these early ethologists helped demonstrate how an animal’s sensory limitations and capabilities can shape its behavior. For example, in a series of classic learning experiments, von Frisch convincingly documented the unusual visual capabilities of the honeybee. He first trained honeybees to forage at small glass dishes of sugar water and then, by attaching different visual cues to each dish, provided the animals with an opportunity to learn where to forage through the simple process of association. From these elegant but simple experiments, he found that bees locate and remember foraging sites by the use of specific colors, ultraviolet cues, and polarized light, a discovery that revolutionized the way in which humans view the sensory capabilities of animals.




Mechanistic Behavior

With the classic work of Lorenz, Tinbergen, and von Frisch came an increasing appreciation for the ways in which physiological limitations define behavioral differences between species. This awareness eventually gave rise to a mechanistic approach to behavior, in which ethologists sought to determine how internal factors such as physiology, development, and genetics regulate and control behavior. The physiologically oriented ethologists, for example, focused on the influence of neuronal pathways and sensory organs on behavior. They were concerned with topics such as the control of feeding in insects, echolocation in bats, electric field detection in fish, and infrared detection in snakes. Similarly, neurobiologists attempted to show how behavioral changes are linked to modifications in the function of nerves and neuronal pathways. By recording the responses of individual nerves, neurobiologists can detect changes that occur when an animal modifies its behavior in response to some stimulus. In a similar way, they can show how learning and behavior are affected when specific nerve fibers are experimentally cut or removed.




Adaptive Behavior

The third and perhaps most significant area in ethology is that which deals with the evolutionary (adaptive) significance of behavior. Since the seminal work of Charles Darwin, ethologists have maintained that a species’ behavior is controlled largely by its genes. Darwin argued that an animal’s behavior was no different from any other phenotypic characteristic (physical expression of the genes) in that it was heritable and therefore subject to the same kinds of selective processes that lead to evolutionary change among organisms. He considered instinctual (or innate) behavior a tremendous adaptation that frees some organisms from the risky and sometimes costly business of trial-and-error learning. At the same time, he recognized the adaptive plasticity that accompanies the more complex behaviors that involve various degrees of learning.


Both Lorenz and Tinbergen also recognized the importance of evolutionary questions in behavior, but Tinbergen was the first to put such hypotheses to rigorous experimental tests. In a classic experiment on the evolutionary significance of spines in sticklebacks, he tested predation rates by pike on several species of these fish. He found predation rates to be lowest on the three-spined stickleback (a conspicuous species with large horizontal spines), moderate on the more cryptic ten-spined stickleback (which possesses ten smaller vertical spines on its dorsal surface), and highest for unarmored minnows.




Mechanisms of Heredity

More recently, behavioral geneticists have shown that much of learning, and of behavior in general, is intimately tied to mechanisms of heredity. The results of hybridization experiments and artificial breeding programs, as well as studies on human twins separated at birth, clearly demonstrate a strong genetic influence on behavior. In fact, it has been well documented that many animals (including both invertebrates and vertebrates) are genetically programmed (or have a genetic predisposition) to learn only specific kinds of behaviors. Such is the case for song learning in birds.


Thus, ethology places tremendous importance on the evolutionary history of an organism. It emphasizes the adaptive significance of the various types of behaviors, and it assumes that an animal’s behavior is constrained largely by its genetic and evolutionary background.




Learning Process Research

The field of ethology has contributed markedly to the understanding of several psychological and behavioral phenomena. One such area is the learning process. Learning is defined as any modification in behavior (other than that caused by maturation, fatigue, or injury) that is directed by previous experience.


The early experiments of the behaviorist psychologists on conditioning (the behavioral association that results from the reinforcement of a response with a stimulus) led to the notion that all behavior is learned. Traditionally, behaviorists maintained that all complex behaviors are learned by means of either classical or operant conditioning. Classical conditioning, first demonstrated by the Russian psychologist Ivan Petrovich Pavlov, is a form of associative learning in which an animal responds to an unrelated, novel stimulus after it is repeatedly paired with a more relevant stimulus. Operant conditioning, also a form of associative learning, occurs when an animal learns by manipulating some part of its environment (for example, the animal might ring a bell to receive a reward). This form of learning usually improves with experience and is therefore referred to as trial-and-error learning.


The primary objective of the approaches employed by the early behaviorists was to eliminate or control as many variables as possible and thereby remove any uncertainty about the factors that may influence the learning process. These approaches were especially successful at identifying the external mechanisms responsible for learning. Such techniques focused only on the input (stimulus) and output (response) of an experiment, however, and consequently deemphasized the importance of proximate mechanisms such as physiology and genetics. In addition, these approaches generally ignored the evolutionary considerations that ethologists considered so fundamental to the study of behavior.




Innate Behavior

In contrast, studies by the early ethologists suggested that much of behavior was dominated by innate processes that were constrained by the physiological and genetic design of the organism. Lorenz and Tinbergen, for example, demonstrated that many behavioral responses in the animal kingdom are fixed or stereotyped (instinctive) and are often elicited by simple environmental stimuli. They referred to such responses as fixed action patterns and to the stimuli that triggered them as sign stimuli.


The egg-rolling behavior of the greylag goose is perhaps one of the most widely cited examples of this kind of innate behavior. When one of these ground-nesting birds notices an egg outside its nest, it stands, walks to the egg, extends its bill in a very characteristic manner, and proceeds to roll the egg back to the nest. Although at first glance this may seem to represent a simple learned response, Lorenz and Tinbergen found this to be a highly ritualized behavior that was initiated by a very specific environmental stimulus. Through a series of clever experiments, Tinbergen showed that this behavior could be elicited by an egglike object (a ball) or even any object with a convex surface (a bottle or can), and that objects larger than eggs caused a more vigorous (supernormal) response. He also found that once the behavior was initiated, it always ran to completion. In other words, even when the egg was removed, the goose would continue with the motions as if it were returning the egg to the nest.


This and countless other examples of very ritualized behaviors, such as the avoidance response of ducklings to hawk models, the imprinting of young vertebrates on their mothers, the aggressive displays of male stickleback fish to the red bellies of other males, and the various courtship displays of a wide range of species, led early ethologists to conclude that much of behavior is governed by instinct.


These opposing views of ethologists and behaviorist psychologists eventually led to the misconception that learned behavior is governed entirely by the animal’s environment, whereas instinct is completely controlled by the genes. It is now widely accepted, however, that nearly all forms of behavior and learning involve certain degrees of both processes. Countless studies, for example, have demonstrated that numerous animals are genetically programmed to learn only certain behaviors. In contrast, it has been shown that instinct need not be completely fixed, but instead can be modified with experience.




Sociobiology

A second area of ethology that has received much attention from a variety of behavioral researchers and in some cases has sparked considerable controversy is sociobiology. In the early 1970s, Edward O. Wilson and Robert Trivers of Harvard University initiated a new area of behavioral research when they began their investigations of the evolutionary basis of social behavior in animals. Their attention focused on the evolutionary enigma presented by altruistic behaviors—acts that one organism performs (often at its own expense) to benefit another. Examples include alarm calls in the presence of a predator and nest-helping behavior. The most extreme cases of such behavior are found in those insect societies in which only a few individuals reproduce and others work to maintain the colony. Through careful experimentation and observation, it was soon determined that such unselfish behaviors are directed toward related individuals and that such behaviors probably evolve because they promote the survival of other individuals who also possess the genes for those same altruistic acts.


Although they initially sparked much debate, studies of the evolutionary basis for social behavior eventually strengthened the ethologists’ long-held notion that much of behavior is coded in the genes.




Research Debates

Although ethology had its beginnings with the work of Darwin and other early naturalists, it was von Frisch, Lorenz, and Tinbergen who conducted the first formal ethological studies and who received a joint Nobel Prize for their pioneering work in 1973. Their approach represented a considerable departure from that of behaviorist psychologists, and the differences between the two fields sparked a heated debate during the 1950s and 1960s, often referred to as the nature-versus-nurture controversy. Although this debate eventually led to the decline and virtual demise of behaviorism, it also helped shape modern ethology into a rigorous biological discipline that now holds a compatible niche within the realm of psychology.


Although the early ethologists argued that behaviorists treated their study organisms as “black-boxes” and ignored the genetic, physiological, and evolutionary backgrounds of their subjects, the behaviorists leveled several criticisms in return. In addition to their disbelief in the genetic control of behavior, they were most critical of the methodological approaches employed by ethologists. In contrast with the rigorously controlled laboratory experiments of psychologists, in which blind observers (observers unaware of the experimenters’ hypotheses or experimental design) were often used to collect data, behaviorists held that early ethologists conducted nearly all their studies under natural conditions without any regard for experimental control. In addition, their observations were often highly subjective and almost never quantified. Even when attempts were made to quantify the behavior, they never involved the rigorous statistical and analytical techniques of the behaviorists.


Furthermore, although the early ethologists argued that much of behavior is shaped by evolution and constrained by an organism’s physiological hardware, little evidence was initially available to support these contentions. Behaviorists, for example, held that ethologists often observed a behavior and casually assigned some adaptive significance to it without testing such evolutionary hypotheses.


These criticisms forced early ethologists to improve their approaches to data collection, experimental design, and data analysis, and as their approaches to the study of behavior were strengthened, so were their original hypotheses about the underlying control of behavior. Thus, as ethologists gained ground, behaviorism began to fall out of favor with most of the scientific community.


The basic views of early ethologists are still well preserved in all prominent areas of ethological research. In fact, the work of nearly all modern ethologists can best be characterized by the two basic sets of questions they seek to answer: the “how questions,” concerning underlying proximate causes, and the “why questions,” concerning ultimate causes (or evolutionary bases). The first of these is pursued by traditional ethologists and neurobiologists, while the latter is primarily the realm of behavioral ecologists. The fields of ethology and comparative psychology have begun to complement each other, and, increasingly, researchers from the two areas are merging their efforts on a diversity of research topics.




Bibliography


Alcock, John. Animal Behavior: An Evolutionary Approach. 8th ed. Sunderland: Sinauer Associates, 2005. Print.



Burkhardt, Richard W. Patterns of Behavior: Konrad Lorenz, Niko Tinbergen, and the Founding of Ethology. Chicago: U of Chicago P, 2005. Print.



Eibl-Eibesfeldt, Irenaus. Human Ethology. New Brunswick: Aldine Transaction, 2007. Print.



Fisher, Arthur. “Sociobiology: A New Synthesis Comes of Age.” Mosaic 22 (1991): 2–9. Print.



Gould, James L. Ethology: The Mechanisms and Evolution of Behavior. New York: Norton, 1982. Print.



Grier, James W. Biology of Animal Behavior. 2nd ed. New York: McGraw-Hill, 1992. Print.



Hötzel, Maria José, and Luiz Carlos Pinheiro Machado Filho, eds. Applied Ethology. Wageningen: Wageningen, 2013. Print.



Krebs, J. R., and N. B. Davies. An Introduction to Behavioral Ecology. 2nd ed. Oxford: Blackwell, 1991. Print.



McFarland, David, ed. The Oxford Companion to Animal Behavior. Rev. ed. New York: Oxford UP, 1987. Print.



Manning, Aubrey, and Marian Stamp Dawkins. An Introduction to Animal Behavior. 5th ed. New York: Cambridge UP, 2006. Print.



Plaisance, Kathryn S., and Thomas A. C. Reydon, eds. Philosophy of Behavioral Biology. New York: Springer, 2012. Print.



Raven, Peter H., and George B. Johnson. Biology. 7th ed. New York: McGraw-Hill, 2005. Print.



Ristau, Carolyn A., ed. Cognitive Ethology: The Minds of Other Animals. New York: Psychology, 2014. Print.

Wednesday, June 29, 2011

What is the main conflict in Ray Bradbury's story "All Summer in a Day"?

The main conflict in the story is between Margot, a child who has relatively recently moved to Venus from Earth, and the other children in her class. The story takes place on Venus, a planet of constant rain, except for a few hours every seven years, when the sun briefly comes out. Margot is withdrawn and poetic, and refuses to play the games the other children enjoy. But her real crime is memory: "that she had come here only five years ago from Earth, and she remembered the sun and the way the sun was and the sky was when she was four in Ohio. And they, they had been on Venus all their lives, and they had been only two years old when last the sun came out and had long since forgotten the color and heat of it and the way it really was." Margot's memory of the sun, in a sense, is her memory of being human. When the children lock her in a closet and forget about her during the time when the sun finally comes out, it is as if they are trying to erase, or deny, her humanity. The story ends, however, without any resolution to this conflict: what Margot does, after they let her out of the closet, is anyone's guess.

Explore the various dimensions of being “lost” that Remarque presents in All Quiet on the Western Front and show how this motif helps us to...

From the beginning of his service in the military in the novel All Quiet on the Western Front, the protagonist, Paul Baumer, feels lost and disconnected from his earlier life. In Chapter 2, he thinks of the poems and a play he used to write at home and says, "but that has become so unreal to me I cannot comprehend it anymore" (page 19). One form of being lost in the book is being lost in time and cut off from one's past.


When Paul gets to the front, he feels continually lost. In Chapter 4, he falls asleep after putting stakes in the ground and, when he wakes up, he does not know where he is. He says, "Then waking suddenly with a start I do not know where I am" (page 60). He thinks at first, in his disorientation, that he is at a garden party after seeing the rockets in the air. This represents the idea that Paul and his fellow soldiers don't understand their role in a confusing, massive operation of which they are unimportant parts.


Later, when Paul visits his home in Chapter 7, he also feels lost. He says, "I am not myself there. There is a distance, a veil between us" (page 160). This sense of being lost represents Paul's disconnection from his civilian life and from normal life absent the war. Remarque's use of the motif of being lost shows how disconnected soldiers in World War I were from their past and from ordinary civilian life. However, they also feel lost as soldiers, as they have no sense of the purpose of the war and what their efforts really amount to. As a result, soldiers like Paul begin to understand the futility and purposelessness of the war.

What is calcium as a dietary supplement?


Overview


Calcium is the most abundant mineral in the body, making up nearly 2 percent of total body weight. More than 99 percent of the calcium in the body is found in bones, but the other 1 percent is perhaps just as important for good health. Many enzymes depend on calcium to work properly, as do the nerves, heart, and blood-clotting mechanisms.


To build bone, a person needs to have enough calcium in his or her diet. However, even with the availability of calcium-fortified orange juice and with the best efforts of the dairy industry, most Americans are calcium deficient. Calcium supplements are a simple way to make sure one is getting enough of this important mineral.


One of the most important uses of calcium is to help prevent and treat osteoporosis, the progressive loss of bone mass to which menopausal women are especially vulnerable. Calcium works best when combined with vitamin D. Other meaningful evidence suggests that calcium may have an additional use: reducing symptoms of premenstrual syndrome (PMS).







Requirements and Sources

Although there are some variations between recommendations issued by different groups, the official U.S. and Canadian recommendations for daily intake of calcium (in milligrams) are as follows: infants to six months of age (210) and seven to twelve months of age (270); children one to three years of age (500) and four to eight years of age (800); children nine to eighteen years of age (1,300); adults age nineteen to fifty years (1,000), age fifty-one years and older (1,200); and pregnant and nursing girls (1,300) and women (1,000).
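Because the recommendations are banded by age, they can be written as a simple lookup table. The sketch below is purely illustrative (the table and function names are invented) and covers only the non-pregnant values listed above:

```python
# Recommended daily calcium intakes from the text, as an age lookup.
# Names are hypothetical; pregnancy/nursing values are covered in prose.

RDA_MG = [            # (upper age bound in years, mg/day)
    (0.5, 210),       # birth to six months
    (1.0, 270),       # seven to twelve months
    (3, 500), (8, 800), (18, 1300), (50, 1000),
]

def recommended_calcium_mg(age_years: float) -> int:
    for max_age, mg in RDA_MG:
        if age_years <= max_age:
            return mg
    return 1200       # fifty-one years and older

print(recommended_calcium_mg(35))  # -> 1000
```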


To absorb calcium, the body also needs an adequate level of vitamin D. Various medications may impair calcium absorption or metabolism, either directly or through effects on vitamin D. Implicated medications include corticosteroids, heparin, isoniazid, and anticonvulsants. People who use these medications may benefit by taking extra calcium and vitamin D. Calcium carbonate might interfere with the effects of anticonvulsant drugs, so it should not be taken at the same time of day.


Milk, cheese, and other dairy products are excellent sources of calcium. Other good sources include orange juice or soy milk fortified with calcium, fish (such as sardines) canned with its bones, dark green vegetables, nuts and seeds, and calcium-processed tofu. Many forms of calcium supplements are available on the market, each with its own advantages and disadvantages.



Naturally derived forms of calcium. Naturally derived forms of calcium come from bone, shells, or the earth: bonemeal, oyster shell, and dolomite, respectively. Animals concentrate calcium in their shells, and calcium is found in minerals in the earth. These forms of calcium are economical, and a person can get as much as 500 to 600 mg in one tablet. However, there are concerns that the natural forms of calcium supplements may contain significant amounts of lead. The level of contamination has decreased in recent years, but it still may present a health risk. Calcium supplements rarely list the lead content of their sources. The lead concentration should always be less than two parts per million.



Refined calcium carbonate. Refined calcium carbonate is the most common commercial calcium supplement, and it is also used as a common antacid. Calcium carbonate is one of the least expensive forms of calcium, but it can cause constipation and bloating, and it may not be well absorbed by people with reduced levels of stomach acid. Taking it with meals improves absorption because stomach acid is released to digest the food.



Chelated calcium. Chelated calcium is calcium bound to an organic acid (citrate, citrate malate, lactate, gluconate, aspartate, or orotate). The chelated forms of calcium offer some significant advantages and disadvantages compared with calcium carbonate.


Certain forms of chelated calcium (calcium citrate and calcium citrate malate) are widely thought to be significantly better absorbed and more effective for osteoporosis treatment than calcium carbonate. However, while some studies support this belief, others do not. The discrepancy may come from the particular calcium carbonate products used; some calcium carbonate formulations may dissolve better than others. One study found that calcium citrate malate in orange juice is markedly better absorbed than tricalcium phosphate/calcium lactate in orange juice.


A form of calcium called active absorbable algal calcium has also been promoted as superior to calcium carbonate. The study upon which claims of benefit are founded actually used quite questionable statistical methods (technically, post-hoc subgroup analysis).


Chelated calcium is much more expensive and bulkier than calcium carbonate. In other words, a person would have to take larger pills, and more of them, to get enough calcium. It is not uncommon to need to take five or six large capsules daily to supply the necessary amount, a quantity some people may find troublesome.




Therapeutic Dosages

Unlike some supplements, calcium is not taken at extra high doses for special therapeutic benefit. Rather, for all its uses, it should be taken in the recommended amounts, along with the recommended level of vitamin D.


Calcium absorption studies have found evidence that the body cannot absorb more than 500 mg of calcium at one time. Therefore, it is most efficient to take one’s total daily calcium in two or more doses.
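As a worked example of that advice, a daily target can be divided into roughly equal doses of at most 500 mg each. This is only an arithmetic sketch (the helper name is invented), not dosing guidance:

```python
# Split a daily calcium target into doses of at most 500 mg each,
# reflecting the absorption ceiling described above. Illustrative only.
import math

def split_doses(total_mg: int, max_per_dose: int = 500) -> list[int]:
    n = math.ceil(total_mg / max_per_dose)   # number of doses needed
    base, extra = divmod(total_mg, n)        # spread any remainder evenly
    return [base + 1] * extra + [base] * (n - extra)

print(split_doses(1200))  # -> [400, 400, 400]
print(split_doses(1300))  # -> [434, 433, 433]
```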


It is not possible to put all the calcium one needs in a single multivitamin-multimineral tablet, so this is one supplement that should be taken on its own. Furthermore, if taken at the same time, calcium may interfere with the absorption of chromium and manganese. This means that it is best to take the multivitamin-multimineral pill at a different time than the calcium supplement.


Although the calcium in some antacids or supplements may alter the absorption of magnesium, this effect apparently has no significant influence on overall magnesium status. Calcium may also interfere with iron absorption, but the effect may be too slight to cause a problem. Some studies show that calcium may decrease zinc absorption when the two are taken together as supplements; however, studies have found that in the presence of meals, zinc levels may be unaffected by increases of either dietary or supplemental calcium. Finally, the use of prebiotics known as inulin fructans may improve calcium absorption.




Therapeutic Uses

According to most studies, the use of calcium (especially in the form of calcium citrate) with vitamin D may modestly slow the bone loss that leads to osteoporosis. A rather surprising potential use of calcium came to light when a large, well-designed study found that calcium is an effective treatment for PMS. Calcium supplementation reduced all major symptoms, including headache, food cravings, moodiness, and fluid retention. It is remotely possible that there may be a connection between these two uses of calcium: Weak evidence hints that PMS might be an early sign of future osteoporosis. One small but carefully conducted study suggests that getting enough calcium may help control symptoms of menstrual pain too.


Some observational and intervention studies have found evidence that calcium supplementation may reduce the risk of colon cancer. Risk reduction might continue for years after calcium supplements are stopped. However, calcium supplements might increase risk of prostate cancer in men. For menopausal women, the use of calcium supplements, especially with vitamin D added, may reduce cancer risk in general.


Persons who are deficient in calcium may be at greater risk of developing high blood pressure. Among persons who already have hypertension, increased intake of calcium might slightly decrease blood pressure, according to some studies. Weak evidence hints that the use of calcium by pregnant women might reduce the risk of hypertension in their children. Calcium supplementation has also been tried as a treatment to prevent preeclampsia in pregnant women. While the evidence from studies is conflicting, calcium supplementation might offer at least a minimal benefit.


The drug metformin, used for diabetes, interferes with the absorption of vitamin B12. Calcium supplements may reverse this, allowing the B12 to be absorbed normally. Also, calcium supplements might slightly improve the cholesterol profile.


Rapid weight loss in overweight postmenopausal women appears to slightly accelerate bone loss. For this reason, it may make sense to take calcium and vitamin D supplements when deliberately losing weight. It has been additionally suggested that calcium supplements, or high-calcium diets, may directly enhance weight loss, but the evidence is more negative than positive.


Finally, calcium is also sometimes recommended for attention deficit disorder, migraine headaches, and periodontal disease, but there is no meaningful evidence that it is effective for these conditions. It is important to note that despite the benefits of calcium supplementation for certain conditions, a large placebo-controlled trial involving more than 36,000 postmenopausal women found that daily supplements of 1,000 mg of calcium carbonate combined with 400 international units (IU) of vitamin D3 for an average of seven years did not significantly reduce death rates from all causes.




Scientific Evidence


Osteoporosis. A number of double-blind, placebo-controlled studies indicate that calcium supplements (especially as calcium citrate and taken with vitamin D) are slightly helpful in preventing and slowing bone loss in postmenopausal women. Contrary to some reports, milk does appear to be a useful source of calcium for this purpose. However, the effect of calcium supplementation in any form is relatively mild and may not be strong enough to reduce the rate of osteoporotic fractures. The use of calcium and vitamin D must be continual. Any improvements in bone density rapidly disappear once the supplements are stopped. A large study of more than three thousand postmenopausal women age sixty-five to seventy-one years found that three years of daily supplementation with calcium and vitamin D was not associated with a significant reduction in the incidence of fractures. Calcium carbonate may not be effective.


One study found benefits for elderly men using a calcium- and vitamin D-fortified milk product. Calcium and vitamin D supplementation may help bones heal that have become fractured because of bone thinning. Also, calcium supplements may do a better job of strengthening bones when people have relatively high protein intake.


Heavy exercise leads to a loss of calcium through sweat, and the body does not compensate for this by reducing calcium loss in the urine. The result can be a net calcium loss great enough so that it presents health concerns for menopausal women, already at risk for osteoporosis. One study found that the use of an inexpensive calcium supplement (calcium carbonate), taken at a dose of 400 mg twice daily, is sufficient to offset this loss.


Calcium supplementation could, in theory, be useful for young girls as a way to build a supply of calcium for the future to prevent later osteoporosis. However, the benefits seen in studies have been modest to nonexistent, and this approach may only produce results when exercise is also increased.


Evidence suggests that the use of calcium with vitamin D can help protect against the bone loss caused by corticosteroid drugs, such as prednisone. A review of five studies covering 274 participants reported that calcium and vitamin D supplementation significantly prevented bone loss in corticosteroid-treated persons. For example, in a two-year, double-blind, placebo-controlled study that followed sixty-five persons with rheumatoid arthritis taking low-dose corticosteroids, daily supplementation with 1,000 mg of calcium and 500 IU of vitamin D reversed steroid-induced bone loss, causing a net bone gain. Also, one study found that in calcium-deficient pregnant women, calcium supplements can improve the bones of their unborn children.


There is some evidence that essential fatty acids may enhance the effectiveness of calcium. In one study, sixty-five postmenopausal women were given calcium with either placebo or a combination of omega-6 fatty acids (from evening primrose oil) and omega-3 fatty acids (from fish oil) for eighteen months. At the end of the study period, the group receiving essential fatty acids had higher bone density and fewer fractures than the placebo group. However, a twelve-month, double-blind trial of forty-two postmenopausal women found no benefit. The explanation for the discrepancy may lie in the differences between the women studied. The first study involved women living in nursing homes, while the second studied healthier women living on their own. The latter group of women may have been better nourished and may have been receiving enough essential fatty acids in their diet.



Premenstrual syndrome. According to a large and well-designed study published in 1998 in the American Journal of Obstetrics and Gynecology, calcium supplements act as a simple and effective treatment for a variety of PMS symptoms. In a double-blind, placebo-controlled study of 497 women, 1,200 mg daily of calcium as calcium carbonate reduced PMS symptoms by one-half through three menstrual cycles. These symptoms included mood swings, headaches, food cravings, and bloating. These results corroborate earlier, smaller studies.



High cholesterol. In a twelve-month study of 223 postmenopausal women, use of calcium citrate at a dose of 1 gram (g) daily improved the ratio of HDL (good) cholesterol levels to LDL (bad) cholesterol levels. The extent of this improvement was statistically significant (compared with the placebo group) but not very large in practical terms. Similarly modest benefits were seen in a smaller, double-blind, placebo-controlled study. A third double-blind, placebo-controlled study failed to find any statistically significant effects.



Colon cancer. Evidence from observational studies showed that a high calcium intake is associated with a reduced incidence of colon cancer, but not all studies have found this association. Some evidence from intervention trials supports these findings.


A four-year, double-blind, placebo-controlled study followed 832 persons with a history of colon polyps. Participants received either 3 g daily of calcium carbonate or placebo. The calcium group experienced 24 percent fewer polyps overall than the placebo group. Because colon polyps are the precursor of most colon cancer, this finding strongly suggests benefit. Combining the results of two trials involving a total of 1,346 participants, also with a history of polyps, researchers found that 1,200 or 2,000 mg of daily elemental calcium led to a significant reduction in polyp recurrence compared with placebo over a three-to-four-year period. Another large study found that calcium carbonate at a dose of 1,200 mg daily may have a more pronounced effect on dangerous polyps than on benign ones.


A gigantic (36,282-participant) and long-term (average of seven years) study of postmenopausal women failed to find that calcium carbonate supplements at a dose of 1,000 mg daily had any effect on the incidence of colon cancer. Given these conflicting results, if calcium supplementation does have an effect on colon cancer risk, it is probably small.



Hypertension. A large randomized, placebo-controlled trial of more than 36,000 postmenopausal women found that daily supplementation with 1,000 mg of calcium plus 400 IU of vitamin D did not reduce or prevent hypertension during seven years of follow-up. These results may be confounded by participants' use of calcium supplements outside the study protocol.




Safety Issues

In general, it is safe to take up to 2,500 mg of calcium daily, although this is more than a person needs. Excessive intake of calcium can cause numerous side effects, including dangerous or painful deposits of calcium within the body. For persons with cancer, hyperparathyroidism, or sarcoidosis, calcium should be taken only under a physician’s supervision.


Some evidence hints that the use of calcium supplements might slightly increase kidney stone risk. However, increased intake of calcium from food does not seem to have this effect and could even help prevent stones. One study found that if calcium supplements are taken with food, there is no increased risk. Calcium citrate supplements may be particularly safe regarding kidney stones because the citrate portion of this supplement is used to treat kidney stones.


There is preliminary evidence that calcium supplementation in healthy, postmenopausal women may slightly increase the risk of cardiovascular events, such as myocardial infarction. However, it remains far from clear whether this possible risk outweighs the benefits of calcium supplementation in this population.


Large observational studies have found that, in men, higher intakes of calcium are associated with an increased risk of prostate cancer. This seems to be the case whether the calcium comes from milk or from calcium supplements.


Calcium supplements combined with high doses of vitamin D might interfere with some of the effects of drugs in the calcium channel blocker family. It is very important that one consult a physician before trying this combination.


Concerns have been raised that the aluminum in some antacids may be harmful. There is some evidence that calcium citrate supplements might increase the absorption of aluminum; for this reason, one probably should not take calcium citrate at the same time of day as aluminum-containing antacids. Other options are to use different forms of calcium or to avoid antacids containing aluminum.


When taken over the long term, thiazide diuretics tend to increase levels of calcium in the body by decreasing the amount excreted by the body. It is not likely that this will cause a problem. Nonetheless, persons using thiazide diuretics should consult with a physician on the proper doses of calcium and vitamin D.


Finally, calcium may interfere with the absorption of antibiotics in the tetracycline and fluoroquinolone families and with thyroid hormone. Persons taking any of these drugs should take calcium supplements a minimum of two hours before or after the medication dose.




Important Interactions

Persons may need more calcium if also taking corticosteroids, heparin, or isoniazid. Persons taking aluminum hydroxide should take it and calcium citrate a minimum of two hours apart to avoid increasing aluminum absorption.


Persons may need more calcium if they are also taking any of the following anticonvulsants: phenytoin (Dilantin), carbamazepine, phenobarbital, or primidone. It may be advisable to take the dose of anticonvulsant and the calcium supplement a minimum of two hours apart because each substance interferes with the other’s absorption.


For persons taking antibiotics in the tetracycline or fluoroquinolone (Cipro, Floxin, Noroxin) families or taking thyroid hormone, the calcium supplement should be taken a minimum of two hours before or after the dose of medication, because calcium interferes with absorption (and vice versa). Also, one should not take extra calcium except on the advice of a physician if also taking thiazide diuretics. Finally, one should not take calcium with high-dose vitamin D except on the advice of a physician if also taking calcium channel blockers.


Persons may need extra calcium if also taking iron, manganese, zinc, or chromium. Ideally, one should take calcium at a different time of day from these other minerals because it may interfere with their absorption.


Finally, it may be advisable to wait two hours after taking calcium supplements to eat soy (or vice versa). A constituent of soy called phytic acid can interfere with the absorption of calcium.




Bibliography


Caan, B., et al. “Calcium Plus Vitamin D Supplementation and the Risk of Postmenopausal Weight Gain.” Archives of Internal Medicine 167 (2007): 893-902.



Dodiuk-Gad, R. P., et al. “Sustained Effect of Short-Term Calcium Supplementation on Bone Mass in Adolescent Girls with Low Calcium Intake.” American Journal of Clinical Nutrition 81 (2005): 168-174.



LaCroix, A. Z., et al. “Calcium plus Vitamin D Supplementation and Mortality in Postmenopausal Women: The Women’s Health Initiative Calcium-Vitamin D Randomized Controlled Trial.” Journals of Gerontology: Series A–Biological Sciences and Medical Sciences 64 (2009): 559-567.



Lappe, J. M., et al. “Vitamin D and Calcium Supplementation Reduces Cancer Risk.” American Journal of Clinical Nutrition 85 (2007): 1586-1591.



Margolis, K. L., et al. “Effect of Calcium and Vitamin D Supplementation on Blood Pressure.” Hypertension 52 (2008): 847-855.



Martin, B. R., et al. “Exercise and Calcium Supplementation: Effects on Calcium Homeostasis in Sportswomen.” Medicine and Science in Sports and Exercise 39 (2007): 1481-1486.



Matkovic, V., et al. “Calcium Supplementation and Bone Mineral Density in Females from Childhood to Young Adulthood.” American Journal of Clinical Nutrition 81 (2005): 175-188.



Reid, I. R., and M. J. Bolland. “Calcium Supplementation and Vascular Disease.” Climacteric 11 (2008): 280-286.



Wagner, G., et al. “Effects of Various Forms of Calcium on Body Weight and Bone Turnover Markers in Women Participating in a Weight Loss Program.” Journal of the American College of Nutrition 26 (2007): 456-461.



Winzenberg, T., et al. “Calcium Supplements in Healthy Children Do Not Affect Weight Gain, Height, or Body Composition.” Obesity 15 (2007): 1789-1798.



Zemel, M. B., et al. “Effects of Calcium and Dairy on Body Composition and Weight Loss in African-American Adults.” Obesity Research 13 (2005): 1218-1225.

Tuesday, June 28, 2011

When Rikki attacks, who is Karait about to bite in "Rikki-Tikki-Tavi"?

Karait was about to bite Teddy. 


As a house mongoose, it is Rikki-tikki's job to protect the household and garden from snakes.  While he focuses most of his time on the bigger snakes, such as the pair of cobras, there are also smaller snakes around.  Even a small snake can be dangerous.



But just as Teddy was stooping, something flinched a little in the dust, and a tiny voice said: “Be careful. I am death!” It was Karait, the dusty brown snakeling that lies for choice on the dusty earth; and his bite is as dangerous as the cobra's.



As we are reminded, snakes like Karait can actually be more dangerous because they are not as easy to see.  Nobody knows they are there, nobody looks out for them, and they can strike without warning.  Rikki-tikki is much more alert, however, and he swoops in to the rescue.


Teddy is not bitten, and he shouts that Rikki is killing a snake.



Rikki-tikki heard a scream from Teddy's mother. His father ran out with a stick, but by the time he came up, Karait had lunged out once too far, and Rikki-tikki had sprung, jumped on the snake's back, dropped his head far between his fore-legs, bitten as high up the back as he could get hold, and rolled away.



The man tries to kill Karait with the stick, but Rikki finds that amusing. There is no point, because Rikki has already paralyzed Karait with his bite. Rikki does not eat the snake, though, because eating would slow him down: “a full meal makes a slow mongoose,” and the cobras are still on the loose.


Taking his job very seriously, Rikki kills first Nag and then Nagaina.  After the cobras are dead, the garden is safe for the people and animals.  Rikki continues patrolling, intending to keep it that way.

What is brachydactyly?


Risk Factors

The greatest risk factor for the development of brachydactyly is the inheritance of one of several disease-causing mutations. Brachydactyly can also result from embryologic disturbances and has been observed among infants exposed to drugs known to alter fetal development (teratogens); in this case, it is usually found in conjunction with other malformations.







Etiology and Genetics

Although brachydactyly was the first human trait to be ascribed an autosomal dominant Mendelian inheritance pattern, it has subsequently proven to be a genetic smorgasbord, demonstrating the concepts of incomplete penetrance (wherein carriers of causative mutations do not show evidence of disease), variable expressivity (differences in clinical presentation among individuals with the same mutation), and locus heterogeneity (multiple genes or gene combinations leading to the same phenotype). In addition, evidence suggests that certain forms of brachydactyly may be inherited as semidominant or autosomal recessive traits.


Brachydactyly can be inherited alone (isolated brachydactyly), in association with skeletal abnormalities, or as part of a syndrome. In 1951, Julia Bell developed a classification scheme for isolated brachydactyly based on the characteristic hand malformations found in family pedigrees. Categorization of brachydactyly still follows this general model of types A through E, with subtypes used to further delineate particular patterns of digit abnormalities. The majority of isolated brachydactyly types are very rare; however, brachydactyly types A3 and D are relatively common findings within certain populations.


Causative mutations have been identified for many, but not all, of the isolated brachydactyly types. Mutations in the Indian hedgehog (IHH) gene (2q35-q36) have been identified in families with brachydactyly type A1, although linkage has also been shown to a locus on 5p13.3-p13.2. Mutations in two separate genes have been associated with brachydactyly type A2: the bone morphogenetic protein receptor 1B (BMPR1B) gene (4q21-q20) and the growth and differentiation factor 5 (GDF5) gene (20q11.2). This divergence among families with a common phenotype exemplifies the genetic heterogeneity within brachydactyly.


The phenotype of patients with brachydactyly type B has been shown to correlate with the nature of the mutation in the receptor tyrosine kinase-like orphan receptor 2 (ROR2) gene (9q22). Mutations in ROR2 have also been identified in patients with autosomal recessive Robinow syndrome. More recently, mutations in the noggin (NOG) gene (17q22) have been identified in patients with brachydactyly type B for whom ROR2 mutations were not detected.


The inheritance pattern of brachydactyly type C is not straightforward and has been suggested to be autosomal dominant, autosomal recessive, or semidominant. As observed for brachydactyly type A2, mutations in GDF5 have been identified in families with brachydactyly type C.


Both brachydactyly types D and E have been linked to mutations in the homeobox D13 (HOXD13) gene (2q31-q32).




Symptoms

Isolated brachydactyly is characterized by shortening of one or more digits and may affect the hands, feet, or both. Other finger abnormalities, including syndactyly (fused digits), clinodactyly (sideways deviation of the finger), or symphalangism (fused phalanges), may also be present. Syndromic forms of brachydactyly may be associated with skeletal defects (such as short stature, shortened limbs, and scoliosis), hypertension, cardiac malformations, intellectual disability, or a host of other abnormalities.




Screening and Diagnosis

Family history is a strong predictor of disease. The benign nature of isolated brachydactyly makes prenatal screening unnecessary, although it may be valuable for syndromic forms of the disease. Prenatal ultrasound performed from twelve to seventeen weeks of gestation has been used to successfully diagnose brachydactyly. Diagnosis based on analysis of DNA from the fetus is possible if the familial mutation is known.




Treatment and Therapy

Plastic surgery is an option to enhance hand function but is not applicable in most cases. If needed, hand function may also be improved through physical therapy. For those with syndromic brachydactyly, treatment of associated conditions (such as blood pressure medication for patients with hypertension) may be indicated.




Prevention and Outcomes

There is no method available for preventing brachydactyly occurrence among individuals who inherit disease-causing mutations. The prognosis for patients with isolated brachydactyly is generally favorable; the ability to achieve normal hand function is reliant on the extent and severity of the defect. In cases of syndromic brachydactyly, prognosis is influenced by the nature of the associated conditions.




Bibliography


Everman, David B. “Hands and Feet.” Human Malformations and Related Anomalies. Ed. Roger E. Stevenson and Judith G. Hall. 2nd ed. Oxford: Oxford UP, 2006. Print.



Firth, Helen V., Jane A. Hurst, and Judith G. Hall, eds. Oxford Desk Reference: Clinical Genetics. Oxford: Oxford UP, 2005. Print.



Glorieux, Francis H., John M. Pettifor, and Harald Jüppner. Pediatric Bone: Biology and Diseases. 2nd ed. London: Academic P/Elsevier, 2012. Print.



Goel, Ayush, et al. "Brachydactyly." Radiopaedia.org. Radiopaedia.org, 2014. Web. 31 July 2014.



Pereda, Arrate, Intza Garin, Maria Garcia-Barcina, Blanca Gener, Elena Beristain, Ane Miren Ibañez, and Guiomar Perez de Nanclares. "Brachydactyly E: Isolated or as a Feature of a Syndrome." Orphanet Journal of Rare Diseases 8 (2013): 141. Print.



Temtamy, Samia A., and Mona S. Aglan. “Brachydactyly.” Orphanet Journal of Rare Diseases 3 (2008): 15. Print.

Monday, June 27, 2011

What is motor skill development?


Physical and Psychological Factors

Motor skill development, the process of change in motor behavior with increasing age, focuses on adjustments in posture, movement, and the skillful manipulation of objects. Early researchers attributed essentially all developmental changes to modifications occurring within the central nervous system, with increasing motor abilities reflecting increasing neural maturation. Modern researchers have determined that the central nervous system works in combination with other body systems (such as the musculoskeletal, cardiovascular, and respiratory systems) and the environment to influence motor development, with all systems interacting in an extremely complex fashion as the individual ages.



Prenatal development of motor behavior takes place between approximately seven weeks after conception and birth, as was first determined during the 1970s using technology to visualize the fetus in utero. After approximately eight weeks of gestation, the fetus is able to produce reflexes and reactions as well as active spontaneous movement. It is currently believed that the ability to self-initiate movements within the womb is an integral part of development, in contrast to the traditional view that the fetus is passive and reflexive.


Infancy, the period from birth until the child is able to stand and walk, lasts approximately twelve months. The neonate begins life essentially helpless against the force of gravity and gradually develops the ability to align body segments with respect both to other body segments and to the environment. The Bayley Scales of Infant Development measure the following milestones of motor skill development for the first year of life (with the average age of accomplishment listed in parentheses): erect and steady head holding (0.8 months), side to back turning (1.8 months), supported sitting (2.3 months), back to side turning (4.4 months), momentary independent sitting (5.3 months), rolling from back to stomach (6.4 months), steady independent sitting (6.6 months), early supported stepping movements (7.4 months), arm pull to standing position (8.1 months), assisted walking (9.6 months), independent standing (11.0 months), and independent walking (11.7 months). The transition from helplessness to physical independence during the first twelve months creates many changes for growing children and their caregivers. New areas of exploration open up for the baby as greater body control is gained, the force of gravity is conquered, and less dependence on holding and carrying by caregivers is required.
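The milestone list above is essentially tabular data, so it can be captured directly in a small structure. The following is a hypothetical sketch (the names are invented; the ages are the averages quoted in the text):

```python
# Average ages (months) for first-year motor milestones, per the
# Bayley figures quoted above. Purely illustrative.

MILESTONES = [
    (0.8, "erect and steady head holding"), (1.8, "side to back turning"),
    (2.3, "supported sitting"), (4.4, "back to side turning"),
    (5.3, "momentary independent sitting"), (6.4, "rolling from back to stomach"),
    (6.6, "steady independent sitting"), (7.4, "early supported stepping movements"),
    (8.1, "arm pull to standing position"), (9.6, "assisted walking"),
    (11.0, "independent standing"), (11.7, "independent walking"),
]

def reached_on_average_by(age_months: float) -> list[str]:
    """Milestones typically reached, on average, by the given age."""
    return [name for age, name in MILESTONES if age <= age_months]

print(reached_on_average_by(6.0))  # the first five milestones
```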


During the first three months after birth, the infant’s motor skill development focuses on getting the head aligned from the predominating posture of flexion. Flexor tone, the tendency to maintain a flexed posture and to rebound back into flexion when the limbs are extended and released, probably results from a combination of the elasticity of soft tissues that were confined to a flexed position while in the womb and of central nervous system activity. As antigravity activity progresses, the infant develops the ability to lift the head. Movements during this period involve brief periods of stretching, kicking, and thrusting of the limbs, in addition to turning and twisting of the trunk and head. Infants tend to be the most active prior to feeding and more quiet and sleepy after feeding.


The third to sixth months after birth are marked by great strides in overcoming the force of gravity by both flexion and extension movements. The infant becomes more competent in head control with respect to symmetry and midline orientation with the rest of the body, is able to sit independently for brief periods, and can push up onto hands and knees. These major milestones enable considerably more independence and permit a much greater ability to interact with the rest of the world.


During the sixth to ninth months after birth, the infant is constantly moving and exploring the surrounding environment. As nine months is approached, most babies are able to pull themselves into a standing position using a support such as furniture. The child expends a great deal of energy to stand and often bounces up and down once standing is achieved. The up-and-down bouncing eventually leads to the shifting of body weight from side to side and the taking of first steps, with a caregiver assisting alongside the furniture; this is often called cruising.


The ninth to twelfth months involve forward creeping on hands and knees. This locomotor pattern requires more complicated alternating movements of the opposite arms and legs. Some infants have a preference for creeping even after they are able to walk independently, with many preferring plantigrade creeping (on extended arms and legs) to walking. The ease with which the child moves from sitting to creeping, kneeling, or standing is greatly improved, and balance is developed to the point where the child can pivot around in circles while sitting, using the hands and feet for propulsion. The child begins to move efficiently from standing to floor sitting and can initiate rolling from the supine position using flexed legs. Unsupported sitting is accomplished with ease, and weight while sitting can be transferred easily from buttocks to hands.


The early childhood period lasts from infancy until about six years. It involves the child attaining new skills but not necessarily new patterns of movement, with the learning patterns that were acquired during the first year of life being put to use in more meaningful activities. The locomotor pattern of walking is refined, and new motor skills that require increased balance and control of force—such as running, hopping, jumping, and skipping—are mastered.


Running is usually begun between years two and four, as the child learns to master the flight phase and the anticipatory strategies necessary when there is temporarily no body contact with the ground. It is not until about age five or six that control during running with respect to starting, stopping, and changing directions is effectively mastered. Jumping develops at about age 2.5, as the ability and confidence to land after jumping from a height such as a stair is achieved. The ability to jump to reach an overhead object then emerges, with early jumpers revealing a shallow preparatory crouch that progresses to a deeper crouch. Hopping, an extension of the ability to balance while standing on one leg, begins at about age 2.5 but is not performed well until about age six, when a series of about ten hops can be performed consecutively and are incorporated into games such as hopscotch. Skipping, a step and a hop on one leg followed by a step and hop of the other leg, is generally not achieved until about six years, with the opportunity and encouragement for practice being a primary determining factor, as with other locomotor skills.


Throwing is typically acquired during the first year, but advanced throwing, striking (such as with a plastic baseball and bat), kicking (such as with a soccer ball), and catching are not developed until early childhood. Catching develops at approximately age three, with the child initially holding the arms in front of the body and later making anticipatory adjustments to account for the direction, speed, and size of the thrown object. Kicking, which requires balancing on one foot while transferring force to an object with the other foot, begins with little preparatory backswing and eventually develops to involve the knee, hip, and lean of the trunk at about age six.


Fine motor manipulation skills in the upper extremity that are important to normal activities of daily living such as feeding, dressing, grooming, and handwriting are greatly improved in early childhood. The key components include locating a target, which requires the coordination of eye-head movement; reaching, which requires the transportation of the hand and arm in space; and manipulation, which includes grip formation, grasp, and release.


During later childhood (the period from seven years to about eleven years), adolescence, and adult life throughout the remainder of the life span, changes in movement are influenced predominantly by age. Adolescence begins with the onset of the physical changes of puberty, at approximately eleven to twelve years of age in girls and twelve to thirteen years of age in boys, and ends when physical growth is curtailed. Most authorities believe that the growth spurt of adolescence leads to the emergence of new patterns of movement within the skills that have already been acquired. Most adolescents have strong drives to develop self-esteem and become socially acceptable with their peers in school and various recreational activities. Cooperation and competition become strong components of motor skill development, whereby many skills are stabilized prior to adolescence and preferences for various sports activities emerge. Boys typically demonstrate increased speed and strength as compared to girls, despite recent dramatic changes in available opportunities for girls in recreational and competitive sports activities. Even though age-related changes in motor behavior continue throughout adulthood, the physical skills that permit independence are primarily acquired during the first year of life.


Psychological factors that influence motor skill development include attention level, stimulus-response compatibility, arousal level, and motivation. The level of attention when attempting a motor task is critical, with humans displaying a relatively fixed capacity for concentration during different stages of development. The stimulus identification, response selection, and response programming stages—whereby an individual remembers or determines how to perform a task—affect skill development because the central nervous system takes longer to synthesize and respond to more complex skills. Also important are stimulus-response compatibility—the better the stimulus matches the response, the shorter the reaction time—and arousal, which is described as an “inverted U” by the Yerkes-Dodson model. The inverted-U hypothesis implies that there is an optimal level of psychological arousal for learning or performing a motor skill efficiently, with performance declining when the arousal level at a given moment is too high or too low. At a low level of arousal, the scope of perception is broad, and all stimuli, including irrelevant information, are processed. As arousal increases, perception narrows, so that when the optimal level of arousal is reached and attention is sufficiently focused, concentration on only the stimuli relevant to successful skill learning and performance is enabled. If arousal surpasses this optimal level, perception narrows to the point of tunnel vision, some relevant stimuli are missed, and learning and skill performance are reduced. The influence of personal motivation during motor skill development encompasses the child’s perceived relevance of the activity as well as the child’s ability to recognize the goal of the activity and the desire to achieve it.
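

The source describes the inverted U only qualitatively. As a rough illustrative sketch of our own, assuming a symmetric quadratic form (the symbols P, a, a*, and k are ours, not part of the Yerkes-Dodson formulation), the relationship can be written as

$$P(a) = P_{\max} - k\,(a - a^{*})^{2}, \qquad k > 0,$$

where performance P(a) peaks at the optimal arousal level a* and declines as arousal a moves above or below it; a larger k corresponds to a narrower window of effective arousal.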


The three main factors that affect motor skill development in early and later childhood are feedback, amount of practice, and practice conditions. Feedback can be intrinsic, arising from the somatosensory system and senses such as vision and hearing, as information is gathered about a movement and its consequences rather than the actual achievement of the goal. In pathological conditions such as cerebral palsy, intrinsic feedback is often greatly impaired. Feedback can also be extrinsic; researchers often divide it into knowledge of results, or information about the success of the movement in accomplishing the goal that is available after the skill is completed, and knowledge of performance, or information about skill performance technique or strategy. Knowledge of results provides information about errors as well as successes. True learning occurs by a process of trial and error, with the nervous system serving to detect and correct inappropriate or inefficient movements.




Disorders and Effects

Physical therapists, psychologists, teachers, and other professionals who work with pediatric patients often plan their treatment interventions and instructional lessons based on the normal age-related progression of motor skill development. Motor skill development is often significantly delayed or diminished as a consequence of a neurological impairment, however, with the child’s resulting movement patterns revealing primary impairments such as inadequate activation of muscle, secondary impairments such as contractures, and compensatory strategies that are adopted to overcome the impairment and achieve mobility. The impairments that have an impact on motor development can generally be divided into musculoskeletal, neuromuscular, sensory, perceptual, and cognitive categories.


Damage to various nervous system structures somewhat predictably reduces the motor control of movement via both positive symptoms (the presence of abnormal behavior) and negative symptoms (the loss of normal behavior). Positive symptoms include the presence of exaggerated reflexes and abnormalities of muscle tone. Negative symptoms include the loss of muscular strength and the inappropriate selection of muscles during task performance. The broad spectrum of muscle tone abnormalities ranges from flaccidity to rigidity, with muscle spasticity defined as the velocity-dependent increase in tonic stretch reflexes (also called muscle tone), with exaggerated tendon jerks resulting from changes in the threshold of the stretch reflex.


Secondary effects of central nervous system lesions are not directly caused by the lesions themselves but develop as a consequence of the lesions. For example, children with cerebral palsy often exhibit the primary problem of spasticity in muscles of the lower extremities, which causes the secondary problem of muscular and tendon tightness in the ankles, knees, and hips. The secondary problem of limited range of motion in these important areas for movement often impairs motor skills more than the primary problem of spasticity, with the resulting movement strategies reflecting the growing child’s best attempt to compensate.


Another common compensatory strategy seen in children with a motor development dysfunction involves standing with the knee hyperextended because of an inability to generate enough muscular force to keep the knee from collapsing. Standing with the knee in hyperextension keeps the line of gravity in front of the knee joint. Contractures of joints are frequent consequences of disordered postural and movement patterns. For example, a habitual crouched sitting posture results in chronic shortening of the hamstring, calf, and hip flexor muscles, and a backward-tipped pelvis accommodates the shortened hamstrings. Chronic shortening of the calf muscles often results in toe walking (in which the heel does not strike the ground) and, because of decreased balance and leg muscle strength, a reduced walking speed and stride length. Changes in the availability of sensory information and cognitive factors such as fear of falling and inattention may also strongly influence motor skill development in some pediatric patients.




Perspective and Prospects

Interest in the scientific study of motor development was greatly enhanced by Myrtle B. McGraw’s The Neuromuscular Maturation of the Human Infant (1945). It described four stages of neural maturation: a period in which movement is governed by reflexes as a result of the dominance of lower centers within the central nervous system; a period in which reflex expression declines as a result of maturation of the cerebral cortex and the inhibitory effect of the cortex over lower centers; a period in which an increase in the voluntary quality of activity as a result of increased cortical control produces deliberate or voluntary movement; and a period in which integrative activity of the neural centers takes place, as shown by smooth and coordinated movements.



Arnold Gesell then used cinematography to conduct extensive observations of infants during various stages of growth. He described the maturation of infants based on four behavior categories: motor behavior, adaptive behavior, language development, and personal-social development. Gesell identified six principles of development. The principle of motor priority and fore-reference states that the neuromotor system is laid down before it is voluntarily utilized. The principle of developmental direction states that development proceeds in head-to-foot and proximal-to-distal directions. The principle of reciprocal interweaving states that opposing movements such as extension and flexion show a temporary dominance over one another until they become integrated into mature motor patterns. The principle of functional asymmetry states that humans have a preferred hand, a dominant eye, and a lead foot, with this unilateral dominance being subject to change during development. The principle of self-regulation states that periods of stability and instability culminate in more stable responses as maturity proceeds. The principle of optimal realization states that the human action system has strong growth potential toward normal development when environmental and cultural conditions are favorable, and that compensatory and regenerative mechanisms come into play when damage occurs, facilitating attainment of the maximum possible growth.


Esther Thelen proposed dynamic systems theory, which argues that the maturing nervous system interacts with biomechanical, psychological, and social-environmental factors to create a multidimensional system in which behavior represents a compression of the available degrees of freedom.


A more refined systems theory of motor control developed by Anne Shumway-Cook and Marjorie Woollacott claims that the three main factors that interact in the development of efficient locomotion are progression (ability to generate rhythmic muscular patterns to move the body in the desired direction), stability (the control of balance), and adaptation (the ability to adapt to changing task and environmental requirements). These three factors generally appear sequentially, with muscular patterns appearing first, followed by equilibrium control, and finally adaptive capabilities. Although research on the emergence of human motor skills has primarily concentrated on the developmental milestones of infants and children, it appears that important changes in motor behavior continue throughout the human life span.




Bibliography


Berk, Laura E. Child Development. 9th ed. Boston: Pearson/Allyn & Bacon, 2013.



Feldman, Robert S. Development Across the Life Span. 6th ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2011.



Haywood, Kathleen, Mary Ann Roberton, and Nancy Getchell. Advanced Analysis of Motor Development. Champaign, Ill.: Human Kinetics, 2012.



Kail, Robert V., and John C. Cavanaugh. Human Development: A Life-Span View. 6th ed. Belmont, Calif.: Wadsworth Cengage Learning, 2013.



Kalverboer, Alex F., Brian Hopkins, and Reint Geuze, eds. Motor Development in Early and Later Childhood: Longitudinal Approaches. New York: Cambridge University Press, 1993.



Ludlow, Ruth, and Mike Phillips. The Little Book of Gross Motor Skills. London: Featherstone Education, 2012.



Nathanson, Laura Walther. The Portable Pediatrician: A Practicing Pediatrician’s Guide to Your Child’s Growth, Development, Health, and Behavior from Birth to Age Five. 2d ed. New York: HarperCollins, 2002.



Newell, K. M. “Motor Skill Acquisition.” Annual Review of Psychology 42 (1991): 213–37.



Shumway-Cook, Anne, and Marjorie Woollacott. Motor Control: Translating Research into Clinical Practice. 4th ed. Philadelphia: Lippincott Williams & Wilkins, 2012.



Sugden, David, and Michael G. Wade. Typical and Atypical Motor Development. London: Mac Keith Press, 2013.



Thelen, Esther, and Linda B. Smith. A Dynamic Systems Approach to the Development of Cognition and Action. 5th ed. Cambridge, Mass.: MIT Press, 2002.

Sunday, June 26, 2011

What is rheumatic fever?


Causes and Symptoms


Rheumatic fever is an inflammatory disease affecting the heart that may follow infection by the bacterium Streptococcus pyogenes. The streptococci constitute a large number of gram-positive cocci, some of which are pathogens. They were originally classified in the 1930s by Rebecca Lancefield into groups based on characteristics of carbohydrates and proteins in their cell walls. S. pyogenes is the sole species of streptococcus in group A. Group A streptococcus causes a wide array of illnesses, most notably pharyngitis (strep throat) and impetigo. The most serious complication associated with infection by specific strains of S. pyogenes is rheumatic fever.


Rheumatic fever may develop one to five weeks after recovery from a streptococcal infection, often strep throat. The onset is sudden, with the patient exhibiting severe polyarthritis, fever, and abdominal pain. There may be chest pain and heart palpitations. Transient circular lesions may develop on the skin. Rheumatic nodules may be noted on joints and tendons, along the spine, and even on the head. Sydenham’s chorea, the exhibition of irregular body movements, may also appear during the course of the illness. In severe cases, the patient may become incapacitated.


While there is no specific diagnostic test for rheumatic fever, the combination of clinical symptoms may suggest its onset, particularly if there was a recent sore throat. The production of serum antibodies against streptococcal antigens is also indicative of rheumatic fever.


Most of the time, the symptoms subside with bed rest. Mild cases generally last three or four weeks, while more severe cases may last several months. A single bout with rheumatic fever may be followed by recurrent episodes with additional infections by β-hemolytic streptococci.


Rheumatic fever is an autoimmune phenomenon. Certain proteins in specific strains of group A streptococcus contain segments that cross-react with heart tissue, including that found in muscle and valves. As the body responds to the streptococcal infection, the immune response may also involve cardiac tissue, resulting in inflammation and possible damage. Since the immune reaction occurs over a period of days to weeks, the onset of rheumatic fever may be considerably removed from the actual infection.




Treatment and Therapy

Because rheumatic fever represents an autoimmune reaction to an earlier streptococcal infection, antibiotic treatment is of limited value. Penicillin or similar antibiotics may be administered for their prophylactic value, preventing further streptococcal infection and recurrence of the illness during the recovery period.


Bed rest and restriction of activities are recommended during the course of the illness. The patient should receive large amounts of fluids. Steroids or other anti-inflammatory compounds may also be administered in response to severe polyarthritis or valvular inflammation. The duration of such inflammation is generally no more than two weeks.


Repeated infections with streptococci may trigger additional episodes of rheumatic fever, so antibiotics may be administered on a regular basis. While not all streptococcal infections trigger rheumatic fever, any previous cardiac episode is likely to be repeated after an additional streptococcal infection, often resulting in greater damage. For this reason, prophylactic antibiotic treatment may be long term.


If rheumatic heart disease has resulted in permanent damage to heart tissue, additional therapies may be necessary. Often, such damage may not be apparent for years. Thickening or scarring of the heart valves, particularly the mitral and aortic valves, may necessitate valve replacement at some point in the future.




Perspective and Prospects


Thomas Sydenham, called the “English Hippocrates,” in 1685 provided the first description of what was probably rheumatic fever. He also described what has become known as Sydenham’s chorea, now known to be symptomatic of rheumatic fever. In 1797, London doctor Matthew Baillie noted the damage to heart valves among patients suffering from the illness. The association of rheumatic fever with bacterial infection, however, was not established until well into the twentieth century.


In part this delay resulted from the inability to isolate an organism either from the diseased heart or from the blood of patients with rheumatic fever. In 1928, Homer Swift, a New York physician, suggested that rheumatic fever was an allergic response following streptococcal infections. A few years later, the role of serologic group A, β-hemolytic streptococci as the actual agent associated with the disease was established by Alvin Coburn.


A decline in the incidence of rheumatic fever in the United States began in the first decades of the twentieth century. The reason is unclear in this period before antibiotics; the decrease may have been attributable in part to the presence of less virulent strains of the bacteria. With the introduction of penicillin in the 1940s as an effective treatment for streptococcal infections, the incidence of acute rheumatic fever continued its decline.


A resurgence of the disease was first noted in the 1980s. The reasons remain unclear. Since different strains of streptococci differ in their ability to induce rheumatic fever, it is suspected that the increase may have resulted from the introduction of new bacterial strains into the population. The disease has also been seen to cluster in families, suggesting that a genetic predisposition may exist in the general population which contributes to the rise in numbers of cases. Fortunately, the streptococci have not yet established the widespread resistance to antibiotics seen among other bacteria, and rheumatic fever as a sequela to streptococcal pharyngitis may be prevented with proper treatment.




Bibliography


English, Peter C. Rheumatic Fever in America and Britain: A Biological, Epidemiological, and Medical History. New Brunswick, N.J.: Rutgers University Press, 1999.



Kiple, Kenneth F., ed. The Cambridge World History of Human Disease. New York: Cambridge University Press, 1999.



McCance, Kathryn L., and Sue M. Huether. Pathophysiology: The Biologic Basis for Disease in Adults and Children. 6th ed. St. Louis, Mo.: Mosby/Elsevier, 2010.



Murray, Patrick R., Ken S. Rosenthal, and Michael A. Pfaller. Medical Microbiology. 7th ed. Philadelphia: Mosby/Elsevier, 2013.



"Rheumatic Fever." Health Library, March 15, 2013.



"Rheumatic Fever." Mayo Clinic, January 21, 2011.



Steeg, Carl N., Christine A. Walsh, and Julie S. Glickstein. “Rheumatic Fever: No Cause for Complacence.” Patient Care 34, no. 14 (July 30, 2000): 40–61.



Woolf, Alan D., et al., eds. The Children’s Hospital Guide to Your Child’s Health and Development. Cambridge, Mass.: Perseus, 2002.

Saturday, June 25, 2011

What is motivation in psychology?


Introduction

Research in motivation is pivotal to such fields as educational psychology, social psychology, behavioral psychology, and most other subareas of psychology. Motivation is centrally concerned with the goals people set for themselves and with the means they take to achieve these goals. It is also concerned with how people react to and process information, activities directly related to learning. People’s motivation to process information is influenced by two major factors: the relevance of the topic to the person processing the information, which affects their willingness to think hard about the topic; and the need for cognition, or people’s willingness to think hard about varied topics, whether they are directly relevant to them or not. The relevance of a topic is central to people’s motivation to learn about it.


For example, if the community in which a person lives experiences a severe budgetary crisis that will necessitate a substantial increase in property taxes, every resident in that community, home owners and renters alike, is going to be affected directly or indirectly by the increase. Because this increase is relevant to all the residents, they will, predictably, be much concerned with the topic and will likely think hard about its salient details. If, on the other hand, a community in a distant state faces such a crisis, residents in other communities, reading or hearing about the situation, will not have the motivation to do much hard thinking about it because it does not affect them directly.


The second category of motivation rests in the need of some individuals for cognition. Their inherent curiosity will motivate them to think deeply about various topics that do not concern them directly but that they feel a need to understand more fully. Such people are deliberative, self-motivated thinkers possessed of an innate curiosity about the world that surrounds them. They generally function at a higher intellectual level than people who engage in hard thinking primarily about topics that affect them directly. One of the aims of education at all levels is to stimulate people to think about a broad variety of topics, which they will do because they have an inherent curiosity that they long to satisfy.




Early Concerns with Motivation

During the late nineteenth century, Austrian psychoanalyst Sigmund Freud developed theories about motivation that are usually categorized as the psychodynamic approach. He contended that people have psychic energy that is essentially sexual or aggressive in its origins. Such energy seeks results that please, satisfy, or delight. This pleasure principle, as it was called, had to function within the bounds of certain restraints, identified as the reality principle, never violating the demands of people’s conscience or the restraints and inhibitions that their self-images imposed. In Freudian terms, the ego served to maintain the balance between the pleasure principle and the reality principle. In Beyond the Pleasure Principle (1922), Freud reached the conclusion that all motivation could be reduced to two opposing sources of energy, the life instinct and the death instinct.


Heinz Hartmann went a step beyond Freud’s psychodynamic theory, emphasizing the need for people to achieve their goals in ways that do not produce inner conflict, that are free of actions that might compromise or devastate the ego. More idealistic was Robert White, who denied Freud’s contention that motivation is sexual or aggressive in nature. White contended that the motivation to achieve competence is basic in people. Everyone, according to White, wishes to be competent and, given proper guidance, will strive to achieve competence, although individual goals and individual determinations of the areas in which they wish to be competent vary greatly from person to person.


Such social psychologists as Erik H. Erikson, Carl Jung, and Karen Horney turned their attention away from the biological and sexual nature of motivation, focusing instead on its social aspects. They, like Freud, Hartmann, and White before them, sought to understand the unconscious means by which psychic energy is distributed as it ferrets out sources of gratification.




The Behaviorists

The behavioral approach to motivation is centrally concerned with rewards and punishments. People cultivate behaviors for which they are rewarded. They avoid behaviors that experience has shown them will result in pain or punishment. B. F. Skinner was probably the most influential behaviorist. Many educators accepted his theories and applied them to social as well as teaching situations.


Clark L. Hull, working experimentally with rats, determined that animals deprived of such basic requirements as food or punished by painful means, such as electric shock, develop intense reactions to these stimuli. John Dollard and Neal Miller extended Hull’s work to human subjects. They discovered that the response elicited by these means depends on the intensity of the stimulus, not on its origin. The stimuli employed also evoke previously experienced stimulus-response reactions, so that if subjects are hurt or punished following a volitional act, they will in future avoid such an act. In other words, if the negative stimuli are rapidly reduced, the responses that immediately preceded the reduction are reinforced. These researchers concluded that physiological needs such as hunger are innate, whereas secondary drives and the reaction to all drives, through conditioning, are learned.


Ivan Petrovich Pavlov demonstrated the strength of conditioned responses in his renowned experiments with dogs. He arranged for a bell to sound immediately before the dogs in his experiment were fed. The dogs came to associate the sound of the bell with being fed, a pleasurable and satisfying experience. Eventually, when Pavlov rang the bell but failed to follow its ringing with feeding, the dogs salivated merely on hearing the sound because they anticipated the feeding to which they had become conditioned. Over time, the motivation to satisfy their hunger came to be as much related to hearing the bell as it was to actually being fed. Pavlovian conditioning is directly related to motivation, in this case the motivation to satisfy hunger.
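

Pavlov’s account here is qualitative. As a hedged illustration only (the Rescorla-Wagner model postdates Pavlov and is our addition, with V, λ, α, and β as assumed notation), the growth of such a conditioned association is often formalized as

$$\Delta V = \alpha\beta\,(\lambda - V),$$

where V is the current associative strength of the bell, λ is the maximum strength the food can support, and α and β are salience and learning-rate parameters. Across bell-food pairings, V climbs toward λ; when the bell repeatedly sounds without food, λ = 0 and the association gradually weakens, consistent with the extinction seen in such experiments.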




Konrad Lorenz’s Hydraulic Model

Freud argued that if instinctive urges are bottled up, they will eventually make the individual ill. They demand release and will find it in one way or another as the unconscious mind works to direct the distribution of people’s psychic energy.


Konrad Lorenz carried this notion a step beyond what Freud had postulated, contending that inherent drives that are not released by external means will explode spontaneously through some inherent releasing mechanism. This theory, termed Lorenz’s hydraulic model, explains psychic collapses in some people, particularly in those who are markedly repressed.


Erich Fromm carried Freud’s notions about the repression of innate drives one step beyond what Lorenz espoused. Fromm added a moral dimension to what Freud and Lorenz asserted by postulating that humans develop character as a means of managing and controlling their innate physiological and psychological needs. He brought the matter of free will into his consideration of how to deal in a positive way with innate drives.




The Hedonistic Theory of Motivation

Hedonism emphasizes pleasure over everything else. The hedonistic theory of motivation stems from Freud’s recognition of the pleasure principle, which stipulates that motivation is stimulated by pleasure and inhibited by pain.


Laboratory experiments with rats demonstrated unequivocally that, given a choice, rats work harder to get food that tastes good to them than to get food that is nutritious. Indeed, laboratory animals will take in empty calories to the point of emaciation as long as the food that contains such calories tastes good. It is thought that hedonistic motivation is directly related to pleasure centers in the brain, so that organisms work both consciously and unconsciously toward stimulating and satisfying these pleasure centers.




The Incentive Theory of Motivation

Alfred Adler, the Austrian psychologist who founded the school of individual psychology, rejected Freud’s emphases on sex and aggression as fundamental aspects of motivation. Breaking from Freud, who had been among his earliest professional associates, Adler contended that childhood feelings of helplessness led to later feelings of inferiority. His means of treating the inferiority complex, as this condition came to be known, was to engage his patients in positive social interaction. To do this, he developed an incentive theory of motivation, as articulated in his two major works, Praxis und Theorie der Individualpsychologie (1920; The Practice and Theory of Individual Psychology, 1924) and Menschenkenntnis (1927; Understanding Human Nature, 1927).


Adler’s theory focused on helping people to realize the satisfaction involved in achieving superiority and competence in areas in which they had some aptitude. The motivation to do so is strictly personal and individual. Adler’s entire system was based on the satisfactions to be derived from achieving a modicum of superiority. The incentive approach views competence as a basic motivation activated by people’s wish to avoid failure. This is a reward/punishment approach, although it is quite different from that of the behaviorists and is in essence humanistic. The reward is competence; the punishment is failure. Both factors stimulate subjects’ motivation.




The Activation Theory of Motivation

Drive reductionists believed that if all of an organism’s needs are fulfilled, the organism will lapse into a lethargic state. They concluded that increasing needs cause the organism to have an increased drive to fulfill those needs. Their view was that the inevitable course that individual organisms select is that of least resistance.


Donald O. Hebb, however, takes a more sanguine view of motivation, particularly in humans. In his activation theory, he contends that a middle ground between lethargy at one extreme and incapacitating anxiety at the other produces the most desirable level of motivation. This theory accounts for states of desired arousal such as that found in such pursuits as competitive sports.


The drive reductionists subscribe to the reward/punishment views of most of the behaviorists, who essentially consider organisms to be entities in need of direction, possibly of manipulation. The drive inductionists, on the other hand, have faith in the innate need of organisms to be self-directive and to work individually toward gaining competence. Essentially, they accept the Greek ideal of the golden mean as a guiding principle, one that has also been influential in the thinking of humanistic psychologists.




The Humanistic Approach to Motivation

Abraham Maslow devised a useful though controversial hierarchy of needs required to satisfy human potential. These needs proceed from low-level physiological needs such as hunger, thirst, sex, and comfort, through such other needs as safety, love, and esteem, finally reaching the highest level, self-actualization. According to Maslow, human beings progress sequentially through this hierarchy as they develop. Each category of needs proceeds from the preceding category, and no category is omitted as the human develops, although the final and highest category, self-actualization, which includes curiosity, creative living, and fulfilling work, is not necessarily attained or attainable by all humans.


The humanists stipulate that people’s primary motives are those that lead toward self-actualization, those that capitalize on the unique potential of each individual. In educational terms, this means that for education to be effective, it must emphasize exploration and discovery over memorization and the rote learning of a set body of material. It must also be highly individualized, although this does not imply a one-on-one relationship between students and their teachers. Rather than acting as fonts of knowledge, teachers become facilitators of learning, directing their students individually to achieve the actualization of the personal goals that best suit them.


Carl R. Rogers traced much psychopathology to conflicts between people’s inherent understanding of what they require to move toward self-actualization and society’s expectations, which may run counter to individual needs. In other words, as many people develop and pass through the educational system, they may be encouraged or required to adopt goals that are opposed to those that are most realistic for them. Humanistic views of human development run counter to the views of most of the psychodynamic and behaviorist psychologists concerned with learning theory and motivation as it relates to such theory.




Cognitive Approaches to Motivation

The research of Kurt Lewin on the subjective tension systems that work toward the resolution of problems in humans, along with his research with Edward C. Tolman emphasizing expectancies and the subjective value of the results of actions, led to a cognitive approach to motivation. Related to this work is that of Leon Festinger, whose theory of cognitive dissonance stipulates that if people’s beliefs are not in harmony with one another, they will experience a discomfort that they will attempt to eliminate by altering their beliefs.


People ultimately realize that certain specific behaviors will lead to anticipated results. Behavior, therefore, has a purpose, but the number of goals related to specific behaviors is virtually infinite. People learn to behave in the ways most likely to achieve the expected results.


Robert Rosenthal and Lenore Jacobson demonstrated that teacher expectations have a great deal to do with the success of the students with whom they work. Their experiment, detailed fully in Pygmalion in the Classroom (1968), relates how they selected preadolescent and adolescent students randomly and then told the teachers of those students that they had devised a way of determining which students were likely to show spurts of unusual mental growth in the coming year.


Each teacher was given the names of two or three students who were identified as being on the brink of rapid intellectual development. The researchers tested the students at the end of the school year and found that those who had been designated as poised on the brink of unusual mental development tested above the norm even though they had been selected randomly from all the students in the classes involved. In this experiment, teacher motivation to help certain students succeed appears to have been central to those students’ achieving goals beyond those of other students in the class.




Bibliography


Boekaerts, Monique, Paul R. Pintrich, and Moshe Zeidner, eds. Handbook of Self-Regulation. San Diego: Academic Press, 2007. Print.



Elliot, Andrew J., and Carol S. Dweck, eds. Handbook of Competence and Motivation. New York: Guilford, 2007. Print.



Ferguson, Eva Dreikurs. Motivation: A Biosocial and Cognitive Integration of Motivation and Emotion. New York: Oxford UP, 2000. Print.



Glover, John A., Royce R. Ronning, and Cecil R. Reynolds, eds. Handbook of Creativity. New York: Plenum, 1989. Print.



Greenwood, Gordon E., and H. Thompson Fillmer. Educational Psychology: Cases for Teacher Decision-Making. Columbus: Merrill, 1999. Print.



Kenrick, Douglas T., Steven L. Neuberg, and Robert B. Cialdini. Social Psychology: Unraveling the Mystery. 4th ed. Boston: Pearson, 2007. Print.



Kreitler, Shulamith. Cognition and Motivation: Forging an Interdisciplinary Perspective. New York: Cambridge UP, 2013. Print.



Lawler, Edward E., III. Rewarding Excellence: Pay Strategies for the New Economy. San Francisco: Jossey, 2000. Print.



Lesko, Wayne A., ed. Readings in Social Psychology: General, Classic, and Contemporary Selections. 7th ed. Boston: Pearson, 2009. Print.



Reeve, Johnmarshall. Understanding Motivation and Emotion. 5th ed. Hoboken: Wiley, 2009. Print.



Rosenthal, Robert, and Lenore Jacobson. Pygmalion in the Classroom. New York: Holt, 1968. Print.



Sinnott, Jan D. Positive Psychology: Advances in Understanding Adult Motivation. New York: Springer, 2013. Print.



Tracy, Brian. Motivation. New York: American Management Assoc., 2013. Print.



Wagner, Hugh. The Psychobiology of Human Motivation. New York: Routledge, 1999. Print.



Wong, Roderick. Motivation: A Biobehavioural Approach. New York: Cambridge UP, 2000. Print.

How does the choice of details set the tone of the sermon?

Edwards is remembered for his choice of details, particularly in this classic sermon. His goal was not to tell people about his beliefs; he ...